For years, lax or non-existent security has been an open secret of the connected technology industry. Manufacturers raced to add internet connectivity to everything from fridges and kettles to cars and factory sensors, driven by consumer demand and the promise of data-driven revenue streams. All too often security was, where it existed at all, an afterthought. Default passwords were universal across entire product lines, sometimes publicly listed on manufacturer websites for convenience. Security updates were left to users to discover and install.
Cyber resilience experts and governments have watched the fuse of this ticking security timebomb burn down with growing alarm. The consequences have already been severe: botnets built from compromised consumer routers have knocked national infrastructure offline; ransomware delivered through insecure industrial devices has shut down hospitals; state-sponsored actors have burrowed into corporate networks through a humble smart camera or unpatched VPN appliance. The regulatory response has arrived in the UK and EU over the last few years.
The Cyber Resilience Act: connected device security at the heart of the EU resilience agenda
The EU Cyber Resilience Act (CRA) entered into force in December 2024 after years of consultation, negotiation, and intense industry lobbying. It will apply generally from 11 December 2027. Article 14 (manufacturer reporting obligations) will apply from 11 September 2026, and Chapter IV from 11 June 2026. Products placed on the market before 11 December 2027 are only subject to Article 14 of the Regulation unless they are substantially modified.
The CRA is significant in scope and ambition, a sweeping horizontal regulation that applies not to a single sector but to the majority of 'products with a digital element' (PDEs) sold on the European market. The CRA covers not just consumer gadgets but enterprise software, industrial control systems, routers, password managers, smart home devices, and any hardware or software product that connects — directly or indirectly — to another device or network.
Exemptions to the CRA are few: medical devices and motor vehicles, which are already subject to their own regulatory frameworks (the MDR/IVDR and UNECE WP.29 respectively), plus PDEs used in civil aviation and certain defence and national security products.
Open-source software developed and supplied entirely outside a commercial context is also excluded, though this carve-out is carefully circumscribed and commercial open-source support arrangements may still fall within scope. This presents a particular challenge for open-source operating systems, which may need to meet a range of requirements despite not being directly responsible for all of the code the regulation covers.
The CRA introduces a risk-based classification for PDEs with three tiers:
- Default category (unclassified PDEs): the vast majority of products. These must meet essential cyber security requirements but may use a manufacturer's own form of conformity assessment.
- Important PDEs (Class I and Class II): products identified in Annex III as posing higher cyber security risk. Class I includes identity management software, password managers, network traffic management tools, VPNs, and smart home products with security functions; Class II includes products such as hypervisors and industrial firewalls. These generally require third-party conformity assessment or, for Class I, adherence by the manufacturer to harmonised standards.
- Critical PDEs: a narrower set of the highest-risk products listed in Annex IV, such as hardware security modules, smart meter gateways, and smartcards or secure elements. These require conformity assessment by a notified body and may in future be required to obtain a European cybersecurity certificate.
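The tiering above is, in practice, a lookup problem for compliance tooling: given a product category, which conformity route applies? The sketch below is illustrative only; the tier assignments and route descriptions are simplified assumptions, not a restatement of the Annexes, and the function and dictionary names are invented for this example.

```python
# Illustrative sketch: mapping CRA risk tiers to conformity assessment routes.
# Tier assignments here are simplified examples, not legal classifications.

CONFORMITY_ROUTES = {
    "default": "manufacturer self-assessment",
    "important": "harmonised standards or third-party assessment",
    "critical": "notified body assessment",
}

# Hypothetical product-to-tier mapping, for illustration only.
PRODUCT_TIERS = {
    "smart_thermostat": "default",
    "password_manager": "important",
    "smart_meter_gateway": "critical",
}

def required_route(product: str) -> str:
    """Return the conformity assessment route for a product category,
    falling back to the default tier for unclassified PDEs."""
    tier = PRODUCT_TIERS.get(product, "default")
    return CONFORMITY_ROUTES[tier]
```

Because the Commission can amend the Annexes by delegated act, any such mapping would need to be treated as mutable configuration rather than hard-coded logic.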
The European Commission retains delegated powers to update Annexes III and IV, meaning the classification of PDEs can evolve over time, an important point for teams managing long product lifecycles.
The CRA's central demand is simple to state, if not always simple to implement: build security in from the start.
Essential cyber security requirements
Annex I of the CRA sets out the essential requirements, which operate as the substantive backbone of the regulation. Manufacturers must ensure their products are:
- designed, developed and produced with appropriate levels of cyber security, having regard to risk (the "security by design" principle)
- free from known exploitable vulnerabilities at the time of placing on the market
- equipped with secure by default configurations
- protected against unauthorised access through appropriate authentication, identity and access management controls
- capable of protecting data at rest and in transit through encryption or equivalent measures
- designed to minimise attack surfaces, including limiting unnecessary interfaces, network services, and data collection
- resilient against denial-of-service attacks where relevant
- capable of receiving security updates, including automatic updates where proportionate, for the entirety of the support period.
The CRA also imposes process requirements on manufacturers, including:
- maintaining a documented security policy for products
- establishing a coordinated vulnerability disclosure (CVD) policy
- actively monitoring and remediating vulnerabilities throughout the product's lifecycle
- providing security updates promptly and, where feasible, separately from functional updates.
Support period and end-of-life obligations
One of the CRA's most commercially significant provisions concerns the minimum support period. Manufacturers must provide security updates for a period that reflects the expected product lifetime and for at least five years (unless the product's intended lifetime is shorter). During this period:
- security updates must be distributed promptly and free of charge
- updates must be clearly distinguished from functional updates
- upon end of support, manufacturers must notify users and the market surveillance authority.
Five years of support sounds reasonable until you consider the modern economics of consumer technology. A manufacturer might sell a smart speaker for £39. The margin after manufacturing, logistics, and retail is slim. The prospect of paying an engineering team to monitor and patch that product for five years, for free, after the consumer has already paid for it, fundamentally alters the business model. It remains to be seen whether the long-term result of this Regulation will be a reduction in consumer choice and cheap electronics in landfill, higher prices for PDEs or goods incorporating them, or a major evolution in long-term product security management (or a combination of these things).
The clock is ticking: 24 hours to report
If the support period obligation is commercially uncomfortable, the CRA's incident reporting requirements are operationally challenging.
From 11 September 2026, if a manufacturer discovers that a vulnerability in one of its products is being actively exploited, it has 24 hours to notify ENISA, the EU's cyber security agency. A more detailed report must follow within 72 hours, and a full account within 14 days. A single reporting mechanism is currently in development.
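The cascading deadlines are simple date arithmetic, but in an incident they need to be computed and tracked from the moment of discovery. A minimal sketch, assuming the three-stage 24-hour/72-hour/14-day cascade described above (the function and dictionary keys are invented for illustration, not part of any official tooling):

```python
from datetime import datetime, timedelta

def cra_reporting_deadlines(discovered_at: datetime) -> dict:
    """Return the three CRA notification deadlines for an actively
    exploited vulnerability, keyed by report stage."""
    return {
        "early_warning": discovered_at + timedelta(hours=24),
        "vulnerability_notification": discovered_at + timedelta(hours=72),
        "final_report": discovered_at + timedelta(days=14),
    }

# Discovery at 09:00 on 12 September 2026 means an early warning
# is due by 09:00 the following day.
deadlines = cra_reporting_deadlines(datetime(2026, 9, 12, 9, 0))
print(deadlines["early_warning"])  # 2026-09-13 09:00:00
```

Note that the clock runs on awareness, not on business hours: a discovery late on a Friday still produces a Saturday deadline.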
Twenty-four hours might seem a reasonable reporting window for a widespread vulnerability, but it demands a level of efficiency and fast decision-making not always easy for large organisations. If a security researcher emails a cryptically worded bug report to a tip-off inbox over a weekend, it may be hours before a junior engineer finds it, triages it, and escalates. The legal team likely needs to be involved to assess disclosure obligations. The communications team needs to assess whether a press statement is required. The CISO needs to brief the CEO. The product team needs to establish which versions are affected and whether a patch exists. All of this, under the CRA, must culminate in a formal notification to a European regulatory authority within a single day.
This is not a process that can be improvised. Organisations that do not have pre-built, rehearsed incident response protocols, ones that explicitly address regulatory notification timelines, will struggle when the first real-world event occurs. The pressure falls squarely on in-house legal and compliance teams to get those protocols in place before the reporting obligations switch on.
Not just for manufacturers
One of the more overlooked aspects of the CRA is that it does not limit its obligations to manufacturers: importers and distributors are newly on the hook too. If your business buys PDEs from an overseas manufacturer and sells them into the EU market, you are on the regulatory radar.
Importers must verify that manufacturers have carried out the required conformity assessments before bringing products to market. Distributors must check that products carry the required CE marking and documentation (including non-physical software products). Neither can simply argue that the non-compliance was the manufacturer's fault and they will be at risk of regulatory action and civil claims if they try.
For procurement and supply chain teams, this means substantially more rigorous supplier due diligence — and for legal teams, it means revisiting commercial contracts with suppliers to ensure there is proper contractual recourse if a product turns out to be non-compliant.
The maximum penalty for breaching the CRA's core requirements is a fine of €15 million, or 2.5% of global annual turnover, whichever is higher.
Across the Channel, the UK is ahead of the game
On digital product security, the UK was not waiting for Brussels. The Product Security and Telecommunications Infrastructure Act 2022 came into force in April 2024, accompanied by the Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) Regulations 2023 (SI 2023/1007), which set out the substantive technical requirements. The regime (known as the PSTI regime) is administered and enforced by the Office for Product Safety and Standards (OPSS), and its arrival made the UK one of the first countries in the world to impose mandatory minimum security requirements on consumer connected products.
The PSTI regime is, for now, narrower in scope than the CRA. It focuses on consumer-facing products — eg smartphones, smart TVs, home routers, connected cameras, wearable devices, and the like — rather than the full breadth of business and industrial technology that the CRA covers. And where the CRA's essential requirements run to pages of detailed technical obligations, the PSTI currently imposes three key requirements, though manufacturers should not mistake simplicity for leniency.
PSTI - three rules, no excuses
No more default passwords: this is the headline requirement, and it's straightforward. A manufacturer can no longer ship a product where every unit uses the same default password, like the notorious "admin/admin" combination that has underpinned countless security breaches. Each device must either have a unique password set at the factory, or must prompt the user to create one before setup is complete.
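The factory-set option amounts to generating a fresh, unpredictable credential for every unit on the production line. A minimal sketch using Python's standard `secrets` module; the alphabet, length, and function name are illustrative choices, not a prescribed standard:

```python
import secrets
import string

# Illustrative sketch of factory provisioning that satisfies the
# "no universal default passwords" rule: every unit gets its own
# cryptographically random credential at manufacture.

ALPHABET = string.ascii_letters + string.digits

def provision_device_password(length: int = 12) -> str:
    """Generate a unique, cryptographically random per-device password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each device rolling off the line receives a different credential,
# typically printed on the unit's label alongside its serial number.
passwords = {f"device-{i}": provision_device_password() for i in range(3)}
```

A real provisioning line would also need to record each credential against the device serial so it can be printed on the label, and would typically enforce complexity rules beyond the simple alphanumeric alphabet shown here.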
Publish a vulnerability disclosure policy: manufacturers must make publicly available a clear policy explaining how security researchers and members of the public can report vulnerabilities, what information is needed, and how quickly the manufacturer will respond. A web page with a security contact email and a commitment to acknowledge reports within a reasonable timeframe will fulfil this requirement. This is not an onerous task on its face but many manufacturers, particularly smaller ones, have never made this information available.
Be honest about your support period: manufacturers must clearly state — on packaging, in product listings, and in accompanying documentation — how long the product will receive security updates. There is, at present, no legally mandated minimum in the UK (unlike the CRA's five-year floor). But a manufacturer cannot simply stay silent. If your product gets security updates for one year and then nothing, you have to say so, at the point of sale, before the consumer hands over their money.
Mandatory transparency about support periods, even in the absence of a minimum support period, is genuinely powerful. It enables consumers to make informed purchasing decisions and creates reputational pressure on manufacturers to offer meaningful support windows. Market forces, armed with better information, can drive standards upward even without regulators mandating specific durations.
The watchdog has teeth
The OPSS is the UK's enforcement authority under the PSTI regime. It is empowered to issue compliance notices, stop notices requiring manufacturers to halt supply of non-compliant products, and — in the most serious cases — recall notices pulling products back from consumers who have already purchased them.
For manufacturers, financial penalties can reach £10 million or 4% of qualifying worldwide revenue, whichever is higher. For ongoing contraventions, there is an additional daily penalty of up to £20,000, a figure designed to attract headlines and create a real sense of pressure.
The OPSS has already signalled that it is actively monitoring the market and has been in touch with a number of manufacturers. Enforcement action, when it comes, is likely to be public and high-profile. Regulators elsewhere in the world are watching the UK's experience closely, and OPSS has every incentive to demonstrate that the regime has genuine teeth.
Two systems, one problem
For businesses selling consumer goods into both the UK and EU markets, which describes a large proportion of significant technology companies, the practical reality is two parallel compliance regimes. They share common roots in ETSI EN 303 645, the international consumer IoT security standard, and they share a common philosophy. But their specific requirements, documentation obligations, enforcement mechanisms, and timelines are different.
There is no mutual recognition between the UK's statement of compliance and the EU's CE marking process. A product that is compliant in one market may need additional steps to be compliant in the other. Post Brexit, this divergence is a fact of life rather than a problem to be solved, and businesses need to build compliance processes that address both jurisdictions concurrently.
The bigger picture
The CRA and PSTI join a growing body of regulation, including the EU's NIS2 Directive for critical infrastructure operators and its UK equivalent (soon to be replaced under the Cyber Security and Resilience Bill), the US SEC's cyber security disclosure rules for listed companies, and sector-specific regimes across financial services and healthcare, all seeking to construct a regulatory architecture for the digital economy that treats security not as an optional extra but as a legal baseline.
The revised EU Product Liability Directive, adopted in October 2024, adds a further dimension. Under the updated rules, software is expressly a "product" for liability purposes, meaning manufacturers can face civil claims from individuals harmed by insecure products, not just regulatory fines. And if a manufacturer fails to push out a security update that the CRA requires, a court may well treat that omission as a product defect. The potential liability exposure is considerable and there will be nothing to prevent multiple penalties being levied across regimes in addition to the price of civil litigation.
Stepping back from the regulatory detail, a broader shift becomes visible. Governments are arriving at a consensus that cyber security cannot be left to market forces alone — that the incentives facing manufacturers, particularly those competing on price in consumer markets, are structurally misaligned with the security interests of users and society. The cost of insecurity is diffuse and borne by victims; the cost of security is immediate and borne by manufacturers. Left to the market, security loses.