As its title suggests, the Digital Omnibus seeks to offer something for everyone when it comes to simplifying digital regulation, but for many organisations it is AI Act compliance that has been the straw that broke the camel's back after years of new obligations arriving in rapid succession.
The significant challenges of complying with uncertain obligations that apply to a technology in a state of rapid flux, while attempting to resolve apparent conflicts between the AI Act and existing regulatory frameworks such as the GDPR, have led to widespread regulatory fatigue.
With an opportunity to address a range of problems which have become apparent as organisations have struggled to engage with the AI Act, the Digital Omnibus has made some significant adjustments, while leaving other concerns unaddressed. We set out the key changes below, before focusing in more detail on the impact for developers of high-risk AI systems, who will be caught by some of the most notable changes. The creation of a new legal basis for processing special category personal data for the detection and correction of harmful bias through a new Article 4a of the AI Act is separately considered here and here.
Consolidation of regulatory power
Under the Digital Omnibus, the new EU AI Office will be responsible for the supervision and enforcement of AI Act obligations for AI systems based on General Purpose AI models if the system and the model are developed by the same provider. Systems that form, or are integrated into, a Very Large Online Platform or Very Large Online Search Engine under the Digital Services Act will also be subject to the oversight of the AI Office. This will mean that the Commission’s supervisory and enforcement powers under the AI Act and the DSA are exercised consistently.
It will also reflect a lesson learned from the GDPR and the challenges and resentments that can result from having certain Member States effectively responsible for the regulation of tech giants across the whole EU. The European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) have adopted a joint Opinion on the European Commission's proposal for the Digital Omnibus on AI which addresses this change, among other points. The EDPB and EDPS recommend further clarification of the role of the AI Office, concerned that its new powers may impede the independent supervision of EU institutions' use of AI systems. They also suggest that national Data Protection Authorities should be directly involved in the supervision of data processing within EU-level AI regulatory sandboxes, to avoid a total disconnect between AI Act and GDPR regulation.
Literacy becomes a macro-problem
One of the most controversial changes introduced by the Digital Omnibus is the removal of the duty on providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and others, such as contractors, operating AI systems on their behalf. This duty is replaced by an obligation resting with the Commission and EU Member States to encourage providers and deployers to improve literacy. On the one hand, the AI literacy obligations are among the widest-reaching requirements under the AI Act and are seen by many organisations as unnecessarily prescriptive. On the other, while the burden of ensuring that human actors understand the implications of their use of AI systems should apply from the top down, redirecting this duty away from business risks reducing it to a general aspiration rather than a tangible point of compliance, ultimately shrinking the scope and frequency of AI training and leaving unaddressed the structural vulnerabilities created when humans fail to question AI or its deployment. The EDPB and EDPS are clearly concerned about this and have called for the employer duty to be retained.
Registration relaxations
Providers of AI systems that have been exempted from classification as high-risk under Article 6(3) AI Act, for example because they are used for preparatory tasks to be followed by human intervention and do not in and of themselves present a risk to the fundamental rights or wellbeing of persons, will no longer be required to register those systems in the EU database. Providers will instead self-assess risk before the system is made available on the market. The EDPB and EDPS advise against this change, arguing that it would undermine accountability and public trust.
Postponing transparency
The Digital Omnibus delays the application of the transparency obligation in Article 50(2) of the AI Act for AI systems placed on the market before 2 August 2026 for a further six months, until 2 February 2027. Providers of AI systems generating synthetic audio, image, video, or text content will be required to ensure that the outputs of the AI system are marked in a machine-readable format and can be detected as having been artificially generated or manipulated. This is likely to be achieved through the use of watermarks or metadata tagging. The delay will allow time for the Code of Practice on the marking and labelling of AI-generated content to come into force in 2026 before the provisions apply.
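To illustrate what machine-readable marking can look like in practice, the sketch below embeds simple provenance metadata in a PNG file using the Pillow library in Python. It is our own illustration only: the tag names are hypothetical and are not drawn from the AI Act or the Code of Practice, and real compliance approaches are likely to rely on standardised provenance schemes and robust watermarking rather than ad hoc tags.

```python
# Illustrative sketch only: embedding machine-readable provenance metadata
# in a PNG via Pillow. The key names are hypothetical examples, not a
# scheme prescribed by the AI Act or the forthcoming Code of Practice.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image (a real pipeline would supply its output here)
image = Image.new("RGB", (256, 256), color="grey")

metadata = PngInfo()
metadata.add_text("ai-generated", "true")            # machine-readable flag
metadata.add_text("generator", "example-model-v1")   # hypothetical model identifier

# The saved file carries the tags in its PNG text chunks, so downstream
# tools can detect the content as artificially generated
image.save("output_tagged.png", pnginfo=metadata)

# Reading the tags back demonstrates machine-readability
print(Image.open("output_tagged.png").text)
# {'ai-generated': 'true', 'generator': 'example-model-v1'}
```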
Broader carve-outs on system management
Access to simplified compliance with the quality management system requirements (Article 17 AI Act), which was previously offered only to microenterprises under Article 63 AI Act, is now extended to small and medium-sized enterprises. This addresses the concern that start-ups which did not fall into the very narrow microenterprise category would face a significant regulatory burden from day one of operations, making the EU a less attractive place for an initial launch.
Sectoral conformity takes priority
The Digital Omnibus addresses the apparent conflict that arises when a high-risk system falls within two regulatory classifications, each with its own conformity procedure (for example, when emotion recognition systems are used in medical devices). In such cases, the AI provider should follow the conformity assessment procedure under the sectoral regulation rather than attempt to follow multiple conformity assessment procedures. Notified bodies under sectoral regulation must apply to be designated as notified bodies under the AI Act by 2 February 2028 if they want to assess high-risk AI systems. Although not explicitly stated, it appears that until February 2028, designation following conformity assessment under a sectoral regulation will be adequate for the purpose of AI Act compliance.
Living in the real world: the practical impact on real-world testing of high-risk AI systems
One of the support measures intended to reconcile innovation with compliance with the AI Act is the so-called real-world testing of high-risk AI systems outside AI regulatory sandboxes.
Current approach of the AI Act for real-world testing and practical problems
Participation in AI regulatory sandboxes, i.e. controlled testing environments overseen by a competent authority in which high-risk AI systems that have not yet been launched on the market can be tested under realistic conditions, can be very complex and challenging for providers. The AI Act therefore currently allows providers of certain AI systems that qualify as high-risk due to their purpose or field of use ('purpose-based' high-risk AI systems, Article 6(2), Annex III AI Act) to test their systems in real-world conditions outside these regulatory sandboxes (Article 60 AI Act). This includes AI systems in the areas of biometrics, critical infrastructure, education, employment, private and public services, law enforcement, migration management and the administration of justice and democratic processes.
Testing in real-world conditions enables the temporary testing of an AI system outside a laboratory or sandbox in order to collect reliable and robust data, and to assess and verify the conformity of the AI system with the requirements of the AI Act, without placing the system on the market or putting it into service. This means performing tests under real-world conditions but before making the AI system available to dedicated users. Testing under real-world conditions is subject to a large number of requirements, including preparing a test plan in advance for approval by the competent authority, obtaining the consent of participants, and ensuring that data collected and processed for the tests is only transferred to countries outside the EU where appropriate safeguards are applied (Article 60(4) AI Act).
In practice, there is uncertainty about the exact procedure and requirements for participating in AI regulatory sandboxes and real-world testing, as the AI Act defers the detail and concrete guidance to implementing acts that are yet to be adopted. There are also concerns that the extensive Article 60 requirements for testing under real-world conditions conflict with the intended innovation-friendly approach of the AI Act.
Does the Digital Omnibus address the issues?
The Digital Omnibus contains both a proposal to expand the locations for real-world testing and a proposal to allow Member States and the Commission to specify the testing conditions for certain high-risk AI systems in a ‘voluntary real-world testing agreement’.
By expanding the scope of Article 60 AI Act, high-risk AI systems connected with certain physical products listed in Annex I AI Act, such as machinery, toys, lifts and their safety components, or medical devices, will benefit from the same ability to test under real-world conditions.
In addition, the Digital Omnibus proposes a new Article 60a AI Act, creating a new legal basis for providers of high-risk AI systems under Annex I Section B AI Act (products for traffic and mobility regulation for aviation, road, rail, agricultural, and maritime transport) to facilitate real-world testing in conjunction with a voluntary agreement. This voluntary agreement between interested Member States and the Commission sets out the testing requirements and contains detailed information on the plan for testing under real-world conditions. The benefit of signing up to the agreement is that the current AI Act requirements for testing these types of high-risk AI systems would no longer apply.
While this proposed amendment aims to provide greater clarity regarding the testing procedures, its success will likely depend on whether and under what conditions the voluntary testing agreement is actually concluded.
Backstop deadlines for providers of high-risk AI systems
Much has been made of the planned change to the application dates of Chapter III AI Act on high-risk AI, which the Digital Omnibus links to the completion of standards and guidance. The rules will only apply once the Commission adopts a Decision confirming their completion, followed by a six-month transition period for Annex III (purpose-based) systems and a 12-month period for Annex I (product-based) systems. However, if no such Decision is adopted, a backstop kicks in and the rules will apply from 2 December 2027 (Annex III) and 2 August 2028 (Annex I).
How much will these changes help?
It's clear that the Digital Omnibus proposal is attempting to address some of the pain points of AI Act compliance. The proposed changes regarding real-world testing of high-risk AI and the adjustment of implementation deadlines are intended to give providers more space for development and more time to adapt their systems to the AI Act. However, the requirements for real-world testing remain quite complex, and for certain 'purpose-based' high-risk AI systems, still depend on implementing acts of the EU Commission. It therefore remains to be seen in practice whether the proposed changes will be sufficient to achieve the desired effect of ensuring that the AI Act does not hamper innovation.
A number of areas of concern remain unaddressed by the Digital Omnibus. Hopes that the requirement for deployers of high-risk systems to conduct fundamental rights impact assessments under Article 27 might be removed, given the overlap with data protection impact assessments, are unfulfilled. The tight limitations on the research exemptions, which exclude all but the least commercial R&D efforts, have been retained. The Digital Omnibus reflects a fairly modest attempt at regulatory streamlining, at least as far as the AI Act is concerned. At the very least, though, its acknowledgment that businesses need time to engage with regulation and guidance, and its resulting adjustment of timescales, may give businesses the space they need to achieve full compliance under an AI Act which achieves its aim of regulating AI without impacting innovation.
The changes to the AI Act's implementation timeline make sense inasmuch as it is hard to comply when you don't know what compliance looks like. However, the real question is whether the Digital Omnibus, or at least the AI part of it, can be passed before the current deadline for the application of Chapter III, which is 2 August 2026.