Artificial Intelligence Act

Here you'll find all relevant and recent updates on the AI Act.

Introduction to the AI Act of the European Union

In April 2021, the European Commission presented its proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence, also known as the AI Act of the European Union.

Here, our AI Lawyers provide relevant background and information as well as the current status of the AI Act proposal:

  • an article feed with relevant articles and comments,
  • the Taylor Wessing legislation tracker,
  • links to all relevant documents,
  • graphic overviews of specific AI Act topics,
  • frequently asked questions and answers,
  • as well as relevant links to further information and research.

This page is maintained by Taylor Wessing's AI Lawyers within our TMT Team based in Düsseldorf, Germany. We advise our clients on IT, telecommunications and data protection law and have particular experience in legal issues relating to digitalization and artificial intelligence.

The AI Act of the European Union is still in the making and this page is continuously updated. We encourage all visitors to contribute with ideas and suggestions. Do not hesitate to get in touch with us!

Artificial intelligence
AI, lawful bases, transparency and fairness: how to thread the GDPR needle | Tech Me Up! Session #4
13 September 2023 | by multiple authors

Artificial intelligence
The autopilot’s fault? Who is liable when AI fails? | Tech Me Up! Session #3
23 August 2023 | by multiple authors

Interface - AI – are we getting the balance between regulation and innovation right?
Open source generative AI in games
Marie Keup and Lucas de Groot look at how games developers and publishers can ensure they don't run into issues when using open source generative AI in their games.
31 July 2023 | by Marie Keup and Lucas de Groot

Interface - AI – are we getting the balance between regulation and innovation right?
What games businesses need to consider when drafting a generative AI acceptable use policy
Martijn Loth highlights the top ten considerations to help games businesses mitigate risks associated with using generative AI when developing video games.
31 July 2023 | by Martijn Loth

The AI Act of the European Union: Legislation tracker

The AI Act is being adopted as a European regulation under the ordinary legislative procedure in accordance with Art. 294 TFEU. The AI Act proposed by the European Commission must be adopted jointly by the European Parliament and the Council of the EU. The Council has already published its position. In the Parliament, the proposed draft is revised by the competent committees; for the AI Act, the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) are jointly in the lead, while other committees are responsible for individual topics and articles of the AI Act. Once the Commission's draft has been revised, the plenary votes on the legislative text. As soon as the Parliament has adopted its own position, the Commission, the Council and the Parliament can start the final negotiations (the so-called "trilogue").

Below you will find all relevant documents in our legislation tracker, which keeps you up to date on the legislative process of the AI Act.

PARLIAMENT

On 27 April 2023, the Parliament agreed on a draft. The key committees voted on the draft on 11 May 2023. The final vote in plenary took place in June 2023.

Date | Document | Document Type | Link | Description
11 May 2023 | Position adopted by the key committees (IMCO and LIBE) | Position on Proposal | EN | The Parliament also proposed amendments to the legislative proposal in its position. In particular, the heated discussions about general-purpose and generative AI – most prominent example: ChatGPT – gave rise to intensive revisions. First, the AI definition was brought in line with the OECD definition. Further AI practices were also prohibited: biometric identification systems, for example, are to be banned completely – contrary to the original proposal – without exceptions for cases such as terrorist attacks or kidnappings. General-purpose AI and generative AI are to be regulated according to a tiered approach, under which the main responsibility falls first on the economic actors who integrate such AI into their applications, while providers of general-purpose AI only have a supporting role with transparency obligations. Providers of generative AI, such as ChatGPT, must additionally publish a summary of the copyrighted data used for training and disclose that content is AI-generated. For classification as a high-risk AI system, it should now additionally be required that the system poses a significant risk to people's health, safety or fundamental rights.
COUNCIL OF THE EU

Date | Document | Document Type | Link | Description
06 December 2022 | General Approach | Final Position on Proposal | EN | In its position, the Council proposed changes to the Commission's original proposal. For example, the position includes a narrower definition of AI. In addition, AI systems that serve military purposes or national security, as well as the use of AI by private individuals, are to be excluded from the scope of application. Social scoring is also to be prohibited for private actors, not only for public authorities. The list of high-risk AI systems was adjusted as well: AI systems for the detection of deepfakes by law enforcement authorities and for crime analysis and the evaluation of large data sets are no longer to be classified as high-risk, while AI systems for risk assessment in life and health insurance and for use as safety components in critical digital infrastructure were added as high-risk AI systems.
25 November 2022 | Note on General Approach from the Permanent Representatives Committee to the Council Secretariat | Position on Proposal | EN |
COMMISSION

Date | Document | Document Type | Link | Description
28 September 2022 | Proposal AI Liability Directive | Legislative Proposal | EN | The proposed directive aims to adapt the civil-law rules on non-contractual liability to the age of AI. In particular, it provides for disclosure obligations regarding evidence and for rebuttable presumptions. These are intended to take effect when obligations under the AI Act are breached and thereby improve the chances of harmed parties successfully claiming damages. The AI Liability Directive is thus closely intertwined with the AI Act.
21 April 2021 | Proposal AI Act | Legislative Proposal | EN | The legislative proposal divides AI systems into four risk categories in line with its risk-based approach. On this basis, it prohibits certain AI practices, lays down strict requirements and obligations for high-risk AI systems and provides for relatively lenient transparency obligations for other AI systems. In this way, the development, placing on the market and use of AI systems in the European Union are to be regulated in a harmonised manner.
21 April 2021 | Proposal AI Act Annex | Legislative Proposal | EN | The annexes contain further specifications referred to in the provisions of the regulation, for example the techniques and approaches of artificial intelligence relevant to the AI Act, the list of high-risk AI systems, and specifications on the conformity assessment.

Other relevant documents on the AI Act

EUROPEAN INSTITUTIONS AND CONSULTATIVE BODIES (without Legislative Power)

Date | Document | Document Type | Link
29 December 2021 | European Central Bank I; European Central Bank II | Opinion | EN; EN
02 December 2021 | European Committee of the Regions | Opinion | EN
22 September 2021 | European Economic and Social Committee (EESC) | Opinion | EN
18 June 2021 | European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) | Opinion | EN
21 April 2021 | Final Impact Assessment (EU Commission) | Impact Assessment | EN
23 July 2020 | Inception Impact Assessment (EU Commission) | Impact Assessment | EN
19 February 2020 | White Paper (EU Commission) | | EN
08 April 2019 | Guidelines for Trustworthy Artificial Intelligence (European Commission) | Guidelines | EN
07 December 2018 | Coordinated Plan for Artificial Intelligence of the European Commission; Annex to the Coordinated Plan | Coordinated Plan | EN; EN
25 April 2018 | AI Strategy of the European Commission | Strategy | EN
OTHERS

Date | Document | Document Type | Link | Description
25 November 2022 | Statement by Germany on the General Approach of the Council of the EU | Note | EN |
November 2022 | Intellera Consulting | Report | EN | Estimating compliance costs for a small to medium-sized AI provider
November 2022 | Open Loop (supported by Meta) | Report | EN | Testing of selected articles of the AI Act with companies around the world to assess how understandable, feasible and effective they are
05 October 2022 | European Digital SME Alliance | Opinion | EN |
05 October 2022 | European Association of the Machine Tool Industries and Related Manufacturing Technologies (CECIMO) | Opinion | EN |
26 September 2022 | Expert statements in the German Parliament | Opinion | DE |
19 September 2022 | German Opinion | Opinion | EN |
30 March 2022 | German Insurance Association | Opinion | DE |
25 November 2021 | German Bar Association | Opinion | DE |
10 August 2021 | German Banking Industry | Opinion | DE |
06 August 2021 | German Electro and Digital Industry Association | Opinion | EN |
06 August 2021 | German AI Association | Opinion | EN |
04 August 2021 | German Medical Technology Association | Opinion | DE |
30 June 2021 | German Women Lawyers Association | Opinion | DE |

AI Act – Overviews

  • AI Act: Competent Parliamentary Committees – Learn more (pdf)
  • AI Act: Fines – Learn more (pdf)
  • AI Act: Addressees – Learn more (pdf)
  • AI Act: Prohibited AI Practices – Learn more (pdf)

FAQ on the AI Act (created with the help of ChatGPT)

What is Artificial Intelligence (according to the AI Act)?

According to the AI Act, AI is defined as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."

Annex I of the AI Act lists the following techniques and approaches that are considered to be part of AI:

  • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems
  • Statistical approaches, Bayesian estimation, search and optimization methods.

It is worth noting that, beyond the reference to the Annex I techniques, the AI Act's definition of AI focuses on a system's outputs and objectives rather than on a specific underlying technology or algorithm. The regulation aims to establish a framework for the ethical and trustworthy development and use of AI systems in the EU, with a focus on ensuring that they respect fundamental rights and are subject to human oversight.
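To illustrate how broad this definition is, here is a minimal, purely illustrative Python sketch: a tiny supervised machine-learning model (a technique listed in Annex I) that generates predictions for a human-defined objective and would therefore already fall within the proposal's notion of an "AI system". The loan-repayment scenario, the feature names and the data are invented for illustration only and are not taken from the AI Act.

```python
# Illustrative sketch only: a very simple piece of software that already meets the
# AI Act proposal's definition of an "AI system". All data here is made up.
from sklearn.linear_model import LogisticRegression

# Human-defined objective: predict whether a loan applicant will repay (1) or not (0),
# based on two toy features (income and existing debt, in thousands of EUR).
X_train = [[30, 5], [80, 1], [45, 20], [120, 3]]
y_train = [0, 1, 0, 1]

# Supervised machine learning is one of the techniques listed in Annex I.
model = LogisticRegression().fit(X_train, y_train)

# The output is a prediction/recommendation "influencing the environment it
# interacts with" -- exactly the kind of output the definition refers to.
print(model.predict([[60, 10]]))
```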

What does the AI Act regulate?

The AI Act is a proposed regulation by the European Union that aims to establish a legal framework for the development, deployment, and use of AI systems in the EU. The proposed regulation seeks to ensure that AI systems used in the EU are transparent, reliable, and safe, and that they respect fundamental rights and values.

The AI Act covers a wide range of AI applications, including but not limited to:

  • Biometric identification and categorization systems
  • Critical infrastructure, such as transport and energy systems
  • Educational and vocational training systems
  • Employment, workers, and human resources management systems
  • Law enforcement and judicial systems
  • Marketing and advertising systems
  • Recruitment and human resources systems
  • Social scoring systems

The regulation governs a variety of aspects related to the development, deployment and use of AI systems, including:

  • Classification of AI systems: The regulation distinguishes between different types of AI systems, depending on the level of risk they pose. High-risk AI systems, such as autonomous vehicles or credit scoring systems, are subject to stricter requirements and controls than lower-risk AI systems, such as chatbots.
  • Requirements for AI systems: The regulation sets out minimum requirements that AI systems must meet in order to be safe, transparent, reliable and accountable. These include requirements for data quality, human supervision and control, transparency and traceability of decisions, and compliance with ethical principles.
  • Conformity assessment and certification: The regulation provides that high-risk AI systems must be subject to conformity assessment and certification to ensure that they meet the requirements and standards of the regulation.
  • Responsibility and sanctions: The regulation stipulates that the responsibility for the use of AI systems lies primarily with the providers and users of the systems and provides for possible sanctions for violations of the regulation.
  • Monitoring and oversight: The regulation requires EU Member States to establish appropriate monitoring and oversight authorities to ensure compliance with the requirements and standards of the regulation.

The AI Act also establishes a European Artificial Intelligence Board, which will be responsible for overseeing the implementation and enforcement of the regulation across the EU.
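As a purely illustrative sketch (not legal advice and not part of the regulation), the tiered, risk-based logic described above can be pictured as a simple mapping from example use cases to risk categories. The use cases and the mapping in the following Python snippet are simplified assumptions; the actual classification depends on the AI Act's annexes and the concrete circumstances of each system.

```python
# Simplified illustration of the AI Act's risk-based approach. The tiers follow the
# proposal's structure; the example use cases and their mapping are assumptions.
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "strict requirements and conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical, heavily simplified mapping of example use cases to tiers.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "credit scoring of natural persons": RiskTier.HIGH,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def indicative_tier(use_case: str) -> Optional[RiskTier]:
    """Return the illustrative tier for a known example use case, else None."""
    return EXAMPLE_USE_CASES.get(use_case)

for case, tier in EXAMPLE_USE_CASES.items():
    print(f"{case}: {tier.name} ({tier.value})")
```

In practice, assigning a system to a tier always requires a case-by-case legal analysis of the relevant provisions and annexes.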

Who is affected by the AI Act?

The AI Act will affect a wide range of stakeholders involved in the development, deployment, and use of AI systems in the EU. In particular, the AI Act will be relevant for the following stakeholders:

  • AI developers and providers: Companies and organizations that develop or provide AI systems in the EU will need to comply with the requirements and obligations set out in the regulation.
  • Users and operators of AI systems: Companies and organizations that use or operate AI systems in the EU will need to ensure that they comply with the requirements and obligations set out in the regulation.
  • Regulators and supervisory authorities: National authorities in the EU will be responsible for enforcing the regulation and ensuring that AI systems used in their respective countries comply with the requirements and obligations set out in the regulation. The European Artificial Intelligence Board will provide guidance and support to national authorities in this regard.
  • Consumers and citizens: The regulation aims to protect the rights and interests of consumers and citizens in the EU who interact with AI systems. This includes ensuring that AI systems are transparent and that users are informed about the use of AI in their interactions with companies and organizations.

Is the AI Act relevant outside the EU?

The AI Act is primarily aimed at regulating the development, deployment, and use of AI systems in the EU. However, its impact is likely to be felt beyond the borders of the EU, given the global nature of the AI industry and the potential for AI systems to be used across multiple jurisdictions.

There are a few reasons why the AI Act may be relevant outside the EU:

  • Compliance with the AI Act may be required for companies that operate in the EU or provide AI systems to customers in the EU. Companies based outside the EU that provide AI systems to EU customers will need to ensure that their systems comply with the requirements and obligations set out in the regulation.
  • The AI Act may influence the development of AI regulations in other jurisdictions. Other countries and regions may look to the EU's regulatory approach as a model for their own AI regulations, or may seek to align their regulations with the EU's standards in order to facilitate cross-border trade and cooperation.
  • The AI Act's focus on ethical and trustworthy AI may set a precedent for global AI governance. The regulation's emphasis on ensuring that AI systems respect fundamental rights and values, such as human dignity and privacy, reflects a growing global consensus on the need for ethical and responsible AI.
 
When does the AI Act take effect?

The AI Act has not yet been adopted. The European Commission proposed the regulation in April 2021, and it will need to be reviewed and approved by the European Parliament and the Council of the EU before it can become law.

On 6 December 2022, the Council of the EU published its general approach on the AI Act proposal. In the Parliament, heated discussions about new, disruptive AI applications on the market – most notably ChatGPT – caused delays in the process. On 27 April 2023, the Parliament agreed on a draft of its position. The leading committees voted on the draft on 11 May 2023, and the final vote in plenary took place in June 2023. The final negotiations, the so-called trilogue, can now begin.

It is therefore possible that the AI Act will still enter into force in 2023. The majority of its provisions would then apply 24 months later, giving companies and organizations time to ensure that their AI systems comply with the requirements and obligations set out in the regulation.

Contact our AI Lawyers

Get in touch with our AI Act experts for more information on the AI Act and how they can help you. You can either contact them directly or reach out to them via aiact@taylorwessing.com!

 

AI Act

AI – are we getting the balance between regulation and innovation right?

Interface edition on the AI Act

Read now

Latest insights in your inbox

Subscribe to newsletters on topics relevant to you.

Subscribe

Featured insight

Technology, media & communications

Metaverse, Mixed Reality and a whole new business world?

More

Related events

There are no upcoming events