Author

Dr. Benedikt Kohn, CIPP/E

Senior Associate


27 December 2021

AI regulation – will Switzerland be following the EU's lead?

  • Briefing

Artificial intelligence ("AI") has long determined our lives in many areas – and so far largely unregulated. This will change in the near future, though, as initiatives to regulate the now no longer entirely new technology are multiplying in many parts of the world. However, there are significantly different approaches to the concrete design, as the following article on possible regulatory proposals of the European Union ("EU") and Switzerland will show.

European Union Approach: The Artificial Intelligence Act

The draft regulation on the use of AI published by the European Commission on 21 April 2021, the so-called "Artificial Intelligence Act" ("AI Act"), is the world's most ambitious attempt to date to regulate AI in concrete terms. It follows a risk-based approach under which AI applications are classified into four categories according to their potential risk: "unacceptable risk", "high risk", "low risk" and "minimal risk".

AI systems with unacceptable risk

Applications of AI with an unacceptable risk will be banned under the regulation. These include AI systems that can manipulate human behavior and thereby cause harm to people, applications that enable public authorities to assess the trustworthiness of individuals on the basis of their social behavior or personality-related characteristics and treat them unfavorably as a result, and – with some exceptions – biometric real-time remote identification in publicly accessible spaces for law enforcement purposes.

AI systems with high risk

The core of the draft is the comprehensive regulation of high-risk AI systems, i.e. those AI applications that pose a high risk to the health, safety or fundamental rights of people. Which applications fall into this category is specified in an annex to the draft regulation, which can be updated continuously in order to react to new developments at any time. These applications will not be banned, but must meet strict requirements in order to be authorised on the European market. For example, the systems must be developed on the basis of data that meet certain quality criteria and must achieve an appropriate level of accuracy and security; a risk management system, detailed technical documentation and automatic logging must be set up; and a sufficient level of transparency for users and control bodies must be ensured.

Under the draft regulation, high-risk systems are intended to include AI applications that make decisions about people in areas sensitive to fundamental rights. These are systems for the biometric identification and categorisation of persons, for the management and operation of critical infrastructure, for regulating access to educational institutions, for recruiting and personnel management, and for access to essential private and public services. Systems to support law enforcement, asylum and border control, and the judiciary are also covered.

AI systems with low and minimal risk

AI systems other than those listed above are to remain largely unregulated in order to maintain innovation-friendly conditions in the European Union. Although not immediately obvious, this concerns the majority of AI applications, for example search algorithms, spam filters or video games. In addition, certain transparency obligations are standardised for low-risk applications such as so-called "chatbots" or "deep fakes".

High fines

The regulation – still at the draft stage – provides for a three-tier sanction concept with high fines. For example, the use of a prohibited AI system or failure to meet certain quality requirements can result in a fine of 30 million euros or – in the case of companies – 6% of annual worldwide turnover, whichever is higher. The range of fines is thus significantly higher than under the General Data Protection Regulation.

AI Act as a model or its own "Swiss way"?

As a country located in the heart of Europe but not a member of the European Union, Switzerland faces the question of whether it will follow the approach of the AI Act or go its own way in regulating AI. This topic was discussed in the summer of 2021 by members of academia and practice in a workshop funded by the "Strategy Lab" of the Digital Society Initiative (DSI) and summarized in a position paper. Unlike the AI Act, however, this is not yet a concrete draft law, but merely a set of non-binding proposals on what regulation of AI could look like.

As much freedom as possible – as much regulation as necessary

Just like the EU, Switzerland also sees a need for action in the area of regulating AI. According to the authors of the proposal, instead of simply adopting foreign regulations, Switzerland should first wait, carefully examine them and then develop its own position. The authors recognise two equally important goals with regard to a possible regulation: on the one hand, it must leave as much room as possible for the development and use of AI; on the other hand, it must ensure that no disadvantages arise for society as a whole, for example through discrimination against those affected or through the undermining of the principles of the rule of law. The EU Commission also sees this conflict of interests and attempts to resolve it in its draft regulation through the risk-based approach described above.

Selective adjustments instead of specific AI statute

Unlike the EU Commission, however, the authors do not see the need for a separate law for the general regulation of AI or algorithms. Instead, existing laws should be examined in light of the challenges posed by the use of AI and, where necessary, adapted selectively. Challenges exist above all in five areas: recognisability and comprehensibility, discrimination, manipulation, liability, and data protection and data security. In many of these areas, however, there are already legal norms with partly suitable regulations.

For example, everything concerning the processing of personal data could be regulated with the means of existing data protection law, to which regulations on transparency could be added, such as a labelling obligation for AI systems, in order to create traceability. The issue of discrimination could be regulated with the help of a general equal treatment law, which would sanction discrimination on the basis of certain characteristics. Manipulation is covered by existing competition law and the possibilities of revocation and rescission in general civil law. Finally, product liability law could be used to create the necessary liability regulations for the use of AI systems and the introduction of a general IT security law could ensure the necessary security of the applications.

Creation of new regulations may still be required

According to the authors of the position paper, the creation of new regulations tailored specifically to AI is not ruled out entirely either. For instance, it should be examined whether certain AI systems should be banned, as in the EU Commission's draft AI Act. In addition, the creation of authorisation procedures and of a public register showing in which areas of public administration algorithmic systems are used could be useful.

Different approaches make sense

Even if the creation of a separate law does have advantages, especially in the regulation of complex topics, such a law is by no means mandatory. In Germany, for example, many of the rules needed to regulate AI could be incorporated into existing laws such as the Allgemeines Gleichbehandlungsgesetz, Bundesdatenschutzgesetz, Bürgerliches Gesetzbuch, Produkthaftungsgesetz or Gesetz gegen den unlauteren Wettbewerb – possibly in implementation of an EU directive, which could ensure a uniform level of regulation throughout Europe.

However, such approaches fail to recognise that the EU, with its roughly 450 million inhabitants, also pursues interests that go beyond mere regulation. The AI Act can also be read as a political message: following the example of the GDPR, which is likewise designed as a regulation, it is intended to become a legislative milestone and to find as many imitators as possible. To achieve this goal, the EU cannot afford a wait-and-see attitude, but must move forward consistently. A scramble over the national implementation of individual rules, which can often take several years, would be fatal in terms of external impact.

In contrast, for Switzerland, which is under less strategic pressure and can therefore react far more flexibly, it may well make sense to first wait and see what other European nations intend to regulate and calmly observe whether this also passes the practical test. The different approaches are therefore justified by the different situations of the legislators – making both approaches reasonable.
