Authors

Debbie Heywood

Senior Counsel – Knowledge


Victoria Hordern

Partner


20 November 2023

Radar – November 2023 – 1 of 3 Insights

A new era of international cooperation? The AI Safety Summit and associated developments

What's the issue?

There remains widespread disagreement as to whether and, if so, when AI will ever pose an existential threat, but it's undeniable that safety issues, particularly around disinformation, privacy and cyber security, are already in evidence. Consequently, and perhaps inevitably with such new and rapidly evolving technology, there is also disagreement as to how best to address current and potential risks without stifling innovation.

What's the development?

The first international AI Safety Summit hosted by the UK at Bletchley Park took place on 1-2 November 2023.  It attracted political heavyweights including the EU's Ursula von der Leyen, UN Secretary-General António Guterres, US Vice President Harris (although not President Biden himself), as well as representatives from China's Ministry of Science and Technology. Academics and tech leaders, notably OpenAI's Sam Altman and X's Elon Musk, were also in attendance. There were, though, notable absences including the President of France and the German Chancellor, and there have been complaints that civil society and campaign groups were not afforded a sufficient presence.

Key developments arising from the summit include:

  • the signing of the 'Bletchley Declaration' in which representatives of 28 governments including the UK, US and China, plus the EU, committed to working together on shared AI safety standards, particularly in relation to frontier AI (advanced AI systems)
  • a non-binding agreement between 11 of the attendee countries including the EU, US, UK, Japan and Australia (but not China) and eight leading AI companies including OpenAI, Microsoft, Meta, Google and Amazon, to allow regulators to review their products before they are placed on the market and to collaborate on pre- and post-launch safety testing
  • support for an international expert body on AI
  • the announcement of a report on the state of AI science, to be written by a group of leading academics led by Yoshua Bengio, and supported by an advisory panel comprising representatives of attendee countries
  • a commitment to further summits, with the next one to be hosted in France.

A number of initiatives were announced around the summit including:

UK

The Prime Minister announced the world's first AI Safety Institute to advance knowledge of AI safety, evaluate and test new AI and explore a range of risks. In his speech, the Prime Minister also reiterated the UK's approach to regulating AI set out in its AI White Paper. The Department for Science, Innovation and Technology (DSIT) published a discussion paper to support the summit and a report evaluating the six-month pilot of the UK's AI Standards Hub. In addition, leading frontier AI firms responded to the government's request to outline their safety policies.

USA

President Biden issued an Executive Order on safe, secure and trustworthy AI (EO). The EO requires:

  • developers of the most powerful AI systems to share their safety test results and other critical information with the US government
  • the National Institute of Standards and Technology to develop standards, tools and tests to help ensure AI systems are safe, secure and trustworthy. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure as well as chemical, biological, radiological, nuclear and cyber security risks
  • development of strong new standards for biological synthesis screening to protect against the risks of using AI to engineer dangerous biological materials
  • protection of Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content
  • an advanced cyber security program to develop AI tools to find and fix vulnerabilities in critical software
  • development of a National Security Memorandum to direct further actions on AI and security.

The EO calls on Congress to pass bipartisan data privacy legislation and sets out a number of privacy-related directions, while also covering:

  • advancing equity and civil rights
  • consumer, patient and student rights
  • supporting workers
  • promoting innovation and competition
  • advancing American leadership abroad
  • ensuring responsible and effective government use of AI.

Vice President Harris subsequently announced a range of commitments and policy developments at the summit, including the establishment of an AI Safety Institute intended to operationalise NIST's AI risk management framework, creating guidelines, tools, benchmarks and best practice recommendations to identify and mitigate AI risk. It will also enable information sharing and research, including with the UK's planned AI Safety Institute. The VP also announced draft policy guidance on US government use of AI, and the US made a political declaration on the responsible military use of AI and autonomy.

G7

The G7 leaders have agreed International Guiding Principles for all actors in the AI ecosystem and an International Code of Conduct for developers of advanced AI systems as part of the Hiroshima AI process. 

The guiding principles document is intended to be a 'living document' building on the existing OECD AI principles. It currently sets out 11 non-exhaustive principles to help "seize the benefits and address the risks and challenges brought by AI". They are intended to apply to all AI actors, when and as applicable, to cover the design, development, deployment and use of advanced AI systems. They include commitments to mitigate risks and misuse and to identify vulnerabilities, to encourage responsible information sharing and the reporting of incidents, to invest in security, and to create a labelling system enabling users to identify AI-generated content.

The G7 suggests organisations follow the voluntary Code of Conduct, which sets out actions to help maximise the benefits and minimise the risks of advanced AI systems across all stages of the AI lifecycle.

EU

The latest round of trilogues on the EU's draft AI Act was held on 24 October 2023. Agreement was reportedly reached on provisions for classifying high-risk AI applications and on general guidance for the use of enhanced foundation models. Since then, there have been reports of new disagreements around the regulation of foundation models which threaten to derail the legislation. The next and potentially final round of trilogues is planned for 6 December. In her speech at the summit, Ursula von der Leyen not only highlighted the EU's progress with the AI Act but also focused on the EU's plans to set up a European AI Office with oversight and enforcement powers to deal with the most advanced AI models. A high-level meeting planned in Brussels for January 2024 aims to strengthen EU cooperation on AI development.

In the meantime, the European Data Protection Supervisor (EDPS) has published an Opinion on the AI Act setting out its final recommendations. Much of the Opinion relates to the EDPS's role as the notified body, market surveillance authority and competent authority for the supervision of the provision or use of AI systems, in respect of which it asks for a number of clarifications. The EDPS also calls for privacy protections to be at the forefront of the legislation, and for a right for individuals to lodge complaints about the impact of AI systems on them, with the EDPS explicitly recognised as competent to receive such complaints alongside DPAs. The EDPS recommends that DPAs be designated as the national supervisory authorities under the AI Act, cooperating with authorities that have specific expertise in deploying AI systems.

United Nations

The UN announced the launch of a high-level advisory body on AI. This is a multi-stakeholder body intended to undertake analysis and make recommendations for the international governance of AI. The 38 participating experts are drawn from government, the private sector and civil society, and will consult widely to "bridge perspectives across stakeholder groups and networks".

Other initiatives

These include:

  • The Partnership on AI consultation on its guidelines on safe foundation model deployment
  • The Global Privacy Assembly resolution (sponsored by the EDPS) on generative AI systems, committing to ensuring the application and enforcement of data protection and privacy legislation in the context of generative AI, working cooperatively, and encouraging stakeholders to take privacy and data protection into account when developing and using generative AI systems
  • Germany's announcement of an AI action plan to boost investment in Germany while also focusing on collaboration with its EU partners.

What does this mean for you?

Many will agree with UK Prime Minister Sunak's view that global consensus is the only genuinely effective path to managing potential AI-related doomsday scenarios, but it's important to ask what the summit has really achieved. Getting a wide range of power brokers to sit down and discuss the issues is certainly an important step, and the positioning of the UK as rainmaker has been moderately successful. However, Sunak's communiqué, now signed by politicians from a wide range of countries including the US, China, Nigeria, Canada and Singapore, stops short of calling for specific AI regulation.

This is in line with the UK government's policy outlined in its 2023 White Paper on AI, but at odds with the EU's approach of introducing AI-specific legislation. The communiqué is ambitious, calling for international co-operation and inclusivity, but it contains no call for specific AI regulation or enforcement, and the 20+ countries which have signed obviously fall far short of global coverage.

However, the summit does appear to be the start of something big – a change in mood music, perhaps. For example, there are commitments for further summits in the years ahead. Significantly, the pledges to establish AI Safety Institutes in the UK and the US and to test AI technology before its release onto the market also indicate a desire for cross-border collaboration on evaluating risks and promoting safety, as well as – in theory at least – collaboration between Big Tech and governments. 

Getting to a place of global agreement on AI regulation at this point was always going to be a tough ask. In the first place, there is disagreement as to the nature of the safety issues posed by AI and whether we should be focusing on future existential threats or on the currently destabilising potential of deepfakes and disinformation (or indeed how to effectively focus on both concerns). It's also hard to envisage progress on AI safety regulation keeping up with the pace of technological advances. 

The Prime Minister himself acknowledged that the rapid development of technology is in tension with the time and resources required to consult, draft and implement legislation, but there are strong voices calling for some form of international oversight body. Perhaps what form such a body should take will be high on the agenda of the next summit, but for the foreseeable future, a fragmented approach to the safety concerns around AI will persist.
