8 February 2024

Metaverse February 2024 – 3 of 4 Insights

Inheriting the future – children, AI and data

Victoria Hordern looks at the impact of AI on children, and at the role of AI and data protection legislation in protecting them from potential AI-related harms.

Victoria Hordern, Partner

It is probably too early to say how AI systems will affect children (for better or worse) but, just like the internet and social media, we know that AI will have a significant impact. How are regulators and politicians anticipating that impact? Will the laws that develop be sufficient to both protect children and give them the skills to use AI for good?

Legislative and regulatory proposals on AI

The European Commission’s draft AI Act is leading the way in terms of legislation to regulate AI. The Commission’s original draft, published in April 2021, included provisions to protect children. The Recitals recognise that children are a specific vulnerable group who should be protected from manipulation, exploitation and social control practices and can be particularly susceptible to subliminal components. They also specifically point out that children have rights under Article 24 of the EU Charter of Fundamental Rights (EU Charter) (namely a child’s right to the protection and care necessary for their well-being, and the requirement that all actions relating to children be taken in their best interests) as well as under the UN Convention on the Rights of the Child.

The other significant reference to children in the Commission’s proposal relates to the risk management system obligations for high-risk AI systems. Any risk management system must give specific consideration to whether the high-risk AI system is likely to be accessed by, or have an impact on, children. While certain AI systems listed as high-risk in the Commission’s draft relate to children (for example, those used for student educational assessments), an AI system trained using children’s personal data or targeted at children is not, simply by virtue of those facts, considered to be a high-risk (or prohibited) AI system. So, for example, AI systems used on social media or gaming platforms known to have substantial numbers of child users whose data would be processed as part of the AI system would not, by default, be considered high-risk under the AI Act.

Where an AI system is high-risk, there is an obligation to develop and maintain a risk management system. Part of this obligation is to consider whether children are likely to be affected by the AI system and to reduce any risks to those children accordingly. In eliminating or reducing the risks, due consideration must be given to the technical knowledge, experience, education and training to be expected of the user, and to the environment in which the system is to be used. If the users of a high-risk AI system are predominantly children, this obligation appears to require a proper understanding of a child’s awareness when using the AI system, which will vary according to age, technological experience and educational skills, among other factors.

The Council’s draft of the AI Act, published in November 2022, clarified in a new Recital 5a that laws on the protection of minors should not be affected by the AI Act, but did not otherwise significantly alter the Commission’s drafting when it comes to children. The European Parliament proposed an amendment to Article 9 requiring providers, when implementing the risk management system, to give specific consideration to whether the high-risk AI system is likely to adversely impact vulnerable groups or children. Additionally, the EP wished to classify as high-risk the AI systems used in recommender systems for user-generated content by social media platforms designated as Very Large Online Platforms under the Digital Services Act.

Given the lack of significant disagreement between the three EU institutions in this area, we should expect the final version of the AI Act to remain broadly similar to the Commission’s original approach, an expectation given weight by the draft of the consolidated AI Act leaked towards the end of January 2024. In other words, the final version of the AI Act will not specifically prohibit AI systems which operate in particular ways from targeting children. Nor will it classify AI systems targeted at children as high-risk simply because they target children.

From a US perspective, neither the US Blueprint for an AI Bill of Rights nor the White House Executive Order on the Safe, Secure and Trustworthy Development and Use of AI (EO 14110) proposes any specific general provisions concerning children’s use of AI systems or the impact of AI on children. Where the US proposals mention children, the focus is on the use of AI to help combat child abuse. Similarly, the UK White Paper (Establishing a pro-innovation approach to regulating AI) underlines the importance of providing education on AI to businesses, consumers and the general public, but there is no specific call to provide relevant education for children on their use of AI.

There are, of course, other national and multinational initiatives concerning AI which touch on the position of children. In November 2021, UNICEF produced version 2 of its Policy guidance on AI for children, which sets out recommendations for building AI policies and systems that uphold child rights. In its guidance, UNICEF highlights that children interact with, or are impacted by, AI systems that are not designed for them, and that AI will transform children’s lives in ways we cannot yet understand. The guidance emphasises that children’s developmental stages and different learning abilities need to be considered in the design and implementation of AI systems. This is underlined by the report the 5Rights Foundation produced in July 2021, ‘Pathways: How digital design puts children at risk’, which comments:

"…no child is the same. Some have more resilience than others, and the circumstances and support available to help them to navigate childhood vary. But no connected child is exempt from the demands of the digital world and no child can be supervised 24/7."

Additionally, the UNICEF guidance calls for governance frameworks to be established and adjusted to oversee processes that ensure the application of AI systems does not infringe child rights. As AI legal and regulatory frameworks begin to take concrete shape around the world, it will be important to consider how far they meet UNICEF’s goals.

It's not just about AI regulation

Existing legal frameworks will apply alongside AI-specific legislation and can help protect children when it comes to AI systems. An obvious example is data protection. Some AI systems may need to be trained using children’s personal data, and others may generate data that impacts children. Where this is personal data, data protection legislation will apply in the EU and UK (and in many other jurisdictions). In the EU and UK, processing data relating to children will require compliance with the General Data Protection Regulation (GDPR) and the UK GDPR respectively. Some of the key areas where data protection legislation will play a part in regulating AI and, in particular, AI using children’s data include:

Lawful basis

When considering how children’s personal data will be used by developers and users of AI models, a key question is what lawful bases are available under the GDPR to collect and process children’s data to train a model.

Under Article 6 GDPR, the most commonly relied-upon lawful basis will be legitimate interest. However, the balancing test the controller is required to carry out will need to consider carefully the interests, fundamental rights and freedoms of children, since "children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned…" (Recital 38). Therefore, a controller will need to document in its legitimate interest assessment why its use of children’s personal data in its AI system (for example, to train the model) is not overridden by the child’s interests, including consideration of what is in the best interests of the child (drawing on the EU Charter). This means the standard for reliance on legitimate interest when processing children’s personal data is high, given the need to treat the best interests of the child as paramount.

Solely automated decision making

Recital 71 to the GDPR states that a solely automated decision which produces legal effects or similarly significantly affects an individual should not concern a child. Recitals are not, however, binding, and in its 2018 opinion on solely automated decision making, the Article 29 Working Party (A29) accepted that this did not amount to an absolute prohibition on such processing for children, since the restriction was not reproduced in the operative provisions of the GDPR itself.

A29 recommended in its opinion that, as a rule, controllers should not rely on the exceptions in Article 22(2) to justify such processing where it relates to children. Indeed, reliance on the exceptions is not straightforward given one relates to entering into a contract (a controller will need to assess whether the contract entered into with a child is valid) and another relates to explicit consent (how can a controller ensure a child is sufficiently informed about the consent they are giving to their data use in an AI system, and how can it demonstrate that consent is freely given since children are classified as vulnerable?).

This suggests a very narrow path for a controller to argue that it does not contravene Article 22 when using an AI system that makes solely automated decisions which produce a legal effect or similarly significantly affect a child. In its opinion, A29 goes on to state that children "can be particularly susceptible in the online environment and more easily influenced by behavioural advertising…the age and maturity of the child may affect their ability to understand the motivation behind [certain] marketing or the consequences." This likely reflects the position EU data protection authorities will take when assessing how AI systems can lawfully use children’s personal data in an Article 22 situation.

Special category data

If the processing of children’s data by an AI system involves special category data, the challenges are even greater in view of the limited circumstances set out under Article 9 GDPR. It will be difficult to argue that free and informed explicit consent has been obtained from a child, given children are considered to be a vulnerable group (and parental consent may additionally be needed under Article 8 GDPR). An AI developer could conceivably argue that it is entitled to rely on the basis that the personal data has been manifestly made public by the child, where the data appears, for instance, on a publicly available social media profile. However, the developer then has to navigate issues of fairness (as well as any restrictions in the social media platform’s terms about scraping data from the platform), particularly as the child may have signed up to the platform in breach of its terms of service.

The only other bases available under Article 9 GDPR are either health-related or depend on local law setting out the relevant circumstances. In other words, it is not immediately obvious how an AI developer, or a controller of AI-generated data, would identify a lawful basis to use a child’s special category data under the GDPR. The draft AI Act does envisage a further justification for processing special category data for the purpose of bias monitoring (Article 10(5)), but only for high-risk AI systems (so not for other AI systems), which would, on the face of it, extend to children’s data.

Fairness

Even if an AI developer or user can satisfy the requirement for lawfulness, it must also grapple with satisfying the requirement for fairness. Fairness requires a controller to consider the impact of the processing on children and to justify any adverse impact, to use data in a way the child would reasonably expect, and not to mislead children when their data is collected. A key aspect of fairness (linked to transparency) is ensuring that individuals are made aware of the risks, rules, safeguards and rights in relation to the processing of their personal data and how to exercise those rights (GDPR, Recital 39). In its guidance on children’s information, the UK’s ICO indicates that a controller needs to tell a child about any automated decision making and explain to them, in language they can understand, the logic involved and the significance and envisaged consequences of the processing. Given the central requirement for fairness under the GDPR and the need to reduce bias when training AI systems, this is a particularly sensitive issue.

AI, children and enforcement

To date, there has been no widespread regulatory enforcement activity involving the use of AI concerning children. In the US, the Federal Trade Commission issued an algorithmic deletion demand in early 2022 concerning a mobile app designed for use by children which violated the US Children’s Online Privacy Protection Act (USA v Kurbo Inc and WW International Inc, Case 3:22-cv-00946-TSH). The children’s data was used to train the company’s algorithms, but the company had failed to notify parents.

Separately, in October 2023, the ICO issued a preliminary enforcement notice against Snap concerning Snap’s generative AI chatbot, ‘My AI’. Snap partnered with OpenAI, building ‘My AI’ on ChatGPT to help Snapchatters with their questions. Snap is open in its guidance that My AI’s responses may include "biased, incorrect, harmful, or misleading content…and [a user] should not share confidential or sensitive information" with it. Snap is also clear that it will use user data to train My AI. In its initial investigations, the ICO considered that Snap failed to adequately identify and assess the risks to users of My AI, who include children aged 13 to 17. The ICO considered that the risk assessment Snap carried out did not adequately assess the data protection risks posed by generative AI technology, particularly to children. At the time of writing, the ICO has not issued a final enforcement notice and Snap has not made any formal public comments on the ICO’s preliminary enforcement notice.

OpenAI has also been subject to regulatory scrutiny regarding its use of personal data in ChatGPT. In March 2023, the Italian DPA, the Garante, announced an immediate ban on ChatGPT and an investigation into the GDPR compliance of its developer, OpenAI. The Garante had a number of concerns, one of which was that OpenAI did not verify the age of users, thereby exposing minors to unsuitable answers. While disagreeing with the Garante’s findings, OpenAI temporarily disabled access to ChatGPT in Italy. Other EU regulators also began to scrutinise OpenAI, and the EDPB set up a dedicated task force to foster cooperation and exchange information on possible enforcement actions by data protection authorities. The Garante subsequently lifted its ban subject to OpenAI making changes to its privacy practices, including around transparency, lawful basis and age verification for Italian users. However, in February 2024, it sent OpenAI a notice informing it of alleged breaches of data protection law relating to ChatGPT. At the time of writing, a final decision was pending.

So what?

Many would say that children were not considered when the internet was in its infancy and that regulatory authorities shouldn’t make the same mistake again with AI. The GDPR framework already includes specific safeguards which will constrain the use of those AI technologies with the most significant impact on children’s personal data, and the authorities may consider these sufficient to deter the most harmful consequences. The current UK government appears to take that view, since its White Paper explicitly relies on data protection law, among other legal frameworks (including competition, consumer protection, financial services regulation and the new online safety framework), to regulate the use of AI, negating the need for AI-specific legislation. Clearly, concepts such as privacy by design, hardwired in courtesy of the GDPR, will help to temper the impact of certain AI systems on children; although perhaps the notion of ‘safety by design’ should also be mandatory for any AI system that interacts with children.

The debate about the impact of the internet and social media on children is ongoing. As AI becomes increasingly embedded in our lives, the challenges children face in growing up won’t necessarily change; it’s just that the world they encounter from an early age is becoming more complex. Only time will tell whether the regulatory frameworks being proposed, and the law that already exists, will help provide children with a safe and enjoyable experience of AI.
