
31 July 2023

AI and games – 5 of 6 | Insights

AI, data and gaming

Laura Craig and Miles Harmsworth look at the use of personal data in AI tools used by the video games sector, and at the evolving regulatory framework.

Authors

Laura Craig

Lawyer


Miles Harmsworth

Lawyer


With players demanding more engaging and realistic experiences, the games sector has been an early adopter of AI tools. To use AI to improve player experience, AI models need to be trained, which can involve tracking millions of player interactions and data points, including those of children. This carries inherent risk in an already highly regulated area. In this article we look at how AI will impact the video games sector, the risks involved in achieving a more immersive gaming experience, and how current data protection law and incoming AI regulation aim to respond.

How will AI impact the gaming sector?

In the context of data, AI is set to impact gaming in several ways:

NPCs will come to life

Gamers will be familiar with non-player characters, or 'NPCs'. NPCs can be incredibly realistic with unique daily routines, or exist as simple enemies or 'window dressing' to add artificial life to a game's environment. To date, the use of AI for NPCs has generally been confined to two programming techniques:

  • Pathfinding – telling the NPC where it can and cannot go in the game; and
  • Finite-state machines – enabling the NPC to change its behaviour based on certain conditions, for example when enemy soldier NPCs in Metal Gear Solid converge on you once you enter their line of sight (a minimal sketch of this technique follows below).
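To make the finite-state machine idea concrete, here is a minimal sketch in Python. The states, thresholds and the GuardNPC class are our own illustrative assumptions rather than code from any particular game or engine.

```python
# Minimal finite-state machine for a guard NPC: the NPC switches
# behaviour based on simple conditions, much like the enemy soldiers
# described above. All states and thresholds here are illustrative.

class GuardNPC:
    def __init__(self):
        self.state = "patrol"  # start in the default patrol state

    def update(self, player_visible: bool, distance_to_player: float):
        """Re-evaluate the NPC's state once per game tick."""
        if self.state == "patrol":
            if player_visible:
                self.state = "alert"      # player entered line of sight
        elif self.state == "alert":
            if not player_visible:
                self.state = "patrol"     # lost sight of the player
            elif distance_to_player < 5.0:
                self.state = "attack"     # close enough to engage
        elif self.state == "attack":
            if distance_to_player >= 5.0:
                self.state = "alert"      # player retreated

# Example ticks: a visible player 3 units away pushes the guard to attack.
guard = GuardNPC()
guard.update(player_visible=True, distance_to_player=3.0)  # patrol -> alert
guard.update(player_visible=True, distance_to_player=3.0)  # alert -> attack
print(guard.state)  # attack
```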

With more granular data being fed into increasingly complex models, developments in AI will lead to new opportunities to create dynamic NPCs with varied responses, emotions and memories that make games truly immersive. This will inevitably require more personal data from real-world individuals to create dynamic AI models. We are already seeing this with SEED (a group within gaming giant EA), which is using gameplay data from its top players to train AI models that enable NPCs to simulate those players' actions.
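Training NPCs on player data in this way is often approached as behavioural cloning: a model learns to map a game-state snapshot to the action the human player took in that state. The sketch below shows the general shape of such a pipeline using scikit-learn; the features, actions and model choice are illustrative assumptions, not SEED's actual method.

```python
# Illustrative behavioural-cloning sketch: learn to predict a player's
# next action from game-state features. The features, labels and model
# choice are assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row is a snapshot of game state logged during play:
# [own_health, enemy_distance, ammo_remaining]
X = np.array([
    [0.9, 30.0, 12],
    [0.4, 8.0, 3],
    [0.8, 50.0, 12],
    [0.2, 6.0, 0],
])
# The action the human player actually took in that state.
y = np.array(["advance", "take_cover", "advance", "retreat"])

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# At runtime, the NPC queries the model to imitate the player's style.
state = np.array([[0.3, 7.0, 1]])  # low health, enemy close, little ammo
print(model.predict(state))
```

Note that every training row derives from an identifiable player's session – which is precisely why data protection law is engaged.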

To create truly believable NPCs, complex written or spoken dialogue is required. Developing this can be a time-consuming and expensive process. Consequently, there has been a proliferation of dialogue-generating AI tools which have drastically shortened the time to realisation. On 21 March 2023, Ubisoft announced an in-house AI NPC dialogue solution; Replica AI has offered 'AI voice actor' services since 2021; and Guerrilla Games employed machine learning for NPC dialogue in February 2022's blockbuster game Horizon Forbidden West, giving players tailored interactions based on in-game decision-making.
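Under the hood, tools of this kind typically condition a generative language model on character background and the player's latest choices. A minimal sketch of that pattern follows; the build_prompt structure and the generate() stub are hypothetical stand-ins for whichever model or service a studio actually licenses.

```python
# Sketch of dialogue generation conditioned on game state. The
# build_prompt structure and the generate() call are hypothetical
# stand-ins for whatever model or API a studio actually uses.

def build_prompt(npc_name: str, npc_backstory: str, player_choice: str) -> str:
    """Assemble a prompt from static character data plus the player's
    latest in-game decision, so responses track player behaviour."""
    return (
        f"You are {npc_name}. Backstory: {npc_backstory}\n"
        f"The player just chose to: {player_choice}\n"
        f"Respond in character, in one or two sentences."
    )

def generate(prompt: str) -> str:
    # Placeholder: in practice this would call a licensed language model.
    raise NotImplementedError("plug in your dialogue model here")

prompt = build_prompt(
    npc_name="Kera the blacksmith",
    npc_backstory="Gruff but fair; distrusts outsiders.",
    player_choice="return the stolen tools",
)
# dialogue = generate(prompt)
```

The data protection question is what ends up in the prompt: once real players' choices, chat or voice recordings feed the model, the pipeline is processing personal data.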

However, in response, voice actors have raised concerns around the data being fed into these models. Eurogamer reported that gaming actors were alarmed at the thought of their voices being replicated, something made possible by 'digital double' clauses in contracts which allow an actor's likeness or voice to be recreated in the future – a practice the National Association of Voice Actors has refused to back. In fact, this is one of the key issues driving the actors' strike in Hollywood.

A further issue is that EU/UK data protection law only allows a person's personal data to be used where there's a lawful basis and for purposes that have been communicated to the individual. If games developers want to expand on NPCs' capabilities and technology businesses want to offer related services such as AI dialogue services, they each need to be diligent in understanding the source of the training data and complying with applicable data protection law.

Cheat detection

Cheating is a challenge in online multiplayer games; it negatively impacts gamers by forcing them to compete against those with unfair advantages. The result is a persistent battle between cheat detectors and the cheaters themselves, each developing tools to outsmart the other. The consequences for games publishers are serious: where high levels of cheating go unpoliced, publishers can (and do) experience significant drops in player numbers, leading to a loss in game play and revenue.

Some of the traditional anti-cheat technologies are highly unpopular with gamers for being overly invasive. Denuvo anti-cheat is a key example: once it identifies a cheater, the program 'fingerprints' the offender and notifies the developer. While effective, achieving this level of precision requires kernel-level access, which in theory gives the program access to the whole of a player's machine. New AI-powered tools could take this individual analysis even further, combining the traditional approach of tracking keystrokes and background PC processes with the ability to synthesise that information and draw inferences from in-game behaviour and communications (a simple sketch of this kind of behavioural analysis follows below). Despite the potential surveillance and monitoring risks, some players think that AI-powered anti-cheat could 'save' first-person shooter games by assessing behaviour beyond the abilities of ordinary anti-cheat detectors. Indeed, we are seeing the beginning of this with companies such as Anybrain already producing AI-enabled programs to combat cheating, including the tracking of 'mental fatigue and stress'.
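To illustrate the kind of behavioural signal such tools compute, the sketch below flags accounts whose reaction times are implausibly fast and uniform. The thresholds and the looks_like_aimbot heuristic are invented for demonstration and are not any vendor's actual detection logic; real systems would combine many such signals, which is exactly where the surveillance concern arises.

```python
# Illustrative anti-cheat heuristic: flag accounts whose reaction times
# between an enemy appearing and a hit are implausibly fast and uniform.
# Thresholds are invented for demonstration; real systems combine many
# such signals (and that breadth is exactly the privacy concern).
from statistics import mean, stdev

def looks_like_aimbot(reaction_times_ms: list[float]) -> bool:
    if len(reaction_times_ms) < 20:
        return False  # not enough evidence; avoid false positives
    avg = mean(reaction_times_ms)
    spread = stdev(reaction_times_ms)
    # Human reaction times are rarely below ~150ms and are naturally noisy.
    return avg < 150 and spread < 15

human = [230, 310, 190, 280, 260] * 4  # varied, plausible timings
bot = [120, 118, 122, 119, 121] * 4    # fast and eerily consistent
print(looks_like_aimbot(human))  # False
print(looks_like_aimbot(bot))    # True
```

Each reaction-time series is keyed to a player account, so even this simple telemetry is personal data subject to transparency and minimisation duties.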

If developers persist with AI-enabled anti-cheat programs, they will need to balance the need to keep the details of the program confidential with the need to comply with their transparency requirements under data protection law. At present, an easy workaround would be to inform individuals that their personal data is being "used to train the AI anti-cheat software", but with regulators seemingly demanding ever more granularity for each decision, the challenge of providing meaningful transparency without negating the efficacy of the software will grow. Ultimately, to maintain and build trust, developers and publishers should ensure that there is transparency around algorithms by clearly explaining how AI-driven mechanics (including any AI-powered cheat detection) operate within the game. They also need to ensure they process the minimum amount of data required, and that they do not retain the data for longer than necessary to achieve their purpose.
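Minimisation and storage limitation can be designed into the telemetry pipeline itself. Below is a minimal sketch of that idea; the field names, the salted-hash pseudonymisation and the 30-day retention window are illustrative assumptions, not a recommended or legally assessed configuration.

```python
# Sketch of data minimisation and storage limitation applied to
# anti-cheat telemetry. Field names and the 30-day retention window
# are illustrative assumptions, not legal advice or a real schema.
import hashlib
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # keep events only as long as needed

def minimise(event: dict) -> dict:
    """Keep only the fields needed for cheat detection, and pseudonymise
    the account identifier before storage."""
    return {
        # salted hash instead of the raw account ID
        "player": hashlib.sha256(b"salt" + event["account_id"].encode()).hexdigest(),
        "reaction_ms": event["reaction_ms"],
        "timestamp": event["timestamp"],  # assumed timezone-aware datetime
        # deliberately dropped: IP address, chat logs, hardware details
    }

def purge_expired(store: list[dict]) -> list[dict]:
    """Delete events older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [e for e in store if e["timestamp"] >= cutoff]
```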

Player profiles

Linked to cheat detection is more general player analysis. Exactly how many companies are doing this and to what extent is unclear, but Activision, EA and Epic set out in their licence agreements that user data will be captured. Middleware analytics tools from providers such as Unity, GameAnalytics or AWS are on the rise, aiming to encourage user engagement via large-scale analysis – including the use of AI to maximise player retention and spend.

Several operators in the industry are exploring ways to analyse players' in-game actions to enhance their revenue. The Financial Times has reported that Meta has obtained several patents linked to the collection of biometric data from VR headsets to analyse the effectiveness of advertising. AI-powered tracking tools are now capable of analysing players and drawing correlations between gameplay data (including the ability to perform repetitive tasks, user inputs, and time spent in-game) and personality attributes; algorithms fed with this player data can assess user attributes and, potentially, increase monetisation and advertising opportunities at the individual player level.
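As a hedged illustration of this kind of profiling, the sketch below derives simple behavioural features from a session log and combines them into a crude 'spend propensity' score. Every feature, weight and the score itself are invented assumptions; the point is how quickly innocuous-looking telemetry becomes an inferred profile about an individual.

```python
# Illustrative player-profiling sketch: turn raw session events into
# behavioural features and an inferred score. All features, weights and
# the score itself are invented; real systems are far more elaborate,
# which is precisely the transparency problem discussed above.

def profile_player(session: dict) -> dict:
    features = {
        "session_hours": session["minutes_played"] / 60,
        "actions_per_min": session["inputs"] / session["minutes_played"],
        "store_visits": session["store_opens"],
    }
    # Hand-tuned illustrative weights standing in for a trained model.
    spend_propensity = (
        0.2 * features["session_hours"]
        + 0.05 * features["actions_per_min"]
        + 0.5 * features["store_visits"]
    )
    features["spend_propensity"] = round(spend_propensity, 2)
    return features

session = {"minutes_played": 90, "inputs": 5400, "store_opens": 3}
print(profile_player(session))
# {'session_hours': 1.5, 'actions_per_min': 60.0, 'store_visits': 3,
#  'spend_propensity': 4.8}
```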

The key issue here is that player data is now being collected and analysed to an advanced degree with the aim of driving in-game purchases, playtime and other monetisation mechanisms. To provide transparency, users should be made aware of this practice so that they can make an informed decision on how they engage with these AI analytics processes. If publishers rely on legitimate interests to collect and process personal information for these purposes, they will have to carry out a Legitimate Interests Assessment (LIA) to balance their own interests against the rights and freedoms of the individuals concerned. They may struggle to demonstrate that a player's rights and freedoms do not override their own legitimate interests where personal data is processed in a manner the player did not 'reasonably expect'. Would you as a gamer, by way of example, 'reasonably expect' each unique keystroke to be logged and tracked to supercharge AI-driven advertising potential or to develop new game functionality? Transparency is one of the primary principles in European and UK data protection law and also features in the UK government's AI principles, so it's important to document the use of AI which uses personal data in a privacy policy in a way individuals can understand – something of a challenge.

Children and young gamers

Games have always been popular with young people, with Roblox, Minecraft and Fortnite exploding in prominence in recent years. Children represent a key market in the industry, with 82% of children in the UK aged 12-15 playing online games in 2022. Children now spend significant amounts of time in multiplayer or online games, yet the gaming sector appears to have raised fewer privacy concerns than social media or streaming outlets, despite the possible vulnerabilities that young people present – a particular area of risk is the use of AI chatbots to communicate with children in-game.

Scrutiny of how the games industry protects children is on the rise, so when it comes to the interaction between children's data and AI, games businesses should proceed with caution. We've already seen Chinese tech conglomerate Tencent announce that it would comply with a Chinese directive to incorporate facial recognition into games to restrict screentime, in line with strict gaming regulation in China which imposes time limits on how long minors can play (supposedly to curb addiction in children). There would be no possibility of doing this under current EU or UK law.

In the UK, the ICO has been in discussion with the gaming sector to facilitate and examine compliance with the Children's Code, setting out the need for data minimisation, privacy by design, appropriate governance and taking a risk-based approach with minor players. With the UK's sector-specific approach to AI regulation, it will be vital to ensure that the use of AI using or generating personal data in the UK (particularly where children are involved) is in line with data protection law and ICO guidance.

Regulating personal data in AI

Regulation in general has struggled to keep up with the frantic pace of change within the video games sector. The use of personal data is heavily regulated internationally, but with little consensus between jurisdictions. This is amplified when you take into account the developing approach to regulating AI, which is also fragmented. For example, while the EU is developing top-down legislation to regulate AI, the UK favours a sector-based approach whereby individual regulators use existing law to regulate AI. This means that in the UK, the ICO will be heavily involved in regulating AI which uses or generates personal data.

The underlying principles which the UK government is proposing to require regulators to take into account overlap heavily with the UK GDPR principles. Given the issues around fairness, transparency and explainability, lawful basis and data security – not to mention the fact that many AI tools will process special category data (eg biometric data) – games developers and publishers using AI in games need to consider data protection every step of the way: implementing the (UK) GDPR principle of data protection by design and default, putting in place appropriate privacy policies, data processing agreements and, where necessary, data transfer arrangements, and ensuring they can demonstrate accountability. A data protection impact assessment will also likely be required before using AI which processes or generates personal data. To add to the complexity, there are also tensions between using AI and data protection – for example, around data minimisation. You can read more about the UK's approach to data and AI here but, of course, this is not an issue peculiar to the UK – any country which has data protection law will be looking at the way in which AI models process personal data, regardless of, or in addition to, AI-specific regulation.

The different approaches to regulating AI and data protection are evident by taking just a few examples:

  • The EU introduced the strictest data regulations in the world with the GDPR in 2018, and will be introducing specific legislation to regulate AI which primarily targets 'high risk' AI systems.
  • The UK has the UK GDPR, with a planned new Data Protection Bill aimed at reducing compliance burdens on businesses. In terms of AI regulation, the UK is taking a pro-innovation approach to allow different regulators to issue guidance and practice codes for specific sectors.
  • The US has a patchwork approach to data regulation, with various state and federal laws. It is also progressing AI governance measures, including the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the Blueprint for an AI Bill of Rights, and existing laws and regulations applicable to AI systems (including Federal Trade Commission oversight).
  • China implemented the Personal Information Protection Law in 2021, aimed at preventing misuse of personal information. China is also leading the way with AI legislation, including regulation of recommendation algorithms and chatbots, and recently passed stringent security requirements for generative AI.

The sheer diversity of approaches is a major challenge for the games sector given its international nature. Considerations are also likely to go beyond data privacy and AI-specific legislation, for example engaging online safety rules under the UK's Online Safety Bill (once it comes into force) and the EU's Digital Services Act.

Balancing challenges and opportunities

The intersection between data, gaming and AI presents fresh challenges for games developers and publishers, but it also creates new opportunities to create compelling experiences for players by making NPC behaviour believable or eliminating cheats from competition. As summarised by GameDesigning.org, “the goal of AI is to immerse the player as much as possible, by giving the characters in the game a lifelike quality, even if the game itself is set in a fantasy world.”

An emerging technology, AI is set to become exponentially more sophisticated in coming years and regulators will maintain scrutiny. Businesses operating in the interactive entertainment sector which have already implemented effective privacy compliance processes will be best placed to ensure that the adoption of AI can be appropriately balanced with the requirements and rights under data protection law. Keeping up to speed with incoming regulation - and integrating this with privacy compliance programmes - will be crucial.
