3 March 2025
Artificial intelligence (AI) has become a cornerstone of modern business operations, offering solutions that range from automating routine tasks to providing deep insights through data analysis. But integrating AI systems requires a thorough understanding of data protection laws to ensure compliance and maintain the trust of stakeholders.
To implement AI effectively while complying with data protection regulations, businesses must understand key legal and technical terms.
The EU AI Act (Article 3(1)) defines an AI system as: "A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This broad definition ensures that AI regulation remains flexible to technological advancements while focusing on systems with decision-making capabilities.
Under the General Data Protection Regulation (GDPR), personal data refers to any information relating to an identified or identifiable natural person. This includes both direct identifiers (eg names, addresses) and indirect identifiers (eg IP addresses, behavioural data).
Deploying AI introduces complex data protection risks, and businesses must navigate several challenges, from handling large datasets to ensuring transparency and compliance with legal frameworks.
In addition to managing these challenges, businesses must comply with the AI Act. Multiple AI applications and systems are prohibited under Article 5 of the AI Act, including systems that deploy manipulative or deceptive subliminal techniques, exploit the vulnerabilities of specific groups, perform social scoring, infer emotions in workplaces or educational institutions, carry out untargeted scraping of facial images, or conduct real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
High-risk AI systems and AI systems with transparency requirements are not prohibited per se; however, businesses using them must comply with specific stipulations of the AI Act.
Understanding these legal boundaries is essential for businesses using AI, especially in customer interactions or employee monitoring.
Businesses must implement clear strategies to minimise risks, enhance transparency, and uphold individuals' rights.
Before deploying AI systems that process personal data, businesses should perform a DPIA under Article 35 of the GDPR to identify and mitigate potential risks to data subjects. This assessment helps businesses to understand the data flows and implement necessary safeguards.
Example: before using AI for employee performance evaluations, a business should assess how the system processes personal data and what impact it may have on employees' privacy.
Only necessary personal data should be processed, and only for specific, predefined purposes (Article 5(1)(b) and (c) of the GDPR).
Example: a company using AI-powered customer support chatbots must ensure that chat transcripts are not used for unrelated purposes, such as targeted advertising, unless explicit consent is obtained.
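One way to enforce purpose limitation in practice is to tag stored records with the purposes they may lawfully be used for and filter on those tags before any downstream processing. The sketch below is illustrative only; the record structure and function names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ChatRecord:
    """A stored chat transcript tagged with the purposes it may be used for."""
    transcript: str
    allowed_purposes: set = field(default_factory=lambda: {"customer_support"})

def records_for_purpose(records, purpose):
    """Return only records whose tags permit the requested purpose,
    so each pipeline sees no more data than its purpose allows."""
    return [r for r in records if purpose in r.allowed_purposes]

records = [
    ChatRecord("How do I reset my password?"),
    # Second record: the customer gave explicit consent to advertising use.
    ChatRecord("Show me running shoes", {"customer_support", "advertising"}),
]

# Support tooling sees both transcripts; the advertising pipeline sees only one.
support = records_for_purpose(records, "customer_support")
ads = records_for_purpose(records, "advertising")
```

The key design choice is that the purpose check happens at the data-access layer, so a new pipeline added later cannot silently reuse transcripts for a purpose the customer never agreed to.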
Businesses should inform individuals how their data is used in AI processes, including the logic involved and the potential consequences, and ensure AI decisions are explainable to users. This includes informing users when they are interacting with AI (Article 50 of the AI Act) and providing explanations of how AI decisions are made.
Example: an AI-driven product recommendation tool in a web shop should be disclosed to customers and the online retailer should explain how the customers’ browsing data influences recommendations.
AI systems that make automated decisions affecting individuals' rights must allow for human intervention (Article 22 of the GDPR). The human reviewer must have the authority to correct unjust AI-driven decisions.
Example: in hiring processes, an AI résumé screening system should not autonomously reject candidates; a human recruiter must review decisions before finalisation.
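The screening example above can be sketched as a simple decision gate: the AI may shortlist candidates on its own, but any adverse outcome is routed to a human reviewer with the authority to override it. Names and thresholds below are hypothetical:

```python
from enum import Enum

class Decision(Enum):
    ADVANCE = "advance"
    REJECT = "reject"

def screen_candidate(ai_score: float, human_review) -> Decision:
    """The AI may advance strong candidates autonomously, but it never
    rejects anyone on its own: adverse recommendations go to a human
    reviewer who can override them (Article 22 of the GDPR)."""
    if ai_score >= 0.7:
        return Decision.ADVANCE
    # Never auto-reject: hand the AI's recommendation to a recruiter.
    return human_review(ai_score)

# A recruiter callback that overrides borderline AI recommendations.
recruiter = lambda score: Decision.ADVANCE if score > 0.6 else Decision.REJECT
```

With this shape, the human reviewer is not a rubber stamp: the callback receives the AI's assessment and can reach the opposite outcome, which is what Article 22 requires of meaningful human intervention.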
The GDPR grants individuals a set of data subject rights (Articles 12–22 of the GDPR) that businesses must uphold when implementing AI systems.
Right to access and transparency (Articles 12 and 15 of the GDPR)
Individuals have the right to request access to their personal data processed by AI systems. Businesses can respond by providing a copy of the personal data concerned, information on the purposes of processing and the categories of data involved, and, where automated decision-making takes place, meaningful information about the logic involved (Article 15(1)(h) of the GDPR).
Right to rectification (Article 16 of the GDPR)
If AI-driven decisions rely on inaccurate or outdated data, individuals must be able to request corrections. Businesses can establish correction workflows that update the underlying records and ensure the corrections are propagated to the AI systems that rely on them.
Right to erasure (Article 17 of the GDPR)
AI models must enable the deletion of personal data upon request, which can be technically challenging when data has been used in model training. Solutions include keeping training data traceable so that an individual's records can be located and removed, retraining or updating models on the cleaned dataset, and anonymising personal data before training so that it no longer relates to an identifiable person.
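The traceability approach can be sketched as follows: every training record carries a subject identifier, so an erasure request removes all of that subject's records before the model is retrained. The `retrain` stub and record layout are hypothetical stand-ins for a real training pipeline:

```python
def retrain(dataset):
    """Stand-in for the real training pipeline; retraining on the
    cleaned dataset is what makes the erasure effective in the model."""
    pass

def erase_subject(training_data, subject_id):
    """Drop every record belonging to one data subject, then retrain
    so the erased data no longer influences the model's outputs."""
    cleaned = [r for r in training_data if r["subject_id"] != subject_id]
    retrain(cleaned)
    return cleaned

training_data = [
    {"subject_id": "u1", "text": "order history"},
    {"subject_id": "u2", "text": "support chat"},
    {"subject_id": "u1", "text": "browsing log"},
]
cleaned = erase_subject(training_data, "u1")
```

Without the per-subject identifier, locating "u1" inside an already-trained model is far harder, which is why traceable training data is usually the prerequisite for honouring erasure requests at all.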
Right to restriction of processing (Article 18 of the GDPR)
Individuals may request that their data not be used in AI processing under certain circumstances. Businesses can implement data flagging systems that tag restricted data, ensuring it is excluded from AI model updates, and use privacy-enhancing technologies such as zero-knowledge proofs to confirm data restrictions without exposing personal information.
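A minimal data-flagging mechanism of the kind described above might look like this: restricted subjects are recorded in a flag set, and every model-update batch is filtered against it. The names are illustrative:

```python
RESTRICTED = set()  # subject IDs whose processing is currently restricted

def restrict(subject_id):
    """Flag a subject after an Article 18 restriction request."""
    RESTRICTED.add(subject_id)

def training_batch(records):
    """Build a model-update batch that excludes restricted subjects,
    so flagged data never reaches the next training run."""
    return [r for r in records if r["subject_id"] not in RESTRICTED]

records = [{"subject_id": "u1"}, {"subject_id": "u2"}]
restrict("u2")
batch = training_batch(records)
```

Because the filter sits in the batch-construction step, the restriction takes effect for all future model updates without touching the model itself.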
Right to object to automated decision-making (Article 22 of the GDPR)
AI-driven decisions that significantly impact individuals require human oversight and contestability mechanisms. Businesses should therefore design AI workflows that enable human intervention, ensure the auditability of AI-driven decisions so that affected individuals can review how their data was used, and provide an opt-out option for individuals who do not want their data used for AI-based profiling or decision-making.
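The opt-out and auditability measures can be combined in one decision entry point: opted-out subjects are routed to a human, and every automated decision that is taken is logged with its inputs and outcome so it can later be reviewed and contested. This is a hypothetical sketch; the model callback and field names are assumptions:

```python
from datetime import datetime, timezone

AUDIT_LOG = []     # auditable trail of automated decisions
OPTED_OUT = set()  # subjects who objected to automated decision-making

def automated_decision(subject_id, features, model):
    """Route opted-out subjects to a human reviewer; log every
    automated decision that is actually taken."""
    if subject_id in OPTED_OUT:
        return "routed_to_human"
    outcome = model(features)
    AUDIT_LOG.append({
        "subject": subject_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": features,
        "outcome": outcome,
    })
    return outcome

# Illustrative model: a toy credit check on a single feature.
credit_model = lambda f: "approved" if f["income"] > 30000 else "referred"
OPTED_OUT.add("u2")
```

Logging inputs alongside outcomes is what makes the decision contestable: an affected individual (or an auditor) can see exactly which data drove the result.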
By implementing robust technical and organisational measures to protect personal data processed by AI systems from unauthorised access, alteration, or loss, businesses can comply with GDPR standards. Businesses should continuously monitor AI systems for compliance and update them when necessary to address emerging risks.
Example: encrypt data both in transit and at rest within AI applications to safeguard against breaches.
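Encryption at rest can be sketched with symmetric encryption; the example below assumes the widely used third-party `cryptography` package, and in practice the key would be held in a secrets manager, not in code:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production, load from a secrets manager
fernet = Fernet(key)

# Encrypt personal data before writing it to storage...
ciphertext = fernet.encrypt(b"chat transcript with personal data")

# ...so only holders of the key can recover the plaintext.
plaintext = fernet.decrypt(ciphertext)
```

For data in transit, the equivalent measure is enforcing TLS on every connection between the AI application and its data stores.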
Integrating AI into business operations offers significant advantages but requires careful attention to data protection compliance. By understanding key concepts, recognising potential challenges, and implementing practical measures, businesses can harness the power of AI while respecting individuals' privacy rights.