The AI Act and its impact on European data protection
I. Introduction
The European Union’s AI Act represents a pioneering legislative effort to regulate Artificial Intelligence (AI) in Europe. As part of a larger legislative framework, it aims to address the expected widespread adoption of AI technologies, which could significantly transform various sectors, including healthcare, education and business. The AI Act is designed to coexist with existing regulations, in particular the General Data Protection Regulation (GDPR), and establishes a robust framework that prioritizes data protection and the ethical use of AI. This article briefly describes the intersection between the AI Act and the GDPR, highlighting the implications for data protection, ethical considerations and future research directions.
The threshold questions are: what is AI, and how does it work? EU legislation does not provide a single definition of what an AI specifically is; rather, definitions are provided in different pieces of legislation to fit the scope and purpose of each instrument. Technically, an AI system can be understood as a set of algorithms that operate on data and are trained to respond to inputs with automated outputs by categorizing the data they have acquired during the machine learning (ML) phase.
The ML process is a long stage in the formation of an artificial intelligence and, philosophically, one that never ends. ML processes change and vary from AI system to AI system, but one element remains constant: the AI is fed data to ensure its functioning. The nature of that data differs from system to system, but it is still data. The matter becomes even more complicated when it comes to personal data, which is subject to the GDPR.
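To make the ML phase concrete, the following is a minimal, illustrative sketch of supervised learning: a model is fed labelled data, extracts patterns from it, and then produces automated outputs for new inputs. The library used (scikit-learn) is a common choice, but the dataset, features and labels here are entirely hypothetical.

```python
# Minimal sketch of the "ML phase": a model is fed labelled data and
# learns to respond to inputs with automated outputs.
# The data below is hypothetical and purely illustrative.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is an input, each label a category.
X = [[0.1, 0.9], [0.8, 0.2], [0.4, 0.6], [0.9, 0.1]]
y = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                      # the "learning" step: patterns extracted from data
print(model.predict([[0.2, 0.8]]))   # automated output for an unseen input
```

However simplified, the sketch shows the constant the text identifies: without data being fed in at the fit step, the system does not function.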
II. GDPR: A brief overview
The European Union’s General Data Protection Regulation (GDPR) aims to create an operational framework for data governance, imposing obligations on data controllers and processors while strengthening the rights of European data subjects. What makes the GDPR, in theory, a suitable piece of legislation to protect the privacy and security of data subjects is its extraterritoriality: Article 3 of the GDPR allows the regulation to apply even if the processor or controller, or both, are located outside the territory of the European Union. The underlying principle is the targeting of operations at the European Union as a whole, or at a Member State. Such targeting is evidenced, for example, by a website’s use of a currency such as the euro or of an official language of the European Union, or by the offering of goods or services to persons in the European Union.
The GDPR pays particular attention to profiling, automated decision-making and biometric data, among other practices, all of which revolve around one main principle: data minimization. The GDPR guarantees data subjects that the amount of personal data collected will be kept to a minimum, so that only the information necessary for the stated purpose is collected. Data controllers are in fact responsible for documenting why the collection of particular personal data is necessary.
While on paper the GDPR appears to be a standard-setting instrument that could anchor a global framework, experts take a more critical view. In particular, the EU’s Fundamental Rights Agency (FRA) criticizes data protection authorities (DPAs) for their lack of transparency and resources, as well as for their lack of independence from the influence of local governments and powerful companies. In addition, the European Data Protection Board (EDPB) should strengthen its cooperation with the many DPAs in order to carry out certain tasks, such as those listed in Article 57(1)(h) and Article 58(1), which the DPAs cannot fully perform due to a lack of procedural and technical tools. Finally, limited awareness of the DPAs’ powers leads to fewer breach reports by data controllers and processors, and the authorities themselves often have an insufficient understanding, and exercise insufficient oversight, of modern technology, especially AI systems.
Overall, while experts and practitioners consider the GDPR broadly adequate, many challenges lie ahead, especially given the expected massive deployment of AI systems. The GDPR is a complex instrument, consisting of 11 chapters preceded by 173 recitals. This section breaks down its main features: its material and territorial scope (Articles 2 and 3), the principles behind data processing (Chapter 2), the main rights of data subjects (Chapter 3) and the authorities behind data governance in the EU (Chapter 6). This description of the GDPR will help explain how data governance is dealt with in the AI Act.
III. GDPR and AI
The GDPR is the main instrument in the European Union that governs data, protects the rights of data subjects, and imposes obligations on data controllers and data processors. Because artificial intelligence is inherently tied to data, including personal data, the AI Act is not the only legislation that comes into play when examining the legislative framework for AI systems. The GDPR imposes strict general obligations of data minimization and necessity, but it also imposes specific obligations on organizations using machine learning algorithms, particularly regarding automated decision-making that significantly affects users. The requirements of transparency and the right to explanation align closely with the objectives of the AI Act, which seeks to regulate high-risk AI systems. This intersection presents both challenges and opportunities for innovative industries aiming to adhere to strict data protection law while fostering ethical AI usage.
In addition, the GDPR significantly affects the use of black-box AI models, particularly in sectors such as public administration, the judicial system and healthcare, where the need for explainable AI has become paramount and transparency and traceability are vital for establishing trust. The AI Act complements the GDPR by establishing a regulatory framework that mandates robustness and explainability in AI systems, especially in sensitive domains like healthcare, where ethical considerations are critical.
The path to the development of AI systems and data analytics services necessarily passes through adequate GDPR compliance, and must reflect the following principles (a brief illustrative sketch follows the list):
- lawfulness (legitimacy)
- fairness (objectivity)
- transparency
- purpose limitation
- data minimization
- accuracy
- integrity and confidentiality/protection
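In engineering terms, these principles translate into concrete design decisions. The following is a minimal sketch of how data minimization and purpose limitation might be applied at the point of collection: only the fields necessary for a declared purpose are accepted, and everything else is discarded. The purposes and field names are hypothetical, not drawn from any legal text.

```python
# Minimal sketch of data minimization and purpose limitation at collection time.
# Purposes and field names are hypothetical, for illustration only.
ALLOWED_FIELDS = {
    "account_creation": {"email", "display_name"},
    "age_verification": {"date_of_birth"},
}

def collect(purpose: str, submitted: dict) -> dict:
    """Keep only the fields necessary for the declared purpose."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No declared lawful purpose: {purpose}")
    # Data minimization: drop everything not needed for this purpose.
    return {k: v for k, v in submitted.items() if k in allowed}

record = collect("account_creation",
                 {"email": "a@example.org", "display_name": "A", "phone": "123"})
print(record)  # {'email': ..., 'display_name': ...} — 'phone' is never stored
```

The design choice matters: the unnecessary field is rejected before storage, so the controller never has to justify, secure or erase data it should not have collected in the first place.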
But AI systems often rely on enormous amounts of data to learn and make decisions (Data Bulimia). This data may include personal information, which raises privacy and security concerns: because AI systems collect and process personal data, compliance with data protection law such as the GDPR must be ensured. AI algorithms can also perpetuate biases present in the data they are trained on, leading to discriminatory outcomes. AI systems, especially those making critical decisions, should be transparent and explainable to ensure accountability. And AI systems may be vulnerable to cyber-attacks that compromise personal data and cause significant harm. In essence, AI is a tool that can be used to process and analyze data, while data protection is the framework that governs the collection, processing and storage of personal data.
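The bias concern can be made measurable. Below is a hedged sketch of one common check, demographic parity: comparing a system's positive-outcome rate across protected groups. The groups, outcomes and tolerance threshold are hypothetical, and real fairness auditing involves far more than this single metric.

```python
# Illustrative sketch of a demographic-parity check on automated decisions.
# Groups, outcomes and the tolerance threshold are hypothetical.
from collections import defaultdict

decisions = [  # (protected_group, automated_decision: 1 = favorable)
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # here: {'A': 0.75, 'B': 0.25}

# A large gap in favorable-outcome rates is a signal to investigate the
# training data for embedded bias (not, by itself, proof of discrimination).
if max(rates.values()) - min(rates.values()) > 0.2:  # hypothetical tolerance
    print("Warning: outcome disparity across groups exceeds tolerance")
```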
To ensure that AI is developed and used responsibly, it is crucial to consider both technological advances and ethical implications. Striking a balance between innovation and protection is essential to harness the benefits of AI, while mitigating its potential risks.
IV. High-Risk AI Systems
The AI Act categorizes certain AI systems as ‘high risk’, subjecting them to stricter regulatory requirements. For a more comprehensive analysis, consider the following:
1. Specific examples of high-risk AI systems:
Biometric identification systems: Facial recognition, fingerprint scanning and iris recognition systems.
Critical infrastructure systems: AI systems used to control critical infrastructure such as power grids, transportation systems and healthcare systems.
Human resource management systems: AI systems used for recruitment, hiring and performance evaluation.
Law enforcement systems: AI systems used for predictive policing, risk assessment and facial recognition in surveillance.
2. Key requirements for high-risk AI systems (see the sketch after this list):
Risk assessment: Conduct thorough risk assessments to identify and mitigate potential harms.
Data quality and quantity: Ensure that the data used to train and operate the AI system is accurate, relevant and representative.
Human oversight: Implement robust human oversight mechanisms to monitor the AI system’s decisions and intervene when necessary.
Transparency and accountability: Develop AI systems that are transparent and can explain their decision-making processes.
Robustness and cybersecurity: Ensure that the AI system is resilient to attacks and can maintain its performance under different conditions.
Non-discrimination and fairness: Design and develop AI systems that are free from bias and discrimination.
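As a rough illustration of how the human-oversight and transparency requirements above might surface in code, the sketch below routes low-confidence automated decisions to a human reviewer and records every decision for later audit. The confidence threshold, record fields and review queue are assumptions for illustration; the AI Act prescribes no particular implementation.

```python
# Rough sketch: human oversight plus an audit trail for a high-risk AI system.
# Threshold, record fields and the review queue are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.9  # hypothetical: below this, a human decides

@dataclass
class DecisionRecord:
    subject_id: str
    model_output: str
    confidence: float
    decided_by: str  # "model" or "human"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: list[DecisionRecord] = []
human_review_queue: list[DecisionRecord] = []

def decide(subject_id: str, model_output: str, confidence: float) -> DecisionRecord:
    decided_by = "model" if confidence >= CONFIDENCE_THRESHOLD else "human"
    record = DecisionRecord(subject_id, model_output, confidence, decided_by)
    audit_log.append(record)              # every decision is documented
    if decided_by == "human":
        human_review_queue.append(record)  # escalated: a person confirms or overrides
    return record

decide("subj-001", "approve", 0.97)  # automated, but logged
decide("subj-002", "reject", 0.62)   # escalated to human review, and logged
```

The point of the sketch is structural: oversight is not a report written after the fact but a path built into the decision flow itself, with the audit trail as a by-product of every decision.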
3. The role of data protection officers (DPOs):
Monitoring compliance: DPOs play a critical role in monitoring compliance with both the AI Act and the GDPR.
Data Protection Impact Assessments (DPIAs): DPOs can assist in conducting DPIAs for high-risk AI systems.
Data subject rights: DPOs can assist in ensuring that data subject rights are respected, including the right to access, rectify and erase personal data.
Incident response: DPOs can be involved in responding to data breaches and security incidents involving AI systems.
V. Addressing Obligations: AI Act and GDPR
The AI Act does not directly set out specific provisions for obtaining consent, as the GDPR does. Instead, it relies heavily on the GDPR’s existing framework for data protection. However, the AI Act emphasizes that high-risk AI systems, particularly those that process personal data, must be designed and implemented in a way that complies with the principles enshrined in the GDPR, including the requirement for valid consent. For AI systems that involve biometric data (such as facial recognition data) or health data, consent must be explicit, since sensitive personal data is involved. The AI Act reinforces the GDPR by requiring high-risk AI systems to operate transparently and to be subject to strict documentation, meaning that system providers must be able to demonstrate that they have obtained proper consent from data subjects.
The AI Act extends the rights to access, rectify and erase personal data to AI systems. Under the GDPR, data subjects have the right to request access to, correction of, and erasure of personal data under certain conditions (for example, if the data is no longer needed for the original purpose or if consent is withdrawn). Where data subjects are also users of high-risk AI systems, they must accordingly be able to access, correct and request erasure of their data, and the AI Act imposes the corresponding obligations on the providers of those systems.
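To ground these rights in practice, here is a minimal sketch of how a provider might expose access, rectification and erasure over its own records. The in-memory store, field names and subject identifiers are hypothetical; a real system would add authentication, logging and retention rules.

```python
# Minimal sketch of data subject rights (access, rectification, erasure)
# over an in-memory store. Store layout and identifiers are hypothetical.
personal_data: dict[str, dict] = {
    "subject-42": {"email": "old@example.org", "city": "Lisbon"},
}

def access(subject_id: str) -> dict:
    """Right of access: return a copy of everything held on the subject."""
    return dict(personal_data.get(subject_id, {}))

def rectify(subject_id: str, corrections: dict) -> None:
    """Right to rectification: overwrite inaccurate fields."""
    personal_data.setdefault(subject_id, {}).update(corrections)

def erase(subject_id: str) -> None:
    """Right to erasure: delete all personal data held on the subject."""
    personal_data.pop(subject_id, None)

rectify("subject-42", {"email": "new@example.org"})
print(access("subject-42"))
erase("subject-42")
print(access("subject-42"))  # {} — nothing remains after erasure
```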
The GDPR sets out transparency requirements for automated decision-making and profiling, requiring that the data subject be informed of how a decision was made and that the logic behind the decision be explained. The AI Act extends this with the idea of algorithmic transparency, building on the GDPR’s transparency and accountability requirements: the regulation mandates that individuals be informed when they interact with AI systems.
This requirement ensures that data subjects are aware of when and how AI systems are using their personal data, mirroring the GDPR’s accountability and transparency principles. In addition, the AI Act requires the algorithms and decision-making processes of high-risk AI systems to be documented. This documentation must be available to regulators at all times and, in certain cases, to the public. The onus is on AI providers to ensure that their systems are designed with transparency in mind and can explain their decision-making logic in a way that individuals can understand.
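One hedged way to picture this documentation duty is a per-decision record that captures the inputs, the system version and human-readable reasons, kept ready for regulators. All field names below are assumptions for illustration; the AI Act prescribes no particular schema.

```python
# Illustrative sketch of per-decision documentation for a high-risk AI system.
# Field names and the example system are hypothetical.
import json

def document_decision(model_version: str, inputs: dict,
                      output: str, reasons: list[str]) -> str:
    """Produce a retention-ready record explaining one automated decision."""
    record = {
        "model_version": model_version,  # which system produced the decision
        "inputs": inputs,                # what data the decision was based on
        "output": output,                # the decision itself
        "reasons": reasons,              # human-readable decision logic
    }
    return json.dumps(record, indent=2)

print(document_decision(
    model_version="credit-scorer-1.4",  # hypothetical system name
    inputs={"income_band": "B", "tenure_years": 3},
    output="declined",
    reasons=["income band below policy threshold", "tenure under 5 years"],
))
```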
The AI Act further expands on profiling and automated decision-making, as these are crucial and current issues for artificial intelligence. Article 22 of the GDPR permits automated decision-making but gives data subjects the right not to be subject to a decision based solely on automated processing, including profiling, where that decision produces legal effects concerning them or similarly significantly affects them.
This right does not apply where the decision is necessary for entering into, or the performance of, a contract between the data subject and a data controller, where it is authorized by Union or Member State law to which the controller is subject, or where the data subject has given explicit consent. In the contract and consent cases, the data controller must implement suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests, including at least the right to obtain human intervention, to express his or her point of view and to contest the decision.
The AI Act regulates profiling and automated decision-making specifically in the case of high-risk AI systems. Under Article 5 of the Act, AI systems that make risk assessments of natural persons in order to predict the likelihood of a criminal offence based solely on profiling are prohibited. Article 6, however, classifies other AI systems that perform profiling of natural persons as high-risk. With regard to automated decision-making, Article 6 likewise treats AI systems used for automated decision-making in the listed areas as high-risk, and Article 86 further regulates them by extending the obligation to provide an explanation for automated decisions where they produce legal effects or otherwise significantly affect a person’s health, safety or fundamental rights. The explanation must be provided by the deployer of the system, and the right does not apply where exceptions or limitations to the obligation follow from Union or national law in conformity with Union law. As with the GDPR, non-compliance can result in penalties. Finally, the AI Act strengthens these protections by requiring providers to implement systems that allow users to understand and challenge AI-driven decisions, preserving the right to human oversight and decision correction.
VI. Considerations on AGI
Artificial General Intelligence (AGI), still hypothetical but, in the view of some researchers, drawing closer to reality, is the next step in AI development. AGI denotes a stage at which the system becomes independent of input and machine learning becomes an automated, continuous process, akin to the way humans perceive reality and learn. Although a legal definition of AGI is still in the making, a Californian court may soon provide one in the lawsuit brought by Elon Musk against OpenAI, which asks whether GPT-4 models exhibit behavior amounting to AGI.
The definition of AGI will be crucial in helping policymakers and legal scholars identify areas of non-compliance with existing laws, or determine which legal instruments should be used to regulate such systems. One thing is certain: while experts’ definitions and theories may differ, they agree that AGI will be drastically different from today’s artificial intelligence, rendering the AI Act itself powerless against technologies that implement AGI. Not only would the AI Act become inadequate; most legislation related to data sovereignty and governance would require significant changes. AGI functions would depend on cross-border data flows that are difficult to control, and AGI’s theoretical processing capability, unlike that of any technology ever created, would lead to automated processing of personal data with little human intervention or control over data handling. The mere existence of AGI would therefore strain Articles 12 to 23 of the GDPR, threatening data subjects’ rights and the obligations of data controllers, such as data minimization, lawful use of data, and the rules on automated decision-making and profiling.
Under the AI Act, while the principles of data governance and sovereignty remain similar, a distinct issue arises: classification. Under Article 5 of the AI Act, which details the prohibited AI practices mentioned earlier in this paper, AGI could potentially violate every listed prohibition because of the sheer volume of data it can process and its capabilities, which lie entirely outside human control.
While this remains hypothetical, matters could evolve depending on interpretation, judicial decisions and the actual deployment of Artificial General Intelligence. The only guarantee is that current data governance and sovereignty regimes, in their present state, are unable to provide sufficient legal certainty and regulation of this technology.
VII. Conclusions
The purpose of the AI Act is, in effect, to extend the reach of the GDPR to artificial intelligence. The Act creates a compliance framework for data governance built on the foundations laid by the GDPR; it does not create an additional data governance framework. In terms of data governance, the AI Act can therefore be read as an extension of the GDPR with respect to narrow AI systems. As concluded in the previous section, the AI Act and the GDPR are sufficient to address the legal issues arising from the data governance of today’s AI systems, but it is imperative that the European Union prepare amendments to address the issues that artificial general intelligence will raise.
The synergies between the two frameworks are clear: both are risk- and safety-oriented regulations, and both are oriented toward the protection of personal rights. The overlaps lie in automated decision-making (which Article 22 of the GDPR covers, though not sufficiently), the restrictions on biometric systems, the transparency and information obligations, risk analysis, and protection by design and by default. The remaining risks concern statistical processing and inference, explanatory power and transparency, and the still-unclear treatment of the rights to object and to opt out of AI processing.