The interplay of the AI Act and GDPR can be considered on several levels. Firstly, both regulations aim to protect the fundamental rights and freedoms of individuals. Whereas the AI Act seeks to ensure that the development and implementation of AI systems do not lead to unintended consequences that could harm individuals or society, GDPR guarantees data subjects’ rights to privacy and security, and the right to be forgotten, that is, to be removed from databases. Secondly, AI-based technologies often use personal data as a key part of their functionality, making compliance with GDPR indispensable for compliance with the AI Act. In practice, this means that companies developing AI systems must take data protection into account as early as the design stage of the technology (privacy by design) and follow a data minimisation approach. Thirdly, a consistent legal framework that allows both regulations to coexist harmoniously is crucial not only to protecting the rights of individuals, but also to providing for the sustainable development of AI-driven technologies.
Independence of protection regimes: AI Act and GDPR
Despite their interplay, the AI Act and GDPR function as independent legal regimes, a fact emphasised in the AI Act’s preamble. The independence of these regimes is reflected in the diversity of their regulatory approaches. GDPR sets out detailed requirements for the processing of personal data, imposing obligations on data controllers and processors related to transparency, data minimisation, purpose limitation and data subjects’ rights. The AI Act, while based on broadly similar values such as protecting human rights and preventing discrimination, focuses more on the risks associated with AI technology, regardless of whether those risks relate to personal data or other factors. An important element of this independence is that the AI Act can also apply to systems that do not process personal data and therefore do not fall under the GDPR regime at all. For example, AI systems used by industry to optimise manufacturing processes would be regulated by the AI Act even if they do not involve the processing of personal data. This shows that the two acts, while complementary, have separate fields of application and purpose, and are often described as operating in ‘legislative tandem’. In terms of the risks each regulation protects against, the scope of the AI Act is far broader, as it seeks to guard against all risks associated with the use of AI systems, whereas GDPR protects against impacts on the rights and freedoms of individuals, mainly in terms of privacy.
Differences in approach: input vs output
One of the more interesting criteria by which to distinguish the AI Act from GDPR is the level at which each regulates: input versus output. The GDPR can be seen as focused mainly on input data, covering the protection of personal data processed by various entities. In this respect it regulates what data can be collected, on what legal basis, how long it can be stored, and what rights data subjects have in relation to it. The AI Act, on the other hand, focuses on outputs, that is, the effects an AI system may generate and the potential risks arising from them. In this sense, the GDPR addresses all stages of processing, while the AI Act concentrates on the results produced by AI systems. Regardless of the above, the pivotal truth of ‘garbage in, garbage out’ remains relevant at all times.
Management of risk
Although both the AI Act and GDPR take a risk-based approach, their application in practice differs considerably. The GDPR applies a rights-based approach, granting specific rights to data subjects, alongside additional requirements directly addressing the elimination of risks, through a multitude of prescriptive provisions. The AI Act, on the other hand, applies an obligation-based approach, imposing a number of obligations on providers and users while not focusing on directly granting specific rights to individuals. In this respect, the AI Act is more of a prohibitive regulation, centred on conformity management and broad-based compliance. Moreover, under GDPR the approach is to assess the risks associated with the processing of personal data and implement appropriate technical and organisational measures to minimise them. For example, if data processing is likely to lead to a high risk of violating the rights and freedoms of individuals, the data controller is required to conduct a data protection impact assessment (DPIA) and take measures to minimise that risk. The AI Act also takes a risk-based approach, but its scope is broader and more complex. The AI Act categorises AI systems according to the level of risk they pose to society, from minimal through limited to high risk, with certain practices prohibited outright as posing unacceptable risk. Each of these categories comes with different regulatory requirements. For example, high-risk AI systems, such as those used in critical infrastructure, require strict controls, including registration obligations, marking, conformity assessment and oversight mechanisms. The differences in how the two pieces of legislation approach risk underscore how they differ, but also how they complement each other. While the GDPR focuses on protecting individuals from data processing risks, the AI Act addresses the broad spectrum of risks associated with the use of AI in various social and economic contexts.
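To make the tiered logic above more concrete, the sketch below shows how a compliance team might encode the AI Act’s risk categories and the headline obligations attached to each. It is a simplified illustration only: the tier names follow the Act, but the obligation lists, type names and functions are assumptions for the sake of the example, not a restatement of the law.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the AI Act (simplified for illustration)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. critical infrastructure
    LIMITED = "limited"            # transparency duties apply
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping of tiers to headline obligations (illustrative only).
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["placing on the market is prohibited"],
    RiskTier.HIGH: [
        "conformity assessment",
        "registration obligations",
        "marking",
        "risk management and oversight mechanisms",
    ],
    RiskTier.LIMITED: ["transparency obligations"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        duties = ", ".join(obligations_for(tier)) or "no specific obligations"
        print(f"{tier.value}: {duties}")
```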
Mutual enhancement of the standard of protection
The AI Act and GDPR, despite their independence, can mutually raise each other’s standards for protecting fundamental and other rights. One example is the integration of the principles of transparency and accountability, which, although understood differently, are key in both acts. The GDPR requires personal data to be processed transparently, meaning that data subjects must be informed about how and why their data is being processed. The AI Act elaborates on this principle by requiring transparency in the operation of AI systems, with the goal of enabling users to understand how a system works, what its limitations are and what risks its use entails. Similarly, the principle of accountability, a cornerstone of the GDPR, is reflected in the AI Act. Entities responsible for processing personal data must implement appropriate technical and organisational measures to ensure compliance with the GDPR, and the AI Act introduces analogous requirements for developers and users of AI systems, obliging them to monitor these systems and manage their risks. The intertwining of these standards can lead to higher-quality regulation that better protects individuals’ rights in the digital age. For example, integrating privacy rules with AI ethics rules could produce more comprehensive regulation capable of meeting the challenges of the future.
Terminology
The AI Act refers to GDPR directly, both in terms of the scope of its application and in its terminology. In this way, the AI Act draws on definitions such as ‘personal data’, ‘non-personal data’ and ‘special categories of personal data’. At the same time, the AI Act occasionally uses terms known from GDPR but understood differently, with the difference often being subtle. This is the case with: automated decision-making; consent; risk management; biometric data/verification/identification; impact assessments; data collection, etc.
The difference also shows when assessing the subjective scope of both regulations. While under GDPR obligations are addressed to ‘data controllers’ and ‘data processors’ when processing the data of ‘data subjects’ (as, from the point of view of the GDPR, the interference with data is crucial), under the AI Act roles are determined by the relation to the AI system. We therefore have ‘providers’, ‘importers’, ‘distributors’, ‘deployers’, ‘operators’ and ‘producers’.
Regulatory bodies: expansion of powers vs establishment of a new entity
The issue of regulating and overseeing compliance with both the AI Act and GDPR is key to the effective implementation of these laws. One dilemma is the choice between expanding the competencies of existing regulatory bodies, such as the national data protection authorities (in Poland, PUODO, the President of the Personal Data Protection Office), and establishing a new, dedicated entity responsible for AI oversight. Expanding the powers of existing bodies could ensure regulatory consistency and leverage existing structures and experience. However, the AI Act introduces new challenges that may require specialised knowledge and dedicated resources, which argues for the establishment of a new body. Such a body could focus exclusively on AI issues, providing more detailed and relevant oversight of this rapidly evolving area. Choosing the right approach to overseeing compliance with the AI Act and GDPR will be crucial to the successful implementation of both pieces of legislation. Regardless of the choice, adequate resources and competencies will need to be in place for supervisors to perform their tasks effectively. It is worth mentioning that Irene Loizidou Nicolaidou, the vice-president of the European Data Protection Board, said in July this year that “Data protection authorities should play a significant role in enforcing the AI Act, as most artificial intelligence systems involve the processing of personal data. I firmly believe that the data protection authorities are suitable for this role due to their full independence and deep understanding of the risks of artificial intelligence to fundamental rights, based on their past experience.”
Penalties: the risk of overlap
Penalties for violations of GDPR are widely known and include hefty fines of up to €20 million or 4% of a company’s annual worldwide turnover, whichever is higher. The AI Act likewise introduces a system of financial penalties, of up to €35 million or 7% of a company’s annual worldwide turnover, whichever is higher. However, there is a risk of overlapping sanctions for violations that fall under both acts simultaneously. For example, if an AI system violates the GDPR’s provisions on personal data processing while also failing to comply with the AI Act’s transparency or risk management requirements, the responsible entity may be penalised under both acts. Needless to say, this lends all the more significance to the rule that artificial intelligence must be GDPR-compliant.
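A short worked example helps with the ‘whichever is higher’ mechanics. The sketch below computes the theoretical maximum fine under each regime for a hypothetical company; the turnover figure and the helper function are assumptions for illustration only and say nothing about how regulators set fines in practice.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Theoretical maximum fine: the higher of the fixed cap
    and the given percentage of annual worldwide turnover."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

# Hypothetical company with €2 billion annual worldwide turnover.
turnover = 2_000_000_000

gdpr_cap = max_fine(turnover, 20_000_000, 0.04)    # GDPR: €20m or 4%
ai_act_cap = max_fine(turnover, 35_000_000, 0.07)  # AI Act: €35m or 7%

print(f"GDPR cap:   €{gdpr_cap:,.0f}")    # €80,000,000 (4% exceeds €20m)
print(f"AI Act cap: €{ai_act_cap:,.0f}")  # €140,000,000 (7% exceeds €35m)
```

As the example shows, for any sizeable company it is the percentage limb, not the fixed cap, that determines the ceiling, and the AI Act’s ceiling sits well above the GDPR’s.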
Conclusion
The relationship between the AI Act and GDPR is complex and multidimensional. Although the two acts operate as independent regimes, their intertwining is inevitable, given the crucial importance of personal data in the context of AI. Both the AI Act and GDPR seek to protect fundamental rights, although they do so in different ways and with different priorities. Integrating these two legal regimes can help create more comprehensive and effective regulations that better protect individuals’ rights in the digital age. The challenges of oversight and sanctions, as well as the need to ensure regulatory consistency, will require close cooperation and coordination between the authorities responsible for implementing these regulations. In the future, as AI plays an ever-increasing role in our lives, regulations will need to be further adapted and harmonised to meet new challenges while ensuring the protection of individuals’ rights and fostering technological innovation.
Fortunately, UODO, Poland’s personal data protection authority, seems well aware of the above, underlining that the key task for the Polish legislator is to ensure that the regulations implementing the AI Act are compatible with the protection of personal data and the constitutional right to privacy.
The key differences can be summarised as follows:

| GDPR | AI Act |
| --- | --- |
| (personal) data | AI system |
| full scope (where personal data are included) | limited scope – obligations differ depending on the system’s level of risk |
| data controllers and data processors | deployers, providers, importers, distributors, operators and producers |
| input-focused (data collection) | output-focused |
| rights- and obligations-based | obligations- and restrictions-based |
| key focus – interference with data | key focus – interference with the system |
| purpose of data processing | purpose of system usage |
| data breaches | incidents |