By Szymon Sieniewicz, head of TMT/IP practice, Linklaters Warsaw

The EU adopted its ground-breaking AI law in May 2024, becoming the first jurisdiction in the world to enact such an extensive AI regulatory framework. The EU AI Act imposes numerous obligations on providers and users of AI systems; in essence, virtually every business will need to take some action to prepare for the new rules. Here are the top five things that businesses need to know about the EU AI Act.

  1. Application timeline

The EU AI Act came into force on 1 August 2024. Organisations still have some time to adapt to the specific requirements of the Act, with some obligations applying in the near term and others not becoming applicable until 2027. Implementation follows a phased schedule, with key deadlines spanning a three-year period. Below are the key dates on which various provisions of the Act take effect:

  • 1 August 2024: The EU AI Act enters into force.
  • 2 February 2025: Provisions on prohibited AI practices and AI literacy apply.
  • 2 August 2025: Provisions on general-purpose AI (GPAI), governance, sanctions and confidentiality apply.
  • 2 August 2026: Most of the obligations under the EU AI Act begin to apply.
  • 2 August 2027: The rules for high-risk AI systems embedded in products covered by EU product-safety legislation apply.

This phased rollout provides organisations with a structured timeline to align with the EU AI Act’s requirements.

  2. Risk-based approach

The EU AI Act adopts a risk-based approach, one that businesses will already be familiar with from the GDPR. The obligations under the EU AI Act vary depending on how the AI system is intended to be used: the higher the identified risk, the stricter the rules that must be followed.

Here is a summary of the main risk tiers:

  • Prohibited AI practices: The most dangerous AI applications will be completely banned. This includes AI systems that use subliminal techniques to manipulate behaviour or cause harm. The EU AI Act provides a list of these prohibited AI practices.
  • High-risk AI systems: Certain AI applications fall into the category of high-risk AI systems and are subject to the most rigorous compliance obligations. The list of high-risk AI systems includes AI systems used in recruitment, remote biometric identification and credit scoring.
  • General-purpose AI models: The regulation of general-purpose AI (GPAI) models also follows a risk-based approach. GPAI models that present systemic risk are subject to stringent compliance obligations, whereas other GPAI models face more limited obligations, focused on documentation and copyright.
  • Other: There are some obligations related to AI literacy and transparency that apply to AI systems, regardless of whether they are high-risk or general-purpose. These obligations are relatively narrow and pertain to specific uses.

  3. Broad scope of application

The territorial scope of the EU AI Act is extensive. It affects not only businesses established within the EU but also international businesses with only a tangential connection to the EU. This broad reach arises from the extraterritorial nature of the EU AI Act, which applies to:

  • Providers of AI systems that are put into service or placed on the market in the EU
  • Deployers of AI systems established in the EU
  • Providers or deployers of AI systems regardless of their location, where the output from the system is used in the EU

Additionally, the EU AI Act encompasses all actors involved in the lifecycle of an AI system. The most stringent obligations are placed on:

  • Providers (persons or entities that develop an AI system or a GPAI model or that have an AI system or a GPAI model developed and place it on the market or put the AI system into service under their own name or trademark)
  • Deployers (persons or entities that use an AI system under their authority)

Moreover, some obligations are placed on other parties, including importers, distributors and product manufacturers.

  4. What should you do now?

With certain requirements of the EU AI Act soon coming into effect, early preparation is crucial. Most businesses should focus on four key steps:

  1. Identify AI systems: Review software and hardware products within the organisation to determine whether they qualify as AI systems under the EU AI Act. If a system’s status is uncertain, it is safer to treat it as within scope.
  2. Carry out a risk assessment: Categorise AI applications by the EU AI Act’s risk tiers. Assess whether they fall under prohibited AI practices, high-risk AI systems, or other categories.
  3. Define obligations: Identify the specific obligations applicable under the EU AI Act based on your organisation’s role (provider, deployer, distributor or importer).
  4. Develop a compliance plan: Create your compliance plan to identify and address regulatory requirements under the EU AI Act. This plan should include:
  • Compliance documentation: Create or update AI policies. This may include AI management policies, technology usage policies, supplier engagement policies, AI incident response plans, fundamental rights impact assessment templates, privacy policies and consumer-facing terms and conditions.
  • AI training and oversight: Ensure high-quality data input and human oversight with intervention capabilities.
  • AI literacy: Train employees on the principles of the AI compliance programme implemented by your organisation.

Conduct a thorough gap analysis to identify areas of non-compliance and regularly review and update the AI compliance programme once established.

  5. Other applicable laws

On top of the EU AI Act, AI systems are heavily regulated under other frameworks, in particular data protection law (including the GDPR) as well as consumer protection and intellectual property law. The obligations under the EU AI Act do not override those other obligations. Sector-specific regulations may also apply, including the regulatory frameworks for the financial sector and for medical devices, to name just two. European businesses need to take into consideration not only the relevant EU laws but also the national laws of the Member States where they do business.

For international businesses, it is important to remember that the EU AI Act will likely influence the global regulatory landscape. Other jurisdictions may follow the EU’s lead, as they did with the GDPR, a phenomenon commonly known as the “Brussels Effect”. Some jurisdictions have already implemented local AI laws: Mainland China adopted provisions governing recommendation algorithms in 2021 and regulations on generative AI in 2023, while the US is developing its own complex patchwork of state-specific AI laws. This international regulatory matrix is becoming increasingly complex, and businesses need both to understand it and to prepare for compliance.