EU AI Act – Europe’s first regulation on artificial intelligence soon to enter into force

The EU’s Artificial Intelligence Act (AI Act), which seeks to regulate the use of AI in the EU, is expected to enter into force soon. This regulation will determine how businesses use AI in the EU and how they will be held accountable.

Initially proposed by the European Commission in 2021, the Act was the subject of a provisional agreement reached by the EU Council and Parliament in December 2023 [1]. That agreement was formally adopted by member state representatives on the Committee of Permanent Representatives (COREPER) [2] and by parliamentarians on the IMCO and LIBE committees in February 2024 [3]. Pending final adoption by the European Parliament and the European Council, both of which are expected to vote in favour [3], the Act is likely to enter into force within the first half of 2024.

The AI Act aims to classify all AI applications according to their risk, imposing a different set of rules on each risk category. The categories are “unacceptable, high, limited, and minimal” risk [4]. Amid the rise of generative AI in 2023, a fifth category for “general-purpose AI” was also added. AI systems posing “unacceptable” risks are prohibited in the EU, while “high-risk” AI systems, such as those deployed in critical infrastructure, law enforcement, safety components, education, and public services, will be subject to strict requirements including risk assessment and mitigation systems, detailed documentation, and human oversight. “Limited risk” applications are subject to transparency obligations (e.g., chatbots must identify themselves as such), while applications in the “minimal risk” category will not be regulated; the European Commission notes that the vast majority of AI systems currently used in the EU fall into this category [4].

Obligations for providers of general-purpose AI models notably include the provision of technical documentation and compliance with the Copyright Directive. While these obligations are slightly relaxed for providers of open-licence models, providers of general-purpose AI models that pose systemic risks must comply with a more stringent set of requirements, including model evaluations, risk assessments, and adversarial testing.

After entering into force, the obligations under the AI Act will be implemented in stages: rules on prohibited AI systems apply after 6 months, rules on general-purpose AI after 12 months, and rules on high-risk AI systems after 24 months. Meanwhile, ASEAN companies that export AI goods and services to the EU, or whose goods and services incorporate AI, are advised to familiarize themselves with the risk categories and the obligations that the new rules create for them. A draft version of the text can be found here.

Sources:
[1] EU Council (https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/).
[2] Euractiv (https://www.euractiv.com/section/digital/news/all-eyes-on-coreper-from-a-convention-to-a-declaration/).
[3] Pinsent Masons (https://www.pinsentmasons.com/out-law/news/eu-ai-act-takes-latest-step-through-european-parliament).
[4] European Commission (https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai).