The EU AI Act Explained
A comprehensive look at Europe's landmark AI regulation
AUG 2024
BY KATHRIN GARDHOUSE & MCKENZIE LLOYD-SMITH
Summary: The EU's Artificial Intelligence Act entered into force on August 1, 2024, establishing a comprehensive AI regulatory framework that classifies AI systems by risk and imposes stringent requirements on high-risk applications. This article explores the regulation, including its obligations, penalties, and specific provisions, while also addressing the challenges and criticisms it faces.
The European Union's Artificial Intelligence Act (EU AI Act) came into force on August 1, 2024, becoming the world's most comprehensive and likely most influential regulatory framework for artificial intelligence (AI). This legislation, four years in the making, is designed to protect fundamental rights while ensuring the safe development and use of AI technologies across the European Union and beyond.
The Context
The rapid advancement of AI technologies has brought both immense opportunities and significant challenges. Recognizing the need for a balanced approach to regulation, the European Union has taken a proactive stance in developing a legal framework that addresses the potential risks of AI while fostering innovation and maintaining Europe's competitiveness in the global AI landscape.
Timeline for the development of the Act:¹
February 2020: The European Commission published a white paper on AI, laying the groundwork for future legislation.
April 21, 2021: The European Commission officially proposed the AI Act.
December 6, 2022: The Council of the EU adopted its general approach on the AI Act, allowing negotiations to begin with the European Parliament.
December 9, 2023: After months of negotiations, the EU Council and Parliament reached a provisional agreement on the AI Act.
March 13, 2024: The European Parliament passed the AI Act.
May 21, 2024: The EU Council gave final approval to the AI Act.
July 12, 2024: The AI Act was published in the Official Journal of the EU.
Timeline for the implementation of the Act (see Art. 113):¹
1 August 2024: Entry into force of the law
2 February 2025: Ban on AI systems with unacceptable risk
2 May 2025: Codes of practice shall be ready by this date (Art. 56(9), Recital 179)
2 August 2025: Governance rules and the obligations for General Purpose AI (GPAI) models become applicable, as do the penalty provisions
2 August 2026: Remainder of the EU AI Act starts to apply, except obligations for high-risk systems listed in Annex I
2 August 2027: Application of the EU AI Act to high-risk AI systems under Annex I.
Key features of the EU AI Act
1. Risk-based approach
At the heart of the EU AI Act is a tiered, risk-based system for classifying and regulating AI. This approach ensures that the level of regulatory oversight is proportional to the potential harm an AI system could cause. The Act categorizes AI practices and systems into four risk levels:
Prohibited AI Practices: The placing on the market, putting into service, or use of these AI systems is outright prohibited. Examples include social scoring systems, manipulative AI designed to exploit vulnerabilities, and certain uses of real-time biometric identification in public spaces. (Art. 5)
High-Risk AI Systems: This category is subject to the most stringent regulations. It includes AI systems used in critical infrastructure, education, employment, essential public and private services, law enforcement, and migration management. As shown in the timeline above, the obligations for these systems begin to apply in August 2026 or August 2027, depending on the kind of high-risk AI system. (Art. 6)
Limited Risk: This category captures systems that interact directly with natural persons, systems that generate synthetic audio, image, video, or text, and systems that manipulate audio, video, or image content so as to constitute a deep fake. It also captures certain emotion recognition and biometric categorization systems. (Art. 50)
Minimal Risk: The majority of current AI applications, such as AI-enabled video games and spam filters, fall into this category and remain largely unregulated. Voluntary Codes of Conduct may be drawn up for non-high-risk systems, but adherence to these Codes is not mandatory. (Art. 95)
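To make the tiering concrete, the following minimal Python sketch shows how an organization might triage its AI use cases against the four tiers. The keyword sets are our own illustrative simplification, not an official mapping; the Act defines these categories in detailed legal language (Art. 5, Annex III, Art. 50), and real classification requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Art. 5  - banned outright
    HIGH = "high"              # Art. 6  - strict obligations
    LIMITED = "limited"        # Art. 50 - transparency duties
    MINIMAL = "minimal"        # Art. 95 - voluntary codes of conduct

# Illustrative keyword sets only -- not the Act's legal definitions.
PROHIBITED_USES = {"social_scoring", "exploitative_manipulation"}
HIGH_RISK_USES = {"credit_scoring", "hiring", "critical_infrastructure",
                  "law_enforcement", "migration", "education"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation", "emotion_recognition"}

def triage(use_case: str) -> RiskTier:
    """Map a use case to a risk tier, checking the most severe tier first."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("hiring"))       # RiskTier.HIGH
print(triage("spam_filter"))  # RiskTier.MINIMAL
```

Checking the most severe tier first mirrors the Act's logic: a use case that is prohibited never reaches the high-risk analysis.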
In addition, the EU AI Act also applies to General Purpose AI (GPAI) models, defined as:
"an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."
These models are not per se high-risk, but when a GPAI model is integrated into a high-risk AI system, the same obligations apply to it as well. Note further that the Act distinguishes between GPAI models and GPAI models with systemic risk. (Art. 51) The EU AI Act sets out criteria for determining systemic risk, mostly focused on the capabilities of the models, and allows the Commission flexibility to react to technological advancements.
2. Broad scope and extraterritorial application
The Act has an expansive reach, applying to all organizations placing AI systems on the EU market or putting them into service within the EU, regardless of where those organizations are headquartered. It also applies to providers and deployers of AI systems located in a third country where the output of the AI system is used in the EU. This extraterritorial scope ensures that any company wishing to deploy AI systems or use their output in the EU market must comply with the Act's provisions (Art. 2).
3. Stringent requirements for High-Risk AI Systems
Providers and deployers of high-risk AI systems must meet extensive but differing obligations. Providers face obligations including:
Conducting thorough risk assessments (Art. 9)
Ensuring high-quality data for training, validation, and testing (Art. 10)
Maintaining comprehensive technical documentation (Art. 11)
Implementing logging capabilities to enable auditability and keep relevant records (Art. 12, 18, and 19)
Enabling deployers to interpret and use AI systems appropriately and to comply with their respective obligations (Art. 13)
Ensuring human oversight is possible (Art. 14)
Meeting standards for accuracy, robustness, and cybersecurity (Art. 15)
Ensuring the AI system undergoes a Conformity Assessment, declaring conformity, and mitigating any non-conformity with the EU AI Act (Art. 16(f) and (g) and Art. 20)
Registering the AI system, where applicable (Art. 49(1))
Putting in place a Quality Management System (Art. 17)
Deployers of high-risk AI systems, for their part, must comply with the obligations of Art. 26, including:
Using systems in accordance with the provided instructions (Para. 1)
Assigning human oversight to competent and trained personnel (Para. 2)
Ensuring input data is relevant and representative for the system's intended purpose (Para. 4)
Monitoring system operation based on instructions and informing providers of potential risks (Para. 5)
Suspending system use and informing relevant authorities if risks are identified (Para. 5)
Keeping automatically generated logs for at least six months, unless otherwise specified (Para. 6; see the retention sketch after this list)
Informing workers' representatives and affected workers about the use of high-risk AI systems in the workplace (Para. 7)
Complying with registration obligations for public authorities and EU institutions (Para. 8)
Using provided information to conduct data protection impact assessments when applicable (Para. 9)
Obtaining authorization for post-remote biometric identification systems used in law enforcement (Para. 10)
Documenting each use of post-remote biometric identification systems in the relevant police files (Para. 10)
Submitting annual reports on the use of post-remote biometric identification systems to relevant authorities (Para. 10)
Informing natural persons when they are subject to decisions made or assisted by high-risk AI systems (Para. 11)
Cooperating with relevant authorities in implementing the regulation (Para. 12)
It's important to note that some of these obligations have specific conditions or exceptions, particularly for financial institutions and law enforcement agencies. In addition, deployers must take care not to inadvertently become a provider through the actions described in Art. 25, such as putting their name or trademark on a high-risk AI system or substantially modifying it.
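As a concrete illustration of the record-keeping duty in Para. 6 above, a deployer might enforce the six-month retention floor with a scheduled cleanup job along the following lines. This is a minimal sketch under assumed file naming and storage conventions, not a compliance recipe; other EU or national law may require longer retention.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Para. 6-style floor: logs must be kept AT LEAST six months, so a cleanup
# job may only touch files older than that. Longer retention may be
# required by other law -- verify before deleting anything.
MIN_RETENTION = timedelta(days=183)  # roughly six months

def purge_expired_logs(log_dir: Path, now: datetime | None = None) -> list[Path]:
    """Delete only those log files whose age exceeds the retention floor."""
    now = now or datetime.now(timezone.utc)
    deleted = []
    for log_file in log_dir.glob("*.log"):  # assumed naming convention
        modified = datetime.fromtimestamp(log_file.stat().st_mtime, timezone.utc)
        if now - modified > MIN_RETENTION:
            log_file.unlink()
            deleted.append(log_file)
    return deleted
```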
4. General Purpose AI (GPAI) Model Provisions
The Act introduces specific rules for GPAI models, recognizing their unique characteristics and potential impacts:
All GPAI model providers must draw up technical documentation, provide information and documentation to downstream providers, put in place a policy to comply with EU copyright law, and publish summaries of the content used for training. Certain exceptions apply to free and open-source models. Compliance with these requirements can be aided by relying on the Codes of Practice that are to be ready by 2 May 2025. (Art. 53)
Additional requirements apply to GPAI models that present systemic risks, including conducting model evaluations, adversarial testing, documentation and reporting obligations, cybersecurity measures, and risk mitigation. (Art. 55)
Note that these obligations apply to providers of GPAI models only; the Act imposes no specific obligations on deployers of GPAI models. This is because an AI model without the additional components that make it an AI system, such as a human-AI interface and other deployment infrastructure, cannot readily be deployed on its own. It can be open-sourced, but that would be for the provider to do.
5. Transparency Obligations applicable to certain AI systems
Users must be informed when interacting with the limited-risk systems described above, and in some cases, AI-generated content must be labeled as such. These obligations are imposed on either the provider or the deployer of the AI system. Special mention is made of deployers of emotion recognition and biometric categorization systems (except for those systems used to prevent or investigate crime), emphasizing that they too must inform individuals of the operation of the system and process the personal data in accordance with applicable laws (Art. 50).
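As an illustration of what such labeling can look like in practice, a provider of a text-generation system might attach a machine-readable disclosure to each output. The Act does not prescribe a specific marking technique (watermarks, metadata, and provenance standards are all in play), so the JSON wrapper and field names below are purely illustrative.

```python
import json
from datetime import datetime, timezone

def wrap_with_disclosure(generated_text: str, model_name: str) -> str:
    """Bundle AI-generated text with an illustrative machine-readable label."""
    return json.dumps({
        "content": generated_text,
        "ai_generated": True,  # Art. 50-style disclosure flag (illustrative)
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(wrap_with_disclosure("Q2 revenue grew 4%.", "example-model-1"))
```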
6. Governance and Enforcement Structure
The Act establishes a complex governance framework to ensure effective implementation:
An AI Office within the European Commission will oversee compliance for GPAI models.
National competent authorities and market surveillance authorities will play crucial roles in enforcement.
An AI Board will facilitate cooperation between member states.
7. Non-Compliance Penalties
To ensure adherence to the new regulations, the Act introduces substantial fines for violations:
Up to €35 million or 7% of global annual turnover, whichever is higher, for engaging in prohibited AI practices
Up to €15 million or 3%, whichever is higher, for violations related to high-risk AI systems
Fines for GPAI providers: up to €15 million or 3% of total worldwide turnover, whichever is higher, if the provider intentionally or negligently:
Violates the relevant provisions of the AI Act
Fails to comply with a request for a document or for information
Fails to comply with a corrective measure
Fails to make the general-purpose AI model, or general-purpose AI model with systemic risk, available to the Commission for the purpose of conducting an evaluation
Note that these fines on GPAI model providers are imposed by the Commission rather than by national market surveillance authorities. Beyond fines, the Commission can require providers to take corrective measures or to restrict the making available of, withdraw, or recall the model under certain circumstances.
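The ceilings above all follow the same "whichever is higher" formula: a fixed euro amount or a percentage of total worldwide annual turnover. A small illustrative calculation in Python (the tier table is our summary of the figures above, not the statutory text):

```python
# Maximum fine ceilings: (fixed cap in EUR, share of worldwide annual turnover).
# "Whichever is higher" means the ceiling is the max of the two.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "gpai_violation":      (15_000_000, 0.03),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine ceiling for a violation tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A company with EUR 2 billion turnover engaging in a prohibited practice:
print(f"{max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # 140,000,000
```

For large companies the turnover-based figure dominates; for small ones the fixed cap does, which is why both limbs of the formula exist.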
Global Impact and Future Implications
As the world's first comprehensive AI regulation, the EU AI Act is expected to have far-reaching effects beyond Europe's borders. Much like the General Data Protection Regulation (GDPR) set a global standard for data privacy, the AI Act may influence AI governance worldwide.
Companies developing or deploying AI systems will need to carefully assess their compliance with the Act, potentially leading to significant changes in how AI is developed, tested, and implemented globally. The Act may also spur innovation in "AI governance" technologies and methodologies to meet the new regulatory requirements.
Moreover, the Act's emphasis on ethical AI development aligns with growing global concerns about AI safety and trustworthiness. It may accelerate efforts to create international standards and best practices for responsible AI development.
Challenges and Criticisms
While the EU AI Act represents a major step forward in AI regulation, it has faced some criticism. Some argue that the regulations may stifle innovation or place European companies at a competitive disadvantage. We have already seen some tech giants hold off on deploying AI products in the EU after data protection regulators raised concerns about their GDPR compliance; the AI Act's obligations may further dissuade companies from placing their systems on the EU market.
Others contend that the Act doesn't go far enough in addressing potential AI risks. Cited loopholes include carve-outs for public authorities and the relatively light regulation of GPAI models, which some experts say may pose the greatest threats.
The practical implementation of the Act, particularly in rapidly evolving areas like general-purpose AI, will likely present challenges and may require ongoing adjustments to the regulatory framework and to the codes of practice and harmonized standards that are currently in the works.
Conclusion
The EU AI Act marks a pivotal moment in the governance of artificial intelligence. By establishing a comprehensive regulatory framework, the European Union aims to create an environment where AI can flourish while respecting fundamental rights and ensuring public safety. As AI continues to transform various aspects of society, the impact of this landmark legislation will be felt far beyond Europe's borders, potentially shaping the future of AI development and deployment on a global scale.
° ° °
At MindPort, we believe that the future of AI lies in its ability to seamlessly integrate into the human experience, enhancing our capabilities and enriching our interactions. From crafting bespoke governance frameworks to conducting educational workshops and risk assessments, we ensure that businesses can confidently leverage AI to achieve transformative outcomes while adhering to the highest standards of security and ethics.
If you want support in adopting AI responsibly, building a next-generation product, developing an AI strategy, or just want to learn more, get in touch.
° ° °
Learn about our approach to AI Strategy
Explore our research into AIX and Human-Centered Design Research
Sign up to receive our insights & reports straight to your inbox. Always interesting, and never more than once per month. We promise.