The EU AI Act Explained

A comprehensive look at Europe's landmark AI regulation


AUG 2024
BY KATHRIN GARDHOUSE & MCKENZIE LLOYD-SMITH


Summary: The EU's Artificial Intelligence Act, in force since August 1, 2024, establishes a comprehensive AI regulatory framework, classifying AI systems by risk and imposing stringent requirements on high-risk applications. This article explores the regulation's obligations, penalties, and specific provisions, while also addressing the challenges and criticisms it faces.


The European Union's Artificial Intelligence Act (EU AI Act) came into force on August 1, 2024, becoming the world's most comprehensive and likely most influential regulatory framework for artificial intelligence (AI). This legislation, four years in the making, is designed to protect fundamental rights while ensuring the safe development and use of AI technologies across the European Union and beyond.


The Context

The rapid advancement of AI technologies has brought both immense opportunities and significant challenges. Recognizing the need for a balanced approach to regulation, the European Union has taken a proactive stance in developing a legal framework that addresses the potential risks of AI while fostering innovation and maintaining Europe's competitiveness in the global AI landscape.

Timeline for the development of the Act:¹

Timeline for the implementation of the Act (see Art. 113):¹


Key features of the EU AI Act

1. Risk-based approach

At the heart of the EU AI Act is a tiered, risk-based system for classifying and regulating AI. This approach ensures that the level of regulatory oversight is proportional to the potential harm an AI system could cause. The Act categorizes AI practices and systems into four risk levels:

- Unacceptable risk: practices that are banned outright (Art. 5), such as social scoring by public authorities and manipulative techniques that cause significant harm
- High risk: systems used as safety components of regulated products or deployed in sensitive areas listed in Annex III, such as recruitment, education, credit scoring, and law enforcement (Art. 6); these face the Act's strictest requirements
- Limited risk: systems such as chatbots and generators of synthetic content, subject to transparency obligations (Art. 50)
- Minimal risk: all remaining systems, such as spam filters, which face no new obligations

In addition, the EU AI Act also applies to General Purpose AI (GPAI) models, defined as:

"an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

These models are not high-risk per se, but when a GPAI model is integrated into a high-risk AI system, the same obligations apply to it as well. Note further the additional distinction between GPAI models and GPAI models with systemic risk (Art. 51). The EU AI Act sets out criteria for determining systemic risk, mostly focused on the capabilities of the models, and allows for flexibility in this regard, enabling the Commission to react to technological advancements.
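The tiered idea can be sketched in a few lines of Python. This is a toy illustration only: the tier names come from the Act, but the example use cases and the `risk_tier` helper are illustrative assumptions, not a legal classification, which turns on the detailed criteria of Arts. 5, 6, and 50 and Annex III.

```python
# Toy illustration of the Act's four-tier, risk-based structure.
# The example use cases are commonly cited illustrations, not legal
# determinations under Arts. 5, 6, and 50 and Annex III.
EXAMPLES_BY_TIER = {
    "unacceptable": {"social scoring by public authorities"},   # prohibited (Art. 5)
    "high":         {"CV-screening software for recruitment"},  # Annex III
    "limited":      {"customer-service chatbot"},               # transparency (Art. 50)
    "minimal":      {"email spam filter"},                      # no new obligations
}

def risk_tier(use_case: str) -> str:
    """Return the tier of a known example use case."""
    for tier, examples in EXAMPLES_BY_TIER.items():
        if use_case in examples:
            return tier
    raise ValueError(f"not a listed example: {use_case!r}")

print(risk_tier("email spam filter"))  # -> minimal
```

The point of the structure is that obligations attach to the tier, not to the technology: the same underlying model can sit in different tiers depending on its intended use.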


2. Broad scope and extraterritorial application

The Act has an expansive reach, applying to all organizations placing AI systems on the EU market or putting them into service within the EU, regardless of where those organizations are headquartered. It also applies to providers and deployers of AI systems located in a third country where the output of the AI system is used in the EU. This extraterritorial scope ensures that any company wishing to deploy AI systems, or use their output, in the EU market must comply with the Act's provisions (Art. 2).


3. Stringent requirements for High-Risk AI Systems

Providers and deployers of high-risk AI systems must meet extensive but differing obligations. Providers of high-risk AI systems face obligations including:

- Establishing a risk management system that runs throughout the system's lifecycle (Art. 9)
- Applying data governance and quality criteria to training, validation, and testing data (Art. 10)
- Drawing up and maintaining technical documentation (Art. 11)
- Enabling record-keeping through automatically generated logs (Art. 12)
- Providing transparency and instructions for use to deployers (Art. 13)
- Designing systems to allow effective human oversight (Art. 14)
- Ensuring appropriate accuracy, robustness, and cybersecurity (Art. 15)
- Operating a quality management system, undergoing conformity assessment, affixing CE marking, and registering the system in the EU database (Arts. 17, 43, 48, 49)

Obligations for deployers of high-risk AI systems, under Art. 26, include:

- Taking appropriate technical and organizational measures to use the system in accordance with its instructions for use
- Assigning human oversight to persons with the necessary competence, training, and authority
- Ensuring that input data is relevant and sufficiently representative, to the extent the deployer exercises control over it
- Monitoring the system's operation and informing the provider and the relevant authorities of serious risks and incidents
- Retaining automatically generated logs for at least six months
- Informing workers and their representatives before putting a high-risk system into use in the workplace
- Informing affected natural persons when the system is used to make or help make decisions about them

It's important to note that some of these obligations have specific conditions or exceptions, particularly for financial institutions and law enforcement agencies. In addition, deployers must take care not to inadvertently become a provider through the actions described in Art. 25, such as putting their own name or trademark on a high-risk system or substantially modifying it.


4. General Purpose AI (GPAI) Model Provisions

The Act introduces specific rules for GPAI models, recognizing their unique characteristics and potential impacts:

- All providers of GPAI models must draw up and maintain technical documentation, make information available to downstream providers who integrate the model into their AI systems, put in place a policy to comply with EU copyright law, and publish a sufficiently detailed summary of the content used for training (Art. 53), with partial exemptions for free and open-source models
- Providers of GPAI models with systemic risk must additionally perform model evaluations, including adversarial testing, assess and mitigate possible systemic risks, track and report serious incidents to the AI Office, and ensure an adequate level of cybersecurity protection (Art. 55)

Note that these obligations apply only to the providers of GPAI models; the Act imposes no specific obligations on their deployers. This is because an AI model without the additional components that make it an AI system, such as a human-AI interface and other deployment infrastructure, cannot readily be deployed on its own. It can be open-sourced, but that is the provider's doing.


5. Transparency Obligations applicable to certain AI systems

Users must be informed when interacting with the limited-risk systems described above, and in some cases, AI-generated content must be labeled as such. These obligations are imposed on either the provider or the deployer of the AI system. Special mention is made of deployers of emotion recognition and biometric categorization systems (except for those systems used to prevent or investigate crime), emphasizing that they too must inform individuals of the operation of the system and process the personal data in accordance with applicable laws (Art. 50).


6. Governance and Enforcement Structure 

The Act establishes a complex governance framework to ensure effective implementation:

- The AI Office, established within the European Commission, supervises GPAI models and coordinates implementation across the Union
- The European Artificial Intelligence Board, composed of member state representatives, advises on the consistent application of the Act
- An advisory forum of stakeholders and a scientific panel of independent experts support the AI Office and the Board
- National competent authorities, including market surveillance authorities, enforce the Act for AI systems at the member state level


7. Non-Compliance Penalties

To ensure adherence to the new regulations, the Act introduces substantial fines for violations (Art. 99):

- Up to €35 million or 7% of total worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices
- Up to €15 million or 3% of total worldwide annual turnover for violations of most other obligations, including those applying to high-risk AI systems
- Up to €7.5 million or 1% of total worldwide annual turnover for supplying incorrect, incomplete, or misleading information to authorities
- For SMEs and start-ups, each cap is the lower rather than the higher of the two amounts

Note that fines on GPAI model providers for violations of their obligations set out above in section 4 are not imposed by national authorities under Art. 99 but by the Commission under Art. 101, at up to €15 million or 3% of total worldwide annual turnover, whichever is higher, whether or not the model is categorized as a systemic risk model. The Commission can also require that a model be restricted or withdrawn from the market under certain circumstances.
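The penalty structure follows a simple "fixed cap or turnover share, whichever is higher" rule under Art. 99, reversed to "whichever is lower" for SMEs and start-ups. As a rough illustration, not legal advice, and with the `max_fine` helper and tier labels as hypothetical conveniences:

```python
# Sketch of the Art. 99 penalty tiers: each violation class carries a
# fixed cap (EUR) and a share of total worldwide annual turnover (%).
# The applicable maximum is the HIGHER of the two, but the LOWER of
# the two for SMEs and start-ups (Art. 99(6)).
TIERS = {
    "prohibited_practice":   (35_000_000, 7),  # Art. 5 violations
    "other_obligation":      (15_000_000, 3),  # e.g. high-risk system duties
    "incorrect_information": (7_500_000, 1),   # misleading info to authorities
}

def max_fine(tier: str, annual_turnover_eur: int, is_sme: bool = False) -> float:
    """Maximum possible fine for a violation tier and company turnover."""
    fixed_cap, turnover_pct = TIERS[tier]
    turnover_cap = annual_turnover_eur * turnover_pct / 100
    pick = min if is_sme else max
    return pick(fixed_cap, turnover_cap)

# A EUR 1 bn turnover company engaging in a prohibited practice:
print(max_fine("prohibited_practice", 1_000_000_000))  # -> 70000000.0 (7% > EUR 35 m)
```

For large companies the turnover share dominates, while for small ones the fixed cap does; the SME rule ensures the smaller of the two always applies to them.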


Global Impact and Future Implications

As the world's first comprehensive AI regulation, the EU AI Act is expected to have far-reaching effects beyond Europe's borders. Much like the General Data Protection Regulation (GDPR) set a global standard for data privacy, the AI Act may influence AI governance worldwide.

Companies developing or deploying AI systems will need to carefully assess their compliance with the Act, potentially leading to significant changes in how AI is developed, tested, and implemented globally. The Act may also spur innovation in "AI governance" technologies and methodologies to meet the new regulatory requirements.

Moreover, the Act's emphasis on ethical AI development aligns with growing global concerns about AI safety and trustworthiness. It may accelerate efforts to create international standards and best practices for responsible AI development.


Challenges and Criticisms

While the EU AI Act represents a major step forward in AI regulation, it has faced some criticism. Some argue that the regulations may stifle innovation or place European companies at a competitive disadvantage. We have already seen some tech giants hold off on deploying products in the EU over concerns data protection regulators raised about their compliance with the GDPR. The AI Act's obligations may further dissuade companies from placing their systems on the EU market.

Others contend that the Act does not go far enough in addressing potential AI risks. Cited loopholes include carve-outs for public authorities and the comparatively light regulation of GPAI models, which some experts say may pose the greatest threats.

The practical implementation of the Act, particularly in rapidly evolving areas like general-purpose AI, will likely present challenges and may require ongoing adjustments to the regulatory framework and to the codes of practice and harmonized standards that are currently in the works.


Conclusion

The EU AI Act marks a pivotal moment in the governance of artificial intelligence. By establishing a comprehensive regulatory framework, the European Union aims to create an environment where AI can flourish while respecting fundamental rights and ensuring public safety. As AI continues to transform various aspects of society, the impact of this landmark legislation will be felt far beyond Europe's borders, potentially shaping the future of AI development and deployment on a global scale.


°   °   °

At MindPort, we believe that the future of AI lies in its ability to seamlessly integrate into the human experience, enhancing our capabilities and enriching our interactions. From crafting bespoke governance frameworks to conducting educational workshops and risk assessments, we ensure that businesses can confidently leverage AI to achieve transformative outcomes while adhering to the highest standards of security and ethics.

If you want support in adopting AI responsibly, building a next-generation product, developing an AI strategy, or just want to learn more, get in touch. 

°   °   °

Sign up to receive our insights & reports straight to your inbox. Always interesting, and never more than once per month. We promise.

¹ Footnotes
