Disclaimer: Since the law has not yet been formally adopted, the validity of this content may change over time. This blog post is based on the European Union AI Act as originally proposed in April 2021, together with the amendments made along the way. The contents of this blog post are not to be regarded as legal advice.
The regulatory consequences of the AI hype wave
AI is booming, and so are its regulations. ChatGPT’s public launch on 30 November 2022 boosted AI interest, investments, and adoption. But it also triggered global regulation efforts. In late 2023, several AI governance initiatives emerged, such as the Hiroshima AI Process by the G7 and the AI Safety Summit at Bletchley Park. This came to a head in December 2023, with the European Union reaching a provisional agreement on the law to govern AI within the union: the European Union AI Act (EU AI Act).
The EU AI Act: one of the most stringent AI regulations
The EU AI Act is considered one of the most stringent AI regulations internationally, and it follows a risk-based approach: the higher the risk, the stricter the rules. With the Act expected to become EU law in early 2024, companies with AI applications must understand its implications and how to comply with it. Non-compliance with the EU AI Act can result in fines of up to EUR 35 million or 7% of global annual revenue, whichever is higher. It is therefore urgent for every business applying AI within the EU to get on top of EU AI Act compliance.
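To make the stakes concrete, here is a minimal sketch in Python of how the maximum fine scales with company size. The revenue figure is a hypothetical assumption for illustration, not a real case:

```python
# Illustrative sketch of the EU AI Act's maximum fine for the most
# serious violations: EUR 35 million or 7% of global annual revenue,
# whichever is higher.

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Return the upper bound of the fine for the most serious violations."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# For a hypothetical company with EUR 2 billion in global annual revenue,
# the cap is 7% of revenue (EUR 140 million), not the flat EUR 35 million.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```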
AI as defined in the EU AI Act
AI is a terminology with many different definitions and meanings. Before diving into how the EU AI Act will regulate AI systems, let’s clarify the definition of AI. The EU AI Act is using a definition inspired by OECD’s definition of AI, and reads as follows:
“Artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches [listed in Annex I] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”
What does the EU AI Act mean for your AI applications?
The EU AI Act classifies AI applications into four risk levels: 1) Unacceptable risk, 2) High risk, 3) Limited risk and 4) Low and minimal risk. The higher the risk, the stricter the rules. General purpose AI systems will also be included in, and governed by, the law. Let’s uncover each of these risk categories in detail.
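As a rough mental model of the risk-based approach, the four tiers can be sketched as a simple lookup. The tier names mirror the Act; the example use cases below are our own illustrative assumptions, not legal classifications, which always depend on the Act's annexes and a case-by-case assessment:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "comprehensive requirements"
    LIMITED = "transparency requirements"
    MINIMAL = "no obligations"

# Purely illustrative mapping of use cases to tiers; the real
# classification is a legal determination, not a keyword match.
EXAMPLES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```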
Unacceptable risk applications: banned
The EU AI Act completely bans AI applications that pose an unacceptable level of risk. Under the provisional agreement, the following AI applications are deemed to be of unacceptable risk:
- Cognitive behavioural manipulation that exploits people's vulnerabilities
- Social scoring
- Biometric categorisation to infer sensitive characteristics, such as political or religious beliefs or sexual orientation
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- Emotion recognition in the workplace and educational institutions
- Predictive policing of individuals based on profiling
- Real-time remote biometric identification in publicly accessible spaces, subject to narrow law enforcement exceptions
Companies with AI systems in any of these application areas will have to phase them out within six months of the law's adoption to avoid being in breach.
High risk applications: comprehensive requirements
AI use cases are regarded as high risk if at least one of the following two conditions is fulfilled:
1. AI systems used in products that are covered under the EU’s product safety legislation, including toys, aviation, cars, and medical devices.
or
2. AI systems falling into these specific areas, which will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to essential private and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
Among the permitted AI systems, high risk AI systems are the most heavily regulated ones, with regulations including:
Human oversight
The AI system should allow humans to oversee its functioning, guarantee that it can be overridden by a human operator, and remain responsive to that operator.
High quality data
High quality data is essential not only for the performance of many AI systems, but also to ensure that they perform as intended, are safe, and do not discriminate. Training, validation, and testing data should be sufficiently relevant, representative, free of errors, and complete for the purpose of the AI system.
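For teams looking for a starting point, the sketch below (using pandas, with a hypothetical dataframe) automates crude proxies for the "free of errors" and "complete" criteria. Relevance and representativeness still require human and domain review, which no script can replace:

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame) -> dict:
    """Mechanical proxies for data completeness and error-freeness.
    Relevance and representativeness need human/domain assessment."""
    return {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
    }

# Hypothetical training data with one missing value and one duplicate row.
df = pd.DataFrame({"age": [25, 31, None, 25], "label": [1, 0, 1, 1]})
print(basic_data_quality_report(df))
# {'rows': 4, 'missing_values': 1, 'duplicate_rows': 0}
```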
Consistency, accuracy, robustness and cybersecurity
The AI system should perform consistently throughout its lifecycle and meet appropriate levels of accuracy, robustness, and cybersecurity.
Record-keeping and technical documentation
Information should be kept and documented about the AI system's general characteristics, capabilities, and limitations; the algorithms, data, and training, testing, and validation processes used; and the risk management systems in place.
Technical robustness
The AI systems need to be resilient against risks connected to limitations of the systems as well as malicious actions against them.
Transparency
To ensure that AI systems do not become too complex or incomprehensible for humans, a certain degree of transparency is required. Users should be able to interpret the output of the AI systems and use it appropriately.
CE marking
The AI systems should bear CE marking to indicate their conformity with the EU AI Act.
Limited risk applications: transparency requirements
The limited risk category covers AI systems that interact with humans (such as chatbots), emotion recognition systems, biometric categorisation systems, and AI systems that generate or transform image, audio, or video data. The requirements on limited risk AI systems focus on transparency, with the intent of making users aware that they are interacting with a machine. This information helps users make informed decisions about how to interpret the output of those systems.
Low or minimal risk applications: no obligations
AI systems that are not considered unacceptable, high, or limited in risk are categorized as low or minimal risk. The EU AI Act does not impose any legal obligations on AI systems in this category.
General purpose AI systems
The EU AI Act also includes provisions governing General purpose AI systems, which will be split into high-impact and low-impact tiers based on their level of systemic risk. High-impact General purpose AI systems will have to follow stricter rules, covering model evaluations, assessment and mitigation of systemic risks, adversarial testing, reporting of serious incidents to the European Commission, and reporting on energy efficiency. In addition, all providers of General purpose AI systems will have to fulfill transparency requirements on technical documentation and provide detailed summaries of the content used for training. The rules for General purpose AI systems are new and not yet finalised, so more detailed guidance is expected from the regulators.
How to stay compliant with the EU AI Act
After the formal adoption of the EU AI Act by the EU Parliament and Council, expected in early 2024, there will be a two-year grace period for compliance. Riskier AI use cases, however, will likely have shorter grace periods: only six months for unacceptable risk systems and 12 months for high risk AI systems.
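As a back-of-the-envelope planning aid, the sketch below turns those grace periods into concrete deadlines. The adoption date is a hypothetical assumption, since the Act had not been formally adopted at the time of writing:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month clamping omitted)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

# Hypothetical adoption date, for illustration only.
adoption = date(2024, 3, 1)

deadlines = {
    "unacceptable risk (ban takes effect)": add_months(adoption, 6),
    "high risk AI systems": add_months(adoption, 12),
    "all remaining rules (general grace period)": add_months(adoption, 24),
}

for scope, deadline in deadlines.items():
    print(f"{scope}: {deadline.isoformat()}")
```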
Companies can already take these actions to prepare for compliance:
If you want more insights on how to stay compliant with the EU AI Act, we've put together an in-depth guide, along with some real-world examples to help you on your way.
Struggling to comply with the EU AI Act?
Talk to us.