The EU AI Act in practice

Wednesday, Feb 14, 2024 · 7 min read
Patrik Liu Tran

Disclaimer: Since the law has not yet been formally adopted, the validity of this content may change over time. This blog post is based on the European Union AI Act as originally proposed in April 2021, together with the amendments made along the way. The contents of this blog post are not to be regarded as legal advice.

TABLE OF CONTENTS

  • 1. The regulatory consequences of the AI hype wave
  • 2. What does the EU AI Act mean for your AI applications?
  • 3. How to stay compliant with the EU AI Act

    The regulatory consequences of the AI hype wave

    AI is booming, and so are its regulations. ChatGPT’s public launch on 30 November 2022 boosted AI interest, investments, and adoption. But it also triggered global regulation efforts. In late 2023, several AI governance initiatives emerged, such as the Hiroshima AI Process by the G7 and the AI Safety Summit at Bletchley Park. This came to a head in December 2023, with the European Union reaching a provisional agreement on the law to govern AI within the union: the European Union AI Act (EU AI Act). 

    The EU AI Act: one of the most stringent AI regulations

    The EU AI Act is considered one of the most stringent AI regulations internationally and follows a risk-based approach: the higher the risk, the stricter the rules. With the Act expected to become EU law in early 2024, companies with AI applications must understand its implications and how to comply with it. Non-compliance with the EU AI Act can result in fines of up to €35 million or 7% of global annual revenue. It is therefore urgent for all businesses applying AI within the EU to be on top of their compliance work.

    AI as defined in the EU AI Act

    ‘AI’ is a term with many different definitions and meanings. Before diving into how the EU AI Act will regulate AI systems, let’s clarify how the Act defines AI. It uses a definition inspired by the OECD’s definition of AI, which reads as follows:

    “‘Artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches [listed in Annex I] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

    What does the EU AI Act mean for your AI applications?

    The EU AI Act classifies AI applications into four risk levels: 1) Unacceptable risk, 2) High risk, 3) Limited risk, and 4) Low and minimal risk. The higher the risk, the stricter the rules. General purpose AI systems will also be included in, and governed by, the law. Let’s go through each of these risk categories in detail.

    Unacceptable risk applications: banned

    The EU AI Act completely bans AI applications that pose an unacceptable level of risk. The following AI applications are deemed to be of unacceptable risk:

  • AI systems that deploy harmful manipulative ‘subliminal techniques’
  • AI systems that exploit specific vulnerable groups (for example, people with physical or mental disabilities)
  • AI systems used by public authorities, or on their behalf, for social scoring purposes
  • 'Real-time' remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases

    Companies that have AI systems in any of these application areas will have to phase them out within six months after the adoption of the law to avoid breaches.

    High risk applications: comprehensive requirements

    AI use cases are regarded as high risk if at least one of the following two conditions is fulfilled:

    1. AI systems used in products that are covered under the EU’s product safety legislation, including toys, aviation, cars, and medical devices.

    or

    2. AI systems falling into any of the following specific areas, which will have to be registered in an EU database:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Assistance in legal interpretation and application of the law
  • Migration, asylum and border control management

    Among the permitted AI systems, high risk AI systems are the most heavily regulated ones, with requirements including:

    Human oversight

    The AI system should be designed so that humans can oversee its functioning, so that it cannot override human control on its own, and so that it remains responsive to the human operator.
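
    The Act describes the oversight outcome rather than a specific implementation. As one illustration, here is a minimal Python sketch of a human-in-the-loop gate, in which low-confidence model outputs are routed to a human operator instead of being applied automatically; the threshold, data, and function names are all hypothetical.

```python
# Hypothetical human-in-the-loop gate: the model proposes a decision,
# but anything below a confidence threshold is escalated to a person.
from dataclasses import dataclass

AUTO_DECISION_THRESHOLD = 0.95  # hypothetical confidence cut-off


@dataclass
class Decision:
    outcome: str
    confidence: float
    decided_by: str  # "model" or "human"


def model_score(application: dict) -> tuple[str, float]:
    """Stand-in for a real model call; returns (outcome, confidence)."""
    return ("approve", 0.82)


def human_review(application: dict, proposed: str, confidence: float) -> Decision:
    """Stand-in for an escalation queue where an operator confirms or overrides."""
    print(f"Escalated: model proposed {proposed!r} at {confidence:.0%} confidence")
    return Decision(outcome=proposed, confidence=confidence, decided_by="human")


def decide(application: dict) -> Decision:
    outcome, confidence = model_score(application)
    if confidence < AUTO_DECISION_THRESHOLD:
        # Low-confidence outputs are never applied automatically; a human
        # operator stays in the loop and can override the model.
        return human_review(application, outcome, confidence)
    return Decision(outcome=outcome, confidence=confidence, decided_by="model")


print(decide({"applicant_id": 123}))
```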

    High quality data

    High quality data is essential for the performance of many AI systems, but also to ensure that the AI systems perform as intended, are safe and don’t discriminate. Training, validation and testing data should be sufficiently relevant, representative, free of errors, and complete for the purpose of the AI system.
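
    As an illustration of what automated checks along these lines might look like, here is a minimal Python sketch using pandas to test a training set for completeness, duplicates, and representativeness. The file name, column names, thresholds, and expected shares are all hypothetical; real expected shares would come from the population the system serves.

```python
# Hypothetical training-data checks: completeness, errors, representativeness.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset

# Completeness: flag columns with more than 1% missing values.
null_rates = df.isna().mean()
incomplete = null_rates[null_rates > 0.01]

# Errors: flag exact duplicate records.
duplicate_rows = int(df.duplicated().sum())

# Representativeness: compare observed group shares against expected ones.
expected_shares = {"18-30": 0.30, "31-50": 0.45, "51+": 0.25}  # hypothetical
observed_shares = df["age_group"].value_counts(normalize=True)
skewed = {
    group: round(observed_shares.get(group, 0.0), 3)
    for group, expected in expected_shares.items()
    if abs(observed_shares.get(group, 0.0) - expected) > 0.05
}

print(f"Columns over 1% missing:\n{incomplete}")
print(f"Exact duplicate rows: {duplicate_rows}")
print(f"Groups deviating >5pp from expected share: {skewed}")
```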

    Consistency, accuracy, robustness and cybersecurity

    The AI system should perform consistently throughout its lifecycle and meet appropriate levels of accuracy, robustness, and cybersecurity.

    Record-keeping and technical documentation

    Information should be kept and documented about general characteristics, capabilities and limitations of the AI system, algorithms, data, training, testing and validation processes used, and risk management systems.
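
    One common way to keep such records machine-readable is a “model card”-style document stored alongside each trained model. The minimal Python sketch below illustrates that practice; the field names and values are illustrative examples, not a format the Act itself prescribes.

```python
# Hypothetical machine-readable record for a trained model, capturing the
# characteristics, data, and processes the documentation duty refers to.
import json
from datetime import datetime, timezone

model_record = {
    "model_name": "credit_risk_scorer",  # hypothetical system
    "version": "1.4.2",
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "intended_purpose": "Pre-screening of consumer credit applications",
    "known_limitations": ["Not validated for applicants under 21"],
    "training_data": {
        "source": "warehouse.loans.applications_2020_2023",
        "rows": 1_240_000,
        "validation_split": 0.2,
    },
    "evaluation": {"auc": 0.87, "test_set": "holdout_2023_q4"},
    "risk_management": {"review_cadence": "quarterly", "owner": "ml-platform-team"},
}

# Persist the record next to the model artifact so it is versioned with it.
with open("model_record_1.4.2.json", "w") as f:
    json.dump(model_record, f, indent=2)
```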

    Technical robustness

    The AI systems need to be resilient against risks connected to limitations of the systems as well as malicious actions against them.

    Transparency

    To ensure that AI systems do not become too complex or incomprehensible for humans, a certain degree of transparency is required. Users should be able to interpret the output of the AI systems and use it appropriately.
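
    Interpretability can be supported in many ways; one widely used technique is reporting which input features drive a model’s predictions. The minimal sketch below uses scikit-learn’s permutation importance on a public toy dataset purely as an illustration; the Act does not mandate any particular method.

```python
# Permutation importance: measure how much shuffling each input feature
# degrades accuracy, as one way to help users interpret model output.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for feature, importance in ranked[:5]:
    print(f"{feature}: {importance:.3f}")
```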

    CE marking

    The AI systems should bear CE marking to indicate their conformity with the EU AI Act.

    Limited risk applications: transparency requirements

    The limited risk category covers AI systems that interact with humans (such as chatbots), emotion recognition systems, biometric categorisation systems, and AI systems that generate and/or transform image, audio and/or video data. The requirements on limited risk AI systems focus on transparency, intending to make users aware that they are interacting with a machine. This information helps users make informed decisions about how to interpret the output of those systems.
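
    In practice, this can be as simple as disclosing the system’s nature before the conversation starts. Below is a minimal Python sketch with purely illustrative wording and function names; the Act requires the disclosure, not any particular phrasing.

```python
# Hypothetical limited-risk transparency measure: tell the user up front
# that they are talking to a machine, before any other content is sent.

AI_DISCLOSURE = (
    "You are chatting with an automated assistant, not a human. "
    "Type 'agent' at any time to reach a person."
)


def start_chat_session(user_name: str) -> list[str]:
    # The disclosure comes first, so the user can make an informed
    # decision about how to treat the system's output.
    return [AI_DISCLOSURE, f"Hi {user_name}, how can I help you today?"]


for message in start_chat_session("Alex"):
    print(message)
```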

    Low or minimal risk applications: no obligations

    AI systems that are not considered unacceptable, high, or limited in risk are categorized as low or minimal risk. The EU AI Act does not impose any legal obligations on AI systems in this category.

    General purpose AI systems

    The EU AI Act also includes provisions governing General purpose AI systems, which will be split into high-impact and low-impact tiers based on their systemic risk level. High-impact General purpose AI systems will have to follow stricter rules on matters such as model evaluations, assessment and mitigation of systemic risks, adversarial testing, reporting of serious incidents to the European Commission, and reporting on energy efficiency. In addition, all providers of General purpose AI systems will have to fulfill transparency requirements on technical documentation and provide detailed summaries of the content used for training. The rules for General purpose AI systems are new and not yet finalized, which means that more information will come from the regulators.

    How to stay compliant with the EU AI Act

    After the formal adoption of the EU AI Act by the EU Parliament and Council, expected in early 2024, there will be a two-year grace period for compliance. Riskier AI use cases will, however, likely have shorter grace periods: only six months for unacceptable risk systems, and 12 months for high risk AI systems.

    Companies can already take these actions to prepare for compliance:

  • Create data visibility: understand where data for AI systems is stored and what it is used for. A data catalog makes it easy to find and understand your data. It also lets you manage data ownership and access.
  • Prioritize data assets: not all data assets are equally important. By prioritizing data assets based on business importance, utilization, and downstream and upstream dependencies, you can focus on improving the most important data assets.
  • Catch and prevent data issues: data for AI use cases needs to be fit for purpose and free of issues. After prioritizing your data assets, you can make sure they meet quality standards with data observability that looks at actual data and can catch both silent and loud data issues (a minimal sketch of such checks follows this list).
  • Simplify data issue resolution: when data issues occur, the right stakeholders need to be notified quickly so they can take action. Column-level lineage across data sources makes it easy to perform root-cause analysis and prioritize resolution according to data asset importance.
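
    To make these actions concrete, here is a minimal Python sketch of the kind of observability checks described above, run against a pandas snapshot of a table. The table name, thresholds, and notify() helper are hypothetical; a production setup would monitor the warehouse directly rather than a local DataFrame.

```python
# Hypothetical observability checks: table freshness and null rates,
# with alerts routed to the owner of the prioritized data asset.
from datetime import datetime, timedelta, timezone

import pandas as pd

MAX_STALENESS = timedelta(hours=24)  # hypothetical freshness SLA
MAX_NULL_RATE = 0.01                 # hypothetical completeness threshold


def notify(owner: str, message: str) -> None:
    """Stand-in for alerting (Slack, PagerDuty, email) to the asset owner."""
    print(f"[ALERT -> {owner}] {message}")


def check_table(df: pd.DataFrame, name: str, owner: str) -> None:
    # Loud issue: the table has stopped receiving fresh data.
    latest = pd.to_datetime(df["updated_at"], utc=True).max()
    if datetime.now(timezone.utc) - latest > MAX_STALENESS:
        notify(owner, f"{name} is stale; last update {latest}")

    # Silent issue: null rates creeping above the agreed threshold.
    for column, rate in df.isna().mean().items():
        if rate > MAX_NULL_RATE:
            notify(owner, f"{name}.{column} null rate {rate:.1%} exceeds {MAX_NULL_RATE:.0%}")


# Example run on a toy table with a stale timestamp and a gappy column.
example = pd.DataFrame({
    "updated_at": ["2024-01-01T00:00:00Z"] * 3,
    "amount": [100.0, None, 250.0],
})
check_table(example, name="warehouse.loans.applications", owner="data-platform-team")
```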

    If you want more insights on how to stay compliant with the EU AI Act, we’ve put together an in-depth guide, along with some real world examples to help you on the way.

    Struggling to comply with the EU AI Act?

    Talk to us.