High-risk applications: comprehensive requirements
AI use cases are regarded as high-risk if at least one of the following two conditions is fulfilled:
1. The AI system is used in a product covered by the EU’s product safety legislation, such as toys, aviation, cars, and medical devices;
or
2. The AI system falls into one of the following specific areas and will have to be registered in an EU database:
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Assistance in legal interpretation and application of the law
- Migration, asylum and border control management
Among the AI systems that remain permitted, high-risk AI systems are the most heavily regulated, with requirements including:
Human oversight
The AI system should allow people to oversee its functioning, guarantee that human control cannot be overridden by the system itself, and remain responsive to the human operator.
High quality data
High-quality data is essential not only for the performance of many AI systems but also to ensure that they perform as intended, are safe and do not discriminate. Training, validation and testing data should be sufficiently relevant, representative, free of errors, and complete for the purpose of the AI system.
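Purely as an illustration of how a provider might operationalize part of this requirement (the Act does not prescribe any particular tooling), the sketch below runs simple completeness, duplication and representativeness checks on a tabular dataset. It assumes a pandas DataFrame with a hypothetical label column "outcome" and group column "gender"; all names are placeholders.

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame, label_col: str, group_col: str) -> dict:
    """Collect simple indicators of completeness, duplication and representativeness.

    Illustrative sketch only; running these checks does not by itself
    satisfy the EU AI Act's data governance requirements.
    """
    return {
        # Completeness: share of missing values per column
        "missing_share": df.isna().mean().to_dict(),
        # Freedom from errors (here: exact duplicate records only)
        "duplicate_rows": int(df.duplicated().sum()),
        # Representativeness proxy: label distribution within each group
        "label_distribution_by_group": (
            df.groupby(group_col)[label_col].value_counts(normalize=True).to_dict()
        ),
    }

# Hypothetical usage; "outcome" and "gender" are placeholder column names.
# report = basic_data_quality_report(df, label_col="outcome", group_col="gender")
```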
Consistency, accuracy, robustness and cybersecurity
The AI system should perform consistently throughout its lifecycle and meet appropriate levels of accuracy, robustness and cybersecurity.
Record-keeping and technical documentation
Information should be kept and documented about the general characteristics, capabilities and limitations of the AI system; the algorithms, data, and training, testing and validation processes used; and the risk management systems.
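As a minimal sketch of how such documentation might be captured in practice (the Act does not mandate any particular format), the snippet below writes basic system and training metadata to a JSON record; every field name and value here is a hypothetical example, not a required schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical technical-documentation record; fields are illustrative,
# not the structure required by the EU AI Act.
record = {
    "system_name": "example-credit-scoring-model",      # placeholder name
    "intended_purpose": "Support (not replace) human credit decisions",
    "known_limitations": ["Not validated for applicants under 18"],
    "training_data_version": "dataset-v1.2",            # placeholder identifier
    "validation_metrics": {"accuracy": 0.91},           # example figures only
    "risk_management_review": "2024-Q3 internal review",
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

with open("technical_documentation.json", "w", encoding="utf-8") as f:
    json.dump(record, f, indent=2)
```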
Technical robustness
The AI system needs to be resilient against risks connected to the limitations of the system itself as well as against malicious actions directed at it.
Transparency
To ensure that AI systems do not become too complex or incomprehensible to humans, a certain degree of transparency is required. Users should be able to interpret the output of the AI system and use it appropriately.
CE marking
The AI system should bear the CE marking to indicate its conformity with the EU AI Act.