Table of contents
- What are high-risk AI systems?
- Risk profiles
- Exceptions to classification
- Social and industrial impacts
- Classification rules
- Practical applications and obligations
- Impacts on member states
- Conclusions
- Questions and answers
The Artificial Intelligence Act (AI Act) is a landmark European Union regulation on artificial intelligence, with a particular focus on high-risk systems.
Adopted by the European Parliament and the Council, the regulation sets strict criteria for identifying and managing high-risk AI systems that may affect health, safety, and fundamental rights.
What are high-risk AI systems?
High-risk artificial intelligence systems represent one of the most sensitive and regulated categories within the AI Act. These are systems designed for use in contexts that can significantly affect critical aspects of human life, such as health, safety, fundamental rights, and public trust.
Regulatory definition
According to Article 6 of Regulation (EU) 2024/1689, an AI system is considered high-risk if it meets two fundamental conditions:
- Integration with regulated products
The system must be a safety component of a product, or itself a product, governed by harmonized EU legislation, such as medical devices, vehicles, or industrial equipment.
- Conformity assessment
The product or system must undergo a third-party conformity assessment to verify its safety and compliance with EU requirements.
Additionally, systems listed in Annex III of the regulation are automatically considered high-risk, barring specific exceptions.
Types of systems
Key high-risk artificial intelligence systems include:
- Biometric identification systems
Facial recognition technologies used for real-time or remote identification of individuals, such as in public spaces.
- Emotion analysis tools
Used in contexts such as recruitment or education, these tools can influence decisions based on subjective assessments.
- Critical infrastructure systems
Deployed in areas like air traffic management, electrical grids, or public transportation, where errors could cause large-scale damage.
- AI in healthcare
Examples include tools for automated diagnosis or medical intervention planning.
These examples highlight how such systems can create systemic risks requiring thorough controls to ensure safety.
List of systems in Annex III
High-risk AI systems, as defined in Article 6(2), pertain to applications that can significantly impact rights, safety, and human dignity. Below is a detailed description of key application areas:
Biometrics
AI systems using biometric technologies are high-risk, particularly when intended for remote biometric identification of individuals. This includes tools for recognizing people at a distance through physical traits, such as facial features or fingerprints, often without the subjects’ awareness.
Other high-risk systems include those categorizing individuals based on sensitive inferred characteristics such as ethnicity, gender, or religious orientation, and those detecting emotions through physiological or behavioral signals like facial expressions or voice tone.
Critical infrastructure
These systems ensure the safety and efficiency of vital infrastructure, such as road traffic, water supply, energy, gas, and heating networks. Their use carries high risks, as malfunctions could disrupt essential services, causing severe societal consequences.
Education and professional training
In education, high-risk AI systems include those determining access to educational institutions or evaluating students’ performance.
This also covers tools monitoring behavior during tests to detect violations or influencing educational paths and opportunities based on automated analyses.
Employment and workforce management
These systems are used for making critical workplace decisions, such as hiring, selecting, or evaluating employee performance. They may also monitor workers or decide promotions and contract terminations.
The associated risks include impacts on careers, privacy, and fair treatment.
Access to essential services
AI systems in this domain evaluate eligibility for fundamental public or private services, such as healthcare, social benefits, or financial credit.
Similar tools assess creditworthiness or insurance premiums, decisions that can deeply affect individuals’ lives.
Law enforcement
AI systems used by police or similar authorities include tools for criminal profiling, predicting recidivism, or assessing victimization risks.
Other systems evaluate evidence reliability during investigations or are used for advanced surveillance activities. These applications raise concerns about potential discrimination, privacy violations, or judicial errors.
Migration, asylum, and border control
In this field, AI systems are used to assess risks, such as security or health risks, associated with individuals entering or crossing borders. Other tools assist authorities in examining applications for asylum, visas, or residence permits, influencing acceptance decisions.
Similar technologies may identify individuals at border checkpoints.
Justice administration and democratic processes
In the justice system, AI systems assist judicial authorities in researching and interpreting facts and applying the law. However, their use must be carefully regulated to avoid undue influence or erroneous decisions.
Example
In democratic contexts, AI systems can influence election or referendum outcomes through voter behavior analysis or personalized content, raising concerns about manipulation and transparency.
Risk profiles
High-risk AI systems are evaluated not only by their function but also by their usage context and potential impact:
- Unacceptable risk
Refers to systems that are prohibited outright, such as those designed for subliminal manipulation or indiscriminate surveillance.
- Significant risk
Even systems intended to assist human activities can cause physical, psychological, or social harm if not properly managed.
Exceptions to classification
A system may not be considered high-risk, even if listed in Annex III, if it does not pose significant risks to:
- The health or safety of individuals.
- Fundamental rights, including privacy and personal freedom.
Example
A system designed to support human evaluation without directly influencing decisions may be excluded from classification.
Social and industrial impacts
The classification of high-risk AI systems is not merely a technical measure but has profound implications for society and industry. These systems require balancing innovation with regulation to avoid unacceptable risks while fostering technological progress.
Understanding the definition and characteristics of high-risk AI systems is essential to ensuring responsible and compliant use, minimizing dangers, and maximizing benefits for citizens and businesses.

Classification rules
The regulation defines two main conditions for categorizing a system as “high-risk”:
- It must be a product or component subject to harmonized EU regulations.
- It must require third-party conformity assessments.
Moreover, the regulation includes exceptions: a system listed in Annex III is not high-risk if it does not pose significant risks to health, safety, or fundamental rights. However, any system that performs profiling of individuals is automatically classified as high-risk.
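The classification logic described above can be sketched as a small Python function. This is a simplified illustration only, not a legal test: the `AISystem` fields and conditions are assumptions distilled from the text, and a real assessment involves far more nuance.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical, simplified description of an AI system for illustration."""
    is_safety_component_of_regulated_product: bool  # condition (a): Annex I products
    requires_third_party_conformity_assessment: bool  # condition (b)
    listed_in_annex_iii: bool
    poses_significant_risk: bool  # to health, safety, or fundamental rights
    performs_profiling: bool  # profiling of natural persons

def is_high_risk(system: AISystem) -> bool:
    """Sketch of the Article 6 classification logic described in the text."""
    # Route 1: safety component or product under harmonized EU rules
    # that must undergo a third-party conformity assessment.
    if (system.is_safety_component_of_regulated_product
            and system.requires_third_party_conformity_assessment):
        return True
    # Route 2: listed in Annex III, unless the exception applies.
    if system.listed_in_annex_iii:
        # Profiling of individuals is always classified as high-risk.
        if system.performs_profiling:
            return True
        # Exception: no significant risk to health, safety, or rights.
        return system.poses_significant_risk
    return False

# Example: an Annex III system that profiles individuals stays high-risk
# even when it otherwise poses no significant risk.
screening_tool = AISystem(
    is_safety_component_of_regulated_product=False,
    requires_third_party_conformity_assessment=False,
    listed_in_annex_iii=True,
    poses_significant_risk=False,
    performs_profiling=True,
)
print(is_high_risk(screening_tool))  # True: profiling overrides the exception
```

Note how the profiling check sits before the exception: this mirrors the rule that profiling systems cannot benefit from the Annex III carve-out.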
Practical applications and obligations
Practical applications of high-risk AI systems span multiple sensitive sectors, including:
- Critical infrastructure: where errors could disrupt essential services.
- Healthcare: for example, automated medical diagnosis tools.
- Real-time facial recognition: increasingly used in surveillance.
These systems are subject to strict transparency obligations, including comprehensive documentation of conformity assessments and usage modalities.
Impacts on member states
The AI Act requires each Member State to develop an appropriate regulatory framework. Providers of such systems must demonstrate compliance, or they risk exclusion from the EU market. National authorities play a crucial role in monitoring and verifying compliance.
Conclusions
High-risk AI systems represent both an opportunity and a challenge. The European Parliament, through the AI Act, aims to balance innovation and protection, ensuring that such systems improve people’s lives without compromising their fundamental rights.
Questions and answers
- What is the AI Act?
The AI Act is an EU regulation governing the use of artificial intelligence systems, focusing on high-risk applications.
- What are high-risk AI systems?
Systems impacting health, safety, or fundamental rights, such as biometric or emotion recognition tools.
- Which sectors are involved?
Healthcare, critical infrastructure, public surveillance, and other areas with systemic risks.
- What does “unacceptable risk” mean?
Systems posing risks too high to be permissible, such as AI for subliminal manipulation.
- How are high-risk systems classified?
Based on criteria in Article 6 of Regulation (EU) 2024/1689, including conformity assessments.
- Who verifies system compliance?
Competent authorities in each EU Member State.
- What obligations do providers have?
They must document risk assessments and ensure system compliance.
- What does the regulation say about exceptions?
Some systems listed in Annex III are not high-risk if they cause no significant harm.
- What role do Member States play?
They must implement measures to ensure AI systems meet regulations.
- What is the regulation’s objective?
To ensure safe and transparent use of artificial intelligence in the EU.