Requirements for high-risk AI systems (AI Act)

Section 2 of Regulation (EU) 2024/1689 defines the requirements for managing high-risk artificial intelligence systems, focusing on safety and ethics. The regulation establishes that such systems must comply with its requirements, manage risks continuously, and ensure transparency towards users and deployers. The AI risk management system must be an iterative process that identifies, analyzes, and mitigates risks, considering the impact on fundamental rights and safety.

Table of contents

  • Requirements for high-risk AI systems: a clear framework
  • Record-keeping: traceability and accountability 
  • Human oversight: the role of humans
  • Accuracy, robustness, and cyber security

In previous articles, we discussed what high-risk systems are under the AI Act. In this article, we examine the provisions of Section 2 of the AI Act, Regulation (EU) 2024/1689. 

Requirements for high-risk AI systems: a clear framework

High-risk artificial intelligence systems represent a critical challenge for technological development. Ensuring that these systems operate safely and ethically is at the heart of Regulation (EU) 2024/1689, which defines the essential requirements for their management. These include compliance with AI requirements, risk management, and transparency for users and deployers. 

A comprehensive and iterative AI risk management system 

At the core of the regulation is the AI risk management system, which must be a continuous and iterative process. This approach enables the identification, analysis, and mitigation of known or foreseeable risks, assessing their impact on fundamental rights and safety. This system is not limited to initial design but requires constant updates to address new scenarios. 

The primary goal is to reduce unacceptable risks associated with using such systems. Every measure must consider the technical perspective while also addressing human implications. 

Data and data governance: a solid foundation 

Data is the lifeblood of every artificial intelligence system, especially high-risk AI systems.

The quality and management of data not only determine system performance but also directly impact decision-making processes, safety, and fundamental rights. For this reason, Regulation (EU) 2024/1689 places particular emphasis on data governance, establishing stringent standards to ensure that the data used is accurate, representative, and properly managed. 

Data quality as the foundation of safety 

For high-risk AI systems, the data used for training, validation, and testing must meet high-quality standards. It is essential that the data is: 

  • Representative
    It must adequately reflect the context and populations for which the system is designed, avoiding biases that could negatively affect performance or cause discrimination. 
  • Complete and accurate
    Errors or gaps in the data can lead to incorrect decision-making processes, posing potential risks to health, safety, and individual rights. 
  • Free of bias
    Bias in the data can cause significant harm, for example by amplifying pre-existing discrimination and degrading system performance and behavior. 
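The quality criteria above can be sketched as simple automated checks run before training. This is an illustrative sketch, not something prescribed by the regulation; the dataset, field names, and thresholds are all hypothetical.

```python
# Hypothetical pre-training data checks illustrating the quality criteria:
# completeness/accuracy and representativeness of protected groups.
from collections import Counter

def completeness(records, required_fields):
    """Share of records that contain every required field (non-empty)."""
    ok = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return ok / len(records)

def group_shares(records, group_field):
    """Distribution of a demographic attribute, to spot under-representation."""
    counts = Counter(r[group_field] for r in records if group_field in r)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

# Toy dataset (hypothetical fields).
data = [
    {"age": 34, "outcome": 1, "region": "north"},
    {"age": 51, "outcome": 0, "region": "south"},
    {"age": None, "outcome": 1, "region": "north"},
    {"age": 29, "outcome": 0, "region": "north"},
]

print(completeness(data, ["age", "outcome"]))  # 0.75: one record has a gap
print(group_shares(data, "region"))            # {'north': 0.75, 'south': 0.25}
```

In practice such checks would feed the gap-identification and bias-mitigation processes the regulation calls for, with domain-specific definitions of the protected attributes.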

Data quality is not just a technical matter but a critical element for reducing unacceptable risks associated with AI systems. 

Data governance: structures and processes 

Effective data governance is essential for ensuring data is managed securely, ethically, and in compliance with regulations. According to the EU regulation, governance practices must include: 

  • Traceability
    Data must be documented at every stage, from collection to processing, to ensure transparency and control. 
  • Accurate preparation
    Processes such as annotation, labeling, cleaning, and updating must be carefully executed to avoid errors or biases. 
  • Gap identification
    Any deficiencies in the data must be identified and addressed through targeted interventions to ensure the system functions as expected. 
  • Sensitive data protection
    In some cases, it may be necessary to use special categories of personal data as defined by the GDPR. In such situations, stringent measures must be implemented to ensure security and confidentiality. 
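The traceability requirement above can be illustrated with a minimal data-lineage record: every processing step is appended with a timestamp so the dataset's history can be reconstructed for auditors. This is a sketch under assumed names (the file, step names, and field values are hypothetical), not a prescribed format.

```python
# Hypothetical data-lineage record: each processing step (collection,
# cleaning, labeling, ...) is appended with a timestamp, so the full
# history of a dataset can be reconstructed later.
import hashlib
import json
from datetime import datetime, timezone

class DatasetLineage:
    def __init__(self, source):
        self.steps = []
        self.record("collection", source=source)

    def record(self, step, **details):
        self.steps.append({
            "step": step,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "details": details,
        })

    def fingerprint(self):
        """Stable hash of the lineage (timestamps excluded), usable in documentation."""
        stable = [(s["step"], s["details"]) for s in self.steps]
        payload = json.dumps(stable, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:12]

lineage = DatasetLineage(source="survey_2024.csv")  # hypothetical source file
lineage.record("cleaning", dropped_rows=12, reason="missing labels")
lineage.record("labeling", annotators=3, agreement=0.91)
print(len(lineage.steps), lineage.fingerprint())
```

A real deployment would persist these records in an append-only store so they survive the system's lifecycle and remain available to competent authorities.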

A proactive approach to data-related risks 

Data governance not only prevents problems but also aims to identify and mitigate risks associated with data use.

Example
The regulation requires an assessment of possible biases in data that could negatively impact specific individuals or groups. Additionally, mitigation measures must be designed to minimize these risks. 

Another important aspect is addressing data gaps. When the available data is insufficient or inadequate, the regulation requires a thorough analysis to identify the issues and define strategies to resolve them. 

Ensuring system reliability through data 

The management and quality of data directly influence the performance of high-risk AI systems. A system based on well-governed data can deliver more accurate results, minimizing risks to safety and fundamental rights.

This is particularly crucial when such systems are used in critical contexts like healthcare, transportation, or justice. 

Transparency and technical documentation 

Transparency in data processing is supported by technical documentation, which must be accurate and constantly updated.

This documentation not only demonstrates compliance with regulatory requirements but also provides operators and competent authorities with the necessary information to evaluate the system. Key elements of documentation include: 

  • Data sources
    Information about their origin and the purpose for which they were collected. 
  • Processing methods
    Details on how the data was processed, annotated, and validated. 
  • Control measures
    Descriptions of strategies adopted to reduce risks and biases. 
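The three documentation elements above could be captured in a simple structured record, for example as the data-related section of a machine-readable technical file. The structure and all values below are illustrative assumptions, not a format defined by the regulation.

```python
# Hypothetical skeleton for the data-related section of technical
# documentation: sources, processing methods, and control measures.
tech_doc_data_section = {
    "data_sources": [
        {"origin": "public health registry",      # hypothetical source
         "collected_for": "model training"},
    ],
    "processing_methods": {
        "annotation": "manual labeling by domain experts",
        "validation": "10% hold-out review",
    },
    "control_measures": [
        "stratified sampling to reduce demographic bias",
        "quarterly re-audit of label quality",
    ],
}

print(sorted(tech_doc_data_section))
```

Keeping the documentation structured makes it easier to keep it "constantly updated" as the text requires, since each section can be regenerated from the underlying governance records.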

Record-keeping and transparency 

A fundamental aspect of regulating high-risk AI systems is record-keeping: the preservation of records of system activity, also known as logging. This practice is not just a technical requirement but a cornerstone of risk management and transparency, essential for ensuring accountability and control throughout the system’s lifecycle. 

Record-keeping: traceability and accountability 

AI systems must be designed to automatically log relevant events. These logs must be maintained throughout the system’s lifecycle and include critical information, such as: 

  • Usage data
    System start and stop times. 
  • Key interactions
    Inputs received and outputs generated. 
  • Relevant changes
    Any updates or significant changes in the system’s operation. 

Properly structured logs help identify anomalies or operational issues, such as unacceptable risk situations, and trace errors for correction. Additionally, these records support post-market monitoring, a crucial element for assessing the system’s long-term impact and implementing improvements. 
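The automatic logging described above can be sketched as a small audit-logging helper that emits structured records for start/stop events, key interactions, and system changes. All event names and fields below are illustrative assumptions, not categories fixed by the regulation.

```python
# Minimal sketch of automatic event logging over a system's lifecycle:
# start/stop times, inputs and outputs, and significant changes.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_audit")
logging.basicConfig(level=logging.INFO)

def log_event(event_type, **payload):
    """Emit one structured, timestamped audit record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **payload,
    }
    # In practice these would go to an append-only, tamper-evident store.
    logger.info(json.dumps(record))
    return record

log_event("system_start", version="1.4.2")                            # usage data
log_event("inference", input_id="req-001", output="approved", score=0.87)  # key interaction
log_event("model_update", from_version="1.4.2", to_version="1.5.0")   # relevant change
log_event("system_stop")
```

Structured, timestamped records like these are what make anomaly detection and post-market monitoring feasible, since they can be queried and correlated long after the original events.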

Why transparency matters 

Transparency towards deployers is another key principle for high-risk AI systems. Deployers must thoroughly understand the system’s operation, its limitations, and the conditions under which it may not function correctly. This is achievable only if the system is accompanied by: 

  • Detailed and accessible instructions
    Written in clear, understandable language, specifying the system’s purposes, capabilities, and limitations. 
  • Performance information
    Including levels of accuracy, robustness, and cyber security, with specific metrics demonstrating system reliability. 
  • Guidance for deployers
    Instructions on interpreting the results produced by the system, enabling their appropriate use in decision-making processes. 

Who is a deployer? Article 3 of the AI Act defines a deployer as “a natural or legal person, public authority, agency, or other body that uses an AI system under their authority, except where the system is used in the course of a personal, non-professional activity.” 

Human oversight: the role of humans

Human oversight is essential to ensure systems can be used safely. People must be able to correctly interpret the results generated by AI and, if necessary, intervene to override decisions or shut down the system. 

This oversight aims to prevent critical errors and protect individual safety and rights. 

Accuracy, robustness, and cyber security

Finally, high-risk AI systems must ensure accuracy, robustness, and cyber security. This involves designing solutions resilient to errors, failures, and external attacks, such as the manipulation of training data or the introduction of malicious inputs. 

Only in this way can operational risks be minimized and consistent functionality over time be guaranteed. 

Conclusion

Meeting the requirements for high-risk AI systems is not just a regulatory obligation but a crucial step in ensuring technology can be used safely and ethically.

The combination of risk management, transparency, and human oversight ensures these systems have a positive impact on society while minimizing risks to fundamental rights. 


Questions and answers

  1. What are high-risk AI systems?
    They are AI systems that can impact fundamental rights or security.
  2. What are the main requirements for such systems?
    Compliance, risk management, data governance, and transparency.
  3. What is an AI risk management system?
    It is an iterative process to identify and mitigate associated risks.
  4. What is the role of technical documentation?
    To demonstrate compliance with requirements and support audits.
  5. Why is transparency towards deployers important?
    To enable informed and safe use of AI systems.
  6. What does record retention mean?
    Ensuring traceability and continuous monitoring of the system.
  7. What role does human oversight play?
    To prevent risks and intervene in case of malfunctions.
  8. What is meant by accuracy and robustness?
    The system’s ability to operate consistently and securely.
  9. What is the impact of data on AI?
    Data directly influences the quality and reliability of the system.
  10. How are cyber security attacks prevented?
    With technical measures like data control and protection against malicious inputs.