
AI Act: European Regulation on Artificial Intelligence



Table of contents

  • What is artificial intelligence?
  • What is the AI Act and what is its purpose?
  • The scope of the AI Act
  • European governance for artificial intelligence
  • Implications for companies and users
  • Obligations for companies developing or deploying AI

Artificial Intelligence (AI) is transforming the way we interact with the digital world. From advanced analytical models to applications in medical devices, its use continues to expand.

However, this expansion brings not only benefits but also legal, ethical, and social challenges. To address these issues, the European Union has introduced an innovative regulation: the AI Act (Artificial Intelligence Act), published in the EU Official Journal, which aims to create a solid and inclusive regulatory framework for AI. 

This article provides a brief introduction to the concept of artificial intelligence, focusing on the recent regulation introduced by the European Union: the AI Act. 

What is Artificial Intelligence?

Artificial Intelligence (“AI”) is redefining our interactions with both the digital and real world. Could you confidently say that this article was written by a human and not by a chatbot following instructions from an alleged author? 

AI is increasingly used in both the public and private sectors. Some believe that this technology could lead to a loss of human control over machines, while others see it as an opportunity to solve many previously unsolvable challenges. 

First, it is important to define the phenomenon: the Oxford Dictionary defines AI as “the ability of computers or other machines to exhibit or simulate intelligent behavior,” referring at the same time to “software used to perform tasks or produce outputs that were previously thought to require human intelligence, particularly through machine learning techniques to extract insights from large datasets.” The term “artificial intelligence” has been in use since the 1950s, long before hardware existed that was powerful enough to process data at today’s scale.

Beyond the numerous ethical dilemmas raised by the increasing use of artificial intelligence, AI also presents significant legal challenges, such as: 

  • Protection of personal data;
  • Safeguarding intellectual and industrial property; 
  • Accountability and liability issues (involving multiple actors such as the producer, programmer, user, or owner of the AI);
  • Interference with privacy rights; 
  • Non-discrimination and access to justice; 
  • Social security and welfare rights, and the right to good administration; 
  • Consumer protection.

What is the AI Act and what is its purpose?

The EU AI Act, published in the Official Journal of the EU on July 12, 2024, is formally known as Regulation (EU) 2024/1689 of the European Parliament and of the Council.

Article 3 defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” 

This definition aligns with the one previously adopted by the OECD. The regulation does not apply to areas outside the scope of EU law, nor to AI systems intended for military, defense, or national security purposes, or exclusively for research and innovation. 

The application timeline is staggered, with obligations taking effect between 6 and 36 months after entry into force; the prohibitions on the riskiest practices apply first. This phased approach was designed to allow a gradual and effective application of the new rules. 
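For readers tracking deadlines, the staggered application can be laid out explicitly. The following is a minimal sketch in Python using the milestone dates stated in the Act; the scope of each milestone is simplified for illustration:

```python
from datetime import date

# The Act entered into force on August 1, 2024 (20 days after publication).
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Application milestones as stated in the Act (scope simplified here).
MILESTONES = [
    (date(2025, 2, 2), "6 months: prohibitions on unacceptable-risk practices"),
    (date(2025, 8, 2), "12 months: general-purpose AI model obligations"),
    (date(2026, 8, 2), "24 months: general applicability of most provisions"),
    (date(2027, 8, 2), "36 months: remaining high-risk system obligations"),
]

for when, label in MILESTONES:
    days_after = (when - ENTRY_INTO_FORCE).days
    print(f"{when.isoformat()} ({days_after} days in): {label}")
```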

The legislation is a key step in regulating Artificial Intelligence systems by classifying them according to their risk levels:

  • Unacceptable risk
    Practices that are strictly prohibited, such as manipulative AI techniques, social scoring, emotion recognition in workplaces and schools, and certain predictive policing technologies. 
  • High-risk AI systems
    Systems such as AI-powered medical devices, which are subject to stringent transparency, data quality, and cyber security requirements. 
  • Low-risk and general-purpose AI
    Providers must inform users that they are interacting with an AI system. 
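As a purely illustrative sketch (not legal text), this tiered structure can be pictured as a lookup from risk category to high-level duties. The tier names and duty descriptions below paraphrase this article; minimal risk is included as the implicit residual category:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers; names paraphrase the Act, not quote it."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"   # transparency duties only
    MINIMAL = "minimal"   # no specific obligations

# Rough mapping from tier to the obligations named in this article.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "risk management and fundamental-rights impact assessment",
        "dataset quality and documentation requirements",
        "transparency and cyber security measures",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the high-level duties associated with a risk tier."""
    return OBLIGATIONS[tier]

for duty in obligations_for(RiskTier.HIGH):
    print("-", duty)
```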

The principle behind the regulation is that AI must be developed and used safely, ethically, and in accordance with fundamental rights and European values. AI systems are classified based on their potential risks to consumer safety, workers’ rights, and civil liberties. Different obligations are set for AI providers, users, and other actors involved in the AI value chain. 

High-risk AI systems, while permitted, must comply with strict obligations before being placed on the market or used within the EU. These obligations include risk management and impact assessment on fundamental rights. Requirements include the quality of datasets used, transparency, and appropriate cyber security measures. 

For low-risk AI systems, transparency obligations apply, such as informing users when they are interacting with an AI system or consuming AI-generated content, ensuring informed and conscious use. 
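As a toy illustration of that duty, a chatbot front end might prepend a disclosure to every reply. The function name and wording below are invented; the regulation prescribes the obligation, not this implementation:

```python
def with_ai_disclosure(reply: str) -> str:
    """Prepend a plain-language notice so users know they are
    interacting with an AI system (wording is illustrative)."""
    return "Notice: you are interacting with an AI system.\n\n" + reply

print(with_ai_disclosure("Here is the summary you asked for..."))
```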

Each EU member state will be required to establish independent national authorities responsible for enforcing the AI Act, including issuing sanctions for violations. These sanctions, determined at the national level, must be effective, proportionate, and dissuasive. 

The scope of the AI Act

The regulation applies to all AI systems placed on the market or used within the EU, regardless of where the provider is established, with some exceptions, such as systems designed exclusively for military purposes. As noted above, enforcement rests with the national authorities that each member state must designate. 


European governance for artificial intelligence

The European Commission has established the European AI Office, responsible for coordinating AI governance across the Union. This office promotes standardized testing and security practices for high-risk AI systems to ensure compliance with European values.

Additionally, initiatives like GenAI4EU support businesses, particularly SMEs, in developing ethical and responsible AI technologies. 

Implications for companies and users 

The introduction of the AI Act represents a turning point for companies and users, imposing specific obligations to ensure ethical and secure use of artificial intelligence. The implications vary depending on the role of the parties involved (for example, AI model providers, developers, integrators, and end-users) and the type of AI system employed. 

Obligations for companies developing or deploying AI

Classification and compliance 

Companies developing or distributing AI must first determine their system’s risk level.

  • High-risk systems must undergo strict pre-market checks, including fundamental rights impact assessments and compliance with cyber security standards. 
  • Lower-risk systems, while subject to fewer restrictions, must still meet minimum transparency requirements, such as informing users they are interacting with AI. 

Transparency and traceability 

Companies must ensure AI models are transparent in their design and operation. This includes detailed documentation of training processes, datasets used, and algorithms deployed. Providers must also ensure traceability to facilitate audits and verifications by national authorities.
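A minimal sketch of what such a traceability record could look like in code; the ModelRecord type, its field names, and the example values are hypothetical, not mandated by the regulation:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """Hypothetical documentation record supporting audits by national
    authorities; field names are illustrative."""
    system_name: str
    provider: str
    risk_tier: str
    training_datasets: list[str]
    algorithms: list[str]
    last_assessment: date
    audit_log: list[str] = field(default_factory=list)

    def log(self, event: str) -> None:
        """Append a dated event so later audits can reconstruct history."""
        self.audit_log.append(f"{date.today().isoformat()}: {event}")

record = ModelRecord(
    system_name="triage-assistant",      # invented example
    provider="ExampleMed",
    risk_tier="high",
    training_datasets=["clinical-notes-v3"],
    algorithms=["gradient-boosted trees"],
    last_assessment=date(2025, 1, 15),
)
record.log("fundamental-rights impact assessment completed")
```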

Risk management and impact mitigation 

For high-risk AI systems, companies must develop risk management strategies that include: 

  • Identifying potential negative impacts on fundamental rights;
  • Implementing mitigation measures, such as using non-discriminatory datasets;
  • Ensuring system security against manipulation or breaches.
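One hedged way to operationalize those three steps is a simple risk register that ties each identified impact to its mitigation measures. Everything below (the RiskItem structure and the example entries) is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class RiskItem:
    """One entry in a hypothetical risk register."""
    impact: str                  # potential negative impact
    affected_right: str          # fundamental right at stake
    mitigations: list[str] = field(default_factory=list)

    @property
    def unmitigated(self) -> bool:
        return not self.mitigations

register = [
    RiskItem("biased outcomes for a protected group", "non-discrimination",
             ["retrain on an audited, balanced dataset"]),
    RiskItem("model tampering via poisoned inputs", "safety and security"),
]

# Surface risks that still lack a mitigation measure.
for item in register:
    if item.unmitigated:
        print("OPEN RISK:", item.impact)
```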

Penalties for non-compliance 

If companies violate the provisions of the EU AI Act, they may be subject to heavy fines. The fines vary depending on the severity of the violation and, for prohibited practices, can reach EUR 35 million or 7 percent of the company’s worldwide annual turnover, whichever is higher; lower caps apply to less serious infringements. This makes it crucial to adopt a preventive compliance strategy.
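For illustration, the “whichever is higher” cap is easy to compute; the turnover figure in this sketch is invented, and the 7 percent / EUR 35 million tier shown is the one for prohibited practices:

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """AI Act caps take 'whichever is higher' of a fixed amount and a
    percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# A company with EUR 2 billion turnover, prohibited-practice tier:
print(fine_cap(2_000_000_000, 35_000_000, 0.07))  # 140000000.0 (EUR 140M)
```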

Impacts on end-users 

  • Increased awareness and control 
    End-users will benefit from greater transparency.

Example
They will be informed when interacting with AI-generated content, allowing for more conscious technology use and reducing risks of manipulation or deception. 

  • Protection of fundamental rights 
    The regulation protects users from the risks associated with manipulative or discriminatory AI systems.

Example
AI designed for emotional recognition cannot be used in ways that infringe on privacy or individual rights. 

  • Greater security and reliability 
    The strict requirements for high-risk AI systems ensure that these technologies are developed and used safely, reducing risks of malfunctions or misuse—especially in critical sectors like healthcare and mobility. 

Conclusion 

The AI Act, officially published in the EU Official Journal, marks a milestone in AI regulation. It aims to balance innovation with the protection of fundamental rights, creating a technological ecosystem that is safe, ethical, and consistent with European values. 

As the full implementation deadline approaches, all organizations must start preparing to comply with this new regulatory framework. 


Questions and answers

  1. What is the AI Act?
    The AI Act is European legislation that regulates artificial intelligence systems based on the level of risk.
  2. When will the AI Act go into effect?
    The AI Act entered into force on August 1, 2024. Its prohibitions apply from February 2, 2025, most provisions from August 2, 2026, and some high-risk obligations from August 2, 2027.
  3. What systems are considered high risk?
    Medical devices and AI applications that affect safety or fundamental rights.
  4. Does the AI Act prohibit the use of AI?
    It only prohibits unacceptable risk practices, such as social scoring and predictive policing.
  5. What obligations do AI model providers have?
    They must ensure transparency, data quality and cyber security.
  6. What are national competent authorities?
    Entities designated to oversee the implementation of the AI Act in each member state.
  7. To whom does the AI Act apply?
    To all entities developing or using AI systems in the EU, including non-EU vendors.
  8. What practices does the AI Act prohibit?
    Manipulative systems, social scoring, and abusive emotion recognition technologies.
  9. How are AI systems classified?
    Based on risk levels: unacceptable, high, limited, and minimal, with separate rules for general-purpose AI models.
  10. Does the AI Act promote innovation?
    Yes, through initiatives such as GenAI4EU, which support startups and SMEs.