AI Act: the EU tightens its grip on prohibited AI practices 

The European Commission’s Guidelines define the boundaries of Artificial Intelligence

Table of contents

  • The EU strengthens AI regulation 
  • The AI Act and its risk-based approach 
  • Prohibited practices: the EU’s new guidelines 
  • Heavy fines for non-compliance 
  • AI Act’s global impact 
  • Towards safe and responsible AI 

The EU strengthens AI regulation 

With the release of its Guidelines on prohibited practices, the European Commission takes another step toward defining the ethical and safe use of Artificial Intelligence.

One year after the approval of the AI Act, the regulation governing AI in the EU, these new, detailed guidelines aim to prevent abuse and protect fundamental rights.

The Commission has published the Guidelines on prohibited AI practices, as defined by the AI Act. These guidelines provide an overview of AI practices deemed unacceptable due to their potential risks to European values and fundamental rights. 

The AI Act and its risk-based approach 

The AI Act (AIA), officially known as Regulation (EU) 2024/1689, is the world’s first attempt to regulate Artificial Intelligence comprehensively. Built on a risk-based approach, it classifies AI systems into four risk levels:

  • Unacceptable
    AI that threatens human rights, such as social scoring and mass biometric surveillance.
  • High
    AI with a significant impact on people’s health, safety, or fundamental rights, such as healthcare software or hiring algorithms.
  • Transparency-specific
    AI tools like chatbots and voice assistants, which must clearly disclose to users that they are interacting with an AI system or viewing AI-generated content.
  • Minimal
    AI used in video games or spam filters, deemed low risk.

The AI Act aims to promote innovation while ensuring a high level of protection for health, safety, and fundamental rights.

Prohibited practices: the EU’s new guidelines 

The European Commission’s Guidelines focus on banned AI practices, which pose an unacceptable risk to society. These technologies are considered threats to human dignity, privacy, and fundamental rights, potentially leading to discrimination and social distortions. The prohibited technologies include:

  • Harmful manipulation
    Covert persuasion techniques aimed at influencing users’ decisions beyond their conscious awareness. Examples include predictive profiling algorithms that steer behavior through targeted ads or personalized suggestions.
  • Exploitation of vulnerabilities
    AI systems designed to take advantage of individuals in vulnerable situations, such as children, the elderly, or people with disabilities, to achieve economic or political gains. 
  • Social scoring AI
    Systems that classify citizens based on social behavior or personal traits, potentially impacting rights and opportunities. 
  • Criminal risk prediction
    AI technologies analyzing personal data to predict an individual’s likelihood of committing a crime, often without solid legal grounds, increasing the risk of discrimination. 
  • Mass data scraping for facial recognition
    Unauthorized collection of biometric images from the internet or other sources to build facial recognition databases without individuals’ consent. 
  • Emotion recognition
    AI designed to interpret human emotions through facial expressions, voice tones, or other parameters, with potential intrusive applications in hiring, education, and surveillance. 
  • Real-time remote biometric identification
    Technologies enabling mass surveillance and public space monitoring through biometric analysis, raising concerns over privacy and individual freedoms. 

The Guidelines are designed to ensure the consistent, effective, and uniform application of the AI Act throughout the European Union. However, while they offer valuable interpretations, they are not binding, leaving the authoritative interpretation of the prohibitions to the Court of Justice of the European Union (CJEU).

They also provide legal explanations and practical examples to help stakeholders understand and comply with the requirements of the AI Act. The initiative underscores the EU’s commitment to promoting a safe and ethical AI landscape.

Note that although the Commission has approved the draft guidelines, it has not yet formally adopted them.

Heavy fines for non-compliance 

To ensure enforcement, the EU has set up a monitoring system through the European Artificial Intelligence Office. Companies that fail to comply with the rules face heavy penalties:

  • Up to €35 million or 7% of annual global turnover, whichever is higher, for using prohibited AI (illustrated in the sketch below);
  • Up to €15 million or 3% of annual global turnover for other regulatory violations;
  • Up to €7.5 million or 1.5% of annual global turnover for providing false or incomplete information.

SMEs and start-ups are subject to proportionate penalties to avoid placing excessive burdens on innovation.
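
For a rough sense of how these caps scale with company size, here is a minimal sketch in Python. It assumes that, for a company, the applicable ceiling is the fixed amount or the percentage of annual global turnover, whichever is higher, as summarized in the list above; the tier names, the max_fine helper, and the €2 billion turnover figure are purely illustrative.

    # Illustrative only: computes the maximum possible fine for a company
    # under the three penalty tiers summarized above, assuming the higher of
    # the fixed cap and the turnover-based cap applies. All figures in euros.

    def max_fine(annual_global_turnover: float, tier: str) -> float:
        """Return the maximum fine a company could face under a penalty tier."""
        tiers = {
            "prohibited_practice": (35_000_000, 0.07),   # up to €35M or 7%
            "other_violation":     (15_000_000, 0.03),   # up to €15M or 3%
            "false_information":   (7_500_000, 0.015),   # up to €7.5M or 1.5%
        }
        fixed_cap, pct_of_turnover = tiers[tier]
        return max(fixed_cap, pct_of_turnover * annual_global_turnover)

    # Hypothetical company with €2 billion in annual global turnover:
    print(f"€{max_fine(2_000_000_000, 'prohibited_practice'):,.0f}")  # €140,000,000

In this hypothetical, 7% of €2 billion (€140 million) exceeds the €35 million fixed cap, so the higher figure sets the ceiling.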

AI Act’s global impact 

The EU model is influencing other regions. In the U.S., the Executive Order on AI introduced similar guidelines without binding rules.

Meanwhile, China has implemented strict regulations on generative AI and social scoring, emphasizing government control. 

Towards safe and responsible AI 

The EU aims to build an AI ecosystem that is transparent, safe, and aligned with human rights. As the AI Act takes full effect, new updates may refine the regulatory landscape to adapt to technological advancements. 
