Table of contents
- AI as a tool for a more efficient administration
- Restrictions: what AI cannot do in public administration
- Compliance challenges and the risk of private sector dependency
AgID (the Agency for Digital Italy) has released new guidelines on Artificial Intelligence (AI) in Public Administration, outlining obligations, limits, and possibilities for its use.
The 119-page document builds on the EU AI Act and the GDPR and is open for public consultation until March 20. It aims to balance efficiency with citizens' rights, but many questions remain open.
AI as a tool for a more efficient administration
AI is presented as a major opportunity to improve public administration efficiency. The key applications include:
- Automation of repetitive processes to optimize resources and time;
- Predictive analytics based on data to support strategic decision-making;
- Better document management and optimization of public resource distribution;
- Personalized services for citizens, increasing accessibility and transparency.
These opportunities come with corresponding duties. Public administrations must comply with privacy and cybersecurity rules, deploy AI systems that are transparent and traceable, and monitor them continuously to prevent discrimination and security vulnerabilities.
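In practice, traceability is largely a logging discipline. The minimal Python sketch below is illustrative only (the function names and log fields are assumptions, not taken from the AgID guidelines): it shows one way a decision step could record the model version, the input it saw, and the output it produced, so that the decision can be audited later.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("pa-ai-audit")

def traceable_decision(model_id: str, model_version: str, predict, payload: dict) -> dict:
    """Run a prediction and record model, input, and output for later review."""
    result = predict(payload)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input": payload,   # in production, log a reference, not raw personal data (GDPR)
        "output": result,
    }))
    return result

# Placeholder "model": a simple rule that flags incomplete applications for manual review.
def toy_model(payload: dict) -> dict:
    return {"needs_review": payload.get("missing_documents", 0) > 0}

traceable_decision("benefit-triage", "0.1", toy_model, {"case_id": "A-123", "missing_documents": 2})
```

The specific fields matter less than the principle: every automated output can be tied back to a model version and an input, which is what makes human review and external oversight possible.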
Restrictions: what AI cannot do in public administration
The guidelines set strict limits. Social scoring is prohibited: AI systems may not evaluate citizens on the basis of their behavior or personal traits. In addition:
- No exploitation of vulnerable individuals, such as minors or people with disabilities;
- Ban on real-time biometric recognition, except for national security;
- Mandatory human supervision of critical decisions affecting citizens (see the sketch after this list);
- Strict data protection requirements, in compliance with the GDPR.
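The human-supervision requirement can be read as an architectural constraint: the system proposes, a person decides. Below is a minimal, hypothetical Python sketch of such a gate (none of the names come from the guidelines or any real system): the AI output is only a recommendation, and nothing takes effect without a recorded human approval.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:          # what the AI system is allowed to produce
    case_id: str
    proposed_action: str
    rationale: str

@dataclass
class Decision:                # what actually affects the citizen
    case_id: str
    action: str
    approved_by: str           # the human reviewer is always recorded

def apply_with_human_review(rec: Recommendation, reviewer: str, approved: bool) -> Optional[Decision]:
    """Only an explicit human approval turns a recommendation into a decision."""
    if not approved:
        return None            # rejected recommendations go back to manual handling
    return Decision(case_id=rec.case_id, action=rec.proposed_action, approved_by=reviewer)

rec = Recommendation("A-123", "request missing documents", "2 required attachments are absent")
decision = apply_with_human_review(rec, reviewer="officer.rossi", approved=True)
```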
The AI Act aims to prevent authoritarian misuse and ensure ethical AI applications in the public sector.
Compliance challenges and the risk of private sector dependency
Public administrations must adhere to strict technical standards, ensuring AI systems are reliable, secure, and unbiased. However, a major concern is dependence on private companies.
While administrations can develop some AI tools in-house, there is a significant risk of depending on external providers that own the underlying technology.
To implement the guidelines effectively, governments must invest in internal training and expertise rather than relying entirely on private companies for AI development.