
Tech Deep Dive

Shadow AI: risks and opportunities

Discover what shadow AI is, the risks it poses to corporate security, and how to manage the unauthorized use of artificial intelligence models.


Table of contents

  • What is Shadow AI
  • Concrete examples of shadow AI
  • Why Shadow AI is a cyber security risk
  • Real case: a generative model and a data leak
  • How to identify Shadow AI
  • Strategies to manage Shadow AI
  • Reference frameworks and regulations
  • Mitigation tools: the role of IBM and Splunk
  • The future of Shadow AI

Shadow AI is the new hidden face of artificial intelligence in companies: models, chatbots, and generative tools used without authorization that escape the control of IT departments. It is a growing, often invisible phenomenon that can put sensitive data, reputation, and compliance at risk. Let’s explore what shadow AI really means, why it represents a threat to cyber security, and which concrete strategies can be adopted to identify and manage it responsibly.

What is Shadow AI

Shadow AI (shadow artificial intelligence) is the direct evolution of a phenomenon already well known in the IT world: shadow IT, meaning the use of technologies, applications, and services not authorized by corporate IT departments. With the advent of generative artificial intelligence, however, the landscape has changed radically.

Today, any employee with internet access can interact with AI models such as ChatGPT, Claude, Gemini, Copilot, or Perplexity. In many cases, these tools are used for perfectly legitimate activities: writing texts, translating, summarizing documents, creating reports, or analyzing data. However, when this use occurs without a corporate policy or outside controlled systems, a dangerous gray area emerges: shadow AI.

According to an IBM 2024 study on AI Governance, over 75% of organizations report that at least one employee has used an external generative model to process company data, often without authorization or tracking. This means that millions of conversations, prompts, and sensitive documents are potentially being sent to external servers, outside the company’s jurisdiction.

Concrete examples of shadow AI

Imagine a marketing employee who, to speed up drafting a commercial proposal, copies and pastes a contract draft containing sensitive customer and supplier data into the prompt of an external chatbot. Or a financial analyst who uses an internal open-source model, not updated for months, to generate economic forecasts based on proprietary datasets.

In both cases, company data leave controlled boundaries, creating risks of data leakage or unintended exposure of confidential information. This is not about malicious intent, but about a lack of awareness and governance.

A 2025 Splunk report on the “AI Visibility Gap” confirms that 62% of companies still do not have specific monitoring tools to detect unauthorized internal or external AI models. In other words, shadow AI is not only widespread, but also difficult to see.

Why Shadow AI is a cyber security risk

The implications of shadow AI are numerous and range from data theft to the distortion of decision-making processes. Let’s look at some of them.

1. Loss of data control

Sending sensitive data to external platforms can violate internal policies and, in many cases, the General Data Protection Regulation (GDPR). Information shared with a generative model may be stored, analyzed, or reused to train other models, even if in anonymized form.

2. Outdated or insecure models

Many internal models independently installed by departments or developers may rely on obsolete versions of AI frameworks, vulnerable to exploits or prompt injection attacks. These risks increase when systems are not subject to regular updates or security testing.

3. Weak governance

In the absence of clear corporate policies, every employee can become an access point to shadow AI. This undermines the integrity of the entire IT infrastructure, introducing blind spots that traditional cyber security tools fail to detect.

4. Data leakage and industrial espionage

The combination of improper use of generative models and lack of traceability can lead to leaks of confidential data, strategic plans, source code, and technical documentation. In some cases, such information is intercepted or sold on the dark web.

Real case: a generative model and a data leak

In 2024, a major European financial institution was involved in an emblematic incident. An analyst tasked with drafting an internal report used an open-source language model installed on an internal server not managed by the IT department. To improve the accuracy of the output, the analyst provided input documents containing customer data and internal audit reports.

A malfunction of the model caused the temporary exposure of input and output logs, accessible via the local network. An external user, connected through a corporate VPN, managed to intercept part of the files. The incident led to a notification to the Data Protection Authority and a complete review of internal AI policies.

This example shows how shadow AI can emerge completely unintentionally, generating real consequences even in apparently secure environments.

How to identify Shadow AI

Detecting the presence of unauthorized models in a corporate environment is not easy, but it is possible by combining network scanning tools, log analysis, and machine learning to identify suspicious endpoints.

The first step is to create an inventory of AI models in use: who uses them, with what data, where they are installed, and how they communicate externally.

The second step is monitoring APIs and network traffic: many AI models expose recognizable endpoint patterns (such as /v1/completions or /chat/completions).

Here is a simple example of a Python script that can be used to search for unauthorized AI model endpoints within a corporate network:

import socket

import requests

# Paths commonly exposed by AI inference servers (OpenAI-compatible APIs
# and the generic "generate"/"predict" routes used by model-serving tools)
POSSIBLE_AI_ENDPOINTS = [
    "v1/completions",
    "v1/chat/completions",
    "generate",
    "predict",
]

def scan_network(hosts):
    """Probe each host for well-known AI endpoint paths."""
    for host in hosts:
        try:
            ip = socket.gethostbyname(host)
        except socket.gaierror:
            continue  # hostname could not be resolved
        for endpoint in POSSIBLE_AI_ENDPOINTS:
            url = f"http://{ip}/{endpoint}"
            try:
                response = requests.get(url, timeout=2)
            except requests.RequestException:
                continue  # host unreachable or connection refused
            # Inference APIs usually expect POST, so a 405 (Method Not
            # Allowed) in response to a GET is as telling as a 200
            if response.status_code in (200, 405):
                print(f"[!] Potential AI model found at {url}")

# Example usage
hosts = ["192.168.1.10", "192.168.1.15", "192.168.1.22"]
scan_network(hosts)

This simple snippet does not replace a SIEM or IDS system, but it can help identify suspicious endpoints or servers hosting unregistered models. In more complex infrastructures, scanning can be automated and integrated into threat intelligence workflows.
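As a rough illustration of that automation, the scan could run on a schedule and write its findings in a machine-readable form for later ingestion. The interval, file path, and JSON-lines format below are arbitrary assumptions, not a prescribed integration:

import json
import time
from datetime import datetime, timezone

SCAN_INTERVAL_SECONDS = 3600                   # arbitrary example interval
FINDINGS_FILE = "ai_endpoint_findings.jsonl"   # arbitrary example path

def record_findings(urls):
    # Append findings as JSON lines, a format most log pipelines can ingest
    with open(FINDINGS_FILE, "a") as f:
        for url in urls:
            f.write(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "url": url,
                "source": "shadow-ai-scan",
            }) + "\n")

def run_periodic_scans(scan, hosts):
    # `scan` is expected to return a list of suspicious URLs, e.g. a
    # variant of scan_network() above that collects matches instead of
    # printing them
    while True:
        record_findings(scan(hosts))
        time.sleep(SCAN_INTERVAL_SECONDS)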

Strategies to manage Shadow AI

Reducing the risks associated with shadow AI requires a structured approach that combines clear policies, continuous monitoring, and staff training.

1. Create an inventory of AI models

Every organization should maintain a centralized catalog listing all AI models and applications in use, specifying purpose, version, training datasets, dependencies, and owners.
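As a minimal sketch of what such a catalog entry might contain (the field names and sample values below are illustrative assumptions, not a standard schema):

from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    # One inventory entry; a real catalog would live in a CMDB or
    # governance platform rather than in code
    name: str
    version: str
    purpose: str
    owner: str
    training_datasets: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)
    approved: bool = False

inventory = [
    AIModelRecord(
        name="support-summarizer",            # hypothetical model
        version="1.2.0",
        purpose="Summarize customer support tickets",
        owner="customer-care-team",
        training_datasets=["tickets-2023-anonymized"],
        dependencies=["transformers", "torch"],
        approved=True,
    ),
]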

2. Define usage policies

A clear AI Policy is needed that specifies:

  • which models are approved;
  • what types of data can be used;
  • how inputs, outputs, and logs are handled.

These policies must be integrated into compliance and risk management systems.
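As a minimal sketch, the rules above can be expressed as data that a gateway or proxy could enforce before a request ever reaches a model; the model names and data classifications below are purely illustrative:

APPROVED_MODELS = {"corporate-gpt", "internal-translator"}   # hypothetical
ALLOWED_DATA_CLASSES = {"public", "internal"}                # e.g. not "confidential"

def is_request_allowed(model: str, data_class: str) -> bool:
    # A request passes only if both the model and the data class are approved
    return model in APPROVED_MODELS and data_class in ALLOWED_DATA_CLASSES

print(is_request_allowed("corporate-gpt", "internal"))       # True
print(is_request_allowed("external-chatbot", "public"))      # False
print(is_request_allowed("corporate-gpt", "confidential"))   # False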

3. Monitor AI activities

Tools such as Splunk AI Assistant or IBM watsonx.governance allow real-time monitoring of model usage and detection of deviations from expected behavior.

The adoption of AI observability solutions also makes it possible to trace the origin of model decisions and verify compliance with ethical and regulatory guidelines.
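Even without dedicated platforms, a first approximation is to flag outbound traffic toward well-known AI services in web proxy logs. The log format and domain list below are simplifying assumptions, not a vendor feature:

import re

AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Assumed proxy log format: "<timestamp> <user> <destination-host>"
LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<user>\S+) (?P<host>\S+)")

def flag_ai_traffic(log_lines):
    for line in log_lines:
        match = LOG_LINE.match(line)
        if match and match.group("host") in AI_DOMAINS:
            print(f"[!] {match.group('user')} contacted "
                  f"{match.group('host')} at {match.group('ts')}")

# Example usage with a fabricated log line
flag_ai_traffic(["2025-03-01T10:22:05Z jdoe api.openai.com"])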

4. Train employees

Awareness is the first line of defense. Employees must understand what shadow AI is, why it is dangerous, and which authorized tools they can use.

Internal security awareness campaigns can reduce the risk of AI misuse by 40%, according to an IBM Security 2025 estimate.

5. Conduct periodic audits

Security checks should also include AI model audits: who uses them, where they reside, and what data they process. Regular audits ensure compliance and help identify anomalous practices early.
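One simple audit step, sketched below under the assumption that both an approved inventory and monitoring data exist, is to reconcile the models observed on the network against the catalog:

# Both sets are illustrative placeholders for inventory and monitoring data
approved_models = {"support-summarizer", "internal-translator"}
observed_models = {"support-summarizer", "llama-dev-instance"}

unregistered = observed_models - approved_models   # shadow AI candidates
unused = approved_models - observed_models         # stale catalog entries

for model in sorted(unregistered):
    print(f"[audit] unregistered model in use: {model}")
for model in sorted(unused):
    print(f"[audit] approved but unobserved: {model}")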

Reference frameworks and regulations

In recent years, several standards and guidelines have been introduced for the secure management of artificial intelligence:

  • NIST AI Risk Management Framework (AI RMF)
    Developed by the National Institute of Standards and Technology, it provides a model for identifying, analyzing, and mitigating AI-related risks.
  • ISO/IEC 42001:2023
    An international standard defining requirements for AI management systems (AIMS).
  • European AI Act
    The EU regulation establishing specific obligations for high-risk AI systems, including transparency, traceability, and security.

Companies that integrate these standards into their AI governance significantly reduce the likelihood of violations or incidents.

Mitigation tools: the role of IBM and Splunk

IBM offers an advanced suite of tools for AI Governance, including watsonx.governance, which enables model management throughout the entire lifecycle, monitoring transparency, reliability, and compliance.
This centralized approach reduces the likelihood that unauthorized models are used within the company.

Splunk, on the other hand, focuses on data visibility and AI observability: the platform can integrate logs and network flows to detect anomalous behavior related to generative models or unapproved automations. In 2025, Splunk introduced specific features for shadow AI detection, improving integration with security tools such as SIEM, SOAR, and Threat Detection Pipelines.

The future of Shadow AI

Shadow AI is not a temporary anomaly, but a sign of a profound transformation in how people use technology. As with shadow IT, it will be impossible to eliminate it completely: what matters is making it visible and governable.

The future of corporate security will depend on the ability to combine innovation and control: allowing employees to leverage AI, but within clear, secure, and traceable boundaries.

A company that adopts transparent AI governance, invests in continuous monitoring, and promotes ethical training can transform shadow AI from a threat into an opportunity for growth.

Conclusion

Shadow AI represents one of the most complex challenges of the new digital era. Protecting IT systems is no longer enough: automated cognitive processes must also be protected.

Every line of code generated by a model, every decision made by an algorithm, every prompt typed by an employee can become a potential vulnerability—or, if properly managed, a lever for innovation.

Governing AI does not mean stopping it, but illuminating its shadowy areas. Only in this way can responsible innovation and truly sustainable security be guaranteed.


Questions and answers

  1. What is shadow AI in simple terms?
    It is the use of unauthorized artificial intelligence tools or models in a corporate context.
  2. What risks does it pose to corporate data security?
    Risks include data leaks, GDPR violations, and reliance on outdated, vulnerable models.
  3. How does shadow AI manifest in an organization?
    Through chatbots, apps, or AI scripts used by employees without IT approval.
  4. How can unauthorized AI models be identified?
    By monitoring networks and APIs with SIEM tools or scanning scripts like the one shown above.
  5. What is an AI policy?
    A set of rules defining how and within what limits artificial intelligence can be used in a company.
  6. What is the difference between shadow IT and shadow AI?
    Shadow IT concerns software and devices; shadow AI concerns intelligent models and systems.
  7. How can AI misuse be prevented?
    Through training, continuous monitoring, and governance tools.
  8. What are AI audits?
    Periodic checks that review models, datasets, logs, and security levels.
  9. What tools do IBM and Splunk offer?
    IBM watsonx.governance for model management and Splunk AI for monitoring and data visibility.
  10. Can shadow AI become an opportunity?
    Yes, if managed correctly, it can foster innovation and improve business efficiency.