Organizations are in the middle of a profound contradiction. Leadership demands faster AI adoption and wants to see ROI from AI investments. Teams are being pushed to experiment, innovate, and find competitive advantage through generative AI. But while executives champion AI adoption, something unexpected is happening in the trenches: employees have already adopted it, just not through approved channels.
80% of workers use unapproved AI tools in their jobs, according to a 2025 UpGuard survey. More concerning, nearly 90% of security professionals themselves admit to using unauthorized AI systems. This isn't malice. Employees aren't trying to sabotage the organization. They're trying to get their jobs done faster, and when official tools don't exist or don't meet their needs, they find alternatives on their own. The result is shadow AI: a parallel ecosystem of unmanaged, unmonitored, and ungoverned AI tool usage that operates outside IT's visibility.
Shadow AI represents a new class of enterprise risk that is fundamentally different from shadow IT. It moves faster, exposes more sensitive data, and creates compliance violations at scale. Unlike traditional unauthorized software, which might slowly accumulate across the organization, shadow AI spreads at light speed because the tools are so easy to access, so embedded in workflows, and so productivity-enhancing that adoption can happen in days. The Samsung case illustrates the speed and impact: engineers pasted proprietary code into the free ChatGPT interface to debug it, unintentionally handing Samsung's trade secrets, equipment specifications, and internal meeting recordings to a third-party service that could retain them or use them for model training.
For IT security teams, the challenge isn't stopping innovation. That ship has sailed. The challenge is gaining visibility into what's happening, understanding the data flows, and creating a governance framework that enables safe adoption rather than driving usage underground.
The problem IT isn't talking about
Industry leaders talk about shadow AI in surface terms. They define it as "unauthorized use of AI tools." But that definition misses the real problem. Shadow AI is actually unmanaged data movement at scale.
Consider what actually happens when an employee uses an unapproved AI tool. A financial analyst uploads a spreadsheet containing customer data to summarize it. A developer pastes proprietary code into ChatGPT to debug it. A marketer uploads a strategy document to Claude to get a summary. A compliance officer shares legal frameworks into Copilot to extract key concepts. In each case, the employee isn't trying to cause harm. They're trying to work more efficiently. But the data has now left your organization's control.
Here's what most organizations don't know: after that data moves to an external AI platform, they lose visibility into what happens next. The data is stored on third-party servers. It may be logged. It might be used for model training, and if it is, fragments of it could even resurface in responses to other users. And critically, your organization has no audit trail of what data was shared, when, by whom, or what the AI output contained.
The scale of this data movement is staggering. UpGuard found that 75% of workers are using AI tools in their jobs, most without permission, and that executives are 50% more likely to use shadow AI than average employees. Among employees using shadow AI, 75% admitted to sharing potentially sensitive information with unapproved tools, most commonly employee data, customer data, and internal documents.
Finance teams are sharing customer records. HR is uploading personnel files. Legal is sharing confidential contracts. Across every department, data is flowing out through unmanaged channels at a rate IT cannot track.
The conventional response from enterprise security has been to block, restrict, and punish. "Do not use ChatGPT." "Unapproved AI tools are prohibited." But enforcement-only approaches don't work with shadow AI. Bans drive usage underground, create false compliance, and breed resentment. When IT is perceived as the roadblock to productivity, 41% of employees say they find a way around it.
The real problem isn't that employees are using AI. It's that organizations have created an environment where employees feel they have to go rogue to get access to tools that boost their productivity.
Why this matters for security and compliance
The business impact of shadow AI extends beyond individual data incidents. It affects three critical areas that board-level executives and security leaders must understand.
Data exposure and IP loss
When employees use unauthorized AI tools, data leaves your infrastructure and moves to third-party servers beyond your control. Gartner projects that over 40% of AI-related data breaches by 2027 will stem from unapproved or improper generative AI use. IBM's 2025 Cost of a Data Breach Report found that shadow AI breaches cost organizations $670,000 more than average incidents, with total breach costs reaching $4.63 million versus $3.96 million for standard incidents.
The types of data exposed vary. Developers share source code. Financial teams share customer data. Legal teams share confidential documents. Engineering teams share product specifications. Marketing shares strategy. Once that data is shared with an external AI tool, your organization loses control over how it's stored, how it's used, and whether it can be deleted. Some AI vendors use submitted data for model training, meaning your proprietary information becomes part of an AI model that could be accessed by competitors or other users. Other vendors retain data indefinitely without deletion guarantees.
The real risk is that most organizations don't know what data has been exposed until months or years later, when a security incident, audit finding, or regulatory inquiry forces the question.
Compliance violations
Compliance frameworks like GDPR, HIPAA, CCPA, SOC 2, and PCI DSS were not designed for AI. They don't account for data being sent to external AI platforms, processed by machine learning models, and potentially logged or used for training purposes. When shadow AI violates these frameworks, organizations face enforcement actions, penalties, and customer trust erosion.
Consider a healthcare organization. HIPAA requires strict controls over patient data. If an employee uploads patient records into an unapproved AI tool without explicit data use agreements, that organization has violated HIPAA. The same applies to financial services under GLBA, to customer data under CCPA, and to employee data under various privacy laws across jurisdictions.
The compliance risk is compounded by the fact that data can cross borders when it's shared with cloud-based AI tools. An organization compliant in the EU under GDPR might unintentionally violate data residency requirements when an employee pastes data into a tool hosted on US servers. Regulators are beginning to scrutinize this behavior, and the first enforcement actions appeared in 2025.
Operational and security vulnerability
Unauthorized AI tools create new attack surfaces that IT security teams cannot monitor or protect. Netskope research shows that 47% of employees accessing AI tools are doing so through personal accounts not managed by their organizations, potentially exposing credentials if those platforms are compromised. Unsecured API connections to external AI services become vectors for phishing, malware injection, or credential theft. Weak authentication on consumer-grade AI platforms makes account takeover easier.
But the operational risk goes further. An employee might deploy an unauthorized AI agent to automate part of their workflow. That agent starts making decisions, executing commands, or interacting with customers. If that agent makes a mistake, generates a hallucination, or produces biased output, the organization is liable for actions taken by an unvetted, unmonitored system.
Worse, if bad actors compromise an unauthorized AI tool, they could use it as a pivot point to access internal systems. A compromised API connection could become a backdoor. A breached employee account could give attackers visibility into internal conversations and data that employees have shared with the AI tool.
What effective shadow AI governance looks like
The organizations succeeding at this are not the ones banning AI. They're the ones making a strategic shift: from attempting to control usage through prohibition to controlling risk through visibility, choice, and behavioral analytics.
This approach has three pillars.
Pillar 1: Provide sanctioned alternatives
The foundation is offering employees approved AI tools that meet their actual needs. If an organization says "You can't use ChatGPT" but doesn't offer an alternative, employees will use ChatGPT anyway. If the offered alternative is harder to use, slower, or less capable, employees will find workarounds.
Forward-thinking organizations are deploying private, cloud-hosted LLM solutions with data segregation and compliance controls. Platforms like Azure OpenAI, Anthropic Claude (enterprise), and Cohere offer the productivity benefits of generative AI with security controls that satisfy IT and compliance teams. Self-hosted and open-weight models like Mistral allow organizations with the strictest data sovereignty requirements to run AI models entirely on-premises.
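To make the idea concrete, here is a minimal sketch of what a sanctioned path might look like: employee prompts go through a corporate-managed Azure OpenAI deployment, and every request leaves an internal audit record, the visibility that consumer tools never provide. The endpoint, deployment name, and log destination below are hypothetical placeholders, not a reference implementation.

```python
# Hypothetical sketch: route prompts through a sanctioned Azure OpenAI
# deployment and keep an internal audit trail. Endpoint, deployment name,
# and log file are placeholders for illustration only.
import json
import os
from datetime import datetime, timezone

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["CORP_AOAI_ENDPOINT"],  # corporate-managed endpoint
    api_key=os.environ["CORP_AOAI_KEY"],
    api_version="2024-02-01",
)

def sanctioned_chat(user_id: str, prompt: str) -> str:
    """Send a prompt to the approved deployment and log who asked what, when."""
    response = client.chat.completions.create(
        model="corp-gpt-4o",  # placeholder deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    # Internal audit record: the visibility that consumer tools don't provide.
    with open("ai_audit_log.jsonl", "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,
            "prompt": prompt,
            "response_chars": len(answer or ""),
        }) + "\n")
    return answer
```

The specifics will differ by platform; the point is that an approved path can be as easy to call as a consumer chatbot while the audit trail stays in-house.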
The key is making approved tools accessible, easy to use, and capable enough that employees don't feel they need to seek unauthorized alternatives. A legal team approved to use a copilot that understands contract language is less likely to seek unauthorized tools. An engineering team with access to GitHub Copilot through their corporate tenant is less likely to use personal accounts.
Pillar 2: Establish clear policies and training
Policies define what is and isn't acceptable. But enforcement starts with understanding. Most employees don't understand the risk of uploading sensitive data to public AI tools. They don't know about data retention policies, training data use, or compliance implications. When policies emerge from a place of education rather than punishment, adoption improves.
Effective organizations publish AI policies that are practical, not just prohibitive. "Here is what you can use." "Here is how to handle different types of data." "Here is what to do if you need a tool we haven't approved yet." Policies become guardrails for real-time decision-making rather than rules to hide from.
Training reinforces the message. When employees understand why certain data cannot go into certain tools, and what happens if it does, they make better choices. Training also identifies high-risk behaviors early, allowing targeted interventions before they become incidents.
Pillar 3: Invest in visibility and behavioral analytics
The final pillar is what most organizations get wrong. They invest in detection tools that scan for known threats or network-level indicators of unauthorized AI tool usage. These tools have value, but they're reactive: they find problems after they've happened.
What organizations need is behavioral visibility into how employees are actually using AI tools. Which teams are using approved tools? Which are abandoning them? Why? Are employees asking the approved copilot to do things it can't handle, signaling that they'll eventually seek unauthorized alternatives? Are there departments where AI adoption is stalling?
More critically, organizations need to understand what data is flowing where. What types of data are employees trying to process through AI tools? What prompts are being used? Are there patterns that suggest guardrail testing, policy confusion, or risky behavior? Are employees repeatedly trying to get AI tools to accept restricted data? These behavioral signals precede security incidents. When organizations can see them, they can intervene.
This requires moving beyond traditional IT monitoring (uptime, latency, error rates) to product-like user analytics that track adoption, engagement, task success, sentiment, and compliance signals. It means understanding not just that someone used an AI tool, but how they used it, whether they completed their task, whether they succeeded, and whether they're likely to use it again.
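What might those behavioral signals look like in practice? The sketch below is deliberately simplified: it scans a hypothetical prompt log (a JSONL file with user_id and prompt fields, an assumed schema, not any product's format) for restricted-looking data and flags users who repeatedly attempt to submit it, the kind of early-warning pattern described above.

```python
# Simplified sketch: flag behavioral risk signals in a hypothetical prompt log.
# The JSONL schema ({"user_id", "prompt"}) and the threshold are assumptions.
import json
import re
from collections import Counter

# Crude indicators of restricted data (illustrative only; real deployments
# would rely on proper data-classification or DLP tooling).
RESTRICTED_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def risky_users(log_path: str, threshold: int = 3) -> dict[str, int]:
    """Count prompts per user containing restricted-looking data and
    return users whose repeated attempts meet or exceed the threshold."""
    hits: Counter[str] = Counter()
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            prompt = event.get("prompt", "")
            if any(p.search(prompt) for p in RESTRICTED_PATTERNS.values()):
                hits[event["user_id"]] += 1
    return {user: count for user, count in hits.items() if count >= threshold}

if __name__ == "__main__":
    for user, count in risky_users("ai_audit_log.jsonl").items():
        print(f"{user}: {count} prompts containing restricted-looking data")
```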
Organizations deploying user analytics for GenAI are discovering patterns that technical monitoring misses entirely. In one reported case, a financial services organization discovered dozens of behavioral patterns that preceded PII leakage: employees testing guardrails, showing confusion about what data was safe, and attempting to input the same types of sensitive information repeatedly. These behavioral signals appeared days or weeks before they would have triggered security alerts.
The role of IT security in the AI era
Shadow AI signals a fundamental shift in how IT security must approach emerging technology. The traditional model was: "New technology appears. IT blocks it. IT eventually approves it after six months of testing."
That model doesn't work with AI because adoption happens faster than IT can evaluate it. Employees don't wait for approval. They find what they need.
The new model requires IT security to be more product-like. Instead of saying no, IT says yes with conditions. Instead of conducting long evaluation cycles, IT experiments alongside business teams. Instead of trying to control every tool, IT establishes guardrails and principles.
This also means IT security teams need to understand behavioral data. Traditional security monitoring focuses on infrastructure: firewalls, vulnerability scans, access logs. But most shadow AI incidents don't show up on those tools. They show up in conversation logs, in what data is being requested, in patterns of tool usage, in the back-and-forth between employees and AI.
For IT security to gain control of shadow AI, they need visibility into human behavior, not just system behavior. That requires integrating user analytics into the security monitoring stack.
Key findings from 2025 research
Two data points from current research underscore the scale and urgency of the shadow AI problem.
Finding 1: Executives and security professionals are the highest-risk users
A counterintuitive finding from 2025 research is that shadow AI use is highest among security and executive leaders, not general employees. UpGuard found that 90% of security leaders use unapproved AI tools, many of them regularly, with 69% of CISOs incorporating them into their daily workflows. This matters for two reasons.
First, it signals that shadow AI isn't a matter of IT literacy or policy knowledge. Even security professionals, who understand the risks, feel that approved alternatives don't exist or don't meet their needs. Second, when security leaders use unauthorized tools, it sends a cultural signal that the policies aren't serious. Employees see executives and security leaders skirting rules and conclude that shadow AI must be acceptable.
Finding 2: The cost delta of shadow AI incidents
IBM's analysis of shadow AI-related data breaches found they cost organizations $670,000 more than standard incidents, driven by longer detection times and broader data exposure across multiple environments. What makes this finding actionable is that organizations can focus governance efforts on the teams at highest risk: those handling source code, intellectual property, or strategic information. Concentrating effort there prevents a disproportionate share of incidents.
What IT security leaders should do now
For organizations still treating shadow AI as a technical IT problem to be blocked and monitored, the time to reassess is now. The evidence from 2025 shows that enforcement-only approaches fail, and that organizations focusing on visibility and enablement are succeeding.
Here's a practical roadmap:
1. Acknowledge the reality
Acknowledge that shadow AI is happening in your organization. Conduct a survey asking employees which AI tools they use and for what purposes. Most will say they use tools without approval. That's the starting point. Denial about the scope of the problem prevents action.
2. Inventory your data exposure
Map which functions and teams handle sensitive data (IP, customer records, financial data, health information, strategic information). These are your highest-risk groups. They should be your governance priority, not your enforcement target.
3. Deploy approved alternatives first
Don't tell employees what they can't use until you've given them what they can use. Evaluate private, cloud-hosted LLM solutions or self-hosted options that meet your security and compliance requirements. Make them accessible. Train employees on how to use them. Make them easier than unauthorized alternatives.
4. Publish clear policies
Create policies that guide behavior, not just prohibit it. Define data classification (what data can and can't go into AI tools). Define tool categories (what's approved, what's not, what requires special handling). Publish policies in accessible language, not legalese. Train leadership to model compliant behavior.
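One way to make such a policy enforceable rather than aspirational is to express the data classification as configuration that tooling can check. The snippet below is illustrative only; the classification tiers and tool categories are hypothetical examples, not a recommended taxonomy.

```python
# Illustrative only: a data-classification policy expressed as a simple lookup,
# so the rules employees read in the policy can also be checked in tooling.
# Classification tiers and tool categories are hypothetical examples.
ALLOWED_TOOLS = {
    "public":       {"consumer_ai", "approved_copilot", "private_llm"},
    "internal":     {"approved_copilot", "private_llm"},
    "confidential": {"private_llm"},
    "restricted":   set(),  # never goes into an AI tool
}

def is_allowed(data_class: str, tool: str) -> bool:
    """Return True if the policy permits this data class in this tool category."""
    return tool in ALLOWED_TOOLS.get(data_class, set())

assert is_allowed("internal", "private_llm")
assert not is_allowed("confidential", "consumer_ai")
```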
5. Implement behavioral visibility
Deploy user analytics for your approved AI tools. Understand which teams are adopting, which are struggling, why tasks are succeeding or failing, and what behavioral signals precede policy violations. Use these insights to improve tools, training, and policies. Share metrics with leadership to demonstrate impact and justify continued investment.
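As a rough sketch of the adoption metrics this involves, the example below aggregates active users and task-success rates per team from a hypothetical usage-event log; the event schema is an assumption for illustration, not any particular product's format.

```python
# Rough sketch of per-team adoption metrics from a hypothetical usage-event log
# (JSONL events with "team", "user_id", and "task_completed" fields).
import json
from collections import defaultdict

def team_adoption(log_path: str) -> dict[str, dict[str, float]]:
    """Aggregate active users and task-success rate per team."""
    users: dict[str, set[str]] = defaultdict(set)
    tasks: dict[str, list[bool]] = defaultdict(list)
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            team = event["team"]
            users[team].add(event["user_id"])
            tasks[team].append(bool(event.get("task_completed")))
    return {
        team: {
            "active_users": len(users[team]),
            "task_success_rate": sum(tasks[team]) / len(tasks[team]),
        }
        for team in users
    }
```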
6. Create an approval process
For employees who need AI tools beyond what's officially approved, create an expedited evaluation and approval process. Make it fast. Make it responsive. A process that takes three months is not fast enough; employees will go rogue. A process that takes two weeks is competitive with the time it takes to sign up for an unauthorized tool.
What this means for enterprise leaders
Shadow AI isn't primarily a technical problem. It's an organizational problem that looks like a technical problem because employees are using technical tools. The underlying issue is that organizations have created an environment where employees feel they must go rogue to be productive.
The solution isn't better detection or stronger prohibition. It's better visibility, clearer choice, and faster enablement. Organizations that shift from a control-based model to a visibility-based model are moving from reactive incident response to proactive risk management. They're turning shadow AI from a security crisis into a governance opportunity.
The window for action is narrowing. As workforce adoption of AI continues to climb and regulatory scrutiny increases, organizations still operating without visibility will face incidents they can't explain, compliance violations they didn't know were happening, and executives asking questions they can't answer.
The organizations succeeding in 2026 are those that acknowledge shadow AI as inevitable, invest in understanding where data is flowing, and create frameworks that enable safe innovation rather than punishing employees for seeking productivity gains.
For enterprise teams managing AI adoption alongside risk and compliance, the real question isn't how to stop shadow AI. It's how to see it, understand it, and channel it into approved workflows that deliver value without exposing the organization. Understanding user behavior, adoption patterns, and what drives employees to seek unauthorized tools is the foundation of that visibility.
Nebuly's approach to GenAI analytics provides organizations with real-time visibility into how employees interact with AI systems, including behavioral signals that precede policy violations and compliance risks. By analyzing conversation content, user patterns, and task outcomes, security and compliance teams can detect emerging risks and improve approved tools before unauthorized usage becomes endemic. For organizations serious about managing shadow AI as a governance problem rather than an enforcement problem, behavioral user analytics becomes essential infrastructure.
Ready to gain visibility into your AI usage?
See how Nebuly helps security and compliance teams detect shadow AI risks before they become incidents. Discover behavioral patterns, understand data flows, and build a governance framework that enables safe AI adoption. Book a demo with us.



