When Eager Employees Go Rogue

The productivity gains from AI have sparked an unintended consequence: Shadow AI. Employees are increasingly turning to unauthorized AI tools to streamline their work, a double-edged development: organizations gain improved efficiency and innovation while grappling with data security risks, regulatory compliance issues, and loss of operational control.

How Big Is the Problem?

Multiple surveys show rapid, often-unmanaged adoption of AI tools at work:

  • OpenAI’s July 2025 Productivity Note documents steep growth in workplace use of general-purpose LLMs by professionals. Widespread adoption is now a structural reality, not an experiment.

  • A 2025 Clutch/Lifewire roundup reported that roughly 74% of full-time employees use AI tools for some work tasks, but only about one third have received formal AI training, a dangerous knowledge gap that increases the risk of misuse.

  • A 2024 study by Software AG estimated that roughly 50% of employees are using unapproved AI tools, with many indicating they would continue this practice even if prohibited by their organizations.

Taken together, the picture is clear: employees are already using AI widely, most of them without formal training, and many rely on free consumer tools that lack enterprise privacy protections. That is a recipe for accidental data exposure.

Real Incidents

Samsung engineers pasted internal meeting notes and even snippets of proprietary source code into public chatbots, exposing the company’s sensitive data. The fallout included emergency restrictions and a company-wide ban on certain consumer chat services while IT assessed the damage. This incident crystallized the risk: a single prompt can export confidential IP into third-party systems.

In a high-profile 2025 example, attorneys filed briefs containing fabricated citations produced by an LLM, prompting a federal judge to publicly raise the prospect of sanctions along with concerns about professional responsibility.

Healthcare staff have repeatedly used free chatbots to draft patient communications or summarize cases — sometimes pasting identifiable patient details into chats and risking HIPAA violations and patient privacy breaches.

These incidents share one theme: human convenience trumps process. When IT tools are slow, hard to use, or unavailable, employees will adopt consumer tools that feel faster, even when doing so exposes confidential data, violates regulatory obligations, or creates liability for the organization.

Why Outright Bans Backfire

When organizations respond to Shadow AI with blunt bans, the result frequently looks like this:

  1. Workarounds proliferate - Bans push use underground (VPNs, personal devices, private accounts). That makes detection and remediation harder and prevents the IT team from implementing sensible safety controls.

  2. Lost benefits and morale issues - When competitors gain speed and creativity via AI, your top talent may simply ignore restrictive rules or leave for better opportunities.

  3. Missed governance opportunity - Banning AI tools postpones the critical work of establishing secure, sanctioned AI infrastructure (e.g., enterprise-grade vendor accounts, data protection protocols, and privacy-preserving connectors). When bans are eventually lifted, organizations find themselves playing catch-up while entrenched shadow AI practices prove difficult to replace.

  4. False sense of security - Prohibition can lull leadership into believing the risk is solved. In reality, unmanaged use continues; the organization just loses the visibility it needs to steer responsible AI use.

So while bans are sometimes appropriate as short-term emergency measures (e.g., after an acute leak), they are a poor long-term strategy.

Bottom Line

Employees are already using AI to get work done. That reality isn’t going away, and treating it as a binary “allowed/forbidden” problem will make things worse. The practical path is to acknowledge the phenomenon, measure and surface shadow use, provide approved AI tools that are genuinely useful, and train employees thoroughly in safe, effective use. This approach protects IP, privacy, and professional obligations while unlocking the productivity AI promises.
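As a concrete illustration of what “measure and surface shadow use” can look like in practice, the minimal sketch below counts requests to well-known consumer AI domains in a web-proxy log export. It is a hypothetical example, not a prescribed tool: the CSV column names, the file name, and the domain watchlist are assumptions, and a real deployment would typically lean on your proxy, DNS, or CASB vendor’s reporting instead.

```python
# Minimal sketch: surface potential Shadow AI use from a web-proxy log export.
# Assumptions (illustrative only): the export is a CSV with "user" and "domain"
# columns, and the watchlist below is a small sample, not an exhaustive list.
import csv
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def shadow_ai_report(log_path: str) -> Counter:
    """Count proxy hits to known consumer AI domains, grouped by user."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].lower() in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in shadow_ai_report("proxy_log.csv").most_common():
        print(f"{user}: {count} AI-tool requests")
```

Even a rough count like this turns an invisible problem into a measurable one, which is the prerequisite for offering sanctioned alternatives rather than blanket bans.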

Your organization and portfolio companies are already wrestling with this challenge, whether you're aware of it or not. The critical question isn't whether Shadow AI exists in your workplace, but how much insight you have into its scope and impact. If you're prepared to develop a strategic AI adoption framework that maximizes employee potential while maintaining security and compliance, NextAccess can help.

NextAccess Authors: Scott Kosch and Valerie VanDerzee

NextAccess is an advisory firm of seasoned operators with deep experience running top-performing organizations and delivering exceptional results. We help executive teams and investors build stronger, more valuable companies through a powerful mix of operational expertise, strategic insight, and data-driven solutions.

Want to learn more?

Message Scott Kosch or Valerie VanDerzee to schedule a complimentary 30-minute consultation to explore how our expertise can help your organization.
