The rapid rise of AI tools in the workplace, coupled with increasing pressure on employees to deliver more, faster, means organisations face a growing security blind spot: the unsanctioned use of AI.
Written by our VP of Product, Mark Hamill, this article takes a practical look at how AI is being deployed in organisations today, explores the risks of “shadow AI,” and highlights why clear policies, education, and accessible approved tools are essential to protect sensitive data.

He had three days until the case went to court.
The documents were piling up – witness statements, precedent cases, six months’ preparation notes. There weren’t enough hours. A colleague had mentioned, in passing, that she’d been using ChatGPT to summarise long documents. Seemed useful. He assumed that if she was using it openly, it must be fine, someone must have signed off on it. He opened a browser tab, pasted in his case notes, and got a clean summary in minutes. It worked. So he did it again, and again.
He didn’t know that the free tier of ChatGPT doesn’t carry enterprise data protections. He didn’t know that content submitted through a consumer AI tool can, depending on settings, be used to improve the model. He didn’t know his firm had a policy about this because nobody had told him there was one. He was a lawyer with a court deadline, doing what any competent professional does under pressure: he used the best available tool and got the job done.
This is what shadow AI looks like. Not a rogue employee. Not someone indifferent to risk. Just a capable professional filling a vacuum and trying to get his work done.
Nobody Told Them Not To
Recent research suggests that roughly half of all knowledge workers are using AI tools their employer hasn’t approved. Half. That means whatever policy your organisation has – if it has one at all – is already being outpaced by the behaviour it’s meant to govern.
The reason isn’t defiance. Most people using unsanctioned AI tools aren’t doing it to circumvent security controls. They’re doing it because they didn’t know there was a rule, or because a colleague was using the same tool and they reasonably took that as institutional approval. If my manager’s manager is openly using ChatGPT in meetings, surely it’s been cleared? And the free version does the same thing as the paid one, right? So why raise a procurement request when the tool is sitting right there in a browser tab?
These are entirely logical conclusions drawn from incomplete information. The gap isn’t in your employees’ judgement. It’s in your communications.
The Other Half of the Problem
Outside of the office, the message your employees are receiving is loud and unambiguous: adopt AI or fall behind. Use these tools or watch your career become irrelevant. Every headline, every leadership conference, every breathless article about AI taking jobs is telling them to move faster, experiment more, stay ahead of the curve. Then they arrive at work and discover the approved path involves a procurement form, an IT review, and a six-week wait for a licence.
The internal message is silence. The external message is existential urgency. That collision is exactly where shadow AI is born.
What’s Actually Walking Out of the Door
The risk isn’t the tool. It’s what gets fed into it.
Consumer AI products offering free tiers and personal accounts aren’t bound by the same data agreements as enterprise software. When an employee pastes a document into a free AI tool, that data may leave the organisation’s control entirely. Depending on the platform and its settings, it may be stored, reviewed, or used to train the model. Most employees have no idea there’s a meaningful distinction between the free version and the enterprise version. To them, it’s the same product.
This matters more in some roles than others. A lawyer pasting privileged client documents. A finance team summarising merger discussions. An HR manager drafting a disciplinary note. These aren’t hypothetical risks. In 2023, Samsung engineers pasted proprietary source code into ChatGPT three separate times, across three separate teams, before anyone noticed. The code was already out before anyone raised a hand.
This invisibility is what makes shadow AI hard to manage. Shadow IT – employees using unauthorised software – at least left network and endpoint traces. A shadow AI interaction is just a browser tab. There’s no installer to block. By the time anyone knows it happened, the data is already out of the door.
What to Do About It
The instinct is to block AI. Lock down the tools, smash the office router and leave a threatening note in the kitchen by the toaster.
But while you can block the tools, you can’t block the intent. If you remove the tool and leave the pressure – the deadlines, the workload, the ambient message that AI adoption is non-negotiable – people find another tool. This is exactly what happened with Dropbox a decade ago. Organisations blocked it. Employees switched to personal email. The behaviour didn’t stop; it just became harder to see.
The organisations that solved shadow IT weren’t the ones with the tightest controls. They were the ones that provided a credible, accessible alternative – and made it the faster option. The same logic applies here.
If you’re trying to manage how your employees use AI, start by making your position visible and clear. Not buried in an acceptable use policy on an intranet page nobody reads, but actually published and acknowledged by everyone. Cover it in your all-hands. Name the tools people are using. Explain the difference between the free tier and the enterprise version in language that means something to a non-technical professional. Be clear about what’s approved, what’s not, and why. Not as a threat. As safety information.
Brief your managers. The “my colleague uses it so it must be fine” assumption travels through teams because managers are often using these tools themselves without understanding the implications. If your managers can’t explain your AI policy, your employees won’t follow it.
Run training that addresses the specific misconceptions – not generic “be careful with data” messaging, but targeted guidance on the exact scenarios people face: looming deadlines, large datasets, a stack of documents that just need a quick summary. Make it concrete enough to be remembered when someone is three hours away from a deadline.
And make the approved route genuinely easy to use. If getting a legitimate AI licence takes six weeks, people won’t wait. They’ll use what they have. Friction in the approved path is not a security feature. It’s a shadow AI generator.
—
The lawyer won his case. The documents were handled, the skeleton argument was sharp, the client was pleased. Somewhere in the data trail, if anyone had been looking, those case notes had taken an unauthorised journey.
Nobody was looking.
Somewhere in your organisation, the same thing happened today. Your employees aren’t circumventing your security policy. They’re making a risk decision you didn’t make for them. The question is whether you’re going to keep letting that happen or whether you’re going to make it easier to do the right thing than to guess.
Make It Easier to Do the Right Thing
Shadow AI isn’t a technology problem; it’s a visibility and behaviour problem. If employees don’t know the rules, can’t access approved tools quickly, or don’t understand the risk, they’ll fill the gap themselves.
MetaCompliance helps organisations close that gap with targeted, behaviour-driven security awareness. From clear policy communication to real-world, scenario-based training, we help you make risk visible, guidance actionable, and secure behaviour the easiest path.
See how MetaCompliance can help you take control of human risk.