

Context is Everything is a UK-based AI consultancy specialising in private AI deployment and institutional intelligence. We build SASHA, an enterprise AI platform deployed inside your firewall, trained on your proprietary methodology. Our AI concierge Margaret demonstrates these capabilities for free on our website.



Shadow AI vs Shadow IT: Why This Wave Is Different

6 min read · 780 words
AI Governance · Shadow AI · Risk Management · AI Implementation

80% of workers use AI at work. Only 22% use employer-provided tools. Shadow AI is not the same problem as shadow IT, and the governance playbook that worked before will not work here.

You have likely already read a piece comparing shadow AI with shadow IT. Everyone seems to have written it: employees adopt unsanctioned tools, organisations scramble to catch up, governance arrives late.

We saw it with email, with WhatsApp, with cloud services, with personal devices. The pattern is well documented and the parallel is obvious.

But there is a reason the parallel breaks down, and almost nobody is talking about it.

Every previous wave of unsanctioned workplace technology eventually found its equilibrium. Email was chaos, then we got policies, archiving, and cc etiquette. WhatsApp was chaos, then GDPR forced governance. The technology was too useful to ban. Organisations adapted. Rules emerged. Things settled.

Shadow AI may be the first wave that does not settle on its own.

The asymmetry problem

The Italian programmer Alberto Brandolini observed that the energy needed to refute bullshit is an order of magnitude greater than the energy needed to produce it. He was describing misinformation, but the principle maps precisely onto what is happening with AI in the workplace.

Previous waves of unsanctioned technology moved information around. Email moved documents. WhatsApp, WeChat and Slack moved conversations. The information already existed; it just ended up somewhere the organisation had not planned for. The governance challenge was essentially a plumbing problem: redirect the flow, and things stabilise.

AI does not move information. It manufactures it. Analysis, recommendations, structured arguments, client-ready documents, produced at near-zero cost, in seconds, with no audit trail and no guarantee that a human has reviewed the output before acting on it.

An IBM-sponsored study of 3,000 office workers found that 80% now use AI in their roles, but only 22% rely exclusively on employer-provided tools. Forty percent said they prefer external tools because the features are simply better.

This is not sabotage. It is pragmatism.

But when the cost of producing plausible-sounding content collapses to near zero and the cost of properly verifying it remains stubbornly human, the gap does not close over time. It widens. That is the asymmetry, and it is why this wave is structurally different from the ones before it.

Where this hits hardest

Reco's State of Shadow AI report found that companies with 11 to 50 employees face the highest exposure, an average of 269 unsanctioned AI tools per 1,000 workers. These are exactly the organisations least equipped to monitor or manage it.

For professional services firms and consultancies, the risk is not primarily data leakage. It is reputational. When staff augment their expertise with unvetted tools and present the output as the firm's own work, the quality assurance framework that clients are paying for has a hole in it that nobody can see.

Separately, SAP and Oxford Economics research found 68% of UK organisations report staff using unapproved AI tools at least occasionally. As one commentator put it, this is not a sign of resistance. It is a sign of enthusiasm outrunning governance.

The communication collapse

There is a second-order effect that almost nobody is discussing.

AI makes it trivially easy to produce lengthy, structured, authoritative-sounding correspondence. The result is a new kind of escalation: one party sends an AI-enhanced email, the recipient lacks time to engage with it properly, so they feed it into their own AI and fire back an equally detailed response. Both parties believe they have communicated. Neither has read what the other wrote.

With email overload, we adapted by stripping responses back to one-liners and thumbs-up emojis. With AI, the pendulum has swung the other way. The same person who sent a thumbs-up last year now sends 600 words, and it took them less effort than the emoji did.

The length is no longer a sign of thought. It is a sign of delegation. And when decisions are being made on the back of correspondence that no human has fully read, that is a governance gap no policy document will catch.

What actually works

Banning AI tools is demonstrably ineffective. The IBM research found that 60% of workers said hands-on training would increase their use of approved tools. The problem is not defiance. It is a gap between what employees need and what employers provide.

Three things make a difference:

Visibility first. You cannot govern what you cannot see. Before writing a policy, understand what your staff are actually using. Our free AI Readiness Calculator is a five-minute starting point.

Compete, do not prohibit. If external tools are winning because the features are better, provide something better internally. Private AI deployment, trained on your methodology and inside your infrastructure, removes the incentive to go elsewhere.

Methodology over policy. A usage policy tells people what they cannot do. A methodology tells them how to use AI well: when to delegate, when to verify, when the task genuinely requires human attention. That distinction is where the Contour Methodology sits, building layers of context so that AI-assisted work is traceable, verifiable, and owned.

Previous technology waves settled because the underlying problem was containable. Move the data back inside the perimeter, and the risk reduces. Shadow AI does not work that way. The risk is not where the data goes. It is what comes back.

---

Explore our free AI Security Training (UK and US versions, no signup required) or try the AI Readiness Calculator to understand where your organisation stands.

