Your Employees Are Already Using AI

You Just Don’t Know About It!

Workers at over 90% of organisations are using personal AI tools for work. This is not just a compliance problem; it is vital information your company is missing.

There is a quiet revolution happening inside your organisation. Not in the boardroom, not in the IT department, and certainly not in whatever your official AI strategy document says. It is happening on personal laptops, mobile phones, and browser tabs that close the moment a manager walks past.

MIT’s GenAI Divide study[1], based on 150 executive interviews, 350 employee surveys, and analysis of 300 public deployments, uncovered a striking gap. Only 40% of companies have an official AI tool subscription, yet workers at over 90% of surveyed companies reported using personal AI tools for work tasks. ChatGPT, Claude, and Gemini are all being accessed through personal accounts, often on personal devices, without organisational oversight or approval.

MIT calls this the “shadow AI economy.” Most organisations treat it as a risk. The smartest ones are treating it as a signal.

How much you don’t know

The scale is worth dwelling on. In 90% of companies, workers use AI tools the company did not approve, pay for, or even know about. This is not a handful of tech enthusiasts. Ordinary employees in finance, marketing, and HR are finding that a cheap personal AI subscription meaningfully improves their work.

They are drafting documents, summarising reports, analysing data, writing code, preparing presentations, and answering questions that would previously have required hours of research. And they are doing it quietly, because the official channels are either too slow, too restrictive, or simply not available.

This finding sits alongside another that sharpens the irony. According to Menlo Ventures’ annual tracking study[2], enterprise spending on generative AI reached £37 billion in 2025 – more than tripling in a single year. Yet the MIT study[1] found that approximately 95% of enterprise AI pilots are not delivering measurable returns on the profit-and-loss statement. Organisations are spending billions on formal AI programmes that struggle to deliver, while their own employees have already found tools that work – and are using them without permission.

Why this is happening

Shadow AI is not a failure of compliance. It is a failure of provision.

Most enterprise AI rollouts follow a familiar pattern. The organisation selects a vendor, negotiates a contract, configures the tool to meet security and governance requirements, runs a pilot with a small group, evaluates the results, and – if all goes well – begins a phased deployment. This process typically takes months. In some large organisations, it takes more than a year.

Meanwhile, an employee can sign up for ChatGPT Plus in two minutes and start getting value immediately. No procurement process. No IT approval. No six-month pilot. The gap between what the corporate process delivers and what the individual can access on their own has never been wider. Andreessen Horowitz found[3] that what enterprise Chief Information Officers spent on AI in all of 2023, they now spend in a single week – yet much of that money remains stuck in the buying process while the workforce has already moved on.

There is also a quality problem. MIT found that consumer-grade, ready-to-use AI tools often outperform the official enterprise versions, especially for everyday tasks such as writing, summarising, and research. Enterprise tools are frequently locked down, restricted in the data they can access, and configured for caution rather than usefulness. The result is an official tool that feels slower and less capable than the personal one. Employees notice.

The governance trap

The instinctive response in many organisations is to treat shadow AI as a governance problem. And there are legitimate concerns. When employees paste confidential data into a personal AI account, there are real risks around data protection, intellectual property, and regulatory compliance. No responsible board should ignore these.

But the organisations that respond by simply banning personal AI use are making a mistake. First, because bans are largely unenforceable. The tools are accessible from any personal device, and most usage is invisible to IT departments. Second, and more importantly, because a ban throws away the most valuable signal your organisation has about where AI actually helps.

Think about what shadow AI usage tells you. Your employees have, without any budget, training, or direction from above, identified the tasks where AI makes the biggest practical difference to their daily work. They have tested multiple tools and settled on the ones that work best. They have, in effect, run thousands of small pilots across your organisation – for free. The question is whether you are paying attention to the results.

Smart actions

The 5% of companies that MIT found genuinely benefiting from AI share a defining trait: they learn from unofficial AI use instead of fighting it. These organisations treat unsanctioned tools as market research conducted by their own staff. They study which tools people prefer, which tasks they use them for, and what that reveals about real productivity problems.

This does not mean abandoning governance. It means building governance around reality rather than pretending the usage does not exist.

  • Make it safe to be honest. If employees fear punishment for admitting they use personal AI tools, you will never get an accurate picture of what is happening. The organisations learning most from shadow AI have created channels for employees to share what they are doing – anonymously if necessary – without fear of reprisal.
  • Close the gap between personal and corporate tools. If your employees prefer their personal AI subscription to your enterprise tool, that is a product problem, not a people problem. It may mean your enterprise deployment is too restrictive, too slow, or too difficult to use. Fixing this is more effective than any ban.
  • Empower line managers to lead adoption. The MIT research consistently shows that AI adoption succeeds when it is driven by the people closest to the work, not by a central AI team. Line managers understand the daily tasks, pain points, and workflows where AI can make a real difference. Giving them the authority and budget to act on that knowledge is one of the clearest patterns in the organisations that are succeeding.

The board-level question

If you lead an organisation or sit on a board, the shadow AI economy raises a straightforward question: do you know what your people are actually doing with AI?

Not what your AI strategy says they should be doing. Not what the pilot results show. What they are actually doing, right now, on their own initiative, with tools they chose themselves.

Because that is where the real evidence of AI’s value lives. Your workforce has already run the experiment. The question is whether your organisation is willing to learn from the results.

This article draws on findings from my current research on The Real-World State of AI, which examines in detail what separates the 5% of organisations succeeding with AI from the rest, and what the evidence says about where this technology may be heading next.

References

[1] MIT NANDA, “The GenAI Divide: State of AI in Business 2025”, 2025, https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

[2] Menlo Ventures, “The State of Generative AI in the Enterprise (December 2025)”, 2025, https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/

[3] Andreessen Horowitz (a16z), “How 100 Enterprise CIOs Are Building and Buying Gen AI in 2025”, 2025, https://a16z.com/ai-enterprise-2025/

