
AI is everywhere and no longer theoretical. McKinsey’s 2025 Global AI Survey shows that 88% of organisations now use AI in at least one business function, up from 78% the year before. That is a significant shift in a single year. AI has moved from curiosity to capability, and from capability to expectation.
But that does not mean it is embedded at scale. In some organisations, AI is being formally integrated into operating models, workflows, and products. In others, it sits within specific teams or functions, often driven by individual leaders or pockets of enthusiasm. And in many, it remains at the pilot stage, tested across operational processes but not yet scaled.
On paper, it looks like progress. But step back, and something isn’t quite landing.
AI is everywhere. Value isn’t.
Across organisations, there is a growing disconnect between the level of activity around AI and the level of meaningful business impact. Tools are being deployed. Pilots are being launched. Budgets are being allocated. Leadership attention is increasing.
And yet the outcomes don’t reflect the effort. The same McKinsey survey shows that only 39% of organisations report any measurable enterprise-wide EBIT impact from AI — and for most of those, the impact is less than 5% of total EBIT. That is not transformation. That is marginal gain.
At the same time, usage remains inconsistent. In many organisations, access to AI tools has expanded rapidly, but consistent, meaningful use has not kept pace. Some teams are experimenting heavily. Others are barely engaging. Outcomes are uneven, and benefits are difficult to isolate.
This creates a pattern that is becoming increasingly common.
High activity. Low clarity.
Most organisations are busy with AI, but not yet better because of it.
That gap is not just organisational. It is human.
Let’s not pretend this isn’t happening. People are already using AI. Quietly in some cases, more openly in others. They are using it to get work done faster, to produce stronger outputs, and to support their thinking and communication. Emails are sharper. Documents are more structured. Responses are more immediate.
There is nothing wrong with that. It is exactly what you would expect when a powerful capability becomes widely available at low cost and high accessibility.
But it introduces a subtle and important shift. AI is improving output faster than it is improving capability. Work looks better. Responses sound more considered. Ideas appear more complete. But that does not always mean deeper understanding, better judgment, or stronger decision-making.
It is entirely possible to produce a well-structured argument without fully owning the thinking behind it. It is possible to respond quickly without fully understanding the implications. And it is possible to appear more capable than one actually is.
Again, none of this is unusual. It is how technology works. It amplifies. But most organisations are not acknowledging this gap, let alone managing it.
At the organisational level, the same pattern continues. Access to AI is growing faster than consistent usage. Pilots are widespread but struggle to scale into production. Governance is lagging behind adoption. In some cases, organisations do not even have a clear view of where AI is being used, by whom, or for what purpose.
AI is often treated as a tool to deploy or a capability to introduce. In reality, it behaves very differently. It is not just a technology shift: it is a change programme, a workflow redesign challenge, and a test of leadership discipline.
Without that recognition, progress stalls.
And when progress stalls, the gap between perception and reality widens.
Customer Impact
This is where the Customer Spectacles lens becomes critical. Inside the organisation, outputs look sharper. Communication is faster. Activity appears high. Dashboards show rollout, licences, and pilot activity. Reports suggest momentum.
From the outside, the experience is far less convincing. Service still feels inconsistent. Processes remain slow or fragmented. Decision-making is not materially clearer. Outcomes have not significantly improved.
The organisation sees progress. The customer sees little difference.
This is where the issue moves from capability to commercial impact.
This is not just a maturity gap. It is a risk. AI is absorbing budget, consuming leadership attention, and increasing operational complexity. It is influencing how work is done, how decisions are made, and how people interact.
Without clear outcomes, it creates false confidence. It drives uneven capability across teams. It introduces hidden dependency on tools that are not fully understood or governed.
Over time, the risk is not that AI fails. The risk is that it quietly underperforms. That it becomes active, visible, and expensive, but not transformative. That it absorbs energy without materially changing results.
This is where a more honest position is required.
There is nothing wrong with people using AI. In fact, they should be. They should be experimenting, learning, testing, and improving how they work. That is how capability develops. That is how organisations move forward.
But organisations need to catch up. That means:
- Making AI use visible rather than pretending it isn’t happening.
- Supporting it properly with guidance and boundaries, not just access.
- Building real capability, not just better outputs.
- Putting guardrails in place so usage is consistent, safe, and effective.
Encourage the use. Remove the illusion. Govern it properly.
The Missing Layer — A Leadership Reality Check
If you’re serious about AI creating value, there are five questions worth asking at board level. Not in theory. In reality.
- Where is AI actually being used today? Not where you think. Not where it’s been approved. Where is it genuinely being used across the organisation — by individuals, by teams, in shadow usage? If you don’t know, you don’t have control.
- Where is it improving outcomes? Not activity. Not output. Where has AI reduced cost, improved customer experience, accelerated decision-making, or increased revenue? If you can’t point to it clearly, value isn’t landing.
- Where is it only improving appearance? This is the uncomfortable one. Where has AI improved how work looks — better reports, sharper emails, more structured thinking — but delivered no material change in outcomes? That’s where false confidence builds.
- Who owns AI in your organisation? Not in principle. In practice. Who decides what is acceptable? Who defines guardrails? Who is accountable when it goes wrong? If the answer is unclear, governance doesn’t exist.
- What would your customer recognise? This is the real test. If your customer looked at your AI activity, what would they actually notice — faster service, better decisions, improved experience — or nothing?
That is where value becomes real.
And right now, in many organisations, AI is everywhere. Value isn’t. And in too many cases, neither is control.
