
Dear WIMM Supporter,
Easter is coming, but I will still publish 5 posts next week. In the meantime, here are the posts scheduled for this week:
2 April: [Essay] AI needs consultants
3 April: [Market updates] ARM
4 April: [How startups work] Waymo → a new category for WIMM
5 April: [Deep dive] on Intel
Onto the update:
TL;DR
The implementation gap is the real problem. Most companies have access to AI tools but lack the organizational capacity to turn them into working systems.
AI alone creates no value. Value only emerges when AI is embedded into real workflows, decision rights, and management routines, and that requires human expertise to bridge the gap between technology and business.
A new role is becoming essential. Forward-deployed engineers and AI implementation specialists (pioneered by Palantir, now adopted across the industry) sit between the tool and the organization, doing the unglamorous work of making AI actually useful.
Competitive advantage goes to those who can operationalize, not just access. As models commoditize, the winning edge will belong to organizations that build the internal capability to deploy, adapt, and measure AI in context.

AI Needs Consultants
By the time most executives finish their AI strategy presentations, they have already solved the wrong problem.
They have chosen vendors, negotiated licenses, stood up infrastructure, and declared transformation underway. What they have not done, and what almost no one helps them do, is figure out which decisions should change, which workflows should be rebuilt, which employees need new skills, and how the organization will actually know whether any of it is working. The hard part of AI adoption has never been acquiring the technology. It has always been making it useful.
This is the implementation gap, and it is now wider than the innovation gap. For the past several years, the dominant narrative around artificial intelligence has focused on capability: what models can do, how fast they are improving, how many parameters they contain. That conversation has largely been won, because the models exist, the tools are commercially available, and the compute is accessible. What companies are discovering, often quietly, in retrospective analyses of stalled pilots and underwhelming proofs of concept, is that capability alone creates nothing. Value requires implementation.
Hence, the questions that actually stall enterprise AI are organizational:
Which processes should be redesigned first, and who decides?
Where does AI generate genuine economic return versus performing as expensive theater for the board?
How should managers restructure accountability when AI is making or influencing decisions?
What data governance, compliance, and audit frameworks need to exist before a tool touches a customer?
How should performance be measured when human and machine labor are increasingly intertwined?
These are not questions that a software vendor can answer, because they require a different kind of expertise entirely.
The translators the enterprise needs
This is why a new professional role is quietly becoming indispensable in the AI economy: the implementation specialist, the forward-deployed engineer, the AI transformation partner, the Palantirization of everything… whatever name an organization assigns to the person who sits between the technology and the business, and makes them work together:

source: a16z
These are not traditional IT consultants, nor are they data scientists. They are organizational translators. Their job is to understand a business deeply enough to know where AI can be embedded in ways that actually change performance, and then to do the difficult, unglamorous work of making that happen.
That work includes diagnosing which use cases have genuine leverage, redesigning the workflows that AI will touch, integrating systems that were never built to communicate, training teams whose instincts were formed in a pre-AI operating environment, managing the human resistance that inevitably accompanies change, and establishing the measurement frameworks that tell leaders whether the investment is compounding or decaying. None of this is automatic. None of it ships with the software.
Why generic tools are not enough
The reason generic tools underdeliver is that organizations are not standardized environments. Every company is an accumulation of particular choices: particular customers, particular incentive structures, particular legacy systems, particular cultures, particular risk tolerances, particular data architectures built over decades of acquisition and improvisation. Off-the-shelf AI can offer enormous potential, but potential is not a business outcome. Realizing that potential requires customization, orchestration, and what might best be called organizational translation, the act of rendering a general capability into a specific, trusted, governed workflow that real employees use in their actual jobs.
This translation work is harder than it looks, and it is precisely what enterprises consistently underinvest in. Companies spend heavily on model access and software licenses. They spend comparatively little on the human capacity needed to embed those tools in ways that are durable, measurable, and aligned with how the business actually runs.
The economics of implementation
The real question is not “How much do consultants cost?” but “What value goes unrealized without successful implementation?”
That framing changes the economics considerably. Consider the calculus in customer service operations. An AI-assisted triage and response system can reduce average handling time, improve resolution rates, and free senior agents for complex escalations. But none of that happens automatically. Someone must map the existing process, identify where the handoffs break down, redesign the routing logic, train the model on the organization's specific products and policies, and teach supervisors how to interpret performance data in a new operating model. The organizations that invest in that implementation work see compounding returns: faster execution, lower operating costs, better customer experience, and agents who are more productive and less burned out. Those who simply license the tool and expect the transformation to follow typically see none of it.
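To make that calculus concrete, here is a back-of-envelope sketch of the customer-service case. Every number below is a hypothetical assumption for illustration, not data from any real engagement; the point is the structure of the arithmetic, in which modest per-call time savings compound across a large agent pool and dwarf both the license fee and the implementation work.

```python
# Illustrative ROI arithmetic for an AI-assisted customer-service rollout.
# ALL figures are hypothetical assumptions chosen for illustration.

AGENTS = 200                      # size of the support organization
CALLS_PER_AGENT_PER_DAY = 40
WORKDAYS_PER_YEAR = 250
COST_PER_AGENT_MINUTE = 0.75      # fully loaded labor cost, USD

BASELINE_HANDLE_TIME_MIN = 8.0    # average handling time today
AI_HANDLE_TIME_MIN = 6.5          # assumes the workflow redesign succeeds

LICENSE_COST = 150_000            # annual tool licensing
IMPLEMENTATION_COST = 400_000     # process mapping, integration, training

# Minutes saved per year across the whole organization.
minutes_saved = (
    AGENTS * CALLS_PER_AGENT_PER_DAY * WORKDAYS_PER_YEAR
    * (BASELINE_HANDLE_TIME_MIN - AI_HANDLE_TIME_MIN)
)

gross_savings = minutes_saved * COST_PER_AGENT_MINUTE
net_first_year = gross_savings - LICENSE_COST - IMPLEMENTATION_COST

print(f"Gross annual savings: ${gross_savings:,.0f}")   # $2,250,000
print(f"Net first-year value: ${net_first_year:,.0f}")  # $1,700,000
```

Under these assumptions the implementation work costs more than the license itself, yet the deployment still clears seven figures in year one. Skip the implementation and the time savings simply never materialize: the license becomes a pure cost.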

The pattern repeats across industries. A regional hospital system piloting AI-assisted clinical documentation found that physician adoption stalled because no one had redesigned the clinical workflow to make AI assistance feel natural rather than interruptive.
When an implementation team rebuilt the interaction model around how physicians actually moved through a patient encounter, rather than how the software engineers imagined they did, adoption rates and documentation quality both improved materially. The model had not changed, but the organizational context around it had.
In legal services, Baker McKenzie implemented AI-powered contract lifecycle management across 45 of its offices worldwide, achieving 85% faster contract review, $3.5 million in annual savings, and a 99.7% compliance rate. Wolters Kluwer's research on legal AI adoption more broadly found that firms that captured value from AI implementations were, on average, billing 20% more hours and achieving 300% ROI, but only when workflows had been explicitly redesigned to integrate the tools, risk thresholds had been defined, and attorneys had been trained to engage with AI output critically rather than either over-relying on it or ignoring it.
The value is realized only when someone has defined acceptable risk thresholds, established review protocols, calibrated how much practitioners should trust the output, and ensured the workflow meets the firm's compliance obligations. That work is professional, contextual, and deeply human. Below is a slide from the annual presentation I deliver to companies.
Consulting, reimagined
These dynamics suggest something larger about the future of consulting itself. The most valuable advisors in the AI era will not be those who arrive with elegant frameworks and depart before implementation begins. The gap between strategy and execution has always been wide in enterprise transformation; AI makes that gap consequential in ways that are difficult to hide. Organizations need partners who stay through deployment, who iterate when the first design fails, who measure outcomes against what was actually promised, and who build internal capability rather than dependency.
The best consulting firms already sense this shift. The Palantir-Bain partnership is one visible signal. The emergence of "AI transformation" practices at every major professional services firm is another. The distinction between strategic advice and implementation support is collapsing. What clients need now is someone who can help them build the organizational capacity to use the technology. That is a fundamentally different engagement model, one defined less by the elegance of the deck and more by the durability of what gets built.

“Where is my moat?” is designed for individual readers, though the occasional forward is absolutely fine. If you’d like to set up multiple subscriptions for your team with a group discount (minimum 5 seats), reach out to me directly.
Thanks for your support & have a wonderful day!

