
McKinsey's State of AI 2025 confirms what many executives quietly suspect: AI adoption is nearly universal, but AI value remains stubbornly rare. The gap between doing AI and winning with AI has never been wider.
As McKinsey partner Michael Chui puts it: "It takes hard work to do it well."
88% of organizations now use AI in at least one function, yet only 6% qualify as "high performers" capturing more than 5% of EBIT from their AI investments.
Two-thirds of companies remain stuck in pilot purgatory—running experiments that never scale into enterprise-wide transformation.
High performers are three times more likely to have fundamentally redesigned workflows around AI, treating it as organizational transformation rather than technology deployment. McKinsey's research is unambiguous: "Of 25 attributes tested for organizations of all sizes, the redesign of workflows has the biggest effect on an organization's ability to see EBIT impact from its use of gen AI."
The companies building durable AI advantages share four characteristics: proprietary data assets, compounding feedback loops, deep workflow integration, and organizational AI fluency.
The question isn't whether to invest in AI. It's whether the AI you're building today will matter in 2035.
Gartner's Haritha Khandabattu frames the stakes clearly: "With AI investment remaining strong this year, a sharper emphasis is being placed on using AI for operational scalability and real-time intelligence." The emphasis on scalability and intelligence—not just adoption—signals where the market is heading.

At All Things AI, we believe that building AI fluency is essential. That's why we provide cutting-edge learning opportunities both in person and online.
Lunch & Learn: Agentic Systems: How AI Actually Gets Work Done
Tue, February 10 · 12:00 PM EST · Online
A practical, non-technical look at agentic systems—how AI moves beyond chat interfaces to autonomously execute multi-step tasks, make decisions, and deliver real business outcomes.
In-Person Event: Ask Us Anything About AI
Wed, February 11 · 6:00 PM EST · Loading Dock, Raleigh
An in-person AMA with AI experts at the Loading Dock. Bring your questions about implementing AI in your organization—this is a rare opportunity to get direct access to practitioners in an intimate setting.
The Missing Link: Adding Your Data to Your App
Tue, February 24 · 12:00 PM EST · Online
So you vibe-coded a front-end app—now what? Building a sleek interface is only half the battle. This session covers how to connect your AI applications to real data sources and make them actually useful.
Missed a session? Catch up on recent recordings, and keep up with our latest events by joining the Meetup.com group.

🎙️ AI Confidential Podcast - Are LLMs Dead?
🔮 AI Lesson - Claude Skills: Teach Your AI Once, Use It Forever
🎯 The AI Marketing Advantage - Meta’s AI Ad Machine Makes Creative the Last Lever
💡 AI CIO - Fresh Minds Outsmart the Experts
📚 AIOS - An evolving project that began as a free 14-day AI email course; next up are a ChatGPT Super-user Course and a course on How to Build AI Agents.


Build AI That Lasts
Why most AI investments won't survive the decade—and what to do about it
The CFO at a mid-sized financial services firm had a question that stopped her AI steering committee in its tracks: "Show me the AI investments we made three years ago that still matter today."
The room went quiet. The automated report generator they'd celebrated in 2023? Competitors had the same capability six months later. The customer service chatbot? Replaced twice as better models emerged. The predictive analytics dashboard? Still running, still useful, but no longer a differentiator.
She wasn't being cynical. She was asking the right question. In a world where foundation models improve every quarter and implementation playbooks spread across LinkedIn in real time, what makes any AI investment defensible?
McKinsey's just-released State of AI 2025 report puts numbers to her intuition—and the findings should prompt every leadership team to reconsider how they're allocating AI resources.
The uncomfortable truth is that most AI investments being made today are building capabilities that won't exist in meaningful form by 2035. Not because the technology will fail, but because the advantages will evaporate. When everyone has access to the same foundation models, cloud infrastructure, and implementation playbooks, the technology itself ceases to be the differentiator.
This raises a strategic question that too few leadership teams are asking: What kind of AI are we building, and will it matter in a decade?
The Value/Defensibility Matrix
Not all AI investments are created equal. To understand which ones will last, consider two dimensions: the value an AI capability creates today, and how defensible that value will be over time.
High Value, Low Defensibility: Efficiency Plays
These are the AI investments that deliver immediate returns but limited lasting advantage. Automated customer service responses. Document summarization. Basic predictive analytics. They create real value—often significant cost savings—but that value erodes as competitors deploy the same capabilities. When your efficiency gain becomes table stakes, you're back to competing on something else.
Low Value, High Defensibility: Technical Moats Without Markets
Some organizations build impressive technical capabilities that no one actually needs. Proprietary models trained on narrow datasets. Sophisticated systems solving problems customers don't have. These are genuinely difficult to replicate, but defensibility without value is expensive infrastructure.
Low Value, Low Defensibility: Pilot Purgatory
This is where two-thirds of enterprise AI lives today. Experiments that show promise in demos but never scale. Proofs of concept that prove the concept, but not the business case. These initiatives consume resources without building either immediate value or lasting advantage.
Forrester's research reinforces the risk: "The expectation for immediate returns on AI investments will see many enterprises scaling back their efforts sooner than they should." The pilot trap isn't just about wasted resources—it's about abandoning initiatives before they have time to build durable value.
High Value, High Defensibility: Durable AI
The upper-right quadrant is where the 6% live. These organizations have built AI capabilities that create substantial value today and become more valuable—and harder to replicate—over time. They've moved beyond AI as a tool to AI as a transformation of how the business operates.
The Four AI Moats
What separates durable AI from expensive experiments? Four characteristics appear consistently in organizations building lasting AI advantages:
Moat 1: Proprietary Data Assets
Foundation models are commoditizing. What isn't commoditizing is the data that makes those models useful for specific business contexts. Organizations building durable AI are systematically creating proprietary data assets—customer interaction histories, operational telemetry, domain-specific knowledge graphs—that competitors who didn't start collecting five years ago can't replicate.
The key insight: data moats aren't about having more data. They're about having data that reflects your specific customers, operations, and market position. A regional healthcare system's patient outcome data is more valuable than a larger, generic dataset because it captures the specific population it serves.
Moat 2: Compounding Feedback Loops
The most durable AI advantages get better the more they're used. Every customer interaction improves the recommendation engine. Every operational decision refines the prediction model. Every exception handled teaches the system something new.
McKinsey's research shows that high performers are significantly more likely to have defined processes for continuous model improvement. They treat AI not as a one-time deployment but as a learning system that compounds value over time. Organizations without feedback loops are running on a treadmill—constantly investing to maintain current performance levels.
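To make the pattern concrete, here is a minimal Python sketch of a feedback loop. All names and thresholds are illustrative assumptions, not any particular vendor's API: every handled interaction is logged with its outcome, and the model retrains on the accumulated history once enough new examples arrive.

```python
# A minimal feedback-loop sketch. All names (record_outcome, retrain_if_due)
# and thresholds are illustrative assumptions, not a specific product's API.
from collections import deque

interaction_log: deque = deque()  # grows with every use of the system

def record_outcome(features: dict, prediction: str, actual: str) -> None:
    """Log what the model predicted alongside what actually happened."""
    interaction_log.append(
        {"features": features, "prediction": prediction, "actual": actual}
    )

def retrain_if_due(batch_size: int = 1000) -> None:
    """Fold accumulated, labeled outcomes back into the training pipeline."""
    if len(interaction_log) >= batch_size:
        batch = [interaction_log.popleft() for _ in range(batch_size)]
        # Placeholder: hand `batch` to your real training job here.
        print(f"Retraining on {len(batch)} fresh examples from production")

# Every exception handled teaches the system something new:
record_outcome({"channel": "email"}, prediction="routine", actual="escalation")
retrain_if_due(batch_size=1)
```

The point isn't the plumbing. It's that usage data has a deliberate path back into the model, so performance compounds instead of plateauing.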
Moat 3: Deep Workflow Integration
High performers are three times more likely than average organizations to have fundamentally redesigned workflows as part of their AI deployments. This isn't coincidental. Surface-level AI integration—adding a chatbot, automating a report—delivers surface-level advantages. Deep integration—rethinking how decisions get made, how work flows through the organization, how humans and AI systems collaborate—creates advantages that are genuinely difficult to replicate.
Workflow integration is a moat because it requires organizational change, not just technical implementation. Competitors can copy your technology. They can't easily replicate your culture, processes, and institutional knowledge for making AI work in your specific context.
Moat 4: Organizational AI Fluency
The most underappreciated moat is human capital. Organizations where leaders understand AI's capabilities and limitations, where employees know how to work effectively with AI systems, where the culture supports experimentation and learning—these organizations can move faster, deploy more effectively, and adapt more quickly than competitors still treating AI as an IT project.
McKinsey found that high performers are three times more likely to strongly agree that senior leaders demonstrate ownership of and commitment to AI initiatives, including role modeling AI use. This leadership fluency cascades through the organization, creating an institutional capability that takes years to develop.
Reid Hoffman, in his book Superagency, captures this broader transformation: "Superagency is that elevation of human agency that we get when we get new superpowers from technology, and in particular, when millions of us get that new superpower at the same time." The organizations building durable AI aren't just deploying tools—they're elevating their people's capabilities in ways that compound over time.
How to Build for Durability
Moving from pilot purgatory to durable AI requires a different approach from the one most organizations are taking today. Here's what the transition looks like:
Phase 1: Honest Assessment
Map your current AI initiatives on the Value/Defensibility Matrix. Be ruthless. Most organizations discover that 80% of their AI portfolio falls in the lower-left quadrant—low value, low defensibility. That's not a failure; it's a starting point. The goal isn't to abandon those initiatives but to understand which ones can be elevated and which are consuming resources without building lasting advantage.
Evaluate each initiative against the Four Moats criteria:
Does this build proprietary data assets?
Does this create compounding feedback loops?
Does this require deep workflow integration?
Does this develop organizational AI fluency?
Initiatives that score zero across all four categories are unlikely to create a durable advantage regardless of their current value.
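If it helps to make the audit mechanical, here's a minimal Python sketch of the scoring rubric. The initiatives, scores, and thresholds are made-up examples to show the mechanics, not benchmarks from the report.

```python
# Sketch of the Phase 1 moat audit. Initiatives, scores, and thresholds
# below are illustrative assumptions, not benchmarks from the report.
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    current_value: int        # 0-2: business value delivered today
    proprietary_data: int     # 0-2 on each of the Four Moats
    feedback_loops: int
    workflow_integration: int
    fluency_building: int

    @property
    def moat_score(self) -> int:
        return (self.proprietary_data + self.feedback_loops
                + self.workflow_integration + self.fluency_building)

def quadrant(init: Initiative) -> str:
    """Place an initiative on the Value/Defensibility Matrix."""
    high_value = init.current_value >= 2   # assumed threshold
    defensible = init.moat_score >= 4      # assumed threshold
    if high_value and defensible:
        return "Durable AI"
    if high_value:
        return "Efficiency Play"
    if defensible:
        return "Technical Moat Without a Market"
    return "Pilot Purgatory"

portfolio = [
    Initiative("Support chatbot", 2, 0, 1, 0, 1),
    Initiative("Patient-outcome knowledge graph", 2, 2, 2, 1, 1),
    Initiative("Quarterly report summarizer", 1, 0, 0, 0, 0),
]
for init in portfolio:
    print(f"{init.name}: moat score {init.moat_score}/8 -> {quadrant(init)}")
```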
Phase 2: Portfolio Rebalancing
Shift investment toward initiatives with moat potential. This doesn't mean abandoning efficiency plays—they fund the transformation. But it does mean allocating a meaningful portion of AI investment to capabilities that will compound over time.
The most common mistake at this stage is under-investing in data infrastructure. Organizations focus on model deployment while neglecting the data assets that will determine long-term value. Building proprietary data capabilities isn't exciting, but it's essential.
Phase 3: Workflow Transformation
This is where most organizations stall. They deploy AI tools without redesigning the work. High performers do the opposite—they treat AI deployment as an opportunity to rethink how work gets done.
The key question: If we were designing this workflow from scratch, assuming AI is a given, what would it look like? The answer is rarely "the same process with an AI tool bolted on."
Phase 4: Governance as Competitive Advantage
Here's a counterintuitive finding from McKinsey's research: AI high performers actually report more negative consequences from AI than average adopters—particularly around intellectual property infringement and regulatory compliance. Why? Because they're pushing AI into more complex, higher-stakes domains.
But they're also more likely to have human-in-the-loop protocols, rigorous output validation, and centralized governance structures. According to ISACA's 2025 AI Pulse Poll, only 31% of organizations have a formal, comprehensive AI policy in place—a stark disparity between how often AI is used and how closely it's governed.
The organizations building durable AI treat governance not as a compliance burden but as an enabler. Strong governance allows you to deploy AI in higher-value, higher-risk domains where competitors without governance frameworks can't follow.
Gartner's data underscores this urgency: the firm predicts that through 2026, organizations will abandon 60% of AI projects unsupported by AI-ready data. Governance isn't just about risk mitigation—it's about building the data foundation that enables sustainable AI.
Forrester's warning is equally stark: "Three out of four firms that build aspirational agentic architectures on their own will fail." The message is clear: governance and infrastructure aren't optional. They're prerequisites for the agentic future everyone is racing toward.
Key Success Factors:
Executive sponsorship that goes beyond approval to active engagement and role modeling
Cross-functional teams that combine technical capability with domain expertise
Clear metrics tied to business outcomes, not AI activity
Patience—durable advantages take years to build
Common Missteps
Mistaking Activity for Progress
The most common failure mode is confusing AI activity with AI value. The number of pilots launched, models deployed, or use cases identified is a vanity metric. The only metrics that matter are business outcomes: revenue impact, cost reduction, and competitive differentiation. Organizations that can't tie their AI investments to financial results are almost certainly in pilot purgatory.
Chasing the Latest Model
Every few months, a new foundation model promises transformative capabilities. Organizations that constantly restart their AI initiatives to incorporate the latest technology never build the institutional knowledge, workflow integration, or data assets that create a durable advantage. The technology will keep changing. Your moats need to transcend any particular model generation.
Treating Governance as a Brake
51% of organizations using AI report experiencing at least one negative consequence, with nearly 30% citing AI inaccuracy as the cause. Organizations that respond by restricting AI use are solving the wrong problem. The goal isn't to avoid AI risk—it's to build the governance capabilities that allow you to take intelligent risks at scale while competitors remain cautious.
Underestimating the Organizational Challenge
AI transformation is 20% technology and 80% organizational change. The technical implementation is typically the easy part. Getting people to change how they work, building new skills across the organization, and redesigning processes that have operated the same way for decades—these are where most transformations stall. Organizations that treat AI as a technology project rather than an organizational transformation rarely escape pilot purgatory.
Business Value
The gap between AI high performers and everyone else is substantial and growing. High performers report more than 5% of EBIT attributable to AI—a threshold that only 6% of organizations achieve. More significantly, they're positioned for that gap to widen as their moats deepen.
ROI Considerations:
The traditional approach to AI ROI—calculating cost savings from automation—captures only a fraction of the potential value. Durable AI advantages create value through:
Revenue growth from capabilities competitors can't match
Pricing power from differentiated customer experiences
Operational agility that enables faster response to market changes
Talent attraction from reputation as an AI-forward organization
Organizations focused solely on cost reduction are competing for the smallest slice of AI value.
Competitive Implications:
The window for building durable AI advantages is narrowing. Organizations that establish moats in the next 2 to 3 years will be hard to catch. Those still in pilot purgatory by 2028 will face a stark choice: compete on commoditized AI capabilities where margins are thin, or cede AI-enabled market segments to competitors who moved faster.
This isn't speculation. It's the pattern that played out in digital transformation, cloud adoption, and every previous technology wave. Early movers who invested in building institutional capabilities maintained advantages for decades. Late movers never caught up.
What This Means for Your Planning
The strategic question for 2026 isn't whether to invest in AI—that question is settled. It's whether the AI you're building will matter in 2035.
Most organizations are building AI that won't. They're deploying tools without transforming workflows. They're chasing efficiency without building moats. They're accumulating pilots without creating durable capabilities.
The 6% taking a different approach are building proprietary data assets that compound in value. They're creating feedback loops that make their AI better with every use. They're integrating AI so deeply into their workflows that replication would require competitors to transform their own organizations. And they're developing organizational fluency that enables them to move faster and deploy more boldly.
The honest assessment starts with a simple question: If you map your current AI portfolio on the Value/Defensibility Matrix, where does it fall? If the answer is primarily the lower left—low value, low defensibility—you have the diagnosis. The prescription is a systematic shift toward initiatives that build moats, not just capabilities.
The organizations that will win the next decade of AI aren't necessarily the ones spending the most. They're the ones building AI that lasts.
Author’s note: This week’s complete edition—including the AI Toolbox and a hands-on Productivity Prompt—is now live on our website. Read it here.

No slides. No vendor pitches. Just real answers.
Join AI practitioners from Google, Make.com, and more for an intimate AMA on what’s actually working in enterprise AI right now — agents, automation, strategy, governance, ROI, and beyond.
Bring the questions you can’t get answered online.
Come curious. Leave smarter.
Date: Wednesday, Feb 11 · 6–8 PM EST
Location: Loading Dock, Raleigh NC


Tools for building and defending your AI portfolio.
ModelOp — AI Governance Platform: Enterprise-grade model operations platform that automates governance workflows across the AI lifecycle. Strong compliance documentation with automated model inventory, risk scoring, and audit trails. Recognized in Gartner's 2025 Market Guide for AI Governance.
Credo AI — Responsible AI Platform: Policy-to-process platform that translates AI governance policies into technical controls. Particularly strong on fairness assessments and EU AI Act compliance workflows. Named Forrester Wave Leader Q3 2025.
Holistic AI — AI Risk Management: Comprehensive risk assessment platform spanning bias auditing, explainability, and security testing. Academic foundations from University College London bring methodological rigor. IDC ProductScape 2025, Gartner Cool Vendor 2024.
OneTrust AI Governance — Privacy-First Governance: Extends OneTrust's privacy management platform to AI governance. Natural fit for organizations already using OneTrust for data privacy, adding AI inventory and risk assessment capabilities.
Arthur AI — Model Performance & Monitoring: Real-time monitoring platform focused on model performance, drift detection, and explainability. Developer-friendly with strong API integration. Recently launched Agent Discovery and Governance Platform on Google Cloud.

Prompt of the Week: AI Portfolio Assessment
The Problem
Most organizations can't answer a fundamental question: which of our AI investments will still matter in five years? They track launched pilots and deployed models, but not whether those initiatives are building a durable competitive advantage. This prompt helps leadership teams audit their AI portfolio through the lens of defensibility rather than immediate value alone.
Why This Prompt Works
This prompt applies the Value/Defensibility Matrix framework from this week's Deep Dive to your specific portfolio. It assigns a strategic analyst role to ensure business-focused evaluation, requires specific evidence for each assessment, and produces a prioritized action plan rather than a mere diagnosis.
The Prompt
You are a strategic analyst evaluating an enterprise AI portfolio for long-term competitive advantage. Your task is to assess each initiative against four "moat" criteria and recommend portfolio rebalancing.
## Context
I will provide a list of our current AI initiatives with brief descriptions. Evaluate each against the criteria below.
## AI Initiatives
[PASTE YOUR LIST OF AI INITIATIVES WITH 1-2 SENTENCE DESCRIPTIONS]
## Moat Assessment Criteria
For each initiative, score 0-2 on each dimension:
1. Proprietary Data: Does this build unique data assets competitors can't easily replicate?
2. Feedback Loops: Does this get better with use? Does usage data improve future performance?
3. Workflow Integration: Is this deeply embedded in how work gets done, or easily replaceable?
4. Fluency Building: Does this develop organizational AI capabilities that transfer to future initiatives?
## Output Format
Provide:
1. Scoring table (initiative × four criteria, 0-2 each)
2. Portfolio map placing each initiative on Value/Defensibility matrix
3. Top 3 initiatives to double down on (highest moat scores)
4. Bottom 3 initiatives to sunset or deprioritize
5. Gaps: moat categories where portfolio is weakest
6. 90-day action plan with specific next steps
## Constraints
- Be direct about initiatives that score poorly
- Distinguish between current value and defensibility potential
- Flag initiatives that could be elevated with specific changes
- Note where you need more information to assess accurately
Example Use Case
A VP of Data Science uses this prompt to prepare for an annual planning session. They paste descriptions of 12 active AI initiatives and discover that 8 score zero on "Feedback Loops"—meaning they're deploying static models that don't improve with use. This insight shifts the conversation from "which pilots to fund next" to "how do we add learning systems to our existing investments?"
Variations
For Board Presentations: Add "Generate a one-page executive summary with the three most important strategic findings"
For Budget Planning: Add "Estimate relative investment levels for each quadrant of the Value/Defensibility Matrix"
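If you'd rather run this assessment programmatically, for example against an export of your initiative list, here's a minimal sketch using the OpenAI Python SDK (openai>=1.0). The model name and file path are assumptions; any chat-completion API would work the same way.

```python
# Minimal sketch: running the portfolio-assessment prompt through the
# OpenAI Python SDK (openai>=1.0). Model name and file path are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You are a strategic analyst evaluating an enterprise AI portfolio
for long-term competitive advantage. ...
## AI Initiatives
{initiatives}
"""  # paste the full prompt from above, keeping {initiatives} as the placeholder

with open("initiatives.txt") as f:  # one initiative and description per line
    initiative_list = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed; substitute your preferred model
    messages=[{"role": "user",
               "content": PROMPT.format(initiatives=initiative_list)}],
)
print(response.choices[0].message.content)
```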

I appreciate your support.

Your AI Sherpa,
Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter



