
BCG and Columbia Business School research reveals that employee-centric organizations are seven times more likely to achieve AI maturity than their peers — and that maturity is built bottom-up, not top-down. Yet 74% of companies can't scale AI beyond pilots, and 42% abandoned most initiatives in 2025 alone. The disconnect points to a fundamental strategic error: treating AI adoption as a technology deployment when it's actually a workforce capability challenge.
Employee-led AI experimentation consistently outperforms centralized pilots — 72% of workers now use AI regularly. Still, business value only emerges when companies redesign workflows around how people actually work, not how executives planned deployments.
The replacement model is collapsing: 55% of executives who made AI-driven redundancies now admit they made the wrong call, with Klarna and Duolingo walking back aggressive automation strategies.
Research consistently shows 40-66% productivity gains when employees direct their own AI use, and the innovation effects (24% increase in product patents) dwarf the cost-cutting returns.
The winning formula invests 70% in people and process, 30% in technology — the inverse of how most enterprises currently allocate AI budgets.
The question for your next planning cycle isn't whether to adopt AI. It's why your $50K pilot is being outperformed by the marketing analyst paying $20 a month for ChatGPT.
The CTO of a Fortune 500 financial services company recently described their AI strategy as "a graveyard of pilots." Over 18 months, they had launched 47 AI initiatives. Forty-three delivered nothing measurable. The remaining four showed promise but couldn't scale beyond their test environments. Total investment: $23 million. P&L impact: zero.
This story isn't unusual. It's the norm.
S&P Global's 2025 survey of over 1,000 enterprises found that 42% of companies abandoned most of their AI initiatives this year, up from just 17% in 2024. The average organization scrapped 46% of AI proofs-of-concept before they reached production. BCG reports that 74% of companies struggle to achieve and scale value from AI. The pattern is consistent across industries: ambitious pilots, substantial investments, and results that quietly disappear from quarterly reports.
Yet something curious is happening in parallel. The same companies watching their official AI programs sputter are home to employees who've figured out how to use AI effectively on their own. McKinsey found that 90% of knowledge workers use personal AI tools for work, even though only 40% have officially sanctioned access. These shadow AI users aren't waiting for permission—they're shipping better work, faster.
The question isn't whether your organization should adopt AI. The question is why the $50,000 pilot your IT department spent months planning is being outperformed by the marketing analyst who pays $20 a month for ChatGPT Pro.

Is the Model Context Protocol on your radar? Has it become a point of contention between developers keen to use MCP servers and security teams concerned about the lack of guardrails?
Stacklok is working with leaders across industries to bring the Model Context Protocol into production on a secure, scalable platform. Curate a registry of trusted MCP servers. Control auth via an MCP gateway.
Learn more at stacklok.com or join us at an upcoming MCP roadshow stop in San Diego, Austin, Atlanta, Boston, New York, or Chicago.

🎙️ AI Confidential Podcast - Are LLMs Dead?
🔮 AI Lesson - Build AI Skills That Remember How You Work
🎯 The AI Marketing Advantage - ChatGPT Just Entered the Ads Game — And Marketers Aren’t Ready
💡 AI CIO - Fresh Minds Outsmart the Experts
📚 AIOS - This is an evolving project. I started with a 14-day free AI email course to get smart on AI. But the next evolution will be a ChatGPT Super-user Course and a course on How to Build AI Agents.


Employee-Led AI Experimentation
The companies winning with AI aren't running bigger pilots. They're running thousands of smaller experiments.
The enterprise AI playbook has followed a predictable pattern: identify a high-value use case, assemble a cross-functional team, partner with a major vendor, run a pilot, prove ROI, then scale. It's the same approach that worked for ERP systems, cloud migrations, and digital transformation initiatives.
It's also failing spectacularly with AI.
What's Actually Going Wrong
The pilot model assumes you can identify the right use case in advance, control the variables, and measure success through traditional metrics. AI doesn't work that way. The highest-value applications often emerge from unexpected places—the customer service rep who discovers a better way to summarize calls, the analyst who figures out how to automate their weekly reports, the product manager who uses AI to synthesize customer feedback at scale.
When 90% of employees are already experimenting with AI tools outside official channels, the pilot isn't discovering new possibilities. It's constraining them.
The data tells a clear story about what's failing. Organizations are spending disproportionately on the wrong priorities—over 50% of AI budgets go to sales, marketing, and customer operations, while back-office automation, which shows the highest demonstrable ROI, remains underfunded. Companies build when they should buy, with only 33% success rates for internal AI builds compared to 67% for vendor partnerships. And perhaps most critically, they focus on technology when they should focus on workflow redesign.
The Stanford Digital Economy Lab found that 42% of AI initiatives in 2025 were abandoned, with "model fetishism"—the obsession with having the newest, most powerful AI—cited as a primary cause. Teams spend months evaluating whether to use GPT-4 or Claude 3.5, even though the real bottleneck is whether anyone has mapped the process the AI is supposed to improve.
Why Shadow AI Outperforms Official Programs
Consider the math. A formal AI pilot involves: stakeholder alignment (2-4 weeks), vendor evaluation (4-8 weeks), security and compliance review (4-6 weeks), development and integration (8-12 weeks), user acceptance testing (2-4 weeks), and training and rollout (2-4 weeks). Total timeline: 5-9 months before results are visible.
An employee with a ChatGPT subscription experiments immediately, iterates daily, and either proves value or moves on within a week. The feedback loops are measured in hours, not quarters.
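For a rough sense of the gap, here is a back-of-the-envelope comparison; the stage durations are the estimates above, and the one-week employee cycle is an illustrative assumption, not a measured figure:

```python
# Rough comparison of feedback-loop length: formal pilot vs. employee-led experiment.
# Stage durations (in weeks) are the illustrative ranges cited above.
pilot_stages = {
    "stakeholder alignment": (2, 4),
    "vendor evaluation": (4, 8),
    "security and compliance review": (4, 6),
    "development and integration": (8, 12),
    "user acceptance testing": (2, 4),
    "training and rollout": (2, 4),
}

low = sum(weeks[0] for weeks in pilot_stages.values())   # 22 weeks
high = sum(weeks[1] for weeks in pilot_stages.values())  # 38 weeks
print(f"Formal pilot: {low}-{high} weeks (~{low / 4.3:.0f}-{high / 4.3:.0f} months)")

# An employee iterating with an off-the-shelf tool proves or discards an idea in
# roughly a week -- an assumption based on the description above.
employee_cycle_weeks = 1
print(f"Employee experiment cycles per pilot cycle: roughly {low // employee_cycle_weeks}-{high // employee_cycle_weeks}")
```

The exact figures matter less than the order of magnitude: a single pilot cycle spans a few dozen employee-scale experiment cycles.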
This isn't about rogue employees circumventing controls. It's about the scientific method. Science advances through many small experiments with quick feedback, not through massive centralized initiatives. Breakthroughs happen when researchers can test hypotheses rapidly, fail cheaply, and iterate.
The same principle applies to AI adoption. The 77% of employees using generative AI at work—with only 28% having formal guidance—aren't being irresponsible. They're running thousands of parallel experiments that no pilot program could replicate.
The Human-AI Collaboration Evidence
The research on employee-led AI use is compelling. A Nielsen Norman Group study found a 66% increase in average productivity when employees integrated AI into their workflows on their own terms. A Science journal publication documented a 40% time reduction and an 18% improvement in quality for professional writing tasks. MIT research on software developers found a 26% increase in output, with effects concentrated in mid-level complexity work, where human judgment and AI capability intersect most productively.
But here's what matters most for long-term strategy: AI assistance increases employee creativity, not just efficiency. An Academy of Management Journal field experiment found that AI assistance elevated creative problem-solving in sales contexts, with the effect "much more pronounced for higher-skilled employees." Employees exposed to intelligent AI assistants report greater creative self-efficacy when organizational infrastructure supports experimentation.
This distinction—creativity versus efficiency—explains why the replacement approach is failing while the augmentation approach succeeds.
The Replacement Trap
Companies rushing to replace workers with AI are learning expensive lessons.
Klarna, the buy-now-pay-later giant, provided a case study in aggressive AI replacement. CEO Sebastian Siemiatkowski bragged that AI bots were doing the work of "700 full-time agents" and stopped hiring for a year. The company partnered with OpenAI, slashed customer service and marketing headcount, and claimed $10 million in savings.
By early 2025, customer service ratings had dipped, user complaints had increased, and Siemiatkowski was forced to acknowledge that "cost unfortunately seems to have been a too predominant evaluation factor." Klarna is now hiring human customer service agents again and facing a net loss of $99 million in Q1 2025, more than double its loss in the same period a year earlier. The company paused its highly anticipated IPO. Siemiatkowski now says: "There will always be a human if you want."
Duolingo followed a similar path. The language learning company cut 10% of its contractor workforce in January 2024 and laid off over 100 contract writers, translators, and curriculum experts. CEO Luis von Ahn announced AI would "eventually replace all contractors." Users complained that the content felt "repetitive, robotic," lacking the playful tone that made Duolingo iconic. Language teachers and linguists raised concerns about educational accuracy and cultural nuance. A massive social media backlash followed. Von Ahn walked back his stance, acknowledging potential quality impacts.
A survey of 1,000 UK C-suite executives by Orgvue revealed the pattern isn't isolated. Of leaders who made redundancies due to AI adoption, 55% admit they made the wrong decisions. The consequences included widespread internal confusion, employee turnover, and—notably—declines in productivity, the opposite of the intended outcome.
Why Replacement Fails, Augmentation Succeeds
The Brookings Institution research tells a different story about what happens when companies invest in AI without the replacement mindset. Analyzing firms from 2010 to 2018, researchers found AI investment associated with a 13% increase in trademarks and a 24% increase in product patents, but only a 1% increase in process patents. Companies were using AI for product innovation, not just cost-cutting.
The researchers concluded: "It does not appear the main use of AI has been to cut costs and replace human workers." The primary effect was sales growth and expansion through increased product innovation. And critically, the overall relationship between AI adoption and employment was positive.
This isn't about being soft on efficiency. It's about understanding where AI creates value. McKinsey found that workflow redesign has the biggest impact on AI's business value—not the sophistication of the model, not the scale of the pilot, but whether the work itself is reconceived to leverage what AI does well while preserving what humans do better.
The formula that emerges from successful implementations is consistent: 70% investment in people, process, and adoption; 30% in technology. Organizations treating AI as a technology deployment problem get the ratio backward.
How to Implement the Experimentation Model
Phase 1: Sanctioned Access (Weeks 1-4)
Provide official AI tool access to all knowledge workers. This isn't about control—it's about visibility. When employees use shadow AI, you can't see what's working. When they use sanctioned tools, experimentation becomes observable.
The security concerns that typically delay this phase are valid but often overstated. Your employees are already using AI. You're choosing between invisible, uncontrolled experimentation and visible, guideline-bounded experimentation.
Phase 2: Use Case Identification (Ongoing)
Instead of top-down use case selection, establish mechanisms to surface what employees are already doing. Regular check-ins, internal showcases, Slack channels for sharing AI wins—the goal is to make successful experiments visible so others can learn from them.
The best use cases often come from unexpected places. The analyst who figures out how to automate a tedious weekly report. The customer service rep who discovers a better way to summarize complex issues. The product manager who uses AI to synthesize hundreds of feedback entries. These discoveries happen at the edges, not in pilot program planning meetings.
Phase 3: Scaling What Works (Months 3-6)
When experiments prove value, invest in making them systematic. This might mean enterprise licensing, custom integrations, or workflow redesign to embed AI assistance into standard processes.
But be selective. Not every successful experiment needs to scale. Some will remain individual productivity tools. Others will transform entire functions. The experimentation phase reveals which is which.
Key Success Factors:
Executive air cover for employee experimentation, explicitly sanctioned
Clear guidelines on data handling and appropriate use (not prohibitions on use)
Visible celebration of successful experiments, especially from unexpected sources
Patience with 2-4 year ROI timelines—sustainable advantage, not quick wins
Common Missteps
Mistaking pilot success for scalability: A pilot in controlled conditions with dedicated support doesn't predict enterprise-wide adoption. The high failure rate isn't about failed pilots—it's about pilots that can't survive contact with real organizational complexity.
Underinvesting in workflow redesign: AI doesn't improve bad processes. It accelerates them. Before any AI implementation, map the current workflow and ask: if we were starting from scratch, how would we design this? The technology decision comes after the process decision.
Measuring the wrong things: Traditional IT metrics—uptime, user adoption, feature utilization—miss what matters. The right questions: What decisions are people making differently? What work is no longer being done? What new capabilities have emerged?
Pursuing replacement when augmentation creates more value: The research is clear—companies that position AI as a tool for workers outperform those positioning it as a replacement for workers. And the companies that tried replacement are walking it back.
Business Value
The productivity evidence is substantial when AI augments rather than replaces:
66% average productivity increase in employee-directed AI use (Nielsen Norman)
40% time reduction with 18% quality improvement in professional tasks (Science)
26% output increase for developers in mid-complexity work (MIT)
73% greater productivity in human-AI collaborative teams versus AI-only or human-only (Pairit)
Beyond productivity, the innovation case is stronger still. The Brookings research found that AI-investing firms saw a 24% increase in product patents. When AI handles routine cognitive work, humans focus on judgment, creativity, and complex problem-solving—exactly the work that creates competitive differentiation.
ROI Considerations:
Expect 2-4 year timelines for meaningful returns. Quick wins exist but sustainable advantage requires workflow transformation. Organizations reporting faster ROI typically had higher AI maturity before beginning—they were ready for the technology.
Competitive Implications:
The companies succeeding with AI aren't those with the biggest pilots or the most sophisticated models. They're the ones that figured out how to run thousands of small experiments simultaneously, surfacing insights from the people closest to the work.
Your competitors' advantage isn't their technology budget. It's whether their employees feel empowered to experiment.
What This Means for Your Planning
The strategic implication is straightforward: stop treating AI adoption as a technology implementation and start treating it as a workforce capability challenge.
This means redirecting budget from centralized pilots to distributed access. It means building measurement systems that can see what employees are already doing with AI. It means accepting that the best use cases will emerge from experimentation, not planning.
For leaders who've watched expensive pilots fail, this is actually good news. The path forward doesn't require more investment—it requires different investment. Enable your people. Make experimentation visible. Scale what works.
The displacement fears are real: entry-level positions are shrinking, and skills atrophy when AI handles too much of the work. But the companies that rushed to replace workers are retreating. The evidence favors augmentation, and augmentation requires human involvement.
One question for your next planning cycle: How many experiments are your employees running right now that you can't see? And what would change if you could?

You built the front end. Now make it work.
Join Jordan van Maanen from Make.com on Tuesday, February 24, at 12 PM EST to learn how to connect tools like Softr and Zoho Forms to real automations using webhooks, data mapping, and multi-step workflows.
Stop building static apps. Start building automated systems.


This week's Deep Dive argues that enabling employee-led AI experimentation outperforms centralized pilots. These tools support that thesis—from sanctioned enterprise platforms to the new wave of autonomous agents that let individuals run their own experiments.
Claude Cowork — Anthropic's newest tool transforms Claude from a chatbot into a desktop collaborator. Point Cowork at a folder and describe what you need—organizing files, synthesizing research, creating expense reports from receipt screenshots. It operates autonomously within user-specified directories, executing multi-step workflows while you step away. Built on the same architecture as Claude Code, Cowork emerged after Anthropic observed developers using its coding tool for non-coding work, such as vacation research, inbox management, and slide decks. The research preview runs in a sandboxed virtual machine with deletion protection and human-in-the-loop confirmation for significant actions.
Pricing: Requires a Claude Max subscription ($100-$200/month)
Best for: Knowledge workers drowning in file organization, document synthesis, and repetitive desktop tasks
Enterprise ready: Partial — Research preview with security caveats; not recommended for regulated workloads; macOS only
Last major update: January 2026 (plugin system launched January 30)
Manus — The tool that defined the "general AI agent" category in March 2025. Unlike chatbots that wait for instructions, Manus independently plans and executes complex tasks—market research, code deployment, data analysis, supplier sourcing—with minimal human guidance. Each session runs on a dedicated cloud virtual machine, providing users with Turing-complete capabilities via natural language. Manus achieved state-of-the-art performance on GAIA benchmarks, outperforming OpenAI's Deep Research. Meta acquired the Singapore-based company in December 2025 for an estimated $2-3 billion; the service continues operating independently. Wide Research, launched this month, spawns multiple Manus instances to tackle complex projects in parallel.
Pricing: Free tier (300 daily credits) / Plus $19/month / Pro $199/month / Team $39/seat/month (5-seat minimum)
Best for: Professionals needing autonomous research, analysis, and workflow execution—without constant supervision
Enterprise ready: Partial — Credit-based pricing unpredictable for complex tasks; regulatory review pending post-acquisition
Last major update: January 2026 (Wide Research multi-agent feature)
Moltbot — The viral open-source project that sold out Mac Minis across major cities. Moltbot runs locally on your machine, connecting to messaging apps (Telegram, Discord, Slack, Signal) while maintaining persistent memory across sessions. Unlike cloud-only solutions, you control your data. It can install tools, troubleshoot software, and execute autonomous workflows, then notify you when finished. The project rebranded from "Clawdbot" in January 2026 due to trademark concerns raised by Anthropic. Security researchers have flagged significant risks in granting shell access to AI agents. Still, for employees who want to experiment with autonomous AI on isolated hardware, Moltbot offers an accessible entry point.
Pricing: Free (open-source) — Requires API keys for Claude, GPT-5, or local models via Ollama
Best for: Technical employees and power users wanting local, private AI automation with full control
Enterprise ready: No — Significant security considerations; "sandbox mode" recommended; not suitable for production workloads
Last major update: January 2026 (rebrand to Moltbot, multi-agent orchestration)
ChatGPT Enterprise/Business — The enterprise standard for sanctioned employee AI access. Over 5 million business users across 92% of Fortune 500 companies. Recent updates add shared Projects (collaborative workspaces with persistent memory), connectors to Gmail, Outlook, GitHub, and SharePoint, plus Custom GPTs for packaging repeatable workflows. OpenAI's December 2025 enterprise report found weekly messages grew 8x year-over-year, with structured workflow usage (Projects, Custom GPTs) up 19x. The platform offers the clearest path to enabling "thousands of small experiments"—employees get sanctioned tools, IT gets governance, and organizations retain full data ownership.
Pricing: Business (formerly Team) $30/user/month / Enterprise custom pricing
Best for: Organizations wanting governed, scalable employee AI access with admin controls and compliance certifications
Enterprise ready: Yes — SOC 2 Type 2, ISO 27001/27017/27018/27701, SSO, no training on business data
Last major update: January 2026 (shared Projects, enhanced connectors)
Glean — When employees experiment with AI, they need access to organizational knowledge. Glean connects to 100+ enterprise applications (Slack, Drive, Confluence, Salesforce, ServiceNow) and provides AI-powered search across all company data while respecting source-system access controls. The platform's "Work Hub" goes beyond search to offer AI assistants, agents, and a unified knowledge layer. For organizations enabling employee AI experimentation, Glean solves the "context problem"—employees can query internal knowledge without manually copying documents into chat interfaces.
Pricing: Enterprise (custom pricing, typically $15-25/user/month)
Best for: Mid-to-large enterprises with fragmented knowledge across multiple SaaS applications
Enterprise ready: Yes — SOC 2 Type II, data residency options, granular permissions, AI governance features
Last major update: November 2025 (enhanced AI agents and Work Hub features)

We’re considering a small change…
I appreciate your support.

Your AI Sherpa,
Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter



