EXECUTIVE SUMMARY

AI adoption is nearly universal — over 92% of developers now use AI tools — but the productivity gains have stalled at around 10%. A landmark METR study found that experienced developers were actually 19% slower with AI assistants, while believing they were 24% faster. A Fortune survey of 6,000 global executives confirms the pattern: AI is everywhere except in the results. The problem isn't the technology. It's the absence of systems around the technology. Most people adopt AI tools superficially — bouncing between apps, copying and pasting between platforms, never building the reusable prompts, integrations, and workflows that turn occasional use into compounding proficiency.

Drawing on Eli Goldratt's Theory of Constraints and the systems philosophy from Atomic Habits, this Deep Dive argues that the bottleneck in AI productivity isn't generation — it's everything that happens after. I share my own experience going deep for a week: building reusable Skills, vibe-coding MCP plugins for Beehiiv and WordPress, and checking it all into GitHub. The takeaway: you won't rise to the level of your AI ambitions. You'll fall to the level of your AI systems.

"You do not rise to the level of your goals. You fall to the level of your systems."

That line from Atomic Habits has been rattling around my head for weeks — not because I'm trying to build better morning routines, but because it perfectly captures what's going wrong with how most of us are adopting AI.

We set ambitious goals. We'll use AI to write faster, code smarter, and automate everything. We download the apps. We pay for the subscriptions. We tell ourselves we're "AI-first." And then we fall to the level of our systems — which, for most of us, means no system at all.

I know this because I just caught myself doing it.

Stop Collecting AI Tools. Start Building AI Systems.

Why systems thinking — not more apps — is the key to real AI productivity

The uncomfortable truth the industry isn't discussing is that everyone is quick to adopt AI but painfully slow to become proficient at it. The data backs this up. A recent analysis of 121,000 developers across 450+ companies found that 92.6% now use AI tools — but productivity gains have plateaued at roughly 10% since the initial bump. Even more striking, a randomized controlled trial by METR found that experienced open-source developers took 19% longer to complete tasks when using AI coding assistants. The kicker? Those same developers estimated they were 24% faster. They weren't just unproductive — they were confidently unproductive.

This isn't a technology problem. It's a systems problem. A Fortune survey of 6,000 executives across the U.S., U.K., Germany, and Australia, published just last week, found that the vast majority see little measurable impact from AI on their operations. As one economist put it, "AI is everywhere except in the incoming macroeconomic data." We have a classic productivity paradox on our hands — the same pattern we saw with IT spending in the 1980s, which took a decade to resolve. And at the individual level, it shows up as what I'd call the "shiny object loop": download a new tool, use it superficially for a few days, get excited about another one, repeat. You're technically "using AI" — but you're not actually getting better at anything.

What Is an AI System?

An AI system isn't a tool; it's a structured, repeatable process that integrates AI capabilities to achieve a specific business outcome. It's the difference between using a calculator for a single sum and building a spreadsheet model that automates an entire financial forecast. A tool helps you perform a task; a system transforms a workflow.

At the individual level, an AI system is a personal framework of reusable prompts, codified knowledge, and automated connections. It's a personal knowledge base that you continuously curate, turning one-off answers into compounding expertise. Instead of asking ChatGPT the same type of question every day, you build a Skill or a custom GPT that codifies your best approach, complete with your preferred format, tone, and quality standards. Your ad-hoc queries evolve into a personal operating system.

At the team level, an AI system is a shared set of tools, integrations, and best practices that enables seamless collaboration. It's a centralized prompt library, a common set of fine-tuned models, and automated workflows that pipe AI-generated content directly into shared platforms like Slack, Notion, or Jira. The goal is to move from individual, isolated uses of AI to a collective, integrated capability in which the output of one person's AI-assisted work becomes a reliable input to someone else's.

At the organizational level, an AI system is a strategic platform that connects AI capabilities to core business processes. It involves building or integrating with platforms that offer robust APIs, security controls, and monitoring — like the Model Context Protocol (MCP). It means treating your AI infrastructure with the same seriousness as your CRM or ERP, with a focus on governance, scalability, and ROI. It's the shift from letting employees expense a handful of AI tools to building a secure, internal AI platform that gives teams access to approved models and data sources.

Is Your AI Stack a Constraint?

In The Goal, Eli Goldratt introduced the Theory of Constraints — the idea that every system is limited by its bottleneck, and optimizing anything other than the bottleneck is an illusion of progress. You can make every machine on the factory floor faster, but if one station can only process 50 units per hour, the whole line produces 50 units per hour.

Apply that lens to how most people use AI, and the problem becomes obvious. We're optimizing the wrong things. We speed up the generation step — drafting emails, writing code, producing content — while ignoring the actual bottleneck. For most knowledge workers, the bottleneck isn't creating the first draft. It's everything that happens after: editing, formatting, reviewing, integrating the output into your actual workflow, copying it from one app to another, reworking it when the AI got the tone wrong because you didn't give it enough context.

Goldratt's five focusing steps are instructive here.

  • Identify the constraint — where does your AI-assisted workflow actually slow down or break? For me, it was the handoff. I'd generate something useful in Claude or ChatGPT, then spend endless minutes cutting and pasting into Beehiiv, into WordPress, into Google Docs. The AI made the first 20% of the work faster and the remaining 80% no different.

  • Exploit the constraint — before adding anything new, squeeze everything you can from the bottleneck. That means getting better at the tools you already have, not adding more.

  • Subordinate everything else — this is the hard part. It means stopping other activities that don't serve the bottleneck.

  • Elevate the constraint — once you've exhausted exploitation, invest in removing the bottleneck entirely. Build integrations. Create reusable systems. Automate the handoff.

  • Repeat — because when you fix one bottleneck, a new one emerges. Systems thinking is never finished.

My Week of Going Deep

Last week, I did something uncomfortable. I stopped using almost everything — no bouncing between five AI apps. No trying the latest model release for an afternoon. No surface-level experiments. Instead, I picked two platforms — Claude and Manus — and went deep. Really deep. The kind of deep where you start writing code at 9 AM and look up to find it's midnight, and you honestly can't remember eating lunch.

I created Skills in Claude and Manus. Skills are reusable prompt frameworks — codified instructions that tell the AI exactly how to perform a specific task the way I want it done. Not a one-off prompt I'll forget tomorrow. A persistent, portable system I can invoke again and again. I wrote skills for creating newsletter editions like this one, for LinkedIn content, and for whitepapers. Each one includes specific examples of what "good" looks like, voice guidelines, structural templates, and quality standards.
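To make that concrete, here's a stripped-down sketch of what a skill boils down to. The names and structure are illustrative, not Claude's or Manus's actual Skill file format: explicit instructions, voice guidelines, a structural template, and a gold-standard example, all assembled into one reusable prompt instead of a one-off you retype every time.

```python
# Illustrative sketch of a "skill": a codified, reusable prompt template.
# Field names are hypothetical, not a real Skill file schema.
NEWSLETTER_SKILL = {
    "task": "Write a newsletter edition on the given topic.",
    "voice": [
        "First person, conversational, direct.",
        "Back every claim with a specific data point.",
    ],
    "structure": [
        "Executive summary",
        "Core argument",
        "Practical framework",
        "Closing question",
    ],
    "example": "AI adoption is nearly universal, but the gains have stalled...",
}

def render_skill(skill: dict, topic: str) -> str:
    """Assemble the codified instructions into a single reusable prompt."""
    voice = "\n".join(f"- {v}" for v in skill["voice"])
    structure = "\n".join(f"{i}. {s}" for i, s in enumerate(skill["structure"], 1))
    return (
        f"{skill['task']}\nTopic: {topic}\n\n"
        f"Voice guidelines:\n{voice}\n\n"
        f"Structure:\n{structure}\n\n"
        f"Example of an A+ opening:\n{skill['example']}"
    )

print(render_skill(NEWSLETTER_SKILL, "AI systems over AI tools"))
```

The point isn't this particular code; it's that the instructions, examples, and quality bar live in one versioned artifact you invoke, rather than in your memory.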

I vibe-coded MCP plugins for Beehiiv and WordPress. This is the part that still surprises me. MCP — Model Context Protocol — is essentially a universal connector that lets AI tools talk directly to other software. I built a plugin that lets Claude publish directly to Beehiiv, the platform I use to send this newsletter. I built another for WordPress to update websites. Both are still rough, but the concept works: instead of generating content in one place and manually moving it to another, the AI writes and delivers.
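The heart of a connector like this is smaller than you'd think. Here's a simplified sketch — not my actual plugin — using only Python's standard library. It targets the real WordPress REST API endpoint (POST /wp-json/wp/v2/posts, application-password auth); the site, user, and credential values are placeholders, and an MCP plugin would wrap this in a tool the AI can call directly.

```python
import base64
import json
from urllib import request

def build_wp_draft(site: str, user: str, app_password: str,
                   title: str, html: str) -> request.Request:
    """Build an authenticated request that creates a draft post via the
    WordPress REST API (POST /wp-json/wp/v2/posts)."""
    payload = json.dumps({
        "title": title,
        "content": html,
        "status": "draft",  # publish as a draft so a human still reviews it
    }).encode()
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    return request.Request(
        f"{site}/wp-json/wp/v2/posts",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

# An MCP tool would then send it (not executed here):
# request.urlopen(build_wp_draft("https://example.com", "mark", "xxxx",
#                                "My Post", "<p>Hello</p>"))
```

The Beehiiv plugin follows the same pattern against Beehiiv's API. Once a connector like this exists, "generate, copy, paste, reformat" collapses into "generate and deliver."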

I checked everything into GitHub — using natural language. I didn't open a terminal or type a single git command. I told Manus to create a repository and commit my skills, and it did. No command line. No technical ceremony. But the result is the same: I now have a single source of truth for all my skills and plugins. They're versioned, portable, and shareable. And here's the part that makes this a real system rather than a one-time project: I'm continuously curating those repositories. When I add a good example that improves output quality, it goes into the repo. When an example doesn't seem to guide the AI toward the results I want, I pull it out. The repository isn't a backup — it's a living system that gets better every time I touch it.

I'm not exaggerating when I say I wrote more code last week than I have in my entire life. And I'm not a developer. That's the point. The tools have reached a level where a non-developer with 30 years of technology experience can build real integrations using plain English — if they stop skimming the surface and actually commit to depth.

The Copy-Paste Tax

Every time you generate something in an AI tool and then manually copy it into another application, you're paying what I call the "copy-paste tax." It seems small — a few seconds here, a minute there. But it compounds in three destructive ways.

First, it kills your flow state. You shift from creative thinking to administrative shuttling. Context-switching research consistently shows that this costs 15–25 minutes of refocusing time per switch, not just the seconds of the copy-paste itself. Second, it degrades quality. When you're manually moving content between apps, you're making micro-decisions about formatting, structure, and adaptation that you shouldn't have to make. Things get lost. Headers change. The tone shifts because you're editing on the fly. Third, it prevents compounding. If every interaction with AI is a one-off — a fresh prompt, a blank context window, a manual export — you never build momentum. You're starting from zero every single time.

BetterUp Labs and Stanford found that 41% of workers have encountered low-quality AI-generated output that required nearly two hours of rework per instance. I'd wager a significant portion of that rework comes not from bad AI output, but from bad systems around the AI output.

Building Systems, Not Collecting Tools

Here's the framework I'm now using. It borrows from the habits philosophy: make it obvious, make it easy, make it satisfying, and make it repeatable.

Codify your best prompts into reusable skills. If you've written a prompt that produces great results, don't let it evaporate. Turn it into a skill with explicit instructions, examples of good output, and quality criteria. I spent time adding real examples alongside my prompts — showing the AI what an A+ newsletter edition looks like, not just describing it. The difference in output quality was immediate and dramatic.

Build connectors, not workflows. A workflow is "I generate in Tool A, copy to Tool B, format in Tool C." A connector eliminates the copy step entirely. MCP plugins, Zapier integrations, API connections — whatever it takes to make the AI output flow directly into your production systems. This is where the real leverage lives.

Version control your AI systems. Check your skills, prompts, and plugins into source control. Treat them like software — because they are. They need iteration, testing, and documentation. When you improve a skill based on what you learn, the improvement persists and benefits every future use.

Invest in depth over breadth. Pick fewer tools. Learn them thoroughly. Understand their quirks, their strengths, and their failure modes. A mediocre user of ten AI tools will always be outperformed by a skilled user of two.

The Continuous Improvement Imperative

AI proficiency isn't a destination. It's a practice. You don't "implement AI" and move on. The models change. The capabilities expand. Your own understanding of what's possible deepens with every serious use. The 1% daily improvement that Atomic Habits describes isn't just a motivational concept — it's literally how AI skill development works. Each prompt you refine, each skill you codify, each integration you build makes the next one easier and better.

The organizations and individuals who win with AI won't be the ones who adopted first. They'll be the ones who built systems for continuous learning and improvement — who treated AI proficiency as an ongoing practice rather than a one-time implementation. Research from UC Berkeley's Haas School found that employees with unlimited AI access initially surged in productivity, then burned out because they took on a broader scope, faster pace, and longer hours without changing their underlying systems. The AI accelerated their existing patterns — including the dysfunctional ones.

The answer isn't to use AI less. It's to be intentional about how you use it. Build the system. Identify the bottleneck. Go deep instead of wide. Codify what works. Iterate.

What This Means for Your Planning

The shift from collecting AI tools to building AI systems is not a technical detail; it is the central strategic challenge for every leader in the next 18 months. The productivity paradox we are currently experiencing is a temporary phenomenon — it reflects a lag between technological potential and organizational adoption. The companies that close this gap first will not just gain a temporary efficiency advantage; they will fundamentally reshape their operational capabilities.

In your next planning cycle, resist the urge to ask, "What new AI tools should we buy?" Instead, ask, "What are the 3–5 most critical, high-friction processes in our business, and how can we build a dedicated AI system to transform them?" This reframing shifts the focus from cost-center experimentation to value-driving investment. It moves AI from the periphery of your operations to the very core.

The winners in this new era will not be the companies with the longest list of AI apps on their expense reports. They will be the ones with the most robust, integrated, and continuously improving AI systems — the ones who treated AI not as a toy, but as a factory, and built that factory with intention, discipline, and a relentless focus on the bottlenecks that truly matter.

What is the biggest "copy-paste tax" your organization is currently paying, and what would it take to build a system that eliminates it entirely?

IN PARTNERSHIP WITH ALL THINGS AI

All Things AI 2026 — March 23–24 | Durham Convention Center, NC

I produce the All Things AI Conference with my business partner, Todd Lewis, founder of All Things Open. We are committed to upskilling and aim to provide the most valuable and accessible expert-led workshops in the industry. Here’s what’s on tap in Durham in March. Workshops sold out in 2025. Don't wait. Check out all the workshops here.

  • Conference Pass — $199 — Tuesday, March 24. Full conference access, 50+ sessions across 4 tracks, networking events, and session recordings.

  • AI for DevOps Workshop + Conference — $299 — Monday–Tuesday, March 23–24. Full-day hands-on workshop with John Willis (Author of the DevOps Handbook and co-founder of the DevOps movement) plus full conference access.

  • AI for Business Workshop + Conference — $299 — Monday–Tuesday, March 23–24. Full-day hands-on workshop with Mark Hinkle plus full conference access.

  • AI for Agents Workshop + Conference — $299 — Monday–Tuesday, March 23–24. Full-day hands-on workshop with Don Shin plus full conference access.

Prices increase after March 17. Compare that to $1,000–$3,000+ at other AI conferences.

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
