EXECUTIVE SUMMARY

Tech companies will spend $400 billion on AI infrastructure in 2025—exceeding the Apollo program's inflation-adjusted budget, repeated every ten months. Both AI bulls and bubble skeptics present compelling evidence, leaving business leaders caught between FOMO and prudent risk management.

Unlike dot-com startups burning venture capital, today's AI leaders (Microsoft, Google, Amazon) are massively profitable and will survive even if AI bets fail. The technology demonstrably works for specific tasks. Infrastructure has alternative uses if foundation model companies collapse.

Unit economics worsen with scale rather than improve—a reversal of traditional tech scaling dynamics. Financial engineering obscures true profitability (Microsoft–OpenAI circular revenue bookings recall WorldCom-era accounting practices). MIT studies show 95% of AI pilots fail to yield meaningful results. The gap between infrastructure spending ($400 billion) and consumer revenue ($12 billion annually) echoes the telecom overcapacity that left 85-95% of fiber "dark" in 2002.

Don't bet on timing the bubble—build for multiple scenarios. Prioritize AI applications with 12-month ROI that work whether vendors consolidate or not. Rent compute from hyperscalers rather than building proprietary infrastructure. Develop internal expertise that survives vendor failures. Prepare to acquire distressed assets (GPUs, talent, data centers) at 2027 fire-sale prices if correction arrives. Remember Amara's Law: We overestimate short-term impact, underestimate long-term transformation. The internet crashed in 2000 but enabled Amazon, Google, and Facebook by 2005. Position to benefit from both timelines.

The bubble thesis is probably correct for 2026–2027—but that doesn't make AI investments wrong. It makes vendor selection, contract structure, and capability building more critical than ever. Companies that survive bubbles distinguish hype from utility, build competency during uncertainty, and stay capitalized to buy when others must sell. Investors who know what's coming can avoid the worst of it.

FROM THE ARTIFICIALLY INTELLIGENT ENTERPRISE NETWORK

🎙️ AI Confidential Podcast - Are LLMs Dead?

🎯 The AI Marketing Advantage - Inside the new standard for AI-native advertising

📚 AIOS - This is an evolving project. I started with a 14-day free AI email course to get smart on AI. But the next evolution will be a ChatGPT Super-user Course and a course on How to Build AI Agents.

Charlotte’s Biggest AI Event Is Here — Don’t Miss AI FORWARD!

The future of AI in business is happening this Monday and Tuesday at the UNC Charlotte City Center.

Join top executives, innovators, and AI leaders for AI // FORWARD — a two-day event packed with strategic insights, real-world frameworks, and hands-on workshops designed to help you scale AI from pilot to enterprise.

Hear from visionaries like John Willis and Mark R. Hinkle of The AIE Network, along with other industry leaders driving the next wave of enterprise AI adoption.

Seats are limited and going fast — don’t miss your chance to be part of Charlotte’s most important AI event of the year!

AI DEEPDIVE

Quantum Computing + AI

Why the technology that could transform machine learning won't impact your 2025 roadmap—and how to prepare anyway.

Quantum computing represents a fundamental departure from the binary logic that powers every smartphone, data center, and AI system in operation today. Classical computers—regardless of their processing power—manipulate bits that exist in one of two states: 0 or 1. Every calculation, from spreadsheet formulas to training GPT-4, reduces to sequences of binary operations executed at extraordinary speed.

Quantum computers operate differently. They use quantum bits, or qubits, which exploit quantum mechanical phenomena to exist in superposition—simultaneously representing both 0 and 1 until measured. When you measure a qubit, it collapses to a definite state, but before measurement, it holds both possibilities with certain probabilities. This isn't merely faster binary computation; it's a different computational model entirely.
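The amplitude-and-collapse behavior above can be sketched in a few lines of Python. This is a classical toy model, not a real quantum simulation; the `plus` state and `measure` function are illustrative:

```python
import math
import random

# Toy model: a qubit is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0; |b|^2 the probability of 1.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))  # equal superposition of 0 and 1

def measure(qubit):
    """Collapse to a definite state with the Born-rule probabilities."""
    a, _ = qubit
    return 0 if random.random() < abs(a) ** 2 else 1

# Before measurement the state holds both outcomes; after, it is one of them.
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(plus)] += 1
print(counts)  # roughly {0: 5000, 1: 5000}
```

Each individual measurement yields a definite 0 or 1; only the statistics over many runs reveal the underlying superposition.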

The power multiplies through entanglement. When qubits become entangled, measuring one instantly affects the others, regardless of physical separation. A system with just 300 entangled qubits can theoretically represent more states simultaneously than there are atoms in the observable universe. This exponential scaling—not just faster clock speeds—creates quantum computing's potential advantages for specific problem types.
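The 300-qubit claim is easy to check. The atoms-in-the-universe figure below is the common order-of-magnitude estimate of ~10^80:

```python
# n entangled qubits require 2**n complex amplitudes to describe classically.
n_qubits = 300
n_states = 2 ** n_qubits
atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate

print(len(str(n_states)))            # 91 digits (~2.04e90)
print(n_states > atoms_in_universe)  # True
```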

The architecture matters. Superconducting qubits—used by IBM and Google—operate at temperatures near absolute zero, using superconducting circuits where current flows without resistance. Trapped ion systems—developed by IonQ and Honeywell—suspend individual atoms in electromagnetic fields and manipulate them with lasers. Photonic quantum computers—pursued by PsiQuantum—use particles of light. Each approach makes different trade-offs: coherence time versus gate speed, operating temperature versus scalability, error rates versus qubit connectivity.

The critical limitation: Noise. Quantum states are extraordinarily fragile. Environmental interference—stray electromagnetic fields, thermal fluctuations, cosmic rays—causes decoherence, where qubits lose their quantum properties and behave classically. Current quantum systems maintain coherence for microseconds to milliseconds. Useful algorithms require millions of quantum gate operations. The arithmetic doesn't work: millions of gates can't execute inside microseconds of coherence. This is why we're in the NISQ era: Noisy Intermediate-Scale Quantum. We have quantum computers, but they can't run algorithms deep or complex enough to outperform classical systems on most practical problems.
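A back-of-envelope calculation makes the gap concrete. The gate and coherence times below are illustrative assumptions within the ranges cited above, not any vendor's specs:

```python
# Back-of-envelope: how many gates fit inside one coherence window?
coherence_time_s = 100e-6  # ~100 microseconds (optimistic superconducting qubit)
gate_time_s = 50e-9        # ~50 nanoseconds per gate (illustrative)

gates_per_window = coherence_time_s / gate_time_s  # ~2,000 gates
gates_needed = 1e6                                 # "millions of gate operations"

print(round(gates_per_window))                 # 2000
print(round(gates_needed / gates_per_window))  # ~500x short of what's needed
```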

Error correction offers a path forward. Quantum error correction encodes one "logical qubit" across multiple "physical qubits," detecting and correcting errors without measuring (and thus destroying) the quantum state. Google's Willow chip claimed progress here—demonstrating that error rates can decrease as physical qubits increase, a critical threshold for scaling. But achieving fault-tolerant quantum computing requires hundreds of physical qubits per logical qubit. Current systems have 100-1000 physical qubits total. Useful algorithms need thousands of logical qubits. The gap between today's hardware and practical quantum advantage spans 5-10 years under optimistic projections.
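Multiplying out the rough figures above shows why the gap spans years. The exact overhead depends on the error-correcting code; these are illustrative values consistent with the ranges cited:

```python
# Rough fault-tolerance overhead arithmetic (illustrative values).
physical_per_logical = 500   # "hundreds of physical qubits per logical qubit"
logical_needed = 2_000       # "thousands of logical qubits" for useful algorithms
current_physical = 1_000     # today's largest systems, roughly

physical_needed = physical_per_logical * logical_needed
print(physical_needed)                      # 1,000,000 physical qubits
print(physical_needed // current_physical)  # a ~1,000x hardware gap
```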

How To Consider Quantum Computing Strategically

This technical reality collided with commercial momentum in October 2025. Quantum computing stocks surged up to 165%—not from breakthrough scientific papers, but from purchase orders. Rigetti announced $5.7 million in sales—one system to an Asian manufacturer, another to a U.S.-based AI startup. The dollar amounts seem trivial compared to AI infrastructure spending, but they signal a transition from research to product.

More significantly, HSBC executed the first quantum-powered bond trades. Vanguard demonstrated quantum portfolio optimization. These aren't startups or research labs—they're institutions managing trillions in assets, testing quantum systems for production finance applications. IBM and Google continued hardware development. The technology moved from "laboratory curiosity" to "early commercial deployment."

For AI leaders, this creates strategic tension. Quantum computing could theoretically accelerate machine learning optimization, enable new model architectures, and solve problems intractable for classical systems. But the timeline spans years, the applicability is narrow, and the hardware remains immature. Understanding where quantum matters—and where it doesn't—determines whether you're strategically positioned or wastefully distracted.

What Quantum Computing Actually Means for AI

The intersection of quantum computing and artificial intelligence isn't about making ChatGPT respond faster. It's about specific computational bottlenecks where quantum mechanics provides theoretical advantages. Here's where quantum could genuinely impact AI, with realistic timelines:

Optimization Problems (3-7 year horizon): Many machine learning models—particularly reinforcement learning and neural architecture search—rely on solving large-scale optimization problems. Finding the best hyperparameters for a neural network, optimizing supply chain logistics, or determining ideal portfolio allocation all reduce to searching vast solution spaces for optimal configurations.

Quantum algorithms like QAOA (Quantum Approximate Optimization Algorithm) and Grover's Algorithm can theoretically search certain solution spaces more efficiently than classical approaches. The advantage compounds exponentially with problem size—but only for problems structured in quantum-amenable ways.
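Grover's quadratic speedup is concrete: unstructured search over N items takes ~N/2 classical lookups on average, versus roughly (π/4)√N oracle calls for Grover's algorithm. A quick comparison:

```python
import math

def classical_queries(n):
    """Expected lookups for unstructured search, checking items one by one."""
    return n // 2

def grover_queries(n):
    """Optimal Grover iteration count: floor(pi/4 * sqrt(N)) oracle calls."""
    return math.floor(math.pi / 4 * math.sqrt(n))

for n in (10**6, 10**12):
    print(n, classical_queries(n), grover_queries(n))
# 1 million items:  500,000 classical vs 785 quantum queries
# 1 trillion items: 500 billion classical vs 785,398 quantum queries
```

The advantage grows with problem size, which is why the payoff only appears at scales current hardware can't reach.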

Real-world example: Training a reinforcement learning model for warehouse robotics currently takes days of GPU time. Quantum acceleration might reduce this to hours—but only once quantum systems achieve sufficient scale and error correction. Current NISQ-era hardware can't handle production-scale problems. The qubit counts are too low, coherence times too short, error rates too high.

Business application: Supply chain optimization, logistics routing, portfolio allocation, and resource scheduling are quantum-amenable. But classical heuristics and specialized hardware (like Google's TPUs) remain competitive for 3-5 years. The quantum advantage exists in theory; the hardware to realize it doesn’t yet.

Kernel Methods Revival (3-7 year horizon): Before deep learning dominated AI, kernel methods—particularly Support Vector Machines—were state-of-the-art for classification and regression. Quantum computers can represent high-dimensional feature spaces compactly through superposition, potentially reviving these classical approaches.

Quantum Support Vector Machines could enable smaller, more interpretable models. In regulated industries—healthcare, finance—where model explainability matters legally, Quantum SVMs might compete with deep learning's black-box approaches. The catch: current quantum systems have too few qubits and too much noise for practical kernel computation at enterprise scale.

Enhanced Sampling for Probabilistic Models (5-10 year horizon): Quantum systems excel at sampling from complex probability distributions. This matters for Bayesian inference, probabilistic graphical models, and generative models like Quantum GANs. Applications include risk management, climate modeling, and decision-making under uncertainty.

Current limitation: Classical Monte Carlo methods remain faster and more reliable for problems that fit in available memory. Quantum sampling advantages emerge only when classical systems exhaust computational resources—a threshold that keeps receding as classical hardware improves.

Quantum Neural Networks (7-10+ year horizon): Researchers at IBM, IonQ, and Google explore QNNs—neural network architectures that leverage quantum properties. These remain largely conceptual. The value proposition centers on domains where quantum properties are intrinsic: quantum chemistry, materials science, molecular dynamics.

Reality check: For natural language processing, computer vision, or recommendation systems, classical neural networks will dominate for the foreseeable future. Quantum neural networks aren't general-purpose AI accelerators. They're specialized tools for problems where quantum mechanics governs the underlying system being modeled.

Scientific Discovery Acceleration (0-5 year horizon): This is where quantum shows near-term value. Simulating molecular systems, materials design, and drug discovery involve quantum mechanics natively. Classical computers simulate quantum systems by approximation—an approach that hits exponential walls as system size increases.

Quantum computers don't simulate quantum systems; they are quantum systems. A 100-qubit quantum computer can directly model quantum mechanical interactions that would require classical computers with more memory than exists in the entire world. AI models that optimize quantum circuit design combined with quantum computers simulating molecular systems create genuine advantages today—in narrow, scientifically important domains.
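The exponential wall is easy to quantify: a full classical state vector for n qubits requires 2^n complex amplitudes, each 16 bytes in double precision. A sketch:

```python
# Bytes to store the full state vector of n qubits on classical hardware:
# 2**n amplitudes, each a double-precision complex number (16 bytes).
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

print(statevector_bytes(30) // 2**30)    # 16 GiB: within reach of a workstation
print(statevector_bytes(50) // 2**50)    # 16 PiB: beyond any single machine
print(len(str(statevector_bytes(100))))  # a 32-digit byte count for 100 qubits
```

Every added qubit doubles the memory requirement, which is why exact classical simulation stalls around the mid-40s of qubits even on supercomputers.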

Pharma companies exploring this space aren't speculating. They're addressing problems where classical simulation hits exponential walls and quantum simulation provides polynomial or exponential speedup.

The Current State: NISQ Era Realities

We're in the Noisy Intermediate-Scale Quantum (NISQ) era. Systems have tens to low hundreds of qubits without full fault tolerance. Google's Willow chip announced 105 qubits with improved error suppression. IBM's roadmap targets modular, scalable architectures. But "improved error suppression" still means these systems can't run algorithms complex enough to outperform classical computers on most practical problems.

The hardware platforms competing for dominance include:

  • Superconducting qubits (IBM, Google): Well-developed fabrication, good scaling trajectory, but cryogenic cooling requirements and cross-talk issues.

  • Trapped ions (IonQ, Honeywell): High coherence, precise control, but slower gate speeds and scaling complexity.

  • Photonic systems (PsiQuantum): Room-temperature operation potential, but challenges generating deterministic entanglement.

  • Semiconductor spin qubits: Leveraging existing semiconductor manufacturing, but early-stage with control challenges.

None of these platforms has achieved fault-tolerant, error-corrected quantum computing at scale. That milestone—when quantum systems can run arbitrary algorithms reliably—remains 5-10 years away under optimistic scenarios. Until then, quantum computing remains a specialized research tool with narrow commercial applications, not a general-purpose platform for AI workloads.

How to Position Your Organization for Quantum

Business leaders face a timing problem. Act too early, and you waste capital on immature technology. Wait too long, and competitors establish quantum competency that takes years to replicate.

The solution: strategic monitoring with minimal capital commitment.

Scenario One: You're in pharma, materials science, or chemistry. Quantum computing could provide a genuine competitive advantage in 3-5 years.

Your move: Establish relationships with quantum hardware vendors (IBM, IonQ, Rigetti) through cloud access programs. Identify molecule simulation or materials design problems amenable to quantum acceleration. Build hybrid quantum-classical teams. Budget for quantum consulting but not quantum hardware. Partner with the national labs working on quantum chemistry applications.

Scenario Two: You're in finance or complex optimization. Portfolio optimization, risk modeling, and certain trading strategies might benefit from quantum approaches in 5-7 years.

Your move: Join industry consortia (Quantum Economic Development Consortium). Pilot quantum optimization through cloud platforms (AWS Braket, Azure Quantum, IBM Quantum Network). Focus on problems where classical approaches are computationally expensive. Don't build quantum infrastructure—rent access.

Scenario Three: You're deploying mainstream AI (NLP, computer vision, recommender systems). Quantum computing won't materially impact your roadmap before 2030-2032.

Your move: Monitor developments through industry publications. Attend quantum computing conferences. Understand which vendors are building quantum-classical hybrid systems. Don't allocate capital. Do allocate attention.

Scenario Four: You manage AI security and infrastructure. Quantum computing poses an immediate threat through Shor's algorithm, which can break classical encryption.

Your move: Transition to post-quantum cryptography now. NIST has published post-quantum cryptographic standards. Protect AI model weights, training data, and inference APIs with quantum-resistant encryption. This is the only quantum-related priority that demands immediate investment.

The sophisticated approach: Build literacy without building infrastructure. Quantum computing will matter—but not yet, and not uniformly. The companies that win in quantum-enhanced AI will be those that understand which problems quantum solves, not those who bought quantum hardware earliest.

Common Missteps

Misstep One: Confusing quantum computing with quantum AI hype. Vendors marketing "quantum AI" or "quantum machine learning" often describe research prototypes or theoretical algorithms, not production systems. When someone pitches quantum-enhanced AI, ask: "What qubit count, error rate, and coherence time does this require?" If the answer is "hundreds of logical qubits with error correction," the timeline is 7-10 years minimum.

Misstep Two: Assuming quantum replaces classical computing. Quantum computers won't replace CPUs or GPUs. They're specialized accelerators for specific problem types. Most AI workloads—training transformers, running inference, processing images—will remain on classical hardware. Even in quantum-advantaged domains, you'll need hybrid systems: classical computers pre-processing data, quantum systems solving specific sub-problems, classical systems post-processing results.

Misstep Three: Waiting for "quantum maturity" before building expertise. By the time quantum systems are production-ready for AI, the talent gap will be enormous. Organizations that start building quantum literacy now (through cloud access, partnerships, research collaborations) will have a 5-7 year lead on competitors who wait for turnkey solutions.

Misstep Four: Ignoring post-quantum cryptography. This is the inverse mistake. While quantum AI is distant, quantum threats to encryption are imminent. Nation-states are recording encrypted data now to decrypt later with quantum computers. If you're training proprietary AI models or handling sensitive data, quantum-resistant encryption isn't optional—it's urgent.

Timing Your Quantum Adoption Curve

The business value of quantum computing for AI isn't in 2025 revenue or 2026 cost savings. It's in strategic positioning for a 7-10 year technology transition.

Consider the railroad analogy: In 1835, most businesses didn't need railroad access. By 1855, businesses without railroad infrastructure were obsolete. The companies that thrived weren't necessarily those who built railroads earliest—they were those who understood where railroads mattered and positioned accordingly.

Quantum computing for AI follows a similar logic. The value creation happens in three phases.

Phase One (2025-2027): Literacy and partnerships. Understand quantum's applicability to your domain. Establish vendor relationships. Pilot problems through cloud access. Build hybrid quantum-classical teams. The organizations that do this well spend $100K-$500K annually on quantum exploration, not $10M on quantum infrastructure.

Phase Two (2028-2032): Hybrid deployment. As fault-tolerant systems emerge, specific optimization problems and scientific computing tasks shift to quantum acceleration. Early adopters in pharma, finance, and materials science gain measurable advantages. The value isn't wholesale AI transformation—it's 10-30% improvements in targeted applications.

Phase Three (2032+): Architectural integration. Quantum computing becomes standard infrastructure for certain AI workloads. QNNs compete with classical neural networks in specific domains. Quantum-enhanced optimization is routine for supply chain and logistics. This phase resembles how GPUs became standard for deep learning—not replacing CPUs, but essential for specific tasks.

The recent quantum stock surge reflects Phase One momentum. Real purchase orders—even small ones—signal commercial viability. HSBC and Vanguard demonstrations prove financial applications aren't science fiction. But momentum in Phase One doesn't mean proximity to Phase Two.

Business leaders should invest proportionally: significant attention to understanding quantum's trajectory, minimal capital on quantum infrastructure, urgent action on post-quantum cryptography.

Quantum computing will transform certain AI applications—but confusing ‘will transform’ with ‘is transforming’ wastes capital and creates vulnerability. The winners in quantum-enhanced AI will be those who build expertise without overcommitting resources, who understand timelines without dismissing potential, and who prepare infrastructure for quantum threats while remaining patient on quantum opportunities.

AI TOOLBOX
  • Strangeworks - best for multi-vendor quantum access and hybrid AI-quantum workflows. Recently acquired Quantagonia to combine quantum computing with AI-powered optimization. Provides the largest catalog of quantum and quantum-inspired computing resources through a single platform. Enables access to IBM, IonQ, Rigetti, D-Wave, and others without vendor lock-in. Particularly strong for organizations needing both classical HPC and quantum resources.

  • IBM Quantum Network - best for cloud-based quantum access and learning.

    Industry's most mature quantum cloud platform providing access to quantum hardware, simulators, and development tools. Ideal for building quantum literacy without infrastructure investment. Offers Qiskit open-source framework and educational resources. Start here for pharmaceutical, materials science, or optimization pilots.

  • AWS Braket - best for multi-vendor quantum experimentation.

    Amazon's quantum computing service provides access to quantum hardware from IonQ, Rigetti, and D-Wave through familiar AWS interface. Good for organizations already using AWS infrastructure. Enables side-by-side comparison of quantum approaches. Pay only for quantum computing time used.

  • Azure Quantum - best for enterprise integration and hybrid workflows.

    Microsoft's quantum platform integrates with existing Azure AI/ML services. Strong for organizations invested in Microsoft ecosystem. Includes access to IonQ, Quantinuum, and Rigetti systems plus quantum-inspired optimization solvers that run on classical hardware today.

  • Classiq - best for quantum algorithm development without physics PhD.

    High-level quantum software platform that abstracts low-level quantum circuit design. Enables traditional software engineers to design quantum algorithms. Critical for building internal quantum capability without hiring quantum physicists. Synthesizes optimized circuits for multiple hardware platforms.

PRODUCTIVITY PROMPT

Prompt of the Week: Learning Quantum Computing for AI Leaders

Most of us—even the technically inclined—struggle to grasp quantum computing. It’s often reduced to Schrödinger’s cat or misrepresented in Marvel movies. The reality is more complex and less cinematic.

For business leaders, that complexity creates a dangerous gap: they need to understand quantum computing's relevance to AI without a physics degree. Most resources target researchers or developers, while vendors obscure limitations behind jargon. The result? Executives either dismiss quantum as science fiction or accept marketing claims without scrutiny—both leading to bad bets and missed opportunities.

This prompt creates a structured learning path tailored to non-technical business leaders evaluating quantum-AI convergence. By requesting explanations through business analogies rather than physics concepts, it makes quantum computing accessible while maintaining accuracy. The constraint to provide concrete examples from specific industries prevents abstract theorizing. The requirement to distinguish hype from reality forces intellectual honesty about timelines and limitations. Most importantly, it teaches judgment—how to evaluate quantum claims independently rather than relying on vendor pitches or media coverage.

You are a quantum computing educator specializing in teaching business executives about quantum's implications for AI and machine learning. Your audience is technically literate (understands software, cloud computing, ML basics) but lacks quantum physics background. Create a self-paced learning module that enables them to evaluate quantum computing strategies independently.

Context: The learner is a [ROLE: CTO/VP of Engineering/Chief Data Officer] at a [INDUSTRY: pharma/finance/manufacturing/logistics] company. They currently deploy classical AI/ML systems and need to understand:
- Whether quantum computing matters for their AI applications
- How to distinguish genuine progress from hype
- What actions to take in 2025 vs. 2028 vs. 2032
- How to evaluate vendor claims and partnership opportunities

Learning Objectives:
1. Understand quantum computing fundamentals without physics equations
2. Recognize which AI problems are quantum-amenable vs. classical-optimal
3. Evaluate realistic timelines for quantum-AI convergence
4. Develop frameworks for assessing vendor capabilities and claims
5. Create a personal action plan for quantum literacy and strategic positioning

Module Structure:

**Part 1: Core Concepts Through Business Analogies (30 minutes)**
Explain qubits, superposition, and entanglement using business/computing analogies rather than physics. For example: "A qubit is like a consultant who can simultaneously explore multiple strategic options until you ask for a recommendation, at which point they commit to one path."

**Part 2: The Quantum-AI Intersection (20 minutes)**
Which AI workloads could benefit from quantum acceleration? Use specific examples from the learner's industry. Distinguish between:
- Optimization problems (supply chain, portfolio, scheduling)
- Sampling problems (probabilistic modeling, risk assessment)
- Simulation problems (drug discovery, materials science)
- Classical AI problems that will never benefit from quantum

**Part 3: Reality Check—What Works Today vs. 2030 (20 minutes)**
Provide honest timeline assessment:
- What quantum systems can do today (with examples)
- What they'll likely do in 3-5 years (with uncertainty bounds)
- What remains speculative beyond 7-10 years
Include specific qubit counts, error rates, and coherence times needed for practical advantage

**Part 4: Evaluating Vendor Claims (15 minutes)**
Teach the learner to ask the right questions:
- "What qubit count and error rate does your solution require?"
- "How many logical qubits vs. physical qubits?"
- "What classical baseline are you comparing against?"
- "When do you project fault-tolerant operation?"
Provide examples of strong vs. weak vendor responses

**Part 5: Personal Action Plan (15 minutes)**
Based on the learner's industry and role, recommend:
- Immediate actions (post-quantum cryptography, vendor relationships)
- 2026 pilots (cloud quantum access for specific use cases)
- 2028+ positioning (talent development, infrastructure decisions)
- Monitoring strategy (conferences, publications, partnerships to track)

Output format: Interactive learning module with:
- Concept explanations using business analogies
- Industry-specific examples throughout
- Self-assessment questions after each section
- Resource list (papers, vendors, tools) organized by urgency
- One-page "Quantum Strategy Cheat Sheet" summarizing key decisions

Constraints:
- Tone: Accessible to smart non-physicists; avoid condescension
- Length: Completable in 90-120 minutes with breaks
- Examples: Must be industry-relevant and concrete
- Honesty: Acknowledge uncertainty; don't oversell or undersell
- Actionability: Every section must connect to decisions the learner will face

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
