EXECUTIVE SUMMARY

Tech companies will spend roughly $400 billion on AI infrastructure in 2025, the equivalent of an inflation-adjusted Apollo program every ten months. Both bubble skeptics and AI bulls present compelling evidence, leaving business leaders caught between FOMO and prudent risk management.

Unlike dot-com startups burning venture capital, today's AI leaders (Microsoft, Google, Amazon) are massively profitable and will survive even if AI bets fail. The technology demonstrably works for specific tasks. Infrastructure has alternative uses if foundation model companies collapse.

Yet unit economics worsen with scale rather than improve. Financial engineering obscures true profitability (Microsoft-OpenAI circular revenue bookings mirror WorldCom-era accounting). An MIT study found that 95% of AI pilots fail to yield meaningful results. The gap between infrastructure spending ($400 billion) and consumer revenue ($12 billion annually) echoes the telecom overcapacity that left 85-95% of fiber "dark" in 2002.

Don't bet on timing the bubble: build for multiple scenarios. Prioritize AI applications with 12-month ROI that work whether vendors consolidate or not. Rent compute from hyperscalers rather than building proprietary infrastructure. Develop internal expertise that survives vendor failures. Prepare to acquire distressed assets (GPUs, talent, data centers) at 2027 fire-sale prices if a correction arrives. Remember Amara's Law: we overestimate short-term impact and underestimate long-term transformation. The dot-com market crashed in 2000, yet the internet enabled Amazon, Google, and Facebook by 2005. Position to benefit from both timelines.

The bubble thesis is probably correct for 2026-2027. That doesn't make AI investments wrong—it makes vendor selection, contract structure, and capability building more critical than ever. Companies that survive bubbles distinguish hype from utility, build competency during uncertainty, and stay capitalized to buy when others must sell. Investors who know what’s coming can avoid misfortune.

FROM THE ARTIFICIALLY INTELLIGENT ENTERPRISE NETWORK

🎙️ AI Confidential Podcast - Are LLMs Dead?

🎯 The AI Marketing Advantage - Visa is building the trust layer for AI

📚 AIOS - This is an evolving project. I started with a free 14-day AI email course to get smart on AI. But the next evolution will be a ChatGPT Super-user Course and a course on How to Build AI Agents.

Charlotte’s Largest AI Conference Is Here

Join senior executives and enterprise leaders on October 27–28 at the UNC Charlotte City Center for AI // FORWARD — a two-day event packed with strategic frameworks, operational playbooks, and peer-to-peer exchange.

Learn how to scale AI from pilot to enterprise with keynotes and workshops led by industry leaders, including The AIE Network’s John Willis and Mark R. Hinkle.

Seats are strictly limited and selling fast — secure your spot now to be part of Charlotte’s most important AI event!

AI DEEPDIVE

The AI Bubble

The infrastructure outlasts the hype—but only if you understand where the money actually went.

A CFO at a Fortune 500 manufacturer sits in her office, reviewing a $12 million proposal to deploy AI-powered quality control systems across three plants. Her VP of Operations insists the technology will pay for itself in 18 months. Her board asks why competitors are moving faster. Meanwhile, her risk committee has flagged a report comparing current AI investment levels to 1999—the year before $5 trillion in market value evaporated in the dot-com crash.

She's caught between two competing narratives: transformative technology that will redefine manufacturing, or speculative mania that will leave her holding worthless infrastructure when the bubble bursts. The truth, as with most existential business decisions, is more useful than either extreme suggests.

How Tech Companies Are Spending on AI

The numbers are staggering enough to make even seasoned investors pause. Tech companies will spend approximately $400 billion in 2025 on AI infrastructure, more than the inflation-adjusted cost of the Apollo program, repeated every ten months instead of once a decade. Microsoft alone plans $80 billion in capital expenditures. Meta is budgeting $60–65 billion. Amazon plans more than $100 billion. The comparison to the dot-com bubble isn't just inevitable; it's already being made, with Sam Altman himself acknowledging that investors are "overexcited about AI."

Yet here's what makes this moment genuinely confusing for business leaders: both the bubble skeptics and the AI believers are presenting compelling evidence. Technology writer Cory Doctorow describes AI companies' unit economics as "dogshit" and warns of an impending "economic AI apocalypse." The Wall Street Journal's investigation reveals data centers using GPU chips as loan collateral—a financial engineering move that evokes the mortgage-backed securities of 2008.

Meanwhile, TV personality and former hedge fund manager Jim Cramer argues this is "the polar opposite" of 2000 because today's AI leaders are massively profitable companies, not money-losing startups. Capital Economics notes that forward price-to-earnings ratios aren't yet at dot-com levels, and the Federal Reserve is lowering rates rather than raising them.

The question isn’t whether AI is overhyped in the short term—it almost certainly is—but whether that matters for your business decisions in 2025.

What Is Actually Happening With AI Investment?

Understanding where the money flows reveals why simplistic bubble comparisons miss the mark. The $400 billion isn't going into vaporware. Here's the actual allocation:

Chips and compute hardware absorb roughly 60% of data center costs. Nvidia alone sold $50 billion worth of GPUs for AI training in 2024. These are real assets with immediate utility, not speculative domain names. The problem: individual chips burn out at astonishing rates during training runs (Meta logged hundreds of GPU-related failures in a single 54-day run), and unlike the internet infrastructure of 2000, these assets depreciate almost as fast as they can be installed.

As the WSJ reports, data center companies are collateralizing loans with stockpiles of Nvidia GPUs. This is extraordinary: there's practically nothing (apart from fresh-caught fish, as Doctorow notes) that loses value faster than silicon chips. The financial innovation here isn't building infrastructure; it's disguising how quickly that infrastructure becomes worthless.

Power generation and cooling systems consume the second-largest bucket. Meta's new data centers require arrangements with nuclear power plants. xAI built a hybrid data center and power-generation facility in Memphis. McKinsey estimates that $300 billion in power generation infrastructure is needed, equivalent to powering 150 million homes annually. This infrastructure has multi-decade value—but only if demand materializes.

Data center construction itself represents the smallest capital component but the longest-term bet. Oracle revealed a five-year, $300 billion deal for compute power beginning in 2027—a figure that presumes immense growth for both Oracle and OpenAI, and more than a little faith that neither company's valuation will crater before construction completes.

The unit economics problem emerges here, and it's what makes Doctorow's analysis particularly compelling. Unlike Amazon in 1999, which became cheaper to operate with each additional customer, AI foundation models become more expensive with scale. Each new generation costs more to train. Each additional user increases compute costs. As Doctorow puts it, each generation of AI has been vastly more expensive than the previous one, and each new AI customer makes the AI companies lose more money.

The financial engineering compounds the problem. Microsoft "invests" in OpenAI by giving the company free access to its servers. OpenAI reports this as a $10 billion investment, then redeems these "tokens" at Microsoft's data centers. Microsoft then books this as $10 billion in revenue. The same dollar gets counted multiple times across balance sheets, inflating both companies' apparent performance. The WSJ investigation calls this "sweating the assets"—and notes it's normal for Nvidia to "invest" tens of billions in a data center company, which then spends that investment buying Nvidia chips.
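
To see why circular booking flatters both sides, it helps to trace a single dollar. Below is a deliberately toy sketch of the mechanism described above; the figures and variable names are illustrative assumptions, not either company's actual accounting.

```python
# Toy model of circular revenue booking (illustrative figures only).
# A vendor "invests" compute credits in an AI lab; the lab spends the
# credits back at the vendor, which books them as revenue. Headline
# numbers grow on both sides even though no new outside cash appears.

credits_granted = 10_000_000_000          # vendor grants $10B in compute credits

lab_reported_funding = credits_granted    # lab announces a $10B "investment"
vendor_reported_revenue = credits_granted # vendor books $10B of cloud revenue
headline_total = lab_reported_funding + vendor_reported_revenue

external_cash_in = 0                      # no customer dollars entered the loop

print(f"Headline figures created: ${headline_total:,}")
print(f"External cash actually received: ${external_cash_in:,}")
```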

The revenue gap tells the story. Sequoia Capital's David Cahn estimates that for AI companies to become profitable, they would have to sell us $600 billion worth of services over the life of today's data centers and GPUs. Yet the Wall Street Journal reports that American consumers spend only $12 billion a year on AI services. That's the economic difference between Singapore and Somalia—and it's the chasm between vision and reality.
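
The arithmetic behind that gap fits in a few lines. A minimal sketch using the two figures cited above (both are third-party estimates, and the consumer number excludes enterprise contracts):

```python
# Gap between the revenue AI needs and what consumers currently pay
# (figures are the estimates cited above, treated as rough inputs).

revenue_needed = 600e9           # Sequoia's David Cahn: required over the life of today's infrastructure
consumer_spend_per_year = 12e9   # WSJ: annual US consumer spending on AI services

years_at_current_spend = revenue_needed / consumer_spend_per_year
print(f"Years of consumer spending needed at today's rate: {years_at_current_spend:.0f}")
# -> roughly 50 years, against GPUs that depreciate in a handful of years
```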

How Generative AI Differs From the Dot-com Bubble

The dot-com crash was preceded by companies with no revenue, no profits, and no path to profitability. Pets.com burned through $300 million in under two years selling dog food below cost. Global Crossing and WorldCom built telecommunications infrastructure on fraudulent accounting, and 85-95% of their fiber remained unused four years after bankruptcy. The WSJ's analysis notes that the AI buildout is vastly larger than the telecom bubble: WorldCom's fraud-soaked fiber-optic bonanza sent its CEO to prison, where he eventually died, but the damage was measured in billions, not hundreds of billions.

Today's AI leaders are different in three material ways. First, they're already profitable. Nvidia, Microsoft, Google, Amazon, and Meta all generate massive cash flows from existing businesses. If their AI bets fail entirely, these companies survive—they just write down the losses. As Jim Cramer notes, "When the dotcoms made bad investments, nearly all of them went under. But, worst case scenario, if Google and Amazon and Meta make bad investments and take big losses, that's just another day at the office."

Second, the technology demonstrably works at specific tasks. Unlike the vague promise of "internet synergy" in 1999, the gains are measurable: McKinsey reported that GenAI helped its developers cut code documentation time in half and reduce the time spent generating new code by 35-45%.

Third, the infrastructure has alternative buyers. If OpenAI collapses tomorrow, those data centers don't become worthless—they serve cloud computing, graphics rendering, or scientific research.

The more accurate comparison isn't Pets.com. It's the railroad boom of the 1800s, as multiple analysts have noted. Hundreds of railroad companies failed. Investors lost fortunes. But the infrastructure powered American growth for a century. Or consider the telecom bubble itself: WorldCom and Global Crossing declared bankruptcy, but their "dark fiber" was eventually purchased for pennies on the dollar and became the backbone of broadband internet. The technology was real. The timing and business models were wrong.

How to Implement Strategic Positioning

Doctorow's thesis is stark: "Anything that can't go on forever eventually stops." The AI companies are incinerating money faster than practically any other human endeavor in history, with precious little to show for it. He argues that a future administration, potentially led by Trump, might bail out the AI companies, but questions how long such support could last.

Business leaders face three scenarios, each requiring different responses:

Scenario One: The bubble bursts in 2026-2027. This is the Doctorow/Sequoia thesis: AI companies can't generate the $800 billion in revenue needed to justify current infrastructure spending. Foundation model companies shut down or consolidate. GPU prices crater. Venture funding evaporates. Your company's AI vendors disappear overnight.

Your move: Prioritize AI infrastructure you control. Build capabilities around open-source models that won't disappear when venture capital dries up. Avoid long-term contracts with money-losing AI vendors. Prepare to buy compute power at fire-sale prices in 2027-2028. Plan to "absorb the productive residue" left behind after the bubble bursts—acquiring GPUs for ten cents on the dollar, hiring skilled applied statisticians in a buyer's market, and optimizing open-source models with massive potential.

Scenario Two: Selective correction. This is the Jim Cramer/Capital Economics thesis: weak AI companies fail, but Microsoft, Google, and Amazon survive handily. There's a correction, not a crash. Profitable use cases emerge slowly. Some applications justify costs, most don't.

Your move: Let hyperscalers take the infrastructure risk. Rent, don't buy. Focus ruthlessly on applications with clear ROI within 12 months. Treat AI as productivity enhancement for existing workers, not wholesale replacement. Be ready to consolidate vendors as weaker players exit.

Scenario Three: Sustained growth. This is the bull case: AI achieves breakthrough capabilities in reasoning, delivers genuine productivity gains, and $800 billion in new revenue materializes across industries.

Your move: Invest aggressively in AI talent and infrastructure now. Build proprietary models. Accept 3-5 year payback periods. Race competitors for AI-native market position.

Here's the sophisticated insight: you don't need to bet on one scenario. Build capabilities for Scenario Two while preparing for Scenarios One and Three. Establish AI pilots with quarterly re-evaluation triggers. Partner with cloud providers who absorb infrastructure risk. Develop internal expertise that survives vendor consolidation.
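
One way to make those quarterly re-evaluation triggers concrete is to write them down as data, so each review is mechanical rather than political. A minimal sketch; the metric names and thresholds are placeholders to adapt, not recommendations:

```python
# Hypothetical quarterly re-evaluation triggers for an AI pilot.
# Metric names and thresholds are placeholders; substitute your own.

TRIGGERS = {
    "projected_payback_months":  {"max": 12},     # restructure if payback slips past a year
    "monthly_vendor_cost_usd":   {"max": 50_000}, # renegotiate or switch if spend overruns
    "vendor_runway_months":      {"min": 18},     # start migration planning if the vendor weakens
    "target_user_adoption_rate": {"min": 0.40},   # sunset the pilot if adoption stays low
}

def quarterly_review(metrics: dict) -> list[str]:
    """Return the triggers that fired for this quarter's metrics."""
    fired = []
    for name, rule in TRIGGERS.items():
        value = metrics.get(name)
        if value is None:
            continue
        if "max" in rule and value > rule["max"]:
            fired.append(f"{name}={value} exceeds {rule['max']}")
        if "min" in rule and value < rule["min"]:
            fired.append(f"{name}={value} below {rule['min']}")
    return fired

# Example check with made-up numbers
print(quarterly_review({"projected_payback_months": 15, "target_user_adoption_rate": 0.35}))
```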

Common Missteps

Misstep One: Ignoring Amara's Law. Roy Amara observed in 1978 that "we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." The dot-com crash proved Amara right—the internet did transform everything, just not on the 1999 timeline. Business leaders make two symmetric errors: dismissing AI entirely because the bubble is obvious, or assuming 2025 capabilities scale linearly to 2027. Neither is sophisticated.

Misstep Two: Confusing infrastructure spending with application value. The fact that Microsoft is spending $80 billion doesn't mean your company should spend $8 million. Hyperscalers are making decade-long platform bets. You need 12-month ROI on specific use cases. The infrastructure exists whether you use it or not—your job is identifying the 5% of applications that actually improve margins.

Misstep Three: Assuming the bubble bursting means AI disappears. Doctorow's most important point is that AI is what Princeton's Arvind Narayanan and Sayash Kapoor call "a normal technology": a grab-bag of useful (sometimes very useful) tools that can make workers' lives better, when workers get to decide how and when they're used. When the dot-com bubble popped, Amazon's stock dropped from $107 to $7. Amazon didn't disappear; it became the most valuable retailer on earth. The infrastructure didn't vanish; it became cheaper to use. Business leaders who wait for AI to "prove itself" will find themselves five years behind competitors who buy capacity at 2027 fire-sale prices and build competency during the correction.

Misstep Four: Treating this as a timing problem. You cannot time the bubble. Even sophisticated investors disagree on whether valuations are stretched. Your advantage isn't prediction—it's portfolio construction. Maintain optionality. Build skills that transfer across AI vendors. Implement applications that improve productivity whether the bubble bursts or not.

Finding the Business Value of AI

The most valuable insight from the dot-com era isn't that bubbles burst—it's that infrastructure outlasts investor enthusiasm. In 2002, you could buy "dark fiber" for pennies on the dollar. Data centers sat empty. Skilled engineers needed work. The same companies that caused the crash by overbuilding infrastructure created the conditions for Amazon, Google, and Facebook to scale cheaply in 2004-2008.

An AI bubble bursting in 2026-2027 would create similar opportunities: GPU compute at 90% discounts, AI engineers seeking employment, open-source models abandoned by unprofitable startups, and data center capacity sold at distressed prices. Businesses that plan now to absorb that productive residue will be positioned to buy while everyone else retrenches.

The business value lies in strategic patience combined with tactical preparation. Companies that survive bubbles do three things well: they distinguish between hype and capability, they build competency during uncertainty, and they're capitalized to buy assets when others are forced to sell.

Amara's Law suggests we're in the "overestimated short-term impact" phase. That doesn't make AI worthless—it makes it predictably overhyped right now and predictably transformative in 2030. Your competitive advantage comes from understanding which applications deliver value in both timelines, and positioning your organization to benefit whether the crash comes in 2026 or never arrives at all.

Doctorow's bleakest warning is that the most important thing about AI isn't its technical capabilities or limitations; it's the investor story and the ensuing mania, which he argues have set up an economic catastrophe that will harm hundreds of millions or even billions of people. AI isn't going to wake up, become superintelligent, and turn you into paperclips, but rich people with AI investor psychosis are almost certainly going to make the economy much poorer.

The CFO reviewing that $12 million AI proposal shouldn't ask "Is this a bubble?" She should ask: "Does this deliver value if AI funding dries up next year? Does it position us to scale if AI capabilities improve? Can we acquire this cheaper in 2027?"

If the answer to all three is yes, the bubble thesis becomes operationally irrelevant.
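
Those three questions can be pressure-tested with a simple payback model run under each of the scenarios above. A minimal sketch with hypothetical inputs; the costs and monthly benefits are placeholders, not figures from the $12 million proposal:

```python
# Payback check for an AI proposal under the three scenarios discussed above.
# All inputs are hypothetical placeholders for illustration.

def payback_months(upfront_cost: float, monthly_net_benefit: float) -> float:
    """Months to recover the upfront investment, ignoring discounting."""
    return float("inf") if monthly_net_benefit <= 0 else upfront_cost / monthly_net_benefit

SCENARIOS = {
    # scenario: (upfront cost, monthly net benefit)
    "bubble_bursts":        (12_000_000, 400_000),    # vendor churn slows rollout, benefits shrink
    "selective_correction": (12_000_000, 700_000),    # the proposal's base case
    "sustained_growth":     (12_000_000, 1_100_000),  # capabilities improve, benefits compound
}

for name, (cost, benefit) in SCENARIOS.items():
    months = payback_months(cost, benefit)
    verdict = "clears" if months <= 12 else "misses"
    print(f"{name}: {months:.0f} months ({verdict} the 12-month bar)")
```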

AI TOOLBOX
  • Vellum - Best for managing multi-model AI workflows. Production-ready platform for testing, monitoring, and switching between AI models. Critical for avoiding vendor lock-in when bubble uncertainty makes long-term contracts risky. Offers prompt version control and A/B testing across OpenAI, Anthropic, and open-source alternatives.

  • LangSmith - Best for debugging and evaluating AI applications. When you need to understand why your AI application fails in production, LangSmith provides observability into model behavior. Essential for building applications that survive vendor consolidation—detailed logging lets you replicate functionality if your primary AI vendor disappears.

  • Modal - Best for running AI workloads without infrastructure commitment. Serverless compute platform that spins up GPUs on demand. Perfect for scenario planning: test AI applications without capital expenditure, scale down instantly if the bubble bursts, scale up if capabilities improve. Pay only for the compute seconds used.

  • Weights & Biases - Best for ML experiment tracking and collaboration. Enterprise-grade platform for monitoring training runs, comparing model versions, and collaborating across AI teams. Particularly valuable for companies building internal AI competency—the expertise you develop here transfers regardless of which AI vendors survive.

  • Roboflow - Best for computer vision applications. Purpose-built platform for deploying vision AI in manufacturing, quality control, and logistics. Strong ROI documentation and on-premise deployment options make it suitable for conservative infrastructure strategies. Works with open-source models, reducing vendor dependency.
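
Several of these tools exist to keep applications portable across model vendors; the same principle can be applied in-house with a thin adapter layer, so switching providers is a configuration change rather than a rewrite. A minimal sketch, assuming the official OpenAI and Anthropic Python SDKs are installed and API keys are set in the environment; the model names are placeholders that will change over time:

```python
# Thin adapter so application code never hard-codes a single AI vendor.
# Assumes `pip install openai anthropic` and the standard environment
# variables for API keys; model names below are placeholders.

from openai import OpenAI
import anthropic

def complete_openai(prompt: str, model: str = "gpt-4o") -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def complete_anthropic(prompt: str, model: str = "claude-3-5-sonnet-latest") -> str:
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

# Application code calls one provider-neutral function; swapping or
# falling back between vendors is a one-line configuration change.
PROVIDERS = {"openai": complete_openai, "anthropic": complete_anthropic}

def complete(prompt: str, provider: str = "openai") -> str:
    return PROVIDERS[provider](prompt)
```

Routing every call through a function like `complete()` is the unglamorous version of what platforms such as Vellum and LangSmith formalize: if a vendor consolidates or disappears, you change one mapping instead of rewriting every call site.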

PRODUCTIVITY PROMPT

Prompt of the Week: The Strategic Investment Analyst

Business leaders receive AI proposals with vague ROI promises, unclear timelines, and no consideration of bubble risk. Internal champions oversell capabilities. Vendors obscure implementation complexity. Finance teams lack frameworks for evaluating AI spend against traditional capital allocation criteria. The result: companies either avoid AI entirely or greenlight proposals that would embarrass them during the next board review.

This prompt creates a structured investment analysis framework that forces consideration of multiple scenarios, quantifies downside risk, and separates technology capability from business value. By embodying a strategic investment analyst role, it applies traditional capital allocation discipline to AI spending while acknowledging uncertainty. The constraint to provide both bull and bear cases prevents one-sided advocacy. The requirement for specific metrics and timeline assumptions forces precision where most AI proposals traffic in ambiguity.

Just fill in the [SQUARE BRACKETS] in the prompt below and use ChatGPT or your favorite chatbot as a thought partner to understand how to drive better ROI from your AI investment.

You are a strategic investment analyst at a [COMPANY TYPE] with deep expertise in technology capital allocation and a track record of identifying both transformational opportunities and speculative excess. Your CEO has asked you to evaluate a proposal to invest [INVESTMENT AMOUNT] in [SPECIFIC AI APPLICATION] over [TIMEFRAME].

Context: The proposal comes at a moment of high uncertainty about AI valuations and business model sustainability. Multiple reputable analysts have compared current AI investment levels to the dotcom bubble. Your analysis must acknowledge this uncertainty while providing actionable recommendations.

Task: Produce a structured investment analysis that includes:

1. **Scenario Analysis**
   - Best case (AI capabilities meet projections, vendor remains stable)
   - Likely case (mixed results, some vendor consolidation)
   - Worst case (bubble bursts, infrastructure becomes distressed asset)
   
2. **Unit Economics Breakdown**
   - Cost per transaction/output with AI vs. without AI
   - Payback period under each scenario
   - Dependency on vendor-specific infrastructure
   
3. **Risk Mitigation Strategies**
   - How to structure vendor contracts for downside protection
   - Open-source alternatives if primary vendor fails
   - Skills/capabilities that survive vendor consolidation
   
4. **Decision Framework**
   - Under what conditions should we proceed immediately?
   - Under what conditions should we pilot only?
   - Under what conditions should we wait for market consolidation?

Output format: Executive summary (3 bullet points), detailed analysis (use tables for scenario comparison), and explicit recommendation with trigger points for re-evaluation.

Constraints:
- Tone: Analytically rigorous, acknowledges uncertainty without advocacy
- Avoid: "Revolutionary," "transformative," "paradigm shift," or other hype language
- Include: Specific quantitative assumptions that can be updated quarterly
- Length: 800-1200 words maximum

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
