EXECUTIVE SUMMARY

New research from the National Bureau of Economic Research analyzing 18 billion ChatGPT messages reveals the most comprehensive picture yet of how people actually use AI. The findings challenge conventional enterprise assumptions: 73% of usage is non-work-related, 77% of messages cluster in just three categories (practical guidance, information-seeking, and writing), and users find more value in "Asking" (49%) than "Doing" (40%). For business leaders, this means AI strategy should prioritize decision support over task automation.

Today, we’re unpacking the data behind these findings and outlining what they mean for leaders shaping AI strategy in 2025 and beyond.

MORE FROM THE ARTIFICIALLY INTELLIGENT ENTERPRISE NETWORK

🎙️ AI Confidential Podcast - Are LLMs Dead?

🎯 The AI Marketing Advantage - AI Agents Take Over the Marketing Workflow

📚 AIOS - An evolving project. It started as a 14-day free AI email course to get smart on AI; the next evolution will be a ChatGPT Super-user Course and a course on How to Build AI Agents.

AI DEEPDIVE

How People Actually Use AI

The Data Behind 18 Billion Messages

ChatGPT now serves 700 million weekly active users and generates 18 billion messages per week. This NBER study, published in September 2025, represents the first rigorous analysis of actual usage data rather than surveys or assumptions. The researchers developed a novel taxonomy by mapping conversations to O*NET occupational categories, creating a framework that connects AI usage directly to workplace activities.

Weekly active ChatGPT users on consumer plans (Free, Plus, Pro), shown as point-in-time snapshots every six months, November 2022–September 2025 [Source: NBER]

The 70/30 Split Nobody Expected

The most striking finding: 73% of ChatGPT usage is non-work related, up from 53% in June 2024. This isn't users slacking off—it's evidence that AI has become infrastructure for daily life, not just a productivity tool. Personal use spans everything from meal planning to relationship advice to learning new skills.

This ratio has significant implications for enterprise AI strategy. Organizations that measure only workplace productivity capture less than 30% of how their employees actually develop AI fluency.

What People Actually Do With AI

The taxonomy reveals three dominant categories accounting for 77% of all messages:

  1. Practical Guidance (29%): How-to questions, recommendations, planning assistance

  2. Seeking Information (24%): Research, fact-finding, learning

  3. Writing (24%): Content creation, editing, communication

Share of consumer ChatGPT messages by high-level conversation topic [Source: NBER]

Notably, programming represents only 4.2% of usage—far less than the tech industry narrative suggests. Education accounts for 10.2%, making it a larger use case than coding.

A critical nuance in the writing category: two-thirds of writing messages involve modification rather than creation. Users aren't asking AI to write from scratch; they're asking it to edit, improve, or transform existing content. This distinction matters for training and tool design.

Asking vs. Doing vs. Expressing

The researchers categorized all usage into three meta-categories:

  • Asking (49%): Information retrieval, decision support, learning

  • Doing (40%): Task completion, content creation, problem-solving

  • Expressing (11%): Creative writing, communication, self-expression

[Source: NBER]

Here's what matters: user satisfaction is higher for Asking than for Doing. People find more value when AI helps them think through problems than when it executes tasks for them. This challenges the automation-first mindset dominating enterprise AI strategy.
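To make the Asking/Doing/Expressing split concrete, here is a minimal sketch of how a team might tag its own chat logs into these three meta-categories. The keyword lists, function names, and sample messages are illustrative assumptions for this sketch; the NBER researchers used model-based classification, not keyword matching.

```python
from collections import Counter

# Illustrative keyword heuristics (assumptions, not the study's method).
ASKING_CUES = ("how do", "what is", "should i", "explain", "compare")
EXPRESSING_CUES = ("write a poem", "story about", "draft a toast")

def classify(message: str) -> str:
    """Assign one message to a meta-category via simple keyword cues."""
    text = message.lower()
    if any(cue in text for cue in EXPRESSING_CUES):
        return "Expressing"
    if any(cue in text for cue in ASKING_CUES):
        return "Asking"
    # Default: task requests like "summarize this" or "fix my report".
    return "Doing"

def usage_breakdown(messages):
    """Return each category's share of messages as a percentage."""
    counts = Counter(classify(m) for m in messages)
    total = len(messages)
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

sample = [
    "How do I evaluate vendor risk?",
    "Summarize this meeting transcript",
    "What is our churn benchmark?",
    "Write a poem for a retirement party",
    "Fix the formatting in this report",
]
print(usage_breakdown(sample))  # e.g. {'Asking': 40.0, 'Doing': 40.0, 'Expressing': 20.0}
```

Even a rough tally like this can reveal whether your organization skews toward Doing (automation) when the satisfaction data suggests Asking (decision support) is where the value sits.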

The Work Usage Profile

Within the 27% of work-related usage, the distribution is revealing:

  • Writing assistance: 40% of work messages

  • Information and decision-making: 41% combined

  • Programming: Higher than the consumer average but still a minority

[Source: NBER]

When mapped to O*NET's Generalized Work Activities, "Making Decisions and Solving Problems" and "Getting Information" rank among the top two activities across virtually all occupations. This isn't about automating routine tasks—it's about augmenting judgment.

Demographics: The Gaps Are Closing

The study documents rapid demographic shifts:

  • Gender: At launch, 80% of users were male. By June 2025, the user base was 52% female.

  • Age: 46% of messages come from users aged 18-25, but adoption is growing fastest among older cohorts.

  • Geography: Growth is faster in low and middle-income countries than in high-income nations.

For enterprises, this means AI literacy programs designed for technical early adopters will miss most of the workforce.

The Consumer Ecosystem Context

The a16z Top 100 Gen AI Apps report (August 2025) provides complementary data on the broader landscape. Fourteen apps have appeared in all five editions of the ranking—the "All Stars"—with ChatGPT maintaining a dominant position.

Brynjolfsson and Collis estimate that ChatGPT generated $97 billion in US consumer surplus in 2024—the value users received beyond what they paid. This economic impact dwarfs most enterprise software categories.

Implementation: Five Strategic Shifts

Based on the NBER research findings, here are the priority changes for enterprise AI strategy:

  1. Expand Measurement Beyond Productivity: If 73% of AI value creation happens outside work tasks, traditional ROI metrics miss most of the picture. Track decision quality, learning velocity, and employee AI fluency alongside task completion.

  2. Position AI as Advisor First: User satisfaction is higher for "Asking" than "Doing." Lead with decision support use cases—scenario analysis, option evaluation, risk assessment—before automation projects.

  3. Start With Writing Workflows: Writing represents 24% of all usage and 40% of work usage. Focus on editing and improvement workflows rather than generation from scratch, since two-thirds of writing usage involves modification.

  4. Extend Beyond Engineering: Programming is only 4.2% of usage. Every department—HR, finance, operations, sales—has higher-volume use cases waiting for enablement. Larridin research shows employees with formal AI training demonstrate 2.7x higher proficiency than self-taught users.

  5. Build Around Decision Support: "Making Decisions and Solving Problems" tops the usage charts across occupations. Design AI implementations that enhance judgment rather than replace it.

Common Missteps

  • Over-indexing on automation: The data shows people want thinking partners, not task robots. Automation-first strategies miss the highest-value use cases.

  • Ignoring non-work learning: Employees develop AI skills through personal use. Restrictive policies may slow organizational capability building.

  • Assuming technical users lead: The gender and age data show AI adoption is democratizing rapidly. Programs designed for developers will miss most users.

  • Measuring only productivity: With writing, information, and decision-making together accounting for roughly 81% of work messages, productivity metrics capture a fraction of actual value.

Next Steps For Business Adoption of AI

This research provides the first empirical foundation for enterprise AI strategy. The key insight: AI is becoming infrastructure for thinking, not just doing. Organizations that recognize this shift—building for decision support, measuring beyond productivity, and enabling all employees—will capture value that their automation-focused competitors miss.

One important caveat: this data is ChatGPT-specific. Claude's usage patterns show different distributions—33% programming versus ChatGPT's 4.2%—reflecting different user bases and product positioning. As the market matures, understanding these platform-specific patterns will become increasingly important.

This email includes the core analysis, but the complete issue lives on our website—featuring the AI Toolbox, a Productivity Prompt of the Week, and additional insights that don’t fit in email format. Visit the site to explore the full edition and put these ideas into action.

AI TOOLBOX
  • Writer — Enterprise AI writing platform with full-stack content generation. Team plan $18/user/month includes brand voice customization, style guides, and SOC 2 Type II compliance. Built on proprietary Palmyra LLMs, not third-party APIs. Aligns with the 24% writing usage finding.

  • Grammarly Pro — AI writing assistant with tone detection and rewrite suggestions. Pro tier $12/user/month (annual) includes 2,000 AI prompts monthly, plagiarism detection, and enterprise SSO. Supports the "editing over creation" insight—two-thirds of writing is modification.

  • Amplitude — Product analytics platform with AI-powered behavioral insights. Starter free for up to 50K monthly tracked users; Plus $49/month adds advanced charts and unlimited cohorts. Critical for measuring AI usage beyond productivity metrics.

ALL THINGS AI 2026

Are you looking to learn from the leaders shaping the AI industry? Do you want to network with like-minded business professionals?

Join us at All Things AI 2026, happening in Durham, North Carolina, on March 23–24, 2026!

This two-day conference kicks off with a full day of hands-on training on Day 1, followed by insightful talks from the innovators building the AI infrastructure of the future on Day 2.

Don’t miss your chance to connect, learn, and lead in the world of AI.

PRODUCTIVITY PROMPT

Prompt of the Week: AI Usage Audit Framework

The Challenge: You need to understand how your organization actually uses AI—not how you assume it is used—to inform strategy and investment decisions.

The Prompt:

You are an AI adoption analyst helping me audit how our organization uses AI tools. I'll provide context about our company, and you'll help me create a structured assessment.

First, I need you to understand our situation:
- Company: [COMPANY NAME]
- Industry: [INDUSTRY]
- Employee count: [NUMBER]
- Current AI tools deployed: [LIST TOOLS]
- Departments using AI: [LIST DEPARTMENTS]

Based on the NBER research showing AI usage breaks into Asking (49%), Doing (40%), and Expressing (11%), analyze our likely usage patterns.

Provide your analysis in this structure:

1. **Usage Pattern Summary:** Estimate our breakdown across Asking/Doing/Expressing based on our industry and tools.

2. **Measurement Gap Matrix:** Create a table showing:
   - What we likely measure (productivity metrics)
   - What we likely miss (decision support, learning, satisfaction)
   - Recommended new metrics for each gap

3. **Team-by-Team Assessment:** For each department, identify:
   - Most likely high-value use case
   - Current measurement approach
   - Recommended measurement expansion

4. **Priority Recommendations:** Rank the top 5 changes we should make to our AI measurement and enablement strategy.

5. **Quick Wins:** Identify 3 things we can implement this week to better understand our AI usage.

Example Use Case: A VP of Digital Transformation at a financial services firm used this prompt to audit ChatGPT Enterprise usage across 200 employees. The analysis revealed that 60% of value came from decision support (loan risk analysis, client recommendations) that wasn't being tracked, while productivity metrics focused on document automation, which accounted for only 15% of actual usage.

Variations:

  • For IT leaders: Add "Include security and compliance monitoring gaps"

  • For HR leaders: Add "Include employee AI skill development tracking"

  • For executives: Add "Include competitive benchmarking recommendations"

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
