EXECUTIVE SUMMARY

You know a technology category has arrived when it starts spawning foundations—not working groups or consortiums, but foundations with bylaws, governance structures, and institutional weight that signals long-term commitment.

  • The Linux Foundation launched the Agentic AI Foundation (AAIF) on December 9, 2025, with Anthropic, OpenAI, Google, Microsoft, AWS, and others contributing MCP, Goose, and AGENTS.md as open source projects under neutral governance.

  • The Agentics Foundation has quietly built a 100,000+ practitioner community through Reddit, LinkedIn, and Discord—no platinum membership tiers, just builders learning together.

  • One foundation serves enterprises with formal governance; the other serves practitioners with a hands-on community. Smart organizations engage with both.

If you're planning your 2026 agentic AI strategy, the emergence of complementary foundations signals that the category has matured—and that the organizations that shape these standards will have advantages over those that simply consume them.

MORE FROM THE ARTIFICIALLY INTELLIGENT ENTERPRISE NETWORK

🎙️ AI Confidential Podcast - Are LLMs Dead?

🎯 The AI Marketing Advantage - AI Agents Take Over the Marketing Workflow

📚 AIOS - This is an evolving project. I started with a 14-day free AI email course to get smart on AI. But the next evolution will be a ChatGPT Super-user Course and a course on How to Build AI Agents.

AI DEEPDIVE

A Tale of Two Agentic Foundations

When enterprises need governance and practitioners need community, the smart money builds both

The agentic AI space just crystallized around two distinct centers of gravity. Understanding the difference—and the complementarity—will shape how enterprises approach this technology through 2026 and beyond.

What Makes a Technology Foundation

Technology foundations exist to solve coordination problems that markets can't. When multiple competitors need the same infrastructure and that infrastructure benefits from being open and interoperable, foundations provide neutral ground for collaboration without antitrust concerns or vendor lock-in fears.

The Linux Foundation has perfected this model. Kubernetes started as Google's internal container orchestration system. Today, under the Cloud Native Computing Foundation (a Linux Foundation project), it's the de facto standard that runs workloads across AWS, Azure, and GCP. No single vendor controls it. Everyone benefits.

Last week's AAIF launch signals that the same institutional machinery is now focused on agentic AI. The goal, as Jim Zemlin, Executive Director of the Linux Foundation, stated, is to avoid a future of "closed wall proprietary stacks where tool connections, agent behavior, and orchestration are locked behind a handful of platforms."

But formal governance isn't the only coordination mechanism that matters. Communities also solve coordination problems—different problems, through other means.

The AAIF Model: Corporate Governance for Open Standards

The Agentic AI Foundation launched on December 9, 2025, with a clear mission: provide vendor-neutral oversight for AI agent infrastructure. Its structure mirrors other successful Linux Foundation projects.

Founding Projects:

The AAIF launched with three cornerstone donations that represent the foundational plumbing of the agent era:

  • Model Context Protocol (MCP) from Anthropic has become the universal standard for connecting AI models to tools, data, and applications. Released just one year ago, MCP now has over 10,000 published servers and has been adopted by Claude, Cursor, Microsoft Copilot, Gemini, VS Code, and ChatGPT. As MCP co-creator David Soria Parra explained: "We're all better off if we have an open integration center where you can build something once as a developer and use it across any client."

  • Goose from Block is an open source agent framework that serves as a reference implementation for MCP. Thousands of Block engineers use it weekly for coding, data analysis, and documentation. As Manik Surtani, Head of Open Source at Block, noted: "By establishing the AAIF, Block and this group of industry leaders are taking a stand for openness."

  • AGENTS.md from OpenAI is a simple instruction file developers add to repositories to guide AI coding tools. Think of it as a README for machines—a predictable place to provide context that helps agents work reliably across different projects. Over 60,000 open source projects have already adopted it; a sample file follows this list.
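
To make the AGENTS.md idea concrete, here is a hypothetical example of what such a file might contain. The project details, commands, and conventions are illustrative placeholders, not drawn from any real repository or official OpenAI template.

```markdown
# AGENTS.md

## Project overview
A TypeScript API service with a React front end in `web/`.

## Setup and checks
- Install dependencies with `pnpm install`
- Run `pnpm test` and `pnpm lint` before proposing changes

## Conventions
- Reuse the error-handling helpers in `src/lib/errors.ts`
- Never hardcode secrets; configuration comes from environment variables

## Pull requests
- Keep changes small and describe what was modified and why
```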

Membership Tiers:

AAIF operates through tiered corporate membership. Platinum members (AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, OpenAI) shape strategic direction. Additional tiers provide varying levels of governance participation. This is companies backing infrastructure with real commitment—not just press-release participation. Keep in mind that you may also need to pay for general Linux Foundation membership; factor that cost into your decision when you engage.

What's Coming:

The first MCP Developers Summit is scheduled for April 2-3, 2026, in New York. This signals the Linux Foundation's commitment to building not just governance but community around these standards.

The Agentics Model: Grassroots Community for Practitioners

While AAIF was forming in boardrooms, another foundation was growing from the ground up.

The Agentics Foundation, founded by Reuven Cohen (rUv), claims 100,000+ members globally across Reddit, LinkedIn, Discord, and regional meetup chapters. Unlike AAIF's corporate structure, Agentics is built around individuals learning and creating together.

What They Offer:

Agentics operates through education and community engagement rather than formal governance. Their Discord server hosts biweekly classes. Regional chapters (London, Toronto, and others) hold regular meetups. Hackathons bring builders together for intensive collaboration. The foundation has developed its own frameworks—most notably SPARC (Specification, Pseudocode, Architecture, Refinement, Completion)—for guiding agent development.
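
For a sense of how the phases fit together, here is a hypothetical SPARC outline for a small internal agent project. The project and the details under each phase are illustrative only, not an official Agentics template.

```markdown
# SPARC outline (hypothetical): support-ticket triage agent

1. Specification: classify inbound tickets by urgency and topic, route them to
   the right queue, and never auto-close a ticket without human review.
2. Pseudocode: fetch ticket -> classify(urgency, topic) -> route(queue) ->
   draft a summary for the assignee.
3. Architecture: one agent loop, an MCP server exposing the ticket system as
   tools, and an audit log for every routing decision.
4. Refinement: replay last quarter's tickets against the agent, tighten the
   classification prompt, and add escalation rules for edge cases.
5. Completion: document the agent, add AGENTS.md guidance to the repository,
   and hand off to the support engineering team.
```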

Organizational Structure:

Agentics operates as a non-profit with an Apache Foundation-inspired meritocracy model. There are no corporate membership tiers—contributions are measured by community participation, not by dues. This keeps the barrier to entry low for individual practitioners who want to learn and contribute but can't justify a corporate membership fee.

How They Work Together

This isn't a zero-sum competition. The two foundations address different gaps in the agentic AI ecosystem.

What AAIF Provides:

  • Formal governance that enterprises require before betting on a standard

  • Intellectual property frameworks that protect contributors and users

  • Neutral territory where competitors (OpenAI, Anthropic, Google) can collaborate

  • Long-term maintenance commitments for critical infrastructure

What Agentics Provides:

  • Hands-on learning for practitioners building with agent technologies

  • Community support for debugging, best practices, and real-world implementation

  • Regional presence through meetups and local chapters

  • Accessible entry points for individual contributors

Think of it as the "AI mullet" model: business in the front (AAIF's formal governance), hacker collective in the back (Agentics' builder community).

How to Engage with Agentic Foundations

For Enterprise Leaders:

If you're evaluating agentic AI for production deployment, AAIF membership signals commitment to open standards. More practically, your engineering teams should be building on MCP now—it's emerging as the de facto integration layer, and formal foundation governance reduces vendor lock-in risk.

Consider these engagement paths:

Phase 1: Standards Alignment

Review your current AI integration architecture against MCP capabilities. Identify where custom connectors could be replaced with standardized protocols.
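
As a concrete illustration, here is a minimal sketch of what wrapping one of those custom connectors as an MCP server can look like, assuming the official MCP Python SDK and its FastMCP helper; the server name, tool, and backend lookup are hypothetical placeholders.

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK is installed).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-lookup")  # hypothetical server name

@mcp.tool()
def order_status(order_id: str) -> str:
    """Return the current status of an order by its ID."""
    # Replace this stub with a call into your real system of record.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    # Runs over stdio by default, so any MCP-capable client can connect to it.
    mcp.run()
```

Once a connector is exposed this way, any MCP-capable client (Claude, Cursor, VS Code, and the others noted earlier) can use it without a bespoke integration.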

Phase 2: Governance Participation

If agentic AI is strategic for your organization, consider AAIF membership at the appropriate tier. This provides input on roadmap decisions and early access to emerging standards.

Phase 3: Community Investment

Encourage your engineering teams to participate in both communities. AAIF for formal standards work, Agentics for practical implementation knowledge.

For Practitioners:

Start with Agentics if you're learning. The barrier to entry is lower, the community is welcoming, and you'll find people actively building and sharing their work. Their Discord and Reddit channels are active with real implementation questions and answers.

As your projects mature, engage directly with AAIF-governed projects. Contribute to MCP servers. Build Goose extensions. Add AGENTS.md to your repositories. This is how individual practitioners shape standards—through contribution, not membership fees.

Common Missteps

Waiting for Standards to Settle

The standards are settling now. MCP has already achieved broad adoption. Organizations that delay building on these foundations will find themselves catching up rather than contributing.

Treating Foundation Membership as Marketing

AAIF membership isn't a badge—it's a commitment. If your organization joins but doesn't contribute engineering resources or participate in governance, you're paying for access you won't use.

Ignoring Community Learning

Enterprise teams often undervalue community participation. The practitioners in Agentics are solving problems today that your team will face tomorrow. Their experience compounds faster than any vendor's documentation can capture.

Business Value

Learning Acceleration:

Organizations with teams active in Agentics report faster onboarding for agentic AI projects. Community knowledge transfer compresses learning curves that would otherwise take months of internal trial and error.

Governance Enables Procurement:

Many enterprise procurement processes require evidence of neutral governance before adopting infrastructure components. AAIF provides exactly this evidence for MCP, Goose, and AGENTS.md. This removes a barrier that previously slowed agentic AI adoption.

Competitive Implications:

Organizations that engage with both foundations will develop agentic AI capabilities faster than those that engage with neither. The combination of formal standards (reducing technical risk) and practitioner community (accelerating learning) creates compounding advantages.

What This Means for Your Planning

The emergence of two complementary foundations—one corporate, one community—is exactly what mature technology categories look like. Consider how Linux has both the Linux Foundation (governance) and vibrant user groups worldwide (community). Or how web standards have W3C (formal) alongside countless local meetups and online communities (informal).

For your 2026 planning, three concrete implications emerge:

  • Budget for Foundation Engagement: If agentic AI is on your roadmap, allocate resources for AAIF membership (at whatever tier matches your scale) and staff time for community participation. This isn't optional overhead—it's infrastructure investment that compounds over time.

  • Accelerate MCP Adoption: The protocol just gained the governance credibility that enterprise procurement requires. If you've been waiting for "the right time" to build on MCP, this is it. Your competitors who are already building will have a head start.

  • Connect Your Teams to Community: Encourage your practitioners to engage with Agentics Foundation resources. The learning acceleration from community connection is real, and it's one of the few advantages smaller, more agile organizations have over large enterprises with isolated teams.

The age of agentic AI isn't coming—it's here. And like every technology era before it, the organizations that shape the standards will have advantages over those who simply consume them.

This email includes the core analysis, but the complete issue lives on our website—featuring the AI Toolbox, a Productivity Prompt of the Week, and additional insights that don’t fit in email format. Visit the site to explore the full edition and put these ideas into action.

AI TOOLBOX

Tools for engaging with the emerging agentic AI foundation ecosystem:

  • Goose: Agent Framework - Block's open source agent framework, now under AAIF governance. Local-first architecture means your data stays on your infrastructure. MCP-native from day one, making it the reference implementation for protocol compliance.

    License: Open source (Apache 2.0)

    Best for: Engineering teams building production agent systems

  • MCP Inspector: Protocol Debugging - Developer tool for testing and debugging MCP server implementations. Essential for teams building custom MCP integrations or validating third-party servers.

    Pricing: Open source

    Best for: Developers implementing MCP servers

  • Obot AI: MCP Gateway - Open source platform for managing MCP connections at scale. Provides centralized control, security proxies, role-based access, audit logging, and a curated catalog for organizations deploying MCP across multiple AI systems. Founded by the Rancher Labs team, the company raised a $35M seed round in September 2025.

    Pricing: Open source (free) + enterprise support available

    Best for: IT teams governing AI integrations across the organization

  • Agentics SPARC Framework: Development Methodology - Structured approach to agent development covering Specification, Pseudocode, Architecture, Refinement, and Completion phases. Available as documentation and community templates.

    Pricing: Open source

    Best for: Teams establishing agent development practices

ALL THINGS AI 2026

Are you looking to learn from the leaders shaping the AI industry? Do you want to network with like-minded business professionals?

Join us at All Things AI 2026, happening in Durham, North Carolina, on March 23–24, 2026!

This two-day conference kicks off with a full day of hands-on training on Day 1, followed by insightful talks from the innovators building the AI infrastructure of the future on Day 2.

Don’t miss your chance to connect, learn, and lead in the world of AI.

PRODUCTIVITY PROMPT

Prompt of the Week: Foundation Engagement Assessment

Organizations struggle to evaluate where and how to engage with emerging AI standards bodies. Without a structured assessment, teams either overcommit resources to every initiative or miss critical inflection points, allowing competitors to move ahead.

This prompt applies strategic fit criteria across multiple dimensions—technical alignment, organizational readiness, competitive positioning, and resource requirements. By structuring the evaluation, it surfaces trade-offs that might otherwise remain implicit.

Note that even if you aren’t thinking of engaging with a foundation, this prompt demonstrates a few things:

  • The instructions use an ordered list to walk the model through a complicated workflow.

  • The output format is specified, so you get the result in the shape you want.

  • Scoring criteria are defined explicitly, so you get concrete rankings rather than a vague sense of good or bad.

  • Constraints keep the model focused on your task so it doesn’t wander too far afield.

The Prompt

You are a technology strategist evaluating foundation and standards body engagement for an enterprise organization. Your task is to assess engagement options and provide actionable recommendations.

## Context
[PASTE DESCRIPTION OF YOUR ORGANIZATION'S AI STRATEGY AND CURRENT AGENT INITIATIVES]

## Foundation/Standards Body to Evaluate
[PASTE INFORMATION ABOUT THE FOUNDATION—MISSION, MEMBERSHIP, PROJECTS, GOVERNANCE]

## Instructions

1. Assess technical alignment: How closely do the foundation's projects align with our current and planned AI architecture?

2. Evaluate strategic value: What governance participation, early access, or networking benefits does membership provide?

3. Calculate resource requirements: What engineering time, membership fees, and ongoing commitment does meaningful participation require?

4. Analyze competitive dynamics: Which competitors are members? What's the risk of non-participation?

5. Identify community value: Beyond formal membership, what community engagement opportunities exist?

## Output Format

Provide your analysis as a structured assessment including:
- Technical Fit Score (1-10) with rationale
- Strategic Value Score (1-10) with rationale
- Resource Requirement Estimate (low/medium/high)
- Competitive Risk Assessment (low/medium/high)
- Recommended engagement level (none/observe/community/member/contributor)
- Specific next steps with timeline

## Constraints
- Base assessment on provided information; flag assumptions
- Consider both immediate and 24-month strategic horizons
- Distinguish between formal membership value and community participation value

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
