The Infrastructure Layer for AI Agents Is Here (And It's Already Standard)
Major tech companies just donated their AI agent protocols to the Linux Foundation. If you're building agents, this shift from proprietary chaos to open standards changes everything.
I've spent enough time in infrastructure to recognize when the industry is choosing its plumbing. Last week, OpenAI, Anthropic, and Block donated their core AI agent technologies to the Linux Foundation's new Agentic AI Foundation. This isn't another standards committee that will argue for three years before producing a 400-page spec nobody implements. The standards already exist, they're already deployed at scale, and now they're being handed over to neutral governance.
If you're building anything with AI agents, this matters more than whatever model release is trending this week.
What Actually Got Donated
Three projects anchor the foundation, and they solve different pieces of the agent infrastructure puzzle:
Model Context Protocol (MCP) from Anthropic is the integration layer. It's a universal standard for connecting AI models to external systems—databases, APIs, internal tools. According to Anthropic's announcement, there are now over 10,000 active public MCP servers in production, with adoption across ChatGPT, Cursor, Gemini, Microsoft Copilot, and VS Code. The official SDKs see 97 million monthly downloads across Python and TypeScript.
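Under the hood, MCP is JSON-RPC 2.0 over stdio or HTTP. When a model wants an external system to do something, the client sends a `tools/call` request and the server replies with content blocks. Here's a minimal sketch of that exchange using only the standard library—the method name `tools/call` comes from the MCP spec, but the tool name `query_database`, its arguments, and the response value are invented for illustration:

```python
import json

# An MCP client asks a server to run a tool via a JSON-RPC 2.0 request.
# "tools/call" is the spec-defined method; the tool "query_database"
# and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# Serialized for the wire (stdio or HTTP), then decoded by the server.
wire_message = json.dumps(request)
decoded = json.loads(wire_message)

# A typical server reply: a result object carrying content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "1042"}]},
}

print(decoded["method"])
print(response["result"]["content"][0]["text"])
```

The point of the standard is that this shape is identical no matter which model sits on the client side—which is why one server can serve ChatGPT, Claude, and Gemini alike.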
Goose from Block is an agent framework that thousands of engineers at Square and Cash App use weekly for coding, data analysis, and documentation. It's the implementation layer—how you actually build agents that do work.
AGENTS.md from OpenAI is a simple instruction file you drop into repositories to tell AI coding tools how to behave in your codebase. Think of it as a README for robots.
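To make that concrete, here's what a hypothetical AGENTS.md for a small web service might look like—every instruction below is invented for illustration, not taken from any real project:

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm ci`, not `npm install`.

## Testing
- Run `npm test` before proposing any change.
- New code needs unit tests under `tests/`.

## Conventions
- TypeScript strict mode is on; do not introduce `any`.
- Never commit directly to `main`; open a pull request instead.
```

Any compliant coding agent that finds this file at the repository root is expected to follow it, the same way a human contributor would read a CONTRIBUTING guide.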
These aren't vaporware. They're battle-tested at companies processing billions in payments and serving hundreds of millions of users.
Why the Linux Foundation
The cynic in me sees three competitors suddenly playing nice and wonders what changed. But this actually makes sense.
As MCP co-creator David Soria Parra told TechCrunch: "The main goal is to have enough adoption in the world that it's the de facto standard. We're all better off if we have an open integration center where you can build something once as a developer and use it across any client."
The Linux Foundation has done this before. Kubernetes won the container orchestration wars not through superior technology alone, but through neutral governance that made it safe for everyone to adopt. You can't build agent infrastructure if every vendor is worried about feeding their competitor's moat.
OpenAI engineer Nick Cooper put it plainly: "We need multiple [protocols] to negotiate, communicate, and work together to deliver value for people, and that sort of openness and communication is why it's not ever going to be one provider, one host, one company."
The Infrastructure Is Moving Fast
While the foundation provides the standards, specialized infrastructure is emerging to support agents at scale:
Tiger Data launched Agentic Postgres in early December—a Postgres variant designed specifically for agent workloads. The key innovation is "Fluid Storage," a distributed block layer that enables zero-copy database forks in seconds. An agent can spin up an isolated copy of production data, test whether new indexes improve performance, and tear everything down without touching production. According to InfoQ, this addresses what Tiger Data calls a critical gap: agents need to iterate rapidly on real data, and traditional cloud block storage like Amazon EBS can't deliver that speed.
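The zero-copy trick is copy-on-write at the block layer: a fork initially shares every block with its parent and only allocates new blocks when something writes. Here's a toy illustration of those semantics in plain Python—a conceptual sketch, emphatically not how Fluid Storage is actually implemented:

```python
# Copy-on-write fork semantics, illustrated with a dict of "blocks".
# Conceptual toy only; real block-layer CoW operates on fixed-size
# storage blocks with reference counting.

class Volume:
    def __init__(self, parent=None):
        self._blocks = {}      # this volume's private writes
        self._parent = parent  # shared, read-only view of parent data

    def read(self, key):
        if key in self._blocks:
            return self._blocks[key]
        if self._parent is not None:
            return self._parent.read(key)
        raise KeyError(key)

    def write(self, key, value):
        # Writes land in the fork's private map; the parent is untouched.
        self._blocks[key] = value

    def fork(self):
        # O(1): no data is copied, the child just references the parent.
        return Volume(parent=self)

prod = Volume()
prod.write("orders/1", "shipped")

scratch = prod.fork()                   # instant, zero-copy
scratch.write("orders/1", "refunded")   # experiment in isolation

print(prod.read("orders/1"))     # production is untouched
print(scratch.read("orders/1"))  # fork sees its own write
```

The fork costs nothing up front and diverges only as the agent writes—which is what makes "spin up a copy of production, try something, throw it away" cheap enough to do in a loop.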
Fal raised $140 million (Series D led by Sequoia) at a $4.5 billion valuation, tripling its valuation since July. They're building the model hosting layer—making it simple to run image, video, and audio models at scale. Bloomberg reported the company has surpassed $200 million in revenue with customers including Adobe, Shopify, and Canva.
The open source ecosystem is exploding too. Activepieces now offers approximately 400 MCP servers for AI agents, according to their GitHub repository with 19,600 stars. HumanLayer (7,600 stars) positions itself as "the best way to get AI coding agents to solve hard problems in complex codebases." Google's ADK samples repository (7,100 stars) provides reference implementations for everything from academic research to financial advisory agents.
What This Means for Your Stack
The practical implications depend on where you are:
If you're building agent applications now, MCP adoption means you can write integrations once instead of maintaining separate connectors for every AI platform. That's the promise, anyway. Block's Brad Axen explained to TechCrunch why they open-sourced Goose: "Getting it out into the world gives us a place for other people to come help us make it better. We have a lot of contributors from open source, and everything they do to improve it comes back to our company."
If you're evaluating vendors, the standardization push reduces lock-in risk. An MCP server you build for Claude will work with ChatGPT, Gemini, and whatever comes next. No more rewriting integrations every time you want to test a new model.
If you're worried about security, pay attention. One analysis found over 5,200 open-source MCP server implementations on GitHub, and credential management varies wildly. The foundation plans to establish "shared safety patterns," according to Linux Foundation executive director Jim Zemlin, but we're in the early days.
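One defensive habit that holds regardless of which MCP server you run: keep credentials in the environment and fail fast when they're missing, rather than baking them into server config or source. A minimal sketch (the variable name and demo value are hypothetical):

```python
import os

def require_env(name: str) -> str:
    """Fetch a required credential from the environment or fail loudly."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required credential: {name}")
    return value

# Hypothetical credential an MCP server might need; the setdefault
# is for demonstration only—real deployments set this externally.
os.environ.setdefault("DB_URL", "postgres://localhost/demo")

db_url = require_env("DB_URL")
print(db_url)
```

Failing loudly at startup beats an agent silently running with a missing or wrong credential halfway through a task.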
The Pattern Here
I've seen this movie before with containers, orchestration, and observability. First comes the proprietary explosion where everyone builds their own thing. Then the consolidation where a few approaches win. Finally, the standardization where industry players donate their tech to neutral foundations because widespread adoption matters more than control.
We're in act three for agent infrastructure.
Zemlin told TechCrunch the goal is avoiding "closed wall proprietary stacks, where tool connections, agent behavior, and orchestration are locked behind a handful of platforms." Whether that succeeds depends on implementation quality and actual adoption, not press releases. But the structural pieces are falling into place.
Cooper from OpenAI had the right framing, warning against donation turning into stagnation: "I don't want these protocols to be part of this foundation, and that's where they sat for two years. They should evolve and continually accept further input." Standards need to be living documents, not monuments.
The Takeaway
If you're investing engineering time in AI agents, build on these standards. The MCP documentation is at modelcontextprotocol.io. Check the registry for existing servers before building your own. Understand that the database, hosting, and orchestration layers are still evolving rapidly.
The infrastructure layer for agents crystallized in about twelve months. That's uncomfortably fast for technology that's supposed to run production systems. But standardization was always going to happen—either through a messy format war or through coordinated donation to neutral governance.
The industry chose the latter. Now we get to see if open standards can keep pace with the velocity of AI development, or if they become anchors that slow down necessary evolution. My money's on the former, but I've been wrong before.
The agent era needed plumbing. Now it has some.