This is a submission for the Google Cloud NEXT Writing Challenge
Hey folks, welcome back!!
So here's the thing. For the last few weeks, I'd been quietly stuck on a problem that sounds embarrassingly simple. I have a few AI agents I've been playing around with. One does research. One drafts content. One reviews stuff. And in my head, the obvious next step was... get them to talk to each other. Pass work between themselves. Work like a tiny team.
Cool idea. Real cool.
The actual implementation? A nightmare of glue code, custom JSON shapes, and my favorite kind of bug (the "this worked yesterday" kind). Every agent had its own opinions about how to receive a task, what "done" looked like, and what counted as a result. (I had three agents and a headache, my bad lol)
Then Google Cloud NEXT '26 happened. 260 announcements in three days. New TPUs. A whole rebrand of Vertex AI into the Gemini Enterprise Agent Platform. Gemini 3.1. Cross-cloud lakehouses. Agentic this, agentic that.
And buried somewhere between TPU 8t and the new Knowledge Catalog was the announcement I almost scrolled past. It was the only one that solved my actual problem.
It was a boring open protocol called A2A.
Why everyone (including me) almost missed it
Conferences reward shiny. The Gemini Enterprise Agent Platform got the demos. The 8th-gen TPUs got the headline performance numbers (3x faster training, 80% better performance-per-dollar, all the kind of stats that fit on a slide). Gemini 3.1 Pro got the model benchmarks. Even the Wiz security partnership made noise because it had a big number attached.
A2A passing 150 enterprises in production? Quiet bullet point. Footnote energy.
But here's the part that's funny in hindsight. The Next Web actually called it out plainly: "The most strategically significant announcement may be the least visible to end users." And the more I read, the more I agreed.
So let's talk about why a footnote announcement might be the most consequential thing Google said all week.
What A2A actually is
A2A is nothing but an open protocol that lets AI agents from different vendors talk to each other.
That's it. That's the whole thing. (I told you it sounded boring.)
If you've been around the AI tooling space for the last year, you've probably heard of MCP, the Model Context Protocol. Anthropic shipped that one. It quietly became the way agents connect to tools and data. Files, APIs, databases, your shell. One agent, many tools.
A2A is the other side of that coin.
If MCP is how an agent talks to its tools, A2A is how agents talk to each other.
Or to put it in even older terms: A2A is HTTP for agents. Or maybe it's more like SMTP for agents. (A boring open standard nobody loves but everyone quietly agrees on, which is honestly the highest compliment a protocol can get.)
It's built on stuff every dev already understands. HTTP for transport. JSON-RPC 2.0 for messages. Server-Sent Events for streaming updates. OAuth 2.0 for auth. No magic. No proprietary handshake. Nothing you have to learn from scratch.
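To make that concrete, here's roughly what one of those JSON-RPC 2.0 messages looks like. Treat this as a sketch: the `message/send` method name and the payload shape are my reading of the A2A spec, so check the current spec before relying on exact field names.

```python
import json
import uuid

def make_a2a_request(text: str) -> dict:
    """Build a JSON-RPC 2.0 envelope asking an agent to handle a message.

    The "message/send" method and message shape follow my reading of the
    A2A spec -- verify field names against the spec before shipping.
    """
    return {
        "jsonrpc": "2.0",          # required by JSON-RPC 2.0
        "id": str(uuid.uuid4()),   # correlates the response with this request
        "method": "message/send",  # assumed A2A method for sending a message
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],  # messages are multipart
            }
        },
    }

# This dict goes over plain HTTP POST as the request body:
payload = json.dumps(make_a2a_request("Summarize this doc"))
```

That's the whole wire format: an HTTP POST with a JSON body any web dev could write by hand.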
And that's the part that made me sit up.
The "wait, that's clever" detail
Here's the design choice that got me grinning.
Every A2A agent publishes a tiny file at a well-known URL. The file lives at /.well-known/agent-card.json. It says who the agent is, what it can do, and how to talk to it.
If you've ever poked around robots.txt or security.txt, you already know this pattern. Same energy.
Which means... any URL on the public internet can become a discoverable agent by hosting one JSON file. Your Vercel side project. Your Cloudflare worker. Your weekend Raspberry Pi. None of them need a Google Cloud contract. None of them need a vendor's permission. They just need a file at a known address.
(I keep coming back to this detail. It's so boring on the surface and so quietly democratic underneath.)
The Agent Card describes capabilities. Authentication reuses OpenAPI-style security schemes (API keys, OAuth, OpenID Connect). The protocol is async-first, which is a fancy way of saying "long-running tasks don't block the conversation, and humans can step in mid-flow." Tasks have lifecycles. Messages are multipart. Artifacts come back as structured data, not vibes.
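For a feel of what that card contains, here's a minimal sketch. The field names follow my reading of the A2A Agent Card schema; the agent name, URL, and skill are made up, so treat this as illustrative rather than copy-paste-able.

```json
{
  "name": "research-agent",
  "description": "Finds and summarizes sources on a topic",
  "url": "https://example.dev/a2a",
  "version": "0.1.0",
  "capabilities": { "streaming": true },
  "defaultInputModes": ["text/plain"],
  "defaultOutputModes": ["text/plain"],
  "skills": [
    {
      "id": "find-sources",
      "name": "Find sources",
      "description": "Given a topic, returns a ranked list of sources"
    }
  ]
}
```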
It's almost suspiciously normal. And that's why I think it's going to win.
Why boring tends to win
Look at what actually runs the internet today.
HTTP. SMTP. DNS. SQL. TCP/IP. Every single one of these is a boring, decades-old, committee-shaped open standard. None of them are anyone's clever proprietary breakthrough. All of them outlived the slick alternatives that tried to replace them.
(There's a pattern here, isn't there?)
A2A is following the same script. Google built it, then donated it to the Linux Foundation in 2025. So it's not Google's protocol anymore. It belongs to the ecosystem.
In one year, A2A went from announcement to 150+ organizations supporting the standard, with real production deployments across major cloud platforms. Not pilots. Production. The named supporters include Microsoft, AWS, Salesforce, SAP, ServiceNow, Cisco, and IBM. (For context, those are not companies that adopt protocols casually.)
The protocol supports cryptographic signing of agent cards (via JSON Web Signatures, the same mechanism JWTs use). Translation: an agent can prove it's actually who it says it is. That's the kind of feature you ship when something is being used for real money.
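If you haven't touched JWS before, the mechanics are less exotic than they sound. Here's a minimal sketch using only the Python standard library. Note the big assumption: real agent-card signing would use an asymmetric key (so anyone can verify without holding a secret), but HS256 fits in stdlib, and the compact `header.payload.signature` shape is the same either way.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWS uses base64url encoding with the padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    """Produce a compact JWS (header.payload.signature) over a JSON payload.

    Agent cards in the wild would use asymmetric signatures; HS256 is shown
    here only to demonstrate the serialization format.
    """
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_hs256(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return hmac.compare_digest(expected, sig)

token = sign_hs256({"name": "research-agent"}, b"shared-secret")
assert verify_hs256(token, b"shared-secret")
```

Three base64url blobs and an HMAC. That's the "cryptographic proof of identity" in practice.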
Make no mistake, this is the moment a protocol stops being a nice idea and becomes infrastructure.
Why this matters for the rest of us
Now here's the part I'm personally excited about, and I think you should be too.
Open protocols favor the underdog. Always have.
When the rules of the road are public, the small builder pulls up to the same intersection as the giant. Your weekend agent, the one stitched together with duct tape and a Gemini API key, can publish a /.well-known/agent-card.json and speak the exact same language as Salesforce's enterprise agent. Same protocol. Same primitives. Same wire format.
That's a wild thing to type out, but it's literally true.
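Discovery really is that mechanical. A client needs string concatenation and one HTTP GET, nothing more. A sketch (the actual fetch needs network access, so it's defined but not called here, and the project URL is made up):

```python
import json
import urllib.request

WELL_KNOWN_PATH = "/.well-known/agent-card.json"

def agent_card_url(base_url: str) -> str:
    """Where any A2A client would look for an agent's card."""
    return base_url.rstrip("/") + WELL_KNOWN_PATH

def fetch_agent_card(base_url: str) -> dict:
    """GET and parse the card. (Requires network access, so not run here.)"""
    with urllib.request.urlopen(agent_card_url(base_url)) as resp:
        return json.load(resp)

# Your weekend project and an enterprise agent are discovered the same way:
print(agent_card_url("https://my-side-project.vercel.app"))
```

Same function, same path, whether the base URL is a Fortune 500 domain or your Raspberry Pi.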
And remember the mini-team idea I started with? The research-drafting-reviewing trio I was trying to wire up? That stops being a glue-code nightmare. Each agent publishes its capabilities. Each one accepts standardized tasks. They pass work between themselves the same way two production agents at Salesforce and ServiceNow do. (Yeah, I know lol. Wild.)
Then Google announced a dedicated AI Agent Marketplace, planned for later this year, where partners can sell A2A agents directly to customers. Which means if you build a good one, there's a path to revenue that doesn't require a sales team or a partnership deal. You ship the agent. The protocol does the integration work. The marketplace does the distribution.
That's not a small thing for solo devs and tiny teams.
So here's the takeaway
Let me recap, because we covered some ground:
- A2A is an open protocol for agents to talk to each other (HTTP for agents, basically)
- It crossed 150+ supporting organizations with production deployments by NEXT '26 and is now governed by the Linux Foundation, not Google
- Major clouds (Google, Microsoft, AWS) have embedded it directly into their platforms
- It complements MCP rather than competing with it (MCP for tools, A2A for agents)
- The well-known URL design means any solo dev can publish a discoverable agent
- A dedicated AI Agent Marketplace was announced for partners to sell agents directly to customers (launching later in 2026)
Personally, I think this is the announcement from NEXT '26 that I'll still be thinking about a year from now. The TPUs are cool. Agent Studio is cool. Gemini 3.1 is cool. (Pretty cool, right?) But chips and models and IDEs come and go. Open protocols stick around.
If you're building anything agent-shaped, even a small thing, I'd peek at the spec. The agent-card.json pattern alone is worth twenty minutes of your evening. And the multi-agent team you've been daydreaming about? It just got a lot less hypothetical.
That's the part I keep grinning at.
This article was originally published by DEV Community and written by Arun K C.