#17: What is A2A and why is it (still!) underappreciated?
everything you need to know about Google's Agent2Agent protocol (and if Google builds the world's first agent directory, A2A will be the language it speaks)
"One of the biggest challenges in enterprise AI adoption is getting agents built on different frameworks and vendors to work together."
Remember the classic example of what we wish AI agents could do smoothly? "Book me a trip to New York next weekend. Prefer a direct flight, leave Friday afternoon, back Sunday evening. And find a hotel close to a good jazz bar." The problem with that (besides becoming a cliché) is that AI agents still struggle to understand your full intent, plan across multiple steps, and act reliably across tools, all without constant hand-holding. Each step (parsing the task, finding options, making tradeoffs, booking) works okay in isolation, but stitching it all together smoothly and safely? That's still brittle and error-prone. Most agents today operate in silos, each locked into its own ecosystem or vendor. As a result, we have a fragmented landscape where agents can't directly talk to each other, limiting their usefulness in complex, cross-system workflows. In April 2025, Google unveiled Agent2Agent (A2A) as an open protocol to break these silos. Backed by an all-star roster of over 50 partners (from Atlassian and Salesforce to LangChain), A2A aims to be the "common language" that lets independent AI agents collaborate seamlessly across applications.
Yet even with the loud launch and 50 big-name partners, a few weeks later A2A remains underappreciated. It hasn't ignited the kind of frenzy one might expect given its pedigree.

The level of popularity on Reddit and the problem of naming
Currently, the trend suggests a slowdown in growth: why such a lukewarm reception for what could be critical infrastructure?

Image Credit: GitHub Star History
In this article, we'll dive deep into A2A (what it is, why it exists, how it works, what people think about it) and explore why its adoption is lagging (and why that might soon change). We'll walk through the technical foundation of A2A, compare it to protocols like Anthropic's MCP, and explain the real-world challenges that come with building multi-agent systems. Along the way, we'll also look at why Google's push for agent interoperability could have much bigger implications, possibly even laying the groundwork for a searchable, internet-scale directory of AI agents. As always, it's a great starting guide, but also useful for those who have already experimented with A2A and want to learn more. Dive in!
What's in today's episode?
Why A2A Isn't Making Waves (Yet)
So, What Is A2A and How Does It Work?
The key components of A2A
How do I actually get started with A2A?
Before A2A: The Fragmented World of Isolated Agents
Is A2A a Silver Bullet for AI Collaboration? + Challenges
Will MCP and A2A Become Competitors?
A2A in Agentic Orchestration and Its Place in the AI Stack (Why do we need another protocol?!)
New Possibilities Unlocked by A2A
Concluding Thoughts: Could Google spin A2A into a public, Google-search-style index of agents?
Resources to dive deeper
Why A2A Isn't Making Waves (Yet)
Google's announcement of A2A checked all the right boxes: a compelling vision of cross-agent collaboration, heavyweight partners, open-source code, and even a complementary relationship with Anthropic's Model Context Protocol (MCP). In theory, the timing is perfect. The AI world is abuzz with "agent" frameworks, but most first-generation "AI agent" stacks have been solo players: single large language models equipped with a toolbox of plugins or APIs. Recently, we saw the tremendous success of MCP, which standardizes how an AI agent accesses tools and context, acting as a kind of "USB-C port for AI". A2A picks up where that leaves off: standardizing how multiple autonomous agents communicate, so they can exchange tasks and results without custom integration glue.
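To make that "exchange tasks without integration glue" concrete, here is a minimal sketch of the two artifacts at the heart of A2A: an Agent Card (how an agent advertises itself for discovery) and a task request (A2A messages ride on JSON-RPC 2.0 over HTTP). The field names follow the initial public spec at the time of writing; the agent URL and skill names are invented for illustration, and method names may evolve as the protocol matures.

```python
import json

# 1) An Agent Card: a small JSON document an agent publishes so other
#    agents can discover what it does and where to reach it. In the
#    spec it is served from a well-known path on the agent's own host
#    (e.g. /.well-known/agent.json). All values below are hypothetical.
agent_card = {
    "name": "TripPlanner",
    "description": "Plans multi-leg trips: flights, hotels, local tips.",
    "url": "https://agents.example.com/trip-planner",  # hypothetical endpoint
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "book-flight", "name": "Book a flight"},
        {"id": "find-hotel", "name": "Find a hotel"},
    ],
}

# 2) A task request: a client agent wraps the user's message in a
#    JSON-RPC 2.0 call and POSTs it to the URL from the Agent Card.
task_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",        # per the initial A2A spec
    "params": {
        "id": "task-001",          # client-chosen task identifier
        "message": {
            "role": "user",
            "parts": [
                {"type": "text",
                 "text": "Direct flight to NYC, leave Friday afternoon."}
            ],
        },
    },
}

# Serializing the body a client would send over HTTP:
print(json.dumps(task_request, indent=2))
```

The point of the sketch: because both sides agree on the card format and the message envelope, a LangChain-built agent and a Salesforce-built agent never need to know each other's internals, only each other's Agent Card.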
So why hasn't A2A taken off overnight? Part of the issue is hype dynamics. When Anthropic announced MCP in late 2024, it initially got a tepid response; only months later did it trend as a game-changer. A2A may be experiencing a similar delay in recognition. Its value is a bit abstract at first glance: enterprise agent interoperability isn't as immediately flashy as, say, a new state-of-the-art model or a chatbot that writes code. Many developers haven't yet felt the pain of multi-agent collaboration because they're still experimenting with single-agent applications. In smaller-scale projects, one might simply orchestrate multiple API calls within a single script or use a framework like LangChain internally, without needing a formal protocol. The real urgency of A2A's solution becomes evident in larger, complex environments (exactly those in big companies), but that story is still filtering out to the broader community.
Another factor is "yet another standard" fatigue. Over the past year, numerous approaches for extending LLMs have popped up: OpenAI's function calling, various plugin systems, custom RPC schemes, not to mention vendor-specific agent APIs. Developers might be asking: do we really need another protocol? Right now, A2A is still so new that there are few public success stories; no killer demo has gone viral to showcase "agents talking to agents" in a jaw-dropping way. Without that spark, A2A remains under the radar, quietly intriguing to those who read the spec but not yet a buzzword in everyday AI developer chats. (Remember, all links for further learning are included at the end of the article.)

So, What Is A2A and How Does It Work?