Tutorial: Multi-Agent Collaboration with LangChain, MCP, and Google A2A Protocol
Artificial Intelligence has moved beyond single, monolithic models trying to handle every request. Today, AI systems are becoming agent-based: collections of specialized agents that can work together, communicate, and call external tools to complete complex tasks. If you’ve ever wished your AI assistant could not only “think” but also collaborate and act, you’re already thinking in the world of multi-agent systems.
In this tutorial, we’ll explore three key technologies that make this possible:
- LangChain, a framework for building LLM-powered agents
- The Model Context Protocol (MCP), an open standard for connecting agents to external tools
- Google’s Agent-to-Agent (A2A) protocol, a standard for agent-to-agent communication
By the end of this guide, you’ll not only understand what each of these technologies does, but you’ll also build a working Python project that:
- Runs a custom MCP tool server
- Connects a LangChain agent to that server so it can call tools
- Lets two specialized agents collaborate on a task over A2A
We’ll go step by step, covering environment setup, installation, code examples, and troubleshooting tips. All you need is some basic Python knowledge, curiosity, and a willingness to experiment.
Let’s dive in and build your first collaborative AI system!
What Are MCP, LangChain, and Agent2Agent?
Before jumping into code, let’s make sure we understand the three building blocks of our project. Each solves a different problem, but together they form the foundation of modern multi-agent AI systems.
LangChain: Building Smarter Agents
LangChain is a popular open-source Python framework for working with Large Language Models (LLMs). Instead of having to write boilerplate code to prompt models, connect APIs, and handle workflows, LangChain gives you ready-made components:
- Model wrappers for providers like OpenAI, Anthropic, and Google
- Prompt templates for structuring what you send to the model
- Agents that can reason about a request and decide which tool to call
- Chains for composing multi-step workflows
With LangChain, you can quickly create an agent that not only “talks” but also acts by calling external tools.
MCP: Giving Agents Tools They Can Trust
The Model Context Protocol (MCP) is an open standard that makes it easy for agents to connect to external tools. Think of it like an “app store” for agents: if an agent speaks MCP, it can plug into a library of services without custom coding.
Instead of hardcoding integrations, you just run an MCP server that exposes functions (like “get_weather” or “add_numbers”), and any MCP-aware agent can use them. This gives agents superpowers while keeping the design modular and reusable.
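To make the idea concrete, here’s a plain-Python sketch (no MCP library involved) of what “exposing functions as tools” means: a registry that a client can list and call by name. The tool names here are illustrative, not part of any real MCP server.

```python
# A plain-Python sketch of the core MCP idea: a registry of named,
# discoverable functions. Real MCP adds schemas, transports, and a protocol.
registry = {}

def tool(fn):
    """Register a function so a client can look it up and call it by name."""
    registry[fn.__name__] = fn
    return fn

@tool
def add_numbers(a: int, b: int) -> int:
    return a + b

@tool
def get_greeting(name: str) -> str:
    return f"Hello, {name}!"

# An MCP-aware client would first list the available tools...
print(sorted(registry))  # ['add_numbers', 'get_greeting']
# ...and then invoke one by name:
print(registry["add_numbers"](2, 3))  # 5
```

The real protocol does far more (typed schemas, transports, sessions), but the mental model is the same: tools are registered once and discovered by any compatible client.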
Agent-to-Agent (A2A): Teaching Agents to Collaborate
While MCP is about connecting agents to tools, the Agent-to-Agent Protocol (A2A) is about connecting agents to each other. Developed by Google, A2A provides a standard way for agents to:
- Advertise their capabilities through a published “agent card”
- Discover other agents and what they can do
- Exchange structured tasks and messages over HTTP
This means you can have specialized agents, like a “Math Agent” and a “Spelling Agent”, work together to solve a problem that neither could do alone.
With these three pieces, you can create a system where:
- LangChain powers each agent’s reasoning
- MCP gives agents modular, reusable tools
- A2A lets the agents delegate work to one another
Next, let’s set up your Python environment and install the necessary packages.
Setting Up Your Environment
Now that we understand what each technology does, let’s roll up our sleeves and get our environment ready. Don’t worry if you’re new to Python projects; we’ll go step by step.
Step 1: Install Python
Make sure you have Python 3.10 or higher installed. You can check this by running:
python --version
If it’s older than 3.10, download the latest version from python.org.
Step 2: Create a Virtual Environment
It’s best practice to isolate your project with a virtual environment so dependencies don’t conflict.
python -m venv .venv
source .venv/bin/activate # On Mac/Linux
.venv\Scripts\activate # On Windows
Now, all packages you install will stay inside this project.
Step 3: Install Required Libraries
We need LangChain, the MCP adapter, and the A2A SDK. Let’s install them all in one go:
pip install --pre -U langchain # LangChain core
pip install -U langchain-openai # OpenAI connector (or langchain-anthropic if you prefer)
pip install langchain-mcp-adapters # MCP adapter for LangChain
pip install mcp # For creating custom MCP servers
pip install a2a-sdk # Google's Agent-to-Agent SDK
This gives you everything you need: LangChain to build agents, MCP for tools, and A2A for collaboration.
Step 4: Set Up API Keys
Most agents need an LLM behind them. If you’re using OpenAI or Anthropic, grab an API key and set it as an environment variable:
export OPENAI_API_KEY="your_api_key_here" # Mac/Linux
setx OPENAI_API_KEY "your_api_key_here" # Windows
If you’re following Google’s A2A examples with Gemini, you’ll also need:
export GOOGLE_API_KEY="your_api_key_here"
Tip: You can keep keys in a .env file and load them automatically using python-dotenv.
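If you’re curious what that tip amounts to, here is a minimal sketch of what python-dotenv does, using only the standard library (python-dotenv itself also handles quoting, comments, and variable interpolation). The key name is a throwaway example:

```python
# A stdlib-only sketch of loading KEY=value pairs from a .env file.
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Read KEY=value lines into os.environ (already-set variables win)."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo with a throwaway key name so nothing real is overwritten
Path(".env").write_text("# demo file\nEXAMPLE_API_KEY=sk-demo-123\n")
load_env()
print(os.environ["EXAMPLE_API_KEY"])  # sk-demo-123
```

In a real project, prefer python-dotenv (`pip install python-dotenv`, then `load_dotenv()`), and keep `.env` out of version control.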
Step 5: Verify the Installation
Run this quick check in Python:
import langchain
import mcp
import a2a_sdk
print("LangChain:", langchain.__version__)
print("MCP installed")
print("A2A installed")
If you see versions and no errors, you’re good to go!
With the environment ready, the next step is to build your first MCP tool, a small, simple function that agents can call. This will give us the foundation to connect everything together.
Building Your First MCP Tool
Now that our environment is ready, let’s create a simple MCP tool server. Think of this as a service that exposes a function your agent can call, just like a mini-API, but built specifically for agents.
Step 1: Create a New Python File
Make a new file called math_server.py. This will be our MCP server.
Step 2: Write the MCP Tool
Here’s a small MCP server that provides an add function to add two numbers:
# math_server.py
from mcp.server.fastmcp import FastMCP

# Create an MCP server instance
mcp = FastMCP("MathServer")

# Expose a function as a tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers and return the result."""
    return a + b

# Start the server
if __name__ == "__main__":
    mcp.run(transport="stdio")
Step 3: Run the MCP Server
In your terminal, run:
python math_server.py
This starts the MCP server. Because it uses the stdio transport, it will simply sit and wait for a client on stdin/stdout; that’s expected. In the next section, a LangChain agent will launch and drive it automatically.
Step 4: How It Works
FastMCP turns an ordinary function into a discoverable tool. The @mcp.tool() decorator registers add with the server, and the function’s type hints and docstring become the tool’s schema, so any MCP client can learn its name, inputs, and purpose. Finally, mcp.run(transport="stdio") serves the tool over standard input/output, which lets a client run the server as a subprocess and talk to it through pipes.
Step 5: Next Step — Connecting to LangChain
Now that we have a working MCP tool, we’ll connect it to a LangChain agent. This will let our agent automatically call the add function when it needs to perform arithmetic.
Connecting LangChain to MCP Tools
We now have a math MCP server running, but it’s not very useful until an agent can actually call it. This is where LangChain comes in. With LangChain, we can create an agent that reasons about a user’s request and automatically calls our MCP tool when needed.
Step 1: Create a New Python File
Make a new file called agent_with_mcp.py.
Step 2: Connect the MCP Client
LangChain provides a MultiServerMCPClient that can connect to one or more MCP servers. Let’s use it to hook into our math server:
# agent_with_mcp.py
import asyncio
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI

async def main():
    # Connect to the MCP math server
    client = MultiServerMCPClient({
        "math": {
            "transport": "stdio",
            "command": "python",
            "args": ["math_server.py"],  # path to your MCP server
        }
    })

    # Get the tools from the MCP server
    tools = await client.get_tools()

    # Create an LLM (using OpenAI here, but you can use Anthropic, Gemini, etc.)
    llm = ChatOpenAI(model="gpt-4o-mini")

    # Build a prompt with the agent_scratchpad slot the agent needs,
    # then wrap the agent in an AgentExecutor so tool calls actually run
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])
    agent = create_openai_functions_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)

    # Test the agent
    result = await executor.ainvoke({"input": "What is 5 plus 7?"})
    print("Agent Response:", result["output"])

if __name__ == "__main__":
    asyncio.run(main())
Step 3: Run the Agent
Because we configured the stdio transport, the MCP client launches math_server.py as a subprocess for you, so there’s no need to start the server separately. Just run:
python agent_with_mcp.py
You should see the agent respond with something like:
Agent Response: 12
Step 4: How It Works
MultiServerMCPClient launches math_server.py as a subprocess (that’s the stdio transport) and asks it which tools it offers. get_tools() converts each MCP tool into a LangChain tool. When you ask “What is 5 plus 7?”, the LLM decides to call add with a=5 and b=7, the call is routed to the MCP server, and the returned 12 is woven into the agent’s final answer.
Now that our agent can call external tools, the next step is to let agents talk to each other. This is where the Agent-to-Agent (A2A) Protocol comes into play.
Building an A2A-Enabled Agent
So far, we’ve built a LangChain agent and connected it to an MCP tool. That’s powerful, but it’s still a single agent. What if we want multiple agents to discover each other and share tasks? That’s where the Agent-to-Agent (A2A) Protocol comes in.
With A2A, each agent exposes a simple JSON “card” describing its capabilities and runs a lightweight server that other agents can talk to. Agents can then send structured tasks to one another, just like humans passing around assignments.
Step 1: Create an Agent Card
Each agent needs a .well-known/agent.json file that describes its capabilities. Let’s make one for our math agent:
{
  "agentName": "CalcAgent",
  "version": "1.0",
  "description": "Performs arithmetic calculations",
  "protocol": "A2A",
  "capabilities": ["math.add"]
}
Save this in a folder (for example, calc_agent/.well-known/agent.json).
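If you’d rather generate the card from code (handy once you have several agents), a small sketch like this works; the paths mirror the folder layout above:

```python
# Sketch: writing the agent card shown above from Python instead of by hand.
import json
from pathlib import Path

card = {
    "agentName": "CalcAgent",
    "version": "1.0",
    "description": "Performs arithmetic calculations",
    "protocol": "A2A",
    "capabilities": ["math.add"],
}

well_known = Path("calc_agent") / ".well-known"
well_known.mkdir(parents=True, exist_ok=True)
(well_known / "agent.json").write_text(json.dumps(card, indent=2))
print("wrote", well_known / "agent.json")
```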
Step 2: Build an A2A Server
Next, we’ll use the a2a-sdk to serve this agent. Create a new file called calc_agent.py:
# calc_agent.py
import asyncio
from a2a_sdk.server import A2AServer
from a2a_sdk.types import Task

# Define how the agent handles tasks
async def handle_task(task: Task):
    user_message = task["messages"][0]["parts"][0]["text"]
    if "add" in user_message.lower():  # lower() so "Add 5 and 7" matches too
        # Very simple parser for "Add X and Y"
        numbers = [int(s) for s in user_message.split() if s.isdigit()]
        result = sum(numbers)
        return f"The sum is {result}"
    return "Sorry, I only know how to add numbers."

async def main():
    server = A2AServer(agent_card_path="./.well-known/agent.json", task_handler=handle_task)
    await server.start()

if __name__ == "__main__":
    asyncio.run(main())
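The parsing inside handle_task is deliberately naive (it ignores negative numbers and anything with punctuation), and it’s worth sanity-checking in isolation before wiring it into a server:

```python
# Standalone check of the digit-extraction logic used in handle_task above.
def parse_and_add(message: str) -> int:
    """Sum every whitespace-separated token that is purely digits."""
    numbers = [int(s) for s in message.split() if s.isdigit()]
    return sum(numbers)

print(parse_and_add("Add 5 and 7"))    # 12
print(parse_and_add("Add 10 and 32"))  # 42
```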
Step 3: Run the Agent
Start your math agent by running:
python calc_agent.py
It will now listen for A2A requests on its configured endpoint (the examples in this tutorial assume http://localhost:8000).
Step 4: Send a Task from Another Agent
To test collaboration, create a second agent (e.g., client_agent.py) that sends tasks to CalcAgent:
# client_agent.py
import asyncio
from a2a_sdk.client import A2AClient

async def main():
    client = A2AClient(remote_agent_url="http://localhost:8000")  # adjust port if needed
    task = {
        "task": {
            "taskId": "task1",
            "state": "submitted",
            "messages": [
                {"role": "user", "parts": [{"text": "Add 5 and 7"}]}
            ]
        }
    }
    response = await client.send_task(task)
    print("Response from CalcAgent:", response)

if __name__ == "__main__":
    asyncio.run(main())
Run this file in a second terminal. If everything is working, you’ll see:
Response from CalcAgent: The sum is 12
Step 5: Why This Matters
Now that we have agents that can talk to each other, the final step is to combine everything: a LangChain agent that uses MCP tools internally and can also collaborate with other agents via A2A.
Bringing It All Together: Multi-Agent Collaboration
We now have all the pieces:
- A math MCP server (math_server.py) that exposes an add tool
- A LangChain agent that can call MCP tools
- A2A servers and clients that let agents exchange tasks
Let’s combine them into a mini multi-agent system.
Scenario: Math + Spelling Collaboration
We’ll build two agents:
- CalcAgent: a LangChain agent that uses the MCP math server and accepts tasks over A2A
- SpellingAgent: an LLM-backed agent that converts numbers into English words
The two agents will communicate via A2A: an orchestrator sends “Add 12 and 7” to CalcAgent, takes the numeric result, forwards it to SpellingAgent to be spelled out, and prints the combined answer.
Step 1: MCP-Powered CalcAgent
We already have math_server.py from earlier. Now, let’s build an A2A-enabled CalcAgent that uses this server.
# calc_agent.py
import asyncio
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from a2a_sdk.server import A2AServer
from a2a_sdk.types import Task

async def handle_task(task: Task):
    user_input = task["messages"][0]["parts"][0]["text"]

    # Connect to MCP server (it is spawned as a subprocess via stdio)
    client = MultiServerMCPClient({
        "math": {
            "transport": "stdio",
            "command": "python",
            "args": ["math_server.py"],
        }
    })
    tools = await client.get_tools()

    # Create agent
    llm = ChatOpenAI(model="gpt-4o-mini")
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])
    agent = create_openai_functions_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)

    # Ask the agent
    result = await executor.ainvoke({"input": user_input})
    return result["output"]

async def main():
    server = A2AServer(
        agent_card_path="./.well-known/agent.json",
        task_handler=handle_task
    )
    await server.start()

if __name__ == "__main__":
    asyncio.run(main())
Step 2: SpellingAgent
The SpellingAgent doesn’t need MCP; it just uses LangChain + an LLM to convert numbers to words.
# spelling_agent.py
import asyncio
from langchain_openai import ChatOpenAI
from a2a_sdk.server import A2AServer
from a2a_sdk.types import Task

llm = ChatOpenAI(model="gpt-4o-mini")

async def handle_task(task: Task):
    user_input = task["messages"][0]["parts"][0]["text"]
    response = await llm.ainvoke(f"Convert {user_input} into English words.")
    return response.content

async def main():
    server = A2AServer(
        agent_card_path="./.well-known/agent.json",
        task_handler=handle_task
    )
    await server.start()

if __name__ == "__main__":
    asyncio.run(main())
Step 3: Orchestrating the Workflow
Finally, let’s write a client agent that coordinates the process.
# orchestrator.py
import asyncio
from a2a_sdk.client import A2AClient

async def main():
    # First ask CalcAgent
    calc_client = A2AClient("http://localhost:8000")  # CalcAgent URL
    calc_task = {
        "task": {
            "taskId": "task1",
            "state": "submitted",
            "messages": [{"role": "user", "parts": [{"text": "Add 12 and 7"}]}]
        }
    }
    calc_response = await calc_client.send_task(calc_task)
    number_result = calc_response["result"]

    # Now ask SpellingAgent
    spelling_client = A2AClient("http://localhost:8005")  # SpellingAgent URL
    spell_task = {
        "task": {
            "taskId": "task2",
            "state": "submitted",
            "messages": [{"role": "user", "parts": [{"text": number_result}]}]
        }
    }
    spell_response = await spelling_client.send_task(spell_task)
    print("Final Answer:", number_result, "-", spell_response["result"])

if __name__ == "__main__":
    asyncio.run(main())
Step 4: Run the System
Start each agent in its own terminal, then run the orchestrator:
python calc_agent.py # terminal 1 (it launches math_server.py itself via stdio)
python spelling_agent.py # terminal 2
python orchestrator.py # terminal 3
You should see:
Final Answer: 19 - nineteen
Why This Is Powerful
This is the foundation of building complex, modular AI systems where specialized agents handle their part of the job and then pass results along.
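Stripped of the networking and the LLM, the orchestration pattern above reduces to functions piping results into one another. This sketch stubs out both agents (the spelling lookup stands in for the LLM call) just to show the data flow:

```python
# The orchestrator pattern in miniature: each "agent" is a function,
# and the orchestrator feeds one result into the next.
def calc_agent(message: str) -> str:
    numbers = [int(s) for s in message.split() if s.isdigit()]
    return str(sum(numbers))

def spelling_agent(number_text: str) -> str:
    words = {"12": "twelve", "19": "nineteen"}  # stand-in for the LLM call
    return words.get(number_text, number_text)

result = calc_agent("Add 12 and 7")
print("Final Answer:", result, "-", spelling_agent(result))  # Final Answer: 19 - nineteen
```

Everything A2A adds on top of this (agent cards, task envelopes, HTTP transport) exists so the "functions" can live in different processes, machines, or organizations.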
Troubleshooting & Tips
As you experiment with LangChain, MCP, and A2A, it’s normal to run into small hiccups. Here are some common issues and how to fix them.
Environment & Dependencies
- ModuleNotFoundError usually means your virtual environment isn’t activated; activate it and reinstall the packages inside it.
- For version conflicts, upgrade pip first (python -m pip install -U pip) and reinstall in a fresh venv.
MCP Issues
- If the agent can’t find your tools, check the "args" path to math_server.py; it is resolved relative to where you launched the client.
- A stdio server that seems to “hang” when run by hand is normal: it’s waiting for a client on stdin.
LangChain Issues
- Authentication errors almost always mean the API key environment variable isn’t set in the shell you’re running from.
- If the agent answers without calling a tool, print the result of get_tools() to confirm your tools were actually loaded.
A2A Issues
- “Connection refused” means the target agent isn’t running or is on a different port; double-check the URL you pass to A2AClient.
- Make sure each agent’s .well-known/agent.json is in the location its server expects.
Debugging Tips
- Print (or log) the raw task payloads on both sides; most failures turn out to be malformed message structures.
- Test each layer in isolation: the MCP server first, then the LangChain agent, then the A2A plumbing.
Congratulations, you’ve just built a working multi-agent AI system using:
- LangChain to build agents that can reason and act
- MCP to give those agents modular, reusable tools
- Google’s A2A protocol to let them collaborate
This beginner project may be simple, but it introduces the core patterns of modern AI engineering: modularity, interoperability, and multi-agent workflows. From here, you can expand your agents with new MCP tools, add more specialized collaborators, or even deploy them across different environments.
The future of AI isn’t just one giant model: it’s teams of agents working together, and now you have the skills to start building them.
Next Steps & Further Learning
You’ve built your first multi-agent system, congratulations! But this is just the beginning. There are many ways you can extend what you’ve learned and explore the ecosystem further.
Add More MCP Tools
Right now, your agent only knows how to add numbers. Why not expand? You could:
- Add more math tools (subtract, multiply, divide) to math_server.py
- Write a new MCP server that wraps an external API, such as weather or currency data
- Expose file, database, or search utilities as tools
Every new MCP server you add gives your agents new skills without changing their core logic.
Experiment with Multi-Agent Patterns
You’ve seen two agents collaborate, but multi-agent systems can grow much larger. Try experimenting with:
- An orchestrator that routes tasks to several specialist agents
- Chains of agents, where one agent’s output becomes the next agent’s input
- Agents backed by different LLM providers cooperating through the same protocol
The A2A protocol makes it possible to stitch together entire ecosystems of agents.
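One pattern worth trying early is fan-out: an orchestrator querying several agents concurrently instead of one at a time. This sketch stubs the A2A network calls with a short sleep, purely to show the asyncio shape of the pattern:

```python
# Fan-out orchestration sketch: query several agents concurrently.
import asyncio

async def ask_agent(name: str, task: str) -> str:
    await asyncio.sleep(0.01)  # stands in for a real A2A network call
    return f"{name} handled: {task}"

async def fan_out() -> list[str]:
    agents = ["CalcAgent", "SpellingAgent", "WeatherAgent"]
    # gather runs all requests concurrently and preserves order
    return await asyncio.gather(*(ask_agent(a, "ping") for a in agents))

results = asyncio.run(fan_out())
print(results)
```

With real A2A clients, each ask_agent would be a send_task call, and the total latency is roughly that of the slowest agent rather than the sum of all of them.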
Explore Deployment Options
Running locally is great for learning, but you can go further by deploying agents so they run 24/7:
- Package each agent as a Docker container so it runs anywhere
- Host agents on a cloud VM or a serverless platform
- Add authentication and rate limiting before exposing an agent publicly
Learn More from the Communities
Here are a few good resources to deepen your knowledge:
- The official LangChain documentation
- The Model Context Protocol specification and example servers at modelcontextprotocol.io
- Google’s A2A protocol documentation and sample code on GitHub
Dream Bigger
With these building blocks, you could create:
- A research assistant team: one agent searches, another summarizes, a third fact-checks
- A customer-support front agent that hands specialized questions to expert agents
- A data pipeline where agents collect, clean, and report on information
Multi-agent AI isn’t science fiction: it’s already here, and you now have the foundation to start building. The future belongs to teams of agents, and you’ve just taken the first step toward becoming their architect.