In the past three months, two powerful AI agent development frameworks have been released:
- Google Agent Development Kit (ADK)
- AWS Strands Agents
In the previous post, we introduced AWS Strands Agents and built an app using AWS Strands Agents, Nova, FastAPI, and a Streamlit UI.
In this post, we’ll dive into the Google Agent Development Kit (ADK)
and show how to create agent-based applications using local and remote MCP
(Model Context Protocol) tools alongside Gemini 2.5, FastAPI, and a Streamlit
interface. Whether you're interested in understanding how AI agents function or ready to build your own, this guide is a great place to begin.
Table of Contents
- What is Google Agent Development Kit?
- Motivation: Why Use an Agent Framework?
- Google ADK Agent Event Loop
- What is Model Context Protocol (MCP)?
- Agent App with Local & Remote MCP using Google ADK, Gemini, FastAPI and Streamlit
- Conclusion
- References
What is Google Agent Development Kit?
- Agent Development Kit (ADK) is an open-source framework for developing AI agents that can run anywhere:
  - VSCode, Terminal
  - Docker Container
  - Google Cloud Run
  - Google Kubernetes Engine
- Really good documentation for learning about agents, tools, workflows, sessions, memory, runners, multi-agent systems, etc.
- Quickstart: https://google.github.io/adk-docs/get-started/quickstart/
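To give a quick feel for the API before the full apps below, here is a minimal agent definition in the spirit of the quickstart (a sketch; the weather tool and all names are illustrative, not taken from the quickstart itself):

from google.adk.agents.llm_agent import LlmAgent

def get_weather(city: str) -> dict:
    """Toy tool: returns a canned weather report for a city."""
    return {"city": city, "forecast": "sunny, 25 degrees"}

# ADK wraps plain Python functions like get_weather as callable tools
root_agent = LlmAgent(
    name="weather_assistant",
    model="gemini-2.0-flash",
    instruction="Answer weather questions using the get_weather tool.",
    tools=[get_weather],
)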
Motivation: Why Use an Agent Framework?
- Organized workflows: Makes decision-making, tool usage, and response generation easier through a structured process.
- Contextual memory: Maintains session history to enable more intelligent and personalized interactions.
- Collaborative agents: Coordinates multiple specialized agents to handle complex problems effectively.
- Seamless tool integration: Supports connecting tools, APIs, and functions that agents can invoke appropriately.
- Model versatility: Enables use and switching between various language models (e.g., GPT, Claude, Nova).
- Ready for deployment: Includes built-in capabilities for logging, monitoring, and robust error management.
Google ADK Agent Event Loop
At its heart, the ADK Runtime operates on an Event Loop. This loop facilitates back-and-forth communication between the Runner component and your defined "Execution Logic" (which includes your Agents, the LLM calls they make, Callbacks, and Tools).
In simple terms:
- The Runner receives a user query and asks the main Agent to start processing.
- The Agent (and its associated logic) runs until it has something to report (like a response, a request to use a tool, or a state change); it then yields or emits an Event.
- The Runner receives this Event, processes any associated actions (like saving state changes via Services), and forwards the event onwards (e.g., to the user interface).
- Only after the Runner has processed the event does the Agent's logic resume from where it paused, now potentially seeing the effects of the changes committed by the Runner.
- This cycle repeats until the agent has no more events to yield for the current user query.
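To make the yield-and-resume flow concrete, here is a toy sketch in plain Python (this is not ADK's internal code; the Event class and all names are simplified placeholders):

import asyncio
from dataclasses import dataclass

@dataclass
class Event:
    author: str
    payload: str

async def agent_logic(query: str):
    # toy "Execution Logic": yields an Event, then pauses until the
    # runner's loop body has finished processing it
    yield Event("agent", f"tool_request: search({query!r})")
    # execution resumes here only after the runner handled the first event
    yield Event("agent", "final_response: here is what I found...")

async def run(query: str):
    # toy "Runner": commits each event (e.g., persists state via services)
    # and forwards it (e.g., to the UI) before the agent resumes
    async for event in agent_logic(query):
        print(f"[runner] committed event from {event.author}: {event.payload}")

asyncio.run(run("weather in Berlin"))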
What is Model Context Protocol (MCP)?
MCP is an open standard designed to standardize how Large Language Models (LLMs) like Gemini and Claude communicate with external applications, data sources, and tools.
TL;DR: MCP is middleware that connects LLMs/agents to internet apps and their APIs.
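To make this concrete, here is a minimal sketch using the official MCP Python SDK (the `mcp` package; the filesystem server and path are just examples) that launches an MCP server over stdio and lists its tools:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# launch the reference filesystem MCP server via npx (example server/path)
server_params = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())

ADK's MCPToolset, used in the apps below, wraps this kind of connection for you.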
Agent App with Local & Remote MCP using Google ADK, Gemini, FastAPI and Streamlit
Two small sample projects on GitHub:
- https://github.com/omerbsezer/Fast-LLM-Agent-MCP/tree/main/agents/google_adk/02-agent-local-mcp-fileOps-streamlit
- https://github.com/omerbsezer/Fast-LLM-Agent-MCP/tree/main/agents/google_adk/03-agent-remote-mcp-google-search-serper
Local MCP Tool - FileOps
The FileOps MCP server runs on Linux (WSL). Please install Node.js, npm, and npx on your system to run the npx-based MCP tool.
Remote MCP Tool - Serper
Logging into the Serper app and a Serper API key are required to run the Serper MCP tool. Also, please install Node.js, npm, and npx on your system to run the npx-based MCP tool.
Installing Dependencies & Accessing the Gemini Model
- Go to: https://aistudio.google.com/
- Get an API key to access Gemini
- Add a .env file with your Gemini and Serper API keys
# .env
SERPER_API_KEY=PASTE_YOUR_ACTUAL_API_KEY_HERE
GOOGLE_GENAI_USE_VERTEXAI=FALSE
GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_API_KEY_HERE
- Please install Node.js, npm, and npx on your system to run the npx-based MCP tools.
- Please install the Python requirements (example install commands follow the list):
fastapi
uvicorn
google-adk
google-generativeai
pydantic
streamlit
python-dotenv
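One way to set everything up (assuming the packages above are saved in a requirements.txt, and using apt for Node.js on Ubuntu/WSL; a version manager such as nvm works too):
sudo apt update
sudo apt install -y nodejs npm   # npx ships with npm
pip install -r requirements.txt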
Frontend - Streamlit UI
# app.py
import streamlit as st
import requests

st.set_page_config(page_title="Agent Chat", layout="centered")

if "messages" not in st.session_state:
    st.session_state.messages = []

st.title("Agent MCP Tool")

for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

user_query = st.chat_input("Ask for tool commands or anything...")

# send and display user + assistant messages
if user_query:
    st.chat_message("user").markdown(user_query)
    st.session_state.messages.append({"role": "user", "content": user_query})
    try:
        response = requests.post(
            "http://localhost:8000/ask",
            json={"query": user_query}
        )
        response.raise_for_status()
        agent_reply = response.json().get("response", "No response.")
    except Exception as e:
        agent_reply = f"Error: {str(e)}"
    st.chat_message("assistant").markdown(agent_reply)
    st.session_state.messages.append({"role": "assistant", "content": agent_reply})
Backend Local MCP
# agent.py
from fastapi import FastAPI, Request
from pydantic import BaseModel
from dotenv import load_dotenv
from google.genai import types
from google.adk.agents.llm_agent import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

load_dotenv()

MODEL = "gemini-2.5-flash-preview-04-17"
MCP_TOOL_PATH = "/home/omer/mcp-test"
# MODEL = "gemini-2.5-pro-preview-03-25"
# MODEL = "gemini-2.0-flash"
# MODEL = "gemini-2.0-flash-lite"

app = FastAPI()

class QueryRequest(BaseModel):
    query: str

# MCPToolset.from_server(...) is async (it performs I/O), so the whole call chain is async
async def get_agent_async():
    """Creates an ADK Agent equipped with tools from the MCP Server."""
    tools, exit_stack = await MCPToolset.from_server(
        connection_params=StdioServerParameters(
            command='npx',
            args=["-y", "@modelcontextprotocol/server-filesystem", MCP_TOOL_PATH],
        )
    )
    print(f"Fetched {len(tools)} tools from MCP server.")
    root_agent = LlmAgent(
        model=MODEL,
        name='filesystem_assistant',
        instruction='Help the user interact with the local filesystem using the available tools.\n'
                    '- For all other questions, respond using your own knowledge.',
        tools=tools,
    )
    return root_agent, exit_stack

async def handle_query(query: str):
    session_service = InMemorySessionService()
    artifacts_service = InMemoryArtifactService()
    session = session_service.create_session(
        state={}, app_name='mcp_filesystem_app', user_id='user_fs'
    )
    content = types.Content(role='user', parts=[types.Part(text=query)])
    root_agent, exit_stack = await get_agent_async()
    runner = Runner(
        app_name='mcp_filesystem_app',
        agent=root_agent,
        artifact_service=artifacts_service,
        session_service=session_service,
    )
    events_async = runner.run_async(
        session_id=session.id, user_id=session.user_id, new_message=content
    )
    # collect text parts, tool calls, and tool responses from the event stream
    result = []
    async for event in events_async:
        if hasattr(event, "content") and event.content:
            for part in event.content.parts:
                if hasattr(part, "text") and part.text:
                    result.append(part.text)
        elif hasattr(event, "tool_name") and hasattr(event, "parameters"):
            result.append(f"Tool called: {event.tool_name}({event.parameters})")
        elif hasattr(event, "response"):
            result.append(f"Tool response: {event.response}")
        else:
            result.append(f"Unrecognized event: {event}")
    await exit_stack.aclose()  # close the MCP server connection
    final_output = "\n".join(result).strip()
    return final_output if final_output else "No response from model or tools."

@app.post("/ask")
async def ask(request: Request):
    data = await request.json()
    query = data.get("query", "")
    response = await handle_query(query)
    return {"response": response}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
Local MCP Run & Demo
Run frontend (app.py):
streamlit run app.py
or
python -m streamlit run app.py
Run backend (agent.py):
uvicorn agent:app --host 0.0.0.0 --port 8000
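You can also smoke-test the backend without the UI by posting a query directly (example query; adjust it to files under your MCP_TOOL_PATH):
curl -X POST http://localhost:8000/ask -H "Content-Type: application/json" -d '{"query": "List the files in the directory"}'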
Demo: GIF on GitHub
Backend Remote MCP
# agent.py
import os
from fastapi import FastAPI, Request
from pydantic import BaseModel
from dotenv import load_dotenv
from google.genai import types
from google.adk.agents.llm_agent import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

load_dotenv()

MODEL = "gemini-2.5-flash-preview-04-17"
# MODEL = "gemini-2.5-pro-preview-03-25"
# MODEL = "gemini-2.0-flash"
# MODEL = "gemini-2.0-flash-lite"

app = FastAPI()

class QueryRequest(BaseModel):
    query: str

# MCPToolset.from_server(...) is async (it performs I/O), so the whole call chain is async
async def get_agent_async():
    """Creates an ADK Agent equipped with tools from the MCP Server."""
    serper_api_key = os.getenv("SERPER_API_KEY")
    tools, exit_stack = await MCPToolset.from_server(
        connection_params=StdioServerParameters(
            command='npx',
            args=["-y", "serper-search-scrape-mcp-server"],
            env={"SERPER_API_KEY": serper_api_key}
        )
    )
    print(f"Fetched {len(tools)} tools from MCP server.")
    root_agent = LlmAgent(
        model=MODEL,
        name='search_assistant',
        instruction="""You are a research assistant that searches via Serper and returns results as link, title, and content.
You MUST:
- Use **google_search** to perform web searches when the user asks for general information, how-to guides, comparisons, news, or any content that could be sourced from the internet. This tool:
  - Performs web searches using user queries to find organic results, FAQs, related searches, and knowledge graph entries.
  - Handles a wide range of search intents: informational, comparative, educational, technical, current events, etc.
  - Always returns useful summaries along with links to the most relevant pages.
  **Tool Parameters**
  - `q`: Required. The search query string (e.g., "how Kubernetes works", "latest AI trends 2025"). Retrieve it from the prompt.
  - `gl`: Required. Geographic region code in ISO 3166-1 alpha-2 format (e.g., "us", "de", "gb"). Use "us".
  - `hl`: Required. Language code in ISO 639-1 format (e.g., "en", "fr", "es"). Use "en".
  - `location`: Required. Location for search results (e.g., 'SoHo, New York, United States', 'California, United States'). Use "United States".
  Always summarize the top results clearly and include direct URLs for reference.
- Use **scrape** to extract content from a specific webpage when:
  - The user provides a URL and asks for content, summaries, or metadata
  - A relevant link was previously found via **google_search** and needs to be explored further
- Use the tools wisely to assist users. From the tool call results, retrieve only the relevant content.
- Parse the JSON response carefully and extract **relevant fields**. Give the search results with TITLE, LINK, and CONTENT or SNIPPET.
- For all other questions, respond using your own knowledge.""",
        tools=tools,
    )
    return root_agent, exit_stack

async def handle_query(query: str):
    session_service = InMemorySessionService()
    artifacts_service = InMemoryArtifactService()
    session = session_service.create_session(
        state={}, app_name='mcp_search_app', user_id='user_search'
    )
    content = types.Content(role='user', parts=[types.Part(text=query)])
    root_agent, exit_stack = await get_agent_async()
    runner = Runner(
        app_name='mcp_search_app',
        agent=root_agent,
        artifact_service=artifacts_service,
        session_service=session_service,
    )
    events_async = runner.run_async(
        session_id=session.id, user_id=session.user_id, new_message=content
    )
    # collect text parts, tool calls, and tool responses from the event stream
    result = []
    async for event in events_async:
        if hasattr(event, "content") and event.content:
            for part in event.content.parts:
                if hasattr(part, "text") and part.text:
                    result.append(part.text)
        elif hasattr(event, "tool_name") and hasattr(event, "parameters"):
            result.append(f"Tool called: {event.tool_name}({event.parameters})")
        elif hasattr(event, "response"):
            result.append(f"Tool response: {event.response}")
        else:
            result.append(f"Unrecognized event: {event}")
    await exit_stack.aclose()  # close the MCP server connection
    final_output = "\n".join(result).strip()
    return final_output if final_output else "No response from model or tools."

@app.post("/ask")
async def ask(request: Request):
    data = await request.json()
    query = data.get("query", "")
    response = await handle_query(query)
    return {"response": response}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
Remote MCP Run & Demo
Run frontend (app.py):
streamlit run app.py
or
python -m streamlit run app.py
Run backend (agent.py):
uvicorn agent:app --host 0.0.0.0 --port 8000
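As with the local setup, you can test the backend directly before starting the UI (example query):
curl -X POST http://localhost:8000/ask -H "Content-Type: application/json" -d '{"query": "latest AI trends 2025"}'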
Demo: GIF on GitHub
Conclusion
In this post, we covered:
- how to access Google Gemini 2.5,
- how to implement a Google ADK agent with MCP tools using Gemini, FastAPI, and a Streamlit UI.
If you found the tutorial interesting, I’d love to hear your thoughts in the blog post comments. Feel free to share your reactions or leave a comment. I truly value your input and engagement 😉
For other posts 👉 https://dev.to/omerberatsezer 🧐
References
- https://google.github.io/adk-docs/
- https://github.com/omerbsezer/Fast-LLM-Agent-MCP/
- https://www.anthropic.com/news/model-context-protocol
- https://github.com/modelcontextprotocol
Your comments 🤔
- Which tools are you using to develop AI agents (e.g., Google ADK, AWS Strands, CrewAI, LangChain, etc.)? Please share your experience and interests in the comments.
- What do you think about Google ADK?
- Are you interested in developing an AI agent app?
Top comments (7)
Really appreciate the step-by-step demo and the event loop breakdown, that structure makes it much easier to reason about agent flows compared to some other frameworks I've tried (like Langchain). Have you run into any cases where ADK’s loop approach doesn't fit well or adds friction?
I haven't encountered any problems so far; after I work through more examples, I can give a better answer. I like the Google ADK and AWS Strands libraries. LangChain's RAG, retrieval, and loader applications are good, but I don't think LangChain's agent features are sufficient.
I explored multiple AI agent frameworks (LangChain, CrewAI, AWS Strands, and Google ADK) by testing them across diverse use cases such as multi-agent collaboration, integration with MCP tools, support for various language models, and workflow orchestration. Among these, Google ADK and AWS Strands make it easiest to implement agent apps. Both provide comparable features and integrate seamlessly with open-source tools like LiteLLM (for multi-model support) and MCP components (such as StdioServerParameters).
Interesting to read
Thanks :)
This was a really helpful and clear article. As someone who's just getting started, it made it much easier to understand how to build AI agents with Google ADK. Thank you so much for the great work! 🙏
Thanks a lot for your kind words. I'm happy to hear the post/tutorial helped. Getting started with AI agents and Google ADK can feel overwhelming at first, so it's great to know the post made it more approachable.