I’ve been working with AI agents since the beginning of the year. It hasn’t been an easy journey — there was a ton of learning and experimentation along the way.
Throughout this process, I tried different libraries and frameworks. For example:
LangGraph: It's a great tool for building low-level flows. Getting a simple flow working is easy, but things quickly get complex when you start connecting more than one node. Another issue I found is the documentation — it’s confusing, as there are several ways to define an agent.
Agno: I really liked this one. The API is simple and easy to understand, and the documentation is great. However, when I tried to set up a multi-agent architecture, I ran into performance issues.
But then I tried ADK (Agent Development Kit) — and it solved most of my problems. It has great documentation, it’s backed by Google, and the team seems very active, delivering fixes weekly. I was able to understand everything very quickly.
Movie Finder with ADK
ADK works with Python, and there’s also a recent version available in Java. For this example, we’ll stick with Python. Let’s get started:
Install the package:

pip install google-adk
Set up your environment keys — in this case, we’ll use Gemini. You can generate the API key in Google AI Studio:
GOOGLE_GENAI_USE_VERTEXAI=FALSE
GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_API_KEY_HERE
Now, create the folder structure for your agent:
parent_folder/
    movie_finder_agent/
        __init__.py
        agent.py
        .env
In __init__.py, make sure to include from . import agent
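If you prefer the terminal, the structure above can be scaffolded in one go (folder names are just the ones from this example):

```shell
# Create the agent package and the files ADK expects
mkdir -p parent_folder/movie_finder_agent
echo "from . import agent" > parent_folder/movie_finder_agent/__init__.py
touch parent_folder/movie_finder_agent/agent.py
touch parent_folder/movie_finder_agent/.env
```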
Now let’s work on the tool our agent will use:
MOVIES = [
    {"title": "The Matrix", "genre": "Sci-Fi", "year": 1999},
    {"title": "Inception", "genre": "Sci-Fi", "year": 2010},
    {"title": "Titanic", "genre": "Romance", "year": 1997},
    {"title": "The Godfather", "genre": "Crime", "year": 1972},
    {"title": "Interstellar", "genre": "Sci-Fi", "year": 2014},
    {"title": "Pulp Fiction", "genre": "Crime", "year": 1994},
]

def find_movies(genre: str, decade: int) -> str:
    """
    Find movies that match the specified genre and decade.

    Args:
        genre (str): The genre to filter movies by (e.g. "Sci-Fi", "Romance", "Crime")
        decade (int): The start year of the decade to filter movies by (e.g. 1990, 2000)

    Returns:
        str: A newline-separated string of matching movie titles, or a message if no matches found
    """
    results = []
    year_range = range(decade, decade + 10)
    for movie in MOVIES:
        if movie["genre"] == genre and movie["year"] in year_range:
            results.append(movie["title"])
    if not results:
        return "No matching movies found."
    return "\n".join(results)
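Since the tool is just a plain function, you can sanity-check it before wiring it into the agent. Here is a quick test run (the function and data are repeated from agent.py so the snippet is self-contained):

```python
# Self-contained copy of the tool from agent.py, for a quick check.
MOVIES = [
    {"title": "The Matrix", "genre": "Sci-Fi", "year": 1999},
    {"title": "Inception", "genre": "Sci-Fi", "year": 2010},
    {"title": "Titanic", "genre": "Romance", "year": 1997},
    {"title": "The Godfather", "genre": "Crime", "year": 1972},
    {"title": "Interstellar", "genre": "Sci-Fi", "year": 2014},
    {"title": "Pulp Fiction", "genre": "Crime", "year": 1994},
]

def find_movies(genre: str, decade: int) -> str:
    results = [
        m["title"] for m in MOVIES
        if m["genre"] == genre and decade <= m["year"] < decade + 10
    ]
    return "\n".join(results) if results else "No matching movies found."

print(find_movies("Sci-Fi", 1990))   # The Matrix
print(find_movies("Romance", 1980))  # No matching movies found.
```

This is the same call the agent will make on your behalf once it has extracted the genre and decade from the user's question.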
As you can see, a tool is just a regular function the agent can call — nothing new here. In this case, the agent’s job is to figure out the parameters the tool needs in order to run correctly.
Here’s how the agent is defined:
from google.adk.agents import Agent

root_agent = Agent(
    name="root_agent",
    model="gemini-2.5-flash-preview-04-17",
    description="Answers questions from the user",
    instruction="You are a helpful assistant that can answer questions about movies.",
    tools=[find_movies],
)
It’s very simple to understand, but let me break down the properties:
- name – the agent’s name
- model – the model version; if you want to use a non-Google model, you can use LiteLLM
- description – a short summary of what the agent does
- instruction – your agent’s prompt and behavior guide
- tools – the list of tools your agent can call
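As a quick sketch of the LiteLLM option mentioned above: ADK ships a LiteLlm wrapper that lets you pass a model id from another provider. The snippet below assumes you have installed litellm alongside google-adk and set the provider's API key (the "openai/gpt-4o" id is just an illustration):

```python
# Sketch: using a non-Google model through LiteLLM.
# Requires: pip install google-adk litellm, plus e.g. OPENAI_API_KEY set.
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

root_agent = Agent(
    name="root_agent",
    model=LiteLlm(model="openai/gpt-4o"),  # any LiteLLM-supported model id
    description="Answers questions from the user",
    instruction="You are a helpful assistant that can answer questions about movies.",
)
```

Everything else about the agent definition stays the same; only the model field changes.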
Now let’s see it in action. Run:

adk web

This opens ADK’s built-in testing dashboard, where you can chat with your agent. I asked: “I’m looking for a sci-fi movie in the 90s.”
It correctly used the tool and generated the final response.
If we inspect the tool call, we can see the correct parameters were passed:
And it returned the result as expected.
In under 5 minutes, you created your first agent — with minimal effort and simple prompts!
Agents Are Not Just LLMs
It’s important to understand a few key concepts to avoid confusion or wrong expectations when working with agents:
- LLM ≠ Agent: An LLM is a pretrained model built with massive datasets, high compute costs, and long training cycles. An agent is a runtime layer built on top of an LLM that adds tools, memory, and logic.
- Agents can think, remember, and act: They can analyze a question, call a tool to get an answer, and store that result for future use, like a lightweight brain with short-term memory.
- Agents are slower than chat apps: Don’t expect the speed you see in ChatGPT or Gemini; each tool call adds another round trip to the model.
⸻
If you want a part 2 of this tutorial, leave a like!