Recently, AWS announced the release of Strands Agents, an open source SDK that takes a model-driven approach to building and running AI agents in just a few lines of code.
A while ago I built a city explorer using Knowledge Bases for Amazon Bedrock, so to experiment with Strands Agents I rebuilt the same application, this time with the new SDK.
Here's the process I followed:
I already have Python 3.13 installed on my local machine; if you don't have Python 3.10 or higher installed, be sure to download and install it.
I have an AWS account, so I enabled model access for Claude 3.7 Sonnet in Amazon Bedrock in the same region that I specify in my code below. Claude 3.7 Sonnet is the default model used by Strands Agents.
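Strands pulls in boto3 as a dependency, so once the environment is set up in the next step you can optionally confirm that Claude 3.7 Sonnet is offered in your chosen region with a quick boto3 call. This is a plain Bedrock API check rather than anything Strands-specific, and it only shows what Bedrock offers in the region, not whether your access request has been granted:
import boto3
# Optional check: list the Anthropic models Bedrock offers in the target region
bedrock = boto3.client("bedrock", region_name="us-east-1")
models = bedrock.list_foundation_models(byProvider="Anthropic")
for summary in models["modelSummaries"]:
    print(summary["modelId"])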
I then proceeded to set up my environment as follows:
# Create a virtual environment
python -m venv .venv
# Activate the environment
# On Windows
.venv\Scripts\activate
# On macOS/Linux
source .venv/bin/activate
# Install Strands Agents SDK and tools
pip install strands-agents strands-agents-tools
The strands-agents package provides the core SDK functionality, while strands-agents-tools includes a variety of built-in tools we can use to enhance our agents. For my application, which doesn't use any of the built-in tools, installing only the strands-agents package would have been enough.
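To give a flavour of what strands-agents-tools adds, here's a minimal sketch (not part of my city explorer) that hands two of its built-in tools to an agent; I'm assuming the calculator and current_time tools exposed by the strands_tools module:
from strands import Agent
from strands_tools import calculator, current_time
# The agent decides for itself when a prompt needs one of these tools
tool_demo_agent = Agent(tools=[calculator, current_time])
tool_demo_agent("What time is it right now, and what is 17 * 24?")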
Next, I created my city_explorer_agent in the city_explorer.py file with the following code:
from strands import Agent
from strands.models import BedrockModel
from botocore.config import Config as BotocoreConfig
# Create a boto client config with custom settings
boto_config = BotocoreConfig(
    retries={"max_attempts": 3, "mode": "standard"},
    connect_timeout=5,
    read_timeout=60
)

# Create a configured Bedrock model
bedrock_model = BedrockModel(
    model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    region_name="us-east-1",  # Specify a different region than the default
    temperature=0.3,
    top_p=0.8,
    stop_sequences=["###", "END"],
    boto_client_config=boto_config,
)

# Create a city explorer agent with the configured model and system prompt
city_explorer_agent = Agent(
    model=bedrock_model,
    system_prompt="You are a knowledgeable city facts assistant. Provide concise, interesting facts about cities when asked. Keep responses brief and engaging."
)

# Conversational loop
print("City Facts Assistant - Ask me about any city! (type 'quit' to exit)")
while True:
    user_input = input("\nYou: ").strip()
    if user_input.lower() in ['quit', 'exit', 'bye']:
        print("Goodbye!")
        break
    if user_input:
        response = city_explorer_agent(user_input)
        print(f"\nAssistant: {response}")
I ran the code in the terminal.
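Assuming the AWS credentials in your environment (for example, set up with aws configure) are allowed to invoke Bedrock models, starting the assistant is just:
python city_explorer.py
From there, every question typed at the You: prompt goes to the agent, and typing quit ends the session.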
Good to know
You might be wondering about the benefits of using Strands versus Bedrock Agents. The two technologies are complementary: Bedrock Agents offers fully managed, hosted agents, while Strands is an open source framework that can run anywhere. Today, if you're experimenting, use Strands; if you need a fully managed path to production, use Bedrock Agents.
Strands Agents is also less opinionated than some other frameworks such as LangChain: rather than shipping libraries of prompt engineering templates, it lets the model take the strain. It's a new breed of agentic tooling.
I was motivated by the promise that Strands Agents takes a model-driven approach to building and running AI agents in just a few lines of code, relying on the model's ability to reason and plan rather than on manually defined, complex workflows. As my code above shows, Strands Agents lived up to that promise.
Next Steps
There's a whole community behind this open source project, and contributions are welcome, whether that's adding support for additional providers' models and tools, collaborating on new features, or expanding the documentation. If you find a bug, have a suggestion, or have something to contribute, join the project on GitHub.
Resources
Model Driven Agents - Strands Agents on YouTube
Building AI Agents with Strands - A series