TL;DR
An AI agent using the `mem0_memory` tool to get persistent context for serverless, AWS Lambda-based Strands agents: minimal code to store user preferences and recall them across different AWS Lambda invocations.
Here's the GitHub repo if you want to dive in right away: serverless-memory-strands-agent
Why?
Ever wondered how to persist user conversation context across different AWS Lambda invocations? Using the Strands Agents SDK with its `mem0_memory` tool makes it surprisingly easy. Let's dig into how to build, deploy, and run a serverless agent that can store and recall context.
In the previous article of this series, we explored how to build a serverless agent using the Strands Agents SDK. Since serverless apps are stateless by nature, we now need a way to persist conversation context across invocations! For this purpose, we can use the `mem0_memory` tool, built on top of mem0.ai, which provides several actions:
- `store` persists a new memory tied to a specific user
- `retrieve` fetches semantically relevant memories for that user
- `list` returns all stored memories associated with a user
- the agent can also use `mem0_memory` to automagically retrieve and leverage memories during its reasoning process
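To make those actions concrete, here is a toy, in-memory stand-in that mimics the store/retrieve/list semantics. This is a sketch only: the real `mem0_memory` tool performs semantic search over a vector store, while this naive version just keyword-matches.

```python
from collections import defaultdict


class ToyMemory:
    """Naive stand-in for a per-user memory store (illustration only)."""

    def __init__(self):
        self._memories = defaultdict(list)  # user_id -> list of memories

    def store(self, user_id: str, content: str) -> None:
        # Persist a new memory tied to a specific user
        self._memories[user_id].append(content)

    def retrieve(self, user_id: str, query: str) -> list:
        # The real tool does semantic similarity; here we just keyword-match
        words = query.lower().split()
        return [m for m in self._memories[user_id] if any(w in m.lower() for w in words)]

    def list(self, user_id: str) -> list:
        # Return all memories associated with a user
        return list(self._memories[user_id])
```

The key point the sketch illustrates is scoping: every action takes a `user_id`, so one user's memories never leak into another's.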
Everything becomes pretty clear when you take a look at the tool's source code here.
Neat, right? It gives your agent persistent context out of the box: basically, a serverless AI agent that actually remembers and has memories tied to users.
There's a specific section in the Strands Agents docs that covers it here.
Strands Agents Mem0 Configuration
The `mem0_memory` tool supports three different backend configurations:
- `OpenSearch`, recommended for production AWS environments: it requires AWS credentials and an OpenSearch configuration. You should create the domain with your preferred IaC framework, then set `OPENSEARCH_HOST` and optionally `AWS_REGION`.
- `FAISS`, the default local vector store backend for local development: it requires the `faiss-cpu` package for local vector storage, and no additional configuration is needed.
- the mem0.ai platform, which manages memories through its APIs: it requires a mem0.ai API key, set as `MEM0_API_KEY` in the environment variables.
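For reference, here is a sketch of the environment each option expects. The host, region, and key values below are placeholders, not real endpoints or credentials:

```shell
# Option 1 - OpenSearch backend (production): placeholder host
export OPENSEARCH_HOST="my-domain.us-east-1.es.amazonaws.com"
export AWS_REGION="us-east-1"   # optional

# Option 2 - FAISS (local development): no env vars needed, just the package
# pip install faiss-cpu

# Option 3 - mem0.ai platform: placeholder API key
export MEM0_API_KEY="m0-xxxxxxxxxxxx"
```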
I'm going with the last option, as I prefer testing things "remocally" (local code, remote data) when building cloud-native solutions, and I love how simple mem0.ai makes it.
Full Code Walkthrough
First, let's set things up:
- Load the `.env` file with your mem0.ai credentials (you can grab an API key by signing up and using their dashboard):

```
MEM0_API_KEY=xxx
```

- Define a friendly system prompt to guide your agent's behavior:
```python
from typing import Dict, Any
from strands import Agent, tool
from strands_tools import mem0_memory
from strands.models import BedrockModel
from dotenv import load_dotenv

load_dotenv()

SYSTEM_PROMPT = """
You are a helpful personal assistant that provides personalized responses based on user history.

Capabilities:
- Store information with mem0_memory
- Retrieve memories with mem0_memory

Key Rules:
- Be conversational
- Retrieve memories before responding
- Store new info
- Share only relevant memories
- Politely indicate if nothing's found
"""
```
Next, let's create the AWS Lambda handler, just like we did in the previous article of this series:
- read from the event a `user_id` (to scope memories), an `action` (to decide what to do with the content), and a `content` (to interact with the agent)
- then init our agent using the previously defined system prompt and the memory tool
- route incoming calls based on the `action` parameter: `store`, `retrieve`, or `list` to interact with mem0.ai, or `chat` to engage in a conversation with the agent
- for the `chat` action, also inject `user_id` into the prompt, so we are sure memories are scoped correctly to the user
- wrap everything in a try/except block to return JSON-friendly errors, just in case
```python
def memory(event: Dict[str, Any], _context) -> Any:
    user_id = event.get("user_id")
    action = event.get("action", "chat")
    content = event.get("content")

    # Basic validation
    if not user_id:
        return {"error": "Missing 'user_id' in payload."}
    if not content and action not in ["list"]:
        return {"error": "Missing 'content' in event payload."}

    memory_agent = Agent(
        system_prompt=SYSTEM_PROMPT,
        tools=[mem0_memory],
    )

    try:
        if action == "store":
            result = memory_agent.tool.mem0_memory(action="store", content=content, user_id=user_id)
        elif action == "retrieve":
            result = memory_agent.tool.mem0_memory(action="retrieve", content=content, user_id=user_id)
        elif action == "list":
            result = memory_agent.tool.mem0_memory(action="list", user_id=user_id)
        elif action == "chat":
            result = memory_agent(f"USER_ID:{user_id} - {content}")
        else:
            return {"error": f"Unknown action: {action}"}
        # Return the tool/agent output so callers can see what happened
        return {"result": str(result)}
    except Exception as e:
        return {"error": str(e)}
```
Keeping User Data Scoped (and Safe)
In this demo, we're passing `user_id` directly in the AWS Lambda payload for simplicity, but in production you'd inject it from a trusted source, like AWS Cognito or a custom authorizer. That way it can't be tampered with, unlike a field coming from the client's request.
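As a sketch of what that could look like: with an API Gateway REST API fronted by a Cognito User Pool authorizer, the verified JWT claims arrive under `requestContext.authorizer.claims`, so the handler can derive the user id from the Cognito `sub` claim instead of trusting the payload. The event shape below assumes that setup:

```python
from typing import Any, Dict, Optional


def extract_user_id(event: Dict[str, Any]) -> Optional[str]:
    """Read the caller's identity from the authorizer context, not the payload.

    With a Cognito User Pool authorizer on an API Gateway REST API, the
    verified claims are exposed at requestContext.authorizer.claims, so the
    'sub' claim works as a tamper-proof user_id.
    """
    claims = (
        event.get("requestContext", {})
        .get("authorizer", {})
        .get("claims", {})
    )
    return claims.get("sub")
```

Swapping `event.get("user_id")` for `extract_user_id(event)` in the handler would remove the client's ability to impersonate other users.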
Deploy on AWS Lambda
Deploying on AWS Lambda is as simple as writing a Serverless Framework configuration file:
```yaml
service: serverless-memory-strands-agent
frameworkVersion: '3'

## Use .env
useDotenv: true

## Package each function individually
package:
  individually: true

## Apply plugins
plugins:
  - serverless-python-requirements # install python requirements

## Define provider and globals
provider:
  name: aws
  runtime: python3.12
  environment:
    MEM0_API_KEY: ${env:MEM0_API_KEY} # API key for Mem0

## Define atomic functions
functions:
  ## memory function
  memory:
    handler: src/agent/memory/handler.memory # function handler
    url: true
    package: # package patterns
      patterns:
        - "!**/*"
        - src/agent/memory/**
```
Remember to create a `MEM0_API_KEY` entry in your `.env` file!
Test locally with the Serverless Framework
We can now test it locally using the `serverless invoke local` functionality.
First of all, let's store some data for two different users, starting with preferences for user `1`:

```shell
sls invoke local -f memory --data \
  '{"content": "I like apples and grapefruit, I do not like oranges and bananas","action":"store","user_id":"1"}'
```
The Serverless CLI will print a summary of the memories stored, scoped to user `1`.
Then continue saving preferences for user `2`:

```shell
sls invoke local -f memory --data \
  '{"content": "I like oranges and bananas, I do not like apples","action":"store","user_id":"2"}'
```

Again, the Serverless CLI will print a summary of the memories stored, but this time scoped to user `2`.
We can also see the stored memories in the mem0.ai dashboard.
Now we can interact with our agent, asking about what we stored (in this case, preferred fruits):

```shell
sls invoke local -f memory --data \
  '{"content":"What fruit do i like?","action":"chat","user_id":"1"}'
```

We'll see the previously saved preferences, retrieved by our agent and used to answer that user `1` prefers apples and grapefruit.
Finally, let's test it for user `2`:

```shell
sls invoke local -f memory --data \
  '{"content":"What fruit do i like?","action":"chat","user_id":"2"}'
```

We'll see the previously saved preferences, retrieved by our agent and used to answer that user `2` prefers oranges and bananas.
You can also use the `list` and `retrieve` actions directly. As an example, to list all memories for a specific user:

```shell
sls invoke local -f memory --data \
  '{"action":"list","user_id":"1"}'
```
Ship to the cloud
As simple as:

```shell
sls deploy
```

Remember you should have AWS credentials configured.
Final Thoughts
The Strands Agents SDK strips away much of the boilerplate you'd normally deal with in typical agent frameworks. It offers a clean, intuitive API and built-in tools, like `mem0_memory`, that cover a wide range of real-world use cases. Whether you're building chatbots, assistants, or serverless AI workflows, this SDK gives you a solid and extensible foundation to start from.
What's Next?
A great next step would be testing the `mem0_memory` tool with an AWS OpenSearch Serverless backend. It's a production-ready option that scales automatically, plays well with Amazon Bedrock, and eliminates the need to manage infrastructure: perfect for cloud-native, memory-driven agents on AWS.
Who am I
I'm D. De Sio and I work as Head of Software Engineering at Eleva.
I'm currently (Apr 2025) an AWS Certified Solutions Architect Professional and AWS Certified DevOps Engineer Professional, but also a User Group Leader (in Pavia), an AWS Community Builder and, last but not least, a #serverless enthusiast.
My work in this field is to advocate for serverless and help more dev teams adopt it, as well as helping customers break their monoliths into APIs and microservices.