Ibrohim Abdivokhidov

Give Your AI Agents Persistent Memory with Mem0

🚀 Want your AI agents to remember past conversations and learn from user interactions? Here's how I integrated @mem0ai into my AI agent system to give them persistent, intelligent memory.

The result? Agents that truly understand context and personalize experiences! 🧵👇

🤔 The Problem: Most AI agents start fresh every conversation. They can't remember user preferences, past decisions, or successful strategies.

Solution: Mem0's AI-powered memory layer = Agents that learn, adapt, and provide personalized experiences across sessions! 🧠⚡

📊 Why Mem0? According to their research paper:
• 26% higher accuracy than OpenAI Memory
• 91% lower latency
• 90% token savings

Plus full TypeScript support and enterprise-grade features! 📈

๐Ÿ› ๏ธ Setup is surprisingly simple. First, install the package:

npm install mem0ai
# or for Python
pip install mem0ai

Get your API key from Mem0 Platform and you're ready to go! 🎯

💡 The magic starts with configuration. Here's my production setup using Supabase + OpenAI:

const memoryConfig = {
  embedder: {
    provider: 'openai',
    config: {
      model: 'text-embedding-3-small',
    }
  },
  historyStore: {
    provider: 'supabase',
    config: {
      tableName: 'memory_history'
    }
  }
}

🔧 Creating the MemoryService class for seamless integration:

export class MemoryService {
  private memory: Memory;

  constructor() {
    this.memory = new Memory(memoryConfig);
  }

  async addMemories(messages, userId, metadata) {
    return await this.memory.add(messages, {
      userId,
      metadata: { timestamp: new Date().toISOString(), ...metadata }
    });
  }
}

Clean, modular, production-ready! ✨
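To see what the wrapper actually contributes beyond delegation, here's a minimal sketch with a stubbed `FakeMemory` standing in for Mem0's `Memory` class (the stub and `MemoryServiceSketch` are illustrative names, not part of the SDK). The only added behavior is stamping a timestamp into the metadata before handing off:

```typescript
type Message = { role: string; content: string };

// Stand-in for Mem0's Memory class: records every add() call for inspection.
class FakeMemory {
  calls: Array<{ messages: Message[]; options: { userId: string; metadata: Record<string, unknown> } }> = [];
  async add(messages: Message[], options: { userId: string; metadata: Record<string, unknown> }) {
    this.calls.push({ messages, options });
    return { results: [] };
  }
}

class MemoryServiceSketch {
  constructor(private memory: FakeMemory) {}

  async addMemories(messages: Message[], userId: string, metadata: Record<string, unknown> = {}) {
    // Stamp a timestamp, then let caller-supplied metadata extend/override it.
    return this.memory.add(messages, {
      userId,
      metadata: { timestamp: new Date().toISOString(), ...metadata },
    });
  }
}

const fake = new FakeMemory();
const svc = new MemoryServiceSketch(fake);
svc.addMemories([{ role: 'user', content: 'I prefer tea' }], 'u1', { category: 'preferences' });
```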

🎯 Function calling integration makes memory operations automatic:

{
  name: "addMemory",
  description: "Store conversation memories",
  parameters: {
    type: "object",
    properties: {
      messages: { type: "array" },
      metadata: { type: "object" }
    }
  }
}

The AI decides when to save memories naturally! 🤖

📈 The AI agent conversation flow with memory:

  1. User: "I'm vegetarian and allergic to nuts"
  2. AI calls addMemory() to store preferences
  3. Next session: User asks "What should I eat?"
  4. AI calls searchMemories("food preferences")
  5. Gets personalized recommendations!

Context persistence = magic! ✨
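The five-step flow can be mimicked end to end with a toy in-memory layer. Everything here (`ToyMemoryLayer`, the word-overlap scoring) is a stand-in for Mem0's real semantic search, just to make the store-then-retrieve loop concrete:

```typescript
type Stored = { id: number; text: string; userId: string };

// Toy memory layer: substring word-overlap scoring stands in for embeddings.
class ToyMemoryLayer {
  private store: Stored[] = [];
  private nextId = 1;

  add(text: string, userId: string) {
    this.store.push({ id: this.nextId++, text, userId });
  }

  // Rank a user's memories by how many query words appear in the stored text.
  search(query: string, userId: string): Stored[] {
    const words = query.toLowerCase().split(/\s+/);
    return this.store
      .filter(m => m.userId === userId)
      .map(m => ({ m, score: words.filter(w => m.text.toLowerCase().includes(w)).length }))
      .filter(x => x.score > 0)
      .sort((a, b) => b.score - a.score)
      .map(x => x.m);
  }
}

// Session 1: the agent stores the stated preference.
const layer = new ToyMemoryLayer();
layer.add('User is vegetarian and allergic to nuts', 'u1');
layer.add('User prefers morning workouts', 'u1');

// Session 2: "What should I eat?" triggers a search for food preferences.
const hits = layer.search('vegetarian food preferences', 'u1');
```

With real Mem0, the embedding-based search would also match paraphrases ("dietary restrictions"), which the substring toy cannot.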

๐Ÿ” Memory search is incredibly powerful:

// Search relevant memories
await memoryService.searchMemories(query, userId, limit)

// Get contextual memories
await memoryService.getRelevantMemories(
  currentMessage, 
  userId, 
  conversationHistory
)

Semantic search finds exactly what's needed! 🎯

⚡ Real-world memory operations I implemented:

case "searchMemories":
  const memories = await memoryService.searchMemories(
    args.query, userId, args.limit || 5
  );
  return JSON.stringify({ memories, count: memories.length });

case "addMemory":
  await memoryService.addMemories(
    args.messages, userId, args.metadata
  );
  return JSON.stringify({ success: true });

🎮 Advanced features that make the difference:

  • User & Agent Memory: Separate memory spaces for users and AI agents
  • Session Memory: Short-term context for conversations
  • Metadata Filtering: Precise memory retrieval with structured data
  • Memory History: Track how memories evolve over time
  • Batch Operations: Update/delete multiple memories efficiently
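Metadata filtering is the easiest of these to illustrate. Mem0 can filter during retrieval; the sketch below shows the same idea client-side with AND semantics over structured metadata (the `Mem` shape and `filterByMetadata` helper are illustrative, not SDK types):

```typescript
type Mem = { id: string; text: string; metadata: Record<string, unknown> };

// Keep only memories whose metadata matches every key/value pair in the filter.
function filterByMetadata(memories: Mem[], filter: Record<string, unknown>): Mem[] {
  return memories.filter(m =>
    Object.entries(filter).every(([k, v]) => m.metadata[k] === v)
  );
}

const mems: Mem[] = [
  { id: '1', text: 'likes tofu', metadata: { category: 'preferences', source: 'chat' } },
  { id: '2', text: 'finished JS course', metadata: { category: 'learning' } },
];
const prefs = filterByMetadata(mems, { category: 'preferences' });
```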

๐Ÿ›ก๏ธ Error handling for production reliability:

try {
  const result = await memoryService.addMemories(messages, userId);
  return { success: true, memoryId: result };
} catch (error) {
  console.error('Memory operation failed:', error);
  return { error: "Failed to add memory", success: false };
}

Always graceful degradation! 🛡️
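Rather than repeating that try/catch around every call site, the pattern generalizes to a small "never throw" wrapper (a sketch; `safeMemoryOp` is my own name, not part of Mem0): run the operation, log on failure, and hand back a caller-supplied fallback so the agent keeps responding.

```typescript
// Run a memory operation; on failure, log and return the fallback value
// instead of propagating the error to the conversation loop.
async function safeMemoryOp<T>(op: () => Promise<T>, fallback: T): Promise<T> {
  try {
    return await op();
  } catch (error) {
    console.error('Memory operation failed:', error);
    return fallback;
  }
}

// Example: a failed search degrades to "no memories" instead of crashing.
const memories = safeMemoryOp(async () => {
  throw new Error('vector store unreachable');
}, [] as string[]);
```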

📊 Real example from my Wei AI agent:

User: "I want to build a morning routine"
Agent: Stores preference + suggests habits
User (next week): "How's my routine going?"
Agent: Retrieves memory + provides personalized progress report with specific habits mentioned before

Continuity = engagement! 🎯

🚀 Performance benefits I'm seeing:

  • ⚡ Sub-100ms memory operations
  • 🧠 Intelligent memory consolidation
  • 💰 90% reduction in context tokens
  • 🎯 Highly relevant memory retrieval
  • 📱 Works seamlessly with streaming
  • 🌍 Scales with user growth

๐Ÿ› ๏ธ The complete integration took about 3 hours:

โœ… 45 min: Mem0 service setup
โœ… 60 min: Function calling integration
โœ… 45 min: Error handling & testing
โœ… 30 min: Production optimizations

Most time was on polish, not core functionality! โญ

🎯 Key takeaways:

  1. Memory transforms AI agents from tools to companions
  2. Mem0's API is developer-friendly and performant
  3. Function calling makes memory operations seamless
  4. Context persistence dramatically improves UX
  5. Essential for any serious AI application

Try building with memory! 🚀


Technical Implementation Deep Dive

Complete Memory Service Implementation

import { Memory } from 'mem0ai/oss';

const memoryConfig = {
  version: 'v1.1',
  embedder: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY,
      model: 'text-embedding-3-small',
    },
  },
  vectorStore: {
    provider: 'memory',
    config: {
      collectionName: 'memories',
      dimension: 1536,
    },
  },
  llm: {
    provider: 'openai',
    config: {
      apiKey: process.env.OPENAI_API_KEY,
      model: 'gpt-4o-mini',
      temperature: 0.2,
      maxTokens: 1500,
    },
  },
  historyStore: {
    provider: 'supabase',
    config: {
      supabaseUrl: process.env.NEXT_PUBLIC_SUPABASE_URL,
      supabaseKey: process.env.SUPABASE_SERVICE_ROLE,
      tableName: 'memory_history',
    },
  },
};

export class MemoryService {
  private memory: Memory;

  constructor() {
    this.memory = new Memory(memoryConfig);
  }

  async addMemories(messages, userId, metadata) {
    try {
      const result = await this.memory.add(messages, {
        userId,
        metadata: {
          timestamp: new Date().toISOString(),
          ...metadata,
        },
      });
      return result;
    } catch (error) {
      console.error('Error adding memories:', error);
      throw error;
    }
  }

  async searchMemories(query, userId, limit = 5) {
    try {
      const result = await this.memory.search(query, { userId, limit });
      return result;
    } catch (error) {
      console.error('Error searching memories:', error);
      throw error;
    }
  }

  async getRelevantMemories(currentMessage, userId, conversationHistory = [], limit = 3) {
    try {
      // Search for memories related to current message
      const relevantMemories = await this.searchMemories(currentMessage, userId, limit);

      // Also search based on conversation context
      if (conversationHistory.length > 0) {
        const recentMessages = conversationHistory.slice(-3);
        const contextQuery = recentMessages.map(msg => msg.content).join(' ');
        const contextMemories = await this.searchMemories(contextQuery, userId, limit);

        // Combine and deduplicate
        const allMemories = [...relevantMemories.results, ...contextMemories.results];
        const uniqueMemories = allMemories.filter(
          (memory, index, self) => index === self.findIndex(m => m.id === memory.id)
        );

        return uniqueMemories.slice(0, limit);
      }

      return relevantMemories.results;
    } catch (error) {
      console.error('Error getting relevant memories:', error);
      return [];
    }
  }
}
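The merge-and-deduplicate step inside `getRelevantMemories` is worth pulling out as a pure helper, since it's the part most likely to need its own tests. A sketch (`dedupeById` is my own extraction, not an SDK function): first occurrence of each id wins, then the list is truncated to the limit.

```typescript
type WithId = { id: string };

// Deduplicate by id, preserving first-seen order, then truncate to `limit`.
function dedupeById<T extends WithId>(memories: T[], limit: number): T[] {
  const seen = new Set<string>();
  const unique: T[] = [];
  for (const m of memories) {
    if (!seen.has(m.id)) {
      seen.add(m.id);
      unique.push(m);
    }
  }
  return unique.slice(0, limit);
}

const combined = [{ id: 'a' }, { id: 'b' }, { id: 'a' }, { id: 'c' }];
const top = dedupeById(combined, 2);
```

Using a `Set` makes this O(n), versus the O(n²) `findIndex` scan in the inline version, which matters once limits grow.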

API Route Integration

// Memory function definitions for AI agents
const memoryFunctions = [
  {
    name: "searchMemories",
    description: "Search through user memories for relevant information",
    parameters: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search query" },
        limit: { type: "number", description: "Max results (default: 5)" }
      },
      required: ["query"]
    }
  },
  {
    name: "addMemory",
    description: "Store new memories from conversation",
    parameters: {
      type: "object",
      properties: {
        messages: {
          type: "array",
          items: {
            type: "object",
            properties: {
              role: { type: "string" },
              content: { type: "string" }
            }
          }
        },
        metadata: { type: "object" }
      },
      required: ["messages"]
    }
  },
  {
    name: "getRelevantMemories",
    description: "Get contextually relevant memories",
    parameters: {
      type: "object",
      properties: {
        currentMessage: { type: "string" },
        conversationHistory: { type: "array" },
        limit: { type: "number" }
      },
      required: ["currentMessage"]
    }
  }
];

// Function call handler
async function handleMemoryFunction(name, args, userId) {
  switch (name) {
    case "searchMemories":
      try {
        const memories = await memoryService.searchMemories(
          args.query, userId, args.limit || 5
        );
        return JSON.stringify({
          memories,
          count: Array.isArray(memories) ? memories.length : 0
        });
      } catch (error) {
        return JSON.stringify({
          error: "Failed to search memories",
          memories: []
        });
      }

    case "addMemory":
      try {
        const result = await memoryService.addMemories(
          args.messages, userId, args.metadata
        );
        return JSON.stringify({
          success: true,
          message: "Memory added successfully",
          memoryId: result
        });
      } catch (error) {
        return JSON.stringify({
          error: "Failed to add memory",
          success: false
        });
      }

    case "getRelevantMemories":
      try {
        const relevantMemories = await memoryService.getRelevantMemories(
          args.currentMessage,
          userId,
          args.conversationHistory || [],
          args.limit || 3
        );
        return JSON.stringify({
          memories: relevantMemories,
          count: Array.isArray(relevantMemories) ? relevantMemories.length : 0
        });
      } catch (error) {
        return JSON.stringify({
          error: "Failed to get relevant memories",
          memories: []
        });
      }
    default:
      return JSON.stringify({ error: "Unknown memory function: " + name });
  }
}

Supabase Database Schema

-- Memory history table for Mem0
CREATE TABLE memory_history (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  user_id UUID REFERENCES auth.users(id),
  memory_id TEXT NOT NULL,
  operation TEXT NOT NULL,
  data JSONB,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
  updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Enable RLS
ALTER TABLE memory_history ENABLE ROW LEVEL SECURITY;

-- RLS Policies
CREATE POLICY "Users can view own memory history" ON memory_history
  FOR SELECT USING (auth.uid() = user_id);

CREATE POLICY "Users can insert own memory history" ON memory_history
  FOR INSERT WITH CHECK (auth.uid() = user_id);

-- Indexes for performance
CREATE INDEX memory_history_user_id_idx ON memory_history(user_id);
CREATE INDEX memory_history_memory_id_idx ON memory_history(memory_id);
CREATE INDEX memory_history_created_at_idx ON memory_history(created_at);

Production Use Cases

1. Personal AI Assistant

// Store user preferences
const messages = [
  { role: "user", content: "I prefer morning workouts and vegetarian meals" }
];
await memoryService.addMemories(messages, userId, { category: "preferences" });

// Later retrieval
const preferences = await memoryService.searchMemories("user preferences", userId);

2. Learning Agent

// Track user progress
const messages = [
  { role: "user", content: "I completed the JavaScript basics course" },
  { role: "assistant", content: "Great! Ready for React next?" }
];
await memoryService.addMemories(messages, userId, { category: "learning" });

3. Customer Support Agent

// Remember past issues
const relevantIssues = await memoryService.searchMemories(
  "billing problems", userId, 3
);
// Provide context-aware support

Performance Optimizations

Memory Consolidation

  • Automatically merge similar memories
  • Remove outdated information
  • Maintain memory relevance scores
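Mem0's consolidation is LLM-driven, so it can merge paraphrases; a much weaker but easy-to-reason-about version treats memories with identical normalized text as duplicates and keeps only the most recent. This is purely an illustration of the dedupe-by-content idea, not how Mem0 implements it:

```typescript
type TimedMem = { text: string; updatedAt: number };

// Naive consolidation: same normalized text => duplicate; keep the newest.
function consolidate(memories: TimedMem[]): TimedMem[] {
  const latest = new Map<string, TimedMem>();
  for (const m of memories) {
    const key = m.text.trim().toLowerCase();
    const existing = latest.get(key);
    if (!existing || m.updatedAt > existing.updatedAt) {
      latest.set(key, m);
    }
  }
  return [...latest.values()];
}

const merged = consolidate([
  { text: 'Prefers tea', updatedAt: 1 },
  { text: 'prefers tea ', updatedAt: 5 },
  { text: 'Vegetarian', updatedAt: 2 },
]);
```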

Caching Strategy

// Cache frequently accessed memories
const memoryCache = new Map();

async function getCachedMemories(userId, query) {
  const cacheKey = `${userId}:${query}`;
  if (memoryCache.has(cacheKey)) {
    return memoryCache.get(cacheKey);
  }

  const memories = await memoryService.searchMemories(query, userId);
  memoryCache.set(cacheKey, memories);
  return memories;
}
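One caveat with the plain `Map` cache: it never invalidates, so stale memories can be served after new ones are added. A TTL variant fixes that (a sketch; `TtlCache` is my own helper, with an injectable clock so expiry is easy to test):

```typescript
// Cache entries expire once they are older than `ttlMs`. The clock function
// is injectable; it defaults to Date.now in real use.
class TtlCache<V> {
  private entries = new Map<string, { value: V; storedAt: number }>();

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() - entry.storedAt > this.ttlMs) {
      this.entries.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V) {
    this.entries.set(key, { value, storedAt: this.now() });
  }
}

// Simulated clock: store at t=0, expire after the 1s TTL elapses.
let t = 0;
const cache = new TtlCache<number>(1000, () => t);
cache.set('u1:preferences', 42);
const fresh = cache.get('u1:preferences');
t = 2000;
const stale = cache.get('u1:preferences');
```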

Batch Operations

// Batch update memories for efficiency
const updates = [
  { memory_id: "123", text: "Updated preference" },
  { memory_id: "456", text: "New information" }
];
await memoryService.batchUpdate(updates);
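Note that `batchUpdate` is not part of the `MemoryService` shown earlier, so treat it as a method you'd add yourself. One plausible implementation (a sketch; `chunk` and `applyBatch` are my own names) splits the updates into chunks and applies each chunk concurrently, capping how many requests are in flight at once:

```typescript
// Split an array into groups of at most `size` elements.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Apply `updateOne` to every item: sequential across chunks, concurrent
// within a chunk, so at most `batchSize` updates run at a time.
async function applyBatch<T>(
  items: T[],
  updateOne: (item: T) => Promise<void>,
  batchSize = 10
): Promise<void> {
  for (const group of chunk(items, batchSize)) {
    await Promise.all(group.map(updateOne));
  }
}

const groups = chunk([1, 2, 3, 4, 5], 2);
```

Taking `updateOne` as a parameter keeps the helper independent of any particular SDK call, so it works whether updates go through Mem0 or a raw HTTP endpoint.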

This article demonstrates a production-ready Mem0 integration in an AI agent system. The code examples come from the actual Wei AI Agent project and show real-world implementation patterns.
