
How to Build a Universal Long-Term Memory Layer for AI Agents Using Mem0 and OpenAI

Tech · By Gavin Wallace · 16/04/2026 · 9 min read

In this tutorial, we build a universal long-term memory layer for AI agents using Mem0, OpenAI models, and ChromaDB. We design a system that can extract structured memories from natural conversations, store them semantically, retrieve them intelligently, and integrate them directly into personalized agent responses. We move beyond simple chat history and implement persistent, user-scoped memory with full CRUD control, semantic search, multi-user isolation, and custom configuration. Finally, we assemble a production-ready memory-augmented agent architecture that demonstrates how modern AI systems can reason with contextual continuity rather than operate statelessly.

!pip install mem0ai openai rich chromadb -q


import os
import getpass
from datetime import datetime


print("=" * 60)
print("🔐  MEM0 Advanced Tutorial — API Key Setup")
print("=" * 60)


OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY


print("\n✅ API key set!\n")


from openai import OpenAI
from mem0 import Memory
from rich.console import Console
from rich.panel import Panel
from rich.table import Table
from rich.markdown import Markdown
from rich import print as rprint
import json


console = Console()
openai_client = OpenAI()


console.rule("[bold cyan]MODULE 1: Basic Memory Setup[/bold cyan]")


memory = Memory()


print(Panel(
    "[green]✓ Memory instance created with default config[/green]\n"
    "  • LLM: gpt-4.1-nano (OpenAI)\n"
    "  • Vector Store: ChromaDB (local)\n"
    "  • Embedder: text-embedding-3-small",
    title="Memory Config", border_style="cyan"
))

We install all required dependencies and securely configure our OpenAI API key. We initialize the Mem0 Memory instance along with the OpenAI client and Rich console utilities. This establishes the foundation of our long-term memory system with the default configuration, powered by ChromaDB and OpenAI embeddings.
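One small refinement worth noting: the key prompt above re-asks on every run. A guard like the following (a sketch, assuming the standard `OPENAI_API_KEY` environment variable) skips the prompt when a key is already set:

```python
import os
import getpass

def ensure_openai_key() -> bool:
    """Prompt for an OpenAI API key only when one is not already
    present in the environment, so re-running the notebook is painless."""
    if not os.environ.get("OPENAI_API_KEY"):
        os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
    return bool(os.environ.get("OPENAI_API_KEY"))
```

Calling `ensure_openai_key()` at the top of each session keeps the setup idempotent.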

console.rule("[bold cyan]MODULE 2: Adding & Retrieving Memories[/bold cyan]")


USER_ID = "alice_tutorial"


print("\n📝 Adding memories for user:", USER_ID)


conversations = [
   [
       {"role": "user", "content": "Hi! I'm Alice. I'm a software engineer who loves Python and machine learning."},
       {"role": "assistant", "content": "Nice to meet you Alice! Python and ML are great areas to be in."}
   ],
   [
       {"role": "user", "content": "I prefer dark mode in all my IDEs and I use VS Code as my main editor."},
       {"role": "assistant", "content": "Good to know! VS Code with dark mode is a popular combo."}
   ],
   [
       {"role": "user", "content": "I'm currently building a RAG pipeline for my company's internal docs. It's for a fintech startup."},
       {"role": "assistant", "content": "That's exciting! RAG pipelines are really valuable for enterprise use cases."}
   ],
   [
       {"role": "user", "content": "I have a dog named Max and I enjoy hiking on weekends."},
       {"role": "assistant", "content": "Max sounds lovely! Hiking is a great way to recharge."}
   ],
]


results = []
for i, convo in enumerate(conversations):
    result = memory.add(convo, user_id=USER_ID)
    extracted = result.get("results", [])
    for mem in extracted:
        results.append(mem)
    print(f"  Conversation {i+1}: {len(extracted)} memory(ies) extracted")


print(f"\n✅ Total memories stored: {len(results)}")

We simulate realistic multi-turn conversations and store them using Mem0's automatic memory extraction pipeline. We add structured conversational data for a specific user and let the LLM extract meaningful long-term facts. We then verify how many memories were created, confirming that semantic knowledge is successfully persisted.
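The `result.get("results", [])` handling above matters because Mem0's `add()` return shape has varied across releases (a dict with a `results` list in v1.1-style output versus a bare list earlier). A small normalizer, sketched here under that assumption, keeps the counting logic version-proof:

```python
def extract_memories(add_result):
    """Normalize memory.add() output to a flat list of memory dicts,
    accepting either the {"results": [...]} shape or a bare list."""
    if isinstance(add_result, dict):
        return add_result.get("results", [])
    return list(add_result or [])
```

With this in place, `extracted = extract_memories(result)` works regardless of which shape the installed Mem0 version returns.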

console.rule("[bold cyan]MODULE 3: Semantic Search[/bold cyan]")


queries = [
   "What programming languages does the user prefer?",
   "What is Alice working on professionally?",
   "What are Alice's hobbies?",
   "What tools and IDE does Alice use?",
]


for query in queries:
    search_results = memory.search(query=query, user_id=USER_ID, limit=2)
    table = Table(title=f"🔍 Query: {query}", show_lines=True)
    table.add_column("Memory", style="white", max_width=60)
    table.add_column("Score", style="green", justify="center")

    for r in search_results.get("results", []):
        score = r.get("score", "N/A")
        score_str = f"{score:.4f}" if isinstance(score, float) else str(score)
        table.add_row(r["memory"], score_str)

    console.print(table)
    print()


console.rule("[bold cyan]MODULE 4: CRUD Operations[/bold cyan]")


all_memories = memory.get_all(user_id=USER_ID)
memories_list = all_memories.get("results", [])


print(f"\n📚 All memories for '{USER_ID}':")
for i, mem in enumerate(memories_list):
   print(f"  [{i+1}] ID: {mem['id'][:8]}...  →  {mem['memory']}")


if memories_list:
    first_id = memories_list[0]["id"]
    original_text = memories_list[0]["memory"]

    print(f"\n✏️  Updating memory: '{original_text}'")
    memory.update(memory_id=first_id, data=original_text + " (confirmed)")

    updated = memory.get(memory_id=first_id)
    print(f"   After update: '{updated['memory']}'")

We perform semantic search queries to retrieve relevant memories using natural language. We demonstrate how Mem0 ranks stored memories by similarity score and returns the most contextually aligned records. We also perform CRUD operations by listing, updating, and validating stored memory entries.
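The ranked search results can be folded into a prompt-ready context block with a small helper. This is a sketch that assumes the `{"results": [...]}` shape with optional `score` fields shown above:

```python
def format_memory_context(search_result: dict, max_items: int = 5) -> str:
    """Render Mem0 search() output as a bulleted block, highest
    similarity score first, ready for injection into a system prompt."""
    items = search_result.get("results", [])
    items = sorted(items, key=lambda r: r.get("score") or 0.0, reverse=True)[:max_items]
    if not items:
        return "No relevant memories found."
    return "\n".join(f"- {r['memory']}" for r in items)
```

This centralizes the formatting so the chat loop in the next module does not have to repeat it inline.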

console.rule("[bold cyan]MODULE 5: Memory-Augmented Chat[/bold cyan]")


def chat_with_memory(user_message: str, user_id: str, session_history: list) -> str:

    related = memory.search(query=user_message, user_id=user_id, limit=5)
    memory_context = "\n".join(
        f"- {r['memory']}" for r in related.get("results", [])
    ) or "No relevant memories found."

    system_prompt = f"""You are a highly personalized AI assistant.
You have access to long-term memories about this user.


RELEVANT USER MEMORIES:
{memory_context}


Use these memories to provide context-aware, personalized responses.
Be natural: do not explicitly announce that you are using memories."""

    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(session_history[-6:])
    messages.append({"role": "user", "content": user_message})

    response = openai_client.chat.completions.create(
        model="gpt-4.1-nano-2025-04-14",
        messages=messages
    )
    assistant_response = response.choices[0].message.content

    exchange = [
        {"role": "user", "content": user_message},
        {"role": "assistant", "content": assistant_response}
    ]
    memory.add(exchange, user_id=user_id)

    session_history.append({"role": "user", "content": user_message})
    session_history.append({"role": "assistant", "content": assistant_response})

    return assistant_response




session = []
demo_messages = [
   "Can you recommend a good IDE setup for me?",
   "What kind of project am I currently building at work?",
   "Suggest a weekend activity I might enjoy.",
   "What's a good tech stack for my current project?",
]


print("\n🤖 Starting memory-augmented conversation with Alice...\n")


for msg in demo_messages:
   print(Panel(f"[bold yellow]User:[/bold yellow] {msg}", border_style="yellow"))
   response = chat_with_memory(msg, USER_ID, session)
   print(Panel(f"[bold green]Assistant:[/bold green] {response}", border_style="green"))
   print()

We build a fully memory-augmented chat loop that retrieves relevant memories before generating responses. We dynamically inject personalized context into the system prompt and store each new exchange back into long-term memory. We simulate a multi-turn session to demonstrate contextual continuity and personalization in action.
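One practical refinement to this loop: as the store grows, `limit=5` can still pull in long memories that bloat the system prompt. A hedged sketch of a character-budget cap (a rough stand-in for a real token counter, which would be more accurate):

```python
def trim_to_budget(memories: list, max_chars: int = 800) -> list:
    """Greedily keep memory strings, in ranked order, until the rendered
    context (with '- ' bullets and newlines) would exceed max_chars."""
    kept, used = [], 0
    for m in memories:
        cost = len(m) + 3  # "- " prefix plus trailing newline
        if used + cost > max_chars:
            break
        kept.append(m)
        used += cost
    return kept
```

Applied before building `memory_context`, this keeps prompt size bounded no matter how many memories a user accumulates.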

console.rule("[bold cyan]MODULE 6: Multi-User Memory Isolation[/bold cyan]")


USER_BOB = "bob_tutorial"


bob_conversations = [
   [
       {"role": "user", "content": "I'm Bob, a data scientist specializing in computer vision and PyTorch."},
       {"role": "assistant", "content": "Great to meet you Bob!"}
   ],
   [
       {"role": "user", "content": "I prefer Jupyter notebooks over VS Code, and I use Vim keybindings."},
       {"role": "assistant", "content": "Classic setup for data science work!"}
   ],
]


for convo in bob_conversations:
    memory.add(convo, user_id=USER_BOB)


print("\n🔐 Testing memory isolation between Alice and Bob:\n")


test_query = "What programming tools does this user prefer?"


alice_results = memory.search(query=test_query, user_id=USER_ID, limit=3)
bob_results = memory.search(query=test_query, user_id=USER_BOB, limit=3)


print("👩 Alice's memories:")
for r in alice_results.get("results", []):
   print(f"   • {r['memory']}")


print("\n👨 Bob's memories:")
for r in bob_results.get("results", []):
   print(f"   • {r['memory']}")

We demonstrate user-level memory isolation by introducing a second user with distinct preferences. We store separate conversational data and validate that searches stay scoped to the correct user_id. We confirm that memory namespaces are isolated, ensuring safe multi-user agent deployments.
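The visual check above can be turned into an automated guard. A minimal sketch, assuming the same `{"results": [...]}` result shape used throughout:

```python
def memories_disjoint(results_a: dict, results_b: dict) -> bool:
    """Return True when two users' retrieved memory texts do not
    overlap, a quick regression check for namespace isolation."""
    texts_a = {r["memory"] for r in results_a.get("results", [])}
    texts_b = {r["memory"] for r in results_b.get("results", [])}
    return texts_a.isdisjoint(texts_b)
```

After the two searches above, `assert memories_disjoint(alice_results, bob_results)` would fail loudly if a future change ever leaked memories across user scopes.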

print("n✅ Memory isolation confirmed — users cannot see each other's data.")


console.rule("[bold cyan]MODULE 7: Custom Configuration[/bold cyan]")


custom_config = {
   "llm": {
       "provider": "openai",
       "config": {
           "model": "gpt-4.1-nano-2025-04-14",
           "temperature": 0.1,
           "max_tokens": 2000,
       }
   },
   "embedder": {
       "provider": "openai",
       "config": {
           "model": "text-embedding-3-small",
       }
   },
   "vector_store": {
       "provider": "chroma",
       "config": {
           "collection_name": "advanced_tutorial_v2",
           "path": "/tmp/chroma_advanced",
       }
   },
   "version": "v1.1"
}


custom_memory = Memory.from_config(custom_config)


print(Panel(
    "[green]✓ Custom memory instance created[/green]\n"
    "  • LLM: gpt-4.1-nano with temperature=0.1\n"
    "  • Embedder: text-embedding-3-small\n"
    "  • Vector Store: ChromaDB at /tmp/chroma_advanced\n"
    "  • Collection: advanced_tutorial_v2",
    title="Custom Config Applied", border_style="magenta"
))


custom_memory.add(
   [{"role": "user", "content": "I'm a researcher studying neural plasticity and brain-computer interfaces."}],
   user_id="researcher_01"
)


result = custom_memory.search("What field does this person work in?", user_id="researcher_01", limit=2)
print("\n🔍 Custom memory search result:")
for r in result.get("results", []):
    print(f"   • {r['memory']}")


console.rule("[bold cyan]MODULE 8: Memory History[/bold cyan]")


all_alice = memory.get_all(user_id=USER_ID)
alice_memories = all_alice.get("results", [])


table = Table(title=f"📋 Full Memory Profile: {USER_ID}", show_lines=True, width=90)
table.add_column("#", style="dim", width=3)
table.add_column("Memory ID", style="cyan", width=12)
table.add_column("Memory Content", style="white")
table.add_column("Created At", style="yellow", width=12)


for i, mem in enumerate(alice_memories):
    mem_id = mem["id"][:8] + "..."
    created = mem.get("created_at", "N/A")
    if created and created != "N/A":
        try:
            created = datetime.fromisoformat(created.replace("Z", "+00:00")).strftime("%m/%d %H:%M")
        except Exception:
            created = str(created)[:10]
    table.add_row(str(i+1), mem_id, mem["memory"], created)


console.print(table)


console.rule("[bold cyan]MODULE 9: Memory Deletion[/bold cyan]")


all_mems = memory.get_all(user_id=USER_ID).get("results", [])
if all_mems:
    last_mem = all_mems[-1]
    print(f"\n🗑️  Deleting memory: '{last_mem['memory']}'")
    memory.delete(memory_id=last_mem["id"])

    updated_count = len(memory.get_all(user_id=USER_ID).get("results", []))
    print(f"✅ Deleted. Remaining memories for {USER_ID}: {updated_count}")


console.rule("[bold cyan]✅ TUTORIAL COMPLETE[/bold cyan]")


summary = """
# 🎓 Mem0 Advanced Tutorial Summary


## What You Learned:
1. **Basic Setup**: Instantiate Memory with default & custom configs
2. **Add Memories**: From conversations (auto-extracted by LLM)
3. **Semantic Search**: Retrieve relevant memories by natural language query
4. **CRUD Operations**: Get, Update, Delete individual memories
5. **Memory-Augmented Chat**: Full pipeline: retrieve → respond → store
6. **Multi-User Isolation**: Separate memory namespaces per user_id
7. **Custom Configuration**: Custom LLM, embedder, and vector store
8. **Memory History**: View full memory profiles with timestamps
9. **Cleanup**: Delete individual or all memories


## Key Concepts:
- `memory.add(messages, user_id=...)`
- `memory.search(query, user_id=...)`
- `memory.get_all(user_id=...)`
- `memory.update(memory_id, data)`
- `memory.delete(memory_id)`
- `Memory.from_config(config)`


## Next Steps:
- Swap ChromaDB for Qdrant, Pinecone, or Weaviate
- Use the hosted Mem0 Platform (app.mem0.ai) for production
- Integrate with LangChain, CrewAI, or LangGraph agents
- Add `agent_id` for agent-level memory scoping
"""


console.print(Markdown(summary))

We create a fully custom Mem0 configuration with explicit parameters for the LLM, embedder, and vector store. We test the custom memory instance and explore memory history, timestamps, and structured profiling. Finally, we demonstrate deletion and cleanup operations, completing full lifecycle management of long-term agent memory.
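As a sketch of the "swap the vector store" step listed under Next Steps, a Qdrant-backed variant of the same config might look like this. The `qdrant` provider name and the `host`/`port` config keys follow the Mem0 documentation, but verify them against your installed version:

```python
qdrant_config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4.1-nano-2025-04-14", "temperature": 0.1},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vector_store": {
        "provider": "qdrant",  # assumes a Qdrant server reachable locally
        "config": {
            "collection_name": "agent_memories",
            "host": "localhost",
            "port": 6333,
        },
    },
    "version": "v1.1",
}
# qdrant_memory = Memory.from_config(qdrant_config)  # requires a running Qdrant
```

Only the `vector_store` block changes relative to the ChromaDB config above; the LLM and embedder settings carry over unchanged.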

In conclusion, we implemented a complete memory infrastructure for AI agents using Mem0 as a universal memory abstraction layer. We demonstrated how to add, retrieve, update, delete, isolate, and customize long-term memories while integrating them into a dynamic chat loop. We showed how semantic memory retrieval transforms generic assistants into context-aware systems capable of personalization and continuity across sessions. With this foundation in place, we are equipped to extend the architecture to multi-agent systems, enterprise-grade deployments, alternative vector databases, and advanced agent frameworks, turning memory into a core capability rather than an afterthought.


Check out the full implementation code and notebook.

