This tutorial shows how to create an intelligent agent that remembers us, adapts, and learns over time. We implement a Persistent Memory & Personalisation system using rule-based logic, simulating how modern AI frameworks store and retrieve contextual information. As we proceed, we observe how the agent's responses change with experience, how memory decay prevents overload, and how personalisation improves performance. Our goal is to understand, step by step, how persistent memory turns a chatbot into a contextually aware, evolving digital assistant. See the FULL CODES here.
import random, math, time
from typing import List

class MemoryItem:
    def __init__(self, kind: str, content: str, score: float = 1.0):
        self.kind = kind
        self.content = content
        self.score = score
        self.t = time.time()  # creation timestamp, used for decay

class MemoryStore:
    def __init__(self, decay_half_life=1800):
        self.items: List[MemoryItem] = []
        self.decay_half_life = decay_half_life

    def _decay_factor(self, item: MemoryItem):
        # Exponential half-life decay based on the item's age
        dt = time.time() - item.t
        return 0.5 ** (dt / self.decay_half_life)
We define the agent's long-term memory. A MemoryItem holds a single piece of information, and a MemoryStore applies an exponential half-life decay to it. With this foundation in place, stored information ages over time, much as memories fade in a human brain.
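To see the half-life decay in isolation, here is a minimal sketch of the same formula used by _decay_factor above (the standalone helper name `decay_factor` and the sample ages are illustrative, not part of the tutorial's code):

```python
# Exponential half-life decay: after one half-life a memory's weight
# drops to 50%, after two half-lives to 25%, and so on.
def decay_factor(age_seconds: float, half_life: float = 1800) -> float:
    return 0.5 ** (age_seconds / half_life)

print(decay_factor(0))     # fresh memory -> 1.0
print(decay_factor(1800))  # one half-life -> 0.5
print(decay_factor(3600))  # two half-lives -> 0.25
```

With the default half-life of 1800 seconds, a fact loses half its weight every 30 minutes unless it is reinforced.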
    def add(self, kind: str, content: str, score: float = 1.0):
        self.items.append(MemoryItem(kind, content, score))

    def search(self, query: str, topk=3):
        scored = []
        for it in self.items:
            decay = self._decay_factor(it)
            # Word-overlap similarity between the query and the memory
            sim = len(set(query.lower().split()) & set(it.content.lower().split()))
            final = (it.score * decay) + sim
            scored.append((final, it))
        scored.sort(key=lambda x: x[0], reverse=True)
        return [it for s, it in scored[:topk] if s > 0]

    def cleanup(self, min_score=0.1):
        # Drop memories whose decayed score has fallen below the threshold
        new = []
        for it in self.items:
            if it.score * self._decay_factor(it) > min_score:
                new.append(it)
        self.items = new
We add methods for inserting, searching, and cleaning old memories. A simple word-overlap similarity measure and a decay-based cleanup routine let the agent retain relevant information while automatically discarding weak or outdated facts.
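The word-overlap similarity used inside search can be sketched on its own; the helper name `overlap` and the sample strings below are illustrative:

```python
# Word-overlap similarity as used by MemoryStore.search: the number of
# lowercase tokens shared between the query and a stored memory.
def overlap(query: str, content: str) -> int:
    return len(set(query.lower().split()) & set(content.lower().split()))

print(overlap("Recommend what to write next", "I like writing about RAG"))         # 0 shared tokens
print(overlap("topic: cybersecurity news", "User likes cybersecurity articles"))   # 1 ("cybersecurity")
```

Note that this is exact token matching: "write" and "writing" do not match, which is why the stored score and decay factor still matter in the final ranking.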
class Agent:
    def __init__(self, memory: MemoryStore, name="PersonalAgent"):
        self.memory = memory
        self.name = name

    def _llm_sim(self, prompt: str, context: List[str]):
        # Mock language model: rule-based replies conditioned on memory
        base = "OK. "
        if any("prefers short" in c for c in context):
            base = ""
        reply = base + f"I considered {len(context)} past notes. "
        if "summarize" in prompt.lower():
            return reply + "Summary: " + " | ".join(context[:2])
        if "recommend" in prompt.lower():
            if any("cybersecurity" in c for c in context):
                return reply + "Recommended: write more cybersecurity articles."
            if any("rag" in c.lower() for c in context):
                return reply + "Recommended: build an agentic RAG demo next."
            return reply + "Recommended: continue with your last topic."
        return reply + "Here's my response to: " + prompt

    def perceive(self, user_input: str):
        # Extract preferences, topics, and projects from the user's input
        ui = user_input.lower()
        if "i like" in ui or "i prefer" in ui:
            self.memory.add("preference", user_input, 1.5)
        if "topic:" in ui:
            self.memory.add("topic", user_input, 1.2)
        if "project" in ui:
            self.memory.add("project", user_input, 1.0)

    def act(self, user_input: str):
        mems = self.memory.search(user_input, topk=4)
        ctx = [m.content for m in mems]
        answer = self._llm_sim(user_input, ctx)
        self.memory.add("dialog", f"user said: {user_input}", 0.6)
        self.memory.cleanup()
        return answer, ctx
Our intelligent agent uses memory to guide its responses. A mock language model adapts its replies based on stored topics and preferences, while the perceive method lets the agent capture new insights dynamically.
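The keyword routing inside perceive can be summarised as a small rule table. This is an illustrative restatement (the `RULES` list and `classify` helper are not part of the tutorial's code, but the trigger phrases, kinds, and scores mirror it):

```python
# Each rule maps a trigger phrase to a memory kind and a base score,
# mirroring the if-chains in Agent.perceive.
RULES = [("i prefer", "preference", 1.5),
         ("i like", "preference", 1.5),
         ("topic:", "topic", 1.2),
         ("project", "project", 1.0)]

def classify(user_input: str):
    ui = user_input.lower()
    return [(kind, score) for trigger, kind, score in RULES if trigger in ui]

print(classify("I prefer short answers."))          # [('preference', 1.5)]
print(classify("My current project is a RAG bot"))  # [('project', 1.0)]
```

One input can match several rules, so a sentence mentioning both a preference and a project would be stored under both kinds, just as in perceive.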
def evaluate_personalisation(agent: Agent):
    # Compare a memory-informed answer with a cold-start answer
    agent.memory.add("preference", "User likes cybersecurity articles", 1.6)
    q = "Recommend what to write next"
    ans_personal, _ = agent.act(q)
    empty_mem = MemoryStore()
    cold_agent = Agent(empty_mem)
    ans_cold, _ = cold_agent.act(q)
    gain = len(ans_personal) - len(ans_cold)
    return ans_personal, ans_cold, gain
Our agent can now act and assess itself. It recalls past memories and uses them to produce contextual responses, and a small evaluation routine measures how much the remembered context changes the answer.
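The "gain" returned above is a deliberately crude metric, just the character-length difference between the two answers. A standalone sketch (the helper name `personalisation_gain` is illustrative):

```python
# Personalisation gain as used in evaluate_personalisation: the extra
# length of the memory-informed answer over the cold-start answer.
def personalisation_gain(ans_personal: str, ans_cold: str) -> int:
    return len(ans_personal) - len(ans_cold)

print(personalisation_gain(
    "Recommended: write more cybersecurity articles.",
    "Recommended: continue with your last topic."))  # gain of 4 characters
```

A real system would replace this with a task-level metric (click-through, user rating, or answer relevance), but length difference is enough to show that memory changed the output.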
mem = MemoryStore(decay_half_life=60)
agent = Agent(mem)

print("=== Demo: teaching the agent about yourself ===")
inputs = [
    "I prefer short answers.",
    "I like writing about RAG and agentic AI.",
    "Topic: cybersecurity, phishing, APTs.",
    "My current project is to build an agentic RAG Q&A system."
]
for inp in inputs:
    agent.perceive(inp)

print("\n=== Now ask the agent something ===")
user_q = "Recommend what to write next in my blog"
ans, ctx = agent.act(user_q)
print("USER:", user_q)
print("AGENT:", ans)
print("USED MEMORY:", ctx)

print("\n=== Evaluate personalisation benefit ===")
p, c, g = evaluate_personalisation(agent)
print("With memory :", p)
print("Cold start  :", c)
print("Personalisation gain (chars):", g)

print("\n=== Current memory snapshot ===")
for it in agent.memory.items:
    print(f"- {it.kind} | {it.content[:60]}... | score~{round(it.score,2)}")
We run the full demo to watch our agent at work: we feed it user inputs, observe how it suggests personalised actions, and inspect its memory snapshot. Adaptive behaviour emerges before our eyes, proof that persistent memory transforms a simple static script into an intelligent, learning assistant.
We conclude by showing how memory and personalisation make our agent more human-like: able to remember preferences, adapt its plans, and naturally forget outdated details. Simple mechanisms such as retrieval and decay measurably improve the relevance and quality of responses. Persistent memory, we realise, will be the basis of the next generation of agentic AI: systems that learn continuously, personalise experiences intelligently, and maintain context dynamically, all in an offline, fully local setup.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur, he is committed to harnessing the potential of Artificial Intelligence for social good. His most recent venture, Marktechpost, is a machine learning and deep learning news platform that is both technically sound and easily understandable to a broad audience, attracting over 2 million monthly views.

