In this tutorial, we build an Agentic AI system with spaCy, designed to let multiple intelligent agents reason, collaborate, and reflect on their experiences. We walk through the entire pipeline step by step, observing each agent perform tasks using communication, planning, memory, and semantic reasoning. The result is a dynamic multi-agent architecture capable of extracting and understanding entities and contexts, building reasoning chains, and constructing knowledge graphs. Check out the FULL CODES here.
!pip install matplotlib networkx spacy -q
!python -m spacy download en_core_web_sm -q  # small English model, required by spacy.load() below
import spacy
from typing import List, Dict, Any, Optional, Tuple
from dataclasses import dataclass, field
from collections import deque, defaultdict
from enum import Enum
import json
import hashlib
from datetime import datetime
class MessageType(Enum):
    REQUEST = "request"
    RESPONSE = "response"
    BROADCAST = "broadcast"
    QUERY = "query"
@dataclass
class Message:
    sender: str
    receiver: str
    msg_type: MessageType
    content: Dict[str, Any]
    timestamp: float = field(default_factory=lambda: datetime.now().timestamp())
    priority: int = 1

    def get_id(self) -> str:
        return hashlib.md5(f"{self.sender}{self.timestamp}".encode()).hexdigest()[:8]
@dataclass
class AgentTask:
    task_id: str
    task_type: str
    data: Any
    priority: int = 1
    dependencies: List[str] = field(default_factory=list)
    metadata: Dict = field(default_factory=dict)
@dataclass
class Observation:
    state: str
    action: str
    result: Any
    confidence: float
    timestamp: float = field(default_factory=lambda: datetime.now().timestamp())
class WorkingMemory:
    def __init__(self, capacity: int = 10):
        self.capacity = capacity
        self.items = deque(maxlen=capacity)
        self.attention_scores = {}

    def add(self, key: str, value: Any, attention: float = 1.0):
        self.items.append((key, value))
        self.attention_scores[key] = attention

    def recall(self, n: int = 5) -> List[Tuple[str, Any]]:
        sorted_items = sorted(self.items, key=lambda x: self.attention_scores.get(x[0], 0), reverse=True)
        return sorted_items[:n]

    def get(self, key: str) -> Optional[Any]:
        for k, v in self.items:
            if k == key:
                return v
        return None
class EpisodicMemory:
    def __init__(self):
        self.episodes = []
        self.success_patterns = defaultdict(int)

    def store(self, observation: Observation):
        self.episodes.append(observation)
        if observation.confidence > 0.7:
            pattern = f"{observation.state}→{observation.action}"
            self.success_patterns[pattern] += 1

    def query_similar(self, state: str, top_k: int = 3) -> List[Observation]:
        scored = [(obs, self._similarity(state, obs.state)) for obs in self.episodes[-50:]]
        scored.sort(key=lambda x: x[1], reverse=True)
        return [obs for obs, _ in scored[:top_k]]

    def _similarity(self, state1: str, state2: str) -> float:
        words1, words2 = set(state1.split()), set(state2.split())
        if not words1 or not words2:
            return 0.0
        return len(words1 & words2) / len(words1 | words2)
With this, all the basic structures of the agentic system are in place. We import the libraries, define message and task formats, and build both working and episodic memory modules, laying the groundwork for communication, reasoning, and storage.
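Before moving on, the attention-weighted recall idea can be sanity-checked in isolation. Below is a minimal, self-contained sketch of the same mechanism WorkingMemory uses (a bounded deque plus an attention-score map); the keys and scores here are made-up illustration values.

```python
from collections import deque

# A bounded deque plus an attention-score map, mirroring WorkingMemory's
# recall: the oldest item falls off the deque, and recall sorts the
# survivors by their attention score.
items = deque(maxlen=3)
scores = {}

for key, value, attention in [("a", 1, 0.2), ("b", 2, 0.9), ("c", 3, 0.5), ("d", 4, 0.7)]:
    items.append((key, value))
    scores[key] = attention

top = sorted(items, key=lambda kv: scores.get(kv[0], 0), reverse=True)
print(top)  # [('b', 2), ('d', 4), ('c', 3)] -- "a" was evicted by maxlen=3
```

The maxlen bound models the limited capacity of working memory, while the score map lets high-attention items surface first regardless of insertion order.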
class ReflectionModule:
    def __init__(self):
        self.performance_log = []

    def reflect(self, task_type: str, confidence: float, result: Any) -> Dict[str, Any]:
        self.performance_log.append({'task': task_type, 'confidence': confidence, 'timestamp': datetime.now().timestamp()})
        recent = [p for p in self.performance_log if p['task'] == task_type][-5:]
        avg_conf = sum(p['confidence'] for p in recent) / len(recent) if recent else 0.5
        insights = {
            'performance_trend': 'improving' if confidence > avg_conf else 'declining',
            'avg_confidence': avg_conf,
            'recommendation': self._get_recommendation(confidence, avg_conf)
        }
        return insights

    def _get_recommendation(self, current: float, average: float) -> str:
        # Simple heuristic: flag runs that deviate noticeably from the average.
        if current < average - 0.2:
            return 'revise_approach'
        elif current > average + 0.2:
            return 'maintain_strategy'
        return 'continue_monitoring'

class AdvancedAgent:
    def __init__(self, name: str, specialty: str, nlp):
        self.name = name
        self.specialty = specialty
        self.nlp = nlp
        self.working_memory = WorkingMemory()
        self.episodic_memory = EpisodicMemory()
        self.reflector = ReflectionModule()
        self.message_queue = []
        self.collaboration_graph = defaultdict(int)

    def plan(self, task: AgentTask) -> List[str]:
        similar = self.episodic_memory.query_similar(str(task.data))
        if similar and similar[0].confidence > 0.7:
            return [similar[0].action]
        return self._default_plan(task)

    def _default_plan(self, task: AgentTask) -> List[str]:
        return ['analyze', 'extract', 'validate']

    def send_message(self, receiver: str, msg_type: MessageType, content: Dict):
        msg = Message(self.name, receiver, msg_type, content)
        self.message_queue.append(msg)
        return msg

    def receive_message(self, message: Message):
        self.message_queue.append(message)
        self.collaboration_graph[message.sender] += 1

    def process(self, task: AgentTask) -> Dict[str, Any]:
        raise NotImplementedError
class CognitiveEntityAgent(AdvancedAgent):
    def process(self, task: AgentTask) -> Dict[str, Any]:
        doc = self.nlp(task.data)
        entities = defaultdict(list)
        entity_contexts = []
        for ent in doc.ents:
            context_start = max(0, ent.start - 5)
            context_end = min(len(doc), ent.end + 5)
            context = doc[context_start:context_end].text
            entities[ent.label_].append(ent.text)
            entity_contexts.append({'entity': ent.text, 'type': ent.label_, 'context': context, 'position': (ent.start_char, ent.end_char)})
        for ent_type, ents in entities.items():
            attention = len(ents) / len(doc.ents) if doc.ents else 0
            self.working_memory.add(f"entities_{ent_type}", ents, attention)
        confidence = min(len(entities) / 4, 1.0) if entities else 0.3
        obs = Observation(state=f"entity_extraction_{len(doc)}tokens", action="extract_with_context", result=len(entity_contexts), confidence=confidence)
        self.episodic_memory.store(obs)
        reflection = self.reflector.reflect('entity_extraction', confidence, entities)
        return {'entities': dict(entities), 'contexts': entity_contexts, 'confidence': confidence, 'reflection': reflection, 'next_actions': ['semantic_analysis', 'knowledge_graph'] if confidence > 0.5 else []}
The base class and reflection engine give every agent planning and memory abilities. Next, we implement the Cognitive Entity Agent, which extracts context-rich entities from text and stores meaningful observations, and we watch it adapt to and learn from its experience.
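To see what the ±5-token context window does without loading a spaCy model, here is a pure-Python sketch; the token list and span indices are hand-made stand-ins for a spaCy `Doc` and an entity's `ent.start`/`ent.end`.

```python
# Pure-Python stand-in for the entity agent's context window: take up to
# five tokens on either side of an entity span, clamped to the document
# bounds (spaCy Doc slicing works the same way on token indices).
tokens = "Reports say OpenAI is led by Sam Altman in San Francisco .".split()

def context_window(tokens, start, end, width=5):
    lo = max(0, start - width)          # clamp at the document start
    hi = min(len(tokens), end + width)  # clamp at the document end
    return " ".join(tokens[lo:hi])

# Pretend "Sam Altman" is an entity occupying token indices [6, 8).
print(context_window(tokens, 6, 8))
```

The clamping matters at sentence boundaries: an entity in the first five tokens simply gets a shorter left context instead of a negative slice.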
class SemanticReasoningAgent(AdvancedAgent):
    def process(self, task: AgentTask) -> Dict[str, Any]:
        doc = self.nlp(task.data)
        reasoning_chains = []
        for sent in doc.sents:
            chain = self._extract_reasoning_chain(sent)
            if chain:
                reasoning_chains.append(chain)
        entity_memory = self.working_memory.recall(3)
        semantic_clusters = self._cluster_by_semantics(doc)
        confidence = min(len(reasoning_chains) / 3, 1.0) if reasoning_chains else 0.4
        obs = Observation(state=f"semantic_analysis_{len(list(doc.sents))}sents", action="reason_and_cluster", result=len(reasoning_chains), confidence=confidence)
        self.episodic_memory.store(obs)
        return {'reasoning_chains': reasoning_chains, 'semantic_clusters': semantic_clusters, 'memory_context': entity_memory, 'confidence': confidence, 'next_actions': ['knowledge_integration']}
    def _extract_reasoning_chain(self, sent) -> Optional[Dict]:
        subj, verb, obj = None, None, None
        for token in sent:
            if token.dep_ == 'nsubj':
                subj = token
            elif token.pos_ == 'VERB':
                verb = token
            elif token.dep_ in ['dobj', 'attr', 'pobj']:
                obj = token
        if subj and verb and obj:
            return {'subject': subj.text, 'predicate': verb.lemma_, 'object': obj.text, 'confidence': 0.8}
        return None
    def _cluster_by_semantics(self, doc) -> List[Dict]:
        clusters = []
        nouns = [token for token in doc if token.pos_ in ['NOUN', 'PROPN']]
        visited = set()
        for noun in nouns:
            if noun.i in visited:
                continue
            cluster = [noun.text]
            visited.add(noun.i)
            for other in nouns:
                if other.i not in visited:
                    if noun.similarity(other) > 0.5:
                        cluster.append(other.text)
                        visited.add(other.i)
            if len(cluster) > 1:
                clusters.append({'concepts': cluster, 'size': len(cluster)})
        return clusters
The Semantic Reasoning Agent analyzes sentences, builds reasoning chains, and groups concepts by semantic similarity, integrating working memory to deepen the agent's understanding. Here we see the agent move from surface-level extraction to deeper comprehension. See the FULL CODES here.
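The subject-verb-object scan can be illustrated without a parser. In the sketch below, hand-labeled `(text, pos, dep)` triples stand in for spaCy tokens, so the labels are assumptions rather than real parser output; the scan logic itself is the same as in `_extract_reasoning_chain`.

```python
# Hand-labeled (text, pos, dep) triples standing in for spaCy tokens,
# run through the same one-pass subject/verb/object scan as above.
sent = [
    ("OpenAI", "PROPN", "nsubj"),
    ("develops", "VERB", "ROOT"),
    ("language", "NOUN", "compound"),  # compound modifier: ignored by the scan
    ("models", "NOUN", "dobj"),
]

subj = verb = obj = None
for text, pos, dep in sent:
    if dep == "nsubj":
        subj = text
    elif pos == "VERB":
        verb = text
    elif dep in ("dobj", "attr", "pobj"):
        obj = text

chain = {"subject": subj, "predicate": verb, "object": obj} if subj and verb and obj else None
print(chain)  # {'subject': 'OpenAI', 'predicate': 'develops', 'object': 'models'}
```

Note the scan keeps only the last match for each slot, so a sentence with several objects yields the final one; that is a deliberate simplification in this tutorial's heuristic.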
class KnowledgeGraphAgent(AdvancedAgent):
    def process(self, task: AgentTask) -> Dict[str, Any]:
        doc = self.nlp(task.data)
        graph = {'nodes': set(), 'edges': []}
        for sent in doc.sents:
            entities = list(sent.ents)
            if len(entities) >= 2:
                for ent in entities:
                    graph['nodes'].add((ent.text, ent.label_))
                root = sent.root
                if root.pos_ == 'VERB':
                    for i in range(len(entities) - 1):
                        graph['edges'].append({'from': entities[i].text, 'relation': root.lemma_, 'to': entities[i + 1].text, 'sentence': sent.text[:100]})
        graph['nodes'] = list(graph['nodes'])
        confidence = min(len(graph['edges']) / 5, 1.0) if graph['edges'] else 0.3
        obs = Observation(state=f"knowledge_graph_{len(graph['nodes'])}nodes", action="construct_graph", result=len(graph['edges']), confidence=confidence)
        self.episodic_memory.store(obs)
        return {'graph': graph, 'node_count': len(graph['nodes']), 'edge_count': len(graph['edges']), 'confidence': confidence, 'next_actions': []}
class MetaController:
    def __init__(self):
        self.nlp = spacy.load('en_core_web_sm')
        self.agents = {
            'cognitive_entity': CognitiveEntityAgent('CognitiveEntity', 'entity_analysis', self.nlp),
            'semantic_reasoning': SemanticReasoningAgent('SemanticReasoner', 'reasoning', self.nlp),
            'knowledge_graph': KnowledgeGraphAgent('KnowledgeBuilder', 'graph_construction', self.nlp)
        }
        self.task_history = []
        self.global_memory = WorkingMemory(capacity=20)
    def execute_with_planning(self, text: str) -> Dict[str, Any]:
        initial_task = AgentTask(task_id="task_001", task_type="cognitive_entity", data=text, metadata={'source': 'user_input'})
        results = {}
        task_queue = [initial_task]
        iterations = 0
        max_iterations = 10
        # Map the next_actions labels agents emit onto agent registry keys.
        action_map = {'semantic_analysis': 'semantic_reasoning', 'knowledge_graph': 'knowledge_graph', 'knowledge_integration': 'knowledge_graph'}
        while task_queue and iterations < max_iterations:
            task = task_queue.pop(0)
            iterations += 1
            if task.task_type not in self.agents:
                continue
            agent = self.agents[task.task_type]
            agent.plan(task)
            result = agent.process(task)
            results[task.task_type] = result
            self.task_history.append(task)
            self.global_memory.add(task.task_type, result['confidence'], result['confidence'])
            # Enqueue follow-up tasks suggested by the agent, avoiding duplicates.
            for action in result.get('next_actions', []):
                next_type = action_map.get(action)
                if next_type and next_type not in results and all(t.task_type != next_type for t in task_queue):
                    task_queue.append(AgentTask(task_id=f"task_{iterations + 1:03d}", task_type=next_type, data=text))
        return results

    def generate_insights(self, results: Dict) -> str:
        report = "=" * 70 + "\n"
        report += " ADVANCED AGENTIC AI SYSTEM - ANALYSIS REPORT\n"
        report += "=" * 70 + "\n\n"
        for agent_type, result in results.items():
            agent = self.agents[agent_type]
            report += f"🤖 {agent.name}\n"
            report += f"   Specialty: {agent.specialty}\n"
            report += f"   Confidence: {result['confidence']:.2%}\n"
            if 'reflection' in result:
                report += f"   Performance: {result['reflection'].get('performance_trend', 'N/A')}\n"
            report += "   Key Findings:\n"
            report += json.dumps({k: v for k, v in result.items() if k not in ['reflection', 'next_actions']}, indent=6, default=str) + "\n\n"
        report += "📊 System-Level Insights:\n"
        report += f"   Total iterations: {len(self.task_history)}\n"
        report += f"   Active agents: {len(results)}\n"
        report += f"   Global memory size: {len(self.global_memory.items)}\n"
        return report
The Knowledge Graph Agent connects entities by extracting relations from text. We then build the Meta-Controller, which manages planning and multi-step execution and coordinates the agents, letting the system behave like a real multi-agent flow-control pipeline.
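Stripped of the NLP work, the controller's planning loop reduces to a queue of task types and a results map. The stub agents below are hypothetical stand-ins used only to show the control flow: bounded iteration, dispatch by task type, and enqueueing the follow-ups each agent suggests.

```python
# Stub agents standing in for the real ones: each returns a confidence
# score and the follow-up actions it wants the controller to schedule.
def entity_stub(text):
    return {"confidence": 0.8, "next_actions": ["semantic_reasoning"]}

def reasoning_stub(text):
    return {"confidence": 0.6, "next_actions": []}

agents = {"cognitive_entity": entity_stub, "semantic_reasoning": reasoning_stub}
queue, results, iterations = ["cognitive_entity"], {}, 0

while queue and iterations < 10:  # bounded, like max_iterations above
    task_type = queue.pop(0)
    iterations += 1
    result = agents[task_type]("sample text")
    results[task_type] = result
    # Schedule follow-ups, skipping agents that have already run.
    for action in result["next_actions"]:
        if action in agents and action not in results:
            queue.append(action)

print(sorted(results))  # ['cognitive_entity', 'semantic_reasoning']
```

The "already in results" guard is what keeps the pipeline from looping when two agents each recommend the other, and the iteration cap is a second safety net.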
if __name__ == "__main__":
    sample_text = """
    OpenAI and DeepMind researchers have developed advanced artificial
    intelligence language models. OpenAI is led by Sam Altman in San Francisco.
    Demis Hassabis heads DeepMind in London. They collaborate with
    universities like MIT and Stanford. Their research focuses on machine
    learning, reinforcement learning, and neural networks. The transformer
    breakthrough revolutionized natural language processing in 2017.
    """
    controller = MetaController()
    results = controller.execute_with_planning(sample_text)
    print(controller.generate_insights(results))
    print("Advanced multi-agent analysis complete with reflection and learning!")
Finally, we run the entire system end to end on a text sample: the plan is executed, each agent is called in order, and an analysis report is generated, letting us watch the multi-agent architecture work in real time.
In conclusion, we have used spaCy to create a multi-agent reasoning framework that works on real-world text, integrating learning, planning, and memory into a seamless workflow. The Meta-Controller orchestrates each agent's unique understanding into rich insights, and the extensibility and flexibility of the agentic design leave it well placed to scale to larger datasets and more complex tasks.
Asif Razzaq is the CEO of Marktechpost Media Inc. An entrepreneur with a passion for harnessing Artificial Intelligence to benefit society, his latest venture is Marktechpost, an AI media platform known for in-depth coverage of machine learning and deep learning that is technically sound yet accessible to a broad audience. The platform draws over 2 million monthly views, a testament to its popularity.

