In this tutorial, we build an advanced agentic AI system using the control-plane design pattern. The control plane is the central orchestrator that enforces safety and reasoning rules. We set up a mini retrieval system, define modular tools, and then integrate an agentic reasoning layer that plans and executes dynamically. The result is a system that behaves like an AI tutor: it retrieves knowledge, assesses understanding, updates learner profiles, and logs every interaction through a single, scalable architecture. Check out the FULL CODES here.
import subprocess
import sys

def install_deps():
    deps = ['anthropic', 'numpy', 'scikit-learn']
    for dep in deps:
        subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-q', dep])

try:
    import anthropic
except ImportError:
    install_deps()
    import anthropic

import json
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from dataclasses import dataclass, asdict
from typing import List, Dict, Any, Optional
from datetime import datetime
@dataclass
class Document:
    id: str
    content: str
    metadata: Dict[str, Any]
    embedding: Optional[np.ndarray] = None
class SimpleRAGRetriever:
    def __init__(self):
        self.documents = self._init_knowledge_base()

    def _init_knowledge_base(self) -> List[Document]:
        docs = [
            Document("cs101", "Python basics: Variables store data. Use x=5 for integers, name='Alice' for strings. Print with print().", {"topic": "python", "level": "beginner"}),
            Document("cs102", "Functions encapsulate reusable code. Define with def func_name(params): and call with func_name(args).", {"topic": "python", "level": "intermediate"}),
            Document("cs103", "Object-oriented programming uses classes. class MyClass: defines structure, __init__ initializes instances.", {"topic": "python", "level": "advanced"}),
            Document("math101", "Linear algebra: Vectors are ordered lists of numbers. Matrix multiplication combines transformations.", {"topic": "math", "level": "intermediate"}),
            Document("ml101", "Machine learning trains models on data to make predictions. Supervised learning uses labeled examples.", {"topic": "ml", "level": "beginner"}),
            Document("ml102", "Neural networks are composed of layers. Each layer applies weights and activation functions to transform inputs.", {"topic": "ml", "level": "advanced"}),
        ]
        # Mock embeddings: each document gets a random vector with one boosted band,
        # so documents stay distinguishable under cosine similarity.
        for i, doc in enumerate(docs):
            doc.embedding = np.random.rand(128)
            doc.embedding[i*20:(i+1)*20] += 2
        return docs

    def retrieve(self, query: str, top_k: int = 2) -> List[Document]:
        # Mock query embedding; a real system would embed the query text.
        query_embedding = np.random.rand(128)
        scores = [cosine_similarity([query_embedding], [doc.embedding])[0][0] for doc in self.documents]
        top_indices = np.argsort(scores)[-top_k:][::-1]
        return [self.documents[i] for i in top_indices]
We then import all the libraries and create the knowledge-base data structure. To perform a similarity search, we generate mock embeddings and define a simple retriever. This block prepares everything for retrieval-driven reasoning.
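The ranking step at the heart of the retriever can be isolated in a few lines. This is a minimal standalone sketch, not part of the tutorial's classes: embeddings are random (as in the tutorial's mock setup), so the actual ranking is illustrative only.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
doc_embeddings = [rng.random(128) for _ in range(4)]  # four mock "documents"
query_embedding = rng.random(128)

# Score every document against the query, then keep the top-k indices,
# highest-scoring first (argsort is ascending, hence the slice-and-reverse).
scores = [cosine_similarity([query_embedding], [e])[0][0] for e in doc_embeddings]
top_k = 2
top_indices = np.argsort(scores)[-top_k:][::-1]
print(top_indices, [round(scores[i], 3) for i in top_indices])
```

Swapping the random vectors for real sentence embeddings is the only change needed to make this a genuine semantic search.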
class ToolRegistry:
    def __init__(self, retriever: SimpleRAGRetriever):
        self.retriever = retriever
        self.interaction_log = []
        self.user_state = {"level": "beginner", "topics_covered": []}

    def search_knowledge(self, query: str, filters: Optional[Dict] = None) -> Dict:
        docs = self.retriever.retrieve(query, top_k=2)
        if filters:
            docs = [d for d in docs if all(d.metadata.get(k) == v for k, v in filters.items())]
        return {
            "tool": "search_knowledge",
            "results": [{"content": d.content, "metadata": d.metadata} for d in docs],
            "count": len(docs)
        }

    def assess_understanding(self, topic: str) -> Dict:
        questions = {
            "python": ["What keyword defines a function?", "How do you create a variable?"],
            "ml": ["What is supervised learning?", "Name two types of ML algorithms."],
            "math": ["What is a vector?", "Explain matrix multiplication."]
        }
        return {
            "tool": "assess_understanding",
            "topic": topic,
            "questions": questions.get(topic, ["General comprehension check."])
        }

    def update_learner_profile(self, topic: str, level: str) -> Dict:
        if topic not in self.user_state["topics_covered"]:
            self.user_state["topics_covered"].append(topic)
        self.user_state["level"] = level
        return {
            "tool": "update_learner_profile",
            "status": "updated",
            "profile": self.user_state.copy()
        }

    def log_interaction(self, event: str, details: Dict) -> Dict:
        log_entry = {
            "timestamp": datetime.now().isoformat(),
            "event": event,
            "details": details
        }
        self.interaction_log.append(log_entry)
        return {"tool": "log_interaction", "status": "logged", "entry_id": len(self.interaction_log)}
Our agent interacts with the system through the tool registry we build here. We define tools for knowledge search, assessment, profile updates, and logging, and we maintain a persistent user-state dictionary. Each tool in this layer is a modular capability that the control plane can route to.
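The core of the pattern is that tools are plain methods mutating shared state, which persists across calls. The stripped-down `MiniRegistry` below is a hypothetical stand-in (not the tutorial's `ToolRegistry`) showing just the stateful profile-update tool:

```python
from typing import Any, Dict

class MiniRegistry:
    """Simplified registry: one tool plus persistent learner state."""
    def __init__(self):
        self.user_state = {"level": "beginner", "topics_covered": []}

    def update_learner_profile(self, topic: str, level: str) -> Dict[str, Any]:
        # Append only on first encounter, so repeated calls don't duplicate topics.
        if topic not in self.user_state["topics_covered"]:
            self.user_state["topics_covered"].append(topic)
        self.user_state["level"] = level
        return {"status": "updated", "profile": dict(self.user_state)}

reg = MiniRegistry()
out = reg.update_learner_profile("python", "intermediate")
print(out["profile"])  # level advanced to "intermediate", "python" recorded once
```

Because every tool returns a plain dict, results compose uniformly no matter which tool the control plane routed to.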
class ControlPlane:
    def __init__(self, tool_registry: ToolRegistry):
        self.tools = tool_registry
        self.safety_rules = {
            "max_tools_per_request": 4,
            "allowed_tools": ["search_knowledge", "assess_understanding",
                              "update_learner_profile", "log_interaction"]
        }
        self.execution_log = []

    def execute(self, plan: Dict[str, Any]) -> Dict[str, Any]:
        if not self._validate_request(plan):
            return {"error": "Safety validation failed", "plan": plan}
        action = plan.get("action")
        params = plan.get("parameters", {})
        result = self._route_and_execute(action, params)
        self.execution_log.append({
            "timestamp": datetime.now().isoformat(),
            "plan": plan,
            "result": result
        })
        return {
            "success": True,
            "action": action,
            "result": result,
            "metadata": {
                "execution_count": len(self.execution_log),
                "safety_checks_passed": True
            }
        }

    def _validate_request(self, plan: Dict) -> bool:
        action = plan.get("action")
        if action not in self.safety_rules["allowed_tools"]:
            return False
        if len(self.execution_log) >= 100:
            return False
        return True

    def _route_and_execute(self, action: str, params: Dict) -> Any:
        tool_map = {
            "search_knowledge": self.tools.search_knowledge,
            "assess_understanding": self.tools.assess_understanding,
            "update_learner_profile": self.tools.update_learner_profile,
            "log_interaction": self.tools.log_interaction
        }
        tool_func = tool_map.get(action)
        if tool_func:
            return tool_func(**params)
        return {"error": f"Unknown action: {action}"}
The control plane we implement orchestrates tool execution, checks safety rules, and manages permissions. Every request is validated, actions are routed to the appropriate tool, and an execution log is kept for transparency. The control plane thereby becomes the component that governs agentic behavior, keeping it predictable and safe.
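The validation logic reduces to two checks: an allowlist on action names and a budget cap on total executions. Here is that gate in isolation; `validate`, `ALLOWED`, and `MAX_EXECUTIONS` are hypothetical names for this sketch, mirroring `_validate_request` above:

```python
# Allowlist of tool names the control plane will route; anything else is refused.
ALLOWED = {"search_knowledge", "assess_understanding",
           "update_learner_profile", "log_interaction"}
MAX_EXECUTIONS = 100  # hard budget, mirroring the execution-log cap

def validate(plan: dict, executed_so_far: int) -> bool:
    """Return True only for an allowlisted action within the execution budget."""
    return plan.get("action") in ALLOWED and executed_so_far < MAX_EXECUTIONS

print(validate({"action": "search_knowledge"}, 0))   # True
print(validate({"action": "delete_database"}, 0))    # False: not allowlisted
print(validate({"action": "search_knowledge"}, 100)) # False: budget exhausted
```

A deny-by-default allowlist like this is what lets the planner stay flexible: the agent can propose anything, but only vetted actions ever execute.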
class TutorAgent:
    def __init__(self, control_plane: ControlPlane, api_key: str):
        self.control_plane = control_plane
        self.client = anthropic.Anthropic(api_key=api_key)
        self.conversation_history = []

    def teach(self, student_query: str) -> str:
        plan = self._plan_actions(student_query)
        results = []
        for action_plan in plan:
            result = self.control_plane.execute(action_plan)
            results.append(result)
        response = self._synthesize_response(student_query, results)
        self.conversation_history.append({
            "query": student_query,
            "plan": plan,
            "results": results,
            "response": response
        })
        return response
    def _plan_actions(self, query: str) -> List[Dict]:
        plan = []
        query_lower = query.lower()
        if any(kw in query_lower for kw in ["what", "how", "explain", "teach"]):
            plan.append({
                "action": "search_knowledge",
                "parameters": {"query": query},
                "context": {"intent": "knowledge_retrieval"}
            })
        if any(kw in query_lower for kw in ["test", "quiz", "assess", "check"]):
            topic = "python" if "python" in query_lower else "ml"
            plan.append({
                "action": "assess_understanding",
                "parameters": {"topic": topic},
                "context": {"intent": "assessment"}
            })
        plan.append({
            "action": "log_interaction",
            "parameters": {"event": "query_processed", "details": {"query": query}},
            "context": {"intent": "logging"}
        })
        return plan
    def _synthesize_response(self, query: str, results: List[Dict]) -> str:
        response_parts = [f"Student Query: {query}\n"]
        for result in results:
            if result.get("success") and "result" in result:
                tool_result = result["result"]
                if result["action"] == "search_knowledge":
                    response_parts.append("\n📚 Retrieved Knowledge:")
                    for doc in tool_result.get("results", []):
                        response_parts.append(f"  • {doc['content']}")
                elif result["action"] == "assess_understanding":
                    response_parts.append("\n✅ Assessment Questions:")
                    for q in tool_result.get("questions", []):
                        response_parts.append(f"  • {q}")
        return "\n".join(response_parts)
The TutorAgent plans actions, communicates with the control plane, and composes final answers. It analyzes each query, generates a multi-step plan, and combines tool outputs into a meaningful response. This snippet shows the agent coordinating retrieval, assessment, and logging.
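The planning heuristic is simple keyword matching, which is worth seeing on its own. This standalone `plan_actions` function (a hypothetical, class-free version of `_plan_actions`) shows the two intent triggers plus the unconditional logging step:

```python
def plan_actions(query: str) -> list:
    """Map keyword triggers to tool actions; always end with a logging step."""
    plan, q = [], query.lower()
    if any(kw in q for kw in ("what", "how", "explain", "teach")):
        plan.append({"action": "search_knowledge", "parameters": {"query": query}})
    if any(kw in q for kw in ("test", "quiz", "assess", "check")):
        # Crude topic detection: default to "ml" unless Python is mentioned.
        topic = "python" if "python" in q else "ml"
        plan.append({"action": "assess_understanding", "parameters": {"topic": topic}})
    plan.append({"action": "log_interaction", "parameters": {"event": "query_processed"}})
    return plan

print([p["action"] for p in plan_actions("Test my understanding of Python basics")])
# ['assess_understanding', 'log_interaction']
```

In a production system this heuristic would be replaced by an LLM call that emits the same plan schema; the control plane's validation is what makes that swap safe.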
def run_demo():
    print("=" * 70)
    print("Control Plane as a Tool: RAG AI Tutor Demo")
    print("=" * 70)
    API_KEY = "your-api-key-here"
    retriever = SimpleRAGRetriever()
    tool_registry = ToolRegistry(retriever)
    control_plane = ControlPlane(tool_registry)
    print("System initialized")
    print(f"Tools: {len(control_plane.safety_rules['allowed_tools'])}")
    print(f"Knowledge base: {len(retriever.documents)} documents")
    try:
        tutor = TutorAgent(control_plane, API_KEY)
    except Exception:
        print("Mock mode enabled")
        tutor = None
    demo_queries = [
        "Explain Python functions to me",
        "I want to learn about machine learning",
        "Test my understanding of Python basics"
    ]
    for query in demo_queries:
        print("\n--- Query ---")
        if tutor:
            print(tutor.teach(query))
        else:
            plan = [
                {"action": "search_knowledge", "parameters": {"query": query}},
                {"action": "log_interaction", "parameters": {"event": "query", "details": {}}}
            ]
            print(query)
            for action in plan:
                result = control_plane.execute(action)
                print(f"{action['action']}: {result.get('success', False)}")
    print("Summary")
    print(f"Executions: {len(control_plane.execution_log)}")
    print(f"Logs: {len(tool_registry.interaction_log)}")
    print(f"Profile: {tool_registry.user_state}")

if __name__ == "__main__":
    run_demo()
The demo initializes the components, processes sample student queries, and prints system-status summaries. The agent retrieves and logs data while the control plane enforces the rules and records execution history. Completing this block gives a clear picture of the architecture in action.
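Both the real and mock paths ultimately rely on the same routing step: a dict maps action names to callables, so adding a tool means adding one entry rather than a new if/elif branch. A minimal sketch of that dispatch, with hypothetical names standing in for the registry's bound methods:

```python
def search_knowledge(query: str) -> dict:
    # Stand-in for the registry method; a real tool would hit the retriever.
    return {"tool": "search_knowledge", "query": query}

tool_map = {"search_knowledge": search_knowledge}

def route(action: str, params: dict) -> dict:
    """Dispatch an action name to its callable; unknown names return an error dict."""
    func = tool_map.get(action)
    if func is None:
        return {"error": f"Unknown action: {action}"}
    return func(**params)

print(route("search_knowledge", {"query": "functions"}))
print(route("drop_tables", {}))  # gracefully rejected, never raises
```

Returning an error dict instead of raising keeps the agent loop alive even when the planner proposes something the registry does not know.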
We conclude with a better understanding of how the control-plane design simplifies orchestration and strengthens safety by separating reasoning from tool execution. The retrieval layer, the tool registry, and the agentic planner come together into a coherent AI tutor that responds to questions intelligently. Playing with the demo, we observe how the system applies rules and routes tasks while remaining flexible and extensible.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is Marktechpost, an Artificial Intelligence media platform known for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.

