This tutorial demonstrates how to implement an advanced graph-based AI agent using the GraphAgent framework and the Gemini 1.5 Flash model. Each node in the graph is assigned a function: a planner breaks the task down, a router controls the flow, and math and research nodes supply external evidence. Gemini is integrated through a wrapper that handles JSON-structured prompts, while local Python functions provide document search and safe math evaluation. By executing the pipeline from beginning to end, we demonstrate how reasoning, retrieval, and validation can be modularized.
import os, json, time, ast, math, getpass
from dataclasses import dataclass, field
from typing import Any, List, Dict, Callable
import google.generativeai as genai

try:
    import networkx as nx  # optional, used only for graph visualization
except ImportError:
    nx = None
To structure our state, we import the core Python libraries, including dataclasses, typing helpers, and modules for timing and expression evaluation. We also load google.generativeai for access to Gemini and, optionally, NetworkX for graph visualization.
def make_model(api_key: str, model_name: str = "gemini-1.5-flash"):
    genai.configure(api_key=api_key)
    return genai.GenerativeModel(model_name, system_instruction=(
        "You are GraphAgent, a principled planner-executor. "
        "Prefer structured, concise outputs; use provided tools when asked."
    ))

def call_llm(model, prompt: str, temperature=0.2) -> str:
    r = model.generate_content(prompt, generation_config={"temperature": temperature})
    return (r.text or "").strip()
These helpers configure the Gemini model with a system instruction and wrap generate_content so that each call sends a prompt with a controlled temperature. We set up this configuration to make sure our agent gets consistent, well-structured outputs.
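Because model replies often wrap JSON in surrounding prose, the node functions below slice out the span between the first opening brace and the last closing brace before parsing. A minimal, self-contained sketch of that pattern (extract_json is an illustrative helper name of our own, not part of the tutorial's code):

```python
import json

def extract_json(text: str, open_ch: str = "{", close_ch: str = "}"):
    """Parse the first {...} (or [...]) span found in a chatty LLM reply."""
    start, end = text.find(open_ch), text.rfind(close_ch)
    if start == -1 or end == -1:
        raise ValueError("No JSON span found in reply")
    return json.loads(text[start:end + 1])

# Extra prose around the JSON is tolerated:
reply = 'Sure! Here is the plan:\n{"subtasks": ["research", "write"]}\nHope that helps.'
print(extract_json(reply))  # {'subtasks': ['research', 'write']}
```

This is why the tutorial's node_plan and node_research can fall back to a hard-coded default only when no valid JSON span is present at all.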
def safe_eval_math(expr: str) -> str:
    node = ast.parse(expr, mode="eval")
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.Mod,
               ast.USub, ast.UAdd, ast.FloorDiv)
    def check(n):
        if not isinstance(n, allowed):
            raise ValueError("Unsafe expression")
        for c in ast.iter_child_nodes(n):
            check(c)
    check(node)
    return str(eval(compile(node, "<expr>", "eval"), {"__builtins__": {}}, {}))
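To see the allowlist idea in isolation, here is a compact, self-contained variant (tiny_safe_eval is our own name, not the tutorial's function): parse the expression, reject any AST node type outside the whitelist, then evaluate with builtins stripped.

```python
import ast

# Node types we permit: the expression wrapper, arithmetic operators,
# unary signs, and literal constants. Anything else (names, calls,
# attribute access) is rejected before evaluation.
ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
           ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.Mod,
           ast.FloorDiv, ast.USub, ast.UAdd)

def tiny_safe_eval(expr: str):
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            raise ValueError(f"Unsafe node: {type(node).__name__}")
    # Safe: the tree contains only arithmetic, and builtins are removed.
    return eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}}, {})

print(tiny_safe_eval("5*7"))       # 35
print(tiny_safe_eval("(2+3)**2"))  # 25
```

An expression like `__import__('os')` parses to a Call node, which is not in the allowlist, so it raises ValueError before eval ever runs.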
The agent is equipped with two tools: a simple document search, which extracts the most pertinent snippets from a small in-memory corpus, and safe_eval_math, which evaluates arithmetic over a restricted AST. Together they give the agent reliable retrieval and computation capabilities without external dependencies.
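The search_docs implementation itself is not shown in this excerpt, so here is a hypothetical stand-in matching the call sites below: a tiny hard-coded corpus scored by token overlap with the query. The documents and the scoring scheme are our assumptions, not the tutorial's code.

```python
# Hypothetical in-memory corpus (contents are our own placeholder text).
DOCS = [
    "Solar power output varies with daylight and weather conditions.",
    "Wind turbines generate power whenever wind speeds are sufficient.",
    "Grid storage smooths the intermittency of renewable sources.",
]

def search_docs(query: str, k: int = 2):
    """Return the k documents sharing the most tokens with the query."""
    q_tokens = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(q_tokens & set(d.lower().split())))
    return scored[:k]

print(search_docs("solar power weather", k=1))
```

Any retrieval backend with the same `(query, k) -> list[str]` signature would slot into node_research unchanged.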
@dataclass
class State:
    task: str
    plan: str = ""
    scratch: List[str] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)
    result: str = ""
    step: int = 0
    done: bool = False
def node_plan(state: State, model) -> str:
    prompt = f"""Plan step-by-step to solve the user's task.
Task: {state.task}
Return JSON: {{"subtasks": ["..."], "tools": {{"search": true/false, "math": true/false}}, "success_criteria": ["..."]}}"""
    js = call_llm(model, prompt)
    try:
        plan = json.loads(js[js.find("{"): js.rfind("}")+1])
    except Exception:
        plan = {"subtasks": ["Research", "Synthesize"], "tools": {"search": True, "math": False}, "success_criteria": ["clear answer"]}
    state.plan = json.dumps(plan, indent=2)
    state.scratch.append("PLAN:\n" + state.plan)
    return "route"
def node_route(state: State, model) -> str:
    prompt = f"""You are a router. Choose the next node.
Context scratch:\n{chr(10).join(state.scratch[-5:])}
If math is needed -> 'math'; if research is needed -> 'research'; if ready -> 'write'.
Return one token from [research, math, write]. Task: {state.task}"""
    choice = call_llm(model, prompt).lower()
    if "math" in choice and any(ch.isdigit() for ch in state.task):
        return "math"
    if "research" in choice and not state.evidence:
        return "research"
    return "write"
def node_research(state: State, model) -> str:
    prompt = f"""Generate 3 targeted search queries for:
Task: {state.task}
Return them as a JSON list of strings."""
    qjson = call_llm(model, prompt)
    try:
        queries = json.loads(qjson[qjson.find("["): qjson.rfind("]")+1])[:3]
    except Exception:
        queries = [state.task, "background " + state.task, "pros cons " + state.task]
    hits = []
    for q in queries:
        hits.extend(search_docs(q, k=2))
    state.evidence.extend(list(dict.fromkeys(hits)))
    state.scratch.append("EVIDENCE:\n- " + "\n- ".join(hits))
    return "route"
def node_math(state: State, model) -> str:
    prompt = "Extract a single arithmetic expression from this task:\n" + state.task
    expr = call_llm(model, prompt)
    expr = "".join(ch for ch in expr if ch in "0123456789+-*/().%^ ")
    try:
        val = safe_eval_math(expr)
        state.scratch.append(f"MATH: {expr} = {val}")
    except Exception as e:
        state.scratch.append(f"MATH-ERROR: {expr} ({e})")
    return "route"
def node_write(state: State, model) -> str:
    prompt = f"""Write the final answer.
Task: {state.task}
Cite the math and evidence below inline as [1],[2].
Evidence:\n{chr(10).join(f'[{i+1}] ' + e for i, e in enumerate(state.evidence))}
Notes:\n{chr(10).join(state.scratch[-5:])}
Keep the answer short and structured."""
    draft = call_llm(model, prompt, temperature=0.3)
    state.result = draft
    state.scratch.append("DRAFT:\n" + draft)
    return "critic"
def node_critic(state: State, model) -> str:
    prompt = f"""Critique the answer for factuality and missing steps.
If a fix is needed, return the improved answer; otherwise return 'OK'.
Answer:\n{state.result}\nCriteria:\n{state.plan}"""
    crit = call_llm(model, prompt)
    if crit.strip().upper() != "OK" and len(crit) > 30:
        state.result = crit.strip()
        state.scratch.append("REVISED")
    state.done = True
    return "end"
NODES: Dict[str, Callable[[State, Any], str]] = {
    "plan": node_plan, "route": node_route, "research": node_research,
    "math": node_math, "write": node_write, "critic": node_critic
}
def run_graph(task: str, api_key: str) -> State:
    model = make_model(api_key)
    state = State(task=task)
    cur = "plan"
    max_steps = 12
    while not state.done and state.step < max_steps:
        nxt = NODES[cur](state, model)
        state.step += 1
        if nxt == "end" or nxt not in NODES:
            break
        cur = nxt
    return state

def ascii_graph():
    return "START -> plan -> route -> (research <-> route) & (math <-> route) -> write -> critic -> END"
To execute the graph, we define a State dataclass whose typed fields hold the task, the plan, the evidence, the scratch notes, and the control flags. We then implement the node functions (planner, router, research, math, writer, critic); each mutates the state and returns the label of the next node. We register them in NODES, and run_graph iterates over them until the graph is finished. We also expose ascii_graph() to visualize the control flow as it moves between math and research and ends with a final critique.
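The dispatch pattern in run_graph reduces to a few lines. Here is a self-contained sketch with stub nodes in place of Gemini calls, to show how label-returning node functions drive the loop (all names here are illustrative, not the tutorial's exact code):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MiniState:
    task: str
    log: List[str] = field(default_factory=list)
    done: bool = False

# Stub nodes: each mutates the state, then returns the next node's label.
def plan(s: MiniState) -> str:   s.log.append("planned");  return "work"
def work(s: MiniState) -> str:   s.log.append("worked");   return "finish"
def finish(s: MiniState) -> str: s.log.append("finished"); s.done = True; return "end"

NODES: Dict[str, Callable[[MiniState], str]] = {
    "plan": plan, "work": work, "finish": finish
}

def run(task: str, max_steps: int = 10) -> MiniState:
    state, cur = MiniState(task), "plan"
    while not state.done and max_steps > 0:
        cur = NODES[cur](state)  # dispatch by label, exactly like run_graph
        max_steps -= 1
        if cur not in NODES:
            break
    return state

print(run("demo").log)  # ['planned', 'worked', 'finished']
```

The step budget serves the same purpose as max_steps in run_graph: even if a node keeps routing back to itself, the loop terminates.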
if __name__ == "__main__":
    key = os.getenv("GEMINI_API_KEY") or getpass.getpass("🔐 Enter GEMINI_API_KEY: ")
    task = input("📝 Enter your task: ").strip() or "Compare solar vs wind for reliability; compute 5*7."
    t0 = time.time()
    state = run_graph(task, key)
    dt = time.time() - t0
    print("\n=== GRAPH ===", ascii_graph())
    print(f"\n✅ Result in {dt:.2f}s:\n{state.result}\n")
    print("---- Evidence ----")
    print("\n".join(state.evidence))
    print("\n---- Scratch (last 5) ----")
    print("\n".join(state.scratch[-5:]))
In the program's entry point, we read the Gemini API key, run run_graph on the user's task, and measure execution time. The program then prints the ASCII workflow graph, the final result, the supporting evidence, and the last scratch notes for transparency.
Finally, we have shown how a graph-structured agent enables deterministic workflows around an LLM. The planner and router nodes enforce task decomposition, Gemini serves as the main reasoning engine, and the graph nodes provide structure, safety checks, and transparent state management. The final agent is fully functional and demonstrates how graph orchestration combined with an LLM can support extensions such as multi-turn memory or parallel node execution in more complex deployments.
Asif Razzaq serves as the CEO at Marktechpost Media Inc. As an entrepreneur, Asif has a passion for harnessing Artificial Intelligence to benefit society. Marktechpost was his most recent venture. This platform, which specializes in covering machine learning and deep-learning news, is highly regarded for being both technically correct and understandable to a broad audience. Over 2 million views per month are a testament to the platform’s popularity.

