This tutorial shows you how to build a production-style, agentic workflow with GraphBit. It demonstrates that graph-structured workflows, LLM-driven agents, and tool calls can all coexist within a single, integrated system. We first initialize and inspect the GraphBit runtime, then create a customer support ticket domain using typed data and deterministic tools that can be executed offline. The tutorial shows how these tools are composed into a robust, rule-based pipeline for classification, routing, and response drafting, and then elevates that logic into a validated GraphBit Workflow in which agent nodes orchestrate the same tools via a directed graph. Throughout, the system is kept in offline mode but can be promoted to online execution with an LLM configuration, showing how GraphBit enables agentic intelligence without compromising reproducibility or operational control. Check out the Full Codes here.
!pip install numpy rich graphbit
import os
import time
import json
import random
from dataclasses import dataclass
from typing import List, Optional, Dict, Any
import numpy as np
from rich import print as rprint
from rich.panel import Panel
from rich.table import Table
To begin, install the dependencies and import the numerical, visualization, and core Python libraries required for this tutorial, so the Google Colab runtime stays self-contained. Check out the Full Codes here.
from graphbit import init, configure_runtime, shutdown, get_system_info, health_check, version
from graphbit import Node, Workflow, LlmConfig, Executor
from graphbit import tool, ToolExecutor, ExecutorConfig
From graphbit import clear_tools, get_tool_registry
configure_runtime(worker_threads=4, max_blocking_threads=8, thread_stack_size_mb=2)
init(log_level="warn", enable_tracing=False, debug=False)
info = get_system_info()
health = health_check()
sys_table = Table(title="System Info / Health")
sys_table.add_column("Key", style="bold")
sys_table.add_column("Value")
for k in ["version", "python_binding_version", "cpu_count", "runtime_worker_threads", "runtime_initialized", "build_target", "build_profile"]:
    sys_table.add_row(k, str(info.get(k)))
sys_table.add_row("graphbit_version()", str(version()))
sys_table.add_row("overall_healthy", str(health.get("overall_healthy")))
rprint(sys_table)
We initialize GraphBit and configure its runtime parameters explicitly to control threading and resource consumption. We then query the system metadata and run a health check to confirm that the runtime is initialized correctly. Check out the Full Codes here.
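As an optional safeguard (not part of the original notebook), the health result can gate the rest of the run; this minimal sketch assumes only the overall_healthy key already printed above.
# Optional guard (assumption: "overall_healthy" is the only key we rely on)
if not health.get("overall_healthy", False):
    raise RuntimeError("GraphBit runtime reported an unhealthy state; stopping before building workflows.")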
@dataclass
class Ticket:
    ticket_id: str
    user_id: str
    text: str
    created_at: float
def make_tickets(n: int = 10) -> List[Ticket]:
seeds = [
"My card payment failed twice, what should I do?",
"I want to cancel my subscription immediately.",
"Your app crashes when I open the dashboard.",
"Please update my email address on the account.",
"Refund not received after 7 days.",
"My delivery is delayed and tracking is stuck.",
"I suspect fraudulent activity on my account.",
"How can I change my billing cycle date?",
"The website is very slow and times out.",
"I forgot my password and cannot login.",
"Chargeback process details please.",
"Need invoice for last month's payment."
]
random.shuffle(seeds)
out = []
for i in range(n):
out.append(
Ticket(
ticket_id=f"T-{1000+i}",
user_id=f"U-{random.randint(100,999)}",
text=seeds[i % len(seeds)],
created_at=time.time() - random.randint(0, 7 * 24 * 3600),
)
)
return out
tickets = make_tickets(10)
rprint(Panel.fit("n".join([f"- {t.ticket_id}: {t.text}" for t in tickets]), title="Sample Tickets"))
We define a strongly typed data model and generate a dataset that mimics real customer problems, including timestamps to mirror production inputs. This dataset serves as the shared input for both the offline pipeline and the agent-driven workflow. Check out the Full Codes here.
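Because Ticket is a plain dataclass, the batch can also be serialized with the standard library alone, which is handy for logging or replaying the exact same inputs later; a small optional sketch:
from dataclasses import asdict

# Serialize the typed tickets into JSON-friendly dicts for logging or replay
tickets_json = json.dumps([asdict(t) for t in tickets], indent=2)
rprint(tickets_json[:400])  # preview the first few records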
clear_tools()
@tool(_description="Classify a support ticket into a coarse category.")
def classify_ticket(text: str) -> Dict[str, Any]:
t = text.lower()
if "fraud" in t or "fraudulent" in t:
    return {"category": "fraud", "priority": "p0"}
if "cancel" in t:
    return {"category": "cancellation", "priority": "p1"}
if "refund" in t or "chargeback" in t:
    return {"category": "refunds", "priority": "p1"}
if "password" in t or "login" in t:
    return {"category": "account_access", "priority": "p2"}
if "crash" in t or "slow" in t or "timeout" in t:
    return {"category": "bug", "priority": "p2"}
if "payment" in t or "billing" in t or "invoice" in t:
    return {"category": "billing", "priority": "p2"}
if "delivery" in t or "tracking" in t:
    return {"category": "delivery", "priority": "p3"}
return {"category": "general", "priority": "p3"}
@tool(_description="Route a ticket to a queue (returns queue id and SLA hours).")
def route_ticket(category: str, priority: str) -> Dict[str, Any]:
queue_map = {
"fraud": ("risk_ops", 2),
"cancellation": ("retention", 8),
"refunds": ("payments_ops", 12),
"account_access": ("identity", 12),
"bug": ("engineering_support", 24),
"billing": ("billing_support", 24),
"delivery": ("logistics_support", 48),
"general": ("support_general", 48),
}
q, sla = queue_map.get(category, ("support_general", 48))
if priority == "p0":
    sla = min(sla, 2)
elif priority == "p1":
    sla = min(sla, 8)
return {"queue": q, "sla_hours": sla}
@tool(_description="Generate a playbook response based on category + priority.")
def draft_response(category: str, priority: str, ticket_text: str) -> Dict[str, Any]:
templates = {
"fraud": "We've temporarily secured your account. Please confirm last 3 transactions and reset credentials.",
"cancellation": "We can help cancel your subscription. Please confirm your plan and the effective date you want.",
"refunds": "We're checking the refund status. Please share the order/payment reference and date.",
"account_access": "Let's get you back in. Please use the password reset link; if blocked, we'll verify identity.",
"bug": "Thanks for reporting. Please share device/browser + a screenshot; we'll attempt reproduction.",
"billing": "We can help with billing. Please confirm the last 4 digits and the invoice period you need.",
"delivery": "We're checking shipment status. Please share your tracking ID and delivery address PIN/ZIP.",
"general": "Thanks for reaching out."
}
base = templates.get(category, templates["general"])
Ton = "urgent" If priority is == "p0" Other ("fast" If priority is == "p1" You can also find out more aboutReturn "standard")
return {
"tone": tone,
"message"""{base}nnContext we received: '{ticket_text}'",
"next_steps": ["request_missing_info", "log_case", "route_to_queue"]
}
registry = get_tool_registry()
tools_list = registry.list_tools() if hasattr(registry, "list_tools") else []
rprint(Panel.fit(f"Registered tools: {tools_list}", title="Tool Registry"))
Using GraphBit's tool interface, we encode deterministic business logic for ticket classification, routing, and response drafting. Because these tools carry the domain rules themselves, they require no LLM, giving us a solid, testable foundation to later hand over to agent orchestration. Check out the Full Codes here.
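Since the tools are plain deterministic functions, they can be sanity-checked directly before any agent ever calls them; a quick optional check (not from the original notebook):
# Deterministic sanity check of the tool chain, no LLM involved
check_cls = classify_ticket("Refund not received after 7 days.")
assert check_cls["category"] == "refunds" and check_cls["priority"] == "p1"
check_route = route_ticket(check_cls["category"], check_cls["priority"])
assert check_route["queue"] == "payments_ops" and check_route["sla_hours"] == 8
rprint({"classification": check_cls, "routing": check_route})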
tool_exec_cfg = ExecutorConfig(
max_execution_time_ms=10_000,
max_tool_calls=50,
continue_on_error=False,
store_results=True,
enable_logging=False
)
tool_executor = ToolExecutor(config=tool_exec_cfg) if "config" in ToolExecutor.__init__.__code__.co_varnames else ToolExecutor()
def offline_triage(ticket: Ticket) -> Dict[str, Any]:
c = classify_ticket(ticket.text)
rt = route_ticket(c["category"], c["priority"])
dr = draft_response(c["category"], c["priority"], ticket.text)
return {
"ticket_id": ticket.ticket_id,
"user_id": ticket.user_id,
"category"C["category"],
"priority"C["priority"],
"queue""["queue"],
"sla_hours""["sla_hours"],
"draft"dr["message"],
"tone"dr["tone"],
"steps": [
("classify_ticket", c),
("route_ticket", rt),
("draft_response", dr),
]
}
offline_results = [offline_triage(t) for t in tickets]
res_table = Table(title="Offline Pipeline Results")
res_table.add_column("Ticket", style="bold")
res_table.add_column("Category")
res_table.add_column("Priority")
res_table.add_column("Queue")
res_table.add_column("SLA (h)")
for r in offline_results:
    res_table.add_row(r["ticket_id"], r["category"], r["priority"], r["queue"], str(r["sla_hours"]))
rprint(res_table)
prio_counts: Dict[str, int] = {}
sla_vals: List[int] = []
for r in offline_results:
prio_counts[r["priority"]] = prio_counts.get(r["priority"], 0) + 1
sla_vals.append(int(r["sla_hours"]))
metrics = {
"offline_mode": True,
"tickets": len(offline_results),
"priority_distribution": prio_counts,
"sla_mean": float(np.mean(sla_vals)) if sla_vals else None,
"sla_p95": float(np.percentile(sla_vals, 95)) if sla_vals else None,
}
rprint(Panel.fit(json.dumps(metrics, indent=2), title="Offline Metrics"))
We then run the offline pipeline, executing every tool on each ticket to produce structured results. To evaluate the system's behavior, we aggregate the outputs into tables and compute priority and SLA metrics, showing that the logic can be verified deterministically before any agents are introduced. Check out the Full Codes here.
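For auditability, the deterministic results can also be persisted to disk; a minimal sketch that assumes a writable working directory and drops the nested tool traces:
# Persist the offline triage results as JSON Lines for later auditing
with open("offline_triage_results.jsonl", "w") as fh:
    for r in offline_results:
        record = {k: v for k, v in r.items() if k != "steps"}  # drop nested tool traces
        fh.write(json.dumps(record) + "\n")
rprint(f"Wrote {len(offline_results)} records to offline_triage_results.jsonl")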
SYSTEM_POLICY = "You are a reliable support ops agent. Return STRICT JSON only."
workflow = Workflow("Ticket Triage Workflow (GraphBit)")
summarizer = Node.agent(
name="Summarizer",
agent_id="summarizer",
system_prompt=SYSTEM_POLICY,
prompt="Summarize this ticket in 1-2 lines. Return JSON: {"You can read more about it here.":"..."}nTicket: {input}",
temperature=0.2,
max_tokens=200
)
router_agent = Node.agent(
name="RouterAgent",
agent_id="router",
system_prompt=SYSTEM_POLICY,
prompt=(
"You MUST use tools.n"
"Call classify_ticket(text), route_ticket(category, priority), draft_response(category, priority, ticket_text).n"
"Return JSON with fields: category, priority, queue, sla_hours, message.n"
"Ticket: {input}"
),
tools=[classify_ticket, route_ticket, draft_response],
temperature=0.1,
max_tokens=700
)
formatter = Node.agent(
name="FinalFormatter",
agent_id="final_formatter",
system_prompt=SYSTEM_POLICY,
prompt=(
"Validate the JSON and output STRICT JSON only:n"
"{"ticket_id":"...","category":"...","priority":"...","queue":"...","sla_hours":0,"customer_message":"..."}n"
"Input: {input}"
),
temperature=0.0,
max_tokens=500
)
sid = workflow.add_node(summarizer)
rid = workflow.add_node(router_agent)
fid = workflow.add_node(formatter)
workflow.connect(sid, rid)
workflow.connect(rid, fid)
workflow.validate()
rprint(Panel.fit("Workflow validated: Summarizer -> RouterAgent -> FinalFormatter", title="Workflow Graph"))
We build a directed GraphBit Workflow composed of agent nodes with clearly defined responsibilities and strict JSON contracts. The nodes are then connected and validated as an execution graph that mirrors the earlier offline logic. Check out the Full Codes here.
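If you prefer validation failures to surface with a readable message rather than a bare traceback, the call can be wrapped defensively; this optional sketch only assumes that validate() raises on an invalid graph:
# Defensive wrapper around graph validation (assumption: validate() raises on an invalid graph)
try:
    workflow.validate()
except Exception as exc:
    rprint(Panel.fit(f"Workflow validation failed: {exc}", title="Validation Error"))
    raise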
def pick_llm_config() -> Optional[Any]:
    if os.getenv("OPENAI_API_KEY"):
        return LlmConfig.openai(os.getenv("OPENAI_API_KEY"), "gpt-4o-mini")
    if os.getenv("ANTHROPIC_API_KEY"):
        return LlmConfig.anthropic(os.getenv("ANTHROPIC_API_KEY"), "claude-sonnet-4-20250514")
    if os.getenv("DEEPSEEK_API_KEY"):
        return LlmConfig.deepseek(os.getenv("DEEPSEEK_API_KEY"), "deepseek-chat")
    if os.getenv("MISTRALAI_API_KEY"):
        return LlmConfig.mistralai(os.getenv("MISTRALAI_API_KEY"), "mistral-large-latest")
    return None
def run_agent_flow_once(ticket_text: str) -> Dict[str, Any]:
llm_cfg = pick_llm_config()
if llm_cfg is None:
return {
"mode": "offline",
"note": "Set OPENAI_API_KEY / ANTHROPIC_API_KEY / DEEPSEEK_API_KEY / MISTRALAI_API_KEY to enable execution.",
"input": ticket_text
}
executor = Executor(llm_cfg, lightweight_mode=True, timeout_seconds=90, debug=False) if "lightweight_mode" in Executor.__init__.__code__.co_varnames else Executor(llm_cfg)
if hasattr(executor, "configure"):
    executor.configure(timeout_seconds=90, max_retries=2, enable_metrics=True, debug=False)
wf = Workflow("Single Ticket Run")
s = Node.agent(
name="Summarizer",
agent_id="summarizer",
system_prompt=SYSTEM_POLICY,
prompt=f"Summarize this ticket in 1-2 lines. Return JSON: {{"You can read more about it here.":"..."}}nTicket: {ticket_text}",
temperature=0.2,
max_tokens=200
)
r = Node.agent(
name="RouterAgent",
agent_id="router",
system_prompt=SYSTEM_POLICY,
prompt=(
"You MUST use tools.n"
"Call classify_ticket(text), route_ticket(category, priority), draft_response(category, priority, ticket_text).n"
"Return JSON with fields: category, priority, queue, sla_hours, message.n"
The f"Ticket: {ticket_text}"
),
tools=[classify_ticket, route_ticket, draft_response],
temperature=0.1,
max_tokens=700
)
f = Node.agent(
name="FinalFormatter",
agent_id="final_formatter",
system_prompt=SYSTEM_POLICY,
prompt=(
"Validate the JSON and output STRICT JSON only:n"
"{"ticket_id":"...","category":"...","priority":"...","queue":"...","sla_hours":0,"customer_message":"..."}n"
"Input: {input}"
),
temperature=0.0,
max_tokens=500
)
sid = wf.add_node(s)
rid = wf.add_node(r)
fid = wf.add_node(f)
wf.connect(sid, rid)
wf.connect(rid, fid)
wf.validate()
t0 = time.time()
result = executor.execute(wf)
dt_ms = int((time.time() - t0) * 1000)
out = {"mode": "online", "execution_time_ms": dt_ms, "success": bool(result.is_success()) if hasattr(result, "is_success") else None}
if hasattr(result, "get_all_variables"):
    out["variables"] = result.get_all_variables()
else:
    out["raw"] = str(result)[:3000]
return out
sample = tickets[0]
agent_run = run_agent_flow_once(sample.text)
rprint(Panel.fit(json.dumps(agent_run, indent=2)[:3000], title="Agent Workflow Run"))
rprint(Panel.fit("Done", title="Complete"))
The LLM configuration is optional: the execution logic detects a provider key and, when one is available, runs the same workflow online against a real ticket, capturing the execution result and status. This final step shows how the system transitions seamlessly from offline determinism to fully agentic execution.
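To actually promote the run to online mode in Colab, one common pattern (a generic sketch, not GraphBit-specific) is to inject a provider key at runtime and re-invoke the same function:
# Hypothetical promotion to online mode: supply a provider key, then re-run the agent flow
import getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key (leave blank to stay offline): ")
agent_run_online = run_agent_flow_once(sample.text)
rprint(Panel.fit(json.dumps(agent_run_online, indent=2, default=str)[:3000], title="Agent Workflow Run (online)"))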
GraphBit brings a wide range of capabilities together in one place: runtime configuration, tool registration, deterministic offline execution, metric aggregation, and agent-based orchestration. We illustrated how the exact same business logic can run both manually through tools and automatically through agent nodes connected in a validated graph, highlighting GraphBit's strengths as an execution substrate rather than just an LLM wrapper. The result is an agentic system designed to fail gracefully, operate without external dependencies, and scale up to autonomous workflows once an LLM is enabled.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary engineer and entrepreneur, he is dedicated to harnessing Artificial Intelligence's potential for social good. His latest venture, Marktechpost, is an Artificial Intelligence media platform known for in-depth coverage of machine learning and deep learning news that is technically accurate yet accessible to a broad audience. The platform's more than 2 million monthly views are a testament to its popularity.

