
Learn how to build a conversational AI agent with LangGraph using step replay and time-travel checkpoints

Tech · By Gavin Wallace · 31/08/2025 · 7 Mins Read

In this tutorial, we learn how LangGraph structures conversation flows and gives us the flexibility to "time travel" through them via checkpoints. We build a chatbot that combines Gemini with a Wikipedia search tool, add extra steps to a conversation, track each checkpoint, and replay the full state history. Along the way, we see in real time how LangGraph's interface lets us inspect and manipulate conversations with control and clarity.

!pip -q install -U langgraph langchain langchain-google-genai google-generativeai typing_extensions
!pip -q install "requests==2.32.4"


import os
import json
import textwrap
import getpass
import time
from typing import Annotated, Any, Dict, List, Optional


from typing_extensions import TypedDict


from langchain.chat_models import init_chat_model
from langchain_core.messages import BaseMessage
from langchain_core.tools import tool


from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import ToolNode, tools_condition


import requests
from requests.adapters import HTTPAdapter, Retry


if not os.environ.get("GOOGLE_API_KEY"):
   os.environ["GOOGLE_API_KEY"] = getpass.getpass("🔑 Enter your Google API Key (Gemini): ")


llm = init_chat_model("google_genai:gemini-2.0-flash")

After installing the required libraries, we set up the Gemini API key and import all the necessary modules. We initialize Gemini through LangChain so we can use it in our LangGraph workflow.

WIKI_SEARCH_URL = "https://en.wikipedia.org/w/api.php"


_session = requests.Session()
_session.headers.update({
   "User-Agent": "LangGraph-Colab-Demo/1.0 (contact: [email protected])",
   "Accept": "application/json",
})
retry = Retry(
   total=5, connect=5, read=5, backoff_factor=0.5,
   status_forcelist=(429, 500, 502, 503, 504),
   allowed_methods=("GET", "POST"),
)
_session.mount("https://", HTTPAdapter(max_retries=retry))
_session.mount("http://", HTTPAdapter(max_retries=retry))


def _wiki_search_raw(query: str, limit: int = 3) -> List[Dict[str, str]]:
   """
 You can use the MediaWiki Search API to:
     - origin='*' (good practice for CORS)
 The retry option is available for both the Polite UA and the retries."params" =
   Returns compact list of {title, snippet_html, url}.
   """
   params = {
       "action": "query",
       "list": "search",
       "format": "json",
       "srsearch": query,
       "srlimit": limit,
       "srprop": "snippet",
       "utf8": 1,
       "origin": "*",
   }
   r = _session.get(WIKI_SEARCH_URL, params=params, timeout=15)
   r.raise_for_status()
   data = r.json()
   out = []
   for item in data.get("query", {}).get("search", []):
       title = item.get("title", "")
       page_url = f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}"
       snippet = item.get("snippet", "")
       out.append({"title": title, "snippet_html": snippet, "url": page_url})
   return out


@tool
def wiki_search(query: str) -> List[Dict[str, str]]:
   """Search Wikipedia and return up to 3 results with title, snippet_html, and url."""
   try:
       results = _wiki_search_raw(query, limit=3)
       return results if results else [{"title": "No results", "snippet_html": "", "url": ""}]
   except Exception as e:
       return [{"title": "Error", "snippet_html": str(e), "url": ""}]


TOOLS = [wiki_search]

We set up a Wikipedia search tool with retries and a polite User-Agent. The _wiki_search_raw helper queries the MediaWiki API, and we wrap it as a LangChain tool so we can call it seamlessly within our LangGraph workflow.
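The tool's result URLs come from a simple title-to-URL transformation. A minimal offline sketch of that transformation (`title_to_url` is a hypothetical helper name, extracted here for illustration):

```python
# Offline sketch (hypothetical helper): how the tool turns a Wikipedia page
# title into the article URL returned alongside each snippet.
def title_to_url(title: str) -> str:
    # Wikipedia article URLs replace spaces with underscores.
    return f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}"

print(title_to_url("Large language model"))
# https://en.wikipedia.org/wiki/Large_language_model
```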

"class State" (TypedDict, Typed Dictionary)
 All messages are Annotated[list, add_messages]


graph_builder = StateGraph(State)


llm_with_tools = llm.bind_tools(TOOLS)


SYSTEM_INSTRUCTIONS = textwrap.dedent("""
You are ResearchBuddy, a diligent research assistant.
If the user asks you to "research" or "find info", mentions "latest" or "web",
or references a library/framework/product, call the wiki_search tool
before settling on a final answer.
Be concise in the text around your tool calls.
After you receive tool results, summarize them and cite at least the page titles you used.
""").strip()


def chatbot(state: State) -> Dict[str, Any]:
   """Single step: call the LLM (with tools bound) on the current messages."""
   return {"messages": [llm_with_tools.invoke(state["msgs"])]}


graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(TOOLS))


graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")


memory = InMemorySaver()
graph = graph_builder.compile(checkpointer=memory)

We bind the tools to Gemini so it can call wiki_search when necessary. We add the chatbot node along with a tools node, wire them together with conditional edges, and enable checkpointing with an in-memory saver. We then compile the graph so we can replay history, add steps, and resume from any checkpoint.
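The conditional edge decides, after each chatbot step, whether to run the tools node. A toy sketch of the decision it makes (an illustration of the idea behind `tools_condition`, not its actual implementation):

```python
# Toy router (illustration only, not langgraph.prebuilt.tools_condition):
# if the model's last message requested a tool call, route to "tools";
# otherwise the turn is finished.
def route(last_message: dict) -> str:
    return "tools" if last_message.get("tool_calls") else "__end__"

print(route({"content": "hi"}))                           # __end__
print(route({"tool_calls": [{"name": "wiki_search"}]}))   # tools
```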

def print_last_message(event: Dict[str, Any]):
   """Pretty-print the last message in an event if available."""
   if "messages" in event and event["messages"]:
       msg = event["messages"][-1]
       try:
           if isinstance(msg, BaseMessage):
               msg.pretty_print()
           else:
               role = msg.get("role", "unknown")
               content = msg.get("content", "")
               print(f"\n[{role.upper()}]\n{content}\n")
       except Exception:
           print(str(msg))


def show_state_history(cfg: Dict[str, Any]) -> List[Any]:
   """Print a concise view of checkpoints; return the list as well."""
   history = list(graph.get_state_history(cfg))
   print("\n=== 📜 State history (most recent first) ===")
   for i, st in enumerate(history):
       n = st.next
       n_txt = f"{n}" if n else "()"
       print(f"{i:02d}) NumMessages={len(st.values.get('messages', []))}  Next={n_txt}")
   print("=== End history ===\n")
   return history


def pick_checkpoint_by_next(history: List[Any], node_name: str = "tools") -> Optional[Any]:
   """Pick the first checkpoint whose `next` includes a given node (e.g., 'tools')."""
   for st in history:
       nxt = tuple(st.next) if st.next else ()
       if node_name in nxt:
           return st
   return None

We add utility functions that make the LangGraph workflow more readable and easier to control. We use print_last_message to neatly display the most recent response, show_state_history to list all saved checkpoints, and pick_checkpoint_by_next to locate a checkpoint where the graph is about to run a specific node, such as the tools step.
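The message counts these helpers print keep growing because the State uses an append-style reducer. A minimal sketch of that idea (an illustration only, not LangGraph's actual add_messages implementation):

```python
# Toy append-style reducer (illustration only, not LangGraph's add_messages):
# each update extends the running message list instead of replacing it.
def add_messages_sketch(existing: list, update: list) -> list:
    return existing + update

state: list = []
state = add_messages_sketch(state, [{"role": "user", "content": "hi"}])
state = add_messages_sketch(state, [{"role": "assistant", "content": "hello"}])
print(len(state))  # 2
```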

config = {"configurable": {"thread_id": "demo-thread-1"}}


first_turn = {
   "messages": [
       {"role": "system", "content": SYSTEM_INSTRUCTIONS},
       {"role": "user", "content": "I'm learning LangGraph. Could you do some research on it for me?"},
   ]
}


print("n==================== 🟢 STEP 1: First user turn ====================")
events = graph.stream(first_turn, config, stream_mode="values")
For ev, see events
   print_last_message(ev)


second_turn = {
   "messages": [
       {"role": "user", "content": "Ya. Maybe I'll build an agent with it!"}
   ]
}


print("n==================== 🟢 STEP 2: Second user turn ====================")
events = graph.stream(second_turn, config, stream_mode="values")
For ev, see events
   print_last_message(ev)

We stream graph events to simulate two user turns. First we give the assistant system instructions and ask it to research LangGraph; then we send a follow-up message about building an agent with it. Because every step is checkpointed, we can later replay each step or continue from any previous state.
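With `stream_mode="values"`, each event carries the full state after a step, so the newest message is always the last element. A toy sketch of that shape (an assumption about the event structure for illustration, not the library's internals):

```python
# Sketch of "values" streaming semantics: every event is the full state
# snapshot after a step, so event["messages"][-1] is the newest message.
toy_events = [
    {"messages": ["user: hi"]},
    {"messages": ["user: hi", "assistant: hello"]},
]
latest = [ev["messages"][-1] for ev in toy_events]
print(latest)  # ['user: hi', 'assistant: hello']
```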

print("n==================== 🔁 REPLAY: Full state history ====================")
history = show_state_history(config)


to_replay = pick_checkpoint_by_next(history, node_name="tools")
if to_replay is None:
   to_replay = history[min(2, len(history) - 1)]


print("Chosen checkpoint to resume from:")
print("  Next:", to_replay.next)
print("  Config:", to_replay.config)


print("n==================== ⏪ RESUME from chosen checkpoint ====================")
For ev in graph.stream. (None. to_replay.config. stream_mode="vals"):
   print_last_message(ev)


MANUAL_INDEX = None  # optionally set to an index printed by show_state_history
if MANUAL_INDEX is not None and 0 <= MANUAL_INDEX < len(history):
   for ev in graph.stream(None, history[MANUAL_INDEX].config, stream_mode="values"):
       print_last_message(ev)

We replay the entire checkpoint history to see how the conversation develops step by step and where we might want to continue. Then we "time travel" by resuming the dialog from a selected checkpoint, or optionally from any manually chosen index.
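Conceptually, time travel is just picking an earlier snapshot from a newest-first history and resuming from its config. A toy model of that mechanic (an illustration, not LangGraph's checkpoint internals):

```python
# Toy model of time travel: snapshots listed most-recent-first
# (like get_state_history), then resume from an earlier one.
snapshots = [{"step": s, "num_messages": s + 1} for s in range(3)]
history = list(reversed(snapshots))  # most recent first
resume_from = history[-1]            # the oldest checkpoint
print(resume_from["step"])  # 0
```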

In conclusion, we now have a working understanding of LangGraph's checkpointing and time-travel capabilities, which bring transparency and flexibility to conversation management. These features make the framework a strong foundation for building reliable research assistants and autonomous agents. This is more than a simple demo: it is a solid base for more complex applications, where reproducibility and traceability matter as much as the answer itself.




