
LangGraph Tutorial – A step-by-step guide for creating a Text Analysis pipeline

Tech | By Gavin Wallace | 30/07/2025 | 10 Mins Read

LangGraph

LangGraph, from the LangChain team, is a framework for developing stateful, multi-actor applications. It provides the tools and structure needed to build sophisticated AI agents using a graph-based approach.

Think of LangGraph as an architect’s drafting table – it gives us the tools to design how our agent will think and act. LangGraph allows us to design the way different abilities will interact and flow within our agent, just as an architect would draw blueprints that show how rooms are connected.

Key Features

  • State management: Maintain persistent state across interactions
  • Flexible routing: Define complex flows between components
  • Persistence: Save and resume workflows
  • Visualization: Inspect and understand your agent’s structure

In this tutorial, we’ll build a text-processing pipeline with three steps:

  1. Text classification: Categorizing the input text into categories you define
  2. Entity extraction: Identifying the key entities in the text
  3. Summarization: Generating a brief summary of the input text

The pipeline shows how LangGraph can be used to build a modular, extensible natural language processing workflow.
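Before any LangGraph code, the three steps above can be sketched framework-free: each node is just a function that reads shared state and returns a partial update. The heuristics below are toy stand-ins for the LLM calls we build later, purely to illustrate the flow:

```python
from typing import TypedDict, List

# Toy state matching the shape used in this tutorial
class State(TypedDict, total=False):
    text: str
    classification: str
    entities: List[str]
    summary: str

def classify(state: State) -> State:
    # Stand-in heuristic instead of an LLM call
    return {"classification": "News" if "announced" in state["text"] else "Other"}

def extract_entities(state: State) -> State:
    # Stand-in: treat capitalized words as entities
    return {"entities": [w.strip(".") for w in state["text"].split() if w.strip(".").istitle()]}

def summarize(state: State) -> State:
    # Stand-in: first sentence as the "summary"
    return {"summary": state["text"].split(".")[0] + "."}

state: State = {"text": "Acme announced a new model in Paris."}
for node in (classify, extract_entities, summarize):
    # LangGraph merges each node's partial return into the shared state like this
    state.update(node(state))

print(state["classification"])  # → News
print(state["entities"])        # → ['Acme', 'Paris']
```

This merge-partial-updates pattern is exactly what LangGraph automates, along with routing, persistence, and visualization.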

Setting Up Our Environment

Set up your development environment before you start writing code.

Installation

Install all required packages. You can install LangGraph and the other Python modules used in this tutorial (langgraph, langchain, langchain-openai, python-dotenv) with pip.

Set Up API Keys

You’ll also need an OpenAI API key to access their models. You can get one from https://platform.openai.com/signup.


import os
from dotenv import load_dotenv

# Create a .env file with your API key, then load the variables
load_dotenv()

# OpenAI API key
os.environ["OPENAI_API_KEY"] = os.getenv('OPENAI_API_KEY')

Test Our Setup

We can test our OpenAI model to see if it works.

from langchain_openai import ChatOpenAI

# Initialize the ChatOpenAI instance
llm = ChatOpenAI(model="gpt-4o-mini")

# Test your setup
response = llm.invoke("Hello! Are you working?")
print(response.content)

Building Our Text Analysis Pipeline

Import the packages required for our LangGraph pipeline.

import os
from typing import TypedDict, List, Annotated
from langgraph.graph import StateGraph, END
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage
from langchain_core.runnables.graph import MermaidDrawMethod
from IPython.display import Image

Designing the Agent’s Memory

Just like a human, our agent needs memory. We create it with a TypedDict that defines our state structure.

class State(TypedDict):
    text: str
    classification: str
    entities: List[str]
    summary: str

# Initialize our language model with temperature=0 for more deterministic results
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

The Core Capabilities of Our Agent

We’ll now create the skills our agent will actually use. Each of these abilities is implemented as a function that performs a specific type of analysis.

1. Classification Node

def classification_node(state: State):
    """Classify the text into one of the categories: News, Blog, Research, or Other."""
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Classify the following text into one of the categories: News, Blog, Research, or Other.\n\nText:{text}\n\nCategory:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    classification = llm.invoke([message]).content.strip()
    return {"classification": classification}

2. Entity Extraction Node

def entity_extraction_node(state: State):
    """Extract all the entities (Person, Organization, Location) from the text."""
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Extract all the entities (Person, Organization, Location) from the following text. Provide the result as a comma-separated list.\n\nText:{text}\n\nEntities:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    entities = llm.invoke([message]).content.strip().split(", ")
    return {"entities": entities}

3. The Summarization Node

def summarization_node(state: State):
    """Summarize the text in one short sentence."""
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Summarize the following text in one short sentence.\n\nText:{text}\n\nSummary:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    summary = llm.invoke([message]).content.strip()
    return {"summary": summary}

All Together Now

Now comes the most exciting part – connecting these capabilities into a coordinated system using LangGraph:


# Create our StateGraph
workflow = StateGraph(State)

# Add nodes to the graph
workflow.add_node("classification_node", classification_node)
workflow.add_node("entity_extraction", entity_extraction_node)
workflow.add_node("summarization", summarization_node)

# Add edges to the graph
workflow.set_entry_point("classification_node")  # Set the entry point of the graph
workflow.add_edge("classification_node", "entity_extraction")
workflow.add_edge("entity_extraction", "summarization")
workflow.add_edge("summarization", END)

# Compile the graph
app = workflow.compile()

Workflow structure: this is the path our pipeline takes:
classification_node → entity_extraction → summarization → END
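LangGraph can also draw this structure for you; that is what the `MermaidDrawMethod` import earlier is for (for example, `app.get_graph().draw_mermaid()` returns Mermaid markup). As a framework-free sketch of what that output encodes, the same path can be written out by hand:

```python
# Edge list mirroring the workflow defined above
# ("__start__" and "__end__" are the conventional entry/exit labels)
edges = [
    ("__start__", "classification_node"),
    ("classification_node", "entity_extraction"),
    ("entity_extraction", "summarization"),
    ("summarization", "__end__"),
]

# Assemble Mermaid flowchart markup from the edge list
mermaid = "graph TD\n" + "\n".join(f"    {a} --> {b}" for a, b in edges)
print(mermaid)
```

Pasting the printed markup into any Mermaid renderer produces a diagram of the pipeline.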

Testing our Agent

We’ll now see what our agent can do with a realistic text example.


sample_text = """ OpenAI has announced the GPT-4 model, which is a large multimodal model that exhibits human-level performance on various professional benchmarks. It is developed to improve the alignment and safety of AI systems. Additionally, the model is designed to be more efficient and scalable than its predecessor, GPT-3. The GPT-4 model is expected to be released in the coming months and will be available to the public for research and development purposes. """ 
state_input = {"text": sample_text} 
result = app.invoke(state_input) 
print("Classification:", result["classification"])
print("\nEntities:", result["entities"])
print("\nSummary:", result["summary"])
Classification: News
Entities: ['OpenAI', 'GPT-4', 'GPT-3']
Summary: OpenAI’s GPT-4 is an upcoming multimodal AI model that aims for human-level performance and improves safety, efficiency, and scalability over GPT-3.

The Power of Coordinated Processing

What makes this result particularly impressive isn’t just the individual outputs – it’s how each step builds on the others to create a complete understanding of the text.

  • Classification provides context that frames our understanding of the text type
  • Entity extraction identifies the important names and concepts
  • Summarization distills the essence of the document

This mirrors human reading comprehension, where we naturally form an understanding of what kind of text it is, note important names and concepts, and form a mental summary – all while maintaining the relationships between these different aspects of understanding.

Try It with Your Own Text

Try our pipeline again with another text example:


# Replace this with your own text to analyze
your_text = """
The recent advancements in quantum computing have opened new possibilities for cryptography and data security. Researchers at MIT and Google have demonstrated quantum algorithms that could potentially break current encryption methods. However, they are also developing new quantum-resistant encryption techniques to protect data in the future.
"""

# Process the text through our pipeline
your_result = app.invoke({"text": your_text})

print("Classification:", your_result["classification"])
print("\nEntities:", your_result["entities"])
print("\nSummary:", your_result["summary"])

Classification: Research
Entities: ['MIT', 'Google']
Summary: Recent advances in quantum computing could threaten existing encryption techniques while also leading to new quantum-resistant methods.

Adding More Capabilities (Advanced)

One of LangGraph’s most powerful attributes is how easily it lets you add new capabilities. Let’s add a sentiment analysis node to our pipeline.


# Update our state to include sentiment
class EnhancedState(TypedDict):
    text: str
    classification: str
    entities: List[str]
    summary: str
    sentiment: str

# Create our sentiment analysis node
def sentiment_node(state: EnhancedState):
    """Analyze whether the sentiment of the text is positive, negative, or neutral."""
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Analyze the sentiment of the following text. Is it Positive, Negative, or Neutral?\n\nText:{text}\n\nSentiment:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    sentiment = llm.invoke([message]).content.strip()
    return {"sentiment": sentiment}

# Create a workflow that includes the enhanced state
enhanced_workflow = StateGraph(EnhancedState)

# Add the existing nodes
enhanced_workflow.add_node("classification_node", classification_node)
enhanced_workflow.add_node("entity_extraction", entity_extraction_node)
enhanced_workflow.add_node("summarization", summarization_node)

# Add new node for sentiment
enhanced_workflow.add_node("sentiment_analysis", sentiment_node)

# Wire up the workflow, ending with the new sentiment step
enhanced_workflow.set_entry_point("classification_node")
enhanced_workflow.add_edge("classification_node", "entity_extraction")
enhanced_workflow.add_edge("entity_extraction", "summarization")
enhanced_workflow.add_edge("summarization", "sentiment_analysis")
enhanced_workflow.add_edge("sentiment_analysis", END)

# Compile the enhanced graph
enhanced_app = enhanced_workflow.compile()

Testing the Enhanced Agent

# Try the enhanced version of the pipeline using the same text
enhanced_result = enhanced_app.invoke({"text": sample_text})

print("Classification:", enhanced_result["classification"])
print("\nEntities:", enhanced_result["entities"])
print("\nSummary:", enhanced_result["summary"])
print("\nSentiment:", enhanced_result["sentiment"])
Classification: News

Entities: ['OpenAI', 'GPT-4', 'GPT-3']

Summary: OpenAI’s GPT-4 is an upcoming multimodal AI model that aims for human-level performance and improves safety, efficiency, and scalability over GPT-3.

Sentiment: The text has a positive sentiment. It highlights GPT-4's improvements and advancements, including its efficiency, human-level performance, and scalability. It also emphasizes the positive effects on AI alignment and security, and the model's expected public release adds further positivity.

Adding Conditional Edges (Advanced Logic)

Why conditional edges?

So far, our graph has followed a fixed linear path: classification_node → entity_extraction → summarization → (sentiment)

In the real world, we often want to run some steps only when necessary. For example:

  • Extract entities only when the text is a News or Research article
  • Skip summarization for very short texts
  • Add custom processing for Blog posts

LangGraph makes this easy through conditional edges – logic gates that dynamically route execution based on data in the current state.


Creating a Routing Function

# Route after classification
def route_after_classification(state: EnhancedState) -> bool:
    category = state["classification"].lower()  # "news", "blog", "research", or "other"
    return category in ["news", "research"]

Defining the Conditional Graph

from langgraph.graph import StateGraph, END

conditional_workflow = StateGraph(EnhancedState)

# Add nodes
conditional_workflow.add_node("classification_node", classification_node)
conditional_workflow.add_node("entity_extraction", entity_extraction_node)
conditional_workflow.add_node("summarization", summarization_node)
conditional_workflow.add_node("sentiment_analysis", sentiment_node)

# Set the entry point
conditional_workflow.set_entry_point("classification_node")

# Add the conditional edge
conditional_workflow.add_conditional_edges("classification_node", route_after_classification, path_map={
    True: "entity_extraction",
    False: "summarization"
})

# Add the remaining static edges
conditional_workflow.add_edge("entity_extraction", "summarization")
conditional_workflow.add_edge("summarization", "sentiment_analysis")
conditional_workflow.add_edge("sentiment_analysis", END)

# Compile
conditional_app = conditional_workflow.compile()

Testing the Conditional Pipeline

test_text = """
OpenAI launched GPT-4 with improved performance on academic and professional tasks. This is viewed as a breakthrough in reasoning and alignment capabilities.
"""

result = conditional_app.invoke({"text": test_text})

print("Classification:", result["classification"])
print("Entities:", result.get("entities", "Skipped"))
print("Summary:", result["summary"])
print("Sentiment:", result["sentiment"])
Classification: News
Entities: ['OpenAI', 'GPT-4']
Summary: OpenAI's GPT-4 improves performance on academic and professional tasks and is seen as a major breakthrough in reasoning and alignment.
Sentiment: The text has a positive sentiment. It highlights that GPT-4 is a major advancement and emphasizes its improved performance.


Now try it with a blog post:

blog_text = """
This is what I discovered after spending a whole week in silent meditation. No phones, no talking—just me, my breath, and some deep realizations.
"""

result = conditional_app.invoke({"text": blog_text})

print("Classification:", result["classification"])
print("Entities:", result.get("entities", "Skipped (not applicable)"))
print("Summary:", result["summary"])
print("Sentiment:", result["sentiment"])
Classification: Blog
Entities: Skipped (not applicable)
Summary: After a week of silent meditation, I gained profound insights.
Sentiment: The text has a positive sentiment. The mention of "deep realizations" and the reflective framing of the meditation retreat suggest an enlightening, beneficial outcome.

Our agent can now:

  • Make context-aware decisions
  • Skip unnecessary steps
  • Run faster and at lower cost
  • Behave more intelligently

Conclusion

In this tutorial, we have:

  1. Explored LangGraph concepts and its graph-based approach
  2. Built a text-processing pipeline with classification, entity extraction, and summarization
  3. Enhanced the pipeline with a sentiment analysis node
  4. Introduced conditional edges to dynamically control the flow based on classification results
  5. Visualized the workflow structure
  6. Tested the agent with real-world text examples

LangGraph offers a powerful framework for building AI agents by modeling them as graphs. This approach makes it easy to create, modify, and extend complex AI systems.

The Next Steps

  • Expand your agent’s abilities by adding more nodes
  • Experiment with different LLMs and model parameters
  • Explore LangGraph’s state persistence for ongoing conversations
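On the persistence point: because the pipeline's state is a plain dict, even a minimal standard-library sketch shows the save-and-resume idea. LangGraph's own checkpointer APIs are the production route; the file path and resume step below are illustrative assumptions only:

```python
import json
import os
import tempfile

# A pipeline state mid-run, as the nodes above would produce it
state = {"text": "Acme announced a new model.", "classification": "News"}

# Persist the state so the workflow could be resumed later
path = os.path.join(tempfile.gettempdir(), "pipeline_state.json")
with open(path, "w") as f:
    json.dump(state, f)

# ...later: reload it and continue running the remaining nodes
with open(path) as f:
    resumed = json.load(f)

print(resumed["classification"])  # → News
```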

Related Posts

xAI Releases Standalone Grok Speech to text and Text to speech APIs, Aimed at Enterprise Voice Developers

19/04/2026

Anthropic releases Claude Opus 4.7, a major upgrade for agentic coding, high-resolution vision, and long-horizon autonomous tasks

19/04/2026

The Coding Guide to Property Based Testing with Hypothesis and Stateful, Differential and Metamorphic Test Designs

19/04/2026

Google AI Releases Google Auto-Diagnosis: A Large Language Model LLM Based System to Diagnose Integrity Test Failures At Scale

18/04/2026