LangGraph
LangGraph, by LangChain, is an advanced framework for developing stateful applications that involve multiple actors. It provides the tools and structure needed to create sophisticated AI agents using a graph-based approach.
Think of LangGraph as an architect’s drafting table: it gives us the tools to design how our agent will think and act. Just as an architect draws blueprints showing how rooms connect, LangGraph lets us design how the different capabilities of our agent will interact and flow.
Key Features
- State management: maintain persistent state across interactions
- Flexible routing: define complex, dynamic flows between components
- Persistence: save and resume workflows
- Visualization: see and understand your agent’s structure
In this tutorial, we’ll build a text-processing pipeline with three steps:
- Text classification: categorizing input text into categories you define
- Entity extraction: identifying the key entities in the text
- Text summarization: generating a brief summary of the input text
The pipeline shows how LangGraph can be used to build a modular, extensible workflow for natural language processing.
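To make the design concrete before introducing LangGraph itself, here is a plain-Python sketch of the same three-step pipeline: each step reads a shared state dictionary and returns a partial update that gets merged back in. The `fake_llm` helper is a hypothetical stand-in for a real model call, not part of any library:

```python
from typing import TypedDict, List

class State(TypedDict, total=False):
    text: str
    classification: str
    entities: List[str]
    summary: str

def fake_llm(prompt: str) -> str:
    # Placeholder standing in for a real language-model call
    return "stub response"

def classify(state: State) -> State:
    return {"classification": fake_llm(f"Classify: {state['text']}")}

def extract_entities(state: State) -> State:
    return {"entities": fake_llm(f"Entities: {state['text']}").split(", ")}

def summarize(state: State) -> State:
    return {"summary": fake_llm(f"Summarize: {state['text']}")}

# Run the three steps in order, merging each partial update into the state
state: State = {"text": "OpenAI announced GPT-4."}
for step in (classify, extract_entities, summarize):
    state.update(step(state))

print(sorted(state.keys()))  # → ['classification', 'entities', 'summary', 'text']
```

LangGraph formalizes exactly this pattern: nodes that return partial state updates, plus explicit edges describing the order in which they run.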
Setting Up Our Environment
Set up your development environment before you start writing code.
Installation
Install all the required packages. In a notebook, you can install LangGraph and the other Python modules with `!pip install` (for example, `!pip install langgraph langchain langchain-openai python-dotenv`).
Set up API keys
You’ll also need an OpenAI API key to access their models. Get one from https://platform.openai.com/signup.
Take a look at the Full Codes here
```python
import os
from dotenv import load_dotenv

# Create a .env file with your API key, then load the variables
load_dotenv()

# Set the OpenAI API key
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
```
Test Our Setup
We can test our OpenAI model to see if it works.
```python
from langchain_openai import ChatOpenAI

# Initialize a ChatOpenAI instance
llm = ChatOpenAI(model="gpt-4o-mini")

# Test the setup
response = llm.invoke("Hello! Are you working?")
print(response.content)
```
Building Our Text Analysis Pipeline
First, import the packages required for our LangGraph pipeline:

```python
import os
from typing import TypedDict, List, Annotated
from langgraph.graph import StateGraph, END
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage
from langchain_core.runnables.graph import MermaidDrawMethod
from IPython.display import display, Image
```
Designing Our Agent’s Memory
Our agent needs a memory, just like a human. We create it by defining a state structure with TypedDict:

```python
class State(TypedDict):
    text: str
    classification: str
    entities: List[str]
    summary: str

# Initialize our language model with temperature=0 for more deterministic results
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
```
The Core Capabilities of Our Agent
We’ll now create the skills our agent will actually use. Each capability is implemented as a function that performs a specific type of analysis.
1. Classification Node
```python
def classification_node(state: State):
    """Classify the text into one of the categories: News, Blog, Research, or Other."""
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Classify the following text into one of the categories: News, Blog, Research, or Other.\n\nText: {text}\n\nCategory:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    classification = llm.invoke([message]).content.strip()
    return {"classification": classification}
```
2. Entity Extraction Node
```python
def entity_extraction_node(state: State):
    """Extract all entities (Person, Organization, Location) from the text."""
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Extract all the entities (Person, Organization, Location) from the following text. Provide the result as a comma-separated list.\n\nText: {text}\n\nEntities:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    entities = llm.invoke([message]).content.strip().split(", ")
    return {"entities": entities}
```
3. Summarization Node

```python
def summarization_node(state: State):
    """Summarize the text in one short sentence."""
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Summarize the following text in one short sentence.\n\nText: {text}\n\nSummary:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    summary = llm.invoke([message]).content.strip()
    return {"summary": summary}
```
All Together Now
Now comes the most exciting part – connecting these capabilities into a coordinated system using LangGraph:
```python
# Create our StateGraph
workflow = StateGraph(State)

# Add nodes to the graph
workflow.add_node("classification_node", classification_node)
workflow.add_node("entity_extraction", entity_extraction_node)
workflow.add_node("summarization", summarization_node)

# Add edges to the graph
workflow.set_entry_point("classification_node")  # Set the entry point of the graph
workflow.add_edge("classification_node", "entity_extraction")
workflow.add_edge("entity_extraction", "summarization")
workflow.add_edge("summarization", END)

# Compile the graph
app = workflow.compile()
```
Workflow Structure
This is the path our pipeline takes:
classification_node → entity_extraction → summarization → END
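Conceptually, invoking the compiled graph walks that path node by node, merging each node’s partial return value into the shared state. A simplified, LangGraph-free illustration of that merge loop (the toy node bodies are placeholders, not real model calls):

```python
# Toy nodes: each returns only the keys it produces, like our real nodes
def classification_node(state):
    return {"classification": "News"}

def entity_extraction(state):
    return {"entities": ["OpenAI", "GPT-4"]}

def summarization(state):
    return {"summary": "A short summary."}

# The linear path our compiled graph follows
path = [classification_node, entity_extraction, summarization]

def invoke(state):
    for node in path:
        state = {**state, **node(state)}  # merge the partial update
    return state

result = invoke({"text": "some input"})
print(result["classification"])  # → News
```

This is why each node can return just the keys it computes: the framework takes care of accumulating them into one state object.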
Testing our Agent
We’ll now see what our agent can do with a realistic text example.
```python
sample_text = """
OpenAI has announced the GPT-4 model, which is a large multimodal model that exhibits human-level performance on various professional benchmarks. It is developed to improve the alignment and safety of AI systems. Additionally, the model is designed to be more efficient and scalable than its predecessor, GPT-3. The GPT-4 model is expected to be released in the coming months and will be available to the public for research and development purposes.
"""

state_input = {"text": sample_text}
result = app.invoke(state_input)

print("Classification:", result["classification"])
print("\nEntities:", result["entities"])
print("\nSummary:", result["summary"])
```
Classification: News

Entities: ['OpenAI', 'GPT-4', 'GPT-3']

Summary: OpenAI’s GPT-4 is an upcoming multimodal AI model that aims for human-level performance and improves safety, efficiency, and scalability over GPT-3.
The Power of Coordinated Processing
What makes this result particularly impressive isn’t just the individual outputs – it’s how each step builds on the others to create a complete understanding of the text.
- Classification provides context that frames our understanding of the text type
- Entity extraction identifies the important names and concepts
- Summarization distills the essence of the document
This mirrors human reading comprehension, where we naturally form an understanding of what kind of text it is, note important names and concepts, and form a mental summary – all while maintaining the relationships between these different aspects of understanding.
Trying Your Own Text
Run the pipeline again with another text example:
```python
# Replace this with your own text to analyze
your_text = """
The recent advancements in quantum computing have opened new possibilities for cryptography and data security. Researchers at MIT and Google have demonstrated quantum algorithms that could potentially break current encryption methods. However, they are also developing new quantum-resistant encryption techniques to protect data in the future.
"""

# Process the text through our pipeline
your_result = app.invoke({"text": your_text})

print("Classification:", your_result["classification"])
print("\nEntities:", your_result["entities"])
print("\nSummary:", your_result["summary"])
```
Classification: Research

Entities: ['MIT', 'Google']

Summary: The recent advances in quantum computing could threaten existing encryption techniques while also leading to new quantum-resistant methods.
Adding More Capabilities (Advanced)
One of LangGraph’s most powerful attributes is how easily it can be extended with new capabilities. Let’s add a sentiment analysis node to our pipeline.
```python
# Update our state to include sentiment
class EnhancedState(TypedDict):
    text: str
    classification: str
    entities: List[str]
    summary: str
    sentiment: str
```
```python
# Create our sentiment node
def sentiment_node(state: EnhancedState):
    """Analyze whether the sentiment of the text is Positive, Negative, or Neutral."""
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Analyze the sentiment of the following text. Is it Positive, Negative, or Neutral?\n\nText: {text}\n\nSentiment:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    sentiment = llm.invoke([message]).content.strip()
    return {"sentiment": sentiment}
```
```python
# Create a workflow with the enhanced state
enhanced_workflow = StateGraph(EnhancedState)

# Add the existing nodes
enhanced_workflow.add_node("classification_node", classification_node)
enhanced_workflow.add_node("entity_extraction", entity_extraction_node)
enhanced_workflow.add_node("summarization", summarization_node)

# Add the new sentiment node
enhanced_workflow.add_node("sentiment_analysis", sentiment_node)

# Connect the nodes
enhanced_workflow.set_entry_point("classification_node")
enhanced_workflow.add_edge("classification_node", "entity_extraction")
enhanced_workflow.add_edge("entity_extraction", "summarization")
enhanced_workflow.add_edge("summarization", "sentiment_analysis")
enhanced_workflow.add_edge("sentiment_analysis", END)

# Compile the enhanced graph
enhanced_app = enhanced_workflow.compile()
```
Testing the Enhanced Agent
```python
# Try the enhanced pipeline with the same text
enhanced_result = enhanced_app.invoke({"text": sample_text})

print("Classification:", enhanced_result["classification"])
print("\nEntities:", enhanced_result["entities"])
print("\nSummary:", enhanced_result["summary"])
print("\nSentiment:", enhanced_result["sentiment"])
```
Classification: News

Entities: ['OpenAI', 'GPT-4', 'GPT-3']

Summary: OpenAI’s GPT-4 is an upcoming multimodal AI model that aims for human-level performance and improves safety, efficiency, and scalability over GPT-3.

Sentiment: The text has a positive sentiment. It highlights GPT-4's improvements and advancements, including its efficiency, human-level performance, and scalability, and emphasizes the positive effects on AI alignment and safety. Its expected public release adds further positivity.
Adding Conditional Edges (Advanced Logic)
Why conditional edges?
So far, our graph has followed a fixed linear path: classification_node → entity_extraction → summarization → (sentiment)
In the real world, we often want to run some steps only when necessary. For example:
- Only extract entities when the text is a News or Research article
- Skip summarization for very short texts
- Add custom processing for Blog posts
LangGraph makes this easy through conditional edges – logic gates that dynamically route execution based on data in the current state.
Create a routing function

```python
# Route after classification
def route_after_classification(state: EnhancedState) -> bool:
    category = state["classification"].lower()  # e.g. "news", "blog", "research", "other"
    return category in ["news", "research"]
```
Define the conditional graph

```python
from langgraph.graph import StateGraph, END

conditional_workflow = StateGraph(EnhancedState)

# Add nodes
conditional_workflow.add_node("classification_node", classification_node)
conditional_workflow.add_node("entity_extraction", entity_extraction_node)
conditional_workflow.add_node("summarization", summarization_node)
conditional_workflow.add_node("sentiment_analysis", sentiment_node)

# Set the entry point
conditional_workflow.set_entry_point("classification_node")

# Add the conditional edge
conditional_workflow.add_conditional_edges("classification_node", route_after_classification, path_map={
    True: "entity_extraction",
    False: "summarization"
})

# Add the remaining static edges
conditional_workflow.add_edge("entity_extraction", "summarization")
conditional_workflow.add_edge("summarization", "sentiment_analysis")
conditional_workflow.add_edge("sentiment_analysis", END)

# Compile
conditional_app = conditional_workflow.compile()
```
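Because the routing function and path map are plain Python, the dispatch logic can be sanity-checked without compiling a graph or calling any models. A small standalone check (the function and map are restated here so the snippet runs on its own):

```python
def route_after_classification(state) -> bool:
    # True -> run entity extraction; False -> skip straight to summarization
    return state["classification"].lower() in ["news", "research"]

# The same mapping we pass to add_conditional_edges
path_map = {True: "entity_extraction", False: "summarization"}

def next_node(state) -> str:
    # Look up the routing function's result in the path map
    return path_map[route_after_classification(state)]

print(next_node({"classification": "News"}))  # → entity_extraction
print(next_node({"classification": "Blog"}))  # → summarization
```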
Testing the Conditional Pipeline
```python
test_text = """
OpenAI launched GPT-4 with improved performance on academic and professional tasks. This is viewed as a breakthrough in reasoning and alignment capabilities.
"""

result = conditional_app.invoke({"text": test_text})

print("Classification:", result["classification"])
print("Entities:", result.get("entities", "Skipped"))
print("Summary:", result["summary"])
print("Sentiment:", result["sentiment"])
```
Classification: News
Entities: ['OpenAI', 'GPT-4']
Summary: OpenAI's GPT-4 improves performance on academic and professional tasks, a major breakthrough in reasoning and alignment.
Sentiment: The text has a positive sentiment. It highlights GPT-4 as a major advancement and emphasizes its improved performance.
Now try it with a blog post:
```python
blog_text = """
This is what I discovered after spending a whole week in silent meditation. No phones, no talking—just me, my breath, and some deep realizations.
"""

result = conditional_app.invoke({"text": blog_text})

print("Classification:", result["classification"])
print("Entities:", result.get("entities", "Skipped (not applicable)"))
print("Summary:", result["summary"])
print("Sentiment:", result["sentiment"])
```
Classification: Blog
Entities: Skipped (not applicable)
Summary: After a week of silent meditation, the author gained profound insights.
Sentiment: The text has a positive sentiment. The mention of "deep realizations" and the reflective nature of the meditation suggest an enlightening and beneficial outcome.
Our agent can now:
- Make decisions based on context
- Avoid unnecessary steps
- Run faster and at lower cost
- Behave more intelligently
Conclusion
In this tutorial, we have:
- Explored LangGraph’s concepts and its graph-based approach
- Built a text-processing pipeline with classification, entity extraction, and summarization
- Enhanced the pipeline with a sentiment analysis node
- Introduced conditional edges to dynamically control the flow based on classification results
- Visualized the workflow structure
- Tested our agent with real-world text examples
LangGraph offers a powerful framework for building AI agents by modeling them as graphs. This approach makes it easy to create, modify, and extend complex AI systems.
Next Steps
- Expand your agent’s abilities by adding more nodes
- Experiment with different LLMs and model parameters
- Explore LangGraph’s state persistence features for ongoing conversations
Other similar items: NVIDIA’s Open Sourced Cosmos DiffusionRenderer
Nir Diamant has over 10 years of experience in AI and algorithm research. He is a leader in the AI community: his open-source projects have been viewed millions of times, and his content receives over 500K views per month.
Through his work on GitHub, the DiamantAI newsletter, and other platforms, Nir has helped millions of people improve their AI skills with tutorials and guides.


