AI-trends.today

The Implementation of an AI Intelligent Assistant using LangChain and Gemini to retrieve information in real-time

Tech Ā· By Gavin Wallace Ā· 01/06/2025 Ā· 8 Mins Read

In this tutorial, we demonstrate how to build an intelligent AI assistant using LangChain, Google’s Gemini 2.0 Flash model, and the Jina Search tool. By combining the capabilities of a powerful LLM with an external search API, we create an assistant that provides up-to-date information with citations. The tutorial walks through setting up the API keys, installing the necessary libraries, binding the Jina Search tool to the Gemini model, and building a LangChain pipeline that dynamically invokes external tools whenever the model needs fresh information. The result is a fully functional, interactive AI assistant capable of providing accurate and current answers to user queries.

%pip install --quiet -U "langchain-community>=0.2.16" langchain langchain-google-genai

We install the Python packages required for this project: the LangChain framework for AI application development, the LangChain community tools package (version 0.2.16 or later), and the LangChain integration for Google’s Gemini models. Together, these packages let Gemini and external tools work seamlessly within LangChain’s pipelines.

import getpass
import os
import json
from typing import Dict, Any

We import the essential modules for the project: getpass lets you enter API keys securely without displaying them on screen; os manages environment variables and file paths; json handles JSON data structures; and typing provides type hints such as Dict and Any for variables and function arguments, which improves code maintainability.
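As a quick illustration of how these imports work together, here is a small standalone snippet; parse_payload is a hypothetical helper we made up for this example, not part of the tutorial’s assistant.

```python
import json
from typing import Any, Dict

def parse_payload(raw: str) -> Dict[str, Any]:
    """Decode a JSON string into a dictionary, with type hints on the result."""
    return json.loads(raw)

# Illustrative payload only
payload = parse_payload('{"query": "what is langgraph", "top_k": 3}')
print(payload["query"])
```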

if not os.environ.get("JINA_API_KEY"):
    os.environ["JINA_API_KEY"] = getpass.getpass("Enter your Jina API key: ")


if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Google/Gemini API key: ")

We set up the environment variables for the required API keys. If a key has not been defined, the script prompts the user to enter it securely via the getpass module, hiding the input from sight for security. This gives the script seamless access to the services without hardcoding sensitive information.
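The same check-then-prompt pattern can be generalized into a helper; the function name require_env below is our own invention for illustration, not part of the tutorial code. Because it only prompts when the variable is unset, re-running cells in the same session skips the prompt.

```python
import getpass
import os

def require_env(name: str, prompt: str) -> str:
    """Return the named environment variable, prompting for it once if missing."""
    if not os.environ.get(name):
        os.environ[name] = getpass.getpass(prompt)
    return os.environ[name]

# Equivalent to the two blocks above:
# require_env("JINA_API_KEY", "Enter your Jina API key: ")
# require_env("GOOGLE_API_KEY", "Enter your Google/Gemini API key: ")
```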

from langchain_community.tools import JinaSearch
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage


print("šŸ”§ Setting up tools and model...")

We import the key modules and classes from the LangChain ecosystem: the JinaSearch tool for web search, the ChatGoogleGenerativeAI model for accessing Google’s Gemini, and essential classes from LangChain Core, including ChatPromptTemplate, RunnableConfig, and the message structures (HumanMessage, AIMessage, and ToolMessage). These components let us integrate Gemini with external tools for dynamic, AI-driven information retrieval. The print statement confirms that setup has begun.

search_tool = JinaSearch()
print(f"āœ… Jina Search tool initialized: {search_tool.name}")


print("\nšŸ” Testing Jina Search directly:")
direct_search_result = search_tool.invoke({"query": "what is langgraph"})
print(f"Direct search result preview: {direct_search_result[:200]}...")

We initialize the Jina search tool by creating a JinaSearch() instance and print its name to confirm it is ready; this tool will handle all web searches within the LangChain ecosystem. We then run a direct test query, ā€œwhat is langgraphā€, using the invoke method and print a preview of the result. It is important to verify that the tool works correctly before integrating it into the assistant’s workflow.

gemini_model = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0.1,
    convert_system_message_to_human=True  
)
print("āœ… Gemini model initialized")

We initialize the Gemini 2.0 Flash model using the ChatGoogleGenerativeAI class from LangChain. The model is set with a low temperature (0.1) for more deterministic responses, and the convert_system_message_to_human=True parameter ensures system-level prompts are properly handled as human-readable messages for Gemini’s API. The final print statement confirms that the model is ready to use.

detailed_prompt = ChatPromptTemplate.from_messages([
    ("system", """You are an intelligent assistant with access to web search capabilities.
    When users ask questions, you can use the Jina search tool to find current information.
   
    Instructions:
    1. If the question requires recent or specific information, use the search tool
    2. Provide comprehensive answers based on the search results
    3. Always cite your sources when using search results
    4. Be helpful and informative in your responses"""),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])

We define a prompt template using ChatPromptTemplate.from_messages(). It contains a system message that directs the AI’s behavior, a human placeholder for the user’s question, and a messages placeholder for the tool messages generated by tool calls. This structure ensures the AI produces helpful, well-sourced responses.

gemini_with_tools = gemini_model.bind_tools([search_tool])
print("āœ… Tools bound to Gemini model")


main_chain = detailed_prompt | gemini_with_tools


def format_tool_result(tool_call: Dict[str, Any], tool_result: str) -> str:
    """Format tool results for better readability"""
    return f"Search Results for '{tool_call['args']['query']}':\n{tool_result[:800]}..."

We bind the Jina search tool to the Gemini model using bind_tools(), which allows the model to call the search tool whenever it needs fresh information. main_chain then combines the structured prompt template with the tool-enhanced model, creating a smooth workflow for handling user inputs and dynamic tool calls. The format_tool_result function formats search results for readability, so users can see what their search queries returned.
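To see what format_tool_result produces, here is a standalone check using a mocked tool-call dictionary in the shape LangChain emits (name/args/id); the mock values are ours, chosen for illustration.

```python
from typing import Any, Dict

def format_tool_result(tool_call: Dict[str, Any], tool_result: str) -> str:
    """Format tool results for better readability (same helper as above)."""
    return f"Search Results for '{tool_call['args']['query']}':\n{tool_result[:800]}..."

# Mocked tool call and result (illustrative values only)
mock_call = {"name": "jina_search", "args": {"query": "what is langgraph"}, "id": "call-1"}
print(format_tool_result(mock_call, "LangGraph is a library for building stateful LLM applications."))
```

Note how the result body is truncated to 800 characters, keeping long search payloads readable in the console.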

@chain
def enhanced_search_chain(user_input: str, config: RunnableConfig):
    """
    The chain is able to handle tool requests and provide detailed responses.
    """
    print(f"\nšŸ¤– Processing query: '{user_input}'")
   
    input_data = {"user_input": user_input}
   
    print("šŸ“¤ Sending to Gemini...")
    ai_response = main_chain.invoke(input_data, config=config)
   
    if ai_response.tool_calls:
        print(f"šŸ› ļø  AI requested {len(ai_response.tool_calls)} tool call(s)")
       
        tool_messages = []
        for i, tool_call in enumerate(ai_response.tool_calls):
            print(f"   šŸ” Executing search {i+1}: {tool_call['args']['query']}")
           
            tool_result = search_tool.invoke(tool_call)
           
            tool_msg = ToolMessage(
                content=tool_result,
                tool_call_id=tool_call['id']
            )
            tool_messages.append(tool_msg)
       
        print("šŸ“„ Getting final response with search results...")
        final_input = {
            **input_data,
            "messages": [ai_response] + tool_messages
        }
        final_response = main_chain.invoke(final_input, config=config)
       
        return final_response
    else:
        print("ā„¹ļø  No tool calls needed")
        return ai_response

We define enhanced_search_chain using the @chain decorator from LangChain, enabling it to handle user queries with dynamic tool usage. It takes the user’s input and a configuration, passes them through the main chain (the prompt plus the tool-bound Gemini model), and checks whether the AI requested any tool calls. If so, each tool call is executed, ToolMessage objects are created, and the chain is invoked again with the tool results to produce a final, context-rich answer. If no tool call is made, the AI response is returned directly.
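The control flow above can be sketched without LangChain at all. In the framework-free mock below, fake_model and fake_search are stand-ins we invented for illustration; they show the same two-pass pattern: the first invocation returns tool calls, the tools run, and a second invocation with the tool messages yields the final answer.

```python
def fake_model(messages):
    """Stand-in for the Gemini chain: requests a search on the first pass,
    then answers once a tool message is present."""
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"role": "ai", "content": "",
                "tool_calls": [{"id": "call-1", "args": {"query": messages[0]["content"]}}]}
    return {"role": "ai", "content": f"Answer based on: {tool_msgs[0]['content']}",
            "tool_calls": []}

def fake_search(args):
    """Stand-in for the Jina search tool."""
    return f"results for '{args['query']}'"

def run_chain(user_input):
    messages = [{"role": "human", "content": user_input}]
    response = fake_model(messages)            # first pass: model may request tools
    if response["tool_calls"]:
        messages.append(response)              # keep the AI's tool-call message
        for call in response["tool_calls"]:    # execute each requested tool
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": fake_search(call["args"])})
        response = fake_model(messages)        # second pass: final grounded answer
    return response

print(run_chain("what is langgraph")["content"])
```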

def test_search_chain():
    """Test the search chain with various queries"""

    test_queries = [
        "what is langgraph",
        "latest developments in AI for 2024",
        "how does langchain work with different LLMs"
    ]

    print("\n" + "="*60)
    print("🧪 TESTING ENHANCED SEARCH CHAIN")
    print("="*60)

    for i, query in enumerate(test_queries, 1):
        print(f"\nšŸ“ Test {i}: {query}")
        print("-" * 50)

        try:
            response = enhanced_search_chain.invoke(query)
            print(f"āœ… Response: {response.content[:300]}...")

            if hasattr(response, 'tool_calls') and response.tool_calls:
                print(f"šŸ› ļø  Used {len(response.tool_calls)} tool call(s)")

        except Exception as e:
            print(f"āŒ Error: {str(e)}")

        print("-" * 50)

The test_search_chain() function validates the entire assistant setup by running a series of test queries through enhanced_search_chain. The queries cover LangChain, AI, and tool-related topics, and the results are printed along with whether the search tool was used. This verifies that the AI can trigger web searches, process the results, and return useful responses.

if __name__ == "__main__":
    print("\nšŸš€ Starting enhanced LangChain + Gemini + Jina Search demo...")
    test_search_chain()

    print("\n" + "="*60)
    print("šŸ’¬ INTERACTIVE MODE - Ask me anything! (type 'quit' to exit)")
    print("="*60)

    while True:
        user_query = input("\nšŸ—£ļø  Your question: ").strip()
        if user_query.lower() in ['quit', 'exit', 'bye']:
            print("šŸ‘‹ Goodbye!")
            break

        if user_query:
            try:
                response = enhanced_search_chain.invoke(user_query)
                print(f"\nšŸ¤– Response:\n{response.content}")
            except Exception as e:
                print(f"āŒ Error: {str(e)}")

We run the AI assistant directly when the script is executed. It first calls test_search_chain() to validate the setup with predefined queries and confirm everything is installed correctly. The script then enters interactive mode, where users can type custom questions and receive AI-generated responses, enhanced with live search results when necessary. The loop continues until the user types ā€˜quit’, ā€˜exit’, or ā€˜bye’, providing an intuitive, hands-on way to interact with the system.

In summary, we have built an enhanced AI assistant that combines LangChain’s modular framework, Gemini 2.0 Flash’s generative capability, and Jina Search’s real-time web search functionality. This hybrid approach shows how an AI model can extend its knowledge beyond static training data to provide reliable, timely, and well-cited information. The project can be extended by adding more tools, customizing the prompts, or exposing it as an API for wider applications, and this foundation supports building powerful, context-aware intelligent systems.


Check out the Notebook on GitHub.

