Building a Simple Conversational Chatbot LLM Application with LangChain

LangChain provides a powerful framework for working with Large Language Models (LLMs), making it easier to build sophisticated applications, including conversational chatbots. In this blog, we’ll walk through the steps to create a simple chatbot application while exploring some of LangChain’s core functionalities.

Before diving into the chatbot implementation, let’s first understand LangChain’s Runnable interface. This interface forms the foundation of how LangChain interacts with LLMs, offering flexibility and control for developers.

Understanding LangChain’s Runnable Interface

The Runnable interface provides three key methods to interact with LLMs:

  1. invoke: This method takes a single input, processes it through the LLM, and returns the corresponding output. It’s ideal for handling one query at a time.
  2. batch: The batch method is designed for bulk processing. It allows you to send multiple inputs to the LLM and receive outputs for each input in a single call.
  3. stream: The stream method provides real-time output, delivering tokens as the LLM generates them. This is particularly useful for applications requiring a dynamic, streaming response.

What makes this interface even more powerful is its built-in support for retries, fallbacks, schema validation, and runtime configurability. LangChain also offers asyncio-compatible versions of these methods (ainvoke, abatch, and astream) for asynchronous programming. In this blog, however, we'll use the synchronous APIs for simplicity.
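
To give a flavor of those utilities, here is a minimal sketch of wrapping a model with retries and a fallback. The specific model names are illustrative assumptions, not recommendations:

from langchain_openai.chat_models import ChatOpenAI

# Primary and fallback models (model names are illustrative)
primary = ChatOpenAI(model="gpt-4o-mini")
backup = ChatOpenAI(model="gpt-3.5-turbo")

# Retry transient failures up to 3 attempts, then fall back to the backup model
robust_model = primary.with_retry(stop_after_attempt=3).with_fallbacks([backup])

print(robust_model.invoke("What is the capital of India?").content)

Because with_retry and with_fallbacks each return a Runnable, the wrapped model still exposes invoke, batch, and stream.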

Getting Started with LangChain: Examples

Here’s a quick example to demonstrate how to use the invoke, batch, and stream methods with LangChain:

from langchain_openai.llms import OpenAI
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize the model
model = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0.1, max_tokens=100)

# Single query with `invoke`
response = model.invoke("What is the capital of India?")
print(response, end="\n\n")

# Multiple queries with `batch`
responses = model.batch([
    "What is the capital of India?", 
    "Who is the Prime Minister of India?", 
    "What is India's national sport?"
])
print(responses)

# Streaming output with `stream`
for token in model.stream("Write 5 lines about Lord Krishna:"):
    print(token, end="", flush=True)

This example demonstrates how you can use LangChain to interact with LLMs, whether for single queries, bulk processing, or streaming responses.
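
Although this post sticks to the synchronous APIs, the asyncio-compatible counterparts follow the same shape. A minimal sketch, reusing the model defined above:

import asyncio

async def main():
    # ainvoke, abatch, and astream mirror invoke, batch, and stream
    response = await model.ainvoke("What is the capital of India?")
    print(response)

asyncio.run(main())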


Building a Simple Chatbot

Now, let’s create a conversational chatbot using LangChain. This chatbot will take user queries, process them with an LLM, and return concise responses.

Here’s the complete implementation:

from langchain_openai.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from dotenv import load_dotenv
from langchain_core.runnables import chain

# Load environment variables
load_dotenv()

# Define the prompt template
template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer my questions in two to three lines."),
    ("human", "{question}")
])

# Initialize the chat model
model = ChatOpenAI()

# Define the chatbot as a Runnable using the @chain decorator
@chain
def chatbot(question):
    prompt = template.invoke(question)
    # model.stream yields the answer chunk by chunk, so the
    # chatbot itself can be streamed or invoked as a whole
    for token in model.stream(prompt):
        yield token

# Interactive chatbot loop
while True:
    query = input("Enter your query (or type 'q' or 'exit' to quit): ")
    if query.lower() in ["q", "exit"]:
        print("Goodbye!")
        break

    # invoke() gathers the streamed chunks into a single message
    response = chatbot.invoke({"question": query})
    print(response.content)



Why Use the @chain Decorator?

The @chain decorator is a key feature of LangChain. It turns any decorated function into a Runnable, so the function automatically gains the interface's invoke, batch, and stream methods.

In this chatbot example:

  1. Enhanced Flexibility: The chatbot function gains access to invoke, batch, and stream methods, making it versatile and reusable.
  2. Simplified Logic: With @chain, you can focus on the chatbot’s logic while LangChain handles the underlying complexities.

For instance, the invoke method lets the chatbot process user queries end to end, while the stream method enables token-by-token output for real-time interactions, as sketched below.
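
Here is a minimal sketch of both, reusing the chatbot Runnable defined above (the questions are just examples):

# Stream the answer token by token instead of waiting for the full reply
for chunk in chatbot.stream({"question": "What is the capital of India?"}):
    print(chunk.content, end="", flush=True)
print()

# Or answer several questions in one call with batch
answers = chatbot.batch([
    {"question": "Who is the Prime Minister of India?"},
    {"question": "What is India's national sport?"},
])
for message in answers:
    print(message.content)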

Key Features of This Chatbot

  1. Interactive Interface: The chatbot accepts queries from the user until they choose to exit.
  2. Customizable Responses: It uses a prompt template to ensure answers are concise and user-friendly.
  3. Built-in Flexibility: The @chain decorator equips the chatbot function with all the capabilities of the Runnable interface.
  4. Real-time Output: By leveraging LangChain’s stream method, the chatbot can generate dynamic, token-by-token responses.

Conclusion

LangChain makes it incredibly easy to build applications powered by LLMs. By combining its robust Runnable interface with creative prompt engineering, you can develop anything from simple chatbots to complex conversational systems. The flexibility provided by the @chain decorator and the built-in utilities for retries, fallbacks, and schema validation make LangChain a powerful tool for developers.
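
As a taste of the schema side, chat models such as ChatOpenAI can be asked to return output that validates against a Pydantic model. A minimal sketch, where the Answer class is purely illustrative:

from pydantic import BaseModel
from langchain_openai.chat_models import ChatOpenAI

# Illustrative schema, not part of the chatbot above
class Answer(BaseModel):
    city: str
    country: str

structured_model = ChatOpenAI().with_structured_output(Answer)
result = structured_model.invoke("What is the capital of India?")
print(result.city, result.country)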

This simple chatbot is just the beginning. You can extend its capabilities by integrating APIs, adding custom logic, or exploring more advanced features of LangChain. The possibilities are endless!
Happy coding!

Thanks
Sreeni Ramadorai
