Crash Course on Developing AI Applications with LangChain

Alex Merced

Introduction

Large Language Models (LLMs) have revolutionized the way developers build AI-powered applications, from chatbots to intelligent search systems. However, managing LLM interactions effectively—structuring prompts, handling memory, and integrating external tools—can be complex. This is where LangChain comes in.

LangChain is an open-source framework designed to simplify working with LLMs, enabling developers to create powerful AI applications with ease. By providing a modular approach, LangChain allows you to compose prompt templates, chains, memory, and agents to build flexible and scalable solutions.

In this guide, we'll introduce you to LangChain and its companion libraries, including:

  • langchain_community: A collection of third-party integrations and community-maintained utilities.
  • langchain_openai: A dedicated library for working with OpenAI models.

We'll walk you through key LangChain concepts, installation steps, and practical code examples to help you get started. Whether you're looking to build chatbots, AI-powered search engines, or decision-making agents, this guide will give you the foundation you need to start developing with LangChain.

What is LangChain?

LangChain is an open-source framework that simplifies building applications powered by Large Language Models (LLMs). Instead of manually handling prompts, API calls, and responses, LangChain provides a structured way to chain together different components such as prompts, memory, and external tools.

Why Use LangChain?

Without LangChain, interacting with an LLM typically involves:

  1. Formatting a prompt manually.
  2. Sending the request to an API (e.g., OpenAI, Cohere).
  3. Parsing the response and deciding the next action.

LangChain automates and streamlines these steps, making it easier to build complex AI applications with minimal effort.
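
For contrast, here is roughly what the manual version of those three steps looks like with the raw OpenAI SDK (a minimal sketch; the model name is an assumption for illustration):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Format the prompt manually
prompt = "Explain LangChain in simple terms."

# 2. Send the request to the API
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)

# 3. Parse the response and decide the next action yourself
print(completion.choices[0].message.content)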

Key Use Cases

LangChain is widely used for:

  • Chatbots & Virtual Assistants – Retaining conversation context and improving responses.
  • Retrieval-Augmented Generation (RAG) – Enhancing LLM responses by fetching external data sources.
  • Data Processing & Summarization – Analyzing and summarizing large documents.
  • AI Agents – Creating autonomous agents that interact with external APIs and databases.

By leveraging LangChain’s modular architecture, you can integrate various models, tools, and memory mechanisms to build dynamic AI-driven applications.

Core Concepts in LangChain

LangChain is built around a modular architecture that allows developers to compose different components into a pipeline. Here are some of the key concepts you need to understand when working with LangChain:

1. Prompt Templates

Prompt templates help structure the input given to an LLM. Instead of writing static prompts, you can create dynamic templates that format user inputs into well-structured queries.

Example:

from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in simple terms."
)

formatted_prompt = template.format(topic="LangChain")
print(formatted_prompt)

This ensures that every input follows a structured format before being passed to the model.

2. LLMs and Model Wrappers

LangChain provides an easy way to interface with different LLM providers like OpenAI, Hugging Face, and more.

Example:

from langchain_openai import OpenAI

llm = OpenAI(api_key="your_api_key")
response = llm.invoke("What is LangChain?")
print(response)

This allows you to seamlessly query the LLM without worrying about API details.

3. Chains

Chains allow you to combine multiple components (e.g., a prompt template and an LLM) into a single workflow.

Example:

from langchain.chains import LLMChain

llm_chain = LLMChain(llm=llm, prompt=template)
response = llm_chain.run("machine learning")
print(response)

Here, the prompt is formatted and automatically passed to the LLM, reducing boilerplate code.

4. Memory

Memory allows your application to retain context between interactions, which is crucial for chatbots and multi-turn conversations.

Example:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hello"}, {"output": "Hi, how can I help you?"})
print(memory.load_memory_variables({}))

With memory, LangChain can track past interactions and use them to generate more coherent responses.

5. Agents and Tools

Agents allow an LLM to make decisions dynamically. Instead of following a predefined sequence, an agent determines which tool to call based on the user’s query.

Example:

from langchain.agents import initialize_agent, AgentType
from langchain.tools import Tool

# Agents pass a tool a single string, so the function parses "a + b" input itself
def add_numbers(expression: str) -> str:
    a, b = expression.split("+")
    return str(int(a.strip()) + int(b.strip()))

tool = Tool(
    name="Calculator",
    func=add_numbers,
    description="Adds two numbers written as 'a + b'."
)

agent = initialize_agent(
    tools=[tool],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

response = agent.run("What is 3 + 5?")
print(response)

This enables an LLM to call functions, fetch data, or interact with APIs to generate more intelligent responses.

By understanding these core concepts, you can start building more structured and powerful AI applications with LangChain. In the next section, we’ll set up LangChain and its companion libraries to start developing real-world applications.

Installing LangChain and Companion Libraries

Before we start building with LangChain, we need to install the necessary packages. LangChain is modular, meaning that different functionalities are split across separate libraries. The main ones you'll need are:

  • langchain – The core LangChain library.
  • langchain_community – A collection of integrations for third-party tools and services.
  • langchain_openai – A dedicated package for working with OpenAI models.
  • openai – The OpenAI Python SDK for API access.

1. Installing LangChain and Dependencies

You can install the required libraries using pip:

pip install langchain langchain_community langchain_openai openai

This command installs the core LangChain framework, the community integrations, and the OpenAI packages.

2. Setting Up an OpenAI API Key

If you plan to use OpenAI models, you’ll need an API key. Follow these steps:

  • Sign up at OpenAI.
  • Navigate to your API settings and generate an API key.
  • Store your API key securely.

You can set your API key in an environment variable:

export OPENAI_API_KEY="your_api_key_here"

Or pass it directly in your code:

import os
os.environ["OPENAI_API_KEY"] = "your_api_key_here"

3. Verifying the Installation

To test if everything is installed correctly, run the following Python script:

from langchain_openai import OpenAI

llm = OpenAI(api_key="your_api_key_here")
response = llm.invoke("Say hello in French.")
print(response)

If you get a response along the lines of "Bonjour!", your setup is working properly.

4. Understanding the Role of Companion Libraries

  • langchain_community: Contains integrations for databases, vector stores, and APIs.
  • langchain_openai: A streamlined package for interacting with OpenAI's models.
  • Other integrations: LangChain supports many LLM providers (Cohere, Hugging Face, etc.), which can be installed separately.
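
For example, provider-specific packages follow a langchain-<provider> naming pattern (the package names below are assumptions; check PyPI for the current ones):

pip install langchain-cohere langchain-huggingface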

With LangChain and its dependencies installed, you're ready to start building AI-powered applications. In the next section, we'll explore how to use LangChain with OpenAI models and create structured workflows.

Setting Up and Using LangChain

Now that we have LangChain installed, let's explore how to use it for interacting with LLMs, structuring prompts, and building simple AI workflows.

1. Connecting to an OpenAI Model

The first step in using LangChain is to connect to an LLM. We'll start by using OpenAI's models.

Example: Basic Query to an OpenAI Model

from langchain_openai import OpenAI

llm = OpenAI(api_key="your_api_key_here")

response = llm.invoke("What is LangChain?")
print(response)

This sends a query to OpenAI and prints the response. The invoke method is the recommended way to interact with LLMs in LangChain.

2. Working with Prompt Templates

A prompt template ensures that user input is formatted consistently before being sent to an LLM. This is useful when you need structured responses.

Example: Creating and Using a Prompt Template

from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in simple terms."
)

formatted_prompt = template.format(topic="machine learning")
print(formatted_prompt)

This generates a properly structured prompt:
"Explain machine learning in simple terms."

You can pass this formatted prompt to an LLM for processing.
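
For example, reusing the llm object from step 1:

response = llm.invoke(formatted_prompt)
print(response)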

3. Building a Basic Chain

A chain connects multiple components, such as prompts and LLMs, to automate workflows.

Example: Using a Chain to Generate Responses

from langchain.chains import LLMChain

llm_chain = LLMChain(llm=llm, prompt=template)
response = llm_chain.run("data science")
print(response)

Here, LangChain automatically formats the prompt and sends it to the LLM, reducing manual effort.

4. Using Memory to Maintain Context

By default, LLMs don’t remember past interactions. LangChain provides memory components to store and retrieve conversation history.

Example: Storing Conversation History

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# Simulating a conversation
memory.save_context({"input": "Hello"}, {"output": "Hi, how can I help you?"})
memory.save_context({"input": "What is LangChain?"}, {"output": "LangChain is a framework for working with LLMs."})

# Retrieving stored interactions
print(memory.load_memory_variables({}))

This ensures that previous interactions can be referenced in future queries.
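
To put that history to work, you can wire the memory into a conversation chain (a minimal sketch using LangChain's classic ConversationChain; the llm object is the one created in step 1):

from langchain.chains import ConversationChain

conversation = ConversationChain(llm=llm, memory=memory)

# The stored history is injected into the prompt automatically
response = conversation.run("What did I ask you about earlier?")
print(response)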

5. Implementing an Agent with Tools

An agent allows LLMs to dynamically decide which tool to use for a given query. For example, we can create an agent that uses a calculator tool.

Example: Creating an Agent to Perform Calculations

from langchain.agents import initialize_agent, AgentType
from langchain.tools import Tool

# Defining a simple addition tool; the agent passes a single string,
# so the function parses "a + b" style input itself
def add_numbers(expression: str) -> str:
    a, b = expression.split("+")
    return str(int(a.strip()) + int(b.strip()))

tool = Tool(
    name="Calculator",
    func=add_numbers,
    description="Adds two numbers written as 'a + b'."
)

# Creating an agent with the tool
agent = initialize_agent(
    tools=[tool],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

# Running the agent
response = agent.run("What is 5 + 7?")
print(response)

This enables the LLM to recognize when to use the calculator tool instead of responding based purely on its pre-trained knowledge.

What’s Next?

Now that we've covered basic LangChain functionalities, you can start experimenting with more advanced features like:

  • Retrieval-Augmented Generation (RAG) – Enhancing LLMs with external knowledge sources.
  • Vector Databases – Storing and retrieving information efficiently.
  • Custom Tools and APIs – Expanding agents to interact with real-world data.

In the next section, we'll discuss best practices for using LangChain efficiently and how to scale applications for production use.

Best Practices and Next Steps

Now that you understand the basics of LangChain—connecting to LLMs, structuring prompts, using chains, memory, and agents—let’s discuss some best practices for building efficient and scalable applications.


1. Optimize Prompt Engineering

  • Use clear and structured prompt templates to get better responses from LLMs.
  • Experiment with few-shot learning by providing example inputs and outputs.
  • Keep prompts concise to reduce token usage and improve performance.

Example: Few-Shot Prompting

from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["word"],
    template="Convert the following word into plural form.\n\nExamples:\n- dog -> dogs\n- cat -> cats\n- book -> books\n\n- {word} ->"
)

print(template.format(word="tree"))

Providing examples improves the model's accuracy.
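
LangChain also provides a FewShotPromptTemplate that builds this pattern from a list of examples (a short sketch of its standard usage):

from langchain.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["singular", "plural"],
    template="- {singular} -> {plural}"
)

few_shot = FewShotPromptTemplate(
    examples=[
        {"singular": "dog", "plural": "dogs"},
        {"singular": "cat", "plural": "cats"},
    ],
    example_prompt=example_prompt,
    prefix="Convert the following word into plural form.",
    suffix="- {word} ->",
    input_variables=["word"],
)

print(few_shot.format(word="tree"))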

2. Use Memory Efficiently

Only use conversation memory when necessary (e.g., chatbots).

Choose the right memory type:

  • ConversationBufferMemory – Stores all conversation history.
  • ConversationSummaryMemory – Summarizes past interactions.
  • ConversationKGMemory – Extracts key facts from a conversation.

Example: Using Summary Memory

from langchain.memory import ConversationSummaryMemory
from langchain_openai import OpenAI

llm = OpenAI(api_key="your_api_key")
memory = ConversationSummaryMemory(llm=llm)

memory.save_context({"input": "I love pizza."}, {"output": "Pizza is a great choice!"})
summary = memory.load_memory_variables({})
print(summary)

This helps reduce storage while maintaining context.

3. Handle API Costs and Rate Limits

  • Use token-efficient prompts to reduce API costs.
  • Implement batch processing for multiple queries (sketched after the example below).
  • Monitor API usage with OpenAI’s rate limits in mind.

Example: Monitoring Token Usage

from langchain_openai import ChatOpenAI

# gpt-4 is a chat model, so use ChatOpenAI rather than the completion-style OpenAI class
llm = ChatOpenAI(api_key="your_api_key", model="gpt-4", max_tokens=100)
response = llm.invoke("Summarize the history of AI in 50 words.")
print(response.content)

Setting max_tokens prevents excessive token consumption.
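
For batch processing, LangChain's runnable interface exposes a batch method that runs several inputs concurrently; the max_concurrency config option caps parallel requests to help stay under rate limits (a minimal sketch):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(api_key="your_api_key", max_tokens=100)

questions = [
    "Define AI in one sentence.",
    "Define machine learning in one sentence.",
]

# Run the queries concurrently, at most two at a time
responses = llm.batch(questions, config={"max_concurrency": 2})
for r in responses:
    print(r.content)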

4. Enhance LLMs with External Knowledge (RAG)

Retrieval-Augmented Generation (RAG) improves LLM responses by fetching external data instead of relying solely on pre-trained knowledge.

  • Use vector databases like Pinecone or FAISS for document search.
  • Fetch real-time data from APIs.

Example: Querying an External Document

from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Load a previously saved FAISS index with OpenAI embeddings
embeddings = OpenAIEmbeddings(api_key="your_api_key")
vectorstore = FAISS.load_local(
    "faiss_index",
    embeddings,
    allow_dangerous_deserialization=True,  # required by recent versions to load pickled indexes
)

# Query the knowledge base
docs = vectorstore.similarity_search("What is LangChain?", k=2)
print(docs)

This retrieves relevant documents to supplement the LLM’s response.
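
To close the RAG loop, the retrieved text can be stuffed into a prompt before calling the model (a minimal sketch, reusing the docs from the query above and an llm object as created earlier):

# Concatenate the retrieved documents into a context block
context = "\n\n".join(doc.page_content for doc in docs)

response = llm.invoke(
    f"Answer using only this context:\n{context}\n\nQuestion: What is LangChain?"
)
print(response)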

5. Scale Applications for Production

When moving from prototyping to production, consider:

  • Caching responses to avoid redundant API calls.
  • Logging interactions for debugging and improvement.
  • Implementing user authentication for secured access.

Example: Implementing Response Caching

from langchain.globals import set_llm_cache
from langchain_community.cache import InMemoryCache
from langchain.chains import LLMChain

set_llm_cache(InMemoryCache())  # Caching is enabled globally, not per-chain

llm_chain = LLMChain(llm=llm, prompt=template)

response1 = llm_chain.run("machine learning")
response2 = llm_chain.run("machine learning")  # Served from the cache
print(response2)

Caching reduces API calls, improving performance and cost-efficiency.

Conclusion

LangChain provides a powerful framework for building AI applications that leverage Large Language Models (LLMs). By combining prompt engineering, chains, memory, and agents, LangChain simplifies the development process, making it easier to create chatbots, AI assistants, and retrieval-augmented generation (RAG) applications.

In this guide, we covered:

  • What LangChain is and why it’s useful.
  • Core concepts like prompt templates, chains, memory, and agents.
  • How to install and set up LangChain along with langchain_openai and langchain_community.
  • Practical code examples for using LangChain with OpenAI models.
  • Best practices for optimizing prompts, managing memory, and reducing API costs.
  • How to scale LangChain applications for production using caching and external knowledge retrieval.

By applying these concepts, you can start building custom AI-powered solutions with real-world impact.


Where to Go from Here?

If you're ready to take the next step, consider:

  1. Building a LangChain Project – Try creating a chatbot, document summarizer, or an AI-driven search engine.
  2. Exploring Vector Databases – Learn how to integrate Pinecone, FAISS, or ChromaDB for RAG applications.
  3. Joining the Community – Engage with other developers on LangChain's GitHub or Discord.

LangChain is continuously evolving, and staying updated with the latest features will help you build more advanced and efficient AI applications. Start experimenting and bring your AI ideas to life!
