Ďēv Šhãh 🥑
Get Started with LangChain: A Step-by-Step Tutorial for Beginners

Introduction

What's up, everyone? This is a tutorial for anyone who is a beginner to LangChain. It covers a very basic operation that can be done with LangChain: prompting an LLM and getting back the generated response. As a prerequisite, you should know Python. In case you are unfamiliar with topics such as LangChain and prompt templates, I would recommend checking out my previous blog on this topic.

For this tutorial, we are building a project that generates a tailored weekly roadmap for someone in IT who wants to move into a different profile, based on their experience.


Tutorial

Step 1: Create Virtual Environment for Python

Firstly, create a new folder and create a virtual environment for Python inside it using the following command.

python -m venv .venv 

This creates a virtual environment for installing Python dependencies. Next, activate the virtual environment by executing the activate script, which is usually under the .venv/Scripts (Windows) or .venv/bin (macOS/Linux) directory.
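The exact activation command depends on your operating system; for a typical setup it looks like this:

```shell
# macOS / Linux
source .venv/bin/activate

# Windows (PowerShell) -- uncomment this line instead:
# .venv\Scripts\Activate.ps1
```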

Step 2: Install required dependencies

Once the virtual environment is activated, create an app.py file as the main file. The following dependencies are required for this project; please install them using pip install [dependency]

python-dotenv
langchain
langchain_openai

Once they are installed, add the following import statements to the app.py file.

import os
from dotenv import load_dotenv
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate

load_dotenv()

Step 3: Create prompt_llm function

Next, create a function prompt_llm which accepts skill and experience. The purpose of this function is to declare an instance of the LLM and a prompt template, chain the LLM with the prompt template, invoke the chain, and return the response.

def prompt_llm(skill, experience):
    # Declare an instance of LLM.
    openai_api_key = os.getenv("OPENAI_API_KEY")

    llm = OpenAI(temperature=0.5, openai_api_key=openai_api_key)

    # Declare the prompt template.
    prompt_template = PromptTemplate(
        input_variables=["skill", "experience"],
        template="""
        You are a very experienced professional in IT. You can help anyone with any level of experience in IT to transition into a new role. Your job is to create a weekly roadmap for the user by analyzing the following skills which the user wants to learn, while keeping in consideration the experience level which the user has. Please return the specific areas which the user should focus on each week along with a project idea which helps user to get hands on experience of that week's topic. Additionally, also mention the amount of hours which the user should focus on learning the topic per week.

        Skill: {skill}

        User's experience: {experience}
        """
    )

    # Chain the llm with prompt template.
    chain = prompt_template | llm

    # Invoke the chain.
    generated_response = chain.invoke({
        'skill': skill,
        'experience': experience
    })

    # Return the generated response.
    return generated_response

Before explaining this function, we also need a .env file to store the OpenAI API key. Hence, create the file and store your OpenAI API key as follows.

OPENAI_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxx"

After the variable is added to the .env file, coming back to the explanation of the prompt_llm function: in the first step, it reads the OpenAI API key and stores it in a variable. It then declares an instance of OpenAI's model, passing the desired temperature and the API key.
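A missing key only surfaces later as a confusing authentication error, so you might add a fail-fast check right after loading it. The helper below is my own addition, not part of the original code:

```python
import os


def require_api_key(env_var="OPENAI_API_KEY"):
    """Return the API key from the environment, or raise a clear error."""
    key = os.getenv(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set. Add it to your .env file.")
    return key
```

You would then call `require_api_key()` instead of `os.getenv(...)` inside prompt_llm.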

The temperature ranges from 0 to 1. It is a setting that controls how creative the LLM's responses are: 0 means the LLM sticks to the most predictable output, while 1 allows it to be as creative as possible while generating the response. I prefer to keep it between 0.5 and 0.7, as that is a good balance. You can adjust this value as per the project's requirements.

Moving forward, it is time to create a prompt template, which accepts the skill the user wants to learn and the user's current experience as input variables. The template contains the instructions for the LLM, describing what the LLM is expected to do with the given data.
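Under the hood, a prompt template is essentially string substitution: the input variables are slotted into the `{skill}` and `{experience}` placeholders before the text is sent to the LLM. A simplified illustration using plain Python formatting (not LangChain's API, and with a shortened template):

```python
# A shortened version of the template from prompt_llm, for illustration.
template = """You are a very experienced professional in IT.
Create a weekly roadmap for the user.

Skill: {skill}

User's experience: {experience}
"""

# Fill in the placeholders the same way PromptTemplate does internally.
prompt = template.format(
    skill="DevOps",
    experience="Full Stack Developer with 2 years of experience",
)
print(prompt)
```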

Once the prompt template and the LLM are declared, they are chained together and stored in a variable. The chain's invoke method is then called with the input variables required by the prompt template, and the generated response is returned.
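Conceptually, the `|` operator builds a pipeline in which each component's output becomes the next component's input. A minimal pure-Python sketch of that idea, with a fake LLM standing in for the real one (this is an illustration of the pattern, not LangChain's actual implementation):

```python
class Runnable:
    """Toy version of the pipe pattern behind LangChain chains."""

    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # a | b -> a new Runnable that runs a first, then feeds b.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

    def invoke(self, x):
        return self.func(x)


# Stand-ins for the prompt template and the LLM.
template = Runnable(lambda d: f"Skill: {d['skill']}, Experience: {d['experience']}")
fake_llm = Runnable(lambda prompt: f"Roadmap based on -> {prompt}")

chain = template | fake_llm
result = chain.invoke({"skill": "DevOps", "experience": "2 years"})
print(result)
```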

Step 4: Create the main block

Finally, the prompt_llm function is ready and it is time to call it from the main block. Therefore, declare it as follows:

if __name__ == "__main__":
    skill = "DevOps"
    experience = "Full Stack Developer with 2 years of experience working with JS frameworks."

    response = prompt_llm(skill, experience)

    print(response)

It declares the variables that store the data about the user, calls the prompt_llm function with them, and prints the generated response. When you have completed all the requirements and run the file using python app.py, you should get a response like the following, generated by the LLM.

Expected Output

Note: keep in mind that you might not get the exact same response.


File Structure

Following is a visual representation of how your project should look.

.
├── .venv/
├── .env
└── app.py

Lastly, in case you want to learn how to integrate a vector database with such a project to build a RAG application, check out the following blog.


Final Words

This was a small tutorial on how you can leverage LangChain to develop AI-powered applications. Let me know if you have any questions; I will be happy to help. Lastly, follow my blogs and YouTube channel for more updates on AI-related technologies.
