DEV Community

Arindam Majumder (CopilotKit)

🤖 Build an AI Travel Planner with CopilotKit, LangGraph & Google Maps API 🤩

TL;DR

In this easy-to-follow tutorial, we’ll take a simple travel planner application and enhance it with AI using CopilotKit.

By the end of this article, you’ll learn:

  • What an agentic copilot is and how to use it to add AI to your application.
  • How to let the copilot update the app state and render changes in real time.
  • How to use useCoAgentStateRender for human-in-the-loop workflows.

💁 Prefer learning through videos? Check out this video tutorial.

Here’s a preview of the application we’ll be building: 👇

💁 Interact with the travel planner demo by visiting this link.


Check out the repository

For this demonstration, we’ll start with a branch containing a basic, functional application without AI support, which we’ll enhance using CopilotKit. 🚀

Check out CopilotKit ⭐️

Check out the Starting Branch

We’ll begin with the coagents-travel-tutorial-start branch, which contains the starter code for our travel app:

```shell
git clone -b coagents-travel-tutorial-start https://github.com/CopilotKit/CopilotKit.git
cd CopilotKit
```

The tutorial code is located in the examples/coagents-travel directory, which includes two different directories:

  • ui/: a Next.js application where we’ll integrate the LangGraph agent.

  • agent/: a Python-based LangGraph agent.

Navigate to the examples/coagents-travel directory:

```shell
cd examples/coagents-travel
```

Install Dependencies

Let’s set up the Next.js application. Ensure you have pnpm installed on your system, as our starter code uses it as the package manager:

```shell
npm install -g pnpm@latest-10
```

Now, go to the ui directory and install all the required dependencies for the project:

```shell
cd ui
pnpm install
```

Retrieve API Keys

Create a .env file in the ui directory and populate it with the necessary environment variables:

```shell
# 👇 ui/.env

OPENAI_API_KEY=<your_openai_api_key>
NEXT_PUBLIC_CPK_PUBLIC_API_KEY=<your_public_copilotkit_api_key>
```

If you need a CopilotKit API key, you can grab one here. 🔑

Start the Project

Now that we have all the dependencies installed, launch the development server:

```shell
pnpm run dev
```

If everything is set up correctly, visit http://localhost:3000 to see the travel app in action. 😻

Now, we’ll take a look into the LangGraph agent and see how it works.


LangGraph Agent

Before we dive into integrating the LangGraph agent, let’s take a moment to understand how it works.

For this tutorial, we won’t be building the LangGraph agent from scratch. Instead, we’ll use the prebuilt version located in the agent directory.

💁 Interested in a detailed, step-by-step guide on building a LangGraph agent? Check out the LangGraph Quickstart Guide.

Let’s walk through the LangGraph agent to understand its inner workings before adding to our application.

Install LangGraph Studio

💡 LangGraph Studio is an excellent tool for visualizing and debugging LangGraph workflows. While not required for using CopilotKit, it’s highly recommended for understanding how LangGraph operates.

To install LangGraph Studio, refer to the LangGraph Studio setup guide.

Retrieve API Keys

Create a .env file in the agent directory and populate it with the following environment variables:

```shell
# 👇 agent/.env

OPENAI_API_KEY=<your_openai_api_key>
GOOGLE_MAPS_API_KEY=<your_google_maps_api_key>
```

Need a Google Maps API key? Follow this guide to get one. 🔑

Visualizing the LangGraph Agent

With LangGraph Studio installed, open the examples/coagents-travel/agent directory in the studio to load and visualize the LangGraph agent.

💡 Tip: It might take a little time to get everything set up, but once it’s installed and you've navigated to the agent folder, the visualization will look something like this:

LangGraph Visualization

Testing the LangGraph Agent

To test the LangGraph agent, simply add a message to the messages state variable and hit "Submit."

The agent will process the input, respond in the chat, and follow the defined workflow through the connected nodes.

In this example, the agent triggers the search_node to perform a search. Once it retrieves a response, it uses the trips_node to update the state by adding a new trip based on its findings. 🎯

Understanding Breakpoints

Let’s talk about a key concept in agentic copilots: human-in-the-loop.

Imagine your agent is eager to help but sometimes just a little too eager. Breakpoints are like a friendly pause button, letting users step in and approve the agent’s decisions before it goes rogue (or just makes a mistake). LangGraph makes this easy with breakpoints.

Here’s how it works:

  • Click on the trips_node and enable the interrupt_after option.

  • Try having the agent create a new trip. This time, it’ll stop mid-action and ask for your approval.

See? Even AI can learn good manners. 🙃

LangGraph Studio progress
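If you want to build an intuition for what `interrupt_after` does before touching LangGraph itself, here's a toy sketch in plain Python (illustration only — this is not the LangGraph API): the "graph" runs nodes in order, and when it finishes a node listed in `interrupt_after`, it hands control back so a human can approve before the next node runs.

```python
# Toy illustration of the breakpoint idea (not the LangGraph API):
# run nodes in order, but pause after any node listed in interrupt_after
# and wait for an external decision before continuing.

def run_graph(nodes, state, interrupt_after):
    for name, fn in nodes:
        state = fn(state)
        if name in interrupt_after:
            state["paused_after"] = name
            return state  # hand control back to the human
    state["paused_after"] = None
    return state

nodes = [
    ("search_node", lambda s: {**s, "results": ["Central Park"]}),
    ("trips_node", lambda s: {**s, "trips": s["results"]}),
]

# Execution stops right after trips_node; the UI can now ask for approval.
state = run_graph(nodes, {}, interrupt_after={"trips_node"})
```

The real thing is more sophisticated (checkpointing, resuming mid-graph), but the control flow is the same: work, pause, wait for a decision.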

Leave LangGraph Studio running

Your agent needs a home, and for now, LangGraph Studio is where it’s staying. Leave the studio running locally; you’ll see its URL in the bottom-left corner of the application.

LangGraph Studio URL

We’ll use this URL later to connect our CopilotKit setup to the LangGraph agent.

We've done a great job so far. Now let's put all of this to work by integrating the LangGraph agent into our travel app as an agentic copilot. 🤖


Setting Up CopilotKit

Now for the fun part — let's add CopilotKit to bring everything together. Since we have both the application and the agent running, we're just one step away from integrating CopilotKit into our application.

For this tutorial, we will install the following dependencies:

  • @copilotkit/react-core: The core library for CopilotKit, which contains the CopilotKit provider and useful hooks.

  • @copilotkit/react-ui: The UI library for CopilotKit, which contains the CopilotKit UI components such as the sidebar, chat popup, textarea, and more.

Install Dependencies

First, navigate to the ui directory if you’re not already there:

```shell
cd ../ui
```

Then, install the CopilotKit packages:

```shell
pnpm add @copilotkit/react-core @copilotkit/react-ui
```

These two packages are all it takes to install CopilotKit in a React application. The @copilotkit/react-core package provides the core CopilotKit functionality, while @copilotkit/react-ui provides pre-built UI components we can plug in directly.

Adding CopilotKit

There are two ways to set up CopilotKit:

  • Copilot Cloud: Fast, super easy to get started, and fully managed.

  • Self-Hosting: More control but comes with extra complexity.

For this tutorial, we’re going the Cloud route (because why bother with the extra complexity for now?), but feel free to self-host if you’re curious. Check out the self-hosting guide if that's your thing.

Setting Up Copilot Cloud

Here’s how you can get started with Copilot Cloud:

  • Create an account

Head over to Copilot Cloud and sign up. It takes just about a minute.

  • Get Your API Key

Once you're logged in, follow the steps shown on the screen to get your Copilot Cloud Public API Key. You'll also require an OpenAI API key.

Set your OpenAI API key, click the checkmark, and just like that, you'll get your public key.

CopilotKit Cloud UI

  • Add the API Key to your .env

Update the .env file in the ui directory with your Copilot Cloud API key:

```shell
# 👇 ui/.env

# Rest of the env variables...
NEXT_PUBLIC_CPK_PUBLIC_API_KEY=<your_copilotkit_public_key>
```
  • Configure the CopilotKit Provider

Now, to integrate CopilotKit into your application, wrap your app in the CopilotKit provider.

Wrapping our application with the provider ensures that the UI components we'll add from @copilotkit/react-ui can interact with the CopilotKit SDK.

Edit the ui/app/page.tsx with the following lines of code:

```tsx
// 👇 ui/app/page.tsx

"use client";

// Rest of the imports...
import { CopilotKit } from "@copilotkit/react-core";

// Rest of the code...

export default function Home() {
  return (
    <CopilotKit
      publicApiKey={process.env.NEXT_PUBLIC_CPK_PUBLIC_API_KEY}
    >
      <TooltipProvider>
        <TripsProvider>
          <main className="h-screen w-screen">
            <MapCanvas />
          </main>
        </TripsProvider>
      </TooltipProvider>
    </CopilotKit>
  );
}
```
  • CopilotKit UI Components

CopilotKit comes with several ready-to-use components, such as <CopilotPopup /> and <CopilotSidebar />. Just drop them in, and they work (and look great) out of the box.

If you don't want to go with the built-in components, no problem! CopilotKit also supports headless mode with useCopilotChat, so you can go full DIY if you're feeling creative. 😉

In this tutorial, we’ll use the <CopilotSidebar /> component to display the chat sidebar. However, the approach is the same for any of the other pre-built UI components.

Edit the ui/app/page.tsx file to include the <CopilotSidebar /> component, and make sure the CSS styles are imported.

```tsx
// 👇 ui/app/page.tsx

"use client";

// Rest of the imports...
import { CopilotKit } from "@copilotkit/react-core";
import { CopilotSidebar } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

// Rest of the code...

export default function Home() {
  return (
    <CopilotKit
      publicApiKey={process.env.NEXT_PUBLIC_CPK_PUBLIC_API_KEY}
    >
      <CopilotSidebar
        defaultOpen={true}
        clickOutsideToClose={false}
        labels={{
          title: "Travel Planner",
          initial: "Hi! 👋 I'm here to plan your trips. I can help you manage your trips, add places to them, or just generally work with you to plan a new one.",
        }}
      />
      <TooltipProvider>
        <TripsProvider>
          <main className="h-screen w-screen">
            <MapCanvas />
          </main>
        </TripsProvider>
      </TooltipProvider>
    </CopilotKit>
  );
}
```

First, we import the required modules and custom styles to make the sidebar look great out of the box 👌. Then, drop in the <CopilotSidebar /> component.

In the <CopilotSidebar /> component, you can pass the labels props to change the title and the initial chat message from the AI.

Now, get back to your app. Look to the right, and voilà! There’s a shiny new chat sidebar all ready to be used with just a few lines of code. 😻

CopilotKit ChatSidebar

But there's still something missing: the copilot's ability to make decisions. We'll add that capability with the help of our LangGraph agent in the agent directory.


Making Your Copilot Agentic

We’ve got a LangGraph agent running in LangGraph Studio, and a non-agentic copilot that’s functional but… not quite as smart as it could be. Let’s give the copilot some real decision-making abilities! 😎

A Quick Look at React State

Let's quickly review how the app's state works. Open up the lib/hooks/use-trips.tsx file.

What you’ll find here is the TripsProvider, which defines a lot of useful utilities. The star of the show? The state object, shaped by the AgentState type.

This state is accessible throughout the app using the useTrips hook, which feeds components like TripCard, TripContent, and TripSelect.

If you’ve worked with React apps before, this should feel familiar — managing state through context or a library is pretty standard stuff.

Merging the Agent with State

Now comes the important part: connecting our LangGraph agent to its state. To achieve this, we will set up a remote endpoint and use the useCoAgent hook to make the magic happen. 🌟

  • Setting Up a Tunnel

Remember the LangGraph Studio endpoint from earlier? You’ll need it now! If you’re using Copilot Cloud, you're all set.

If you’ve gone the self-hosted route, follow the steps outlined here.

To connect our locally running LangGraph agent and Copilot Cloud, we'll use the CopilotKit CLI. Grab the port number of your LangGraph Studio endpoint.

💁 LangGraph Studio Port: You'll find it at the bottom-left corner of the Studio interface.

LangGraph Studio Endpoint

Now, fire up your terminal and run this command:

```shell
# Replace the <port_number> placeholder with the actual port number
npx @copilotkit/cli tunnel <port_number>
```

Boom! You’ve created a tunnel. 🎉 Here’s what it’ll show you:

```
✔ Tunnel created successfully!
Tunnel Information:
Local: localhost:54209
Public URL: https://light-pandas-argue.loca.lt
Press Ctrl+C to stop the tunnel
```

Save that public URL. 🔖 That's going to be the gateway between our locally running LangGraph agent and CopilotKit Cloud.

💁 For the next step, we'll need a LangSmith API key. Follow this guide to get one.

  • Connecting the Tunnel to Copilot Cloud

Head over to Copilot Cloud, scroll to the Remote Endpoints section, and hit the + Add New button.

  • Select the LangGraph platform.
  • Add the public URL (the one generated by the CopilotKit CLI) and your LangSmith API key.
  • Click Create.

🎉 Done! Your agent’s endpoint is now listed, and CopilotKit knows exactly where to send requests when the agent is called.

  • Locking the agent

Since we only have a single agent here, let's lock the <CopilotKit /> provider so that all requests go to this specific agent. To do that, simply add the agent's name to the props.

💁 Curious about handling multiple agents? Check out the multi-agent concept guide.

```tsx
// 👇 ui/app/page.tsx

// Rest of the code...
<CopilotKit
  // Rest of the code...
  agent="travel"
>
  {/* Rest of the code... */}
</CopilotKit>
```

We set the agent name to travel because that's how it's defined in the agent/langgraph.json file.

And just like this, the copilot is now truly agentic. It can not only hold conversations but also make decisions. Pretty cool, right? 🤯

Hooking Up the Agent and State

At this point, we want to connect the LangGraph agent's state with our app's state. This will allow us to add real-time dynamic interactions!

LangGraph agents keep track of their own state, as you've already seen in the bottom-left corner of LangGraph Studio.

🤔 So, what's the idea?

We want a two-way connection between these two states, and CopilotKit's useCoAgent hook gives us exactly that.

Edit the ui/lib/hooks/use-trips.tsx file with the following lines of code to add the useCoAgent hook.

```tsx
// 👇 ui/lib/hooks/use-trips.tsx

// Rest of the imports...
import { AgentState, defaultTrips } from "@/lib/trips";
import { useCoAgent } from "@copilotkit/react-core";

export const TripsProvider = ({ children }: { children: ReactNode }) => {
  const { state, setState } = useCoAgent<AgentState>({
    name: "travel",
    initialState: {
      trips: defaultTrips,
      selected_trip_id: defaultTrips[0].id,
    },
  });

  // Rest of the code...
```

Yeah, you saw that right. That's all it takes to set up two-way state syncing. 😯

Now, let's break down the code line by line:

💡 The useCoAgent hook is generic, which means you can specify a type that mirrors your LangGraph agent’s state.
In this example, we use AgentState to keep the types consistent. You could get away with typecasting it as any, but that's bad practice in general, so avoid it where you can.

The name parameter ties everything back to your graph’s name from agent/langgraph.json. Make sure to name it correctly, as it ensures the agent and our app are always on the same page.

For the initialState, we use defaultTrips from @/lib/types.ts (providing an initial state is optional). We seed it with a few trips so we can test everything right away.

This is how the initial state, i.e., defaultTrips, is arranged:

```ts
// 👇 ui/lib/types.ts

export const defaultTrips: Trip[] = [
  {
    id: "1",
    name: "Business Trip to NYC",
    center_latitude: 40.7484,
    center_longitude: -73.9857,
    places: [
      {
        id: "1",
        name: "Central Park",
        address: "New York, NY 10024",
        description: "A famous park in New York City",
        latitude: 40.785091,
        longitude: -73.968285,
        rating: 4.7,
      },
      {
        id: "3",
        name: "Times Square",
        address: "Times Square, New York, NY 10036",
        description: "A famous square in New York City",
        latitude: 40.755499,
        longitude: -73.985701,
        rating: 4.6,
      },
    ],
    zoom_level: 14,
  },
  // Rest of the trips...
];
```

Time to Test it!

Fire up your app and ask the Copilot something about your trips.

```
How many trips do I have?
```

See how the agent pulls the data from the app's state? Magic, isn't it? 😻

The state is shared between the application and the agent, so try deleting/editing a trip manually, ask the question again, and it should reply accordingly:

```
What trips do I have now?
```

The agent knows what’s up. Better yet, you can give the agent tasks directly:

```
Add some hotels to my Paris trip
```

And voilà! The state updates, and your UI reflects the changes.
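What makes all of this work boils down to shared state. Here's a minimal sketch in plain Python (illustration only — the real sync is handled by useCoAgent and CopilotKit, not by a shared dict): both the app and the agent read and write the same state object, so each side always sees the other's edits.

```python
# Toy model of the shared app/agent state: both "sides" operate on the
# same dict, so an edit from either is immediately visible to the other.

state = {"trips": [{"name": "Business Trip to NYC"}]}

def agent_answer(state):
    # The agent reads the shared state when asked "How many trips do I have?"
    return f"You have {len(state['trips'])} trip(s)."

def app_delete_trip(state, name):
    # The UI mutates the same shared state.
    state["trips"] = [t for t in state["trips"] if t["name"] != name]

answer_before = agent_answer(state)
app_delete_trip(state, "Business Trip to NYC")
answer_after = agent_answer(state)
```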

The core functionality of the app is done for now. We just need to improve the user experience by adding streaming text and other features to provide real-time responses to the user.


Stream the Response in the UI

Now that we can interact with our agent, retrieve and update data, why not level up the user experience by adding a text stream feature?

It's like watching real-time progress while the agent works, similar to how you see many other popular AIs like ChatGPT respond.

In this step, we’ll be implementing the copilotkit_emit_state SDK function within our LangGraph agent, so the agent will emit progress as it’s working. 🔥

Installing the CopilotKit SDK

First things first, let’s get the CopilotKit SDK installed. Since we’re working with a Python-based agent here (and managing it with poetry), we’ll install the Python SDK.

💁 Not sure how to install poetry? You can find the installation guide here.

```shell
poetry add copilotkit==0.1.31a4
```

Since we'll be editing the search_node, we'll jump right into the search.py file.

Manually emitting the agent’s state

With CoAgents, the agent’s state is emitted when a node changes (i.e., when an edge is traversed). But what if we want to display progress in the middle of an action? Great news! We can manually emit the state using copilotkit_emit_state.
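The idea can be sketched in a few lines of plain Python before we touch the SDK (the emit callback below is a stand-in for the real copilotkit_emit_state call, not its actual signature):

```python
# Toy version of "emit intermediate state" (plain Python, not the SDK):
# instead of only returning state when the node finishes, the node calls
# an emit callback as the work progresses.

def search_node(state, emit):
    state["search_progress"] = [{"query": q, "done": False} for q in state["queries"]]
    emit(state)                       # progress is visible before any results
    for entry in state["search_progress"]:
        entry["done"] = True          # pretend this query's search finished
        emit(state)                   # progress is visible after each query
    return state

emitted = []  # record how many queries were done at each emit
search_node(
    {"queries": ["hotels in Paris"]},
    lambda s: emitted.append(sum(e["done"] for e in s["search_progress"])),
)
```

Each emit is a snapshot the UI can render immediately, which is exactly what gives us the streaming-progress effect.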

Let’s add a custom configuration to the search_node so that we can emit the intermediate state.

Open up agent/travel/search.py and add the following lines of code:

```python
# 👇 agent/travel/search.py

# Rest of the imports...
from copilotkit.langchain import copilotkit_emit_state, copilotkit_customize_config

async def search_node(state: AgentState, config: RunnableConfig):
    """
    The search node is responsible for searching for places.
    """
    ai_message = cast(AIMessage, state["messages"][-1])

    config = copilotkit_customize_config(
        config,
        emit_intermediate_state=[{
            "state_key": "search_progress",
            "tool": "search_for_places",
            "tool_argument": "search_progress",
        }],
    )

    # Rest of the code...
```

Emitting the intermediate state

Now, let’s use copilotkit_emit_state to emit the state manually as the search progresses. You’ll see updates for every query we send.

Let’s edit agent/travel/search.py again to emit state at the beginning of our search and as results come in:

```python
# 👇 agent/travel/search.py

# Rest of the code...
async def search_node(state: AgentState, config: RunnableConfig):
    """
    The search node is responsible for searching for places.
    """
    ai_message = cast(AIMessage, state["messages"][-1])

    config = copilotkit_customize_config(
        config,
        emit_intermediate_state=[{
            "state_key": "search_progress",
            "tool": "search_for_places",
            "tool_argument": "search_progress",
        }],
    )

    # ^ Previous code

    state["search_progress"] = state.get("search_progress", [])
    queries = ai_message.tool_calls[0]["args"]["queries"]

    for query in queries:
        state["search_progress"].append({
            "query": query,
            "results": [],
            "done": False
        })

    await copilotkit_emit_state(config, state)

    # Rest of the code...
```

Updating and Emitting Progress

Now it’s time to show the results in real-time. As the search completes, we’ll update the progress.

Finally, update the agent/travel/search.py with the following lines of code:

```python
# 👇 agent/travel/search.py

# Rest of the code...
async def search_node(state: AgentState, config: RunnableConfig):
    """
    The search node is responsible for searching for places.
    """
    ai_message = cast(AIMessage, state["messages"][-1])

    config = copilotkit_customize_config(
        config,
        emit_intermediate_state=[{
            "state_key": "search_progress",
            "tool": "search_for_places",
            "tool_argument": "search_progress",
        }],
    )

    state["search_progress"] = state.get("search_progress", [])
    queries = ai_message.tool_calls[0]["args"]["queries"]

    for query in queries:
        state["search_progress"].append({
            "query": query,
            "results": [],
            "done": False
        })

    await copilotkit_emit_state(config, state)

    # ^ Previous code

    places = []
    for i, query in enumerate(queries):
        response = gmaps.places(query)
        for result in response.get("results", []):
            place = {
                "id": result.get("place_id", f"{result.get('name', '')}-{i}"),
                "name": result.get("name", ""),
                "address": result.get("formatted_address", ""),
                "latitude": result.get("geometry", {}).get("location", {}).get("lat", 0),
                "longitude": result.get("geometry", {}).get("location", {}).get("lng", 0),
                "rating": result.get("rating", 0),
            }
            places.append(place)
        state["search_progress"][i]["done"] = True
        await copilotkit_emit_state(config, state)

    state["search_progress"] = []
    await copilotkit_emit_state(config, state)

    # Rest of the code...
```

Rendering the Progress in the UI

To show the progress in the UI, we’ll use the useCoAgentStateRender hook. All we need to do is tell CopilotKit to conditionally render the search_progress state key through this hook.

Now let’s modify ui/lib/hooks/use-trips.tsx to display the search progress:

```tsx
// 👇 ui/lib/hooks/use-trips.tsx

// Rest of the imports...
import { useCoAgent, useCoAgentStateRender } from "@copilotkit/react-core";
import { SearchProgress } from "@/components/SearchProgress";

export const TripsProvider = ({ children }: { children: ReactNode }) => {
  // Rest of the code...

  useCoAgentStateRender<AgentState>({
    name: "travel",
    render: ({ state }) => {
      if (state.search_progress) {
        return <SearchProgress progress={state.search_progress} />;
      }
      return null;
    },
  });

  // Rest of the code...
}
```

The <SearchProgress /> component is already set up for you. If you're curious, feel free to check out the implementation in ui/components/SearchProgress.tsx. 🙌

💁 Bonus: The search_progress state key is pre-defined in the AgentState type in ui/lib/types.ts, so no need to worry about creating it from scratch!

Now, give it a spin! Ask the agent questions and you’ll see the progress update in real time. 🤩


Add Control with Human in the Loop

Alright, time for some real-world magic. What if the agent is about to make a decision you don’t agree with?

Human in the Loop (HITL) lets you approve, reject, or modify the actions the agent wants to perform.

In this step, we’ll set up a "breakpoint" in the agent's flow, forcing it to pause and wait for your approval before continuing.

💁 Curious about breakpoints? You can learn more here.

Once the breakpoint is hit, we'll surface it in the front end, where the user can approve or reject the action. The agent will then continue based on the user's decision.

Check out the infographic below to see how the whole flow works: 👇

Coagents HITL Infographic
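The flow can also be sketched as a round-trip in plain Python (illustration only — names like perform_trips and pending_tool_call are hypothetical stand-ins, not the project's API): the agent pauses with a pending tool call, the UI returns a decision, and the agent either performs or aborts the call.

```python
# Toy HITL round-trip: the agent pauses with a pending tool call, the UI
# returns "SEND" or "CANCEL", and the agent acts on that decision.

def perform_trips(state, decision):
    if decision == "CANCEL":
        # Abort: leave the trips untouched and record a cancellation message.
        return {**state, "messages": state["messages"] + ["Cancelled operation of trip."]}
    # Approve: apply the pending tool call to the shared state.
    tool_call = state["pending_tool_call"]
    return {**state, "trips": state["trips"] + tool_call["args"]["trips"],
            "pending_tool_call": None}

paused = {"messages": [], "trips": [],
          "pending_tool_call": {"name": "add_trips", "args": {"trips": ["Paris"]}}}
approved = perform_trips(paused, "SEND")    # user clicked the approve button
rejected = perform_trips(paused, "CANCEL")  # user clicked cancel instead
```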

  • Adding a Breakpoint for Human-in-the-Loop

With our LangGraph, adding human-in-the-loop functionality is straightforward. The trips_node functions as an intermediary to the perform_trips_node, which lets us pause execution at the trips_node by setting a breakpoint.

In the agent/travel/agent.py file, we just need to tell the agent where to hit that pause. We’re doing this in the compile function:

```python
# 👇 agent/travel/agent.py

# Rest of the code...

graph = graph_builder.compile(
    checkpointer=MemorySaver(),
    # Pause right here and wait for the user!
    interrupt_after=["trips_node"],
)
```

Now, just by doing this, the agent will ask, "Should I proceed?" instead of blindly continuing with the work.

  • Handling the User's Decision

When the graph hits that pause point, we need to check what the user wants to do. Did they approve the action? Or did they hit "Cancel" and call it off?

In perform_trips_node, we’ll grab the tool message and check what the user decided:

```python
# 👇 agent/travel/trips.py

# Rest of the code...

async def perform_trips_node(state: AgentState, config: RunnableConfig):
    """Execute trip operations"""
    ai_message = cast(AIMessage, state["messages"][-2])
    tool_message = cast(ToolMessage, state["messages"][-1])

    # Rest of the code...
```

Here, a conditional checks the user's decision and acts accordingly: if the user says "Cancel," we stop and return a custom message; for any other response, the node continues with the work.

```python
# 👇 agent/travel/trips.py

# Rest of the code...
async def perform_trips_node(state: AgentState, config: RunnableConfig):
    """Execute trip operations"""
    ai_message = cast(AIMessage, state["messages"][-2])
    tool_message = cast(ToolMessage, state["messages"][-1])

    if tool_message.content == "CANCEL":
        return {
            "messages": AIMessage(content="Cancelled operation of trip."),
        }

    # Handle the edge case where the AI message is not an AIMessage or has
    # no tool calls; this should never happen.
    if not isinstance(ai_message, AIMessage) or not ai_message.tool_calls:
        return state

    # Rest of the code...
```
  • Rendering the Decision UI

Now, it's time to update the front-end to render the tool calls and capture the user's decision, passing it back to the agent. For this, we'll use useCopilotAction hooks for each tool call with the renderAndWait option.

Edit the ui/lib/hooks/use-trips.tsx with the following lines of code:

```tsx
// 👇 ui/lib/hooks/use-trips.tsx

// Rest of the imports...
import { AddTrips, EditTrips, DeleteTrips } from "@/components/humanInTheLoop";
import { useCoAgent, useCoAgentStateRender, useCopilotAction } from "@copilotkit/react-core";

// Rest of the code...

export const TripsProvider = ({ children }: { children: ReactNode }) => {
  // Rest of the code...

  useCoAgentStateRender<AgentState>({
    name: "travel",
    render: ({ state }) => {
      return <SearchProgress progress={state.search_progress} />;
    },
  });

  useCopilotAction({
    name: "add_trips",
    description: "Add some trips",
    parameters: [
      {
        name: "trips",
        type: "object[]",
        description: "The trips to add",
        required: true,
      },
    ],
    renderAndWait: AddTrips,
  });

  useCopilotAction({
    name: "update_trips",
    description: "Update some trips",
    parameters: [
      {
        name: "trips",
        type: "object[]",
        description: "The trips to update",
        required: true,
      },
    ],
    renderAndWait: EditTrips,
  });

  useCopilotAction({
    name: "delete_trips",
    description: "Delete some trips",
    parameters: [
      {
        name: "trip_ids",
        type: "string[]",
        description: "The ids of the trips to delete",
        required: true,
      },
    ],
    renderAndWait: (props) => DeleteTrips({ ...props, trips: state.trips }),
  });

  // Rest of the code...
```

With this setup, the front end is ready to render the tool calls and capture the user's decision. However, there's one important thing we haven't covered yet: how do we handle the user's input and send it back to the agent?

  • Optional: Understanding the humanInTheLoop components

Let’s take a quick dive into how we handle this in the front-end. We'll use the DeleteTrips component as an example, but the same logic applies to AddTrips and EditTrips.

```tsx
// 👇 ui/lib/components/humanInTheLoop/DeleteTrips.tsx

import { Trip } from "@/lib/types";
import { PlaceCard } from "@/components/PlaceCard";
import { X, Trash } from "lucide-react";
import { ActionButtons } from "./ActionButtons";
import { RenderFunctionStatus } from "@copilotkit/react-core";

export type DeleteTripsProps = {
  args: any;
  status: RenderFunctionStatus;
  handler: any;
  trips: Trip[];
};

export const DeleteTrips = ({ args, status, handler, trips }: DeleteTripsProps) => {
  const tripsToDelete = trips.filter((trip: Trip) => args?.trip_ids?.includes(trip.id));

  return (
    <div className="space-y-4 w-full bg-secondary p-6 rounded-lg">
      <h1 className="text-sm">The following trips will be deleted:</h1>
      {status !== "complete" && tripsToDelete?.map((trip: Trip) => (
        <div key={trip.id} className="flex flex-col gap-4">
          <hr className="my-2" />
          <div className="flex flex-col gap-4">
            <h2 className="text-lg font-bold">{trip.name}</h2>
            {trip.places?.map((place) => (
              <PlaceCard key={place.id} place={place} />
            ))}
          </div>
        </div>
      ))}
      {status !== "complete" && (
        <ActionButtons
          status={status}
          handler={handler}
          approve={<><Trash className="w-4 h-4 mr-2" /> Delete</>}
          reject={<><X className="w-4 h-4 mr-2" /> Cancel</>}
        />
      )}
    </div>
  );
};
```

The crucial part here is the ActionButtons component, which allows the user to approve or reject the action. This is how we capture the user's decision:

```tsx
// 👇 ui/lib/components/humanInTheLoop/ActionButtons.tsx

import { RenderFunctionStatus } from "@copilotkit/react-core";
import { Button } from "../ui/button";

export type ActionButtonsProps = {
  status: RenderFunctionStatus;
  handler: any;
  approve: React.ReactNode;
  reject: React.ReactNode;
};

export const ActionButtons = ({ status, handler, approve, reject }: ActionButtonsProps) => (
  <div className="flex gap-4 justify-between">
    <Button
      className="w-full"
      variant="outline"
      disabled={status === "complete" || status === "inProgress"}
      onClick={() => handler?.("CANCEL")}
    >
      {reject}
    </Button>
    <Button
      className="w-full"
      disabled={status === "complete" || status === "inProgress"}
      onClick={() => handler?.("SEND")}
    >
      {approve}
    </Button>
  </div>
);
```

The crucial piece here is the `onClick` handlers: the buttons call `handler?.("CANCEL")` when the user clicks "Cancel" and `handler?.("SEND")` when they click "Delete", emitting the user's decision back to the agent so it can resume accordingly.

💡 If you want to allow the user to edit the tool call arguments before sending them back to the agent, you can achieve this by modifying the onClick handlers and adjusting the way the agent processes the tool calls.
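As a rough sketch of that idea (a hypothetical helper, not part of the tutorial code), you could merge the user's edits into the original tool-call arguments and respond with a structured JSON payload instead of a bare `"SEND"` string — assuming you also update the agent to parse it:

```typescript
// Hypothetical helper: merge user-edited fields into the original
// tool-call args before responding to the agent.
type DeleteTripsArgs = { trip_ids: string[] };

function buildEditedResponse(
  original: DeleteTripsArgs,
  edits: Partial<DeleteTripsArgs>
): string {
  // Keep every original field, overriding only what the user changed.
  const merged = { ...original, ...edits };
  return JSON.stringify({ decision: "SEND", args: merged });
}
```

The approve button's `onClick` would then call something like `handler?.(buildEditedResponse(args, edits))`, and the agent would parse the JSON payload rather than matching on the bare decision string.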

That's all. 😮‍💨 We've successfully added human-in-the-loop functionality: the copilot now prompts the user to approve or reject actions like adding, editing, or deleting trips, and sends those decisions back to the agent.
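Conceptually, the agent's side of this round trip boils down to dispatching on the decision string it receives. Here is a simplified sketch in TypeScript for illustration only (the tutorial's actual agent is Python/LangGraph):

```typescript
// Illustrative only: how the user's decision string maps to a state change.
type Trip = { id: string; name: string };

function applyDeleteDecision(
  decision: string,
  trips: Trip[],
  tripIds: string[]
): Trip[] {
  // "CANCEL" (or anything other than "SEND") leaves the state untouched.
  if (decision !== "SEND") return trips;
  // "SEND" confirms the deletion requested by the tool call.
  return trips.filter((trip) => !tripIds.includes(trip.id));
}
```

In the real agent, the same branch decides whether the LangGraph node applies the tool call to the shared state or discards it.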


Conclusion ⚡

We covered a lot in this tutorial. 😴 Hopefully, you now know how to add an agentic copilot to your application with CopilotKit, perform state changes in real time, and implement human-in-the-loop workflows.

Complete Source Code: Source Code

💁 You can also refer to the original documentation of this project.

Thank you so much for reading! 🎉 🫡

Share your thoughts in the comment section below! 👇

