When building AI agents, one of their most powerful aspects is their ability to manage and execute tools (function calls). Tools can help an agent perform tasks like scraping data, summarizing content, or even handling complex workflows. But as your AI agent grows in size and capabilities, it becomes increasingly difficult to manage and maintain all of those tools.
In this tutorial, we’ll use the Toolhouse SDK to show how to work with pre-built tools and how to track every single tool call on the platform.
For this example, we’ll build a very simple interface where a user can input a URL and a prompt, and an AI agent will use tools to scrape the webpage and process the data.
Why Tool Management Matters in AI Agents
AI agents are nothing without Tools. Tools are the arms and legs of an AI agent: each one is a specialized skill or function that the AI relies on to complete a specific task.
User-facing AI agents need to execute their tasks flawlessly. But writing Tools from scratch for things like API integrations or web scraping logic means reinventing the wheel, and it leaves your dev team maintaining that code over the long run.
Toolhouse takes care of these problems. It helps you:
- Choose from a variety of specialized AI Tools for tasks such as web scraping, sending emails, taking screenshots, or integrating APIs like the LinkedIn API to search for profiles.
- Track your AI agent's tool calls in the Toolhouse app.
- Use the Toolhouse SDK to implement powerful AI features in no time.
These capabilities simplify tool management and let you focus on building smarter AI agents instead of worrying about building and maintaining Tools. The sketch below gives a taste of what this looks like in code.
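Here's a minimal sketch of the flow we'll wire into React in Step 5: fetch the tool definitions from Toolhouse, let the model decide which tools to call, have Toolhouse run them, and send the results back to the model. The model name and prompt here are just placeholders.

```js
import { Toolhouse } from "@toolhouseai/sdk";
import OpenAI from "openai";

// Placeholder setup: API keys are read from environment variables.
const toolhouse = new Toolhouse({ apiKey: process.env.TOOLHOUSE_API_KEY });
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function run() {
  const messages = [
    { role: "user", content: "Get the contents of https://example.com and summarize it" },
  ];

  // Fetch the tool definitions hosted on Toolhouse
  const tools = await toolhouse.getTools();

  // First completion: the model decides which tools it wants to call
  const completion = await client.chat.completions.create({
    messages,
    model: "gpt-4o-mini",
    tools,
  });

  // Toolhouse executes those tool calls and returns the results as messages
  const toolResults = await toolhouse.runTools(completion);

  // Second completion: the model answers using the tool results
  const final = await client.chat.completions.create({
    messages: [...messages, ...toolResults],
    model: "gpt-4o-mini",
    tools,
  });

  console.log(final.choices[0].message.content);
}

run();
```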
Getting Started with Toolhouse
Alright, let's build an AI-powered web scraper. Sounds fancy, but it's just a single-page app that lets you enter a URL to scrape and a prompt to run against the scraped data.
Here's what you'll need:
- Node.js (v16 or later)
- An OpenAI API key
- A Toolhouse API key
Step 1: Set Up Your React Project
We’ll use React to create a simple frontend for managing tool calls. We'll initialize the project with create-react-app; if you don't have it installed globally, you can install it by running:
npm install -g create-react-app
Open your favorite code editor and inside the terminal type the following:
npx create-react-app ai-scraper
Once it's done creating a new app, change into the project directory:
cd ai-scraper
If you expand the ai-scraper folder, it should look like this:
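In case the screenshot doesn't load for you, a freshly generated create-react-app project has roughly this layout (the files we'll touch are src/App.js, src/App.css, and a .env file we'll add at the project root):

```
ai-scraper/
├── node_modules/
├── public/
├── src/
│   ├── App.css
│   ├── App.js
│   ├── index.css
│   ├── index.js
│   └── ...
├── package.json
└── README.md
```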
Great! Let's now start the server:
npm start
It should automatically start a new app at localhost:3000:
Neat! Let's install all the essential libraries now.
Step 2: Install Toolhouse and OpenAI SDKs
These SDKs will let our app interact with the Toolhouse platform and OpenAI models.
npm install @toolhouseai/sdk openai
Step 3: Add the API keys
Create a new .env file inside the project folder ai-scraper and add the following API keys:
REACT_APP_TOOLHOUSE_API_KEY=your_toolhouse_api_key
REACT_APP_OPENAI_API_KEY=your_openai_api_key
You can find your OpenAI API key at platform.openai.com/api-keys. In the .env file, replace "your_openai_api_key" with your actual OpenAI key.
Next, let's set up our Toolhouse account for the AI web scraping app. To get your Toolhouse API key, you'll first need to create an account at Toolhouse.ai.
Once you've signed up, go to the API Keys page. It should look something like the following:
Clicking the eye icon reveals your API key. Copy it and paste it into your .env file in place of "your_toolhouse_api_key".
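One thing worth knowing: create-react-app only exposes environment variables whose names start with REACT_APP_, and it reads them when the dev server starts. If you want a quick sanity check that both keys are being picked up, you can temporarily add something like this near the top of src/App.js (a throwaway snippet, remove it once you've confirmed):

```js
// Should log "true true" once the .env file is in place and the server
// has been (re)started; CRA only exposes REACT_APP_-prefixed variables.
console.log(
  Boolean(process.env.REACT_APP_TOOLHOUSE_API_KEY),
  Boolean(process.env.REACT_APP_OPENAI_API_KEY)
);
```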
Step 4: Creating and setting up your Bundle in Toolhouse
This is what your Dashboard looks like:
In the left menu, click "Bundles". This takes you to a new page where you can create a new Bundle. Bundles let you organize your AI Tools into groups or packs.
Once it's created, you'll be taken to this page, where you can browse the pre-made Tools and add them to your Bundle:
If you scroll further down, you'll find a Tool named Tavily web search. Enable this Tool and it will be added to your Bundle:
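Note that the App.js code in the next step calls toolhouse.getTools() with no arguments, which fetches all the Tools you've enabled. The SDK also lets you scope the request to a specific Bundle by passing its name, though you should double-check the exact signature in the Toolhouse docs for your SDK version. A hedged sketch, assuming a Bundle named "ai-scraper-bundle":

```js
// Assumption: getTools() accepts an optional Bundle name; verify this
// against the Toolhouse SDK docs for your version. "ai-scraper-bundle"
// is a placeholder for whatever you named your Bundle.
const tools = await toolhouse.getTools("ai-scraper-bundle");
```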
Step 5: Building the App.js Component
Coming back to our app, we’ll now create a simple React component to showcase how tools are managed and executed. Go to the App.js file (or App.tsx if you're using TypeScript) inside the src folder and replace its entire contents with the following code:
import React, { useState } from "react";
import { Toolhouse } from "@toolhouseai/sdk";
import OpenAI from "openai";
import "./App.css";

const MODEL = "gpt-4o-mini";

function App() {
  const [url, setUrl] = useState("");
  const [prompt, setPrompt] = useState("");
  const [result, setResult] = useState("");
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState("");

  const handleSubmit = async (e) => {
    e.preventDefault();
    setIsLoading(true);
    setError("");
    setResult("");

    try {
      // Initialize the Toolhouse client with your API key and some metadata
      const toolhouse = new Toolhouse({
        apiKey: process.env.REACT_APP_TOOLHOUSE_API_KEY,
        metadata: {
          id: "user_id",
          timezone: "0",
        },
      });

      // Initialize the OpenAI client; dangerouslyAllowBrowser lets the SDK
      // run in the browser for this demo (don't expose real keys like this in production)
      const client = new OpenAI({
        apiKey: process.env.REACT_APP_OPENAI_API_KEY,
        dangerouslyAllowBrowser: true,
      });

      const messages = [
        {
          role: "user",
          content: `Get the contents of ${url} and ${prompt}`,
        },
      ];

      // Fetch the tool definitions hosted on Toolhouse
      const tools = await toolhouse.getTools();

      // First completion: the model decides which tools to call
      const chatCompletion = await client.chat.completions.create({
        messages,
        model: MODEL,
        tools,
      });

      // Toolhouse executes the requested tool calls and returns the results as messages
      const openAiMessage = await toolhouse.runTools(chatCompletion);
      const newMessages = [...messages, ...openAiMessage];

      // Second completion: the model answers using the tool results
      const chatCompleted = await client.chat.completions.create({
        messages: newMessages,
        model: MODEL,
        tools,
      });

      setResult(chatCompleted?.choices[0]?.message?.content);
    } catch (err) {
      console.error("Error occurred:", err);
      setError(
        `An error occurred while processing your request. ${
          err.message || JSON.stringify(err)
        }`
      );
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <div className="container">
      <h1>AI Scraper with Toolhouse</h1>
      <form onSubmit={handleSubmit}>
        <div className="form-group">
          <label htmlFor="url">URL</label>
          <input
            type="url"
            id="url"
            value={url}
            onChange={(e) => setUrl(e.target.value)}
            placeholder="https://example.com"
            required
          />
        </div>
        <div className="form-group">
          <label htmlFor="prompt">Prompt</label>
          <textarea
            id="prompt"
            value={prompt}
            onChange={(e) => setPrompt(e.target.value)}
            placeholder="Summarize the content..."
            required
          />
        </div>
        <button type="submit" disabled={isLoading}>
          {isLoading ? "Processing..." : "Scrape and Process"}
        </button>
      </form>
      {error && <div className="error">{error}</div>}
      {result && (
        <div className="result">
          <h2>Result:</h2>
          <pre>{result}</pre>
        </div>
      )}
    </div>
  );
}

export default App;
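A small tip while you're developing: if you want to see the raw output a Tool returned before the model summarizes it, you can temporarily log the messages produced by runTools right after that call:

```js
// Temporary debugging aid: inspect the tool-call results that Toolhouse
// returned, before they're sent back to the model in the second completion.
console.log("Tool run messages:", openAiMessage);
```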
Step 6: Style our App
To make the app look better, add some styles to the App.css file inside the src folder:
.container {
  max-width: 600px;
  margin: 0 auto;
  padding: 20px;
  font-family: Arial, sans-serif;
}

h1 {
  text-align: center;
}

.form-group {
  margin-bottom: 15px;
}

label {
  display: block;
  margin-bottom: 5px;
  font-weight: bold;
}

input, textarea {
  width: 100%;
  padding: 8px;
  margin-bottom: 10px;
  border: 1px solid #ccc;
  border-radius: 4px;
}

button {
  padding: 10px 15px;
  background-color: #007bff;
  color: white;
  border: none;
  border-radius: 4px;
  cursor: pointer;
}

button:disabled {
  background-color: #ccc;
  cursor: not-allowed;
}

.error {
  color: red;
  margin-top: 10px;
}

.result {
  margin-top: 20px;
  padding: 10px;
  background-color: #f9f9f9;
  border: 1px solid #ddd;
  border-radius: 4px;
}
Step 7: Restart the App
If the React server is already running, stop it by pressing Ctrl+C in the terminal. Then run the following command to start it again so that the environment variables are loaded:
npm start
Final App
This is what your app should look like:
Enter a URL and a prompt, and the AI agent will scrape the page and process it according to your prompt. Note that some websites, such as microsoft.com, don't allow scraping, so the scraper will fail on those; make sure the URLs you use allow scraping.
Here's me playing around with the scraper:
Monitoring Tool calls using the Execution Logger in Toolhouse
You can also monitor every single call made to the Tools hosted on Toolhouse. This helps you estimate how many Tool calls your agent makes and optimize them to save time and money.
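One related detail: the metadata object we passed when constructing the Toolhouse client in Step 5 (id and timezone) is meant to carry per-user context for the Tools that need it. If your app has real users, it's worth passing their actual id instead of the "user_id" placeholder. A hedged sketch, where currentUserId is a hypothetical variable from your own auth layer:

```js
// currentUserId is hypothetical: substitute however your app identifies users.
const toolhouse = new Toolhouse({
  apiKey: process.env.REACT_APP_TOOLHOUSE_API_KEY,
  metadata: {
    id: currentUserId, // a per-user identifier instead of the "user_id" placeholder
    timezone: "0",
  },
});
```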
Here's what the Execution Logs look like:
As you can see, the Execution Logs show the exact time of each Tool call as well as its output.
That's about it for this tutorial. If you want to learn more about building AI agents, feel free to follow me here or on LinkedIn.