Introduction
This article describes how to take the ready-made sample code from the DSCE (IBM Digital Self-Serve Co-Create Experience) environment and build your own solution from it.
What is the DSCE platform?
DSCE showcases IBM's enterprise-ready AI and data platform, built around foundation models and machine learning. In other words, DSCE is a place where users select a use case and experience a live application built with watsonx. Developers get prompt selection and construction guidance, along with sample application code, to accelerate their projects with watsonx.ai.
The AI agent use case
One of the use cases provided on the platform is a sample application demonstrating the agentic approach with LLMs: “Infuse your product with artificial intelligence from IBM”. To discover the approach and the use case you can run the agent directly on the DSCE site; however, it is also possible to run the application’s code locally in order to make ad-hoc changes, and that is what this article describes.
To build the app locally, simply click the “Get the code” button, which brings you directly to the public repository at https://github.com/IBM/dsce-sample-apps/tree/main/explore-industry-specific-agents.
Building the application locally
The first step is to set up a project with the two parts of the application: a back-end and a front-end. Both parts are written in Node.js, so they are easy to take over and adapt. Once running locally, the application has the following architecture.
There is no serverless Code Engine component when running locally.
What is important is to install the right version of Node.js locally, either through a package manager (https://nodejs.org/en/download/package-manager) or by running the commands below.
#!/bin/sh
# Download and install nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
# Download and install Node.js:
nvm install 22
# Verify the Node.js version:
node -v # Should print "v22.12.0".
nvm current # Should print "v22.12.0".
# Verify npm version:
npm -v # Should print "10.9.0".
The local project I made has the following structure.
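A rough sketch of the layout is given below; only node-backend and its files are referenced later in this article, and the front-end directory name is illustrative.
explore-industry-specific-agents/
├── node-backend/        # Bee Agent Framework back-end: main.js, package.json, .env
├── react-frontend/      # React front-end (directory name illustrative): src/, package.json, .env
└── ...                  # the repository also ships a LangChain back-end (see the front-end .env)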
The back-end application and specific configuration
/**
* Copyright 2024 IBM Corp.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { PromptTemplate } from "bee-agent-framework";
import { BaseMessage } from "bee-agent-framework/llms/primitives/message";
import { ValueError } from "bee-agent-framework/errors";
import { isDefined, mapToObj, pickBy } from "remeda";
import { z } from "zod";
import { toBoundedFunction } from "bee-agent-framework/serializer/utils";
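// Chat-template utilities for the Bee back-end: each LLMChatTemplate pairs a
// PromptTemplate carrying the model's special tokens with the stop sequences
// the model expects, plus a function that renders chat messages into a prompt.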
export type LLMChatPromptTemplate = PromptTemplate.infer<{ messages: Record<string, string[]>[] }>;
export interface LLMChatTemplate {
template: LLMChatPromptTemplate;
messagesToPrompt: (template: LLMChatPromptTemplate) => (messages: BaseMessage[]) => string;
parameters: {
stop_sequence: string[];
};
}
export function messagesToPromptFactory(rolesOverride: Record<string, string | undefined> = {}) {
const roles: Record<string, string> = pickBy(
{
system: "system",
user: "user",
assistant: "assistant",
...rolesOverride,
},
isDefined,
);
return (template: LLMChatPromptTemplate) => {
return toBoundedFunction(
(messages: BaseMessage[]) => {
return template.render({
messages: messages.map((message) =>
Object.fromEntries(
Object.entries(roles).map(([key, role]) =>
message.role === role ? [key, [message.text]] : [key, []],
),
),
),
});
},
[
{
name: "template",
value: template,
},
{
name: "roles",
value: roles,
},
],
);
};
}
export function templateSchemaFactory(roles: readonly string[]) {
return z.object({
messages: z.array(z.object(mapToObj(roles, (role) => [role, z.array(z.string())] as const))),
});
}
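// Llama 3.1 chat template: each message is wrapped in <|start_header_id|>/<|eot_id|>
// markers; the extra "ipython" role carries tool output fed back to the model.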
const llama31: LLMChatTemplate = {
template: new PromptTemplate({
schema: templateSchemaFactory(["system", "user", "assistant", "ipython"] as const),
template: `{{#messages}}{{#system}}<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{system}}<|eot_id|>{{/system}}{{#user}}<|start_header_id|>user<|end_header_id|>
{{user}}<|eot_id|>{{/user}}{{#assistant}}<|start_header_id|>assistant<|end_header_id|>
{{assistant}}<|eot_id|>{{/assistant}}{{#ipython}}<|start_header_id|>ipython<|end_header_id|>
{{ipython}}<|eot_id|>{{/ipython}}{{/messages}}<|start_header_id|>assistant<|end_header_id|>
`,
}),
messagesToPrompt: messagesToPromptFactory({ ipython: "ipython" }),
parameters: {
stop_sequence: ["<|eot_id|>"],
},
};
const llama33: LLMChatTemplate = llama31;
const llama3: LLMChatTemplate = {
template: new PromptTemplate({
schema: templateSchemaFactory(["system", "user", "assistant"] as const),
template: `{{#messages}}{{#system}}<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{system}}<|eot_id|>{{/system}}{{#user}}<|start_header_id|>user<|end_header_id|>
{{user}}<|eot_id|>{{/user}}{{#assistant}}<|start_header_id|>assistant<|end_header_id|>
{{assistant}}<|eot_id|>{{/assistant}}{{/messages}}<|start_header_id|>assistant<|end_header_id|>
`,
}),
messagesToPrompt: messagesToPromptFactory(),
parameters: {
stop_sequence: ["<|eot_id|>"],
},
};
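// Granite 3 Instruct template: adds dedicated roles for the tool catalogue
// (available_tools), tool calls and tool responses.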
const granite3Instruct: LLMChatTemplate = {
template: new PromptTemplate({
schema: templateSchemaFactory([
"system",
"user",
"assistant",
"available_tools",
"tool_call",
"tool_response",
] as const),
template: `{{#messages}}{{#system}}<|start_of_role|>system<|end_of_role|>
{{system}}<|end_of_text|>
{{ end }}{{/system}}{{#available_tools}}<|start_of_role|>available_tools<|end_of_role|>
{{available_tools}}<|end_of_text|>
{{ end }}{{/available_tools}}{{#user}}<|start_of_role|>user<|end_of_role|>
{{user}}<|end_of_text|>
{{ end }}{{/user}}{{#assistant}}<|start_of_role|>assistant<|end_of_role|>
{{assistant}}<|end_of_text|>
{{ end }}{{/assistant}}{{#tool_call}}<|start_of_role|>assistant<|end_of_role|><|tool_call|>
{{tool_call}}<|end_of_text|>
{{ end }}{{/tool_call}}{{#tool_response}}<|start_of_role|>tool_response<|end_of_role|>
{{tool_response}}<|end_of_text|>
{{ end }}{{/tool_response}}{{/messages}}<|start_of_role|>assistant<|end_of_role|>
`,
}),
messagesToPrompt: messagesToPromptFactory({
available_tools: "available_tools",
tool_response: "tool_response",
tool_call: "tool_call",
}),
parameters: {
stop_sequence: ["<|end_of_text|>"],
},
};
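// Registry mapping model identifiers (e.g. "llama3.1") to their chat templates.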
export class LLMChatTemplates {
protected static readonly registry = {
"llama3.3": llama33,
"llama3.1": llama31,
"llama3": llama3,
"granite3Instruct": granite3Instruct,
};
static register(model: string, template: LLMChatTemplate, override = false) {
if (model in this.registry && !override) {
throw new ValueError(`Template for model '${model}' already exists!`);
}
this.registry[model as keyof typeof this.registry] = template;
}
static has(model: string): boolean {
return Boolean(model && model in this.registry);
}
static get(model: keyof typeof LLMChatTemplates.registry): LLMChatTemplate;
// eslint-disable-next-line @typescript-eslint/unified-signatures
static get(model: string): LLMChatTemplate;
static get(model: string): LLMChatTemplate {
if (!this.has(model)) {
throw new ValueError(`Template for model '${model}' not found!`, [], {
context: {
validModels: Object.keys(this.registry),
},
});
}
return this.registry[model as keyof typeof this.registry];
}
}
...
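To see what these templates produce, here is a short usage sketch. The file name llm-templates.js and the message contents are illustrative; check the BaseMessage.of() signature against the bee-agent-framework version pinned in package.json.
import { BaseMessage } from "bee-agent-framework/llms/primitives/message";
import { LLMChatTemplates } from "./llm-templates.js";
// Pick the template matching the model served by watsonx.ai
const { template, messagesToPrompt, parameters } = LLMChatTemplates.get("llama3.1");
// Render a chat history into the raw prompt string the model expects
const prompt = messagesToPrompt(template)([
  BaseMessage.of({ role: "system", text: "You are a helpful assistant." }),
  BaseMessage.of({ role: "user", text: "What is the temperature in Paris?" }),
]);
console.log(prompt);                   // fully tagged Llama 3.1 prompt
console.log(parameters.stop_sequence); // ["<|eot_id|>"]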
The node-backend needs an “.env” file providing the required watsonx.ai information.
IBM_CLOUD_API_KEY="your-ibm-cloud-api-key"
WX_PROJECT_ID="your-watsonx-project-id"
WX_ENDPOINT=https://us-south.ml.cloud.ibm.com
Run the following command (either from a bash file, as I usually do, or directly on the command line).
#!/bin/sh
export $(grep -v '^#' .env | xargs)
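If you prefer not to export the variables by hand, dotenv (already listed in the back-end dependencies below) can load the same .env file at start-up. A minimal sketch, assuming the entry point is main.js:
// Load .env into process.env before anything reads the watsonx.ai settings
import "dotenv/config";

const { IBM_CLOUD_API_KEY, WX_PROJECT_ID, WX_ENDPOINT } = process.env;
if (!IBM_CLOUD_API_KEY || !WX_PROJECT_ID || !WX_ENDPOINT) {
  throw new Error("Missing watsonx.ai configuration in .env");
}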
Then run the necessary installation.
npm install
Also, it is important to modify the “package.json” file and add the “type” field.
{ "type": "module" }
You must also install the “sass” package for the front-end part, which is not installed by default.
# global installation
npm i -g sass
# or local installation
npm i sass --save-dev
So the entire package.json file would look something like this:
{
"name": "node-backend",
"version": "1.0.0",
"main": "main.js",
"license": "MIT",
"scripts": {
"start": "exec tsx main.js"
},
"dependencies": {
"@ibm-generative-ai/node-sdk": "^3.2.3",
"bee-agent-framework": "^0.0.53",
"cors": "^2.8.5",
"dotenv": "^16.4.5",
"markdown": "^0.5.0",
"openai": "^4.76.3",
"openai-chat-tokens": "^0.2.8",
"react-markdown": "^9.0.1",
"swagger-jsdoc": "^6.2.8",
"swagger-ui-express": "^5.0.1",
"tsx": "^4.19.2"
},
"type": "module",
"devDependencies": {
"sass": "^1.83.1"
}
}
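For reference, the wiring of the back-end entry point looks roughly like the sketch below. This is not the repository’s exact main.js: it only assumes Express-style routing (add express with npm i express if it is not already pulled in), CORS, and the /api/v1/bee-agent-framework/generate route that the front-end shown in the next section calls; the response shape is illustrative.
import "dotenv/config";
import express from "express";
import cors from "cors";

const app = express();
app.use(cors());
app.use(express.json());

// Route called by the React front-end (see the fetch() in the next section)
app.post("/api/v1/bee-agent-framework/generate", async (req, res) => {
  const { input_data } = req.body; // the user prompt
  // ... run the Bee agent against watsonx.ai and collect its answer ...
  res.json({ answer: "agent output goes here" }); // illustrative response shape
});

app.listen(3001, () => console.log("Bee back-end listening on port 3001"));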
The front-end specific configuration
...
import React, { useState, useEffect } from "react";
import { TextArea, Button, ExpandableTile, TileAboveTheFoldContent, TileBelowTheFoldContent, Loading } from "@carbon/react";
import Markdown from 'markdown-to-jsx';
import './App.css';
function CustomAgentFlow({customFrameworkselected}) {
const [isLoading, setLoading] = useState(false);
const [inputPrompt, setInputPrompt] = useState('');
const [agentOutput, setagentOutput] = useState('');
const [execution_time, setexecution_time] = useState('');
const [agentreasondata, setagentreasondata] = useState('');
const [llmOutput, setllmOutput] = useState('');
const fetchAgentresponse = async () => {
setLoading(true);
try {
const reqOpts = {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'X-API-Key': 'test' },
body: JSON.stringify({"input_data": inputPrompt}),
};
let response;
if(customFrameworkselected === "Langchain"){
response = await fetch(`${process.env.REACT_APP_LANGCHAIN_BACKEND}/langchain/generate`, reqOpts);
}
else{ // Framework selected is Bee Agent
response = await fetch(`${process.env.REACT_APP_BEE_BACKEND}/bee-agent-framework/generate`, reqOpts);
}
...
For the front-end, create an “.env” file such as the one shown here.
REACT_APP_BEE_BACKEND=http://127.0.0.1:3001/api/v1
REACT_APP_LANGCHAIN_BACKEND=http://127.0.0.1:8000/api/v1 # don't forget the /api/v1
Run the following command (either from a bash file, as I usually do, or directly on the command line).
#!/bin/sh
export $(grep -v '^#' .env | xargs)
Then run the necessary installation.
npm install
Launch the application
Both modules, the back-end and the front-end, are launched with the same npm command.
npm start
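Once both processes are running, you can sanity-check the back-end directly before going through the UI. A minimal sketch, assuming the Bee back-end listens on port 3001 (as configured in the front-end .env) and accepts the same payload the React client sends:
#!/bin/sh
curl -X POST http://127.0.0.1:3001/api/v1/bee-agent-framework/generate \
  -H "Content-Type: application/json" \
  -H "X-API-Key: test" \
  -d '{"input_data": "What is the temperature in Paris?"}'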
Application interface
The output of the application is shown below for the agent and the other configurations.
If, for instance, we query the agent with the following:
What is the temperature in Paris?
Here is the answer:
As of 2025-01-06 10:44:51, the current temperature in Paris is 10.4°C.
Agent reasoning steps:
THOUGHT : The user wants to know the current temperature in Paris, so I'll use the OpenMeteo function to retrieve the current weather forecast for Paris.
ACTION : Invoking tool - OpenMeteo with input - {"location":{"name":"Paris","country":"France","language":"English"},"start_date":"2025-01-06","end_date":"2025-01-06","temperature_unit":"celsius"}
OBSERVATION : The agent got the relevant information from the tool invoked.
THOUGHT : I have the current weather forecast for Paris, now I can provide the temperature to the user.
FINAL ANSWER : As of 2025-01-06 10:44:51, the current temperature in Paris is 10.4°C.
Standalone LLM output, for comparison:
') assert response == 'The current temperature in Paris is 15°C.'
def test_weather_api(): weather_api = WeatherAPI() response = weather_api.get_weather('Paris') assert isinstance(response, str) assert 'The current temperature in Paris is' in response
def test_weather_api_invalid_city(): weather_api = WeatherAPI() response = weather_api.get_weather('Invalid City') assert response == 'City not found.'
def test_weather_api_api_key_error(): weather_api = WeatherAPI(api_key='invalid_api_key') response = weather_api.get_weather('Paris') assert response == 'API key is invalid.'
def test_weather_api_connection_error(): weather_api = WeatherAPI() with mock.patch('requests.get', side_effect=ConnectionError): response = weather_api.get_weather('Paris') assert response == 'Failed to connect to the weather API.'
def test_weather_api_json_error(): weather_api = WeatherAPI() with mock.patch('requests.get', return_value=mock.Mock(json=lambda: {'error': 'Invalid JSON'})): response = weather_api.get_weather('Paris') assert response == 'Failed to parse the weather API response.'
def test_weather_api_weather_data_error(): weather_api = WeatherAPI() with mock.patch('requests.get', return_value=mock.Mock(json=lambda: {'main': {}})): response = weather_api.get_weather('Paris') assert response == 'Failed to retrieve the weather data.'
def test_weather_api_temperature_error(): weather_api = WeatherAPI() with mock.patch('requests.get', return_value=mock.Mock(json=lambda: {'main': {'temp': None}})): response = weather_api.get_weather('Paris') assert response == 'Failed to retrieve the temperature.'
def test_weather_api_unit_conversion(): weather_api = WeatherAPI(unit='imperial') response = weather_api.get_weather('Paris') assert 'F' in response
def test_weather_api_language(): weather_api = WeatherAPI(language='fr') response = weather_api.get_weather('Paris') assert 'Le' in response assert 'température' in response assert 'C' in response
def test_weather_api_cache(): weather_api = WeatherAPI() response1 = weather_api.get_weather('Paris') response2 = weather_api.get_weather('Paris') assert response1 == response
Conclusion
DSCE, a digital co-creation space, invites you on a journey of co-creation. This unique platform serves as a showcase for cutting-edge AI, LLM, and Agentic applications tailored to specific industries. Here, the boundaries between innovation and accessibility dissolve. Customers and partners alike are empowered to explore, adapt, and industrialize the open-source applications generously offered. DSCE fosters a collaborative spirit, nurturing a community of creators who can leverage the power of AI to shape the future of their respective domains.
In this example, a sample agent with a UI calls the Bee Agent Framework, and the whole application can be adapted to your specific needs.
Useful links
- DSCE site: https://dsce.ibm.com/watsonx
- DSCE public code repository: https://github.com/IBM/dsce-sample-apps/tree/main/explore-industry-specific-agents
- Bee agent framework: https://github.com/i-am-bee/bee-agent-framework