Abdullah

Vercel AI SDK

As of March 4, 2025, Vercel has introduced enhancements to its AI SDK, focusing on developer experience (DX) improvements and client-side tool execution.

DX Improvements

The AI SDK now offers better observability: when using streamText and streamObject, developers can monitor token usage and errors. The new onFinish callback reports token usage (and any error) once the stream completes.

import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await streamObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({
    name: z.object({
      firstName: z.string(),
      lastName: z.string(),
    }),
  }),
  prompt: "Generate a random name",
  // Called once the stream has finished; reports token usage and any error.
  onFinish({ object, error, usage, ...rest }) {
    console.log("Token usage:", usage);
    if (object === undefined) {
      console.error("Error:", error);
    } else {
      console.log("Success!", JSON.stringify(object, null, 2));
    }
  },
});

Developers can also access the final, typed object as a promise from the streamObject result, preserving end-to-end type safety.

import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await streamObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({
    name: z.object({
      firstName: z.string(),
      lastName: z.string(),
    }),
  }),
  prompt: "Generate a random name",
});

// result.object is a promise that resolves to the final object,
// typed according to the schema above.
result.object.then(({ name }) => {
  console.log("Name:", name.firstName, name.lastName);
});

To optimize bundle size, the AI SDK UI has been split by framework. Developers are encouraged to migrate to @ai-sdk/react, @ai-sdk/vue, @ai-sdk/svelte, or @ai-sdk/solid.
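
For a React project, the migration is typically just an import change; a minimal sketch, assuming the hook names stay the same across packages:

// Before: hooks bundled with the core package
// import { useChat } from 'ai/react';

// After: framework-specific package keeps the client bundle smaller
import { useChat } from '@ai-sdk/react';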

Client-Side Tool Execution

With the 3.2 release, building generative UI chatbots in React is streamlined: useChat runs on the client and pairs with streamText on the server. streamText supports both server-side and client-side tool execution, and the new toolInvocations property and onToolCall callback make it possible to render UI conditionally based on the tools the LLM calls.

Example of a chatbot determining user location:

// app/api/chat/route.ts
import { streamText, convertToCoreMessages } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages: convertToCoreMessages(messages),
    tools: {
      // No execute function: these tool calls are forwarded to the client.
      askForConfirmation: {
        description: "Ask the user for confirmation",
        parameters: z.object({
          message: z.string().describe("The message to ask for confirmation"),
        }),
      },
      getLocation: {
        description: "Get the user location. Always ask for confirmation before using this tool.",
        parameters: z.object({}),
      },
    },
  });

  // Stream the result back to useChat on the client.
  // (Older SDK versions expose this as result.toAIStreamResponse().)
  return result.toDataStreamResponse();
}

In the streamText call, omitting a tool's execute function defers that tool's execution to the client.

// app/page.tsx
'use client';

// useChat from '@ai-sdk/react' (older SDK versions: 'ai/react')
import { useChat } from '@ai-sdk/react';
import type { Message, ToolInvocation } from 'ai';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, addToolResult } = useChat({
    maxToolRoundtrips: 5,
    // Runs automatically for tools that need no user interaction.
    async onToolCall({ toolCall }) {
      if (toolCall.toolName === 'getLocation') {
        // getUserLocation is an app-defined helper (see the sketch below).
        return getUserLocation();
      }
    },
  });

  return (
    <div>
      {messages?.map((m: Message) => (
        <div key={m.id}>
          <strong>{m.role}:</strong>
          {m.content}
          {m.toolInvocations?.map((toolInvocation: ToolInvocation) => {
            const toolCallId = toolInvocation.toolCallId;
            const addResult = (result: string) => addToolResult({ toolCallId, result });
            // Tools that require user interaction are rendered as UI.
            if (toolInvocation.toolName === 'askForConfirmation') {
              return (
                <div key={toolCallId}>
                  {'result' in toolInvocation ? (
                    <b>
                      {toolInvocation.args.message}: {toolInvocation.result}
                    </b>
                  ) : (
                    <>
                      {toolInvocation.args.message}:{' '}
                      <button onClick={() => addResult('Yes')}>Yes</button>
                      <button onClick={() => addResult('No')}>No</button>
                    </>
                  )}
                </div>
              );
            }
          })}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}

The onToolCall callback in useChat defines handlers for client-side tools. The toolInvocations property on each message exposes the tools the LLM selected, enabling conditional UI rendering for each call. The addToolResult function passes user-provided information back to the LLM so it can be used in subsequent responses.
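
For completeness, here is one way the getUserLocation helper used in onToolCall above could be implemented. It is not part of the SDK, just a hypothetical client-side helper built on the browser Geolocation API that returns a string for the tool result:

// Hypothetical helper: wraps the browser Geolocation API in a promise
// and resolves to a string the LLM can use as the tool result.
function getUserLocation(): Promise<string> {
  return new Promise((resolve) => {
    if (!navigator.geolocation) {
      resolve('Geolocation is not available in this browser.');
      return;
    }
    navigator.geolocation.getCurrentPosition(
      ({ coords }) => resolve(`lat: ${coords.latitude}, lon: ${coords.longitude}`),
      (error) => resolve(`Could not get location: ${error.message}`),
    );
  });
}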

These updates enhance the flexibility and efficiency of building AI-powered applications with Vercel's AI SDK.
