One of the most exciting new advancements in Next.js/React architecture isn’t in Next.js itself. It’s in Vercel’s AI library. Even there, it’s tucked away in a little corner. It’s the experimental streaming API, and what it can do could spark a revolution in application architecture.
The example in the documentation is a little terse, but the idea is simple. The library provides two key pieces: experimental_StreamingReactResponse, which lets API routes or server actions stream back responses, and the useChat hook, which invokes the API route or server action and handles the streamed response.
This streaming functionality is exciting for two reasons. First, streaming used to be limited to page routes. Now we can stream data back from server actions and API routes, which can be super handy for longer-running transactions.
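To see why that matters, it helps to look at the primitive underneath: a web ReadableStream, which is the same thing a route handler can return as a Response body. Here is a minimal sketch; the makeStream and readAll helpers and the chunk contents are invented for illustration.

```typescript
// Sketch: the web ReadableStream primitive that streamed responses are
// built on. The chunk strings here are made up for illustration.
function makeStream(): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  const chunks = ["Hello", ", ", "world"];
  return new ReadableStream<Uint8Array>({
    start(controller) {
      // Enqueue each chunk as it becomes available, then close the stream.
      for (const chunk of chunks) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    },
  });
}

// A route handler could simply `return new Response(makeStream())`.
// Here we read the stream back to show the chunks arriving in order.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

readAll(makeStream()).then((text) => console.log(text)); // "Hello, world"
```

The stream adapters in Vercel’s AI library are doing a more sophisticated version of this: turning the AI provider’s response into a stream of chunks that the client can consume incrementally.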
However, the second reason is the real mind blower. Here is the server action code from my recent video on this topic:
return new experimental_StreamingReactResponse(stream, {
  ui: async ({ content }) => {
    const albums = await getSpotifyAlbums(content);
    return (
      <div className="grid grid-cols-2">
        {albums.map((album) => (
          <div key={`${album.artist}-${album.name}`} className="p-1">
            <Album {...album} />
          </div>
        ))}
      </div>
    );
  },
});
In this code, we are passing the AI response stream to experimental_StreamingReactResponse, which invokes the ui method each time the AI comes back with new content. Within that ui callback, we can parse the AI response and turn it into any React UI we want. In this case, the div tags are rendered on the server, and the Album component is a client component that renders on both the client and the server.
Think about how we would have had to do this before. The server action or API route would have sent back JSON that the receiving component would have to turn into UI. The more complex the layout and data, the more complex the logic would have to be on the client to manage laying out the components and giving them the right data.
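To make that concrete, here is a hedged sketch of the old pattern: the server returns plain JSON, and the client owns all of the logic for keying and arranging the data before rendering. The Album shape, the sample data, and the toGridItems helper are all made up for illustration.

```typescript
// Sketch of the pre-streaming pattern: the server sends plain JSON and
// the client carries all of the layout logic. The Album shape and the
// sample albums are hypothetical.
interface Album {
  artist: string;
  name: string;
}

// The client would fetch JSON like this from the API route...
const payload = JSON.stringify([
  { artist: "Daft Punk", name: "Discovery" },
  { artist: "Radiohead", name: "In Rainbows" },
]);

// ...then parse it and decide, in client code, how to key and arrange
// each item before rendering it into components.
function toGridItems(json: string): { key: string; album: Album }[] {
  const albums: Album[] = JSON.parse(json);
  return albums.map((album) => ({
    key: `${album.artist}-${album.name}`,
    album,
  }));
}

console.log(toGridItems(payload).map((item) => item.key));
// → ["Daft Punk-Discovery", "Radiohead-In Rainbows"]
```

Every new field or layout change means touching both the serialization on the server and the mapping logic on the client.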
With this new functionality in the AI library, we can just send back a React Server Component (RSC) stream that the React code on the client can decode back into component instances using the same mechanisms that we use to hydrate the component tree after a page navigation. It’s super slick!
Unfortunately, all of this awesome component streaming functionality is tied to their AI wrapper, and I don’t know about you, but I don’t see those two things as linked. You can stream all kinds of responses, not just from AIs. But hey, this is what we have, and it’s amazing.
Let’s try it out with a server action. That could be as simple as this:
"use server";

import OpenAI from "openai";
import { OpenAIStream, experimental_StreamingReactResponse, Message } from "ai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});

export async function handler({ messages }: { messages: Message[] }) {
  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    stream: true,
    messages: messages.map((m) => ({
      role: m.role,
      content: m.content,
    })),
  });
  const stream = OpenAIStream(response);
  return new experimental_StreamingReactResponse(stream, {
    ui({ content }) {
      return <div className="italic text-red-800">{content}</div>;
    },
  });
}
We start by importing the OpenAI client from openai, along with OpenAIStream and the experimental_StreamingReactResponse function from Vercel’s ai library that we’ll use to stream the response back to the client.
Next, we export a function named handler, though you can name it anything you want. The only rule is that it has to be an async function. To use this with the useChat hook, this server action function needs to take an object containing an array of messages as the first argument.
This async server action function then sends the messages to openai (or whatever AI you want) to start the AI request and to open the stream. For OpenAI, in particular, we need to send the name of the model. There are lots to choose from. We also need to specify that we want a streaming response by setting the stream key to true. Finally, we need to send the list of messages. This list of messages is a set of prompts and responses to and from the AI. And we persist those messages so that the AI has the context of the previous user prompts to apply to the new prompt.
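To illustrate the shape of that persisted message history, here is a small sketch. The Role and ChatMessage types, the withNewPrompt helper, and the sample contents are assumptions for illustration, not the library’s exact types.

```typescript
// Sketch of the running message history sent on each request, assuming
// the { role, content } shape used above. Contents are invented.
type Role = "user" | "assistant" | "system";

interface ChatMessage {
  role: Role;
  content: string;
}

// The existing conversation: prompts from the user, replies from the AI.
const history: ChatMessage[] = [
  { role: "user", content: "Recommend a jazz album." },
  { role: "assistant", content: "Try Kind of Blue by Miles Davis." },
];

// Each new prompt is appended to the full history, so the model sees the
// whole conversation context, not just the latest question.
function withNewPrompt(messages: ChatMessage[], prompt: string): ChatMessage[] {
  return [...messages, { role: "user", content: prompt }];
}

const next = withNewPrompt(history, "Something more recent?");
console.log(next.length); // 3
```

Because the whole array is re-sent on every request, the AI can resolve follow-ups like “something more recent?” against the earlier prompts.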
Once we’ve started the request, we turn that into an OpenAIStream stream from Vercel’s AI library. If you are using a different AI library, you’ll want to use a different stream adapter.
Finally, we return the experimental_StreamingReactResponse which takes the stream as input and calls the ui function each time a new streamed content message is received. It’s this function that turns that content into a set of tags, React Server Components, and client components, using JSX. In this example, we are simply wrapping the content in a div formatted with Tailwind.
Using Our Server Action
Once we have the server action created, we need to import it into our page. In the example provided by Vercel, we import the handler and then pass it to the Chat component as shown below:
import { handler } from "./action";
import { Chat } from "./chat";

export const runtime = "edge";

export default function Page() {
  return <Chat handler={handler} />;
}
This Page component is an RSC, and that’s good because an RSC can import a server action directly. So, the whole job of this component is to import the handler and send it to the Chat component, which is a client component.
It’s the Chat client component that is going to talk to the server action handler that the Page component passes to it. Let’s take it apart in sections.
In the first part, we import the useChat hook that will drive the interface and we define our Chat component with the handler property that has the server action.
"use client";

import { useChat } from "ai/react";

export function Chat({ handler }: { handler: any }) {
Inside the Chat client component, we call the useChat hook from the AI library.
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: handler,
  });
This gives us some handy data and functions that we can use:
- messages — This is the array of messages to and from the user and the AI. Each message is an object containing role, content, and ui. Role is a string that tells us whether the message comes from the user or the AI. Content is the text version of the content. And ui has the rendered components from the ui method in the server action.
- input — This is the text string of the input prompt.
- handleInputChange — This is an event handler that you assign to the onChange of the input control.
- handleSubmit — This is a form submit function that you assign to the onSubmit of the input form.
We then take that data and the functions and use them in our JSX return.
  return (
    <div className="container mx-auto p-4">
      <ul>
        {messages.map((m, index) => (
          <li key={index}>
            {m.role === "user" ? "User: " : "AI: "}
            {m.role === "user" ? m.content : m.ui}
          </li>
        ))}
      </ul>
      <form
        className="flex gap-2 fixed bottom-0 left-0 w-full p-4 border-t"
        onSubmit={handleSubmit}
      >
        <input
          className="border border-gray-500 rounded p-2 w-full"
          placeholder="what is Next.js…"
          value={input}
          onChange={handleInputChange}
          autoFocus
        />
        <button type="submit" className="bg-black text-white rounded px-4">
          Send
        </button>
      </form>
    </div>
  );
}
The code above takes the messages and formats them into a simple unordered list. You can, of course, lay this out and style it however you like.
It’s the ui portion that’s the real magic here. Of course, you can place and style this ui component tree however and wherever you want. But its contents are defined by the server action, and that’s the beauty of this whole thing.
Finally, we need a form and an input to interact with AI.
The way the hook is set up, the handleSubmit function needs to be on a form tag, so we wrap our text input in a form tag and put the handleSubmit on the onSubmit.
Within the form we have the input tag that we connect with the input text by setting the value to input and the onChange to handleInputChange.
And finally, we have a submit button that automatically calls the onSubmit of the parent form when it’s clicked or when the user hits return in the text field.
All in all, it’s a combination of the mundane and the magical. This is just a simple client component that renders some data it gets back from a hook, and then has an input form that can send data to a server action. But wow, what we can do with this ui component tree that is returned from the server action (or API route)!
Conclusions
This nifty little experimental function in Vercel’s AI library opens up a world of possibilities for dynamic, server-rendered UI. We’ve been talking about HTMX and how it puts rendering back on the server. Well, this implements a similar model but gives you even more power, because the components returned from the server can be client components that are interactive. The client components returned from the server action can also access context defined in the application. That means that the dynamically returned client components can act just like any other component on the page! It’s very exciting!