Cameron King

Build Voice AI Nextjs Apps with OpenAI Realtime API Beta (WebRTC) & shadcn/ui

Integrating Voice AI into your Next.js applications has become far more streamlined with the arrival of OpenAI's Realtime API Beta during the company's "12 Days" announcements in December 2024: on Day 9, OpenAI announced WebRTC support for the Realtime API.

To make that integration easier, the shadcn-openai-realtime-api repository offers a robust starter template for developers building voice-enabled AI assistants with tool calling in Next.js.

Demo here: OpenAI Realtime Nextjs Starter

What This Repository Offers

The shadcn-openai-realtime-api repository is a Next.js 15 starter project that leverages the OpenAI GPT-4o Realtime WebRTC API and tool calling. It enables developers to build their own Voice AI assistants using the latest technologies as of December 2024.

Key features include:

  • WebRTC-Based Audio Streaming: Captures microphone input and streams audio to the AI backend in real time for live voice conversations (a minimal connection sketch follows this list).
  • OpenAI Realtime API Integration: Uses OpenAI's Realtime API to process audio input and generate audio responses, enabling natural speech-to-speech interactions.
  • Tool Calling: Includes example client-side tools such as getCurrentTime, partyMode, changeBackground, and launchWebsite, showing how to extend the AI's capabilities (the sketch below also answers a tool call).
  • shadcn/ui Components: Employs modern, accessible UI components from shadcn/ui for rapid prototyping and development.
  • Tailwind CSS Styling: Offers clean, customizable components styled with Tailwind CSS, making them easy to adapt to any design system.
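The starter wraps all of this in a reusable hook, but the underlying flow is short enough to sketch in one place. The code below is not the repo's actual hook; it's a minimal stand-alone version that assumes a `/api/session` route returning an ephemeral client secret (sketched in the Getting Started section further down), and it uses the Realtime API beta event names and model ID as of December 2024, which may change.

```typescript
// Minimal client-side sketch: connect the browser to the Realtime API over WebRTC
// and answer one client-side tool call ("getCurrentTime"). Error handling is omitted,
// and the tool itself would still need to be registered via a session.update event.

export async function startRealtimeSession(): Promise<RTCPeerConnection> {
  // 1. Fetch a short-lived client secret from your own backend
  //    (never expose OPENAI_API_KEY in the browser).
  const tokenResponse = await fetch("/api/session");
  const { client_secret } = await tokenResponse.json();
  const ephemeralKey: string = client_secret.value;

  const pc = new RTCPeerConnection();

  // 2. Play the model's audio as it arrives.
  const audioEl = document.createElement("audio");
  audioEl.autoplay = true;
  pc.ontrack = (event) => {
    audioEl.srcObject = event.streams[0];
  };

  // 3. Stream the user's microphone to the model.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  mic.getTracks().forEach((track) => pc.addTrack(track, mic));

  // 4. Use a data channel for JSON events (transcripts, tool calls, etc.).
  const dc = pc.createDataChannel("oai-events");
  dc.onmessage = (message) => {
    const event = JSON.parse(message.data);

    // When the model finishes a function call, run it locally and send the result back.
    if (
      event.type === "response.function_call_arguments.done" &&
      event.name === "getCurrentTime"
    ) {
      dc.send(JSON.stringify({
        type: "conversation.item.create",
        item: {
          type: "function_call_output",
          call_id: event.call_id,
          output: JSON.stringify({ now: new Date().toISOString() }),
        },
      }));
      // Ask the model to continue speaking with the tool result in context.
      dc.send(JSON.stringify({ type: "response.create" }));
    }
  };

  // 5. Standard WebRTC offer/answer exchange with OpenAI's Realtime endpoint.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const sdpResponse = await fetch(
    "https://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-12-17",
    {
      method: "POST",
      body: offer.sdp,
      headers: {
        Authorization: `Bearer ${ephemeralKey}`,
        "Content-Type": "application/sdp",
      },
    },
  );
  await pc.setRemoteDescription({ type: "answer", sdp: await sdpResponse.text() });

  return pc;
}
```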

Getting Started

Clone the repository, install dependencies, and start the dev server:

```bash
git clone https://github.com/cameronking4/shadcn-openai-realtime-api.git
cd shadcn-openai-realtime-api
pnpm i && pnpm dev
```

Create a .env file with your OpenAI API key:

```bash
OPENAI_API_KEY=sk-proj-...
```
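The starter ships its own session route, but to give a rough idea of what the key is used for: a route handler along these lines (the file path and session options here are illustrative, not the repo's exact code) exchanges the server-side key for an ephemeral client secret that the browser can safely use for the WebRTC handshake. The /v1/realtime/sessions endpoint shape reflects the beta as of December 2024.

```typescript
// app/api/session/route.ts (illustrative path)
// Mints a short-lived Realtime client secret using the server-side OPENAI_API_KEY.

export async function GET() {
  const response = await fetch("https://api.openai.com/v1/realtime/sessions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-realtime-preview-2024-12-17",
      voice: "alloy",
    }),
  });

  // The response includes client_secret.value, which the browser uses
  // to authenticate the SDP exchange in the client sketch above.
  const session = await response.json();
  return Response.json(session);
}
```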

OpenAI Realtime Blocks
To further enhance your Voice AI application, consider integrating pre-built, styled components from the openai-realtime-blocks repository.

These components are designed to seamlessly work with the same WebRTC hooks used in your current setup, providing a consistent and efficient development experience.


Live Demo: Experience these components in action at openai-realtime-blocks.vercel.app.

Notable components in the UI library include a morphing blob, a Siri-style recreation, a ChatGPT voice animation, and a 3D orb.

By copying and pasting components from the OpenAI Realtime Blocks Library, you can:

  • Enhance UI Consistency: Utilize professionally styled components that align with modern design standards.

  • Streamline Development: Reduce the time spent on custom styling and focus on core functionality.

  • Ensure Compatibility: Leverage components built to integrate smoothly with existing WebRTC hooks and the OpenAI Realtime API.

To launch your own Voice AI app and integrate these components:

  1. Explore the Repository: Review the available components and documentation in the openai-realtime-blocks repository.

  2. Install Necessary Packages: Follow the installation instructions provided in the repository to add the components to your project.

  3. Implement Components: Add the pre-built blocks to your UI, or swap them in for existing elements, making sure to wire them up to the provided WebRTC hooks (a rough example follows below).
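As a rough idea of how the pieces fit together, a page might look like the sketch below. The hook and component names here are placeholders for whatever the two repositories actually export; check their docs for the real names and props before copying this.

```tsx
"use client";

// Hypothetical page wiring a copied Realtime Block to the starter's WebRTC hook.
import { useWebRTCAudioSession } from "@/hooks/use-webrtc"; // hook name assumed
import { Orb } from "@/components/blocks/orb";              // block name assumed
import { Button } from "@/components/ui/button";            // shadcn/ui

export default function VoicePage() {
  // The hook is assumed to expose session state, a start/stop handler,
  // and the current audio volume for driving the visualization.
  const { isSessionActive, handleStartStopClick, currentVolume } =
    useWebRTCAudioSession("alloy");

  return (
    <main className="flex min-h-screen flex-col items-center justify-center gap-6">
      <Orb volume={currentVolume} active={isSessionActive} />
      <Button onClick={handleStartStopClick}>
        {isSessionActive ? "End conversation" : "Start conversation"}
      </Button>
    </main>
  );
}
```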

By following these steps, you can elevate the user experience of your Voice AI application with minimal effort, leveraging the power of community-driven, open-source components.
