Authors: Mark Sze, Tvrtko Sternak, Davor Runje, Davorin Rusevljan
TL;DR:
* Build a real-time voice application using WebRTC and connect it with the RealtimeAgent. Demo implementation.
* Optimized for real-time interactions: experience seamless voice communication with minimal latency and enhanced reliability.
Realtime Voice Applications with WebRTC
In our previous blog post, we introduced the `WebSocketAudioAdapter`, a simple way to stream real-time audio using WebSockets. While effective, WebSockets can face challenges with quality and reliability in high-latency or variable network conditions. Enter WebRTC.
Today, we’re excited to showcase the integration of the OpenAI Realtime API with WebRTC, leveraging WebRTC’s peer-to-peer communication to deliver robust, low-latency, high-quality audio streaming directly from the browser.
Why WebRTC?
WebRTC (Web Real-Time Communication) is a powerful technology for enabling direct peer-to-peer communication between browsers and servers. It was built with real-time audio, video, and data transfer in mind, making it an ideal choice for real-time voice applications. Here are some key benefits:
1. Low Latency
WebRTC’s peer-to-peer design minimizes latency, ensuring natural, fluid conversations.
2. Adaptive Quality
WebRTC dynamically adjusts audio quality based on network conditions, maintaining a seamless user experience even in suboptimal environments.
3. Secure by Design
With encryption (DTLS and SRTP) baked into its architecture, WebRTC ensures secure communication between peers.
4. Widely Supported
WebRTC is supported by all major modern browsers, making it highly accessible for end users.
How It Works
This example demonstrates how to establish low-latency, real-time interactions with the OpenAI Realtime API over WebRTC from a web browser. Here’s how it works:
1. Request an Ephemeral API Key
* The browser requests a short-lived API key from your server.
* The browser connects to your backend via [**WebSockets**](https://fastapi.tiangolo.com/advanced/websockets/) to exchange configuration details, such as the ephemeral key and model information.
* [**WebSockets**](https://fastapi.tiangolo.com/advanced/websockets/) handle signaling to bootstrap the [**WebRTC**](https://webrtc.org/) session.
2. Generate an Ephemeral API Key
* Your backend generates an ephemeral key via the OpenAI REST API and returns it (see the sketch after this list). These keys expire after one minute to enhance security.
3. Initialize the WebRTC Connection
* **Audio Streaming**: The browser captures microphone input and streams it to OpenAI while playing audio responses via an `<audio>` element.
* **DataChannel**: A `DataChannel` is established to send and receive events (e.g., function calls).
* **Session Handshake**: The browser creates an SDP offer, sends it to OpenAI with the ephemeral key, and sets the remote SDP answer to finalize the connection.
* The audio stream and events flow in real time, enabling interactive, low-latency conversations.
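To make step 2 concrete, below is a minimal sketch of a backend route that mints an ephemeral key via OpenAI’s `POST /v1/realtime/sessions` endpoint. In the demo that follows, AG2’s RealtimeAgent performs this exchange for you and pushes the result to the browser over the WebSocket, so this route is purely illustrative; the `/session-key` path, the `httpx` client, and the model name are assumptions:

```python
import os

import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/session-key")
async def session_key() -> dict:
    """Mint a short-lived key the browser can use for its SDP offer."""
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            "https://api.openai.com/v1/realtime/sessions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"model": "gpt-4o-realtime-preview", "voice": "alloy"},
        )
    resp.raise_for_status()
    # The JSON includes client_secret.value, the ephemeral key
    # (it expires after one minute).
    return resp.json()
```

The browser then uses `client_secret.value` as the bearer token when posting its SDP offer to OpenAI.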
Example: Build a Voice-Enabled Language Translator
Let’s walk through a practical example of using WebRTC to create a voice-enabled language translator.
You can find the full example in the [realtime-agent-over-webrtc](https://github.com/ag2ai/realtime-agent-over-webrtc) repository on GitHub.
1. Clone the Repository
Start by cloning the example project from GitHub:
```bash
git clone https://github.com/ag2ai/realtime-agent-over-webrtc.git
cd realtime-agent-over-webrtc
```
2. Set Up Environment Variables
Create an `OAI_CONFIG_LIST` file based on the provided `OAI_CONFIG_LIST_sample`:

```bash
cp OAI_CONFIG_LIST_sample OAI_CONFIG_LIST
```

In the `OAI_CONFIG_LIST` file, update the `api_key` with your OpenAI API key.
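For reference, the file holds a JSON list of model configurations. A minimal sketch (the model name is illustrative; keep whatever the sample file specifies):

```json
[
    {
        "model": "gpt-4o-realtime-preview",
        "api_key": "sk-proj-..."
    }
]
```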
Supported key format
Currently, WebRTC can only be used with API keys that begin with `sk-proj`. Other keys may result in an internal server error (500) on OpenAI’s side. For more details, see this issue.
(Optional) Create and Use a Virtual Environment
To avoid cluttering your global Python environment:
```bash
python3 -m venv env
source env/bin/activate
```
3. Install Dependencies
Install the required Python packages:
```bash
pip install -r requirements.txt
```
4. Start the Server
Run the application with Uvicorn:
```bash
uvicorn realtime_over_webrtc.main:app --port 5050
```
When the server starts, you should see:
```
INFO: Started server process [12345]
INFO: Uvicorn running on http://0.0.0.0:5050 (Press CTRL+C to quit)
```
5. Open the Application
Navigate to `http://localhost:5050/start-chat` in your browser. The application will request microphone permissions to enable real-time voice interaction.
6. Start Speaking
To get started, simply speak into your microphone and ask a question. For example, you can say:
“What’s the weather like in Rome?”
This initial question will activate the agent, and it will respond, showcasing its ability to understand and interact with you in real time.
Code review
WebRTC connection
A lot of the WebRTC connection logic lives in `website_files/static/WebRTC.js`, so let’s take a look at that code first.
WebSocket Initialization
The WebSocket is responsible for exchanging initialization data and signaling messages.
```js
ws = new WebSocket(webSocketUrl);
ws.onmessage = async event => {
    const message = JSON.parse(event.data);
    console.info("Received Message from AG2 backend", message);
    if (message.type === "ag2.init") {
        await openRTC(message.config); // Starts the WebRTC connection
        return;
    }
    if (dc) {
        dc.send(JSON.stringify(message)); // Sends data via DataChannel
    } else {
        console.log("DC not ready yet", message);
    }
};
```
WebRTC Setup
This block configures the WebRTC connection, adds audio tracks, and initializes the `DataChannel`.
```js
async function openRTC(data) {
    const EPHEMERAL_KEY = data.client_secret.value;

    // Create the peer connection
    pc = new RTCPeerConnection();

    // Set up to play remote audio
    const audioEl = document.createElement("audio");
    audioEl.autoplay = true;
    pc.ontrack = e => audioEl.srcObject = e.streams[0];

    // Add microphone input as local audio track
    const ms = await navigator.mediaDevices.getUserMedia({ audio: true });
    pc.addTrack(ms.getTracks()[0]);

    // Create a DataChannel
    dc = pc.createDataChannel("oai-events");
    dc.addEventListener("message", e => {
        const message = JSON.parse(e.data);
        if (message.type.includes("function")) {
            ws.send(e.data); // Forward function messages to WebSocket
        }
    });

    // Create and send an SDP offer
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);

    // Send the offer to OpenAI
    const baseUrl = "https://api.openai.com/v1/realtime";
    const sdpResponse = await fetch(`${baseUrl}?model=${data.model}`, {
        method: "POST",
        body: offer.sdp,
        headers: {
            Authorization: `Bearer ${EPHEMERAL_KEY}`,
            "Content-Type": "application/sdp"
        },
    });

    // Set the remote SDP answer
    const answer = { type: "answer", sdp: await sdpResponse.text() };
    await pc.setRemoteDescription(answer);
    console.log("Connected to OpenAI WebRTC");
}
```
Server implementation
This server implementation uses FastAPI to set up the WebRTC and WebSocket interaction, allowing clients to communicate with a chatbot powered by OpenAI’s Realtime API. The server provides endpoints for a simple chat interface and real-time audio communication.
Create an app using FastAPI
First, initialize a FastAPI app instance to handle HTTP requests and WebSocket connections.
```python
app = FastAPI()
```
This creates an app instance that will be used to manage both regular HTTP requests and real-time WebSocket interactions.
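The snippets in this section are fragments of the demo’s server module; for context, they assume imports along these lines (a sketch, so exact module paths may differ from the repo):

```python
from logging import getLogger
from pathlib import Path
from typing import Annotated

from fastapi import FastAPI, Request, WebSocket
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates

from autogen.agentchat.realtime_agent import RealtimeAgent
```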
Define the root endpoint for status
Next, define a root endpoint to verify that the server is running.
```python
@app.get("/", response_class=JSONResponse)
async def index_page():
    return {"message": "WebRTC AG2 Server is running!"}
```
When accessed, this endpoint responds with a simple status message indicating that the WebRTC server is up and running.
Set up static files and templates
Mount a directory for static files (e.g., CSS, JavaScript) and configure templates for rendering HTML.
```python
website_files_path = Path(__file__).parent / "website_files"

app.mount(
    "/static", StaticFiles(directory=website_files_path / "static"), name="static"
)

templates = Jinja2Templates(directory=website_files_path / "templates")
```
This ensures that static assets (like styling or scripts) can be served and that HTML templates can be rendered for dynamic responses.
Serve the chat interface page
Create an endpoint to serve the HTML page for the chat interface.
```python
@app.get("/start-chat/", response_class=HTMLResponse)
async def start_chat(request: Request):
    """Endpoint to return the HTML page for audio chat."""
    port = request.url.port
    return templates.TemplateResponse("chat.html", {"request": request, "port": port})
```
This endpoint serves the `chat.html` page and passes the port number to the template, which is used for the WebSocket connection.
Handle WebSocket connections for media streaming
Set up a WebSocket endpoint to handle real-time interactions, including receiving audio streams and responding with OpenAI’s model output.
```python
@app.websocket("/session")
async def handle_media_stream(websocket: WebSocket):
    """Handle WebSocket connections providing audio stream and OpenAI."""
    await websocket.accept()
    logger = getLogger("uvicorn.error")

    realtime_agent = RealtimeAgent(
        name="Weather Bot",
        system_message="Hello there! I am an AI voice assistant powered by Autogen and the OpenAI Realtime API. You can ask me about weather, jokes, or anything you can imagine. Start by saying 'How can I help you'?",
        llm_config=realtime_llm_config,
        websocket=websocket,
        logger=logger,
    )
```
This WebSocket endpoint establishes a connection and creates a `RealtimeAgent` that will manage interactions with OpenAI’s Realtime API. It also includes logging for monitoring the process.
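The `realtime_llm_config` referenced here is not shown in the fragment above. A minimal sketch of how it can be assembled from the `OAI_CONFIG_LIST` file with AG2’s `config_list_from_json` helper (the filter, timeout, and temperature values are illustrative):

```python
import autogen

# Load matching entries from the OAI_CONFIG_LIST file created earlier.
realtime_config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4o-realtime-preview"]},
)

realtime_llm_config = {
    "timeout": 600,
    "config_list": realtime_config_list,
    "temperature": 0.8,
}
```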
Register and implement real-time functions
Define custom real-time functions that can be called from the client side, such as fetching weather data.
```python
@realtime_agent.register_realtime_function(
    name="get_weather", description="Get the current weather"
)
def get_weather(location: Annotated[str, "city"]) -> str:
    logger.info(f"Checking the weather: {location}")
    return (
        "The weather is cloudy." if location == "Rome" else "The weather is sunny."
    )
```
Here, a weather-related function is registered with the `RealtimeAgent`. It responds with a simple weather message based on the input city.
Run the RealtimeAgent
Finally, run the `RealtimeAgent` to start handling the WebSocket interactions.
```python
await realtime_agent.run()
```
This starts the agent’s event loop, which listens for incoming messages and responds accordingly.
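For orientation, here is a condensed sketch of how the fragments in this section fit together inside the single WebSocket handler: the agent is created first, functions are registered next, and only then is `run()` awaited:

```python
@app.websocket("/session")
async def handle_media_stream(websocket: WebSocket) -> None:
    await websocket.accept()
    logger = getLogger("uvicorn.error")

    # Create the agent for this connection (arguments as shown above).
    realtime_agent = RealtimeAgent(
        name="Weather Bot",
        system_message="...",  # the greeting shown above
        llm_config=realtime_llm_config,
        websocket=websocket,
        logger=logger,
    )

    # Register functions before starting the event loop.
    @realtime_agent.register_realtime_function(
        name="get_weather", description="Get the current weather"
    )
    def get_weather(location: Annotated[str, "city"]) -> str:
        logger.info(f"Checking the weather: {location}")
        return "The weather is cloudy." if location == "Rome" else "The weather is sunny."

    # Blocks until the client disconnects.
    await realtime_agent.run()
```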
Conclusion
This new integration of the OpenAI Realtime API with WebRTC unlocks the full potential of WebRTC for real-time voice applications. With its low latency, adaptive quality, and secure communication, it’s the perfect tool for building interactive, voice-enabled applications.
Try it today and take your voice applications to the next level!
Finding this useful?
The AG2 team is working hard to create content like this, not to mention building a powerful, open-source, end-to-end platform for multi-agent automation.
The easiest way to show your support is to star the AG2 repo, but also take a look at it for contributions or simply give it a try.
Also, let us know if you have any interesting use cases for RealtimeAgent, or features and improvements you would like to see. Join our Discord server for discussion.