As AI technology evolves, running sophisticated machine learning models directly in the browser is becoming increasingly feasible. This guide walks you through loading and using the DeepSeek-R1 model in the browser with JavaScript, and covers the implementation details of the example found here.
Why Run NLP Models in the Browser?
Traditionally, Natural Language Processing (NLP) models are deployed server-side, requiring an internet connection to send requests and receive responses. However, with advancements like WebGPU and ONNX Runtime Web, it's now possible to run advanced models such as DeepSeek-R1 directly in the browser. The benefits include:
- Enhanced Privacy: User data never leaves their device.
- Reduced Latency: Eliminates delays associated with server communication.
- Offline Availability: Operable even without an internet connection.
About DeepSeek-R1
DeepSeek-R1 is a reasoning-focused language model. The browser demo uses its distilled DeepSeek-R1-Distill-Qwen-1.5B variant in quantized ONNX format, which keeps a small footprint while still offering high-quality text generation, making it well suited for on-device inference in the browser.
Setting Up Your Project
Prerequisites
To get started with running the DeepSeek-R1 model in your browser, you'll need:
- A modern browser that supports WebGPU.
- The `@huggingface/transformers` library (Transformers.js) for running transformer models in JavaScript; a short installation sketch follows this list.
- The worker script containing the logic for loading and running the DeepSeek-R1 model.
Demo: try it!
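The library is published on npm, so with a bundler (Vite, webpack, etc.) a plain `npm install @huggingface/transformers` is enough to make the import in the worker below resolve. As a rough sketch, you can also load it straight from a CDN inside an ES-module worker; the jsDelivr URL and the lack of a version pin below are illustrative choices, not something the original example prescribes:

```javascript
// Sketch: load Transformers.js from a CDN inside an ES-module worker.
// With a bundler you would instead run `npm install @huggingface/transformers`
// and keep the bare package import used in the worker script below.
import {
  AutoTokenizer,
  AutoModelForCausalLM,
} from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";
```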
Implementation Details
The full logic lives in a single web worker script. Here's a step-by-step look at how it loads and runs the DeepSeek-R1 model in your browser:

```javascript
import {
AutoTokenizer,
AutoModelForCausalLM,
TextStreamer,
InterruptableStoppingCriteria,
} from "@huggingface/transformers";
/**
* Helper function to perform feature detection for WebGPU
*/
async function check() {
try {
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) {
throw new Error("WebGPU is not supported (no adapter found)");
}
} catch (e) {
self.postMessage({
status: "error",
data: e.toString(),
});
}
}
/**
* This class uses the Singleton pattern to enable lazy-loading of the pipeline
*/
class TextGenerationPipeline {
static model_id = "onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX";
static async getInstance(progress_callback = null) {
if (!this.tokenizer) {
this.tokenizer = await AutoTokenizer.from_pretrained(this.model_id, {
progress_callback,
});
}
if (!this.model) {
this.model = await AutoModelForCausalLM.from_pretrained(this.model_id, {
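// "q4f16": 4-bit quantized weights with fp16 precision, keeping the download and memory footprint small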
dtype: "q4f16",
device: "webgpu",
progress_callback,
});
}
return [this.tokenizer, this.model];
}
}
const stopping_criteria = new InterruptableStoppingCriteria();
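// KV cache returned by the most recent generation call; cleared when the main thread sends a "reset" message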
let past_key_values_cache = null;
async function generate(messages) {
// Retrieve the text-generation pipeline.
const [tokenizer, model] = await TextGenerationPipeline.getInstance();
const inputs = tokenizer.apply_chat_template(messages, {
add_generation_prompt: true,
return_dict: true,
});
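// DeepSeek-R1 wraps its chain-of-thought between <think> and </think> tokens;
// grab their ids so we can tell when the model switches from reasoning to answering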
const [START_THINKING_TOKEN_ID, END_THINKING_TOKEN_ID] = tokenizer.encode(
"<think></think>",
{ add_special_tokens: false },
);
let state = "thinking"; // 'thinking' or 'answering'
let startTime;
let numTokens = 0;
let tps;
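// Called once per generated token: tracks tokens-per-second and flips the state when </think> appears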
const token_callback_function = (tokens) => {
startTime ??= performance.now();
if (numTokens++ > 0) {
tps = (numTokens / (performance.now() - startTime)) * 1000;
}
if (tokens[0] === END_THINKING_TOKEN_ID) {
state = "answering";
}
};
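// Called with each decoded text chunk; relay it (plus stats) to the main thread for display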
const callback_function = (output) => {
self.postMessage({
status: "update",
output,
tps,
numTokens,
state,
});
};
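// Stream generated text as it is produced; skip_prompt avoids re-emitting the input messages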
const streamer = new TextStreamer(tokenizer, {
skip_prompt: true,
skip_special_tokens: true,
callback_function,
token_callback_function,
});
// Tell the main thread we are starting
self.postMessage({ status: "start" });
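// Greedy decoding (do_sample: false) capped at 2048 new tokens;
// return_dict_in_generate exposes the KV cache alongside the generated sequences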
const { past_key_values, sequences } = await model.generate({
...inputs,
do_sample: false,
max_new_tokens: 2048,
streamer,
stopping_criteria,
return_dict_in_generate: true,
});
past_key_values_cache = past_key_values;
const decoded = tokenizer.batch_decode(sequences, {
skip_special_tokens: true,
});
// Send the output back to the main thread
self.postMessage({
status: "complete",
output: decoded,
});
}
async function load() {
self.postMessage({
status: "loading",
data: "Loading model...",
});
// Load the pipeline and save it for future use.
const [tokenizer, model] = await TextGenerationPipeline.getInstance((x) => {
self.postMessage(x);
});
self.postMessage({
status: "loading",
data: "Compiling shaders and warming up model...",
});
// Run model with dummy input to compile shaders
const inputs = tokenizer("a");
await model.generate({ ...inputs, max_new_tokens: 1 });
self.postMessage({ status: "ready" });
}
// Listen for messages from the main thread
self.addEventListener("message", async (e) => {
const { type, data } = e.data;
switch (type) {
case "check":
check();
break;
case "load":
load();
break;
case "generate":
stopping_criteria.reset();
generate(data);
break;
case "interrupt":
stopping_criteria.interrupt();
break;
case "reset":
past_key_values_cache = null;
stopping_criteria.reset();
break;
}
});
```
Key Points
- Feature Detection: The `check` function performs feature detection to ensure WebGPU support.
- Singleton Pattern: The `TextGenerationPipeline` class ensures that the tokenizer and model are loaded only once, preventing redundant initialization.
- Model Loading: The `getInstance` method loads the tokenizer and model from a pre-trained source, supporting progress callbacks.
- Inference: The `generate` function processes input and generates text output, using `TextStreamer` for streaming tokens.
- Communication: The worker listens for messages from the main thread and executes the corresponding action based on the message type (check, load, generate, interrupt, reset); a minimal main-thread sketch follows this list.
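To make the communication flow concrete, here is a minimal sketch of the main-thread side. The file name `worker.js`, the hard-coded prompt, and the use of `console.log` in place of real UI updates are assumptions made for illustration, not part of the original example.

```javascript
// main.js (sketch): create the worker and drive it with the message types it understands.
const worker = new Worker(new URL("./worker.js", import.meta.url), {
  type: "module",
});

worker.addEventListener("message", (e) => {
  const { status, data, output } = e.data;
  switch (status) {
    case "error":
      console.error(data); // e.g. "WebGPU is not supported (no adapter found)"
      break;
    case "loading":
      console.log(data); // "Loading model..." / "Compiling shaders and warming up model..."
      break;
    case "ready":
      // Model is loaded and warmed up: send a chat-formatted request.
      worker.postMessage({
        type: "generate",
        data: [{ role: "user", content: "Explain WebGPU in one sentence." }],
      });
      break;
    case "update":
      console.log(output); // streamed text chunks (e.data also carries tps, numTokens, state)
      break;
    case "complete":
      console.log(output); // the full decoded response
      break;
    // Download-progress events from from_pretrained are also forwarded here
    // and could be used to render a progress bar.
  }
});

// Feature-detect WebGPU, then start loading the model.
// (A real app would typically wait for user interaction and for the check to succeed.)
worker.postMessage({ type: "check" });
worker.postMessage({ type: "load" });
```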
Conclusion
Running NLP models like DeepSeek-R1 in the browser marks a significant advancement in enhancing user experiences and protecting data privacy. With just a few lines of JavaScript and the power of the `@huggingface/transformers` library, you can develop responsive and powerful applications. Whether you're building interactive tools or intelligent assistants, browser-based NLP can be a game-changer.
Explore the potential of DeepSeek-R1 in the browser and start crafting smarter front-end applications today!
This guide provides a comprehensive overview of how to load and use the DeepSeek-R1 model in a browser environment, complete with detailed code examples. For more specific implementation details, refer to the linked GitHub repository.