Buffers in Node.js are as close to the metal as you can get in JavaScript—raw binary data, fast, easy to transport and transform.
A generalist can build many kinds of apps.
A specialist can build advanced ones.
Why not be both? Buffers are your easy step toward depth.
Why Buffers Matter
Buffers, combined with raw TCP servers, form the backbone of some of the world’s most critical software systems.
Take MySQL: at its core it’s a query engine and storage engine behind a TCP interface, communicating via buffers.
MongoDB? It uses BSON (built on buffers) to store large chunks of data.
At its core, a buffer is a raw chunk of memory.
const buffer = Buffer.alloc(1);
buffer.writeInt8(7, 0);
console.log(buffer.readInt8(0));
What’s happening?
- We ask the computer for a raw memory allocation of size 1.
- We write the number 7 as an 8-bit integer.
- We read it back.
That’s it. Raw memory, straight from JavaScript.
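A small aside worth knowing: Buffer.alloc zero-fills the memory it returns (its sibling Buffer.allocUnsafe does not), so a fresh buffer always starts in a known state:

```javascript
// Buffer.alloc hands back zeroed memory, so reads are predictable
// before you've written anything.
const zeroed = Buffer.alloc(2);
console.log([...zeroed]); // [ 0, 0 ]

zeroed.writeInt8(7, 0);
console.log(zeroed.readInt8(0)); // 7
console.log(zeroed.readInt8(1)); // 0, still untouched
```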
Building Intuition: Buffers as Memory
Code is fun. But intuition is just as important.
Think of memory like one long array—exactly as von Neumann envisioned:
const MEM_SIZE = 4096; // Example size
const mem = new Array(MEM_SIZE);
We all know arrays.
A buffer is just a subarray where each slot holds a byte (8 bits) of memory.
[ ■ ][ ■ ][ ■ ][ ■ ][ ■ ][ ■ ] ...
What’s the difference between 8-bit, 16-bit, and 32-bit values?
It’s how much information they can hold.
8 bits: one slot (a single byte, e.g. one grayscale pixel or one character)
16 bits: two slots
32 bits: four slots
Images? Collections of 8-bit pixels.
Videos? Collections of images.
Here’s the power:
How you interpret those bytes changes everything.
- Treat them as individual bytes → store pixels or characters.
- Group them into 4 or 8 bytes → represent larger numbers (32-bit integers, 64-bit floats).
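Here’s a minimal sketch of that idea: the same four bytes, read once as individual bytes and once as a single 32-bit integer.

```javascript
// Write four individual bytes into a 4-byte buffer.
const buf = Buffer.alloc(4);
buf.writeUInt8(0x01, 0);
buf.writeUInt8(0x02, 1);
buf.writeUInt8(0x03, 2);
buf.writeUInt8(0x04, 3);

// Interpretation 1: four separate 8-bit values.
console.log([...buf]); // [ 1, 2, 3, 4 ]

// Interpretation 2: one 32-bit big-endian integer (0x01020304).
console.log(buf.readUInt32BE(0)); // 16909060
```

Same memory, two meanings; the interpretation lives entirely in how you read it.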
Practical Buffers
We’ve already seen how to create a buffer—they’re the bread and butter of low-level programming and how data is represented!
One thing I absolutely love about Node.js is its API.
It’s so well-defined and closely maps to what’s happening under the hood. Take the Buffer.from function, for example. It’s a neat little tool that converts high-level data into a buffer—simple but powerful.
import { Buffer } from 'node:buffer';
const b = Buffer.from("Hello I am a string", "utf-8"); // utf-8 encoded, supports other encodings too
console.log(b.toString()); // handy decoder
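As a quick illustration of those other encodings, the same bytes can round-trip through hex or base64:

```javascript
// One string, several byte-level views of it.
const g = Buffer.from("Hello", "utf-8");
console.log(g.toString("hex"));    // "48656c6c6f"
console.log(g.toString("base64")); // "SGVsbG8="

// And decoding goes the other way:
console.log(Buffer.from("48656c6c6f", "hex").toString()); // "Hello"
```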
I’m assuming Node uses the native TextEncoder and TextDecoder under the hood—I haven’t dug into the source yet.
Here’s how they work:
Text encoding:
const e = new TextEncoder();
const res = e.encode("Some Text");
Text decoding:
const d = new TextDecoder();
console.log(d.decode(res)); // "Some Text"
What’s happening here? All these helpers are manipulating subarrays in memory.
This is the art of programming—what seems trivial at first is actually foundational. Think about the entire web protocol: HTTP is just a string! We transport data as text across the web. Buffers let us represent that text (or any data) at a lower level.
For example:
// client
const stringified = JSON.stringify(somedata);
sendToServerAPI(stringified);
// server
const parsed = JSON.parse(req.body);
This cycle is the heartbeat of most modern backends (unless you’re working with raw TCP). You’ll see these patterns everywhere, and we’ll dive even deeper in the BunnyMQ series, where we’ll build a custom protocol and handshake system.
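To see “HTTP is just a string” concretely, here’s a sketch of a raw GET request as the exact bytes a client writes to the socket:

```javascript
// A minimal HTTP/1.1 request is nothing but text with \r\n line endings.
const raw = Buffer.from(
  "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n",
  "utf-8"
);
console.log(raw.length); // 37 bytes on the wire
console.log(raw.toString("utf-8").split("\r\n")[0]); // "GET / HTTP/1.1"
```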
Serialization and Deserialization
Serialization converts data into a stream of bytes; deserialization is the reverse.
Let’s get even more practical with a native Node.js server—no libraries, just the raw stuff:
import http from "node:http";
async function serialize(data) {
console.log(data);
// { data: '{"msg":"Hello I am data"}', contentType: 'application/json' }
}
const server = http.createServer((req, res) => {
let body = '';
req.on('data', chunk => {
body += chunk.toString();
});
req.on('end', () => {
serialize({ data: body, contentType: req.headers['content-type'] });
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ received: JSON.parse(body) }));
});
});
server.listen(3000, () => console.log("Server running on :3000"));
Guess what? This is exactly what Express.js wraps under the hood.
Want to see the raw fetch side? Here’s a native request object:
import { request } from "http";
import { Buffer } from 'node:buffer';
const postData = JSON.stringify({ msg: 'Hello I am data' });
const options = {
hostname: 'localhost',
port: 3000,
path: '/',
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(postData),
},
};
const req = request(options, (res) => {
console.log(`STATUS: ${res.statusCode}`);
res.setEncoding('utf8');
res.on('data', chunk => console.log(`BODY: ${chunk}`));
res.on('end', () => console.log('No more data in response.'));
});
req.on('error', (e) => console.error(`Request error: ${e.message}`));
req.write(postData);
req.end(); // don't forget this!
Nothing fancy here—it’s pretty readable. We’re just creating a request to port 3000.
Serialization
Let’s update our serialize function to use buffers:
import { Buffer } from 'node:buffer';
import { writeFileSync } from 'node:fs';
async function serialize(data) {
const b = Buffer.from(data.data, "utf-8"); // actual body
const contentType = Buffer.from(data.contentType, 'utf-8'); // HTTP header
const bodyLength = Buffer.alloc(4); // holds body length
bodyLength.writeInt32BE(b.length, 0); // write as 32-bit integer
const combinedBuff = Buffer.concat([bodyLength, b, contentType]);
console.log(combinedBuff); // full serialized buffer
writeFileSync("./binaryData.bin", combinedBuff); // save to file
}
Many low-level systems rely on this simple process. MongoDB’s BSON format? Same idea.
What’s key here is this part:
const bodyLength = Buffer.alloc(4);
bodyLength.writeInt32BE(b.length, 0); // 32-bit body length
Why? Because the deserializer needs to know where the body starts and ends—everything’s packed into a single array!
Deserialization: Reading It Back
Here’s how we reverse the process:
import { readFileSync } from 'node:fs';
function deserialize() {
const f = readFileSync("./binaryData.bin"); // read binary file
const b = Buffer.from(f); // convert to buffer
const bodyLength = b.readInt32BE(0); // read first 4 bytes
const body = b.subarray(4, 4 + bodyLength); // extract body
const contentType = b.subarray(4 + bodyLength); // extract content type
console.log({ bodyLength, body, contentType });
return [body, contentType];
}
Deserialization is just reading the buffer in the same order we serialized it:
const combinedBuff = Buffer.concat([bodyLength, b, contentType]);
If this still feels confusing—don’t stress! Buffers take practice, and you’ll get plenty of hands-on time in the series.
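If it helps, here’s the same round trip as a self-contained sketch, kept in memory instead of a file (pack and unpack are just illustrative names for the serialize/deserialize logic above):

```javascript
// Frame layout: [4-byte length][body][contentType], same as above.
function pack(body, contentType) {
  const bodyBuf = Buffer.from(body, "utf-8");
  const ctBuf = Buffer.from(contentType, "utf-8");
  const len = Buffer.alloc(4);
  len.writeInt32BE(bodyBuf.length, 0); // length prefix tells the reader where the body ends
  return Buffer.concat([len, bodyBuf, ctBuf]);
}

function unpack(buf) {
  const bodyLength = buf.readInt32BE(0); // read the prefix first
  return [
    buf.subarray(4, 4 + bodyLength).toString(),   // body
    buf.subarray(4 + bodyLength).toString(),       // content type
  ];
}

const packed = pack('{"msg":"hi"}', "application/json");
console.log(unpack(packed)); // [ '{"msg":"hi"}', 'application/json' ]
```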
Bonus: Handling GET Requests
To round things off, let’s add a basic GET handler before req.on("data"):
if (req.url === "/data" && req.method === "GET") {
const [body, contentType] = deserialize();
res.writeHead(200, { 'Content-Type': contentType.toString() });
res.end(body.toString());
return;
}
Hit /data with a GET request and you’ll see the buffer decoded back to its original form. Pretty cool, right?
Abstractions are beautiful—they make our lives easier as developers.
But let’s be real (don’t shoot the messenger): Abstraction is a double-edged sword. The further you drift from the metal, the less you truly own your system. And that can affect your value as an engineer.
How did I climb the ladder so quickly and stay afloat in my first job? Simple: I can go wide and deep.
- Wide: I’ve built websites with React and Laravel.
- Deep: I’ve worked on native software and backend systems in Go, Python, C#, and JavaScript.
I’ve created a custom dataframe in JavaScript, built an internal pipeline framework inspired by Kedro, and developed an internal graph data visualization tool.
None of that would’ve happened if I hadn’t taken a wild leap in 2022 and built a compiler for my own frontend framework.
And guess what? Buffers can be that first step for you—a gateway into low-level programming through a friendly language.
They’re everywhere:
In Go: Buffers handle raw data streams efficiently.
package main

import (
    "bytes"
    "fmt"
)

func main() {
    data := []byte("Hello, buffer world!")
    buf := bytes.NewBuffer(data)
    fmt.Println("Buffer contents:", buf.String())
}
In C: They form the foundation of memory management.
#include <stdio.h>
#include <string.h>

int main() {
    char data[] = "Hello, buffer world!";
    char buffer[50];
    strcpy(buffer, data);
    printf("Buffer contents: %s\n", buffer);
    return 0;
}
In the browser: Buffers power WebSockets, File APIs, and multimedia processing.
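In the browser the equivalent primitives are ArrayBuffer and typed arrays like Uint8Array, paired with TextEncoder and TextDecoder (Node.js ships all of these too, so this sketch runs in both environments):

```javascript
// A Uint8Array is the browser-side cousin of Node's Buffer:
// a view over raw bytes.
const bytes = new TextEncoder().encode("Hello, buffer world!");
console.log(bytes instanceof Uint8Array); // true

// subarray works here too, just like on Buffers.
console.log(new TextDecoder().decode(bytes.subarray(0, 5))); // "Hello"
```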
They’re powerful. In BunnyMQ, we build an entire protocol around buffers for data serialization and TCP transfers.
Wrapping Up
Here’s what we covered:
✅ The basics of buffers.
✅ Practical examples—including a simple serializer and deserializer on an HTTP server.
✅ Why going deep matters (and how buffers give you that edge).
Buffers are fun. Low-level programming makes you feel alive.
Embrace it—you’ll love it. This is the essence of software.
In the BunnyMQ series, we’re building a pure JavaScript message broker for distributed systems.
You can find me on X.
see ya!🫡
More resources:
Buffers in the browser
Node.js Docs