Part 1 of this series is here.
Coroutines are interruptible functions. They're functions that pause at specific points and allow the runtime to resume them when it wishes.
Think about this for a moment. A regular function, once started, runs until it returns or throws an exception. A coroutine, however, executes up until an interruption point, and then pauses and hands control over to the runtime. The runtime can then decide to do other things and come back later to continue the coroutine.
The kind of multitasking you get with coroutines is known as cooperative multitasking, because coroutines "cooperate" with the runtime. In preemptive multitasking, the runtime can interrupt a process whenever it wishes, as opposed to waiting for it to hand over control.
Coroutines are everywhere in JavaScript
Here's a fun fact: if you've written any asynchronous code in JavaScript, you've effectively written a coroutine. Take a look at this:
const fs = require("fs/promises");

async function readDirectoryCoroutine() {
  console.log("Reading directory");
  // interruption point ⤋
  const files = await fs.readdir(__dirname);
  console.log("Directory read");
}

async function readFileCoroutine() {
  console.log("Reading file");
  // interruption point ⤋
  const contents = await fs.readFile(__filename);
  console.log("File read");
}

readDirectoryCoroutine();
readFileCoroutine();
When I execute this script, I get:
❯ node .\test.js
Reading directory
Reading file
Directory read
File read
Here, await serves as an interruption point. readDirectoryCoroutine executes until it gets to the fs.readdir() call. It dispatches that to the operating system and pauses, allowing the runtime to switch over to readFileCoroutine, which in turn pauses at fs.readFile(). That pause gives the runtime a chance to check back on the earlier fs.readdir() call; the results are in, so it switches over to continue readDirectoryCoroutine, and so on.
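For contrast, if we await each coroutine before starting the next (a small variation on the script above), there's nothing for the runtime to switch to during the pauses, and the output becomes strictly sequential:

(async () => {
  await readDirectoryCoroutine(); // runs to completion before the next call starts
  await readFileCoroutine();
})();
// Reading directory
// Directory read
// Reading file
// File read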
Why coroutines?
Coroutines are an important tool to achieve concurrency within a single thread. In the first part of this series, we saw how JavaScript is single-threaded—every action is executed one after the other, on one thread, leading to long wait times for incoming requests. Coroutines help us get around that, without having to add more threads. We can execute our request handler up to an interruption point, and then switch to handling the next request, and so on.
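As a rough sketch of what that looks like on the server (a hypothetical handler, with a made-up file path, not a production setup), each request runs up to its await and then yields, letting the runtime pick up the next request:

const http = require("http");
const fs = require("fs/promises");

http.createServer(async (req, res) => {
  // interruption point: while the file is being read,
  // the runtime is free to start handling other requests
  const data = await fs.readFile("./data.txt");
  res.end(data);
}).listen(3000);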
Another place where concurrency is really important is user interfaces. User interfaces need to be snappy, especially in highly interactive applications like games. A UI that reacts 500ms after you click it will feel slow; in fact, Nielsen's guidelines put 100ms as the limit for a UI to feel like it's responding instantly. In the browser, there's a single thread doing everything: rendering UI, calculations, animations, HTTP requests, and more, so we need to be extra careful to keep UI latency low. If some part of our code takes too long to execute, UI interactions such as clicking and scrolling will lag, because the browser needs to wait for the code to finish.
Here's an example. Click the "Run" button, then immediately try clicking the other button.
See the Pen Blocking the main thread by Shalvah (@shalvah) on CodePen.
You'll see that page interactions don't work until after the copying and stringifying is done. Everything else had to wait, because the thread was busy.
Coroutines are made for this: we can pause our work at intervals so the browser can respond to UI events and keep things feeling smooth.
This is why many operations in JavaScript are asynchronous. If fetch() were synchronous, scrolling a page while it made an AJAX call wouldn't work. XMLHttpRequest has a synchronous option, but everyone says not to use it, for this exact reason. But our example is different: the work we're doing (copying an array and converting to JSON) doesn't come with an asynchronous option. So how about we make one? The good news is that that's easy to do in JavaScript.
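For reference, this is the synchronous mode in question: passing false as the third argument to open() makes send() block the main thread until the response arrives (the URL here is made up; don't do this in real code):

const xhr = new XMLHttpRequest();
xhr.open("GET", "/some-endpoint", false); // false = synchronous
xhr.send(); // nothing else happens (scrolling included) until this returns
console.log(xhr.responseText);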
Custom coroutines
Here's the main portion of code that caused the browser to freeze in the previous example:
// Create an array with 8 million items
const array = Array.from({ length: 8_000_000 }, () => ({ a: 3 }));

function copyAndStringify() {
  const copy = [...array];
  const stringified = JSON.stringify(copy);
}
Note: This example is a bit contrived—8 million items is probably an edge case—but I used it to make the lag easily noticeable. Most web apps that have a blocking problem only have a slight lag (UI temporarily glitches and then responds), but it's still jarring.
What if there was a way we could split up our copying and stringifying tasks? Perhaps we could copy only a couple of items at a go, then pause and hand over to the browser so it can respond to any UI events and then allow us to continue. How can we do this?
We can't just add an await in front of the function. await serves as an interruption point, but only for asynchronous functions that return promises. However, there's another JavaScript feature that allows for interruptible functions: generators.
Generators are functions that can be paused and continued. Here's an example:
function* exampleGenerator() {
  console.log("Executing Part 1");
  yield;
  console.log("Executing Part 2");
  yield;
  console.log("Executing Part 3");
  yield;
  return "Finished";
}

const generator = exampleGenerator();
generator.next(); // Prints "Executing Part 1"
generator.next(); // Prints "Executing Part 2"
generator.next(); // Prints "Executing Part 3"
In generators, the yield operator serves as an interruption point. Whenever you call next(), the generator function will execute until it encounters a yield, then pause and return control to you. When you call next() again, it'll resume from where it stopped last time and continue until it gets to another yield, and so on. At the end, when you call next() and there are no more yields, it will return. Pretty cool, huh?
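One detail worth noting for later: next() doesn't just resume the generator, it also returns an object with the yielded (or returned) value and a done flag. Running the example above step by step with a fresh generator:

const gen = exampleGenerator();
console.log(gen.next()); // Prints "Executing Part 1", then logs { value: undefined, done: false }
console.log(gen.next()); // Prints "Executing Part 2", then logs { value: undefined, done: false }
console.log(gen.next()); // Prints "Executing Part 3", then logs { value: undefined, done: false }
console.log(gen.next()); // Logs { value: "Finished", done: true }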
Generators are often used to implement lazy computations such as infinite sequences. Rather than calculating a list of 100,000 values at a go, I can just call next() to calculate the next one when I'm ready to use it (see the quick sketch below). However, our use case today is to implement custom coroutines. Let's see how.
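As a quick aside before we move on, here's that lazy-sequence idea in a minimal sketch, with a made-up naturals() generator that only computes a value when asked:

function* naturals() {
  let n = 0;
  while (true) {
    yield n++; // computed lazily, one value per next() call
  }
}

const numbers = naturals();
console.log(numbers.next().value); // 0
console.log(numbers.next().value); // 1
console.log(numbers.next().value); // 2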
Implementing coroutines with generators
We'll need two things:
- a function that provides interruption points (the coroutine)
- a dispatcher—the runtime that our coroutines yield control to. The dispatcher will be in charge of executing and resuming our coroutine at a good time.
Let's start with the coroutine. We'll be using a generator, with a yield at every place we want to pause. But this means we can't use the inbuilt JavaScript JSON.stringify and spread operator (...) to stringify/copy, as they're not pause-able. We'll have to write our own versions. Luckily, it's not hard:
// For copying, we iterate over the array and push the items to a new array
function* copyGenerator(array) {
  let copiedArray = [];
  let index = 0;
  for (let item of array) {
    if (index % 50_000 === 0) {
      yield; // Pause every 50k items
    }
    copiedArray.push(item);
    index++;
  }
  return copiedArray;
}
// For stringifying, since we know it's an array,
// we can just stringify each item, join with ',' and wrap in []
// e.g. [{a: 3}, {b: 4}] becomes '[{"a": 3}, {"b": 4}]'
function* stringifyGenerator(array) {
  let result = "[";
  let index = 0;
  for (let item of array) {
    if (index % 50_000 === 0) {
      yield; // Pause every 50k items
    }
    result += JSON.stringify(item);
    result += ",";
    index++;
  }
  // Strings are immutable, so we can't assign to result[result.length - 1];
  // instead, replace the trailing comma with the closing bracket
  result = result.slice(0, -1) + "]";
  return result;
}
In both functions, we're pausing after every 50,000 elements. There's no hard rule for picking that number, but JS copies and stringifies 50k items very quickly, so each batch finishes fast enough that we never hold up the user for long.
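If you want to experiment with that number, one option (not in the original code) is to make the batch size a parameter of the generator:

function* copyGenerator(array, batchSize = 50_000) {
  let copiedArray = [];
  let index = 0;
  for (let item of array) {
    if (index % batchSize === 0) {
      yield; // Pause every batchSize items
    }
    copiedArray.push(item);
    index++;
  }
  return copiedArray;
}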
Over to the dispatcher. We can't use these new functions as drop-in replacements for the old ones: calling a generator function doesn't execute its body, it just hands back a generator object, and something needs to keep resuming it at the right times. We need a dispatcher, something that knows these are meant to be resumed at intervals when the browser is free. Once again we're lucky: JavaScript has an event loop that allows us to easily schedule an operation (like resuming a function) for later. Here's a simple dispatcher implementation:
const dispatcher = {
  run(coroutine) {
    return new Promise((resolve, reject) => {
      const generator = coroutine();
      setTimeout(() => this.resumeGenerator(generator, resolve), 0);
    });
  },
  resumeGenerator(generator, resolve) {
    let { value, done } = generator.next();
    if (done === false) {
      // Schedule the next batch
      setTimeout(() => this.resumeGenerator(generator, resolve), 0);
    } else {
      // When the generator finishes, resolve the promise
      resolve(value);
    }
  }
};
Okay, let's break it down:
- The dispatcher's run() function is the main entry point here. We give it our coroutine (the generator function), and it kicks things off.
- We made dispatcher.run() return a promise, so that we can use async/await sugar to get the return value (returnValue = await dispatcher.run(coroutine)).
- The generator.next() function returns a done property that tells us if the generator is done (no more pauses left). The dispatcher uses that to track execution: if the generator has finished, it resolves the promise with the returned value; otherwise, it schedules the next batch.
- setTimeout() is another kind of interruption point. Using a setTimeout tells the event loop to run this code after the given timeout, but only once it's done with all pending work. So if there are pending UI events, the browser will get to attend to them before running the scheduled function (see the snippet right after this list).
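To see why setTimeout() specifically: promise callbacks are microtasks, which run before the browser gets a chance to re-render, while setTimeout() schedules a macrotask, which runs only after pending work, UI events included. A small demonstration of the ordering:

console.log("1: synchronous code runs first");
Promise.resolve().then(() => console.log("2: microtask, runs before the browser re-renders"));
setTimeout(() => console.log("3: macrotask, runs after pending work, UI included"), 0);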
And here's how we use the dispatcher:
async function copyAndStringify() {
  const copy = await dispatcher.run(() => copyGenerator(array));
  const stringified = await dispatcher.run(() => stringifyGenerator(copy));
}
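To wire this up on the demo page, something like the following works (the button id here is an assumption about the markup):

document.querySelector("#run").addEventListener("click", async () => {
  console.time("copyAndStringify");
  await copyAndStringify();
  console.timeEnd("copyAndStringify"); // how long the whole batched run took
});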
Pretty neat, eh? Now let's put it to the test. Try clicking "Run" now, then click the other button. You should see that there's little or no UI freezing.
See the Pen Custom coroutines to avoid blocking by Shalvah (@shalvah) on CodePen.
Even though it takes longer to run (around 5-6s if you're doing a lot of clicking, compared to the blocking version's 1.3s), the overall user experience is better: the UI remains responsive. And if we still experience a bit of lag, we can play around with changing the batch size (50k). A lower batch size will lead to less lag, but will make the operation take longer.
And here ends our experiment. On the server-side (Node.js), coroutines are a little less important, since you'd typically move long-running tasks to a queue, but they're still a useful technique to be aware of.
If you'd like to explore this stuff some more, there's a more robust package that makes it simpler to write custom coroutines in JS. It also provides a coroutine version of common operations like copying an object, parsing JSON, stringifying to JSON, and sorting an array—operations that can take a long time for large input and block the main thread. You'll also need to learn about the event loop and how it runs JavaScript code. It might take a lot of reading and re-reading to get it, but here are a few articles to get you started: Node.js, MDN, javascript.info.
Hope you had fun experimenting with coroutines! In the next part of the series, we'll see how we can work with coroutines in some more languages.