Written by Fimber Elemuwa
JavaScript generators let you define iterators and write code that can be paused and resumed, giving you fine-grained control over execution flow. In this article, we’ll explore how generators let you “yield” from a function to manage state, report progress on long-running operations, and make your code more readable.
While many developers immediately reach for tools like RxJS or other observables to handle asynchronous tasks, generators are often overlooked — and they can be surprisingly powerful.
We’ll compare generators with popular solutions like RxJS, showing you where generators shine and where more conventional approaches might be a better fit. Without further ado, let’s get started!
Understanding JavaScript generators
Simply put, generators are special functions in JavaScript that can be paused and resumed, allowing you to control the flow of execution more precisely than you can with regular functions.
A generator function is a special type of function that returns a generator object and conforms to the iterable protocol and the iterator protocol.
Generators were first introduced in ES6 and have since become a fundamental part of the language. They are defined using the `function` keyword suffixed with an asterisk, like `function*`. Here’s an example:
```js
function* generatorFunction() {
  return "Hello World"; // generator body
}
```
Sometimes, you might find the asterisk prefixed to the function name instead, like `function *generatorFunction()`. While this placement is less common, it is still valid.
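In fact, the asterisk can appear anywhere between the `function` keyword and the function name. A quick sketch of equivalent spellings:

```js
// All of these declare generator functions; only the asterisk placement differs
function* a() {}
function *b() {}
function * c() {}
```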
How generators differ from standard functions
At first glance, a generator might look like a normal function (minus the asterisk), but some important differences make them uniquely powerful.
In a standard function, once you call it, the function runs from start to finish. There’s no way to pause halfway and then pick back up again. Generators, on the other hand, allow you to pause execution at any `yield` point.
This pausable nature also preserves state between each pause, making generators perfect for scenarios where you need to keep track of what’s happened so far — like processing a large dataset in chunks. Additionally, when a normal function is called, it runs to completion and returns a value. However, when you call generator functions, they return a generator object. This object is an iterator used for looping through a sequence of values.
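As a preview (we’ll unpack `yield` in a moment), here’s a minimal sketch of processing a dataset in chunks while reporting progress; the `processInChunks` helper and its parameters are made up for illustration:

```js
// Process a large array in fixed-size chunks, reporting progress between chunks
function* processInChunks(items, chunkSize = 100) {
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    // ...do the real work on `chunk` here...
    yield { processed: Math.min(i + chunkSize, items.length), total: items.length };
  }
}

for (const progress of processInChunks(new Array(250).fill(0))) {
  console.log(`${progress.processed}/${progress.total}`); // 100/250, 200/250, 250/250
}
```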
When you work with a generator, you don’t just call it once and forget about it. Instead, you use methods like `next()`, `throw()`, and `return()` to control its state from the outside:
- `next(value)`: Resumes the generator and can pass a value back into it, which is received by the last `yield` expression. The `next()` method returns an object with `value` and `done` properties: `value` is the yielded value, and `done` indicates whether the iterator has run through all its values
- `throw(error)`: Throws an error inside the generator, effectively letting you handle exceptions in a more controlled manner
- `return(value)`: Ends the generator early, returning the specified value
This two-way communication is a big step up from regular functions and can be used to build more sophisticated workflows, including error handling and partial data processing.
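For instance, here’s a minimal sketch of `return()` cutting a generator short; the `counter` generator here is hypothetical:

```js
function* counter() {
  try {
    let n = 0;
    while (true) yield n++;
  } finally {
    console.log("cleaning up"); // runs even when the generator is ended early
  }
}

const it = counter();
console.log(it.next()); // { value: 0, done: false }
console.log(it.next()); // { value: 1, done: false }
console.log(it.return(99)); // logs 'cleaning up', then { value: 99, done: true }
```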
Example of generators in use
To begin, let’s initialize the Hello World generator function we showed earlier and retrieve its value:
```js
const generator = generatorFunction();
```
When you call `generatorFunction()` and store it in a variable, you won’t see the `"Hello World"` string right away. Instead, you get what’s called a generator object, and it’s initially in a “suspended” state, meaning it’s paused and hasn’t run any code yet.
If you try logging `generator`, you’ll see it’s not a plain string. It’s an object representing the paused generator. To get the value of the generator function, we need to call the `next()` method on the generator object:
```js
const result = generator.next();
```
This will give us the following output:
```
{ value: 'Hello World', done: true }
```
It returned our “Hello World” string as the object’s `value`, with `done` equal to `true` because there was no more code to execute. As a result, the status of the generator changes from suspended to closed. So far, we’ve only seen how to return a single value from a generator function. But what if we want to return multiple values? This is where the `yield` operator comes in.
The `yield` operator
JavaScript generators allow you to pause and resume function execution using the `yield` keyword. For example, imagine you have a generator function like this:
```js
function* generatorFunction() {
  yield "first value";
  yield "second value";
  yield "third value";
  yield "last value";
}
```
Each time you call `next()` on the generator, the function runs until it hits a `yield` statement, and then it pauses. At that point, the generator returns an object with two properties:
- `value`: The actual value you’re yielding
- `done`: A Boolean indicating whether the generator is finished
As long as there’s another `yield` (or until it hits a `return`), `done` will be `false`. Once the generator has nothing left to execute, `done` becomes `true`. Expanding on the above example, here’s what successive calls to the `next()` method return:
```js
const generator = generatorFunction();
generator.next(); // { value: 'first value', done: false }
generator.next(); // { value: 'second value', done: false }
generator.next(); // { value: 'third value', done: false }
generator.next(); // { value: 'last value', done: false }
generator.next(); // { value: undefined, done: true }
```
Notice how each of the first four calls to `next()` returns a yielded value with `done: false` — including the last `yield`, because the generator hasn’t actually finished running yet. Only the fifth call, once there are no more `yield` statements left, returns `done: true`.
Passing values to generators
What’s really cool is that `yield` isn’t just about returning values — like a two-way street, it can also receive them from wherever the generator is being called, giving you two-way communication between the generator and its caller.
To pass a value to a generator function, we can call the `next()` method with an argument. Here’s a simple example:
```js
function* generatorFunction() {
  console.log(yield);
  console.log(yield);
}

const generator = generatorFunction();
generator.next(); // First call — no yield has been paused yet, so nothing to pass in
generator.next("first input");
generator.next("second input");
```
This would log the following sequentially:
```
first input
second input
```
See how the first call to `generator.next()` doesn’t print anything? That’s because, at that point, there isn’t a paused `yield` ready to accept a value. By the time we call `generator.next("first input")`, there’s a suspended `yield` waiting, so `"first input"` gets logged. The same pattern follows for the third call.
This is exactly how generators allow you to pass data back and forth between the caller and the generator itself.
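To make that two-way flow a little more concrete, here’s a minimal sketch of a running total; the `runningTotal` name is made up for illustration:

```js
// Each next(n) sends a number in; each yield hands the current sum back out
function* runningTotal() {
  let total = 0;
  while (true) {
    total += yield total;
  }
}

const tally = runningTotal();
tally.next(); // prime the generator: run to the first yield
console.log(tally.next(5).value); // 5
console.log(tally.next(3).value); // 8
```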
Processing long async operations and streams
The arrival of ECMAScript 2018 introduced async generators, a special kind of generator function that works with promises. Thanks to async generators, you’re no longer limited to synchronous code in your generators. You can now handle tasks like fetching data from an API, reading files, or anything else that involves waiting on a promise.
Here’s a simple example of an async generator function:
```js
async function* asyncGenerator() {
  yield await Promise.resolve("1");
  yield await Promise.resolve("2");
  yield await Promise.resolve("3");
}

const generator = asyncGenerator();
await generator.next(); // { value: '1', done: false }
await generator.next(); // { value: '2', done: false }
await generator.next(); // { value: '3', done: false }
await generator.next(); // { value: undefined, done: true }
```
The main difference is that you have to `await` each `generator.next()` call to retrieve the value, because everything is happening asynchronously.
We can further demonstrate how to use async generators to view paginated datasets from a remote API. This is a perfect use case for async generators as we can encapsulate our sequential iteration logic in a single function. For this example, we’ll use the free DummyJSON API to fetch a list of paginated products.
To get data from this API, we can make a GET request to the following endpoint. We’ll pass `limit` and `skip` params to control the page size and offset for pagination:
```
https://dummyjson.com/products?limit=10&skip=0
```
A sample response from this endpoint might look like this:
```
{
  "products": [
    {
      "id": 1,
      "title": "Annibale Colombo Bed",
      "price": 1899.99
    },
    {...},
    // 10 items
  ],
  "total": 194,
  "skip": 0,
  "limit": 10
}
```
To load the next batch of products, you just increase `skip` by the same `limit` until you’ve fetched everything. With that in mind, here’s how we can implement a custom generator function to fetch all the products from the API:
```js
async function* fetchProducts(skip = 0, limit = 10) {
  let total = 0;
  do {
    const response = await fetch(
      `https://dummyjson.com/products?limit=${limit}&skip=${skip}`,
    );
    const { products, total: totalProducts } = await response.json();
    total = totalProducts;
    skip += limit;
    yield products;
  } while (skip < total);
}
```
Now we can iterate over it to get all the products using a `for await...of` loop:
```js
for await (const products of fetchProducts()) {
  for (const product of products) {
    console.log(product.title);
  }
}
```
It will log the products until there is no more data to fetch:
```
Essence Mascara Lash Princess
Eyeshadow Palette with Mirror
Powder Canister
Red Lipstick
Red Nail Polish
... // 15 more items
```
By wrapping the entire pagination logic in an async generator, your main code remains clean and focused. Whenever you need more data, the generator transparently fetches and yields the next set of results, making pagination feel like a straightforward, continuous stream of data.
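If you’d rather consume products one at a time instead of page by page, you can layer another generator on top with `yield*` delegation; this `fetchAllProducts` wrapper is a hypothetical variant built on the `fetchProducts` generator above:

```js
// Flatten the page-by-page stream into a product-by-product stream
async function* fetchAllProducts() {
  for await (const page of fetchProducts()) {
    yield* page; // delegate to the array's own iterator
  }
}

for await (const product of fetchAllProducts()) {
  console.log(product.title);
}
```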
Generators as state machines
While generators can be used as simple state machines (they remember where they left off each time), they aren’t always the most practical choice — especially when you consider the robust state management tools offered by most modern JavaScript frameworks.
In many cases, the extra code and complexity of implementing a state machine with generators can outweigh the benefits.
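For a sense of what the simple case looks like, here’s a minimal sketch of a cycling state machine; the `trafficLight` generator is made up for illustration:

```js
// An infinite generator that cycles through three states
function* trafficLight() {
  while (true) {
    yield "green";
    yield "yellow";
    yield "red";
  }
}

const light = trafficLight();
console.log(light.next().value); // 'green'
console.log(light.next().value); // 'yellow'
console.log(light.next().value); // 'red'
console.log(light.next().value); // 'green' again: the generator remembers where it left off
```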
If you still want to explore this approach, you might look into the Actor model, which was popularized by the Erlang programming language. Although the details go beyond the scope of this article, the Actor model is often more effective for managing state.
In this model, “actors” are independent entities that encapsulate their own state and behavior, and communicate exclusively through message passing. This design ensures that state changes happen only within the actor itself, making the system more modular and easier to reason about.
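Generators can stand in for toy actors, since `next(message)` doubles as message passing. Here’s a sketch with a hypothetical `counterActor`:

```js
// A toy actor: private state, mutated only in response to messages
function* counterActor() {
  let count = 0;
  while (true) {
    const message = yield count;
    if (message === "increment") count++;
    if (message === "reset") count = 0;
  }
}

const actor = counterActor();
actor.next(); // start the actor
console.log(actor.next("increment").value); // 1
console.log(actor.next("increment").value); // 2
console.log(actor.next("reset").value); // 0
```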
RxJS vs. Generators for processing web streams
When it comes to processing streams of data, both JavaScript generators and RxJS are great tools, but each comes with its strengths and weaknesses. Lucky for us, they aren’t mutually exclusive, so we can use both.
To demonstrate this, let’s imagine we have an endpoint that returns multiple randomized 8-character strings as a stream. For the first step, we can use a generator function to lazily yield chunks of data as we fetch them from the stream:
```js
// Fetch data from an HTTP stream, one chunk at a time
async function* fetchStream() {
  const response = await fetch("https://example/api/stream");
  const reader = response.body?.getReader();
  if (!reader) throw new Error("Response body is not readable");
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      yield value; // a Uint8Array chunk
    }
  } finally {
    reader.releaseLock();
  }
}
```
Calling `fetchStream()` returns an async generator. You can then iterate over these chunks using a loop — or, as we’ll see next, harness RxJS to add some stream-processing superpowers.
RxJS provides a rich set of operators — like `map`, `filter`, and `take` — that help you transform and filter asynchronous data flows. To use them with your async generator, you can convert the generator into an observable using RxJS’s `from` operator.
We’ll now use the `take` operator to consume only the first five chunks of data:
```js
import { from, take } from "rxjs";

// Consume the HTTP stream using RxJS
const decoder = new TextDecoder();

from(fetchStream())
  .pipe(take(5))
  .subscribe({
    next: (chunk) => {
      console.log("Chunk:", decoder.decode(chunk));
    },
    complete: () => {
      console.log("Stream complete");
    },
  });
```
If you are new to RxJS, the `from` operator converts the async generator into an observable. This allows us to subscribe and consume the fetched data as it arrives. Looking at our log output after decoding, we should be able to see the first five chunks of the stream:
```
Chunk: ky^p1egh
Chunk: 1q)zIz43
Chunk: xm5aJGSX
Chunk: GSx6a2UQ
Chunk: GFlwWPu^
Stream complete
```
Alternatively, you could consume the stream using a `for await...of` loop:
```js
// Consume the HTTP stream using for await...of
const decoder = new TextDecoder();
for await (const chunk of fetchStream()) {
  console.log("Chunk:", decoder.decode(chunk));
}
```
But with this approach, we miss out on RxJS operators, which make it easier to manipulate the stream in more flexible ways. For example, we can’t use the `take` operator to limit the number of chunks we want to consume.
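You can still stop early by breaking out of the loop by hand (breaking a `for await...of` calls the generator’s `return()`, which runs its `finally` block), but you lose the composability of operators. Here’s a sketch of a manual take:

```js
// A manual take(5): count chunks and break out early
const decoder = new TextDecoder();
let count = 0;
for await (const chunk of fetchStream()) {
  console.log("Chunk:", decoder.decode(chunk));
  if (++count === 5) break; // triggers fetchStream's finally, releasing the reader lock
}
```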
However, this limitation won’t last forever. With the Iterator Helpers proposal (now Stage 4), you’ll eventually be able to do things like limiting or transforming generator output natively — similar to what RxJS already does for observables.
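As a taste, the sync iterator helpers already work on plain generators in modern runtimes (a separate proposal covers their async counterparts). A minimal sketch:

```js
// Iterator helpers: take and map on a generator, no library required
function* numbers() {
  let n = 1;
  while (true) yield n++;
}

const firstFive = numbers()
  .take(5)
  .map((n) => n * 2)
  .toArray();
console.log(firstFive); // [2, 4, 6, 8, 10]
```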
For more complex asynchronous flows, RxJS still offers a robust toolkit that won’t be easily replaced by native iteration helpers anytime soon:
| **Aspect** | **Generators** | **RxJS (Observables)** |
| --- | --- | --- |
| **Programming model** | **Pull-based**: Consumer calls `next()` to retrieve data | **Push-based**: Data is emitted to subscribers when available |
| **Built-in vs. library** | Native to JavaScript (no extra dependency) | Requires the RxJS library |
| **Ease of use** | Relatively straightforward for sequential flows, but can be unfamiliar | Steeper learning curve due to extensive API (operators, subscriptions) |
| **Data flow** | Yields one value at a time, pausing between yields | Can emit multiple values over time, pushing them to subscribers |
| **Operators and transformations** | Minimal (manual iteration, no built-in transformations) | Rich operator ecosystem (`map`, `filter`, `merge`, `switchMap`, etc.) |
| **Scalability** | Can become cumbersome with multiple streams or complex branching | Designed for large-scale, reactive architectures and multiple streams |
| **Performance considerations** | Lightweight for simpler tasks (no external overhead) | Efficient for real-time or complex pipelines, but adds library overhead |
| **When to choose** | If you need fine-grained control, simpler iteration, fewer transformations | If you need robust data stream handling, built-in operators, or event pipelines |
Conclusion
JavaScript generators offer a powerful and often overlooked way to handle asynchronous operations, manage state, and process data streams. By allowing you to pause and resume execution, they enable a more precise flow of control compared to regular functions — especially when you need to tackle long-running tasks or iterate over large datasets.
While generators excel in many scenarios, tools like RxJS provide a powerful ecosystem of operators that can streamline complex, event-driven flows.
But there’s no need to pick sides: you can combine the elegance of generators with RxJS’s powerful transformations, or even just stick to a simple `for await...of` loop if that suits your needs.
Looking ahead, the new iterator helpers may bring generator capabilities closer to those of RxJS — but for the foreseeable future, RxJS remains a staple for handling intricate reactive patterns.