Introduction
Over the past 15 years, the JavaScript ecosystem has expanded rapidly, introducing countless tools to make development easier. But these tools come at a cost: growing bundle sizes. In fact, data from the HTTP Archive shows that the average amount of JavaScript transferred per page has surged from 90 KB in 2010 to 650 KB in 2024 (source).
Despite growing adoption and advances in compression, this trend shows no signs of slowing. As we keep adding features, the challenge remains: How can we ship less JavaScript?
Oddly enough, the solutions are both easy and hard. The easy part is the project-level tweaks that can yield quick wins. The hard part is making a lasting impact, which requires a community-wide change to improve bundlers, libraries, and tools.
This article focuses on actionable improvements for your projects, covering:
- Bundlers: Optimizing build tools to reduce output size.
- Libraries: Choosing and using external dependencies wisely.
- Your project: Practical steps to shrink your bundles.
Future articles will address ecosystem-wide improvements we can make, but for now, let’s tackle how these factors contribute to bloated bundles — and how to manage them.
Why optimizing JavaScript matters
JavaScript is the engine behind modern web interactivity, but it isn’t free. JavaScript is the most computationally expensive resource your browser has to handle. It’s often the bottleneck that determines whether a page feels fast or sluggish, as a bloated bundle can block rendering and degrade overall performance.
The bigger the JavaScript bundle, the longer it takes to load, parse, compile, and run. This delays everything else — like showing content or letting users interact with the page. For someone on a high-end laptop with a fiber connection, this might be a minor annoyance. But for someone on a low-powered phone or a spotty network, it can be the difference between staying or leaving your site entirely.
The first step in reducing the JavaScript bundle size is tree shaking (or “dead code elimination”), which most bundlers do out of the box. But are all bundlers equal?
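As a quick illustration (a minimal sketch, not taken from the benchmark), tree shaking works on static ES-module imports and exports: the bundler can see which exports are actually used and drop the rest.

```js
// utils.js — a module with two named exports
export function formatPrice(value) {
  return new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency: 'USD',
  }).format(value);
}

export function formatDate(date) {
  return new Intl.DateTimeFormat('en-US').format(date);
}
```

```js
// main.js — only formatPrice is imported, so a tree-shaking bundler
// can remove formatDate from the output entirely
import { formatPrice } from './utils.js';

console.log(formatPrice(19.99));
```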
Bundlers
Bundling in JavaScript has come a long way — from manual concatenation and task runners to sophisticated bundlers. Today, bundler performance is a key focus, with developers prioritizing faster builds. However, build speed isn’t everything. Equally important is the size of the bundles they produce, as smaller bundles translate to faster loading times for users.
In search of better performance, we have moved from writing bundlers in JavaScript to languages like Rust and Go. This switch required writing them from scratch, so every feature and optimization present in the old bundlers has to be reimplemented. In the long run, this will likely pay off. However, in the short term, it means that the new bundlers are missing some features that JavaScript bundlers had years to develop, like good tree shaking. And this is the very feature that can help us minimize bundle size.
Benchmark
Of course, talk is cheap, so let's look at the numbers, shall we?
Let's compare eight popular libraries and bundle them with seven popular bundlers. To keep things fair, I used:
- Node 22.12.0
- Self-reported build times
- Third-run timings to let caches warm up, especially for tools like Parcel
- Configurations that remove all comments, including licenses, since bundlers handle them differently (see the sketch below)
You can check out the benchmark setup repository for the exact configurations.
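The exact configurations live in that repository; as an illustration of the kind of setting involved, here is how removing all comments (including license comments) can look with esbuild. This is a sketch, not the benchmark's actual config:

```js
// build.mjs — esbuild build that strips all comments from the output
import { build } from 'esbuild';

await build({
  entryPoints: ['src/index.js'],
  bundle: true,
  minify: true,
  // Don't preserve @license / @preserve comments in the output
  legalComments: 'none',
  outfile: 'dist/bundle.js',
});
```

Bundlers that rely on terser expose a similar switch (terser's format.comments option), which is why normalizing comment handling matters for a fair comparison.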
Bundlers tested:
- esbuild (0.24.0)
- Parcel (2.13.2)
- Rolldown (0.15.0-snapshot-993c4a1-20241205003858)
- Rollup (4.28.0)
- Rspack (1.1.5)
- Vite (6.0.3)
- webpack (5.97.1)
Note that at the time of writing, Rolldown is still in alpha, so it's at a disadvantage and its results will likely improve over time.
Libraries tested:
- chart.js
- ckeditor5
- d3
- handsontable
- luxon
- mobx
- tippy.js
- zod
These libraries vary in size and features — some can function almost like standalone applications.
Build speed
Let's start with build speed, as this is something that developers seem to care a lot about. When bundling all of these libraries together, esbuild is the winner, with a build time of 192 ms. Compared to the slowest build time of 7.23 seconds, that's over 37 times faster.
Based on these results, we can group the bundlers into three categories:
- Blazingly fast™: esbuild, Parcel, Rolldown (<500 ms, with esbuild under 200 ms).
- Faster: Rspack (2.2 seconds).
- Slow: Rollup, Vite, webpack (5+ seconds each).
The differences are stark. For instance, Rolldown and Rspack are 11.5× and 3.3× faster than their older counterparts, Rollup and webpack, respectively — all while maintaining theoretical backward compatibility. Switching to these newer bundlers could boost productivity significantly on larger projects.
Output size
When it comes to output size, the differences aren’t as drastic as build times, but they still matter.
Aggregated results
When bundling all eight libraries together, Vite is the winner, with an output size of 2087 KiB. Compared to the largest output of 2576 KiB, that's a 23.5% smaller output.
A 23.5% difference in output size is substantial: on a slow 3G connection, the smallest bundle might take around 5.7 s to download, while the largest would take closer to 7 s. Parsing and execution times also scale with bundle size, so the real-world difference could be even more noticeable.
Based on these results, we can group the bundler outputs into three categories again:
- Smallest: esbuild, Parcel, Rollup, and Vite (~2085–2160 KiB).
- Okay: webpack (~2317 KiB).
- Big: Rolldown, Rspack (~2490–2580 KiB).
Individual libraries
Aggregated results don't paint the whole picture because it's unlikely that you will use all the libraries listed above in your project. What is more interesting is how these bundlers handle the individual libraries.
For libraries like chart.js and mobx, the choice of bundler can dramatically affect the output size, with differences reaching 70%. This highlights the importance of testing bundlers with your specific dependencies. In most other cases, the difference is much smaller, at around 20-30%.
Moreover, while webpack ended up in the middle in aggregate, it performed the best in 6 out of 8 cases. However, because it performed much worse when bundling handsontable and chart.js, it ended up where it did. This means that, depending on the libraries you use, webpack can be a good choice.
On the other side of the spectrum, we have Rolldown. It performed the worst in 7 out of 8 cases (remember that it's still in alpha).
Rspack is a similar story. While it performed better than Rolldown, it still produced a bundle much larger than the other bundlers.
If you’re considering migrating to a newer bundler, test it with the libraries you use to make sure the faster builds don't come at the cost of increased output size.
Bundle size vs. build speed
As shown, newer bundlers are much faster but may produce larger bundles. When migrating from an older bundler, don’t only compare build times — compare the resulting bundle sizes, too. You might find yourself trading faster builds for larger bundles.
For example, after Angular switched from webpack to esbuild, some developers reported that the size of an empty Angular app increased by around 20 KB. This highlights perfectly the build-speed vs. bundle-size trade-off.
That's not to say you should ignore build speed; it matters for developer productivity and happiness. There's also a correlation between CI build times and the time needed to merge code.
When choosing a bundler, look at the features it provides first. Then aim for a balance between build speed and bundle size: select the bundler that produces the smallest bundle within a build time you are comfortable with.
Test a few representative libraries from your project. If your dependencies make up most of your codebase, the differences you see in these benchmarks can be a good predictor for your situation.
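One low-effort way to do this is to bundle a tiny entry file that imports only your heaviest dependencies and compare the resulting sizes across bundlers. Here is a minimal sketch using esbuild's JavaScript API; chart.js and zod stand in for whatever your project actually depends on:

```js
// bench.mjs — bundle a minimal entry and report the minified output size
import { build } from 'esbuild';
import { statSync } from 'node:fs';

await build({
  stdin: {
    // Swap these imports for the libraries your project actually uses
    contents: `
      import { Chart } from 'chart.js';
      import { z } from 'zod';
      console.log(Chart, z);
    `,
    resolveDir: process.cwd(),
  },
  bundle: true,
  minify: true,
  outfile: 'dist/bench.js',
});

console.log(`Minified size: ${(statSync('dist/bench.js').size / 1024).toFixed(1)} KiB`);
```

Repeat the same entry with the other bundlers you are considering and compare the numbers.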
Libraries
Next on our list are external libraries, which often make up the bulk of your JavaScript bundle. In many, if not most, applications I have worked on, they accounted for the majority of the bundle size. That's why it's so important to choose (and use) them wisely.
Gold but old
Many of us have installed libraries like lodash, axios, or moment just to use a single function — leading to bloated applications. These libraries are great and historically important, but as they became more popular, lighter alternatives were created, and some of their features were added to the language itself.
We can take advantage of that. I could list native APIs or newer and smaller alternatives for these libraries, but there are already many articles covering that. And there are so many other libraries that it would be impossible to cover them all.
That's why I will only give you one piece of general advice: take a look at the libraries you use and see if you can remove them or replace them with native APIs or smaller alternatives. The YOU MIGHT NOT NEED * website is a great resource to get started.
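To give a flavor of what such replacements look like, here are a few common examples (the URL is a placeholder, and Object.groupBy requires a modern runtime):

```js
// Instead of axios for a simple GET request, use the built-in fetch API
// (inside an ES module or async function)
const response = await fetch('https://api.example.com/users/1');
const user = await response.json();

// Instead of moment for formatting dates, use Intl.DateTimeFormat
const formatted = new Intl.DateTimeFormat('en-GB', { dateStyle: 'long' }).format(new Date());

// Instead of lodash's groupBy, use the native Object.groupBy (ES2024)
const byRole = Object.groupBy(
  [{ name: 'Ada', role: 'admin' }, { name: 'Bob', role: 'user' }],
  (person) => person.role,
);
```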
Optimized installation paths
Most libraries aren’t optimized for size by default, but some offer special installation paths or partial builds. Even among the libraries in our test, chart.js, handsontable, and ckeditor5 offer a way to reduce the size of the library by only including the parts you need. Let's look at ckeditor5 as an example.
The default installation path results in a bundle size between 660 and 800 KiB. However, if we use the optimized installation path, the bundle size drops to 603-653 KiB, with only the bundle produced by Rolldown being around 750 KiB. This is a 7% to 23% reduction in size, depending on the bundler.
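I won't reproduce the full ckeditor5 setup here, but chart.js (also on the list above) illustrates the same pattern well: instead of importing the auto-registering build, you register only the controllers, elements, and scales you need. A sketch, with a hypothetical #sales canvas element:

```js
// Non-optimized path: pulls in and registers every chart type and scale
// import Chart from 'chart.js/auto';

// Optimized path: register only what a simple line chart needs
import {
  Chart,
  LineController,
  LineElement,
  PointElement,
  LinearScale,
  CategoryScale,
} from 'chart.js';

Chart.register(LineController, LineElement, PointElement, LinearScale, CategoryScale);

new Chart(document.querySelector('#sales'), {
  type: 'line',
  data: {
    labels: ['Jan', 'Feb', 'Mar'],
    datasets: [{ label: 'Sales', data: [12, 19, 7] }],
  },
});
```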
Duplicated dependencies
Another thing to look out for is duplicated dependencies. This is a surprisingly common problem in JavaScript applications. For example, the Bluesky embed widget shipped two versions of the zod validation library, and removing the duplicate reduced the bundle size by ~9%.
This problem usually isn't caused by you installing two different versions of the same library directly, but by your project and one of your external dependencies relying on the same library in different versions. It can often be solved by updating the libraries you depend on.
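You can also spot duplicates without any extra tooling by scanning npm's lockfile. A minimal sketch, assuming the npm lockfile v2/v3 format with a packages map (npm dedupe can then often flatten what it finds):

```js
// find-duplicates.mjs — list packages installed in more than one version
import { readFileSync } from 'node:fs';

const lockfile = JSON.parse(readFileSync('package-lock.json', 'utf8'));
const versions = new Map();

for (const [path, meta] of Object.entries(lockfile.packages ?? {})) {
  if (!path.includes('node_modules/')) continue; // skip the root project entry
  // The package name is everything after the last "node_modules/" segment
  const name = path.slice(path.lastIndexOf('node_modules/') + 'node_modules/'.length);
  if (!meta.version) continue;
  if (!versions.has(name)) versions.set(name, new Set());
  versions.get(name).add(meta.version);
}

for (const [name, found] of versions) {
  if (found.size > 1) console.log(`${name}: ${[...found].join(', ')}`);
}
```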
Your project
With all of this in mind, we can finally move to the last piece of the puzzle — your project. Here’s what you can do to shrink your bundles and improve performance.
Inspect your bundles
The first step is visibility. Without understanding what’s inside your bundles, reducing their size becomes a guessing game. For this, you can use a bundle analyzer and visualizer I created called Sonda. It works with most bundlers mentioned above (except Parcel) and accurately shows the size of individual files that contribute to the bundle.
You can start by installing it in your project and visually inspecting parts of your bundle.
Once you have a good understanding of what’s inside the bundles and have identified the parts that can be optimized, you can click on the graph tiles to see:
- file sizes before and after compression,
- list of files that import the selected file,
- and even inspect parts of the source code included in the bundle.
Sonda also warns you about duplicate dependencies so you can quickly identify and fix the root of the problem.
Ideally, you should not only do a one-time inspection, but also set up continuous monitoring as part of your CI pipeline. Tracking changes over time, especially in large projects, can help you prevent small changes from snowballing into significant bloat.
Remove or optimize external libraries
The fastest code is the code you don’t ship. Whenever possible:
- Remove libraries that can be replaced by native APIs.
- Swap out heavyweight libraries for smaller alternatives.
- Use optimized installation paths if the library supports them.
Use code splitting
If you can’t remove some part of your application, try code splitting. Code splitting allows you to defer loading certain parts of your app until they’re needed, improving initial load times.
Use dynamic import() to load modules on demand. For example, if a particular feature isn’t needed until the user clicks a button, defer loading it until that moment.
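A minimal sketch of that pattern (the module and element names are made up):

```js
// The heavy feature is only fetched when the user actually asks for it
const button = document.querySelector('#show-stats');

button.addEventListener('click', async () => {
  // The bundler splits './stats-chart.js' and its dependencies into a
  // separate chunk that is downloaded on demand
  const { renderStatsChart } = await import('./stats-chart.js');
  renderStatsChart(document.querySelector('#stats'));
});
```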
Modern frontend frameworks support lazy loading out of the box, making it easier than ever to integrate code splitting into your workflow.
Follow the best practices
This is general advice, but it's worth repeating. Follow best practices like:
- Use the latest target you can so that the code is not unnecessarily transpiled or polyfilled. Some polyfills can add a lot of code that is not needed at all in modern browsers, but many environments still add them by default. You can also set up a reminder to update the target every year (see the sketch after this list).
- Regularly update dependencies, as newer versions are often smaller or faster. This can also save you from dealing with security vulnerabilities or duplicate dependencies.
- Evaluate each dependency you already have or are considering adding. If you can't justify the size, don't add it or search for a smaller alternative.
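As an example of the first point, this is roughly what setting a modern target looks like in Vite (esbuild has an equivalent target option; treat the value as an example, not a recommendation for your browser support policy):

```js
// vite.config.js — compile for modern browsers only
export default {
  build: {
    // Avoid transpiling syntax that current evergreen browsers already support
    target: 'es2022',
  },
};
```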
Join the Ecosystem Performance (e18e) community
If you are interested in making the web faster or simply learning new things, you should consider joining the Ecosystem Performance community. We focus on three main areas:
- Clean up — Improving packages by removing redundant dependencies or replacing them with modern alternatives.
- Speed up — Improving performance of widely used packages.
- Level up — Building modern alternatives to outdated packages.
Conclusion
I hope this article illustrates that you can ship the same features with less code. Bundle sizes can grow out of control if unmanaged, but even small changes can significantly improve performance.
Start today: analyze your bundles, test a new tool, or replace a heavyweight library. The impact will surprise you.
I hope you enjoyed this article. If you have any questions or comments, or if you'd like to learn more about a specific topic, please let me know in the comments below. If you want to learn more about JavaScript performance, bundling, and tree shaking, you can follow me here or on Bluesky and join the e18e community.
Top comments (3)
Thanks for pulling all this information together and sharing your insights!
It's been my world for the last 6 months reducing the bundle size of AG Grid!
Greetings from CKEditor! Shipping JavaScript libraries is much fun, isn't it, Filip? :D
There are some notable issues that need to be pointed out for this comparison:
You are using Parcel's build time from a cached run. On my machine, a fresh build with Parcel is about 20x slower than a cache hit.
Your bundle size comparison is not just comparing bundlers, but a combination of bundlers + minifiers. This is quite difficult to make apples-to-apples because some bundlers come with built-in minification, some require an external minifier via plugins, and most of them allow switching between minifiers. The choice of which minifier to use is full of speed vs. quality trade-offs in itself. For example, Vite uses esbuild as the minifier by default, but can use terser or swc instead. In your benchmark, you are using esbuild as the minifier in the rollup config, but terser in the webpack config. Terser is significantly slower than esbuild, but yields a better minification ratio (see github.com/privatenumber/minificat...). If you want to compare only bundlers, then you should do it without minification, but that will not reflect production cases; if you want to compare bundling + minification, then you should at least use the same minifier for bundlers that don't have built-in minification.
Why Rolldown's bundle size looks big in this case: while Rolldown has built-in minification (via the Oxc minifier), it is still WIP. It only implements very rudimentary compression, is there mainly for integration-testing purposes, and has a LOT of room for improvement in the next few months. We should probably emit a warning when users use Rolldown's built-in minify before it is ready. For now, we recommend using a more mature minifier via a plugin. If you use swc as the minifier via a plugin with Rolldown, you should see a similar bundle size compared to Rollup.
esbuild has a unique advantage in this benchmark in that its built-in minification adds very little overhead compared to its non-minified build, because it parallelizes many minify-related operations in its per-module transform phase, and also performs the final minification on the same AST. This architectural choice results in better performance, but limits the amount of cross-module optimizations that can be performed. In Rolldown / Oxc, we have opted to perform minification on a separate AST on the bundled chunks in order to be able to add more cross-module-analysis based optimizations down the road. This sacrifices some performance when the minifier is enabled, but will result in smaller bundles in the long run.
I'm not sure how you are measuring the numbers. Usually, numbers reported by the tools themselves omit some necessary overhead (e.g. launching the CLI, parsing the config, etc.), whereas end-to-end time via npm scripts includes the unnecessary overhead of npm or the script runner. I think a more accurate way to report numbers is using hyperfine to run the builds via Node 22's node --run, so that we measure the end-to-end time without the npm run overhead.