Itay Ben Ami
How I Leveraged Source Maps to Improve My Team’s Debugging

Let's face it, the modern web is full of magic.
Nowadays, when I use Vite, spinning up a new project is effortless. All it takes is a little Hocus Pocus. Run:

pnpm create vite my-app --template react-ts

and Poof! A fully configured React + TypeScript app appears.

Want to run it locally? Just type pnpm run dev and voila: it works, with all our JSX and TS code running seamlessly in the browser. But as developers, we can't always rely on abstractions. Understanding what's happening under the hood helps us fine-tune our apps and grow in our field.

Today, we're going to pull back the curtain and explore Vite's build process a bit, learn about Source Maps and hear how I used them to improve my team’s logs. By the end, you’ll have a solid grasp of how to leverage Source Maps for debugging and logging in your own projects. So let's get started! 🚀

A Bit About Vite

In order for Vite to pull browser-friendly JavaScript out of a hat, it needs to transform all our JSX and TypeScript. It achieves this using esbuild, a blazing-fast JavaScript transpiler written in Go.

In production, Vite takes it a step further by leveraging Rollup for advanced optimizations:

  • Tree shaking → Removes unused code to reduce bundle size.
  • Code splitting → Breaks the output into smaller, optimized chunks, so browsers only load what’s needed (kind of like the "Saw a Lady in Half" trick 🪄).
  • Minification → Vite uses esbuild here too; in production it strips unnecessary characters, making files smaller and faster to load.
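As a taste of how code splitting can be steered, Vite exposes Rollup's output options directly in its config. A minimal sketch — the react/react-dom "vendor" chunk shown here is purely illustrative:

```typescript
// vite.config.ts — a sketch; the manualChunks split is illustrative
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  build: {
    rollupOptions: {
      output: {
        // Put React and ReactDOM into their own long-cached "vendor" chunk,
        // so app-code changes don't invalidate the framework bundle
        manualChunks: {
          vendor: ['react', 'react-dom'],
        },
      },
    },
  },
})
```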

Vite, Esbuild and Rollup Simpsons Meme

So, What Are Source Maps?

If Vite is like a magician, then Source Maps are the Magician's Handbook. Why? Because they reveal how the trick was done, or in our case how our compiled code maps back to our original source code.

Now let's see Source Maps in action.

First, let’s generate and preview a production build of our Vite app:

pnpm build && pnpm preview

If we open the Sources tab in our browser's DevTools, we'll see the compiled JavaScript output Vite generated from our code. However, this code isn't very human-readable:
Vite build output

To enable source maps, let's head to our vite.config.ts file:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  build: {
    sourcemap: true
  }
})

Now, let’s re-run the build and preview commands:

pnpm build && pnpm preview

Heading back to the Sources tab, we can now see our original source code being displayed!

Vite build output with Source Maps

So what happened here?
Vite generated a .map file for every bundled JavaScript file in the build. Modern browsers automatically detect these Source Map files when they are present and use them to reconstruct the original source code in DevTools.
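The link between a bundle and its map is a trailing //# sourceMappingURL= comment that Vite appends to each compiled file. Browsers read it automatically; as a minimal sketch of the mechanism, here's how a tool could locate the map (the file name in the example is made up):

```typescript
// Minimal sketch: find the source map referenced by a compiled JS file.
// Browsers do this automatically; this just illustrates the mechanism.
function findSourceMapUrl(compiledJs: string): string | null {
  const match = compiledJs.match(/\/\/#\s*sourceMappingURL=(\S+)\s*$/);
  return match ? match[1] : null;
}

const bundle = 'console.log("hi");\n//# sourceMappingURL=index-B3xK.js.map';
findSourceMapUrl(bundle); // → "index-B3xK.js.map"
```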

Source Map files in dist

The Problem We Encountered

It all started with our error logs. Our frontend sends all browser errors to our reporting service, which in turn sends these logs to our log aggregation platform.

The problem? Since our code is bundled and minified in production, the stack traces we received were almost useless.

"How can I recreate this error?"
"Was this caused because of our code or an external dependency?"

These questions were not always easy to answer.

While shipping Source Maps to the browser is handy for local development, it isn't always ideal for production. In our case, we didn't want our client-side source code to be publicly accessible to anyone using our site.
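Worth knowing here: Vite's build.sourcemap option also accepts 'hidden', which emits the .map files for internal tooling but omits the sourceMappingURL comment, so browsers never fetch them. A sketch of that configuration:

```typescript
// vite.config.ts — generate maps for internal tooling without exposing them
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  build: {
    // 'hidden' emits .map files but omits the sourceMappingURL comment,
    // so the browser never requests them
    sourcemap: 'hidden',
  },
})
```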

Our Solution

Because shipping Source Maps to the browser wasn't an option, we decided on a workaround: parsing the Source Maps in our reporting service.

In order to achieve this, we first needed to get the Source Maps to the reporting service every time our frontend was built. Thankfully, since we use a monorepo, this part was relatively simple.

Our build process first builds the UI, then runs a Docker build for every service separately. This meant that we could grab the Source Maps after the UI build and copy them over to our reporting service:

pnpm build && cp -r services/frontend/dist services/reporting-service/src/source-maps

Now comes the fun part: writing code that takes our stack traces and parses them using our Source Maps.

First, we parse our stack trace and iterate through each line of the trace:

import StackTrace from 'stacktrace-js';

// message and stack arrive from the browser's error report
const error = new Error(message);
error.stack = stack;

const stackFrames = await StackTrace.fromError(error);

for (const frame of stackFrames) {
    // parsing logic
}

Then, for each line we extract the original function name, file name, and line and column numbers:

import fs from 'node:fs/promises';
import { SourceMapConsumer } from 'source-map-js';

const file = JSON.parse(await fs.readFile(filePath, 'utf-8'));
const consumer = new SourceMapConsumer(file);

// Map the bundled position back to the original source position
const mappedPosition = consumer.originalPositionFor({
  line: frame.lineNumber as number,
  column: frame.columnNumber as number,
});

const parsedFrame = ` at ${mappedPosition.name ?? ''} (${mappedPosition.source}:${mappedPosition.line}:${mappedPosition.column})`;

// Prepend the error message to the first frame only
parsedFrames.push(!parsedFrames.length ? `${message} ${parsedFrame}` : parsedFrame);

We're using:

  • stacktrace-js → To parse and iterate through each line of the stack trace.
  • source-map-js → To resolve stack traces back to the original function, file, and positions.
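For intuition, extracting a frame from a raw stack line boils down to pattern matching. A simplified sketch of the kind of work stacktrace-js does for us — not its actual implementation, and it only covers V8-style "at fn (file:line:col)" lines:

```typescript
// Simplified sketch of stack-frame extraction (stacktrace-js handles many
// more formats; this only covers V8-style "at fn (file:line:col)" lines).
interface Frame {
  functionName: string;
  fileName: string;
  lineNumber: number;
  columnNumber: number;
}

function parseFrame(stackLine: string): Frame | null {
  const match = stackLine.match(/^\s*at (\S+) \((.+):(\d+):(\d+)\)$/);
  if (!match) return null;
  return {
    functionName: match[1],
    fileName: match[2],
    lineNumber: Number(match[3]),
    columnNumber: Number(match[4]),
  };
}

parseFrame('    at handleClick (/assets/index-B3xK.js:1:4821)');
// → { functionName: 'handleClick', fileName: '/assets/index-B3xK.js', lineNumber: 1, columnNumber: 4821 }
```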

And it worked wonderfully! The error logs were being parsed correctly, providing useful insights into where the error was originally thrown.

That is... until our reporting service began crashing with out-of-memory errors:

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

Caching to the Rescue

As it turns out, Source Map files for production codebases can get pretty large, and parsing them is memory-intensive.

While we could have just bumped up the memory limit for the reporting service, we still wanted to reduce the performance risk of repeatedly parsing Source Maps.

In order to do that, we added three things:

  1. Preloading Source Map Files → On startup, the reporting service reads and loads Source Map files into memory, so they don't need to be re-read from disk each time.
  2. Caching Source Map Results → Since looking up an error in a Source Map can get pretty heavy, results are cached in memory after the first lookup, avoiding redundant work.
  3. Limiting the Stack Trace Depth → For most errors, we found that iterating through the first five lines was sufficient to clearly understand the issue.

With these improvements in mind, let's take a look at our revised code:

Preloading the Source Map files:

import fs from 'node:fs/promises';
import path from 'node:path';
import { SourceMapConsumer } from 'source-map-js';

const sourceMapConsumers: Record<string, SourceMapConsumer> = {};

const loadSourceMaps = async () => {
  const sourceMapsDir = await fs.opendir(sourceMapsPath);
  for await (const sourceMapFile of sourceMapsDir) {
    const fileName = sourceMapFile.name;
    if (fileName.endsWith('.map')) {
      const file = await fs.readFile(path.join(sourceMapsPath, fileName), 'utf-8');
      sourceMapConsumers[fileName] = new SourceMapConsumer(JSON.parse(file));
    }
  }
};

loadSourceMaps();

Limiting stack trace depth and caching results:

// Only the first five frames are parsed — enough context for most errors
for (const frame of stackFrames.slice(0, 5)) {
  const mapFileName = `${path.basename(frame.fileName ?? '')}.map`;
  const consumer = sourceMapConsumers[mapFileName];

  if (consumer) {
    // Key each cached lookup by file, line and column so results never collide
    const cacheKey = `${mapFileName}:${frame.lineNumber}:${frame.columnNumber}`;
    let originalPosition = sourceMapResults[cacheKey];

    if (!originalPosition) {
      originalPosition = consumer.originalPositionFor({
        line: frame.lineNumber as number,
        column: frame.columnNumber as number,
      });
      sourceMapResults[cacheKey] = originalPosition;
    }

    const parsedFrame = ` at ${originalPosition.name ?? ''} (${originalPosition.source}:${originalPosition.line}:${originalPosition.column})`;
    parsedFrames.push(!parsedFrames.length ? `${message} ${parsedFrame}` : parsedFrame);
  }
}

Notice how I added a unique key to every Source Map result, similar to how you might create a Redis key.
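As a minimal sketch of this kind of keyed lookup cache — the names (cacheKey, lookup, OriginalPosition) are illustrative, not from any library:

```typescript
// Minimal sketch of a keyed lookup cache for resolved positions.
// All names here are illustrative, not from source-map-js.
interface OriginalPosition {
  source: string | null;
  line: number | null;
  column: number | null;
  name: string | null;
}

const cache = new Map<string, OriginalPosition>();

// One key per (file, line, column) — like a namespaced Redis key
const cacheKey = (file: string, line: number, column: number) =>
  `${file}:${line}:${column}`;

function lookup(
  file: string,
  line: number,
  column: number,
  resolve: () => OriginalPosition,
): OriginalPosition {
  const key = cacheKey(file, line, column);
  const hit = cache.get(key);
  if (hit) return hit;
  const position = resolve(); // the expensive originalPositionFor call
  cache.set(key, position);
  return position;
}
```

The second lookup for the same frame returns the cached object without touching the Source Map again.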

A further improvement could be to give the in-memory cache a timeout (TTL) so unused entries are freed. However, we decided against it: our current memory usage is stable, and evicting entries would reintroduce recalculations of cached values.

Overall, this implementation worked very well. Thanks to it, we now get much more value from our error logs. Parsing our stack traces allowed us to filter out dependency-related errors, reducing the noise of our error logs.

It's worth mentioning that if you're using observability services like Datadog or Sentry, you can offload stack trace parsing by uploading your Source Maps directly to them.

Roll Curtains 🎩

In this post, we uncovered the mystery of Vite and its build process before exploring Source Maps and building our own Source Maps parser.

I hope this post inspired you to explore your own codebase and dive deeper into some of the abstractions you rely on.
By deepening our understanding of the tools we use, we become better developers, enabling us to optimize performance and make smarter technical decisions.

Have you played around with Source Maps or approached debugging differently in your projects? Let me know, I’d love to hear about it in the comments!

You can find the complete code for this article in this GitHub repository. Happy coding!

Top comments (6)

Shaked Blushtein

Cool post! Will definitely try this in my own projects ✨

Yotam Dekel

Awesome, we're using datadog in my team, but so cool to finally understand how that works

Yehonatan Paripsky

This was a great read! The caching approach for Source Maps is such a smart move.

Nir Winkler

Nice!

JOnathan_GOsher_AT

That out of memory error moment... I felt that. But you came up with a great solution!

Riya

Great Insight