Accelerating Nuxt 3 Applications: 3 Simple Steps to Boost Performance

Hey everyone! I’m Mykhailo Kukharskyi, a Frontend Engineer at Futurra Group, a Ukrainian IT-company. Most of our web apps are built on Nuxt. Why? Because it’s packed with powerful, developer-friendly tools that speed up the process and deliver an outstanding user experience right out of the box.

And here’s the catch: no matter how efficient and convenient Nuxt 3 is, achieving excellent page speed scores – those that represent what users actually experience – can still be a challenge. That’s mainly because there isn’t much in-depth guidance out there on how to fine-tune Nuxt apps for top performance.

In this article, I’ll share practical solutions that have helped us dramatically improve the performance of our projects.

Why Performance Metrics Matter and How to Track Them

Performance isn’t just about fast loading times. It’s also about how quickly your users can start interacting with your app and completing key actions that directly impact your business.

For example, Pinterest found that improving their app’s performance metrics reduced wait times by 40%, boosted search traffic by 15%, and increased sign-ups by another 15%.

Your target audience plays a big role. In regions with fast, affordable internet, users might not notice performance issues even if your app is packed with content and heavy on HTTP requests. But if your audience is in areas like Asia, Latin America, or Africa, where internet connectivity is less reliable, optimizing performance becomes critical.

Here’s a quick breakdown of the six key metrics that define web performance:

  1. TTFB (Time to First Byte): How long it takes to connect to the web server and receive the first byte of data.

  2. LCP (Largest Contentful Paint): How quickly the largest visible element (like an image or block of text) is rendered.

  3. CLS (Cumulative Layout Shift): In my opinion, one of the most intriguing metrics. It measures how much a site's content unexpectedly shifts, i.e., how individual elements change position during the initial load.

  4. FCP (First Contentful Paint): The time it takes to display the first piece of content on the page.

  5. INP (Interaction to Next Paint): A measure of how responsive your app is to user interactions.

  6. FID (First Input Delay): The time it takes for your app to process the user’s first interaction. The difference between INP and FID lies in what they measure: INP tracks the overall delay between user interactions throughout the entire session, while FID focuses specifically on the delay before the first interaction.
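Once you start collecting these metrics, for example with Google's web-vitals library in the browser, it helps to classify raw values against the thresholds Google publishes. The rate helper below is a hypothetical sketch, but the threshold numbers (LCP: 2.5 s / 4 s, CLS: 0.1 / 0.25, INP: 200 ms / 500 ms) are the official "good" and "poor" boundaries:

```typescript
type Rating = "good" | "needs-improvement" | "poor";

// [upper bound for "good", upper bound for "needs improvement"] per metric,
// matching the thresholds Google publishes for Core Web Vitals
const THRESHOLDS = {
  LCP: [2500, 4000], // milliseconds
  CLS: [0.1, 0.25],  // unitless layout-shift score
  INP: [200, 500],   // milliseconds
} as const;

function rate(metric: keyof typeof THRESHOLDS, value: number): Rating {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}

console.log(rate("LCP", 1800)); // "good"
console.log(rate("CLS", 0.3));  // "poor"
```

In a real app, the value argument would come from callbacks such as onLCP or onINP in the web-vitals package rather than hard-coded numbers.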

The go-to tool for performance analysis is Google PageSpeed Insights, which provides detailed reports and flags issues that need attention. However, it’s not the only, nor the best, tool for this purpose, as there are ways to "cheat" it. For instance, some modules or publicly available scripts can defer hydration processes on the site, artificially improving first-load metrics.

That said, it does a great job of highlighting problem areas. Additionally, if your site’s traffic is substantial, it can provide statistics based on real user experiences rather than algorithm-measured scores.

A much better practice is to focus on real user metrics – Core Web Vitals – which are also available through Google PageSpeed Insights, provided your site’s traffic is high enough. Core Web Vitals specifically analyze real user data for LCP, CLS, and INP (which replaced FID as a Core Web Vital in 2024).

Another tool I’ve found particularly useful for measuring these metrics is DebugBear. It allows you to specify which pages to monitor, how often, and in which regions, while also breaking down results into mobile and desktop categories.

How to Improve Performance

The recommendations for improving TTFB, LCP, CLS, FCP, INP, and FID are outlined in the Google PageSpeed reports. However, these guidelines often lack detailed instructions, as does the official Nuxt 3 documentation. As a result, developers frequently need to dive into the documentation of related technologies used by Nuxt "under the hood," such as Vite and Nitro. Here, I’ll share what I discovered.

Universal Rendering

Let’s start with one of Nuxt’s greatest advantages: Universal Rendering. When implemented correctly, it allows you to achieve top-tier performance metrics. Universal Rendering combines Server-Side Rendering (SSR) and Static Site Generation (SSG). What’s the difference, and why is Universal Rendering so beneficial?

  • SSR: This approach renders the full HTML of your web application on the server for each request. While this may result in a slightly longer initial response, users immediately see complete content without waiting for images, text, or functionality to load in the browser. Additionally, users always see the most up-to-date content, as dynamic data is loaded server-side.

  • SSG: In contrast, SSG preloads all content before the user accesses the application, resulting in near-instant page loads. However, this option isn’t ideal for applications requiring user-specific content or for those with a significant amount of dynamic data.

Universal Rendering in Nuxt 3 combines the best of both approaches: instant static page loads with an excellent user experience (UX) for dynamic data during the initial app interaction.
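In Nuxt 3, this mix is configured per route through routeRules in nuxt.config.ts, so each part of the app can use the strategy that fits it best. A sketch with placeholder paths:

```ts
export default defineNuxtConfig({
  routeRules: {
    // Landing page: prerendered at build time (SSG)
    "/": { prerender: true },
    // Blog: rendered on demand, then cached and revalidated hourly
    "/blog/**": { swr: 3600 },
    // Dashboard: client-side rendering only, no SSR
    "/dashboard/**": { ssr: false },
  },
});
```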

Now, let’s move on to optimizing your project or laying the foundation for strong performance in its early stages.

Step 1: Analyze Bundle Size

If you’re optimizing an existing application, start by analyzing the final bundle size. Fortunately, Nuxt offers an effective tool for this: vite-bundle-visualizer. Using the command npx nuxi analyze, you can visualize both the server-side Nitro bundle and the final client-side bundle:

[Screenshot: bundle visualization produced by nuxi analyze]

This visualization can help you identify parts of the codebase that need optimization, splitting into smaller chunks, or even replacing with alternative solutions if third-party libraries dominate the bundle. Larger blocks (especially those containing many smaller sub-blocks) are the first areas to prioritize.

Key factors affecting bundle size include:

  • Importing entire modules into the files instead of just the required parts:

import library from "someLibrary"

Can be replaced with:

import { element } from "someLibrary"

  • Overly large codebases within components or pages, which suggests breaking them into smaller pieces.

Big components can also be loaded asynchronously. Imagine you need to display a popup window under certain conditions. This means there’s no need to load the component during the initial load.

For example:

<script setup lang="ts">
const SomeComponent = defineAsyncComponent(() => import("@/components/SomeComponent.vue"));

const isComponentShown = ref<boolean>(false);

const showComponent = () => {
    isComponentShown.value = true;
};
</script>

<template>
    <div>
        <span>Some content</span>
        <SomeComponent v-if="isComponentShown" />
    </div>
</template>


By the way, Nuxt offers a convenient shorthand for the defineAsyncComponent function. You can simply add the ‘Lazy’ prefix to your component, and it will be loaded asynchronously. However, you will still need to use the defineAsyncComponent function sometimes. For example, when you need to load different components based on certain conditions:

<script setup lang="ts">
const dynamicComponent = computed(() => {
  if (condition_1) {
    return defineAsyncComponent(() => import("@/components/Component1.vue"));
  }

  if (condition_2) {
    return defineAsyncComponent(() => import("@/components/Component2.vue"));
  }

  // Render nothing if no condition matches
  return null;
});
</script>

<template>
  <component :is="dynamicComponent" />
</template>

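For the simple popup case above, the Lazy prefix is usually all you need. A sketch, assuming a hypothetical components/ModalWindow.vue in your project:

```vue
<script setup lang="ts">
// The component is never imported here: Nuxt resolves the "Lazy" prefix
// and code-splits ModalWindow.vue into its own chunk, fetched only when
// the v-if condition first becomes true.
const isModalShown = ref(false);
</script>

<template>
    <div>
        <button @click="isModalShown = true">Open modal</button>
        <LazyModalWindow v-if="isModalShown" />
    </div>
</template>
```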

Step 2: Optimize Modules and Libraries

Below are some modules and libraries that can significantly improve your project’s performance.

NuxtImage

Optimizing static content, such as images, is a crucial step in improving your app’s performance. Images are often both a major asset and a bottleneck in any project.

For instance, consider a homepage filled with images for posts, user avatars, etc. While the original image quality may be high, the final displayed size might only be 48 pixels. Loading these images in their original resolutions and formats, only to resize them later, is inefficient. Instead, it’s better to serve images in the exact size needed.

The same principle applies to formats. Modern formats like .webp may not work perfectly everywhere but are suitable when quality isn’t the top priority.

The NuxtImage module simplifies and speeds up these tasks:

  • It offers the <NuxtImg> and <NuxtPicture> components as replacements for the standard <img> and <picture> HTML tags.
  • The format attribute allows easy control over image formats.
  • The width and height attributes let you specify the displayed size of content.
  • The sizes attribute enables adaptive size selection, streamlining mobile-first development.
  • If not all of your content is stored directly within the project bundle, you can easily use the module with other content providers, as NuxtImg supports a wide range of them.

These are not all the configuration options available for the module – just the most accessible ones that are likely to be used most frequently.

Let’s start with adding NuxtImage to the project:

npx nuxi@latest module add image

Then, here’s what has to appear in your nuxt.config.ts file:

export default defineNuxtConfig({
  modules: [
    "@nuxt/image",
  ]
});

Consider that NuxtImg will only work properly with the contents of the /public folder.

Its main difference from /assets is that Vite (or Webpack) does not process or minify its contents, and everything in it is publicly accessible on the server as-is. Obviously, this approach (where images are not minified during the build) is not always suitable, especially if your site serves a large amount of static content.

Note that if you point NuxtImg at a path inside the /assets folder, it will only work correctly in the local environment; after a production build, the image will simply fail to resolve.

Therefore, NuxtImg is not a one-size-fits-all solution for every image. I personally recommend it for the cases where SSR and dynamic content matter most.

For instance, suppose you have a large image on your homepage, which effectively serves as the LCP (Largest Contentful Paint) of your site. This image plays a vital role during the user’s first load. In this case, prioritizing server-side rendering (SSR) for this image to improve the LCP metric is more important than simply minifying it during the build.
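Applied to that hero image, a NuxtImg usage might look like the sketch below (the src path, dimensions, and sizes values are placeholders). The preload attribute tells Nuxt to emit a preload hint so the browser can start fetching the LCP element as early as possible:

```vue
<template>
    <!-- Hypothetical hero image: resized, served as WebP, and preloaded -->
    <NuxtImg
        src="/img/hero.jpg"
        format="webp"
        width="1280"
        height="640"
        sizes="100vw md:1280px"
        preload
        alt="Homepage hero"
    />
</template>
```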

How does NuxtImage work?

  • For lazy loading, the module uses the Intersection Observer API, which ensures that content is only loaded when it approaches the user’s viewport. This is particularly beneficial for large, content-heavy pages.
  • Additionally, NuxtImg automatically adjusts based on the Device Pixel Ratio (DPR) when loading images. This prevents unnecessary loading of oversized images, ensuring that, for example, mobile users aren’t served the same image sizes as desktop users.
  • At the SSR stage, NuxtImg checks whether the content is optimized and ready for display. It assigns necessary attributes like srcset, enabling faster page rendering while maintaining SEO performance.

VSharp

VSharp is a module for Vite, so installing it if you’re still using Webpack is pointless. VSharp allows for more effective minification and compression of images in your project.

It is primarily based on the Node.js module Sharp, which automatically minimizes images to more user-friendly sizes and formats.

Diving a bit deeper, VSharp utilizes libvips – an incredibly fast and time-tested library that remains actively maintained despite being originally developed in 1989. This technology is widely used for image optimization not only in frontend development but in many other fields as well.

Let’s install VSharp and add it to nuxt.config.ts:

npm install vite-plugin-vsharp --save-dev

import vsharp from "vite-plugin-vsharp";

export default defineNuxtConfig({
  modules: [
    "@nuxt/image",
  ],
  vite: {
      plugins: [vsharp()],
  },
});

This plugin also allows for additional compression settings for images (you can read more about them in the plugin’s documentation). However, in my opinion, the default configuration is more than sufficient.

ViteSVGLoader

While this plugin isn’t a game changer in terms of optimization, it still provides valuable support. We’ll use it to load SVGs as components, with SSR support.

This module automatically optimizes SVG content using SVGO (SVG Optimizer) – a Node.js library that converts SVGs into abstract syntax trees and manipulates their content by removing unnecessary elements, such as empty spaces, indentation, and redundant attributes.

As a result, your SVG files will be smaller, which helps reduce the bundle size and improve the page’s initial load time.

Let’s start with installation:

npm install vite-svg-loader --save-dev

import vsharp from "vite-plugin-vsharp";
import ViteSvgLoader from "vite-svg-loader";

export default defineNuxtConfig({
  modules: [
    "@nuxt/image",
  ],
  vite: {
      plugins: [vsharp(), ViteSvgLoader()],
  },
});

Instead of using standard <img> tags to load SVGs, we can import them as components:

<script setup lang="ts">
import SpinnerLoader from "@/assets/img/icons/spinner_loader.svg?component";
</script>

<template>
    <SpinnerLoader />
</template>

These components will support SSR and be optimized by the plugin itself. However, this approach cannot be combined with NuxtImg, so it’s important to separate their use cases:

  • NuxtImg – for large, heavy, and important images.
  • ViteSVGLoader – for SVGs.
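If you need to tune how aggressively SVGs are optimized, vite-svg-loader accepts an svgoConfig option that is passed straight to SVGO. A sketch (the multipass flag is just one example setting):

```ts
import ViteSvgLoader from "vite-svg-loader";

export default defineNuxtConfig({
  vite: {
    plugins: [
      ViteSvgLoader({
        svgoConfig: {
          multipass: true, // re-run SVGO passes until the output stops shrinking
        },
      }),
    ],
  },
});
```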

Step 3: Nuxt Configuration in nuxt.config.ts

Let’s dive into project configuration in nuxt.config.ts, which also played a critical role in improving the performance of our applications.

The relevant configuration can be divided into three equally important parts: Nuxt itself, Vite, and Nitro.

Through trial and error, as well as in-depth research, I’ve found the following nuxt.config setup to be highly effective:

import vsharp from "vite-plugin-vsharp";
import ViteSvgLoader from "vite-svg-loader";

export default defineNuxtConfig({
  modules: [
    "@nuxt/image",
  ],
  nitro: {
    minify: true,
    compressPublicAssets: {
      brotli: true,
    },
  },
  vite: {
    ssr: {
      noExternal: true,
    },
    json: {
      stringify: true,
    },
    build: {
      cssMinify: "lightningcss",
      ssrManifest: true,
      minify: "terser",
    },
    plugins: [vsharp(), ViteSvgLoader()],
  },
});

NITRO:

minify: true – Default value: false. This option should be enabled to apply additional minification to your bundle.

compressPublicAssets: { brotli: true } – Default value: { gzip: false, brotli: false }. Compresses assets in the public folder using gzip or Brotli. When enabled, the highest compression level is applied.

VITE:

ssr.noExternal: true – A configuration that prevents dependencies from being externalized during the SSR build. This ensures all project dependencies are included in the build, which improves server response times and initial load performance.

Important: This option should be disabled during development; otherwise, your project won’t run locally and a server error will appear.

json.stringify: true – Default value false – All imported JSON files in the project will be transformed using export default JSON.parse("..."), which significantly enhances performance, especially in projects with large JSON datasets.

However, this setting may cause issues when reading deeply nested JSON files.
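To see what this option actually does, compare the two forms a JSON import can compile to. The data below is a made-up example; the point of the transform is that JS engines parse a JSON string faster than equivalent object-literal syntax:

```typescript
// Without json.stringify, an imported JSON module compiles to a literal:
const asLiteral = { locale: "en", items: [1, 2, 3] };

// With json.stringify: true, it compiles to roughly this instead:
const asParsed = JSON.parse('{"locale":"en","items":[1,2,3]}');

// Both forms produce identical data:
console.log(JSON.stringify(asParsed) === JSON.stringify(asLiteral)); // true
```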

build.cssMinify: "lightningcss" – Lets you choose the tool used to minify CSS files. The default is esbuild, which isn’t the strongest option; lightningcss achieves better minification but requires installing an additional package:

npm add -D lightningcss

build.minify: "terser" – As mentioned earlier, the default tool, esbuild, does not minify JS files as effectively as other options. Switching to terser provides better results for JS minification; note that Vite does not bundle terser, so it also needs to be installed (npm add -D terser).

build.ssrManifest: true – Default value – false. During the build process, this option generates an additional SSR manifest, specifying instructions for preloading content.

Results

All of the above insights were the result of extensive testing, research, and hands-on experience, whether developing projects from scratch or optimizing existing ones.

While there are certainly other improvements and tools that can enhance performance, these are the approaches I’ve tested and am confident in.

To illustrate the impact, here’s how our PageSpeed metrics looked before implementing these changes:

[Screenshot: PageSpeed metrics before optimization]

and after:

[Screenshot: PageSpeed metrics after optimization]

From the graphs above, it’s clear that all performance metrics before optimization were very low – not just according to Google PageSpeed algorithms, but also based on real user statistics from Core Web Vitals. After optimization, all key metrics moved into the green zone, significantly improving user satisfaction and elevating the overall UX of our application.

Of course, there are still areas for improvement, such as optimizing FCP and TTFB for mobile. However, these are less critical, as the primary goals have been achieved: passing the Core Web Vitals assessment, achieving high Google PageSpeed scores, and ensuring user satisfaction.

Below, you can see the metrics for two of the most visited, content-heavy, and functional pages of the project, which was developed from scratch following these recommendations:

[Screenshot: PageSpeed metrics for two key pages of the project]

(Mobile is on the left side, desktop is on the right)

These metrics show that when highly optimized solutions are built in during development, improving Google PageSpeed scores doesn’t linger in the backlog, and your app’s loading speed never becomes a source of user frustration.

Optimized web applications have a much better chance of meeting user needs and ensuring successful engagement with your product. This, in turn, leads to increased revenue. That’s why I strongly recommend focusing on site/application performance – not just at the initial stages, but also when optimizing existing projects.

I hope the solutions I’ve shared will be helpful to you as well. I’d love to hear about your performance optimization experiences in the comments!
