We embarked on a website redesign using Nuxt that spanned two months of hard work before launch. After a couple of weeks with our new website live, it was time to run a user experience and Core Web Vitals audit. When the reports came in, we had two tasks:
- Improve our web performance score in Lighthouse, specifically First Contentful Paint (FCP) and Total Blocking Time when navigating between posts.
- Decrease the page load time when navigating between posts.
After digging deep into our CWV reports and our codebase, I nailed down the main offenders behind our poor performance scores and long load times when navigating between pages:
- GraphQL queries for our flagship pages: depending on the information we needed to fetch, GraphQL would take longer to respond.
- Too many requests.
- Some iframes were not being correctly deferred.
With these findings, I had 3 tactics that I could implement:
- Accelerating Resource Loading with Preconnection
- Asynchronous creation and loading of Embeds
- Migrating the main post request from GraphQL to REST
Before going further, I should clarify that best practices such as image optimization and lazy loading were already in place; the tactics I explain below are the ones we weren't initially implementing.
1. Accelerating Resource Loading with Preconnect
We use resources from many external domains, such as Google Fonts and our content backend. For a first-time visitor, the browser often has to establish brand-new connections (DNS lookup, TCP handshake, TLS negotiation) to domains that are not part of our main page's initial loading process. That connection time can mean a longer effective response time from the server.
We tackled this problem by establishing network connections early to our font servers and our backend in our layout page.
<template>
  <Head>
    <!-- A real font preload also needs an href (and crossorigin), e.g.:
         <link rel="preload" as="font" type="font/woff2" href="..." crossorigin /> -->
    <link rel="preconnect" href="https://fonts.googleapis.com" />
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
    <link rel="preconnect" :href="hostname" crossorigin />
  </Head>
</template>
This might mean less wait/queue time by the time we need to download those resources.
Pre-connection for font downloading might not be the best example, but pre-connecting helps avoid a performance penalty from Google: font loading can be render-blocking and can induce cumulative layout shift (CLS).
2. Asynchronous component creation and lazy loading of Embeds
In some cases we have embeds that are, most of the time, above the fold. This means they execute too close to our initial document load (the DOMContentLoaded event), and we cannot defer them organically; they will start loading regardless of our intent to lazy load them.
This hurts our performance: as soon as an embed begins to load, it starts downloading all of its dependencies.
In our case, on some pages, we have an embedded real-time stocks scanner that is its own separate SPA (Single Page App), and most of the posts load a text-to-speech audio player.
The result is that, so close to our initial load, our page fires off requests for many resources (almost the equivalent of downloading a whole new page) that are not critical for our FCP. This added a variable amount of time, from 100ms to more than 1 second, depending on how loaded the other server/domain was at the moment of our request. All of this data could also be downloaded and executed alongside other heavy processes on our website, which could slow down a user's computer.
Google might penalize you with warnings like "Reduce JavaScript execution time" or "Avoid enormous network payloads".
To work around this issue I took two approaches:
Timed Lazy Loading of the Iframes
Move the iframe into its own component and make its rendering conditional in the parent component.
<template>
  <!-- Conditionally render the iframe component based on 'showIframe' -->
  <LazyTopGainerIframe v-if="showIframe" />
</template>

<script setup>
const showIframe = ref(false);

onMounted(() => {
  // Check that we are on the client side before setting the timeout
  if (process.client) {
    // Use setTimeout to delay setting 'showIframe' to true
    setTimeout(() => {
      showIframe.value = true;
    }, 3000); // 3-second delay
  }
});
</script>
Dynamically Create the Iframe
Outside of our mounting lifecycle, we create a function that builds the iframe and assigns its src attribute. Alternatively, we keep the iframe in place but, instead of setting src straight up, give it a data-src attribute; in our function we query for the iframe and switch data-src to src.
export const addScript = () => {
  const embedScript = document.createElement("script");
  embedScript.setAttribute("src", "https://platform.example.com/widgets.js");
  embedScript.defer = true;
  document.body.appendChild(embedScript);
};
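The data-src variant mentioned above can be sketched roughly like this (the selector and helper name are my own, not part of any library):

```javascript
// Sketch of the data-src swap. The iframe ships with data-src instead of
// src, so the browser never loads it eagerly; we flip the attribute later.
const activateIframe = (selector) => {
  const iframe = document.querySelector(selector);
  if (iframe && iframe.dataset.src) {
    // Copying data-src into src is what actually starts the download
    iframe.src = iframe.dataset.src;
    iframe.removeAttribute("data-src");
  }
};
```

Calling `activateIframe("iframe[data-src]")` from the same delayed code path as `addScript` keeps the iframe completely inert until we decide it may load.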
onMounted(async () => {
  if (post.tags) {
    await useSidebarSorter(post.tags);
  }
  if (process.client) {
    await fetchLeftoverInformation();
    // setTimeout returns a timer id, not a Promise, so it has to be
    // wrapped in a Promise before it can actually be awaited
    await new Promise((resolve) => setTimeout(resolve, 1000));
    addScript();
  }
});
The most important part for this to work is to make this function async and wrap our iframe generator or DOM query in a timeout that we can actually await (a bare setTimeout returns a timer id, not a Promise, so it must be wrapped in one). This trick lets us add enough waiting time to be sure the DOM and the resources that matter most for our FCP (First Contentful Paint) or LCP (Largest Contentful Paint), like images and fonts, have finished loading. Using promises (async/await) keeps our code non-blocking: waiting for one function's result does not stop everything else from running.
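A minimal sketch of that awaitable timeout, with a hypothetical `loadEmbedLate` wrapper standing in for our real delayed loader:

```javascript
// A promisified setTimeout we can genuinely await; plain
// `await setTimeout(fn, ms)` does nothing useful, because setTimeout
// returns a timer id rather than a Promise.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Hypothetical usage: postpone injecting the embed until after FCP/LCP.
// `inject` would be something like the addScript() function above.
const loadEmbedLate = async (inject, ms = 1000) => {
  await delay(ms);
  inject();
};
```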
With this approach, I’d suggest the biggest offenders (the elements that might represent the biggest performance budget) be executed as late as possible.
3. Migrating from GraphQL to REST for our Nuxt + headless WordPress site
I noticed that the server response time varied with the size of the query. I played with the query a little and stripped it down multiple times to identify how long each piece took to fetch.
Sadly, the information crucial for our page load (post data such as title, ID, SEO, and the Saswap schema) taxed the website the most.
That's when I decided the best course of action was to switch to REST for data fetching; in our measurements, REST was usually 2x to 3x faster than GraphQL.
GraphQL lets us query once for all the info we need, but it is slow; in our case a single REST call brings most of the info we need to display the content above the fold. The only things we need additional calls for are the categories and comments.
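As a rough sketch, the post fetch over the default WordPress REST routes (`/wp-json/wp/v2/posts/{id}`) looks something like this; `hostname` and `postId` are placeholders, and error handling is reduced to a single throw:

```javascript
// Minimal sketch: fetch one post over the standard WordPress REST API
const fetchPost = async (hostname, postId) => {
  const res = await fetch(`${hostname}/wp-json/wp/v2/posts/${postId}`);
  if (!res.ok) throw new Error(`REST call failed: ${res.status}`);
  const post = await res.json();
  // WordPress wraps rendered fields in { rendered: "..." } objects
  return {
    id: post.id,
    title: post.title.rendered,
    content: post.content.rendered,
  };
};
```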
I got a little stressed in the beginning because REST doesn't let us fetch a post by slug the way our GraphQL query did, and the comments don't come back in order.
The solution I came up with was a script that fetches all the posts and builds a dictionary with each slug as the key and the post ID as the value.
Then I created a small plugin to expose that information to our [...slug] page.
That way, when we SPA-navigate, during setup I take the slug, look up the post ID by that key in our dictionary, and make the REST call.
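The slug-to-ID lookup can be sketched like this; `allPosts` stands in for the result of the one-off "fetch all posts" script, and the field names are assumptions:

```javascript
// Build a dictionary keyed by slug, mapping each slug to its post ID
const buildSlugIndex = (allPosts) => {
  const index = {};
  for (const post of allPosts) {
    index[post.slug] = post.id;
  }
  return index;
};

// At navigation time: constant-time lookup, then one REST call by ID
const resolvePostId = (index, slug) => index[slug];
```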
Post comments, category information, tags, and related posts are fetched with GraphQL in our mounted hook on the client side because they are not crucial for the initial page load. Our mounted hook is async, so executing this function doesn't block other processes.
const fetchLeftoverInformation = async () => {
  try {
    const { data } = await $fetch($config.public.hostname + "graphql", {
      method: "post",
      body: {
        query: `
          query NewQuery {
            postBy(slug: "${route.path}") {
              ${POST_CATEGORIES_QUERY}
              ${TAGS_QUERY}
              ${COMMENTS_QUERY}
            }
          }`,
      },
    });

    if (data && data.postBy) {
      category.value = data.postBy.categories.nodes[0];
      comments.value = data.postBy.comments.nodes;
      tags.value = data.postBy.tags.nodes;
      commentsCount.value = data.postBy.commentsCount;
    }
  } catch (e) {
    // eslint-disable-next-line no-console
    console.error(e);
  }
};

onMounted(async () => {
  if (process.client) {
    await fetchLeftoverInformation();
  }
});
You might wonder: why a dictionary and not saving the post information in a JSON array and filtering by ID? Because a keyed lookup always takes the same amount of time, whereas looping through an array to find a specific item takes time that grows linearly with the size of the array.
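The trade-off can be illustrated with hypothetical data:

```javascript
// An array scan is O(n); a keyed dictionary lookup is O(1),
// no matter how many posts exist.
const posts = [
  { id: 1, slug: "first-post" },
  { id: 2, slug: "second-post" },
];

// O(n): walk the array until the slug matches
const findByScan = (slug) => {
  const hit = posts.find((p) => p.slug === slug);
  return hit ? hit.id : undefined;
};

// O(1): build the dictionary once, then index straight into it
const bySlug = Object.fromEntries(posts.map((p) => [p.slug, p.id]));
const findByKey = (slug) => bySlug[slug];
```

Both return the same IDs; only the cost per lookup differs as the post count grows.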
Conclusions
The average document load time (DOMContentLoaded event) for one of our most visited posts used to be 120 seconds, with 150 seconds for the full load.
(Before and after screenshots of our load-time metrics.)
After this rework, the initial load averages 66 seconds.
These tactics helped achieve a whopping 45% decrease in our initial load time.
If you stumbled upon this article while facing similar issues in your own projects, I dearly hope these tactics help you reach the same positive results they brought me.