Michael Savych
Building a site-blocking cross-browser extension

In this article, I'm going to explain my step-by-step process of building a browser extension for blocking websites, describe the challenges I encountered, and share the solutions I came up with. This is not meant to be an exhaustive guide, and I don't claim to be an expert at anything; I just want to share the thought process behind building this project, so take everything here with a grain of salt. I won't cover every line of code but will instead focus on the key points: the struggles, interesting cases, and quirks of the project. You're welcome to explore the source code in more detail for yourself.



Preface

Just like a lot of people, I struggle with focusing on different tasks, especially with the Internet being the omnipresent distractor. Luckily, as a programmer, I've developed great problem-creating skills, so I decided that, instead of looking for a better existing solution, I'd create my own browser extension that would block the websites users want to restrict access to.
First, let's outline the requirements and main features. The extension must:

  • be cross-browser.
  • block websites from the blacklist.
  • allow the user to choose a blocking option: either block the entire domain with its subdomains or block just the selected URL.
  • provide the ability to disable a blocked website without deleting it from the blacklist.
  • provide an option to automatically restrict access if the user relapses or forgets to re-enable disabled URLs (helpful for people with ADHD).

Setting up the project

First, here's the main stack I chose:

  • TypeScript: I opted for TS over JS because extension development involves too many unfamiliar APIs to go without autocomplete.
  • Webpack: Easier to use in this context compared to tsc for TS compilation. Besides, I encountered problems generating browser-compliant JS with tsc.
  • CSS: Vanilla CSS matched my goals of simplicity, a smaller bundle size, and minimal dependencies. Anything else felt like overkill for an extension with only a couple of pages, which is also why I decided against tools like React or dedicated extension-building frameworks.

The main distinction of extension development from regular web dev is that extensions rely on a service worker that handles most events, content scripts injected into pages, and messaging between the two.
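That messaging pattern comes up throughout this article, so here's its minimal shape (a generic sketch rather than code from the extension, using the webextension-polyfill package introduced later):

import browser from 'webextension-polyfill';

// content script: send a request and await the service worker's response
async function requestCurrentUrl() {
  return browser.runtime.sendMessage({ action: 'getCurrentUrl' });
}

// service worker: a promise returned from the listener becomes the response
browser.runtime.onMessage.addListener(async (message, sender) => {
  if (message.action === 'getCurrentUrl') {
    return { success: true, url: sender.tab?.url };
  }
});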

Creating the Manifest

To support cross-browser functionality, I created two manifest files:

  • manifest.chrome.json: For Chrome's Manifest v3 requirement.
  • manifest.firefox.json: For Firefox, which better supports Manifest v2.

Here are the main differences between the two files:

manifest.chrome.json:

{
  "manifest_version": 3,
  "action": {
    "default_title": "Click to show the form"
  },
  "incognito": "split",
  "permissions": [
    "activeTab",
    "declarativeNetRequestWithHostAccess",
    "scripting",
    "storage",
    "tabs"
  ],
  "host_permissions": ["*://*/"], // get access to all URLs
  "background": {
    "service_worker": "background.js"
  },
  "content_scripts": [{
    "matches": ["<all_urls>"]
  }],
  "web_accessible_resources": [
    {
      "resources": ["blocked.html", "options.html", "about.html", "icons/*.svg"],
      "matches": ["<all_urls>"]
    }
  ],
  "content_security_policy": {
    "extension_pages": "script-src 'self'; object-src 'self'"
  }
}

manifest.firefox.json:

{
  "manifest_version": 2,
  "browser_action": {
    "default_title": "Click to show the form"
  },
  "permissions": [
    "activeTab",
    "declarativeNetRequest",
    "declarativeNetRequestWithHostAccess",
    "scripting", 
    "storage",
    "tabs",
    "*://*/"
  ],
  "background": {
    "scripts": [
      "background.js"
    ],
    "persistent": false
  },
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": [
      "options.js",
      "blocked.js",
      "about.js"
    ]
  }],
  "web_accessible_resources": [
    "blocked.html",
    "options.html", 
    "icons/*.svg"
  ],
  "content_security_policy": "script-src 'self'; object-src 'self'",
}

One interesting thing here is that Chrome required the "incognito": "split" property to work properly in incognito mode, while Firefox worked fine without it.

Here's the basic file structure of the extension:

dist/
node_modules/
src/
|-- background.ts
|-- content.ts
static/
|-- manifest.chrome.json
|-- manifest.firefox.json
package.json
tsconfig.json
webpack.config.js

Now let's talk about how the extension is supposed to work. The user should be able to trigger some kind of form to submit the URL he wants to block. When he accesses that URL, the extension will intercept the request and check whether it should be blocked or allowed. The extension also needs an options page where the user can see the list of all blocked URLs and add, edit, disable, or delete entries.

Creating main input form

The form appears by injecting HTML and CSS into the current page when the user clicks the extension icon or presses the keyboard shortcut. There are other ways to display a form, like using a browser popup, but that has limited customization options for my taste. The background script looks like this:

background.ts:

import browser, { DeclarativeNetRequest } from 'webextension-polyfill';

// on icon click
const action = chrome.action ?? browser.browserAction; // Manifest v2 only has browserAction method
action.onClicked.addListener(tab => {
  triggerPopup(tab as browser.Tabs.Tab);
});

// on shortcut key press 
browser.commands.onCommand.addListener(command => {
  if (command === 'trigger_form') {
    browser.tabs.query({ active: true, currentWindow: true })
      .then((tabs) => {
        const tab = tabs[0];
        if (tab) {
          triggerPopup(tab);
        }
      })
      .catch(error => console.error(error));
  }
});

function triggerPopup(tab: browser.Tabs.Tab) {
  if (tab.id) {
    const tabId = tab.id;
    browser.scripting.insertCSS({
      target: { tabId },
      files: ['global.css', './popup.css'],
    })
      .then(() => {
        browser.scripting.executeScript
          ? browser.scripting.executeScript({
            target: { tabId },
            files: ['./content.js'], // refer to the compiled JS files, not the original TS ones 
          })
          : browser.tabs.executeScript({
            file: './content.js',
          });
      })
      .catch(error => console.error(error));
  }
}

Injecting HTML into every page can lead to unpredictable results because it is hard to predict how different styles of web pages are going to affect the form. A better alternative seems to be using Shadow DOM as it creates its own scope for styles. Definitely a potential improvement I'd like to work on in the future.
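For reference, a Shadow DOM version could look roughly like this (a sketch of the idea, assuming popupForm is the form element built in content.ts; this is not implemented in the extension):

// attach the form to a shadow root so page styles can't leak into it
const host = document.createElement('div');
const shadow = host.attachShadow({ mode: 'closed' });

const style = document.createElement('style');
style.textContent = '/* form styles, isolated from the page */';

shadow.append(style, popupForm);
document.body.appendChild(host);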

I used webextension-polyfill for browser compatibility; with it, I didn't need to write separate extensions for different manifest versions. You can read more about what it does here. To make it work, I included the browser-polyfill.js file before all other scripts in the manifest files.

manifest.chrome.json:

{
  "content_scripts": [{
    "js": ["browser-polyfill.js"]
  }],
}

manifest.firefox.json:

{
  "background": {
    "scripts": [
      "browser-polyfill.js",
      // other scripts
    ],
  },
  "content_scripts": [{
    "js": [
      "browser-polyfill.js",
      // other scripts
    ]
  }],
}
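With the polyfill in place, every script can use the same promise-based browser.* API in both browsers. A minimal illustration (not from the source):

import browser from 'webextension-polyfill';

// resolves with the active tab in Chrome and Firefox alike,
// without the chrome.* callback style
async function getActiveTab() {
  const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
  return tab;
}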

The process of injecting the form is a straightforward DOM manipulation, but note that each element must be created individually, as opposed to assigning one big template literal to an element. Although more verbose and tedious, this approach avoids the unsafe HTML injection warnings we'd otherwise get when running the compiled code in the browser.
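For illustration, here's how one of the form's children might be created (a hypothetical sketch; the real form has more fields and attributes):

// building a field imperatively instead of assigning an HTML string
const urlInput = document.createElement('input');
urlInput.type = 'text';
urlInput.name = 'url';
urlInput.value = currUrl; // pre-fill with the current page's URL
popupForm.appendChild(urlInput);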

content.ts:

import browser from 'webextension-polyfill';
import { maxUrlLength, minUrlLength } from "./globals";
import { GetCurrentUrl, ResToSend } from "./types";
import { handleFormSubmission } from './helpers';

async function showPopup() {
  const body = document.body;
  const formExists = document.getElementById('extension-popup-form');
  if (!formExists) {
    const msg: GetCurrentUrl = { action: 'getCurrentUrl' };

    try {
      const res: ResToSend = await browser.runtime.sendMessage(msg);

      if (res.success && res.url) {
        const currUrl: string = res.url;
        const popupForm = document.createElement('form');
        popupForm.classList.add('extension-popup-form');
        popupForm.id = 'extension-popup-form';

        /* Create every child element the same way as above */

        body.appendChild(popupForm);
        popupForm.addEventListener('submit', (e) => {
          e.preventDefault();
          handleFormSubmission(popupForm, handleSuccessfulSubmission); // we'll discuss form submission later
        });
        document.addEventListener('keydown', (e) => {
          if (e.key === 'Escape') {
            if (popupForm) {
              body.removeChild(popupForm);
            }
          }
        });
      }
    } catch (error) {
      console.error(error);
      alert('Something went wrong. Please try again.');
    }
  }
}

function handleSuccessfulSubmission() {
  hidePopup();
  setTimeout(() => {
    window.location.reload();
  }, 100); // need to wait a little bit in order to see the changes
}

function hidePopup() {
  const popup = document.getElementById('extension-popup-form');
  popup && document.body.removeChild(popup);
}

Now it's time to make sure the form gets displayed in the browser. To perform the required compilation step, I configured Webpack like this:

webpack.config.js:

const path = require('path');
const CopyPlugin = require('copy-webpack-plugin');

const targetBrowser = process.env.TARGET_BROWSER || 'chrome';

module.exports = {
  mode: "development",
  entry: {
    background: './src/background.ts',
    content: './src/content.ts'
    /* other scripts */
  },
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "[name].js",
    clean: true // Clean the output directory before emit.
  },
  resolve: {
    extensions: [".ts", ".js"],
  },
  module: {
    rules: [
      {
        test: /\.ts$/,
        loader: "ts-loader",
        exclude: /node_modules/,
      },
    ],
  },
  devtool: 'cheap-module-source-map', // Avoids eval in source maps
  plugins: [
    new CopyPlugin({
      patterns: [
        {
          from: `static/manifest.${targetBrowser}.json`,
          to: 'manifest.json'
        },
        {
          from: 'static',
          globOptions: { ignore: ['**/manifest.*.json'] }
        },
        {
          from: 'node_modules/webextension-polyfill/dist/browser-polyfill.js'
        }
      ],
    })
  ]
};

Basically, it reads the browser name from the environment variable set by the command I run, picks the corresponding manifest file, and compiles the TypeScript code into the dist/ directory.

I was going to write proper tests for the extension, but I discovered that Puppeteer doesn't support content script testing, making it impossible to test most of the features. If you know of any workarounds for content script testing, I'd love to hear them in the comments.

My build commands in package.json are:

{
  "scripts": {
    "build:chrome": "cross-env TARGET_BROWSER=chrome webpack --config webpack.config.js",
    "build:firefox": "cross-env TARGET_BROWSER=firefox webpack --config webpack.config.js"
  }
}

So, for instance, whenever I run

npm run build:chrome

the files for Chrome get compiled into the dist/ directory. After triggering the form on any site, either by clicking the action icon or by pressing the shortcut, it looks like this:

Main form display

Handling URL block

Now that the main form is ready, the next task is to submit it. To implement the blocking functionality, I leveraged the declarativeNetRequest API and dynamic rules. The rules are stored in the extension's storage. Manipulating dynamic rules is only possible in the service worker, so to exchange data between the service worker and the content scripts, I send messages with the necessary data between them. Since this extension needs quite a few types of operations, I created a type for every action. Here's an example operation type:

types.ts:

export interface AddAction {
  action: "blockUrl",
  url: string,
  blockDomain: boolean
}

// other actions look very similar

export type Action = AddAction | DeleteAction | DeleteAllAction | GetAllAction | GetCurrentUrl | UpdateAction;
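The snippets that follow also rely on a ResToSend response type, which the article doesn't show. Here's a sketch of its shape inferred from how it's used (the exact fields in the source may differ):

// inferred response shape for messages coming back from the service worker
export interface ResToSend {
  success: boolean;
  status?: 'added' | 'duplicate' | 'updated' | 'deleted' | 'deletedRule' | 'getRules';
  msg?: string;
  url?: string;    // set for getCurrentUrl responses
  rules?: Site[];  // set for getRules/updateRules responses
  error?: unknown;
}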

Since it's reasonable to be able to add new URLs both from the main form and from the options page, submission is handled by a reusable function in a separate file:

helpers.ts:

import browser from 'webextension-polyfill';
import { AddAction, ResToSend } from "./types";
import { forbiddenUrls, maxUrlLength, minUrlLength } from './globals';

export async function handleFormSubmission(urlForm: HTMLFormElement, successFn: () => void) {
  if (urlForm && successFn) {
    const formData = new FormData(urlForm);
    const urlToBlock = formData.get('url') as string;
    const blockDomain = (document.getElementById('block-domain') as HTMLInputElement).checked;

    if (!urlToBlock || forbiddenUrls.some(url => url.test(urlToBlock))) { // forbiddenUrls includes browser-specific pages like chrome://, etc.
      console.error(`Invalid URL: ${urlToBlock}`);
      return;
    } else if (urlToBlock.length < minUrlLength) {
      console.error(`URL is too short`);
      return;
    } else if (urlToBlock.length > maxUrlLength) {
      console.error(`URL is too long`);
      return;
    }

    const msg: AddAction = { action: "blockUrl", url: urlToBlock, blockDomain };

    try {
      const res: ResToSend = await browser.runtime.sendMessage(msg);
      if (res.success) {
        if (res.status === 'added') {
          successFn();
        } else if (res.status === 'duplicate') {
          alert('URL is already blocked');
        }
      }
    } catch (error) {
      console.error(error);
    }
  }
}

I call handleFormSubmission() in content.ts; it validates the provided URL and then sends it to the service worker to add to the blacklist.

Dynamic rules have a set maximum size that needs to be taken into account: passing too long a URL string leads to unexpected behaviour when saving the dynamic rule for it. I found that in my case a 75-character URL was a good maximum length for a rule.
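The minUrlLength and maxUrlLength constants imported above live in globals.ts. A plausible sketch of that file (apart from the 75-character maximum, the values and patterns here are my assumptions):

// globals.ts (sketch): validation constants used by the form handlers
export const maxUrlLength = 75; // longer regex filters failed to save reliably
export const minUrlLength = 4;  // assumed: the shortest sensible host, e.g. 'a.co'
// assumed: browser-internal pages that must never be blocked
export const forbiddenUrls = [/^chrome:\/\//, /^about:/, /^moz-extension:\/\//];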

Here's how the service worker is going to process the received message:

background.ts:

import browser, { DeclarativeNetRequest } from 'webextension-polyfill';
import { customAlphabet } from 'nanoid';
import { Action, NewRule, ResToSend, Site } from "./types";

// 3-digit numeric IDs ('000'-'999'): see the note on rule limits below
const nanoid = customAlphabet('0123456789', 3);

browser.runtime.onMessage.addListener(async (message, sender) => {
  const msg = message as Action;
  if (msg.action === 'blockUrl') {
    const blackList = await getRules();
    const normalizedUrl = msg.url?.replace(/^https?:\/\//, '').replace(/\/$/, ''); // remove the protocol and the last slash
    const urlToBlock = `^https?:\/\/${normalizedUrl}\/?${msg.blockDomain ? '.*' : '$'}`;

    if (blackList.some(site => site.url === urlToBlock)) {
      return { success: true, status: "duplicate", msg: 'URL is already blocked' };
    }

    let newId = Number(nanoid());
    while (blackList.some(rule => rule.id === newId)) {
      newId = Number(nanoid()); // regenerate until the ID is unique
    }

    const newRule: NewRule = {
      id: newId,
      priority: 1,
      action: {
        type: 'redirect',
        redirect: {
          regexSubstitution: `${browser.runtime.getURL("blocked.html")}?id=${newId}`
        }
      },
      condition: {
        regexFilter: urlToBlock,
        resourceTypes: ["main_frame" as DeclarativeNetRequest.ResourceType]
      }
    };

    try {
      await browser.declarativeNetRequest.updateDynamicRules({
        addRules: [newRule],
        removeRuleIds: []
      });
      const res: ResToSend = { success: true, status: "added", msg: 'URL has been saved' };
      return res;
    } catch (error) {
      console.error('Error updating rules:', error);
      return { success: false, error };
    }
  } else {
    // will throw error if type doesn't match any existing actions
    const exhaustiveCheck: never = msg;
    throw new Error('Unhandled action');
  }
});

For submission I create a new rule object and update the dynamic rules to include it. A simple conditional regex allows me to choose between blocking the entire domain or just the specified URL.
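To make that conditional regex concrete, here's what the stored filter ends up as for a hypothetical example.com entry (note that \/ inside a template literal evaluates to a plain /):

// assuming msg.url === 'https://example.com/'
// blockDomain: true  -> regexFilter '^https?://example.com/?.*' (the domain and everything under it)
// blockDomain: false -> regexFilter '^https?://example.com/?$'  (only this exact URL)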

After completion, I send the response message back to the content script. The most interesting thing in this snippet is the use of nanoid. Through trial and error, I discovered that there's a limit on the number of dynamic rules: 5,000 in older browsers and 30,000 in newer ones. I found that out through a bug, when I tried to assign a rule an ID greater than 5,000. Since I couldn't constrain nanoid to generate numbers strictly below 5,000, I limited my IDs to 3-digit numbers (0-999, i.e. 1,000 unique IDs in total). That cut the total number of rules my extension can store from 5,000 to 1,000, which on the one hand is quite significant, but on the other, the probability of a user blocking that many URLs is pretty low, so I settled for this not-so-graceful solution.

Now the user is able to add new URLs to the blacklist and choose the type of block he wants to assign to them. If he tries to access a blocked resource, he'll be redirected to a block page:

Block page

However, there's one edge case that needs to be addressed. The extension will block any unwanted URL the user accesses directly. But if the website is an SPA with client-side routing, the extension won't catch forbidden URLs there, because no new network requests are made. To handle this case, I updated background.ts to listen for URL changes in the current tab. When the URL changes, I manually check whether it's in the blacklist, and if it is, I redirect the user.

background.ts:

// client-side redirection (when no new requests are sent)
browser.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if (changeInfo.status === 'complete' || changeInfo.url) {
    const url = changeInfo.url;
    if (url) {
      const blackList = await getRules();
      for (const rule of blackList) {
        const regex = new RegExp(rule.url);
        // block the navigation if the URL is in the list and the rule is active
        if (regex.test(url) && rule.isActive) {
          browser.tabs.update(tabId, { url: browser.runtime.getURL('blocked.html') });
          break; // stop after the first match (forEach wouldn't let us break)
        }
      }
    }
  }
});

// get dynamic rules in a more readable format
async function getRules(): Promise<Site[]> {
  try {
    const existingRules = await browser.declarativeNetRequest.getDynamicRules();

    const blackList: Site[] = existingRules.map(rule => ({
      id: rule.id,
      url: rule.condition.regexFilter as string,
      strippedUrl: stripUrl(rule.condition.regexFilter as string),
      blockDomain: rule.condition.regexFilter ? rule.condition.regexFilter[rule.condition.regexFilter.length - 1] === '*' : true, // set true by default
      isActive: rule.action.redirect ? true : false
    }));
    return blackList;
  } catch (error) {
    console.error(error);
    return [];
  }
}

getRules() uses the declarativeNetRequest.getDynamicRules() method to retrieve the list of all dynamic rules, which I then convert into a more readable format.
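The stripUrl() helper referenced above isn't shown in the article; here's a hypothetical sketch of it, assuming its job is to turn a stored filter back into a readable URL:

// turn a regexFilter like '^https?://example.com/?.*' back into 'example.com'
function stripUrl(regexFilter: string): string {
  return regexFilter
    .replace(/^\^https\?:\/\//, '') // drop the protocol matcher
    .replace(/\/\?(\.\*|\$)$/, ''); // drop the trailing slash/anchor matcher
}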

Now the extension correctly blocks URLs accessed directly and through SPAs.

Creating options page

The options page has a simple interface, as shown below:

Options page

This is the page with the main bulk of features like editing, deleting, disabling, and applying strict mode. Here's how I wired it.

Edit & delete functionality

Editing was probably the most complex task. Users can edit a URL by modifying its string or changing its block type (the entire domain or just the specific URL). While the user edits, I collect the IDs of the edited URLs into a set. Upon saving, I create updated dynamic rules and pass them to the service worker to apply the changes. After every save or reload, I re-fetch the dynamic rules and render them in the table. Below is a simplified version:

options.ts:

async function saveChanges() {
  const updatedRules: NewRule[] = [];

  if (editedRulesIds.size === 0) {
    displayUrlList();
    alert('No changes were made');
    return;
  }

  const rulesToStore: RuleInStorage[] = []; // strict-mode bookkeeping, omitted in this simplified version
  for (const elem of editedRulesIds) {
    // Get the HTML element (table row) of an updated URL
    // Gather the data from its fields required for a dynamic rule creation
    // Store the newly created rules in an updatedRules array

    const updatedRule: NewRule = {
      id: rowId,
      priority: 1,
      action: {
        type: isActive
          ? 'redirect'
          : 'allow',
        ...(isActive && {
          redirect: {
            regexSubstitution: `${browser.runtime.getURL("blocked.html")}?id=${rowId}`
          }
        })
      },
      condition: {
        regexFilter: urlToBlock,
        resourceTypes: ["main_frame" as DeclarativeNetRequest.ResourceType]
      }
    };
    updatedRules.push(updatedRule);
  }

  const msg: UpdateAction = { action: 'updateRules', updatedRules };
  try {
    const res: ResToSend = await browser.runtime.sendMessage(msg);
    if (res.success && res.rules) {
      editedRulesIds.clear();
      displayUrlList();
      alert('Changes have been saved');
    }
  } catch (error) {
    console.error(error);
    alert('Could not save changes.');
  }
}

I decide whether to block or allow a particular rule simply by checking its isActive property. Updating the rules and retrieving the rules: those are two more operations to add to my background listener:

background.ts:

browser.runtime.onMessage.addListener(async (message, sender) => {
  const msg = message as Action;
  if (msg.action === 'blockUrl') {
    // logic for adding URL to blacklist
  } else if (msg.action === 'getRules') {
    const blackList = await getRules();
    return { success: true, status: "getRules", rules: blackList };
  } else if (msg.action === 'updateRules') {
    const uniqueFilters = new Map<string, number>(); // <regexFilter, rule id>
    const filteredRules: NewRule[] = [];

    // remove every edited rule's old version; if several updated rules end up
    // with the same filter, only the first one gets re-added
    const rulesToRemove: number[] = [];
    msg.updatedRules.forEach(rule => {
      if (!uniqueFilters.has(rule.condition.regexFilter)) {
        uniqueFilters.set(rule.condition.regexFilter, rule.id);
        filteredRules.push(rule);
      }
      rulesToRemove.push(rule.id);
    });

    await browser.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: rulesToRemove,
      addRules: filteredRules
    });
    const storedRules = await getRules();
    return { success: true, status: "updated", msg: 'Rules updated', rules: storedRules };
  } else {
    const exhaustiveCheck: never = msg;
    throw new Error('Unhandled action');
  }
});

The updating functionality was a bit tricky to get right because of an edge case: an edited URL can become a duplicate of an existing rule. Other than that, it's the same spiel: update the dynamic rules and send the appropriate message upon completion.

Deleting URLs was probably the easiest task. There are two types of deletion in this extension: deleting a specific rule and deleting all rules.

options.ts:

async function deleteRule(id: number) {
  const msg: DeleteAction = { action: "deleteRule", deleteRuleId: id };
  try {
    const res: ResToSend = await browser.runtime.sendMessage(msg);
    if (res.success) {
      displayUrlList(); // re-fetch the rules
    }
  } catch (error) {
    console.error(error);
    alert('Could not delete the URL.');
  }
}

async function deleteRules() {
  const msg: DeleteAllAction = { action: 'deleteAll' };
  try {
    await browser.runtime.sendMessage(msg);
  } catch (error) {
    console.error(error);
    alert('Could not delete the URLs.');
  }
}

And, just like before, I added 2 more actions to the service worker listener:

background.ts:

browser.runtime.onMessage.addListener(async (message, sender) => {
  const msg = message as Action;
  if (msg.action === 'blockUrl') {
    // logic for adding URL to blacklist
  } else if (msg.action === 'getRules') {
    // logic for retrieving all rules
  } else if (msg.action === 'deleteRule') {
    // delete single rule
    await browser.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: [msg.deleteRuleId],
      addRules: []
    })
    return { success: true, status: "deletedRule", msg: `Rule ${msg.deleteRuleId} have been deleted` };
  } else if (msg.action === 'deleteAll') {
    // delete all rules
    const existingRules = await browser.declarativeNetRequest.getDynamicRules();
    await browser.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: existingRules.map(rule => rule.id),
      addRules: []
    });
    return { success: true, status: "deleted", msg: 'All rules have been deleted' };
  } else if (msg.action === 'updateRules') {
    // logic for updating edited rules
  } else {
    // will throw error if type doesn't match the existing actions
    const exhaustiveCheck: never = msg;
    throw new Error('Unhandled action');
  }
});

Implementing strict mode

Probably the main feature of the extension is the ability to automatically re-block disabled (temporarily allowed) rules, for people who need more rigid control over their browsing habits. The idea: with strict mode off, any URL the user disables remains disabled until he changes it himself. With strict mode on, any disabled rule is automatically re-enabled after one hour. To implement this, I used the extension's local storage to keep an array of objects representing each disabled rule. Each object holds the rule ID, the unblock date, and the URL itself. Any time the user accesses a new resource or refreshes the blacklist, the extension first checks the storage for expired rules and updates them accordingly.
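The stored objects look roughly like this (a sketch matching the fields used in the snippets below):

// shape of an entry stored under the 'inactiveRules' key in browser.storage.local
interface RuleInStorage {
  id: number;          // the dynamic rule's ID
  unblockDate: number; // epoch milliseconds after which the rule is re-enabled
  urlToBlock: string;  // the regex filter to restore
}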

options.ts:

async function handleInactiveRules(isStrictModeOn: boolean) {
  if (!isStrictModeOn) {
    browser.storage.local.set({ inactiveRules: [] });
  } else {
    const msg: GetAllAction = { action: 'getRules' };
    try {
      const res: ResToSend = await browser.runtime.sendMessage(msg);
      if (res.success && res.rules) {
        const inactiveRulesToStore: RuleInStorage[] = [];
        const unblockDate = new Date(Date.now() + strictModeBlockPeriod).getTime();
        res.rules.forEach(rule => {
          if (!rule.isActive) {
            const urlToBlock = `^https?:\/\/${rule.strippedUrl}\/?${rule.blockDomain ? '.*' : '$'}`;
            inactiveRulesToStore.push({ id: rule.id, unblockDate, urlToBlock });
          }
        });
        browser.storage.local.set({ inactiveRules: inactiveRulesToStore });
      }
    } catch (error) {
      console.error(error);
    }
  }
}

The isStrictModeOn boolean is kept in storage as well. If it's true, I loop over all the rules and store the disabled ones, each with a newly computed unblock time. Then, on every response, I check the storage for disabled rules, remove the expired ones if they exist, and update them:

background.ts:

async function checkInactiveRules() {
  const result = await browser.storage.local.get([storageRulesKey]);
  const inactiveRules = result.inactiveRules as RuleInStorage[];

  // if there are no inactive rules, there's nothing to do
  if (!inactiveRules || inactiveRules.length === 0) {
    return;
  }

  const currTime = new Date().getTime();
  const rulesToUpdate: NewRule[] = [];
  const expiredRulesSet = new Set<number>();
  // update the expired rules
  inactiveRules.forEach(rule => {
    if (rule.unblockDate < currTime) {
      const updatedRule: NewRule = {
        id: rule.id,
        priority: 1,
        action: {
          type: 'redirect',
          redirect: {
            regexSubstitution: `${browser.runtime.getURL("blocked.html")}?id=${rule.id}`
          }
        },
        condition: {
          regexFilter: rule.urlToBlock,
          resourceTypes: ["main_frame" as DeclarativeNetRequest.ResourceType]
        }
      };
      rulesToUpdate.push(updatedRule);
      expiredRulesSet.add(rule.id);
    }
  });
  browser.declarativeNetRequest.updateDynamicRules({
    removeRuleIds: rulesToUpdate.map(rule => rule.id),
    addRules: rulesToUpdate
  })
    .then(() => {
      // remove rules with expired block time from the storage 
      const updatedRules = inactiveRules.filter(rule => !expiredRulesSet.has(rule.id));
      browser.storage.local.set({ inactiveRules: updatedRules });
    })
    .catch(error => console.error(error));
}
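As for when checkInactiveRules() runs: one way to wire it up, matching the check-on-every-response behaviour described above, is to call it at the top of the message listener (a sketch; the source may hook it up differently):

browser.runtime.onMessage.addListener(async (message, sender) => {
  // re-enable any expired rules before handling the actual request
  await checkInactiveRules();
  // ...then handle the message as shown in the earlier snippets
});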

With that done, the website-blocking extension is complete. Users can add, edit, delete, and disable any URLs they want, apply partial or entire-domain blocks, and use strict mode to maintain more discipline in their browsing.

Extension work example


Conclusion

That's the basic overview of my site-blocking extension. It's my first extension, and building it was an interesting experience, especially given how mundane the world of web dev can become sometimes. There's definitely room for improvement and new features: a search bar for URLs in the blacklist, proper tests, a custom time duration for strict mode, submission of multiple URLs at once; these are just a few things I'd like to add to this project some day. I also initially planned on making the extension cross-platform but couldn't get it to run on my phone.
If you enjoyed reading this walkthrough, learnt something new, or have any other feedback, your comments are appreciated. Thank you for reading.

The source code
The live version
