This article was originally published at Bits and Pieces
Microfrontends have become a viable option for developing mid to large scale web apps. Especially for distributed teams, the ability to develop and deploy independently seems charming. While frameworks like Piral make that quite easy, we may still want to implement our microfrontend solution from scratch. One problem that quickly arises: how can one microfrontend communicate with another?
Having gained much experience with the implementation of various microfrontend-based solutions in the past, I’ll try to share what I’ve learned. Most of these ways focus on client-side communication (i.e., using JS); however, I’ll try to touch on server-side stitching, too.
However you choose to implement your MFs, always make sure to share your UI components via a component hub using tools like Bit (GitHub). It’s a great way to maximize code reuse, build a more scalable and maintainable codebase, and keep a consistent UI throughout your different Micro Frontends (some even use Bit as an implementation of Micro Frontends).
Loose Coupling
The most important aspect of implementing any communication pattern in microfrontends is loose coupling. This concept is not new and not exclusive to microfrontends. Already in microservice backends, we should take great care not to communicate directly. Quite often, we still do it — to simplify flows or infrastructure, or both.
How is loose coupling possible in microfrontend solutions? Well, it all starts with good naming. But before we come to that we need to take a step back.
Let’s first look at what’s possible with direct communication. We could, for instance, come up with the following implementation:
// microfrontend A
window.callMifeA = msg => {
//handle message;
};
// microfrontend B
window.callMifeA({
type: 'show_dialog',
name: 'close_file'
});
At first, this may look nice: we want to talk from microfrontend B to A, and we can do so. The message format allows us to handle different scenarios quite nicely. However, if we change the name in microfrontend A (e.g., to mifeA), then this code will break. Alternatively, if microfrontend A is not there at all for whatever reason, this code will break, too. Finally, this way always assumes that callMifeA is a function.
The diagram below illustrates this problem of direct coupling.
The only advantage of this way is that we know for “sure” (at least in the case of a working function call) that we are talking to microfrontend A. Or do we? How can we make sure that callMifeA has not been changed by another microfrontend?
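The only mitigation in this direct model is to guard every call. A minimal sketch of such a defensive check, purely illustrative and reusing the names from above:
// microfrontend B - defensive, but still tightly coupled to "callMifeA"
if (typeof window.callMifeA === 'function') {
  window.callMifeA({
    type: 'show_dialog',
    name: 'close_file',
  });
} else {
  // microfrontend A is missing or was renamed; drop or queue the message
}
This keeps microfrontend B from crashing, but the coupling itself remains.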
So let’s decouple it using a central application shell:
// application shell
const mife = [];
window.registerMife = (name, call) => {
mife.push({
name,
call,
});
};
window.callMife = (target, msg) => {
mife.filter(m => m.name === target).forEach(m => m.call(msg));
};
// microfrontend A
window.registerMife('A', msg => {
//handle message;
});
// microfrontend B
window.callMife('A', {
type: 'show_dialog',
name: 'close_file'
});
Now calling callMife should work in any case - we just should not expect that the anticipated behavior is guaranteed.
The introduced pool can also be drawn into the diagram.
Up to this point the naming convention is not really in place. Calling our microfrontends A, B, etc. is not really ideal.
Naming Conventions
There are multiple ways to structure names within such an application. I usually place them in three categories:
- Tailored to their domain (e.g., machines)
- According to their offering (e.g., recommendations)
- A domain offering (e.g., machine-recommendations)
Sometimes in really large systems the old namespace hierarchy (e.g., world.europe.germany.munich) makes sense. Very often, however, it starts to be inconsistent quite early.
As usual, the most important part about a naming convention is to just stick with it. Nothing is more disturbing than an inconsistent naming scheme; it is even worse than a bad one.
While tools such as custom linting rules may be used to ensure that a consistent naming scheme is applied, in practice only code reviews and central governance really help. Linting rules can verify that certain patterns (e.g., a regular expression like /^[a-z]+(\.[a-z]+)*$/) are followed. Mapping the individual parts back to actual names is a much harder task: who defined the domain-specific language and terminology in the first place?
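For illustration only, a runtime check along those lines could look like this (the helper name is made up; the regex is the one mentioned above):
// hypothetical helper enforcing the dot-separated naming scheme
const namePattern = /^[a-z]+(\.[a-z]+)*$/;

function assertValidName(name) {
  if (!namePattern.test(name)) {
    throw new Error(`Invalid microfrontend name: "${name}"`);
  }
}

assertValidName('machines.recommendations'); // passes
assertValidName('MachineRecommendations'); // throws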
To shorten our quest here:
Naming things will always be one of the unsolved problems.
My recommendation is just to select a naming convention that seems to make sense and stick with it.
Exchanging Events
Naming conventions are also important for the communication in terms of events.
The already introduced communication pattern could be simplified by using the custom events API, too:
// microfrontend A
window.addEventListener('mife-a', e => {
const { msg } = e.detail;
//handle message;
});
// microfrontend B
window.dispatchEvent(new CustomEvent('mife-a', {
detail: {
type: 'show_dialog',
name: 'close_file'
}
}));
While this may look appealing at first, it also comes with some clear drawbacks:
- What is the event for calling microfrontend A again?
- How should we properly type this?
- Can we support different mechanisms here, too — like fan-out, direct, …?
- Dead lettering and other things?
A message queue seems inevitable. Without supporting all of the features above, a simple implementation may start with the following:
const handlers = {};
window.publish = (topic, message) => {
window.dispatchEvent(new CustomEvent('pubsub', {
detail: { topic, message },
}));
};
window.subscribe = (topic, handler) => {
const topicHandlers = handlers[topic] || [];
topicHandlers.push(handler);
handlers[topic] = topicHandlers;
};
window.unsubscribe = (topic, handler) => {
const topicHandlers = handlers[topic] || [];
const index = topicHandlers.indexOf(handler);
index >= 0 && topicHandlers.splice(index, 1);
};
window.addEventListener('pubsub', ev => {
const { topic, message } = ev.detail;
const topicHandlers = handlers[topic] || [];
topicHandlers.forEach(handler => handler(message));
});
The code above would be placed in the application shell. Now the different microfrontends could use it:
// microfrontend A
window.subscribe('mife-a', msg => {
//handle message;
});
// microfrontend B
window.publish('mife-a', {
type: 'show_dialog',
name: 'close_file'
});
This is actually the closest we can get to the original code, but with loose coupling instead of an unreliable direct approach.
The application shell may also live differently than illustrated in the diagram above. The important part is that each microfrontend can access the event bus independently.
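One practical detail: since the handlers are stored in the application shell, a microfrontend should also unsubscribe when it gets removed, otherwise its handlers leak. A small sketch reusing the functions from above (the teardown hook is hypothetical):
// microfrontend A
const onMessage = msg => {
  // handle message
};

window.subscribe('mife-a', onMessage);

// called from the microfrontend's own teardown / unmount logic
function teardownMifeA() {
  window.unsubscribe('mife-a', onMessage);
}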
Sharing Data
While dispatching events or enqueuing messages seems straightforward in a loosely coupled world, sharing data does not. There are multiple ways to approach this problem:
- single location, multiple owners — everyone can read and write
- single location, single owner — everyone can read, but only the owner can write
- single owner, everyone needs to get a copy directly from the owner
- single reference, everyone with a reference can actually modify the original
Due to loose coupling we should exclude the last two options. We need a single location — determined by the application shell.
Let’s start with the first option:
const data = {};
window.getData = name => data[name];
window.setData = (name, value) => (data[name] = value);
Very simple, yet not very effective. We would at least need to add some event handlers to be informed when the data changes.
The diagram below shows the read and write APIs attached to the DOM.
The addition of change events only affects the setData function:
window.setData = (name, current) => {
const previous = data[name];
data[name] = current;
window.dispatchEvent(new CustomEvent('changed-data', {
detail: {
name,
previous,
current,
},
}));
};
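Any microfrontend interested in updates can then listen for the changed-data event. A minimal sketch (the data name "secret" is just an example):
// microfrontend B - react to data changes made elsewhere
window.addEventListener('changed-data', ev => {
  const { name, current } = ev.detail;
  if (name === 'secret') {
    // re-render or update local state using the current value
  }
});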
While having multiple “owners” may have some benefits, it also comes with lots of problems and confusion. Alternatively, we can come up with a way of only supporting a single owner:
const data = {};
window.getData = name => {
const item = data[name];
return item && item.value;
}
window.setData = (owner, name, value) => {
const previous = data[name];
if (!previous || previous.owner === owner) {
data[name] = {
owner,
name,
value,
};
window.dispatchEvent(new CustomEvent('changed-data', {
detail: {
name,
previous: previous && previous.value,
current: value,
},
}));
}
};
Here, the first parameter has to refer to the name of the owner. In case no one has yet claimed ownership we accept any value here. Otherwise, the provided owner name needs to match the current owner.
This model certainly seems charming at first, however, we’ll end up with some issues regarding the owner parameter quite soon.
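For example, nothing stops another microfrontend from simply passing the owner’s name itself. A short sketch of the problem, using the setData from above:
// microfrontend A claims ownership of "secret"
window.setData('A', 'secret', 42);

// microfrontend B can still overwrite the value by pretending to be "A"
window.setData('A', 'secret', 1337);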
One way around this is to proxy all requests.
Centralized API
Global objects. Well, they certainly are practical and very helpful in many situations. In the same way, they are also the root of many problems. They can be manipulated. They are not very friendly for unit testing. They are quite implicit.
An easy way out is to treat every microfrontend as a kind of plugin that communicates with the app shell through its own proxy.
An initial setup may look as follows:
// microfrontend A
document.currentScript.setup = api => {
api.setData('secret', 42);
};
// microfrontend B
document.currentScript.setup = api => {
const value = api.getData('secret'); // 42
};
Every microfrontend may be represented by a set of (mostly JS) files — brought together by referencing a single entry script.
Using a list of available microfrontends (e.g., stored in a variable microfrontends), we can load all microfrontends and pass in an individually created API proxy.
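The shape of that list is not spelled out here; as an assumption, it could be as simple as a name and a script URL per microfrontend:
// hypothetical list of available microfrontends
const microfrontends = [
  { name: 'A', url: 'https://example.com/mife-a/index.js' },
  { name: 'B', url: 'https://example.com/mife-b/index.js' },
];
The loader in the application shell could then look like this: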
const data = {};
const getDataGlobal = name => {
const item = data[name];
return item && item.value;
}
const setDataGlobal = (owner, name, value) => {
const previous = data[name];
if (!previous || previous.owner === owner) {
data[name] = {
owner,
name,
value,
};
window.dispatchEvent(new CustomEvent('changed-data', {
detail: {
name,
previous: previous && previous.value,
current: value,
},
}));
}
};
microfrontends.forEach(mife => {
const api = {
getData: getDataGlobal,
setData(name, value) {
setDataGlobal(mife.name, name, value);
},
};
const script = document.createElement('script');
script.src = mife.url;
script.onload = () => {
script.setup(api);
};
document.body.appendChild(script);
});
Wonderful! Now please note that currentScript is required for this technique, so IE 11 or earlier will require special attention.
The diagram below shows how the central API affects the overall communication in case of shared data.
The nice thing about this approach is that the api object can be fully typed. Also, the whole approach allows progressive enhancement, since it just passively declares a glue layer (the setup function).
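As a rough sketch of what such a typing could express (written as JSDoc here; with TypeScript this would be an interface shipped by the shell, and the names are assumptions):
/**
 * A possible shape of the api object handed to each microfrontend.
 * @typedef {Object} MifeApi
 * @property {(name: string) => any} getData
 * @property {(name: string, value: any) => void} setData
 */

/**
 * The signature of the glue layer each microfrontend declares.
 * @callback MifeSetup
 * @param {MifeApi} api
 * @returns {void}
 */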
This centralized API broker is definitely also helpful in all the other areas we’ve touched so far.
Activation Functions
Microfrontends are all about “when is my turn?” or “where should I render?”. The most natural way of getting this implemented is by introducing a simple component model.
The simplest one is to introduce paths and a path mapping:
const checkActive = location => location.pathname.startsWith('/sample');
window.registerApplication(checkActive, {
// lifecycle here
});
The lifecycle methods now depend fully on the component model. In the simplest approach we introduce load, mount, and unmount.
The checking needs to be performed from a common runtime, which can simply be called an “Activator”, as it will determine when something is active. How these look is still pretty much up to us. For instance, we can already provide the element of an underlying component, essentially resulting in an activator hierarchy. Giving each component a URL and still being able to compose them together can be very powerful.
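To make the activator idea concrete, here is a minimal sketch assuming the registerApplication signature from above and only the mount/unmount lifecycle (load is omitted for brevity):
// application shell - a very naive activator
const apps = [];

window.registerApplication = (checkActive, lifecycle) => {
  apps.push({ checkActive, lifecycle, active: false });
};

function runActivator() {
  apps.forEach(app => {
    const shouldBeActive = app.checkActive(window.location);
    if (shouldBeActive && !app.active) {
      app.active = true;
      app.lifecycle.mount();
    } else if (!shouldBeActive && app.active) {
      app.active = false;
      app.lifecycle.unmount();
    }
  });
}

// re-evaluate on browser navigation; pushState-based routing needs extra wiring
window.addEventListener('popstate', runActivator);
runActivator();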
Component Aggregation
Another possibility is component aggregation. This approach has several benefits; however, it still requires a common layer for mediation purposes. While we can use any (or at least most) frameworks to provide an aggregator component, in this example we will do it with a web component, just to illustrate the concept in pure JavaScript. Actually, we will use LitElement, which is a small abstraction on top, just to be a bit more brief.
The basic idea is to have a common component that can be used whenever we want to include “unknown” components from other microfrontends.
Consider the following code:
@customElement('product-page')
export class ProductPage extends LitElement {
render() {
return html`
<div>
<h1>My Product Page</h1>
<!-- ... -->
<component-reference name="recommendation"></component-reference>
<!-- ... -->
<component-reference name="catalogue"></component-reference>
</div>
`;
}
}
Here we created a new web component that should represent our product page. The page already comes with its own code; however, somewhere in this code we want to use other components coming from different microfrontends. We should not know where these components come from. Nevertheless, using an aggregator component (component-reference) we can still create a reference.
Let’s look at how such an aggregator may be implemented.
const componentReferences = {};
@customElement('component-reference')
export class ComponentReference extends LitElement {
@property() name = '';
render() {
const refs = componentReferences[this.name] || [];
const content = refs.map(r => `<${r}></${r}>`).join('');
return html([content]);
}
}
We still need to add registration capabilities.
window.registerComponent = (name, component) => {
  const refs = componentReferences[name] || [];
  componentReferences[name] = [...refs, component];
};
Obviously there is a lot left aside here:
- how to avoid collisions
- how to forward attributes / props accordingly
- robustness and reliability enhancements, e.g., reactivity when the references change
- further convenience methods
The list of missing features is long, but keep in mind that the code above should only show you the idea.
The diagram below shows how the microfrontends can share components.
Usage of this is as simple as:
@customElement('super-cool-recommender')
export class SuperCoolRecommender extends LitElement {
  render() {
    return html`<p>Recommender!</p>`;
  }
}
window.registerComponent('recommendation', 'super-cool-recommender');
Conclusion
There are many, many possible patterns to apply when loose coupling should be followed. In the end, though, you’ll need a common API. Whether that is the DOM or a different abstraction is up to you. Personally, I favor the centralized API for its sandboxing and mocking capabilities.
Using the provided patterns in a much more robust and elegant way can be done via Piral, which gives you microfrontends with siteless UIs.
Top comments (17)
This is a great article, thank you. Reading the comments, you don't need to defend yourself. Using window is fine for your examples and kept the focus on your point. When implementing, like you said, create your own object.
Thanks again.
Ow my, I smell trouble with those approaches. It looks like the window object has become the new playground...

The examples should just be treated as such. For most of these patterns you don't need to use window. Actually, a framework like Piral does not use any globals at all. Instead, each microfrontend exports a function that gets an object as argument. This is also shown in the article in a very rudimentary implementation.

There are other frameworks though, which only rely on globals. With these - I have exactly the same feeling ...
The microfrontend invention is a playground for bored FEDs anyway, so more 1999 patterns won't hurt ;-)
Most of the time, communication can be done with a WebComponent, properties/attributes, and custom events or a method on the WebComponent... not with global window magic.
I think you misunderstood the point.
Unfortunately, if you use WebComponents you already introduce "global window magic", as these need to be registered globally. That's actually far worse, as you'll need to have a proper naming scheme here. For instance, you cannot register the same name twice, so two microfrontends cannot both register a custom element my-dropdown. Going beyond web components, your custom events (as outlined in the article) need to be placed somewhere, too. Where do you add the listener here?

If you need decoupling you'll need a way to decouple. window (or any global) is one way - not the recommended way, but certainly a possibility and easy to illustrate.

Also, if you have a web component from microfrontend A hosted at #mife-a and another web component from microfrontend B hosted at #mife-b, how should they reach each other? They don't know each other, and as such one component cannot determine the attributes of the other.

I think your comment rather reflects that you did not really get in touch with the subject in full detail.
Why do you think passing down props and listening to events is not decoupling? It is an interface, and you listen to the events on the custom element itself. Clear and straightforward. How can you even communicate if you don't know the interface of the other one?

Name collision is an issue with any of the aforementioned solutions... you need a naming convention... if you have the same names, it hurts any of them.

Why does seeing something as over-architected make me look as if I haven't done micro frontends before?
You cannot pass down props (or attributes) because they are both in different areas and don't know each other (otherwise see the aggregator component - there you can do exactly what you want to do but in a safe and decoupled way).
Sry I've meant attributes. You can get really far with attributes and events if the teams can agree upon the interface and doesn't change it often.
Thank you for this article, the part about APIs exposed by each frontend is gold!
Thanks, great article.

Please fix the last paragraph: the link to Piral is broken. It should be https://github.com/smapiot/piral instead of https://github.com/piral.
Thanks! Fixed!
Can we get a working example of this article?
What do you want / expect? Everything in this article is pretty much working.
Thanks for the immediate reply.
The component aggregation part is confusing, so we are looking for a git repo that would help us understand it better. How does this aggregator integration work in a real project? I am looking for an example.
A real-world example is Extension components in the Piral framework: docs.piral.io/guidelines/tutorials...

See an example here: github.com/piral-samples/piral-mic...

Hope that helps!

Also, for extension components have a look at Dante's great article: dev.to/dantederuwe/my-experiences-...