Fate-sharing and micro-frontends

Fate-sharing is a system design principle under which a system composed of many parts either works as a whole when all of its parts work, or fails as a whole when any of its subsystems fails.

The principle comes from network design, where this property is essential for the reliability and debuggability of networks.

In Web frontend development, however, this property is highly undesirable, and it is increasingly becoming a problem for our ever-more-complex frontends developed by ever-growing teams and organizations.

In these large frontends, one mistake by a team member responsible for a relatively insignificant feature can result in a catastrophic performance regression or even a functional failure of the entire application. The more developers, the more mistakes, and the more frequent the regressions and outages.

The risks associated with fate-sharing create pressure to make distinct parts of the application independently deployable. We've seen this play out in the backend space, which has already largely moved from monolithic servers to microservice architectures for modern large-scale development.

The frontend community followed suit, and over the last few years we've seen rising interest in “micro-frontends” or MFEs — the frontend version of the microservice architecture.

The micro-frontend solutions that most of us are familiar with today are built on the client-side-first Gen 2 Web stack. The most popular variant is webpack Module Federation, but there are many other custom implementations of the idea using different tools and approaches.

In spite of sound goals and growing developer interest, the current micro-frontend solutions have not become a huge success story, because they solve fate-sharing at the cost of further increasing payload sizes in two ways:

  1. In an MFE, the JavaScript code graph of the whole application is broken up into independently deployable parts. This modularization, however, disables build-time global code optimizations such as tree-shaking and dead code elimination — fundamental optimizations the whole Gen 2 Web stack is built on. Without them, the JavaScript payload size grows out of control and the user experience deteriorates.

  2. Since many of the independently deployable parts have common dependencies, developers end up doing one of three things (see the configuration sketch after this list):

    • proactively managing version skew, which forces teams to coordinate releases within the federation and defeats the whole goal of release independence,

    • not managing version skew at all, and accepting inevitable and likely expensive production breakages, or

    • completely isolating the independently deployed parts, and accepting code duplication within the application payloads — an approach that Google Cloud Console uses to ensure reliability of independent releases of the 150+ modules that make up this Angular-based mega-application.
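
To make the second trade-off concrete, here is a minimal sketch of how one independently deployed part might be configured with webpack Module Federation. The remote name, exposed module, and version ranges are illustrative assumptions, not a recommended setup:

```ts
// webpack.config.ts of a hypothetical "checkout" remote. Everything listed in
// `exposes` becomes an entry point into this part's code graph, which is one
// reason global tree-shaking across the whole application is no longer possible.
import { container, type Configuration } from "webpack";

const config: Configuration = {
  plugins: [
    new container.ModuleFederationPlugin({
      name: "checkout",
      filename: "remoteEntry.js", // fetched by the host application at runtime
      exposes: {
        "./CheckoutPage": "./src/CheckoutPage",
      },
      shared: {
        // This is where version skew is (or isn't) managed: pinning a singleton
        // forces teams to coordinate upgrades across the federation, while
        // dropping the constraint means each remote ships its own copy.
        react: { singleton: true, requiredVersion: "^18.2.0" },
        "react-dom": { singleton: true, requiredVersion: "^18.2.0" },
      },
    }),
  ],
};

export default config;
```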

Frontend developers and architects have for years now tried to solve the independent-deployability problem in order to scale application development while preserving a unified user experience, only to face performance degradation, functional disruption, or no significant improvement in developer velocity.

Natural engineering instinct prompts us to break up these monolithic codebases or dysfunctional federations into many smaller apps, but doing so would often add unacceptable maintenance overhead.

Micro-frontends approach release independence from the deployment-time perspective: given a set of independently deployed modules, how can we compose them at runtime into a single application (typically a SPA)?
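
To illustrate what that runtime composition can look like independent of any particular MFE framework, here is a sketch of a shell application that loads independently deployed modules from a manifest. The manifest URL and the mount() contract are hypothetical, made up for this example:

```ts
// A shell SPA composing itself at runtime from independently deployed parts.
// Each team publishes the URL of its latest bundle in a shared manifest, so
// whatever is deployed right now is what gets loaded.
type RemoteModule = {
  mount: (element: HTMLElement) => void; // hypothetical shell/remote contract
};

async function loadRemote(name: string): Promise<RemoteModule> {
  const response = await fetch("/mfe-manifest.json"); // hypothetical manifest
  const manifest: Record<string, string> = await response.json();
  // A native dynamic import fetches the remote bundle over the network; the
  // shell needs no rebuild or redeploy when one of the parts changes.
  // (webpackIgnore tells webpack to leave this import to the browser.)
  return import(/* webpackIgnore: true */ manifest[name]);
}

loadRemote("checkout").then((checkout) => {
  checkout.mount(document.getElementById("app")!);
});
```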

An alternative approach, based on build-time composition, has been popularized by monorepos.

Monorepos have become increasingly popular, and many teams, from hobby developers to large corporations, have successfully adopted them, often in combination with Gen 2 Web frameworks.

Monorepos increase developer velocity, create conditions for higher code quality, and decrease the risk of fate-sharing by co-hosting several independently deployable applications in a single SCM repository, where these applications can share libraries as well as build, testing, and release infrastructure, and amortize their maintenance cost.
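
For concreteness, here is a sketch of what such a layout might look like; the repository structure, package, and app names are hypothetical:

```ts
// A hypothetical monorepo layout:
//
//   repo/
//     packages/design-system/   shared UI library, maintained once for everyone
//     packages/build-tools/     shared build, test, and release configuration
//     apps/storefront/          independently deployable app
//     apps/admin/               independently deployable app
//
// Inside apps/storefront, shared code is consumed as an ordinary workspace
// dependency and bundled at build time, so each app's build still sees its
// full code graph and optimizations such as tree-shaking keep working.
import { formatPrice } from "@acme/design-system"; // hypothetical workspace package

export function renderTotal(cents: number): string {
  return `Total: ${formatPrice(cents)}`;
}
```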

Monorepos don't fully achieve the goal of a unified user experience without a significant amount of coordination — try updating a header component or the look and feel of your design system across all SPAs within a monorepo, and timing it all for a big reveal that coincides with a product launch. In spite of this drawback, in my experience this approach is generally much more reliable than Gen 2-based micro-frontend solutions, and it is a pragmatic choice for today's mainstream development.

Not all frontends grow to the size where fate-sharing becomes a problem, but those that do rarely see this problem solved in a way that doesn't sacrifice user or developer experience. The client-side-heavy design that the current implementations are based on is to blame.

I remain optimistic that future implementations of MFEs, especially those using a server-first design, will be much more successful. In the meantime, I suggest that you stick with a frontend monolith or split your frontend into multiple apps developed in a monorepo.

PS: I'd like to express my gratitude to all the reviewers who provided me with valuable feedback on this post, especially Natalia Venditto.


Thank you for reading. Share this post with friends! 🖖🏻

Got feedback? Message me on Twitter!