Brett Hoyer for YugabyteDB Distributed PostgreSQL Database


First Steps to Building a Globally-Distributed Application

As software developers, we’re often prompted to learn new technologies, either by our employers or by our own curiosity. This endless learning is one of the primary reasons we got into this field to begin with. UI developers wish they had a deeper understanding of backend frameworks, and backend developers wish they could write CSS transitions and animations (no they don’t, but you get what I mean).

Throughout my own software journey, my desire to enhance my skills across the stack has sent me down a seemingly endless maze of blog posts, tutorials and instructional videos. While these mediums serve their purpose, I’m often left wanting to learn through my own explorations and failures to determine what’s “best”.

As such, I’ve started to build a new globally-distributed application called “The Largest River” that will certainly satisfy this desire. This blog series will highlight my discoveries, shortcomings, and everything in between as I work to complete this project.

The Project

Today’s application development landscape is drastically different from that of years past. We’re handling scalability in new and exciting ways, and serving traffic from all over the globe. This is what I want to focus on. How can I build a distributed application that will serve a global marketplace? We’ve all built more than our fair share of “to-do list” applications. This will not be one of them.

There are a few key aspects I’d like to highlight:

  • Serving traffic globally with low latency
  • Being resilient to potential zone or region outages
  • Properly adhering to data compliance laws (for instance, all EU user data must be stored in the EU)

While the precise features of the application are immaterial, the architecture is of primary importance. A lot of tools (and buzzwords) come to mind when trying to architect a modern web application. Assets can be served from a CDN to improve page load speed. A global load balancer can front all traffic, sending requests to the nearest server. Serverless functions and edge functions can be used to handle requests, eliminating the need to manage infrastructure altogether. Kubernetes can be deployed for container orchestration, networking and healing, amongst many other production-grade features. The list goes on.

In an attempt to walk before I run, I’ve decided to start with a relatively simple architecture.

Architecture Diagram

A React frontend sends traffic through an Nginx reverse proxy to VMs running in multiple regions. Running VMs in multiple regions (once properly load balanced) results in shorter round trips and allows us to reroute traffic in the event of a region outage. These VMs all run the same containerized Node.js process, which creates a connection to a YugabyteDB database. YugabyteDB is a PostgreSQL-compatible, highly available, distributed SQL database. If you’d like to spin up an always-free single-node cluster for yourself, it’s easy to do so.
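Because YugabyteDB is wire-compatible with PostgreSQL, the Node.js process can connect with the standard node-postgres (pg) driver. Here's a minimal sketch of that connection; the DB_* environment variable names are placeholders I'm using for illustration, not anything mandated by the project or by YugabyteDB.

```javascript
// db.js: a minimal connection sketch using node-postgres (pg).
// The DB_* environment variable names are placeholders chosen for this example.
const { Pool } = require("pg");

const pool = new Pool({
  host: process.env.DB_HOST,                 // address of a YugabyteDB node
  port: Number(process.env.DB_PORT) || 5433, // 5433 is YugabyteDB's default YSQL port
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  ssl: { rejectUnauthorized: false },        // adjust to match your cluster's certificates
  max: 10,                                   // cap connections opened by this Node.js process
});

// Simple query to verify the connection works end to end.
async function healthCheck() {
  const { rows } = await pool.query("SELECT now() AS server_time");
  console.log("Connected, server time:", rows[0].server_time);
}

module.exports = { pool, healthCheck };
```

The same driver code should work against a local PostgreSQL instance and a remote YugabyteDB cluster alike, which keeps the local and production setups close to one another.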

This architecture is intentionally a bit naive. It lets me demonstrate that serving traffic to a single database node in another region comes with extremely high latencies. Businesses have operated this way for many years, scaling their databases vertically, at the cost of network latency (amongst many other things). As I continue to iterate on this design, I’ll deploy a multi-zone, multi-region database, which will be more representative of a modern deployment. This will allow the system to tolerate both zone and region failures and to meet data compliance requirements, in addition to improving read and write latencies.

[Chaos gif: 99% of the development process.]

The Development Environment

I decided to use Docker and Docker Compose to simulate this distributed environment on my local machine. Containerization lets me easily manage and isolate dependencies, while also mirroring the production environment. With a single command, I’m able to spin up all of the processes locally, passing in the environment variables required to connect to my remote database. Additionally, I’m using volumes to share source code and persist data, which affords me the niceties expected of modern application development, such as client reloads and server restarts on file changes.
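As a rough illustration, a Compose file for this kind of setup might look something like the sketch below. The service names, ports, paths, and variables are placeholders I've chosen for the example rather than the project's actual configuration.

```yaml
# docker-compose.yml: a minimal sketch of the local environment.
# Service names, ports, paths, and variables are illustrative placeholders.
version: "3.8"

services:
  client:
    build: ./client              # React app
    ports:
      - "3000:3000"
    volumes:
      - ./client:/app            # mount source for client reloads
      - /app/node_modules        # keep dependencies installed in the container
  api:
    build: ./api                 # Node.js server
    ports:
      - "8000:8000"
    environment:
      DB_HOST: ${DB_HOST}        # connection details for the remote YugabyteDB cluster
      DB_PORT: ${DB_PORT}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      DB_NAME: ${DB_NAME}
    volumes:
      - ./api:/app               # mount source for server restarts on file changes
      - /app/node_modules
```

With a file like this in place, running docker compose up builds and starts both containers and passes the database connection details through to the Node.js process.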

The Production Environment

After countless hours of research and development, I’ve decided to run Container-Optimized OS on Google Compute Engine VMs. These machines run container images that I’ve pushed to Google Container Registry. As mentioned previously, this is helpful in that the same Docker images can be run locally and in production, with minimal differences in configuration.
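For reference, the push-and-deploy workflow looks roughly like this. The project ID, image name, VM name, and zone below are placeholders, not the project's real values.

```bash
# Build the Node.js image and push it to Google Container Registry.
# "my-project", "largest-river-api", and "us-east1-b" are placeholder names.
docker build -t gcr.io/my-project/largest-river-api:latest ./api
docker push gcr.io/my-project/largest-river-api:latest

# Create a Container-Optimized OS VM that pulls and runs the container on boot.
gcloud compute instances create-with-container api-vm-us-east \
  --zone=us-east1-b \
  --container-image=gcr.io/my-project/largest-river-api:latest
```

Repeating that last command in other zones and regions produces the multi-region VM layout described in the architecture above.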

Of course, this all sounds great, but how are the containers managed? How are they networked? In my career, I’ve rarely been faced with handling network traffic between multiple VMs and database nodes, so this learning curve is particularly steep. Thankfully, I’ve already made great progress (or so I think). I look forward to sharing my findings in future blog posts.

[Success gif: The 1% that makes it all worthwhile.]
