Donald Johnson

Building a Distributed Microservice in Rust

Overview

This project demonstrates how to split CPU-bound tasks (like prime factorization) across multiple Rust microservices:

  1. Load Balancer (Rust + Actix-Web)

    • Receives requests on http://localhost:9000/compute
    • Forwards them to healthy worker nodes
  2. Master Node (Optional)

    • If you have large tasks, you can chunk them out to multiple workers simultaneously
    • Listens on http://localhost:8081/distribute
  3. Worker Nodes (Rust + Rayon)

    • Actually perform the factorization (or any heavy logic)
    • Listen on http://localhost:8080/compute (additional workers listen on their own ports)
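The core of the worker is straightforward: factor each number into primes. The repository's actual handler wraps this in Actix-Web and parallelizes across inputs with Rayon; the sketch below is just the single-threaded trial-division idea, with a hypothetical function name:

```rust
// Hypothetical sketch of the worker's core computation: trial-division
// prime factorization. The real worker wraps logic like this in an
// Actix-Web handler and fans out over inputs with Rayon.
fn prime_factors(mut n: u64) -> Vec<u64> {
    let mut factors = Vec::new();
    let mut d: u64 = 2;
    while d * d <= n {
        while n % d == 0 {
            factors.push(d);
            n /= d;
        }
        d += 1;
    }
    if n > 1 {
        factors.push(n); // whatever remains is itself prime
    }
    factors
}

fn main() {
    println!("{:?}", prime_factors(1234567)); // [127, 9721]
}
```

Trial division is CPU-bound with essentially no shared state, which is exactly why this workload splits so cleanly across independent worker processes.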

Why This Is Cool

  • Scalable: Easily add more workers for heavier traffic.
  • Fault Tolerant: If a worker dies, the load balancer reroutes around it.
  • Modular: Each piece (LB, master, worker) can be tested and deployed independently.
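The fault-tolerance claim boils down to the load balancer skipping workers that fail health checks. Here's an illustrative model of that selection logic — round-robin over only the healthy workers. The struct and field names are hypothetical, not taken from the repository:

```rust
// Illustrative model of "reroute around dead workers": round-robin
// selection that skips any worker currently marked unhealthy.
// (Names are hypothetical; the repo's implementation may differ.)
struct WorkerPool {
    workers: Vec<(String, bool)>, // (worker URL, healthy?)
    next: usize,                  // round-robin cursor
}

impl WorkerPool {
    /// Return the next healthy worker's URL, or None if all are down.
    fn pick(&mut self) -> Option<&str> {
        let n = self.workers.len();
        for _ in 0..n {
            let i = self.next % n;
            self.next += 1;
            if self.workers[i].1 {
                return Some(self.workers[i].0.as_str());
            }
        }
        None // every worker failed its health check
    }
}

fn main() {
    let mut pool = WorkerPool {
        workers: vec![
            ("http://localhost:8080/compute".into(), true),
            ("http://localhost:8082/compute".into(), false), // simulated dead worker
        ],
        next: 0,
    };
    // The dead worker is skipped on every pick.
    assert_eq!(pool.pick(), Some("http://localhost:8080/compute"));
    assert_eq!(pool.pick(), Some("http://localhost:8080/compute"));
}
```

In a real deployment the `healthy` flag would be refreshed by a periodic health-check task rather than set by hand.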

Quickstart

1. Clone the Repository

git clone https://github.com/copyleftdev/prime_cluster.git
cd prime_cluster

2. Use the Makefile

We’ve included a Makefile so you can run everything with quick commands.

Key Makefile Targets

  • make build – Builds all components (worker, master, load_balancer)
  • make run-all – Runs everything locally in separate tmux panes
  • make docker-build – Builds Docker images for each service
  • make docker-run – Spins them up via Docker Compose

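If you're curious what those targets boil down to, here's a minimal sketch. The repository's real Makefile (and its crate names) may differ — treat the `-p` package names below as assumptions:

```makefile
# Hypothetical sketch -- the repository's actual Makefile may differ.
build:
	cargo build --release --workspace

run-load-balancer:
	cargo run --release -p load_balancer

run-master:
	cargo run --release -p master

run-worker:
	cargo run --release -p worker

docker-build:
	docker compose build

docker-run:
	docker compose up -d
```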
3. Local Setup (No Docker)

  1. Build Everything
   make build

This runs cargo build --release for each microservice.

  2. Run

    • Option A: Individual
     make run-load-balancer
     make run-master
     make run-worker

     or

    • Option B: All Together (requires tmux):
     make run-all

  At this point:
    • Load Balancer → http://localhost:9000/compute
    • Master → http://localhost:8081/distribute
    • Worker → http://localhost:8080/compute
  3. Test a Request
   curl -X POST -H "Content-Type: application/json" \
        -d '{"numbers":[1234567, 9876543]}' \
        http://localhost:9000/compute

The load balancer picks a healthy worker, which returns the prime factors.
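The exact response schema isn't shown in this post; assuming the worker maps each input number to its list of prime factors, a response might look something like:

```json
{
  "results": [
    { "number": 1234567, "factors": [127, 9721] },
    { "number": 9876543, "factors": [3, 227, 14503] }
  ]
}
```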


4. Docker Approach

If you prefer Docker:

  1. Build Images
   make docker-build
  2. Run Services
   make docker-run
  3. Stop
   make docker-stop

Afterward, the same endpoints apply (:9000 for LB, :8081 for Master, :8080 for Worker).


5. Load Testing (Optional)

Use our k6 script to see how the system handles traffic. In load_tester/:

cd load_tester
k6 run perf_load.js

You’ll get metrics on how fast the cluster factorizes numbers under various load scenarios.


That’s It!

You’ve now spun up a distributed microservice in Rust using this code from github.com/copyleftdev/prime_cluster. Feel free to:

  • Modify the worker logic (it can do any CPU-heavy or GPU-accelerated task).
  • Scale by adding more worker containers in the docker-compose.yml.
  • Expand the master node to orchestrate even more complex tasks or parallel pipelines.
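Scaling out by editing docker-compose.yml can be as simple as declaring another worker service on a different host port. A hypothetical fragment (service names and paths are illustrative, not copied from the repo):

```yaml
# Hypothetical fragment -- the real docker-compose.yml may differ.
# If host ports aren't pinned, `docker compose up --scale worker=3`
# can replicate a single worker service instead.
services:
  worker:
    build: ./worker
    ports:
      - "8080:8080"
  worker2:
    build: ./worker
    ports:
      - "8082:8080"   # second worker on a different host port
```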

Have fun exploring this approach to building modular, scalable systems in Rust!
