Jackson Kasi

Building a Scalable, Cost-Effective Live GPS Tracking System for 300+ Trucks: Lessons from a Logistics Tech Journey

When managing an enterprise logistics fleet of more than 300 trucks, we faced a thrilling challenge: delivering real-time GPS tracking without breaking the bank. In this article, I share how we rethought our architecture, overcame technical hurdles, and handled 4 million monthly requests for around $20/month—all while maintaining an average latency of just 0.17 seconds.


The Challenge: Balancing Cost and Performance

Initially, we evaluated AWS serverless offerings (e.g., Lambda, DynamoDB) to power our live tracking system. However, after running the numbers, it was clear that the cost of processing millions of requests was unsustainable—especially when each truck’s data ranged from 8 to 25 MB per booking cycle.

A major hurdle was Upstash Redis’s pricing model. With a pay-as-you-go plan that charges $0.2 per 100K commands and enforces a strict 1 MB request size limit per command, fetching large GPS data chunks (often over 10 MB per truck) would trigger extra charges. Given our target monthly budget of $15–$20, we had to engineer a solution that could handle massive data volumes efficiently.
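A quick back-of-the-envelope check shows why: 4 million plain commands at $0.2 per 100K works out to only about $8/month. The real risk was the oversized payloads, since every fetch of a 10 MB record exceeding the 1 MB request limit would incur extra charges and multiply that baseline.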


The Approach: Leveraging Cloudflare Workers & Upstash Redis

To meet our performance and cost targets, we pivoted to a serverless stack optimized for edge computing and low-cost data operations:

  • Cloudflare Workers:

    We built our custom GPS tracking logic on Cloudflare Workers. Deploying our server-side code at the edge allowed us to process requests quickly and at a fraction of the cost of traditional cloud services. (A minimal sketch of this setup follows after this list.)

    For detailed pricing information, refer to the Cloudflare Workers pricing page.

  • Upstash Redis:

    For real-time data storage, we chose Upstash Redis due to its multi-region support (with a Singapore node close to our client) and its attractive pay-as-you-go pricing model.

    Check out the Upstash Redis pricing for more details.

    However, the 1 MB per-command limit posed a significant challenge for handling large GPS data records.
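Before diving into the data-size problem, here is a minimal sketch of how the two pieces fit together. This is illustrative rather than our production code: it assumes the @upstash/redis package and two Worker environment bindings, and the /ping route and key names are made up for the example.

```typescript
// Minimal sketch: a Cloudflare Worker that ingests GPS pings into Upstash Redis.
// Assumes the @upstash/redis package and two environment bindings; the route
// and key names are illustrative.
import { Redis } from "@upstash/redis/cloudflare";

export interface Env {
  UPSTASH_REDIS_REST_URL: string;
  UPSTASH_REDIS_REST_TOKEN: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const redis = Redis.fromEnv(env);
    const url = new URL(request.url);

    // POST /ping/<truckID>: append one GPS point to that truck's list.
    if (request.method === "POST" && url.pathname.startsWith("/ping/")) {
      const truckId = url.pathname.split("/")[2];
      const point = await request.json(); // e.g. { lat, lng, ts }
      await redis.rpush(`points:${truckId}`, JSON.stringify(point));
      return new Response("ok");
    }

    return new Response("not found", { status: 404 });
  },
};
```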


Overcoming the Data Size Hurdle: Chunking and Metadata

To avoid incurring extra costs for oversized commands, we split each truck’s GPS data into smaller chunks. Instead of sending a single large command that could exceed the 1 MB limit, we:

  • Divided the data into 800 KB chunks.
  • Stored these chunks under keys formatted like {truckID}:chunk:<chunkNumber>.
  • Maintained a metadata key (e.g., {truckID}-meta or a shortened version like m:<truckID>) to track the number of chunks and total records.

This strategy allowed us to fetch the complete dataset through multiple smaller commands, each well within the 1 MB limit, thereby avoiding additional charges and ensuring smooth performance.
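In code, the write path looks roughly like this. A sketch under assumptions: the 800 KB threshold is measured on the serialized JSON, and the helper names (toChunks, storeTruckData) are hypothetical.

```typescript
// Sketch: split a truck's GPS records into ~800 KB chunks stored under
// {truckID}:chunk:<n>, with chunk/record counts in a metadata hash (m:<truckID>).
// Helper names are hypothetical; sizes are approximated via string length.
import { Redis } from "@upstash/redis/cloudflare";

const CHUNK_BYTES = 800 * 1024; // stays well under the 1 MB per-command limit

function toChunks(records: object[]): string[] {
  const chunks: string[] = [];
  let current: string[] = [];
  let size = 0;
  for (const record of records) {
    const json = JSON.stringify(record);
    if (size + json.length > CHUNK_BYTES && current.length > 0) {
      chunks.push(`[${current.join(",")}]`); // close the current chunk
      current = [];
      size = 0;
    }
    current.push(json);
    size += json.length + 1; // +1 for the joining comma
  }
  if (current.length > 0) chunks.push(`[${current.join(",")}]`);
  return chunks;
}

async function storeTruckData(redis: Redis, truckId: string, records: object[]) {
  const chunks = toChunks(records);
  // One SET per chunk; each command carries well under 1 MB.
  for (let i = 0; i < chunks.length; i++) {
    await redis.set(`${truckId}:chunk:${i}`, chunks[i]);
  }
  // Metadata as a Redis hash (more on this choice below).
  await redis.hset(`m:${truckId}`, { chunks: chunks.length, records: records.length });
}
```

Reads mirror this: fetch the metadata key first, then issue one GET per chunk, each safely under the limit.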


Further Optimizations: Reducing Redis Command Overhead

Even with data chunking, the increased number of Redis commands (such as GET, RPUSH, and SET) could have driven up costs. In our earlier implementation, Upstash executed approximately 4.3 million commands per month (about 2,347,442 writes and 2,001,269 reads). After further optimizations, we cut command execution by 50–60%, which significantly reduced our cost overhead.

Here are the key strategies we adopted:

  1. Lua Scripting for Atomic Transactions:

    By combining multiple operations into a single Lua script, we reduced the number of round-trips to Redis. This atomic approach—merging GET, RPUSH, and SET commands—cut our command count by 2–3 per request (see the sketch after this list).

  2. Key Shortening:

    We minimized key sizes (e.g., using m:<plate_number> instead of <plate_number>-meta) to reduce payload sizes, improve performance, and lower bandwidth usage.

  3. Using Hashes Over JSON Strings:

    Storing metadata as Redis hashes instead of full JSON strings reduced serialization overhead and improved read/write speeds.

  4. Batching Writes:

    Where possible, we batched multiple GPS updates into a single RPUSH command. This consolidation further decreased the total number of commands executed.

  5. Local Caching of Metadata:

    To avoid repetitive GET commands, we implemented a local in-memory cache for metadata. This reduced redundant Redis lookups and contributed to the overall command reduction (a sketch of this cache appears below).
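Here is one way the Lua approach can look. This is a hedged sketch, not our exact script: it assumes live points are appended to the current chunk as a Redis list and that the metadata hash from earlier (m:<truckID>) holds the counters.

```typescript
// Sketch: one EVAL replaces a GET + RPUSH + SET sequence. The script reads the
// current chunk index, pushes a batch of points onto that chunk's list, and
// bumps the record counter, all in a single atomic round-trip.
// Key and field names are illustrative.
import { Redis } from "@upstash/redis/cloudflare";

const APPEND_SCRIPT = `
local chunk = redis.call('HGET', KEYS[1], 'chunks') or '0'
local listKey = ARGV[1] .. ':chunk:' .. chunk
for i = 2, #ARGV do
  redis.call('RPUSH', listKey, ARGV[i])
end
redis.call('HINCRBY', KEYS[1], 'records', #ARGV - 1)
return chunk
`;

async function appendPoints(redis: Redis, truckId: string, points: object[]) {
  // One command on the wire instead of three or more per request.
  return redis.eval(
    APPEND_SCRIPT,
    [`m:${truckId}`],                                   // KEYS[1]: metadata hash
    [truckId, ...points.map((p) => JSON.stringify(p))], // ARGV: truckID + points
  );
}
```

Note that this also covers the batching point above: many GPS updates travel in a single ARGV array, so one script call replaces a whole burst of individual writes.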

With these optimizations, our system’s total command execution dropped dramatically—by roughly 50–60%—keeping our overall monthly cost (Cloudflare Workers plus Upstash Redis) within the $15–$21 range.
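As for the metadata cache from point 5, it can be as simple as a module-scope map. On Cloudflare Workers, module-level state survives across requests served by the same isolate, though isolates are ephemeral, so this is strictly a best-effort cache (the TTL and names below are illustrative):

```typescript
import { Redis } from "@upstash/redis/cloudflare";

// Best-effort, per-isolate cache for metadata hashes. Entries expire after a
// short TTL so stale chunk counts don't linger. Names and TTL are illustrative.
const metaCache = new Map<string, { value: Record<string, unknown>; expires: number }>();
const TTL_MS = 5_000;

async function getMeta(redis: Redis, truckId: string): Promise<Record<string, unknown>> {
  const key = `m:${truckId}`;
  const cached = metaCache.get(key);
  if (cached && cached.expires > Date.now()) {
    return cached.value; // cache hit: no Redis command issued
  }
  const value = (await redis.hgetall(key)) ?? {};
  metaCache.set(key, { value, expires: Date.now() + TTL_MS });
  return value;
}
```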


Key Metrics and Results

  • 4M+ requests/month handled with an average response time of 0.17 seconds.
  • Cloudflare Workers cost: Roughly $5/month.
  • Upstash Redis cost: Approximately $15–$20/month for the roughly 4.3M commands before optimization, with command execution dropping 50–60% after the optimizations above.
  • Overall system performance: Fast, scalable, and resilient—even with dynamic changes in request volume.

Cloudflare Workers pricing and performance metrics, demonstrating cost-effective execution for high-volume requests.

Upstash Redis dashboard displaying command execution statistics and pricing breakdown, showcasing significant cost optimization after query reductions.


Lessons Learned

  1. Cost Management Requires Creativity:

    Sometimes, the best solution isn’t about choosing the highest-performing provider—it’s about engineering smart workarounds to reduce costs without sacrificing performance.

  2. Keep It Simple:

    A straightforward, well-thought-out architecture often outperforms overly complex designs. Focus on delivering essential functionality efficiently.

  3. Atomic Operations Are Game Changers:

    Using Lua scripting to bundle commands is a powerful technique that can dramatically reduce operational overhead and lower costs.

  4. Smart Data Design Saves Money:

    Enforcing data chunking and leveraging efficient data structures (like hashes) can avoid hidden costs and streamline operations.


Conclusion

For developers tackling real-time data systems in cost-sensitive environments, striking the right balance between performance and cost is critical. By pivoting to a serverless edge solution with Cloudflare Workers and optimizing our use of Upstash Redis, we built a scalable, high-performance live GPS tracking system without a hefty price tag.

This journey taught me that innovation isn’t just about technical prowess—it’s also about smart cost management and creative problem-solving. I hope this experience inspires you to explore innovative, cost-effective architectures for your own projects.

What’s your experience with cost-optimized systems? Let’s discuss in the comments! 😉
