From Monolith to Domains: Architecting a Scalable Solution
In our previous article, we introduced the motivation behind redesigning our image storage solution and the transition to a serverless-first approach. This follow-up focuses on the architectural challenges we faced while implementing the solution and how we overcame them.
Challenge 1: Splitting the Service by Domain
Our legacy system was a tightly coupled monolith that was difficult to scale or extend. To modernize it, we began by breaking the solution into domains and APIs, each handling specific responsibilities:
- File API: Responsible for handling image uploads, downloads, and storage.
- Project API: Facilitating project save/load workflows for users working on designs like photo books.
- Scheduler Events: Automating processes like project expiration, cleanup, and background tasks.
Each domain directly tied into the workflows outlined in the previous article, ensuring that the solution remained cohesive yet modular. This separation allowed us to independently scale, test, and enhance each domain.
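To make the split concrete, here is a minimal sketch of how each domain can own its own stack, Lambda handler, and API endpoint. It uses AWS CDK in TypeScript purely for illustration; the stack names, asset paths, and handler names are placeholders, not our actual code.

```typescript
// Minimal CDK sketch: one stack per domain, each with its own Lambda and API,
// so domains can be deployed, scaled, and tested independently.
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import { Construct } from 'constructs';

class FileApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // One Lambda per domain; upload/download logic lives only in the File domain.
    const handler = new lambda.Function(this, 'FileHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist/file-api'), // hypothetical build output path
    });

    // Each domain exposes its own API Gateway endpoint.
    new apigateway.LambdaRestApi(this, 'FileApi', { handler });
  }
}

// The Project API and Scheduler stacks follow the same pattern.
const app = new App();
new FileApiStack(app, 'FileApiStack');
```

Keeping each domain in its own stack also keeps deployments small: a change to the scheduler never forces a redeploy of the File API.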
Challenge 2: Lambda Scalability and Relational Database Connection Bottlenecks
To implement these APIs, we started with a straightforward and efficient architecture:
- API Gateway: Serving as the entry point for client requests.
- AWS Lambda: Running serverless functions to process requests and business logic.
- Aurora Serverless v1: Providing relational database capabilities for data storage.
Amazon Aurora Serverless v1 with the Data API was chosen for its PostgreSQL compatibility and because it let us avoid deploying Lambdas into a VPC or running RDS Proxy. However, the Data API's limit of 1,000 requests per second (see AWS limits) became a bottleneck as usage grew. Aurora Serverless v2 was already available at the time, but it did not yet support the Data API (that support only arrived in December 2023). To overcome these limitations, we adopted DynamoDB for its effectively unlimited scalability and seamless serverless integration.
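To show why the Data API was attractive in the first place, here is a rough sketch of a query from a Lambda handler. The ARNs, database, and table names are placeholders: each statement is a plain HTTPS call, so there is no connection pool and no VPC attachment to manage, but every call counts against the shared requests-per-second quota.

```typescript
// Sketch of an Aurora Serverless v1 Data API call (hypothetical ARNs and schema).
import { RDSDataClient, ExecuteStatementCommand } from '@aws-sdk/client-rds-data';

const client = new RDSDataClient({});

export async function getProject(projectId: string) {
  const result = await client.send(
    new ExecuteStatementCommand({
      resourceArn: process.env.CLUSTER_ARN!, // Aurora Serverless v1 cluster ARN
      secretArn: process.env.SECRET_ARN!,    // credentials stored in Secrets Manager
      database: 'projects',
      sql: 'SELECT id, name, updated_at FROM project WHERE id = :id',
      parameters: [{ name: 'id', value: { stringValue: projectId } }],
    }),
  );
  return result.records;
}
```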
Solution: Switching to DynamoDB
To resolve the database bottleneck, we migrated to Amazon DynamoDB, which provided:
- Effectively unlimited scalability: Handling spikes in concurrent requests without connection limits or manual provisioning.
- High availability: Multi-AZ replication out of the box, with consistently low latency.
This shift significantly improved our system's performance and ensured the APIs could absorb rapid spikes in traffic. It did, however, require a significant learning effort, since we had to internalize core NoSQL concepts: modeling 1-to-n and n-to-m relationships and applying single-table design patterns, both fundamental to building efficient, scalable NoSQL solutions.
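As a rough illustration of the single-table idea (the table name and key schema below are illustrative, not our production model): a project and its images share one partition key, so a single Query returns the whole 1-to-n aggregate without joins.

```typescript
// Single-table sketch: project metadata and image items live in the same table,
// distinguished by the sort key, and are fetched together with one Query.
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand, QueryCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = 'image-storage'; // assumed table name

export async function addImage(projectId: string, imageId: string, s3Key: string) {
  await ddb.send(new PutCommand({
    TableName: TABLE,
    Item: {
      PK: `PROJECT#${projectId}`, // all items of one project share this partition
      SK: `IMAGE#${imageId}`,     // sort key encodes the item type
      s3Key,
    },
  }));
}

export async function getProjectWithImages(projectId: string) {
  // One request returns the project item and all of its image items.
  const result = await ddb.send(new QueryCommand({
    TableName: TABLE,
    KeyConditionExpression: 'PK = :pk',
    ExpressionAttributeValues: { ':pk': `PROJECT#${projectId}` },
  }));
  return result.Items;
}
```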
Challenge 3: Choosing the Right API Gateway
Our initial implementation used HTTP APIs on API Gateway, which offered strong performance at a lower cost. However, as we integrated more advanced features, particularly around user authorization, we ran into limitations.
To address these, we switched to REST APIs, which provided:
- Enhanced integration with custom authorizers: Streamlining secure access to resources.
- Greater flexibility in routing: Supporting complex workflows between domains.
The switch allowed us to maintain the necessary performance while adding advanced features for authentication and access control. If you're deciding between REST APIs and HTTP APIs for your project, refer to this AWS documentation for a detailed comparison of their capabilities and use cases.
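For context, a REST API custom authorizer is simply a Lambda that inspects the incoming credentials and returns an IAM policy. The sketch below assumes a token-based check; verifyToken is a placeholder for whatever validation your identity provider requires, not our actual implementation.

```typescript
// Sketch of a token-based custom authorizer for an API Gateway REST API.
import { APIGatewayTokenAuthorizerEvent, APIGatewayAuthorizerResult } from 'aws-lambda';

async function verifyToken(token: string): Promise<{ userId: string } | null> {
  // Placeholder: validate a JWT, call an identity service, etc.
  return token ? { userId: 'user-123' } : null;
}

export async function handler(
  event: APIGatewayTokenAuthorizerEvent,
): Promise<APIGatewayAuthorizerResult> {
  const user = await verifyToken(event.authorizationToken ?? '');

  return {
    principalId: user?.userId ?? 'anonymous',
    policyDocument: {
      Version: '2012-10-17',
      Statement: [{
        Action: 'execute-api:Invoke',
        Effect: user ? 'Allow' : 'Deny', // an explicit Deny surfaces as a 403
        Resource: event.methodArn,
      }],
    },
    // Context values are passed through to the backend integration.
    context: user ? { userId: user.userId } : {},
  };
}
```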
The Outcome
By addressing these challenges step-by-step, our architecture evolved into a robust, scalable solution that could support our workflows efficiently:
- Domains: Independently scalable APIs for File, Project, and Scheduler domains.
- Database: A highly scalable and reliable backend powered by DynamoDB.
- API Gateway: REST APIs for secure and feature-rich routing.
This architecture empowered us to meet our business goals while preparing for future feature integrations and user demands.
Conclusion
The journey from a monolithic system to a domain-driven architecture was not without its hurdles. However, each challenge—from database scalability to API design—provided valuable insights that strengthened our solution. In the next article, we’ll explore how we leveraged this architecture to optimize workflows and introduce new features seamlessly.
Stay tuned!