DEV Community

Paulo Torres
Building Scalable E-commerce Systems: A Journey Through Microservices with ASP.NET Core and Kubernetes

The Challenge

Our monolithic application served us well initially, handling about 10,000 daily users. However, as our user base grew to 100,000+ daily active users, we started experiencing several issues:

  • Deployment bottlenecks where updating a single feature required deploying the entire application
  • Scaling difficulties when different components needed different resources
  • Development team conflicts as multiple teams worked on the same codebase
  • Difficulty implementing new technologies due to tight coupling

Our Microservices Solution

Architecture Overview

We broke down our monolith into these core microservices:

  1. Product Catalog Service

    • Handles product information, categories, and search
    • Built with ASP.NET Core 8.0
    • Uses MongoDB for flexible product attributes
    • Implements Elasticsearch for full-text search
  2. Order Processing Service

    • Manages order lifecycle and payment processing
    • Uses ASP.NET Core minimal APIs
    • SQL Server with Entity Framework Core
    • Implements Saga pattern for distributed transactions
  3. Inventory Service

    • Real-time inventory management
    • Event-driven architecture using Azure Service Bus
    • Redis caching for high-performance stock checks
  4. User Management Service

    • Handles authentication and user profiles
    • IdentityServer4 integration
    • PostgreSQL database
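To make the service shapes above concrete, here is a simplified sketch of what an endpoint surface in the Order Processing Service might look like with ASP.NET Core minimal APIs. This is illustrative only: `IOrderRepository`, `IOrderService`, and `CreateOrderRequest` are placeholder names, not our production types.

```csharp
// Sketch of a minimal-API surface for the Order Processing Service.
// The repository/service interfaces are hypothetical placeholders.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Look up a single order; 404 when it does not exist
app.MapGet("/orders/{id:guid}", async (Guid id, IOrderRepository orders) =>
    await orders.FindAsync(id) is { } order
        ? Results.Ok(order)
        : Results.NotFound());

// Create an order; the service kicks off the saga described below
app.MapPost("/orders", async (CreateOrderRequest request, IOrderService service) =>
{
    var order = await service.CreateAsync(request);
    return Results.Created($"/orders/{order.Id}", order);
});

app.Run();
```

Minimal APIs keep the request-handling code close to the route definitions, which suited a service whose surface is small and whose complexity lives in the saga orchestration rather than the HTTP layer.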

Infrastructure Implementation

# Example Kubernetes deployment for Product Catalog Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-catalog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: product-catalog
  template:
    metadata:
      labels:
        app: product-catalog
    spec:
      containers:
      - name: product-catalog
        image: techmart/product-catalog:1.0
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: "Production"
        - name: ConnectionStrings__MongoDB
          valueFrom:
            secretKeyRef:
              name: mongodb-secrets
              key: connection-string
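A Deployment alone is not reachable from other services, and a fixed replica count leaves scaling manual. The following companion Service and HorizontalPodAutoscaler are a sketch of how we exposed and autoscaled the Product Catalog Service; the exact thresholds shown here are illustrative, not our production values.

```yaml
# Hypothetical companion Service and autoscaler for the deployment above
apiVersion: v1
kind: Service
metadata:
  name: product-catalog
spec:
  selector:
    app: product-catalog
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: product-catalog
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: product-catalog
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```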

Inter-Service Communication

We implemented both synchronous and asynchronous communication patterns:

  1. Synchronous Communication

    • REST APIs for direct service-to-service communication
    • gRPC for performance-critical operations
    • API Gateway pattern using Ocelot
  2. Asynchronous Communication

    • Event-driven architecture using Azure Service Bus
    • Dead letter queues for failed messages
    • Event sourcing for order processing
// Example of handling events in the Order Processing Service
public class OrderCreatedEventHandler : IEventHandler<OrderCreatedEvent>
{
    private readonly IInventoryService _inventoryService;
    private readonly IMessageBus _messageBus;

    public OrderCreatedEventHandler(
        IInventoryService inventoryService,
        IMessageBus messageBus)
    {
        _inventoryService = inventoryService;
        _messageBus = messageBus;
    }

    public async Task HandleAsync(OrderCreatedEvent @event)
    {
        try
        {
            // Reserve inventory for the new order
            var inventoryReserved = await _inventoryService.ReserveInventoryAsync(
                @event.OrderId,
                @event.Products
            );

            if (inventoryReserved)
            {
                // Continue the saga: tell downstream services the stock is held
                await _messageBus.PublishAsync(new InventoryReservedEvent(@event.OrderId));
            }
            else
            {
                // Insufficient inventory: fail the order
                await _messageBus.PublishAsync(new OrderFailedEvent(
                    @event.OrderId,
                    "Insufficient inventory"
                ));
            }
        }
        catch (Exception ex)
        {
            // Publish a failure event so compensating transactions can run
            await _messageBus.PublishAsync(new OrderFailedEvent(
                @event.OrderId,
                ex.Message
            ));
        }
    }
}
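On the synchronous side, the Ocelot API gateway is driven by a JSON routing file. The fragment below is a sketch of how a route to the Product Catalog Service might be declared; the host names and base URL are placeholders (recent Ocelot versions use the `Routes` key shown here, while older releases used `ReRoutes`).

```json
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/products/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        { "Host": "product-catalog", "Port": 80 }
      ],
      "UpstreamPathTemplate": "/products/{everything}",
      "UpstreamHttpMethod": [ "GET" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "https://api.example.com"
  }
}
```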

Monitoring and Observability

We implemented comprehensive monitoring using:

  • Prometheus for metrics collection
  • Grafana for visualization
  • Elasticsearch, Fluentd, and Kibana (EFK stack) for log aggregation
  • Jaeger for distributed tracing
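For Prometheus to scrape a service, the service has to expose a metrics endpoint. The sketch below shows one way to do that in ASP.NET Core, assuming the `prometheus-net.AspNetCore` package; the custom counter name is illustrative, not one of our real metrics.

```csharp
using Prometheus;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Record request duration and status codes for every routed endpoint
app.UseHttpMetrics();

// Hypothetical custom counter for cache misses on stock checks
var cacheMisses = Metrics.CreateCounter(
    "inventory_cache_misses_total",
    "Stock checks that missed the Redis cache.");

// Expose /metrics in the Prometheus text format for scraping
app.MapMetrics();

app.Run();
```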

Service Mesh Implementation

We adopted Istio as our service mesh to handle:

  • Traffic management
  • Security
  • Observability
  • Load balancing
  • Circuit breaking
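Circuit breaking in Istio is configured declaratively rather than in application code. The DestinationRule below is a sketch of the kind of policy we applied; the specific thresholds are illustrative, and the right values depend on each service's traffic profile.

```yaml
# Hypothetical circuit-breaking policy for the product-catalog service
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: product-catalog
spec:
  host: product-catalog
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
      maxEjectionPercent: 50
```

With outlier detection, pods that keep returning 5xx responses are temporarily ejected from the load-balancing pool, so one unhealthy instance cannot drag down callers.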

Lessons Learned

  1. Start Small
    Rather than migrating everything at once, we started with non-critical services. This allowed us to learn and adjust our approach without risking core business functions.

  2. Data Management is Critical
    We spent significant time planning our data architecture. Each service owns its data, and we implemented careful boundaries to maintain data consistency.

  3. Automation is Essential
    We invested heavily in CI/CD pipelines using Azure DevOps, automating everything from build to deployment:

# Example Azure DevOps pipeline
trigger:
  branches:
    include:
    - main
    - feature/*

pool:
  vmImage: 'ubuntu-latest'

variables:
  dockerRegistry: 'container-registry'
  imageRepository: 'product-catalog'
  tag: '$(Build.BuildId)'

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    steps:
    - script: dotnet build --configuration Release
    - script: dotnet test
    - task: Docker@2
      inputs:
        containerRegistry: $(dockerRegistry)
        repository: $(imageRepository)
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(tag)
          latest
  4. Performance Optimization
    We implemented various strategies to maintain high performance:

  • Caching with Redis
  • Message queuing for asynchronous operations
  • Horizontal scaling based on metrics
  • CDN integration for static content
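The Redis caching above followed a cache-aside pattern for hot reads such as stock checks. Here is a simplified sketch using the standard `IDistributedCache` abstraction; `IStockRepository` and `StockLevel` are placeholder names for illustration.

```csharp
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;

// Cache-aside stock lookup: try Redis first, fall back to the database.
// IStockRepository and StockLevel are hypothetical placeholder types.
public class CachedStockService
{
    private readonly IDistributedCache _cache;
    private readonly IStockRepository _repository;

    public CachedStockService(IDistributedCache cache, IStockRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public async Task<StockLevel> GetStockAsync(string sku)
    {
        // Cache hit: deserialize and return without touching the database
        var cached = await _cache.GetStringAsync($"stock:{sku}");
        if (cached is not null)
            return JsonSerializer.Deserialize<StockLevel>(cached)!;

        // Cache miss: load from the database and populate the cache
        var stock = await _repository.GetStockAsync(sku);

        // A short TTL keeps stock counts close to real time
        await _cache.SetStringAsync(
            $"stock:{sku}",
            JsonSerializer.Serialize(stock),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(30)
            });

        return stock;
    }
}
```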

Results

After six months of gradual migration:

  • Deployment frequency increased from bi-weekly to daily
  • System availability improved from 99.9% to 99.99%
  • Average response time decreased by 40%
  • Development team productivity increased by 35%

Future Improvements

We're currently working on:

  1. Implementing chaos engineering practices
  2. Expanding our API gateway capabilities
  3. Enhancing our monitoring with AI-powered anomaly detection
  4. Implementing blue-green deployments for zero-downtime updates
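One way to realize blue-green deployments on Kubernetes is to run two Deployments side by side and flip a label selector on the Service. This is a sketch of the approach we are evaluating, not a finished rollout strategy:

```yaml
# Hypothetical blue-green switch: the Service routes to the "blue" pods;
# changing version to "green" cuts traffic over once the new version passes checks.
apiVersion: v1
kind: Service
metadata:
  name: product-catalog
spec:
  selector:
    app: product-catalog
    version: blue   # flip to "green" to switch traffic
  ports:
  - port: 80
    targetPort: 80
```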

Conclusion

Migrating to microservices was a challenging but rewarding journey. The key to our success was taking an incremental approach, maintaining strong team communication, and focusing on automation and observability from the start.

Remember that microservices aren't a silver bullet: they solve specific problems but also introduce complexity. Ensure your organization has the technical maturity and a real need for microservices before embarking on this journey.
