
Avoiding API Gateway’s integrations hard limit: scaling serverless architectures efficiently

Avoid AWS API Gateway’s integration limit with efficient multi-API Gateway setups. Learn how to scale serverless projects!

Last week, I was coding a new feature. It was looking good, until I had to deploy: the AWS CLI yelled at me and refused to deploy my stack with the following error: CREATE_FAILED: HttpApiIntegration (AWS::ApiGatewayV2::Integration) Maximum number of Integrations for this API has been reached. Has this ever happened to you? If it has not, and you are using AWS, it might still be worth running a quick sanity check on your architecture.

AWS combined with API Gateway is widely used for scale-up apps. While API Gateway can handle a lot of routes (there is a 300-route quota that can be increased), it has a lesser-known but critical hard limit of 300 integrations per API Gateway instance.

An API Gateway integration is the connection between an API Gateway and a backend service, such as a Lambda function, an HTTP endpoint, or an AWS service. It is what allows the API Gateway to route incoming requests to the appropriate service for processing and return responses to the client. An endpoint can have only one integration. Here is a great article about integrations.

In a non-monolithic serverless architecture, where every Lambda function represents a different endpoint, it’s easy to hit this limit once your app grows. This was exactly what happened to me in a large-scale serverless project using the Serverless Framework.
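
If you want to run that sanity check yourself, counting integrations is straightforward. Here is a minimal sketch, assuming the AWS SDK for JavaScript v3 (@aws-sdk/client-apigatewayv2) and configured credentials; the API id below is a placeholder:

import {
  ApiGatewayV2Client,
  GetIntegrationsCommand,
} from '@aws-sdk/client-apigatewayv2';

// Counts the integrations attached to an HTTP API by paginating
// through GetIntegrations until no NextToken is returned
const countIntegrations = async (apiId: string): Promise<number> => {
  const client = new ApiGatewayV2Client({});
  let count = 0;
  let nextToken: string | undefined;
  do {
    const page = await client.send(
      new GetIntegrationsCommand({ ApiId: apiId, NextToken: nextToken }),
    );
    count += page.Items?.length ?? 0;
    nextToken = page.NextToken;
  } while (nextToken);
  return count;
};

// 'a1b2c3d4' is a hypothetical API id; warn well before the hard limit
countIntegrations('a1b2c3d4').then((count) => {
  if (count > 250) console.warn(`${count}/300 integrations used!`);
});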

The challenge: reaching API Gateway’s 300 integration limit

Let’s represent my project as an online library, where users can read, buy, lend and comment on books. The backend is split into two services, users and books, representing the core features.

Initial architecture

Our architecture, typical of many non-monolithic setups, follows a one-Lambda-per-endpoint model. When you have multiple services each exposing dozens of endpoints, it doesn’t take long to bump against the 300-integrations-per-API limit (granted, in this simplified example with only two services, it would take a while). This bottleneck emerged (I was not aware of it before!) when our API maxed out its available integrations and simply refused to deploy.

Evaluating the options: lambda-lith vs micro-APIs

With all of our services living behind the same API Gateway, we faced two clear paths forward:

  1. Split the API into smaller chunks (micro-APIs).
  2. Shift to a more monolithic Lambda setup, where multiple endpoints share a Lambda.

Option 1

Option 2
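
For context, option 2 (the "lambda-lith") typically funnels every route through a single catch-all endpoint, with routing handled inside the handler. A sketch of what that could look like in a Serverless Framework config (names are illustrative, not our actual setup):

// Illustrative only: one Lambda receives every request via the
// catch-all httpApi event, and an internal router (e.g. an Express
// app behind a Lambda adapter) dispatches to the right code path.
export const api = {
  handler: 'handler.main',
  events: [
    {
      httpApi: '*', // catch-all: a single integration for the whole API
    },
  ],
};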

Going down the monolith route would have required significant changes to our architecture, a large refactor, and a fair amount of coordination with the front-end. This wasn't ideal for us, since we wanted minimal disruption to our existing setup and a quick implementation. So we opted for the first option: splitting the API into smaller chunks by creating a new API Gateway for each service.

This approach allowed us to break our API into manageable micro-APIs while keeping our frontend mostly unchanged.

The solution: multi-API Gateways behind CloudFront

The architecture change was made easier by the fact that we had CloudFront in front of our API Gateway. Our frontend communicates with CloudFront, and CloudFront handles routing requests to our API.

This setup meant that adding new API Gateways could be done without any drastic changes to the frontend. All we needed to do was redirect calls to the appropriate API Gateway, based on the service prefix.

Here’s how we did it step-by-step:

Step 1: Create a new API Gateway for the user service

We began by creating a new API Gateway for the user service. Since this was essentially a new API, we also had to configure a new Cognito authorizer to handle authentication for this service.

const serverlessConfiguration: AwsConfig.Serverless = {
  service: 'user',
  ...
  provider: {
    httpApi: {
      cors: {
        allowedOrigins: ['http://localhost:3000'],
        allowedHeaders: ['Content-Type', 'Authorization', 'Origin'],
        allowedMethods: ['GET', 'POST', 'PATCH', 'DELETE', 'PUT', 'OPTIONS'],
        exposedResponseHeaders: ['apigw-requestid'],
      },
      authorizers: {
        // New Cognito JWT authorizer dedicated to the user API
        httpUserApiAuthorizer: {
          identitySource: '$request.header.Authorization',
          audience: ['${self:custom.cognitoUserPoolClientId}'],
          issuerUrl: '${self:custom.issuerUrl}',
        },
      },
    },
  },
  custom: {
    cognitoUserPoolClientId: 'CognitoUserPoolClientId-${sls:stage}',
    issuerUrl: 'CognitoUserPoolIssuerUrl-${sls:stage}',
  },
};


The goal here is not to explain how to set up a Cognito authorizer; if you want to learn about it, you can read this article.

Step 2: Link service Lambdas to the new API Gateway

The next step involved linking all of the Lambdas in the user service to this new API Gateway. The Serverless Framework does this for us when we declare the API in the configuration (step 1). However, we still need to set the new authorizer on our Lambdas.

export const createUser = {
  handler: 'handler.main',
  events: [
    {
      httpApi: {
        method: 'post',
        path: '/api/users/create',
        // authorizer: { name: 'httpApiAuthorizer' }, <- previous authorizer
        authorizer: { name: 'httpUserApiAuthorizer' }, // new dedicated authorizer
      },
    },
  ],
};

Step 3: Update API routes

We then prefixed all API routes for the user service with a dedicated prefix: /users. This allowed our CloudFront distribution to easily differentiate between services based on the URL.

export const createUser = {
  handler: 'handler.main',
  events: [
    {
      httpApi: {
        method: 'post',
        path: '/api/users/create', // make sure all user routes are prefixed
        authorizer: { name: 'httpUserApiAuthorizer' },
      },
    },
  ],
};

Step 4: Update frontend API calls

On the frontend, we updated all API calls related to the user service by adding the /users prefix. This ensured that requests were routed to the correct API Gateway without any confusion.

const userList = await client.request('get', '/users/list');
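The client object above is our internal API wrapper. As a hypothetical sketch (the class name, base URL, and getAuthToken helper are assumptions, not our actual implementation), it could be as thin as this:

// Hypothetical minimal client: every request goes through the CloudFront
// distribution, which routes it to the right API Gateway by path prefix.
// getAuthToken() stands in for however the app retrieves its Cognito JWT.
declare function getAuthToken(): Promise<string>;

class ApiClient {
  constructor(private readonly baseUrl: string) {}

  async request<T>(method: string, path: string): Promise<T> {
    const response = await fetch(`${this.baseUrl}${path}`, {
      method: method.toUpperCase(),
      headers: { Authorization: `Bearer ${await getAuthToken()}` },
    });
    if (!response.ok) throw new Error(`API error: ${response.status}`);
    return (await response.json()) as T;
  }
}

// All service prefixes (/users, /books, ...) share one base URL
const client = new ApiClient('https://dxxxxxxxx.cloudfront.net');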

Step 5: Redirect service requests via CloudFront

Finally, we set up a routing rule in CloudFront: any request starting with /users is routed to the new user API Gateway.

export const CloudFrontApiDistribution: AwsConfig.CloudFormationResource = {
  Type: 'AWS::CloudFront::Distribution',
  Properties: {
    DistributionConfig: {
      Enabled: true,
      HttpVersion: 'http2',
      Origins: [
        {
          DomainName: {
            'Fn::Join': [
              '',
              [
                '${self:custom.httpBooksApiId}',
                '.execute-api.',
                '${self:provider.region}',
                '.amazonaws.com',
              ],
            ],
          },
          // Id: 'BackendAPI', <- previous global API name
          Id: 'BooksAPI',
          OriginPath: '/api',
          CustomOriginConfig: {
            OriginProtocolPolicy: 'https-only',
          },
        },
        // New origin pointing to the user service API Gateway
        {
          DomainName: {
            'Fn::Join': [
              '',
              [
                '${self:custom.httpUsersApiId}',
                '.execute-api.',
                '${self:provider.region}',
                '.amazonaws.com',
              ],
            ],
          },
          Id: 'UsersAPI',
          OriginPath: '/api',
          CustomOriginConfig: {
            OriginProtocolPolicy: 'https-only',
          },
        },
      ],
      PriceClass: 'PriceClass_All',
      ViewerCertificate: {
        CloudFrontDefaultCertificate: true,
      },
      DefaultCacheBehavior: {
        // TargetOriginId: 'BackendAPI', <- previous global API name
        TargetOriginId: 'BooksAPI',
        ViewerProtocolPolicy: 'https-only',
        // AWS managed "CachingDisabled" policy, typical for API origins
        CachePolicyId: '4135ea2d-6df8-44a3-9df3-4b5a84be39ad',
      },
      // Route every /users/* request to the new user API Gateway
      CacheBehaviors: [
        {
          PathPattern: '/users/*',
          TargetOriginId: 'UsersAPI',
          ViewerProtocolPolicy: 'https-only',
          CachePolicyId: '4135ea2d-6df8-44a3-9df3-4b5a84be39ad',
        },
      ],
    },
  },
};


The default cache behavior routes all other requests to the original API, which acts as a safety net: every route we have not migrated yet keeps working without any changes.

The benefits of this approach

By splitting the API and allocating different services to their own API Gateways, we gained several benefits:

  • Modular growth: Each service can now grow independently, with its own API Gateway, reducing the risk of hitting the 300 integration limit again.
  • Minimal frontend changes: Thanks to CloudFront, the frontend barely needed any modifications. The redirection was handled transparently.
  • Service isolation: Each service can now be managed and deployed independently, allowing for greater flexibility and scalability.

Conclusion

The AWS API Gateway limit of 300 integrations per API can be a hidden bottleneck, especially in fast-growing projects where you are unaware of its existence. To spot it before it hits, I am adding a new rule to the sls-mentor tool that warns you when an API reaches 250 integrations.

While initially daunting, addressing this limit through a multi-API Gateway approach can keep your architecture flexible, scalable, and modular. This experience reinforced the importance of periodically reviewing architectural decisions and splitting services when they become too large. From now on, when I design a microservices architecture, I will create a dedicated micro-API for each service.

For teams scaling on AWS, this approach ensures that when your services grow to the point of hitting such limits, you have a clear path forward without drastic changes to the frontend or overall application flow.

