Ibrahim Cisse

Resolving CRD Size Limit Issues with KEDA on Kubernetes

While deploying KEDA (Kubernetes-based Event Driven Autoscaler) on our Kubernetes cluster, we encountered an unexpected hurdle related to Custom Resource Definitions (CRDs) being too large. This post details the problem, the strategies we attempted to resolve it, and the ultimate solution that worked. We hope this helps others facing similar challenges.

The Problem

Upon attempting to deploy KEDA using the following command:

kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.12.1/keda-2.12.1.yaml

I received the error:

Error from server (Request Entity Too Large): error when creating "https://github.com/kedacore/keda/releases/download/v2.12.1/keda-2.12.1.yaml": etcdserver: request is too large


This indicated that the CRD definitions in the YAML file exceeded etcd's request size limit (1.5 MiB by default), a known issue when applying large Kubernetes manifests like KEDA's.
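
For context, the KEDA release manifest bundles all of the CRDs into a single file, and etcd rejects any write above its request limit. A quick way to see what you are dealing with is to download the manifest, split it on the document separators, and list the largest pieces. This is a rough sketch that assumes curl and GNU coreutils are available:

curl -sL https://github.com/kedacore/keda/releases/download/v2.12.1/keda-2.12.1.yaml -o keda-2.12.1.yaml

wc -c keda-2.12.1.yaml                          # total size of the bundled manifest

csplit -s -z keda-2.12.1.yaml '/^---$/' '{*}'   # one file per YAML document (xx00, xx01, ...)
ls -lS xx* | head                               # largest documents first; the CRDs dominate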

Strategies I Tried

1. Splitting the YAML File

I attempted to split the keda-2.12.1.yaml into smaller chunks, applying each separately. Unfortunately, this led to dependency issues where certain resources required others to be present beforehand, causing the deployment to fail.
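
The split itself is easy to script; this is roughly what the attempt looked like (a sketch assuming mikefarah's yq v4 is installed, with arbitrary file names):

yq e 'select(.kind == "CustomResourceDefinition")' keda-2.12.1.yaml > keda-crds.yaml
yq e 'select(.kind != "CustomResourceDefinition")' keda-2.12.1.yaml > keda-rest.yaml

kubectl apply -f keda-crds.yaml   # CRDs first, since everything else depends on them
kubectl apply -f keda-rest.yaml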

2. Editing the YAML File

I opened the massive YAML file (over 7,000 lines) and tried to reduce its size by:

  • Removing redundant annotations.
  • Stripping down verbose descriptions in the CRDs.
  • Simplifying resource definitions wherever possible.

While this approach reduced the file size, it didn't sufficiently address the size limitation for etcd, and applying the modified file still resulted in the same error.
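
If you would rather script that trimming than edit 7,000 lines by hand, one common trick is to strip every description field from the CRD schemas. This is again only a sketch, assuming yq v4 and the file names from the split above:

yq e 'del(.. | select(has("description")).description)' keda-crds.yaml > keda-crds-slim.yaml
wc -c keda-crds.yaml keda-crds-slim.yaml   # compare the size before and after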

3. Increasing the etcd Size Limit

I considered raising the etcd request size limit by adjusting the --max-request-bytes flag. However, that requires changes to the control plane, which isn't possible on GKE because the control plane (including etcd) is managed by Google.
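
For completeness, on a self-managed cluster (for example one bootstrapped with kubeadm) this limit is a flag on etcd itself, set in the static pod manifest on the control plane node; the path and value below are illustrative:

grep -n "max-request-bytes" /etc/kubernetes/manifests/etcd.yaml

# Raising the limit would mean adding or updating a flag like this under the etcd command:
#     - --max-request-bytes=3145728   # the default is 1572864 bytes (~1.5 MiB)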

Seeking Help from the Community

After exhausting the above strategies, we turned to the KEDA GitHub repository, specifically discussion #6447 about the v2.16.1 release.

The GitHub discussion:

[KEDA v2.16.1 Release - Discussion #6447](https://github.com/kedacore/keda/discussions/6447)

In this thread, KEDA maintainer @JorTurFer announced the new release, which included fixes for CRD handling. While our issue wasn't directly addressed, sharing our experience in the comments helped us connect with others facing similar problems.

Comment on GitHub:

I detailed our attempts and frustrations, specifically mentioning the CRD size limitation and the strategies we tried. The community feedback was invaluable, leading us to the solution below.

The Solution

Server-side apply turned out to be the fix we needed. Instead of the traditional kubectl apply, we used:

kubectl apply --server-side -f https://github.com/kedacore/keda/releases/download/v2.12.1/keda-2.12.1-core.yaml


This approach leverages Kubernetes' server-side apply feature, which avoids the extra bloat that client-side apply adds to every object, so the large KEDA CRDs fit within etcd's request limit.

Why Server-side Apply Works:

  • No oversized annotation: With a traditional client-side kubectl apply, the entire object is stored again in the kubectl.kubernetes.io/last-applied-configuration annotation. For KEDA's large CRDs this roughly doubles the request and pushes it past etcd's limit. Server-side apply tracks field ownership in managedFields on the server instead, so that annotation is never created.
  • Conflict resolution: Server-side apply also provides better conflict management for CRDs and other resources that are updated by multiple managers.
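
A quick way to confirm the install is to list the KEDA CRDs and pods; the CRD name and the keda namespace below are the defaults created by the KEDA manifest:

kubectl get crd | grep keda.sh
kubectl get pods -n keda

# With server-side apply, the stored CRD carries no last-applied-configuration annotation:
kubectl get crd scaledobjects.keda.sh -o jsonpath='{.metadata.annotations}'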

After applying the above command, KEDA was successfully deployed without errors. We were then able to proceed with creating and managing our ScaledObjects.
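
As a quick illustration of that next step, here is a minimal ScaledObject applied the same way; the name, target Deployment, and CPU trigger values are placeholders, not part of our actual setup:

kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler        # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    name: my-app             # hypothetical Deployment to autoscale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: cpu              # KEDA's built-in CPU scaler
      metricType: Utilization
      metadata:
        value: "75"
EOF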

Conclusion

Facing CRD size limitations when deploying KEDA was a challenging experience, but it provided valuable insights into Kubernetes resource management. By experimenting with various strategies, seeking help from the community, and leveraging Kubernetes' server-side apply feature, we overcame the issue.

If you're dealing with similar CRD size issues, we highly recommend trying the server-side apply approach. Also, don't hesitate to engage with the open-source community—the support and shared experiences can be incredibly helpful.
