Siddhant Khare

How we slashed CI build time using Go’s cache

Our CI pipeline’s integration tests were running sluggishly. The primary bottleneck wasn't the tests themselves but the time required to compile the source code. To address this, we leveraged GitHub Actions’ caching feature to save and reuse the Go build cache, significantly cutting CI execution time.

This post walks through the process of optimizing GitHub Actions workflows, the challenges faced, and the operational hurdles around caching build artifacts. Additionally, we’ll generalize these findings to large build pipelines and their impact on CI/CD performance.

The problem

Most modern software projects suffer from long CI/CD build times due to:

  • Frequent dependency changes, invalidating cached builds.
  • Integration tests requiring full rebuilds, touching large portions of the codebase.
  • Inefficient cache retention policies, causing unnecessary storage bloat.
  • Poor dependency isolation, leading to cascading build invalidations.

These inefficiencies make CI pipelines slow, particularly in monorepos or large-scale projects where build performance directly affects development velocity.

Storing Go’s build cache in GitHub Actions

The workflow

A typical test workflow consists of the following steps (a minimal non-Docker sketch follows the list):

  1. Restoring the Go build cache from GitHub Actions into the runner.
  2. Running tests, reusing the cached build while generating a new cache for modified code.
  3. Storing the updated build cache back in GitHub Actions.
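
For comparison, here is a minimal sketch of that three-step flow for tests that run directly on the runner (no Docker container). It assumes the default Linux cache locations (~/.cache/go-build for GOCACHE, ~/go/pkg/mod for modules) and keys the cache on go.sum; the key naming is illustrative, not the exact scheme we use:

# Restore both the Go build cache (GOCACHE) and the module cache.
- name: Restore Go caches
  uses: actions/cache@v4
  with:
    path: |
      ~/.cache/go-build
      ~/go/pkg/mod
    key: test-${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
    restore-keys: |
      test-${{ runner.os }}-go-

- name: Run tests
  run: go test ./...

# Unlike the cache/restore + cache/save pair shown below, plain actions/cache
# saves the cache automatically in a post-job step when the exact key misses.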

For workflows using Docker, additional steps are needed:

  1. Load the build cache from GitHub Actions into the runner.
  2. Copy the cache from the runner to the Docker container.
  3. Run tests inside the Docker container, generating a new build cache.
  4. Copy the updated cache from the container back to the runner.
  5. Save the cache in GitHub Actions and remove outdated versions.

Visualizing dependency complexity

Dependency graph of our packages

GitHub Actions workflow for Go build cache

- name: Restore Go cache
  id: restore-go-cache
  uses: actions/cache/restore@v4
  with:
    path: |
      /tmp/.cache
    key: test-integration-${{ runner.os }}-go-${{ github.sha }}
    restore-keys: |
      test-integration-${{ runner.os }}-go-

- name: Copy Go build cache to container
  run: |
    mkdir -p /tmp/.cache/go-build
    docker exec container_name mkdir -p /root/.cache
    docker cp /tmp/.cache container_name:/root
    rm -rf /tmp/.cache

- name: Run tests
  run: make test-integration

- name: Copy Go build cache from container
  run: |
    mkdir -p /tmp/.cache/go-build
    docker cp container_name:/root/.cache/go-build /tmp/.cache

- name: Save Go cache
  id: cache-save
  uses: actions/cache/save@v4
  with:
    path: |
      /tmp/.cache
    key: test-integration-${{ runner.os }}-go-${{ github.sha }}

- name: Delete older cache
  run: |
    gh extension install actions/gh-actions-cache
    gh actions-cache delete ${{ steps.restore-go-cache.outputs.cache-matched-key }} -B ${{ github.ref }} --confirm || true
  env:
    GH_TOKEN: ${{ github.token }}

Managing cache lifecycle in GitHub Actions

Each branch maintains a separate build cache. Since a new cache doesn’t replace the old one automatically, the process consists of two steps (a CLI sketch for inspecting what is stored follows the list):

  1. Creating a new cache.
  2. Deleting outdated caches to prevent storage bloat.
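
To see which caches are actually piling up per branch, the same gh extension used in the workflow can list them from a terminal. This is a rough sketch; the branch and key values below are placeholders:

gh extension install actions/gh-actions-cache

# List caches for a given branch (placeholder branch name).
gh actions-cache list -B refs/heads/main

# Delete a specific stale key, as the workflow's "Delete older cache" step does.
gh actions-cache delete test-integration-Linux-go-<old-sha> -B refs/heads/main --confirm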

CI cache retention issue

Cache storage problems & solutions

Since our integration tests generate roughly 1 GB of cache per build, we quickly ran into GitHub Actions’ 10 GB per-repository cache limit. To mitigate this, we implemented:

  1. Larger runners: Switched to bigger runners with more local storage.
  2. Shared caching across workflows: Centralized the cache so multiple workflows reuse it instead of each keeping its own copy (see the sketch after this list).
  3. Conditional cache clearing: Deleted outdated caches selectively, only when changes invalidated them.
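
The shared-caching point is easiest to see as two workflow fragments that agree on a key prefix, so a cache produced by one workflow is restored by another on the same branch. The shared- prefix here is hypothetical; any stable prefix works as long as both workflows use it:

# Workflow A: produces the cache after running the heavy integration build.
- uses: actions/cache/save@v4
  with:
    path: /tmp/.cache
    key: shared-${{ runner.os }}-go-${{ github.sha }}

# Workflow B: only restores it, falling back to the newest cache with the prefix.
- uses: actions/cache/restore@v4
  with:
    path: /tmp/.cache
    key: shared-${{ runner.os }}-go-${{ github.sha }}
    restore-keys: |
      shared-${{ runner.os }}-go-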

Detecting build bottlenecks

We used a simple timing approach to isolate slow build issues:

go clean -cache          # Clear the Go build cache
date && go test && date  # First run: cold cache, includes compilation
date && go test && date  # Second run: warm cache, compilation is skipped

By comparing the cold-cache and warm-cache runs, we could isolate how much time went into compilation rather than the tests themselves.
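
It also helps to know where the build cache lives and how large it has grown, since that is the directory being copied in and out of the container. A quick check, assuming a Unix-like runner with du available:

go env GOCACHE              # Print the build cache location
du -sh "$(go env GOCACHE)"  # Measure how large the cache has grown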

Symptoms of slow builds

  • CI workflows taking longer than expected despite running only a few tests.
  • Significant runtime differences between local and CI environments (local builds benefit from persistent caching).

Conclusion

By leveraging Go build caching in GitHub Actions, we reduced CI execution time by 7 minutes. Key takeaways:

  • Optimize cache usage across workflows to minimize redundant builds.
  • Manage cache lifecycle proactively to avoid storage constraints.
  • Identify slow build areas early through timing analysis.
  • Refactor dependencies where possible to minimize unnecessary rebuilds.

These improvements drastically enhanced developer productivity by accelerating feedback loops and optimizing CI/CD performance. If your CI builds are slow, optimizing caching and dependency management can lead to significant speedups.


For more tips and insights, follow me on Twitter @Siddhant_K_code to stay up to date with detailed tech content like this.
