Myroslav Vivcharyk

API testing through simulations

The Challenge of API Testing

API testing is a challenging aspect of software development. While there are many approaches available — from mocking to contract testing — each comes with its own set of trade-offs.
Rather than debating the pros and cons of different testing approaches (feel free to draw your testing pyramids in the comments), this series of articles focuses on a specific scenario: testing against real APIs. Whether it’s production, staging, or a sandbox environment, we’ll explore how to improve tests that interact with actual API endpoints.

The Real-World Context

Let’s face it: many development teams find themselves deeply integrated with real APIs — and that’s normal. Instead of suggesting a complete architectural overhaul or debating how we arrived at this point, let’s focus on practical improvements to your existing setup.
When interacting with real APIs, developers often face several critical challenges:

  • Network Reliability: Random connection errors and timeouts leading to flaky tests

  • Latency: API calls taking seconds (or even minutes!) to complete, significantly slowing down test execution

  • Resource Consumption: Every test run potentially consuming API quotas or incurring actual costs

  • Environment Stability: External API availability and rate limits affecting test reliability

A Practical Solution

We’ll explore Hoverfly — an open-source API simulation tool that deserves more attention. Despite its capabilities, comprehensive examples and real-world use cases can be hard to find — which is precisely why we’re writing this series.

“Hoverfly is a lightweight, open source API simulation tool. Using Hoverfly, you can create realistic simulations of the APIs your application depends on.”

What does this mean in practice? We can run our tests against a real API once (or periodically), record all interactions, and use these recordings for future test runs. This gives us the best of both worlds: real API behavior with predictable test data.

💡 Want to dive straight into the code? Check out the complete source code here.

Prerequisites

  • Docker and Docker Compose
  • Go 1.23 or later
  • Task (taskfile) for running our commands
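
If you want to double-check the toolchain before starting, the usual version commands are enough (exact versions on your machine will of course vary):

docker --version && docker compose version
go version
task --version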

Setting Up Our Test Environment

First, let’s set up our development environment. We’ll use OpenAPI Generator within Docker to generate our HTTP client, keeping our local environment clean. For this demonstration, we’ll work with fakerestapi.azurewebsites.net (kudos to the maintainers of public APIs).
Here’s our Docker configuration for the client generator:

FROM golang:1.23-alpine

RUN apk add --no-cache git

RUN go install github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen@v2.4.1

WORKDIR /app
ENTRYPOINT ["oapi-codegen"]

And the corresponding docker-compose service:

services:
  oapi-generator:
    container_name: oapi-generator
    build:
      context: .
      dockerfile: ./docker/oapi.Dockerfile
    volumes:
      - .:/app
    working_dir: /app
    command:
      - "-package"
      - "oapi"
      - "-generate"
      - "types,client"
      - "-o"
      - "./client/oapi/client.go"
      - "./api/test_api.json"
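
With both pieces in place, regenerating the client is a one-off Docker Compose run (the service name matches the compose snippet above):

docker compose run --rm oapi-generator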

Writing First Tests

With our client generated, let’s write some basic tests to demonstrate API interactions. We’ll focus on the Authors endpoint for this example (but you can find more in the source):

func TestAuthors(t *testing.T) {
 scenarios := []testCase{
  {
   name: "get author",
   testFunc: func(t *testing.T, client *ClientWithResponses) {
    var (
     ctx = context.Background()
     id  = int32(1)
    )

    resp, err := client.GetApiV1AuthorsIdWithResponse(ctx, id)
    require.NoError(t, err)
    require.NotNil(t, resp, "expected resp, got nil")
    require.NotNil(t, resp.ApplicationjsonV10200, "expected resp body, got nil")

    assert.Equal(t, http.StatusOK, resp.StatusCode())
    assert.Equal(t, id, *resp.ApplicationjsonV10200.Id)
    assert.NotEmpty(t, *resp.ApplicationjsonV10200.FirstName, "expected non-empty first name")
    assert.NotEmpty(t, *resp.ApplicationjsonV10200.LastName, "expected non-empty last name")
   },
  },
  {
   name: "create new author",
   testFunc: func(t *testing.T, client *ClientWithResponses) {
    var (
     ctx       = context.Background()
     id        = int32(1)
     firstName = randomTitleForResource("author")
     lastName  = randomTitleForResource("author")
    )

    resp, err := client.PostApiV1AuthorsWithApplicationJSONV10BodyWithResponse(
     ctx,
     PostApiV1AuthorsApplicationJSONV10RequestBody{
      Id:        tests.ToPtr(id),
      FirstName: tests.ToPtr(firstName),
      LastName:  tests.ToPtr(lastName),
     },
    )
    require.NoError(t, err)
    require.NotNil(t, resp, "expected resp, got nil")
    require.NotNil(t, resp.ApplicationjsonV10200, "expected resp body, got nil")

    assert.Equal(t, http.StatusOK, resp.StatusCode())
    assert.Equal(t, id, *resp.ApplicationjsonV10200.Id)
    assert.Equal(t, firstName, *resp.ApplicationjsonV10200.FirstName)
    assert.Equal(t, lastName, *resp.ApplicationjsonV10200.LastName)
   },
  },
 }

 client := setupClient(t)

 for _, tt := range scenarios {
  t.Run(tt.name, func(t *testing.T) {
   tt.testFunc(t, client)
  })
 }
}

func setupClient(t *testing.T) *ClientWithResponses {
 httpClient := http.DefaultClient
 client, err := NewClientWithResponses(baseURL, WithHTTPClient(httpClient))
 if err != nil {
  t.Fatalf("failed to inistialize client: %v", err)
 }

 return client
}
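
The snippet above leans on a few small helpers whose definitions live in the linked repository. The names are taken from the test code; the bodies below are only a plausible sketch (assuming the fmt, time, and testing imports):

// testCase couples a scenario name with the function that exercises it.
type testCase struct {
    name     string
    testFunc func(t *testing.T, client *ClientWithResponses)
}

// randomTitleForResource produces a reasonably unique value so that repeated
// runs against the real API do not send identical payloads.
func randomTitleForResource(resource string) string {
    return fmt.Sprintf("%s-%d", resource, time.Now().UnixNano())
}

// ToPtr (used above as tests.ToPtr) returns a pointer to v, which is handy
// for the optional fields of the generated request types.
func ToPtr[T any](v T) *T {
    return &v
}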

Enter Hoverfly: Setting Up API Simulation

The core concept in Hoverfly is “simulations” — JSON files that store request-response pairs from your API interactions. Let’s set up Hoverfly in a Docker container:

FROM alpine:3.21.0

# Packages
RUN apk add --no-cache wget unzip curl

# Set default arguments
ARG HOVERFLY_VERSION="v1.10.6"
ARG HOVERFLY_ADMIN_PORT=8888
ARG HOVERFLY_PROXY_PORT=8500

ENV HOVERFLY_ADMIN_PORT=${HOVERFLY_ADMIN_PORT}
ENV HOVERFLY_PROXY_PORT=${HOVERFLY_PROXY_PORT}

# Download and install both hoverfly and hoverctl
RUN wget -q "https://github.com/SpectoLabs/hoverfly/releases/download/v${HOVERFLY_VERSION#v}/hoverfly_bundle_linux_amd64.zip" && \
    unzip hoverfly_bundle_linux_amd64.zip -d /tmp && \
    mv /tmp/hoverfly /usr/local/bin/ && \
    mv /tmp/hoverctl /usr/local/bin/ && \
    chmod +x /usr/local/bin/hoverfly && \
    chmod +x /usr/local/bin/hoverctl && \
    rm -rf hoverfly_bundle_linux_amd64.zip /tmp/*

# Create default hoverctl config with environment variables
RUN mkdir -p /root/.hoverfly && \
    echo "hoverfly.host: localhost" > /root/.hoverfly/config.yaml && \
    echo "hoverfly.admin.port: \"${HOVERFLY_ADMIN_PORT}\"" >> /root/.hoverfly/config.yaml && \
    echo "hoverfly.proxy.port: \"${HOVERFLY_PROXY_PORT}\"" >> /root/.hoverfly/config.yaml

EXPOSE ${HOVERFLY_PROXY_PORT} ${HOVERFLY_ADMIN_PORT}

ENTRYPOINT ["hoverfly", "-listen-on-host=0.0.0.0"]

Next, add a new docker-compose service to our configuration:

  hoverfly:
    container_name: api-test-demo-hoverfly
    build:
      context: ./docker
      dockerfile: hoverfly.Dockerfile
      args:
        - HOVERFLY_VERSION=1.10.6
        - HOVERFLY_ADMIN_PORT=${HOVERFLY_ADMIN_PORT:-8888}
        - HOVERFLY_PROXY_PORT=${HOVERFLY_PROXY_PORT:-8500}
    ports:
      - "${HOVERFLY_HOST_ADMIN_PORT:-8888}:${HOVERFLY_ADMIN_PORT:-8888}"
      - "${HOVERFLY_HOST_PROXY_PORT:-8500}:${HOVERFLY_PROXY_PORT:-8500}"
    volumes:
      - ./testdata/hoverfly:/testdata/hoverfly
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:${HOVERFLY_ADMIN_PORT:-8888}/api/health"]
      interval: 1s
      timeout: 3s
      retries: 5

This setup provides:

  • A Hoverfly instance
  • Web UI for monitoring (port 8888)
  • Proxy server for API interception (port 8500)
  • Persistent storage for our simulations

The last step is adding the Hoverfly proxy to our HTTP client configuration, which requires only a tiny change:
func NewHttpClient(t *testing.T) *http.Client {
    t.Helper()

    hoverflyAddr := os.Getenv("HOVERFLY_PROXY")
    if hoverflyAddr == "" {
        return http.DefaultClient
    }

    // We will run tests in parallel only if we are using Hoverfly.
    // Just to avoid redundant load for public API.
    t.Parallel()

    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            Proxy: http.ProxyURL(&url.URL{
                Scheme: "http",
                Host:   hoverflyAddr,
            }),
            TLSClientConfig: &tls.Config{
                InsecureSkipVerify: true, // Skip certificate verification when using Hoverfly
            },
        },
    }

    return client
}
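
To wire this in, setupClient from earlier only needs to swap http.DefaultClient for the new constructor. A sketch of the updated version, assuming it lives in the same test package:

func setupClient(t *testing.T) *ClientWithResponses {
    t.Helper()

    // NewHttpClient decides whether to route traffic through Hoverfly
    // based on the HOVERFLY_PROXY environment variable.
    client, err := NewClientWithResponses(baseURL, WithHTTPClient(NewHttpClient(t)))
    if err != nil {
        t.Fatalf("failed to initialize client: %v", err)
    }

    return client
}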

Recording and Replaying API Interactions

Using task definitions, we can capture API interactions:

test:local:capture:
    desc: Run tests on local machine and capture traffic with Hoverfly. Export simulation to file.
    env:
      HOVERFLY_PROXY: '{{.HOVERFLY_PROXY}}'
    vars:
      TIMESTAMP:
        sh: date +%Y%m%d_%H%M%S
    cmds:
      - defer: {task: hoverfly:stop}
      - task: internal:bootstrap
        vars:
          CLI_ARGS: hoverfly
      - task: hoverfly:mode
        vars:
          MODE: capture
      - task: test:local
        vars:
          HOVERFLY_PROXY: "localhost:{{.HOVERFLY_PORT | default 8500}}"
      - task: hoverfly:export
        vars:
          SIMULATIONS_DIR: '{{.SIMULATIONS_DIR}}'
          SIMULATION_FILE: '{{.TIMESTAMP}}'

test:local:
    desc: |
      Run all tests on local machine. By default, tests are run against real API.
      Optionally use Hoverfly for simulation or capturing by setting HOVERFLY_PROXY environment variable and running hoverfly service.
    deps:
      - internal:check-gotestsum
    cmds:
      - PATH="$(go env GOPATH)/bin:$PATH" gotestsum
        --format pkgname
        --hide-summary=skipped
        --
        -json ./...
        -race
        -timeout=30s
        -count=1
    env:
      HOVERFLY_PROXY: '{{.HOVERFLY_PROXY | default ""}}'
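
With that in place, capturing a fresh simulation is a single command:

task test:local:capture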

When we run this task, it:

  1. Starts the Hoverfly container
  2. Enables capture mode
  3. Runs tests through the proxy
  4. Saves interactions to a JSON file (an abridged example of the format is shown below)
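
The exported file is plain JSON in Hoverfly’s v5 simulation schema: a list of request matchers paired with the responses to replay. An abridged example for one of our Authors calls (values shortened, and the exact schema version depends on the Hoverfly release, so treat this as a sketch rather than repository contents):

{
  "data": {
    "pairs": [
      {
        "request": {
          "method": [{ "matcher": "exact", "value": "GET" }],
          "path": [{ "matcher": "exact", "value": "/api/v1/Authors/1" }]
        },
        "response": {
          "status": 200,
          "headers": { "Content-Type": ["application/json; charset=utf-8; v=1.0"] },
          "body": "{\"id\":1,\"firstName\":\"...\",\"lastName\":\"...\"}"
        }
      }
    ]
  },
  "meta": { "schemaVersion": "v5.2" }
}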

Running Simulations

Once we’ve captured our API interactions, we can run tests using the simulated responses. Here’s our task definition:

test:local:simulate:
    desc: Run tests locally simulating the API with Hoverfly
    deps:
      - task: internal:bootstrap
        vars:
          CLI_ARGS: hoverfly
    vars:
      LATEST_SIM:
        sh: |
          if ! SIM=$(task internal:find:latest:simulation SIMULATIONS_DIR={{.SIMULATIONS_DIR}} EXPIRED_AFTER_DAYS={{.EXPIRED_AFTER_DAYS}}); then
            echo ""
          else
            echo "$SIM"
          fi
    preconditions:
      - sh: '[ -f "{{.LATEST_SIM}}" ]'
        msg: "Simulation file does not exist. Please run 'task test:local:capture' first"
    cmds:
      - task: hoverfly:mode
        vars:
          MODE: simulate
      - task: hoverfly:import
        vars:
          SIMULATION_FILE: '{{.LATEST_SIM}}'
      - task: test:local
        vars:
          HOVERFLY_PROXY: "localhost:{{.HOVERFLY_PORT | default 8500}}"
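
Both workflows lean on hoverfly:mode, hoverfly:import, and hoverfly:export helper tasks that are not reproduced here; they are thin wrappers around hoverctl talking to the running container. A rough sketch of what they might look like (the service name comes from the compose file above, the paths and variable names are assumptions):

hoverfly:mode:
  desc: Switch the running Hoverfly instance into the given mode (capture/simulate)
  cmds:
    - docker compose exec hoverfly hoverctl mode {{.MODE}}

hoverfly:import:
  desc: Load a previously exported simulation into Hoverfly
  cmds:
    - docker compose exec hoverfly hoverctl import {{.SIMULATION_FILE}}

hoverfly:export:
  desc: Write the captured traffic out to a simulation file
  cmds:
    - docker compose exec hoverfly hoverctl export {{.SIMULATIONS_DIR}}/{{.SIMULATION_FILE}}.json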

The workflow is straightforward:

  1. Hoverfly starts in simulate mode
  2. We load our previously captured simulation file (the task automatically finds the most recent one)
  3. Tests run against Hoverfly instead of the real API

Validating Our Setup

To verify that our simulation is actually working (and not secretly hitting the real API), let’s intentionally break one of our tests:

func TestAuthors(t *testing.T) {
    scenarios := []testCase{
        {
            name: "get author",
            testFunc: func(t *testing.T, client *ClientWithResponses) {
                var (
                    ctx = context.Background()
                    id  = int32(1)
                )

                resp, err := client.GetApiV1AuthorsIdWithResponse(ctx, id)
                require.NoError(t, err)
                require.NotNil(t, resp, "expected resp, got nil")
                require.NotNil(t, resp.ApplicationjsonV10200, "expected resp body, got nil")

                assert.Equal(t, http.StatusOK, resp.StatusCode())

                // Intentionally incorrect assertion - the real API returns id=1
                assert.Equal(t, int32(10), *resp.ApplicationjsonV10200.Id)
            },
        },
    }

    client := setupClient(t)

    for _, tt := range scenarios {
        t.Run(tt.name, func(t *testing.T) {
            tt.testFunc(t, client)
        })
    }
}

Running the simulation task again, this test fails as expected, which confirms three points:

  1. Isolation: Tests are running against simulated responses
  2. Accuracy: Simulations faithfully reflect real API behavior
  3. Validation: Test assertions work correctly against simulated data

Summary

Let’s recap what we’ve built: a testing infrastructure that combines real API testing with simulation capabilities. The best part? We achieved this with minimal changes to our existing codebase. No need for extensive refactoring or rewriting tests — we simply added a proxy configuration to our HTTP client and let Hoverfly handle the rest.
The workflow is straightforward — we write our tests as usual, but run them through a Hoverfly proxy server. Hoverfly captures API interactions and stores them as simulation files. For subsequent test runs, Hoverfly can replay these stored responses instead of hitting the real API.
This setup gives us the best of both worlds — we’re testing against real API behavior, but with the reliability of local tests. No more flaky tests due to network issues, no more burning through API quotas during development, and most importantly, consistent test execution across all environments.

💡 A Note on Performance Expectations

While Hoverfly is excellent for reliability and simulation, it’s important to manage your expectations. If your API typically responds within 150–200ms, you might not see significant speed improvements in your tests.
Choose Hoverfly for reliability and consistent behavior first, speed improvements second.

Looking Ahead

In the next part of this series, we’ll explore:

  • Dynamic Data: Handling variable responses
  • CI/CD Integration: Seamless pipeline implementation
