Ilya

Originally published at ilya-chumakov.com

Designing HTTP API clients in .NET

In this post, I describe a scalable architectural approach to building typed HTTP API clients for ASP.NET Core applications. I discuss how following this approach can contribute to API testability and maintainability. I provide some practical examples and highlight common pitfalls.

Problem definition

Distributed systems are popular. HTTP APIs are very popular. And .NET is a reasonable tool for implementing both, from classic RESTful APIs to microservices-oriented architectures.

While one of the arguments in favour of microservices is sharing the workload between multiple development teams using different programming languages (and that's before we even mention Conway's Law), in this article I'd like to talk about a homogeneous, pure .NET ecosystem. Most of the stuff below is relevant to ASP.NET Core Web API applications. Over the years, I have designed, implemented, or overseen many of these, and I want to share the best practices I have learned.

Let's start with a published REST API. On the consumer side, managed by another team, the typed HTTP client code is written. Hopefully, its implementation is based on some docs shipped with the API. Then it just works, right? Well, yes and no. A typical enterprise-grade distributed system may have thousands of endpoints and hundreds of respective consumers, developed and maintained by dozens of teams. Independently re-creating HTTP clients from scratch in each and every consumer project is out of the question, because it doesn't scale at all.

API hell

There are several ways to save time and reduce the amount of repeated boilerplate code:

  • Generate client code on each consumer using some form of specification from the producer (OpenAPI).
  • Reuse existing client code (NuGet).
  • Find a middle ground by using a high-level client builder.

And while this post revolves mostly around NuGet, it would be wrong of me not to mention the others.

High-level client builders

There are many third-party REST client builders for .NET on GitHub, and new ones keep popping up for some reason. Several of them are alive and popular.

Part of the community is quite happy using these libraries. However, based on my experience so far, and considering the occasional problems I've encountered (limitations, missing features, or questionable design), I wouldn't make a high-level builder the default choice in a large system. But never say never.

OpenAPI / Swagger

Swagger is a tool set based on the OpenAPI specification that needs no introduction these days. Given an OpenAPI document, the corresponding client/server stubs can be generated in any popular programming language, and C# is no exception. For the client side, here is a detailed list of .NET code generators. Alternatively, we can generate a document from the server code using Swashbuckle or NSwag (these two tools are now de facto standards).
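
For reference, a minimal sketch of what enabling document generation with Swashbuckle looks like in Program.cs (assuming the Swashbuckle.AspNetCore package; exact options and versions vary):

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(); //Swashbuckle: builds the OpenAPI document from the server code

var app = builder.Build();
app.UseSwagger();     //serves the generated document, e.g. /swagger/v1/swagger.json
app.MapControllers();
app.Run();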

However, OpenAPI's declarative resource specification does not cover your implementation details. It's simply designed to do the opposite: provide a language-agnostic way to express an API contract. And what are implementation details of an HTTP client? Authentication, authorization, serialization, validation, error handling, and so on.

Don't get me wrong: Swagger is great, use it, and keep the documents up to date. Your colleagues will appreciate it. But for the .NET parts of a distributed system we can smooth backend-to-backend integrations... with good old NuGet packages.

Solution structure

Here is the proposed dependency diagram for a typical ASP.NET Core application that implements a POST /orders REST API:

Solution structure

  • As the root, there is the Orders.Client class library containing a typed client and, naturally, all its input/output DTOs. This library is also published as a NuGet package.
  • The Web project depends directly on Orders.Client and reuses the ApiModels as the endpoints' inputs/outputs.
  • The Tests project reuses the typed client in API tests (I explain my definition of "API tests" in the next section).
  • Finally, an external consumer of POST /orders references the Orders.Client NuGet package.

An example of an input/output DTO could be CreateOrderApiModel, into which the POST /orders request body is deserialized. By the way, I prefer public contracts like this one to follow some global naming convention, the "ApiModel" suffix in this case. Such a rule serves well as a big red sign that any thoughtless change to such a type can break the API backward compatibility.
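
For illustration, a hedged sketch of such contract types (the property names are assumptions, not taken from any real API):

//public contracts: any change here is potentially breaking for every consumer
public record CreateOrderApiModel
{
    public string CustomerId { get; init; } = "";
    public decimal Amount { get; init; }
}

public record OrderApiModel
{
    public Guid Id { get; init; }
    public string Status { get; init; } = "";
}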

Typical code for each project might look like this:

Orders.Client:

//omitted for brevity: interfaces, cancellation tokens, attributes etc.
public class OrdersClient(HttpClient httpClient) : IOrdersClient
{
    public async Task<OrderApiModel> CreateOrderAsync(CreateOrderApiModel input)
    {
        //a minimal illustrative body; production code would add error handling, cancellation etc.
        var response = await httpClient.PostAsJsonAsync("orders", input);
        response.EnsureSuccessStatusCode();
        return (await response.Content.ReadFromJsonAsync<OrderApiModel>())!;
    }
}

Orders.Web:

public class OrderController : ControllerBase
{
    //omitted for brevity: routing attributes, model binding etc.
    public Task<OrderApiModel> Create(CreateOrderApiModel input)
        => throw new NotImplementedException(); //the full action is shown later in the post
}

Orders.Web.Tests (based on WebApplicationFactory):

public class VeryBasicTests(WebApplicationFactory<Program> factory) 
    : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly OrdersClient _client = new(factory.CreateClient());

    /// <see cref="OrderController.Create" />
    [Fact]
    public async Task CreateOrderAsync_Default_OK()
    {
        await _client.CreateOrderAsync(new CreateOrderApiModel() {...});
        //omitted for brevity: Arrange and Assert steps
    }
}

Advantages of sharing a typed client project:

  • Explicit contract. The Orders.Client project declares and maintains a public contract. All API changes start from here. During code review, this project is your starting point for hunting API-breaking changes.

  • Reusable code. As you can see in the above code, the same ApiModel types are used in a typed client, a controller, a NuGet package, even in API tests!

  • Tested code. You write API tests against the typed client, which means you also test the public client itself.

  • Code scaffold assistance. Having only the client interface, it is possible to generate a stub for the client itself and for the referenced ASP.NET Core controller/action (and vice versa), or to generate a fully functional API test for each method of the typed client (i.e., for each public endpoint you declare).

  • Easy control over the dependencies of your contracts. You can even automate it by writing architectural tests (NetArchTest, ArchUnitNET), as sketched below.
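
For instance, a hedged sketch of such an architectural test with NetArchTest (the forbidden dependency here is an arbitrary example):

using NetArchTest.Rules;
using Xunit;

public class OrdersClientArchitectureTests
{
    [Fact]
    public void ClientAssembly_DoesNotDependOnNewtonsoftJson()
    {
        //keep the public contract assembly free of heavyweight third-party packages
        var result = Types.InAssembly(typeof(OrdersClient).Assembly)
            .ShouldNot()
            .HaveDependencyOn("Newtonsoft.Json")
            .GetResult();

        Assert.True(result.IsSuccessful);
    }
}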

Frankly, I want to have the same project structure even when no NuGet package consumers are planned; see below.

API tests

Disclaimer: An effective approach to testing is to balance efforts across different levels of testing, i.e. to follow The Test Pyramid paradigm. And while this section focuses on in-memory unit/integration testing, nothing I say below implies in any way that system/end-to-end tests are not needed (they are an absolute must).

Advantages of using a typed client in testing

In-memory HTTP API testing can be tricky. Fortunately, ASP.NET Core ships with an in-memory test server that effectively reduces the cost of client-server emulation, and has full support for typed clients. Let's examine the benefits of reusing a typed client in tests:

Unified codebase. If you don't establish a team-wide convention for declaring a public contract in the form of a well-designed typed HTTP client, it's an invitation to take a quick-and-dirty approach to writing API tests: hardcoded calls to HttpClient or, at best, a homemade typed client with potentially wrong serialization settings or a messed-up HTTP handler pipeline.

Code coverage insights. Without a facade for making API calls, uncovered endpoints can only be reliably tracked at the coverage analysis stage. This is not bad, but we can do better. If a public method of a typed client has no usages, this usually means that it is not explicitly called in a test. In effect, it means that the referenced API endpoint itself is not covered. Such unused client methods are easy prey for static code analyzers, giving you compile-time test coverage insights before you even run a code coverage collector.

Therefore, even if there are no .NET consumers for your APIs, building a typed client is still recommended.

Giving API tests the right purpose

Then let's talk about what logic API tests should cover. Assuming you have separate sets of:

  • End-to-end language-agnostic tests (this is where Swagger can help a lot), ideally run somewhere in a test environment.
  • Unit and integration tests that cover the business logic and the data access layer.

Then there is little reason for API tests to play in their domain. Instead, at the API testing level, I'd like to test only the implementation details of the HTTP APIs:

  • Endpoint address.
  • (De)serialization of input/output data.
  • Application-level error handling (usually implemented via ASP.NET exception handler or middleware).
  • Middleware pipeline.
  • Authorization policy.

Thus, the less business logic involved in API tests, the better. Mock it. Make the tests simple, fast, isolated, and suitable for continuous testing. This approach fits well with thin controllers (using CQRS and/or the Mediator pattern) if your controller's actions have a single, easy-to-mock dependency à la IRequestHandler<TInput, TOutput>:

[HttpPost("/orders")]
public Task<OrderApiModel> Create(
    [FromServices] 
    IRequestHandler<CreateOrderApiModel, OrderApiModel> handler,
    [FromBody] 
    CreateOrderApiModel input)
{
    //...
    return handler.InvokeAsync(input);
}

Key points:

  • Business logic is effectively decoupled from the communication layer.
  • API tests cover only the communication layer.
  • No messing with controllers in tests. Testing a fat controller may be tricky because of one big implicit dependency: HttpContext. In short, I recommend abstaining from manual controller creation in tests (see the test sketch below).
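
Putting these points together, here is a hedged sketch of such an API test. It reuses the typed client and the IRequestHandler-based action from the examples above and swaps the business dependency for a stub via ConfigureTestServices; FakeCreateOrderHandler and the test names are illustrative assumptions, not part of the original code:

using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

//hypothetical stub for the action's single business dependency
public class FakeCreateOrderHandler : IRequestHandler<CreateOrderApiModel, OrderApiModel>
{
    public Task<OrderApiModel> InvokeAsync(CreateOrderApiModel input)
        => Task.FromResult(new OrderApiModel());
}

public class CreateOrderApiTests(WebApplicationFactory<Program> factory)
    : IClassFixture<WebApplicationFactory<Program>>
{
    /// <see cref="OrderController.Create" />
    [Fact]
    public async Task CreateOrderAsync_HandlerStubbed_ReturnsOrder()
    {
        //Arrange: replace the business dependency, keep the real HTTP pipeline
        var webApp = factory.WithWebHostBuilder(builder =>
            builder.ConfigureTestServices(services =>
                services.AddSingleton<IRequestHandler<CreateOrderApiModel, OrderApiModel>>(
                    new FakeCreateOrderHandler())));

        var client = new OrdersClient(webApp.CreateClient());

        //Act: exercises only routing, model binding, middleware and (de)serialization
        var order = await client.CreateOrderAsync(new CreateOrderApiModel());

        //Assert
        Assert.NotNull(order);
    }
}

Because the only stub is the request handler, the test exercises exactly the communication-layer concerns listed above and nothing else.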

The other decorating handler

Custom HTTP handlers are well known as a mechanism to manage cross-cutting concerns around HTTP requests. The calling application has control over the HTTP handler pipeline, so it can be reconfigured, reordered, or even rebuilt from scratch. Decorating a client with a Token Management Handler or a custom Polly policy is easy... assuming the client accepts an HttpClient parameter in its constructor, and you haven't messed with the natural order of things by obstructing the client customization in some way (I really don't want to show how).

One thing to emphasize here is that a consumer may have their own reasons for decorating HTTP calls. So even if you are driven by good intentions, keep the public constructor. Even if your API needs that bunch of client-side decorators so badly that you bundled it with the client, at least provide an extension point for the handler pipeline. Otherwise, one day you will find that your "smart" factory method has just been decompiled and rewritten on the consumer side. Just because it was cheaper and faster than asking your team for changes.
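
To illustrate what that extension point buys, here is a hedged sketch of a consumer-owned decorator plugged into the standard IHttpClientBuilder pipeline; the handler, header name, and base address are assumptions:

using Microsoft.Extensions.DependencyInjection;

//a consumer-owned cross-cutting concern: stamp every outgoing request with a correlation id
public class CorrelationIdHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        request.Headers.TryAddWithoutValidation("X-Correlation-Id", Guid.NewGuid().ToString());
        return base.SendAsync(request, cancellationToken);
    }
}

//in the consumer's composition root: possible only because the client keeps its
//public HttpClient constructor and the registration exposes the IHttpClientBuilder
services.AddTransient<CorrelationIdHandler>();
services
    .AddHttpClient<IOrdersClient, OrdersClient>(c => c.BaseAddress = new Uri("https://orders.example.com"))
    .AddHttpMessageHandler<CorrelationIdHandler>();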

DI registration

As mentioned above, keep the registration simple and open for extension. For instance, such an unobtrusive registration could be a thin wrapper over the built-in AddHttpClient<TInterface, TImpl> call:

public static IHttpClientBuilder AddClient<TInterface, TImpl>(
    this IServiceCollection services,
    Func<IServiceProvider, string> getBaseAddress) // important
    where TImpl : class, TInterface
    where TInterface : class
{
    //any registration method should be self-contained if possible
    services.AddYourCustomHttpHandlers(provider => {...});

    var builder = services
        .AddHttpClient<TInterface, TImpl>((provider, httpClient) =>
        {
            httpClient.BaseAddress = new Uri(getBaseAddress(provider));
        })
        .SetYourCustomHttpHandlers();

    //builder is returned to allow further extension
    return builder;
}

Caller side:

services.AddClient<IFooClient, FooClient>(
    provider => provider
        .GetRequiredService<AppSettings>()
        .Middleware.Foo.Url);

As you can see from the code above, instead of passing a settings object directly to the AddClient method, a delegate is passed. This doesn't force the caller to provide the dependency immediately and works well with ASP.NET Core configuration.

Key points:

  • Make the registration method self-contained.
  • Explicitly declare dependencies.
  • Never ask for configuration values before the ServiceProvider is built (see the sketch below).
  • Let the caller decide how to resolve dependencies.
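
For example, a hedged sketch of the same registration fed from the options pattern; FooOptions and the section name are hypothetical, and the base address is only read once the ServiceProvider exists:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

//hypothetical options type bound from configuration
public class FooOptions
{
    public string Url { get; set; } = "";
}

//in the consumer's composition root:
services.AddOptions<FooOptions>().BindConfiguration("Middleware:Foo");

services.AddClient<IFooClient, FooClient>(
    provider => provider.GetRequiredService<IOptions<FooOptions>>().Value.Url);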

Adding client-side logic

How about something tightly coupled to the typed client itself? Like input validation:

public async Task<OrderApiModel> CreateOrderAsync(CreateOrderApiModel input)
{
    _validator.ThrowIfInvalid(input);

    var response = await httpClient.PostAsJsonAsync("orders", input);

    // ...
}

Calling ThrowIfInvalid(input) promises great benefits indeed:

  • No HTTP call on invalid input.
  • Full error data is provided to the caller: we avoid the pain of stripping an exception of all security-sensitive info, fitting it into a ProblemDetails-compliant response (à la ValidationProblemDetails), and deserializing it on the client side.

What could go wrong? Well, depending on the validator, we may have just duplicated our business logic across an unknown number of external systems, effectively reducing the benefit of building a distributed application almost to zero. While validating the shape of the input data makes some sense, business rules (I have covered the difference in more detail here) should under no circumstances leak to the caller.

With this in mind, I would recommend:

  • Keeping business logic where it belongs: in the domain.
  • Reducing a typed client's inner logic to the bare minimum.
  • Treating any typed client's auxiliary inner logic as a "high risk – low reward" move.

Making the client dependent on another package

Another questionable design decision is adding external dependencies to a typed client assembly. For example, referencing a core package with helper methods, default serialization settings (which I highly recommend defining), and all that shared stuff. Another example might be using a third-party library: Newtonsoft.Json, say, was the preferred way to handle JSON input/output in a typed client before we got System.Text.Json. And there is nothing wrong with that.

However, any client dependency can lead to a future case of the notorious "DLL hell" problem. Even when multiple major versions of the same package are transitively referenced by the root application, exactly one package version is bound at runtime. Then, when another version is called transitively, we can get a nasty runtime error. In its general form, this problem is unsolvable on the root application side. More technical details can be found, for example, in this thread and its follow-ups: Referencing multiple package versions within one project with extern aliases.

Things often get ugly in a service that implements the Saga or Facade pattern:

DLL hell

Of course, you could just update all your (N+1) typed clients at once in case of a breaking change in the core library, but this approach doesn't scale well.

Again, avoid external dependencies unless absolutely necessary, and think twice before adding anything to the library.

Summary

In this post, I described a scalable architectural approach to building typed HTTP API clients on top of ASP.NET Core applications. I outlined a number of practices that contribute to API testability and maintainability. I advocated the use of NuGet packages over autogenerated client code. I also talked about customizing the HTTP handler pipeline, adding client-side logic, and avoiding the "DLL hell" problem.
