Implement a Batch API using FastAPI

Manuel Kanetscheider

In this blog post I would like to present a possible implementation of a batch route using FastAPI. FastAPI is a great framework for creating REST APIs; unfortunately, it currently has no native support for batching.

Why batching?

Batching is the process of combining multiple API requests into a single request with a single response. Without batching, the whole thing looks like this:

Without batch requests

Each request between client and server has a certain network latency, so the processing of several consecutive requests can take some time.

With a batch request, the round trips between client and server, and thus the number of requests, can be reduced, which in turn significantly improves performance:

With batch requests

Technical considerations for the batch route

In the course of my research, I looked at the batch routes of other major APIs. Based on those APIs, there are two different ways to submit batch requests:

  • Submit batch request with MIME type multipart/mixed
    • The advantage of this approach is that requests can be sent with different MIME types (e.g. application/json, application/http, application/octet-stream etc.).
    • The body of the request is divided into parts that are separated from each other by a boundary string that is specified in the header of the request.
  • Submit batch request with MIME type application/json
    • The batch request is submitted in JSON format. This limits the API to the MIME type application/json.

I personally prefer the JSON batch approach, since assembling the batch request is slightly easier this way. Of course, this approach is limited to the MIME type application/json, but since most FastAPI routes use exactly this MIME type, that is acceptable for me. The right choice ultimately depends on your technical requirements.

Our batch route is therefore inspired by the Microsoft Graph API; the batch request will have the following structure:

POST /batch HTTP/1.1
Content-Type: application/json

{
  "requests": [
    {
      "id": "1",
      "url": "/products?skip=0&limit=1",
      "method": "GET"
    },
    {
      "id": "2",
      "url": "/products/1",
      "method": "GET"
    },
    {
      "id": "3",
      "url": "/products",
      "method": "POST",
      "headers": {
        "Content-Type": "application/json"
      },
      "body": {
        "title": "Test Product"
      }
    }
  ]
}
| Property | Mandatory | Description |
| -------- | --------- | ----------- |
| id | Yes | A correlation value to associate individual responses with requests. This value allows the server to process requests in the batch in the most efficient order. |
| url | Yes | The relative resource URL the individual request would typically be sent to. |
| method | Yes | The HTTP method (e.g. GET, POST etc.). |
| headers | No | A JSON object with key/value pairs for the headers. |
| body | No | The JSON body. |

The next step is to consider how the individual requests should be processed on the server side. Behind each FastAPI route there is a corresponding Python function that could be called directly. At first this sounds like a good idea, but FastAPI would then no longer be able to resolve the dependencies via dependency injection, so we would have to do this manually.

Determining the dependencies manually would be difficult and would probably often lead to unexpected behavior of the API route. So I did some research and came across this repository:

dtkav / flask-batch

Handle many API calls from a single HTTP request

Flask-Batch


Batch multiple requests at the HTTP layer. Flask-Batch is inspired by how Google Cloud Storage does batching.

It adds a /batch route to your API which can execute batched HTTP requests against your API server side. The client wraps several requests in a single request using the multipart/mixed content type.

Installation

pip install Flask-Batch
# to include the dependencies for the batching client
pip install Flask-Batch[client]

Getting Started

Server

from flask import Flask
from flask_batch import add_batch_route

app = Flask(__name__)
add_batch_route(app)

# that's it!

Client

The client wraps a requests session.

from flask_batch.client import Batching
import json
alice_data = bob_data = {"example": "json"}

with Batching("http://localhost:5000/batch") as s:
    alice = s.patch("/people/alice/", json=alice_data)
    bob = s.patch("/people/bob/", json=bob_data)

alice         #

Flask-Batch performs batching on the HTTP layer. This means that on the server side an HTTP client executes all batched requests. Since this client runs on the server itself, the network latency per request is much lower, so the individual requests can be processed faster. Additionally, since the requests are processed on the HTTP layer, FastAPI is still able to resolve the dependencies via dependency injection. Therefore I also decided to do batching on the HTTP level.

Let's get started

Defining the batch models

HTTPVerbs

This enum defines the allowed HTTP methods.
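
A minimal sketch of what such an enum could look like (the exact set of allowed methods is an assumption):

from enum import Enum


class HTTPVerbs(str, Enum):
    """HTTP methods that may appear inside a batch request."""

    GET = "GET"
    POST = "POST"
    PUT = "PUT"
    PATCH = "PATCH"
    DELETE = "DELETE"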

BatchRequest

This model represents a single API request. It also ensures that the referenced endpoint meets our requirements: no absolute URLs are allowed, and the route must start with a leading slash. The check for absolute URLs is especially important to prevent potential misuse of our batch route.
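
A possible Pydantic (v1-style) sketch, using the field names from the request structure shown above; the exact validation logic is my assumption, and HTTPVerbs is the enum from the previous snippet:

from typing import Any, Dict, Optional

from pydantic import BaseModel, validator


class BatchRequest(BaseModel):
    """A single request inside a batch call."""

    id: str
    url: str
    method: HTTPVerbs
    headers: Optional[Dict[str, str]] = None
    body: Optional[Dict[str, Any]] = None

    @validator("url")
    def check_url_is_relative(cls, value: str) -> str:
        # Absolute URLs would turn the batch route into an open proxy,
        # so only relative URLs with a leading slash are accepted.
        if "://" in value or value.startswith("//"):
            raise ValueError("url must be a relative URL")
        if not value.startswith("/"):
            raise ValueError("url must start with a leading slash")
        return value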

BatchResponse

This model represents the result of the corresponding request. The ID of the request is returned as well, so that the client can map responses to requests (theoretically, the processing order could vary). Besides the ID, the HTTP status code, the headers and the JSON body are also returned.
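
Sketched along the same lines (the field names are assumptions based on the description above):

from typing import Any, Dict, Optional

from pydantic import BaseModel


class BatchResponse(BaseModel):
    """The result of one individual request from the batch."""

    id: str                     # correlation id copied from the request
    status: int                 # HTTP status code of the individual response
    headers: Dict[str, str]     # response headers
    body: Optional[Any] = None  # parsed JSON body, if any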

BatchIn

This is the container that includes all our batch requests. It also checks that all IDs are unique and that the maximum allowed number of requests has not been exceeded.
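
A possible sketch; the concrete maximum batch size is an arbitrary assumption:

from typing import List

from pydantic import BaseModel, validator

MAX_BATCH_REQUESTS = 20  # assumed limit; pick whatever fits your API


class BatchIn(BaseModel):
    """Container for all requests of one batch call."""

    requests: List[BatchRequest]

    @validator("requests")
    def check_requests(cls, value: List[BatchRequest]) -> List[BatchRequest]:
        if len(value) > MAX_BATCH_REQUESTS:
            raise ValueError(
                f"a batch may contain at most {MAX_BATCH_REQUESTS} requests"
            )
        ids = [request.id for request in value]
        if len(ids) != len(set(ids)):
            raise ValueError("request ids must be unique")
        return value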

BatchOut

This is the container that holds all the responses; it is the model that is passed back to the caller.
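
This one can be as simple as the following sketch (the field name is again an assumption):

from typing import List

from pydantic import BaseModel


class BatchOut(BaseModel):
    """Container for all responses; this is what the caller receives."""

    responses: List[BatchResponse]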

Implement the batch route

The batch requests are processed using the Python package aiohttp. This allows the requests to be processed asynchronously. One blog post that helped me a lot with the implementation is Making 1 million requests with python-aiohttp, which I can definitely recommend!
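
A sketch of how the fan-out with aiohttp could look; execute_request and execute_batch are hypothetical helper names, and the JSON-only response handling reflects the application/json constraint discussed earlier:

import asyncio

import aiohttp


async def execute_request(
    session: aiohttp.ClientSession, request: BatchRequest
) -> BatchResponse:
    """Execute one sub-request and capture status, headers and JSON body."""
    async with session.request(
        request.method.value,
        request.url,
        headers=request.headers,
        json=request.body,
    ) as response:
        body = None
        if response.content_type == "application/json":
            body = await response.json()
        return BatchResponse(
            id=request.id,
            status=response.status,
            headers=dict(response.headers),
            body=body,
        )


async def execute_batch(base_url: str, batch: BatchIn) -> BatchOut:
    """Run all sub-requests of a batch concurrently against our own API."""
    async with aiohttp.ClientSession(base_url=base_url) as session:
        responses = await asyncio.gather(
            *(execute_request(session, request) for request in batch.requests)
        )
    return BatchOut(responses=list(responses))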

In the next step we implement the actual batch route; here we just have to plug everything together:
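
A minimal sketch, assuming the models above and the hypothetical execute_batch helper; deriving the base URL from the incoming request is also an assumption:

from fastapi import FastAPI, Request

app = FastAPI()


@app.post("/batch", response_model=BatchOut)
async def batch(batch_in: BatchIn, request: Request) -> BatchOut:
    # Re-issue every sub-request over HTTP against this server, so that
    # FastAPI resolves the dependencies of each target route as usual.
    base_url = str(request.base_url).rstrip("/")
    return await execute_batch(base_url, batch_in)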

The repository, which contains all the code including a small test API, can be found here:

FastAPI Batch Tutorial

This repository contains a possible implementation of a batch route using FastAPI. For more details, please check out my blog post on dev.to.

Description

  • JSON-based batching - The data is exchanged using the MIME type application/json; this approach is inspired by the Microsoft Graph Batch API.
  • Batching takes place on the HTTP layer. This approach was chosen to ensure consistent behaviour.


Conclusion

Batching is an important technique for combining many requests into a single API call and thus significantly increasing performance.
I hope you enjoyed my blog post, thanks for reading! If you have any recommendations on how to improve my implementation, please let me know in the comments section.

Top comments (1)

Hussein Awala

Nice blog! Batching is necessary for APIs that serve a machine learning model, because these models can process a very large batch of items in almost the same time as a single item. That's why I created async-batcher, a new Python package that works in any Python application, regardless of the packages used, and provides a set of ready-to-use batchers.