João Esperancinha

Configurable Kong API Gateway with Micronaut Services in Kotlin — A very odd Yucca tribute concert

1. Introduction

King Kong, the movie, was released on the 28th of April 1933. Fast forward to Donkey Kong, the video game released on the 14th of June 1994. Kong seems to have a really nice association with gorillas. Gorillaz, the virtual band, was founded in 1998. They have nothing to do with this article, but I love their music. Finally, the Kong framework came along much later: its first release came out in October 2017.


The story of Kong begins in 2007 with Augusto Marietti, the current CEO (2022), in a garage in Milan. There he founded MemboxX, which became one of the first data storage service providers in Europe. Two years later, Marietti co-founded Mashape, which was, at its core, a "mash"-up of functions provided together with an application server. The company quickly moved to San Francisco in the US. In 2015, the company launched an open-source project called Kong, which led to the company rebranding to Kong Inc. in 2017. Nowadays (2021), Kong is a brand most widely known for its API gateway. However, Kong is also the brand behind other products like Kong Mesh, Insomnia, and Konnect.


The first time I heard about Kong was a few years ago (today is 2021), and back then I would see Kong-related code whenever I searched for IT stuff on the web. I saw their advertising and read something about them, but I never really had to use it, and I never understood exactly what it was for. With a bit of motivation, this year I was able to spend some time with it and figure out what I could do with it. I found out that Kong is implemented to be used as a gateway to a system. The system in this case can be anything. In our case, we are going to use it as a gateway to a network created by docker-compose. Kong works as a sort of proxy where we can change several properties to detect patterns of access to our applications inside such a network.



In this article, we are going to see how to implement the API gateway, what it can do for us, and how we can use it in real-life scenarios with the rate-limiting plugin.

2. Case

The work I’ve created for this article revolves around an application built to reserve tickets for a concert. In this case, we are going to implement a concept that I call the Yucca concert concept. The idea is to pass data around so that resources are released as soon as possible in order to receive more data, making the system as reactive as possible when creating tickets for a concert. In this Yucca concept, we have some sort of queue to receive data and then let our system handle it in the background. By doing so, we release the application so that more requests can be placed in the queue and processed later in an asynchronous way. By making a system reactive, we also need to make sure that it can detect abuse. Abuse of a system can take many forms, and if you are familiar with ethical hacking or software security concepts, you know that one of the biggest threats to reactive systems (actually any system, but probably even more so to reactive systems) is a DDoS attack, a Distributed Denial of Service attack. The way to prevent such a thing is by using software gateway services like Kong, or by relying on the shared services of your provider, which can limit your flexibility and may not allow you to install certain types of software.

But, before digressing further, let’s talk about what we want to achieve. Reserving a ticket, in this case, means reserving a seat to see a show. In this Yucca concept, the ticket comes with one or more mandatory concert days, optionally one or more drinks, optionally one or more meals, and optional parking. Customers immediately get a reservation reference, and the registration is completed later. Since most of the concerts provided by Yucca are online and the expected attendance is massive, it becomes impossible to fulfill registrations in real time as customers register their tickets online or at the ticket office. We thus know that we’ll have different maximum rates for the different segments of the tickets. We also know that above a certain rate, the odds of it being a DDoS attack increase. By the same principle, we also know that some people will reserve a ticket but not actually complete the request, and there may also be cancellations. Whatever maximum rate we calculate for the ticket requests, odds are that people will also reserve a meal and a drink, which means we get a rate that is roughly double the ticket rate. A concert usually lasts around 3 days, so for concert days we can consider roughly triple the ticket request rate. Finally, since the car parking places are quite limited, the parking rate limit should be much lower.

So what should we do if, after deployment, the rate is surpassed? Kong allows us to block further requests for a while, according to criteria that we define based on our expectations. In this way we can mitigate DDoS attacks.

2.1. Architecture

For the implementation, the first question that popped into my mind was: "In what language do I want to implement this application?". Given the current hype around this newcomer to the JVM scene, Kotlin, I decided to give it a further go. Since it has been going significantly strong since last year (2021), it made sense to me to check it out and see how I could combine my research in Kotlin and Micronaut. For this specific case, I also wanted to try something new, just to see how it works and what I could take from using a new framework. So, once I had established that I wanted to use Kotlin and Micronaut, I was still missing some sort of stream-managing system, queues, or the like. I just wanted something where I could inject some requests, reply back to the client, and release resources. Effectively, I wanted to make every single endpoint as reactive as possible. And so I added Redis and implemented a Publisher-Subscriber system, typically used in reactive engineering. "Why not Kafka?" I can anticipate this question, because Kafka is a mega framework in the IT world, and it seems like it’s really all we hear about these days. That and Akka Streams. So, just for the sake of doing something radically new to me, I chose Redis. But I still wasn’t satisfied, because I still had to choose a reactive programming model. As I explored before, Spring has WebFlux, but here I am using Micronaut. Another mega hype in the IT world these days is coroutines, which some people call a blatant copy of Project Loom. Anyway, since I had never used them before, I decided to actually use them. For the rest, I’m using the very traditional Docker/docker-compose way of launching containers locally, nothing fancy, just because adding more advanced ways of launching containers would be too far out of scope for this article.


Moving up a level in our architecture, I decided that every different request type would run in its own service. Here I try to follow a very, very basic microservices architecture, where every service has its own responsibility. In the following diagram we can see what we are going to look at in terms of code:

[Image: diagram of the services involved]

So in this diagram we can see a few players, but the only one open to the public is the yucca-api. This application is responsible for the generation of a receipt id (a UUID), which is immediately returned to the client. Unbeknownst to the client, or the buyer, the ticket hasn’t actually been processed yet and it isn’t even in the database yet. Let me ask you a question. Have you ever received an email that says something like "Your request is being processed. Within an hour you’ll receive your tickets. Keep this reference number if you need to call our help desk."? Well, this is exactly what is happening. You just get a purchase confirmation and, for the moment, nothing else. In the meantime, the yucca-api has picked up your request and assigned to your ticket request the reference number of the receipt that has just been persisted to the database. The request is then shipped to Redis. The listeners in the backend will fetch this request and ship it via REST to the yucca-ticket. This makes it possible to process the rest of the ticket in a fully asynchronous way. The yucca-ticket runs in exactly the same way, except that now, when the request gets picked up by the listener, it will break the ticket into three parts: concert data, parking data and catering data. To be precise, it will actually break the request into four parts: the catering data, if you haven’t noticed, is divided into drinks and meals. Once this is done, four REST calls are performed against three different APIs. These are the yucca-catering, yucca-concert and yucca-parking. These services thus provide 4 endpoints. None of them makes further REST calls, since they handle the last requests performed per ticket. They do, however, also process these last requests in a reactive way, by responding immediately after receiving the different payloads and only spending time publishing them to the Pub-Sub system provided by Redis. At this point we have concluded the ticket request, and the user would only then receive an email. This last bit is well out of the scope of this article. We can see this whole process in action in the following sequence diagram:

[Image: sequence diagram of the full ticket reservation flow]

Please check the home of the project for a better visualisation of this sequence diagram.
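To make the first step of this flow more concrete, here is a rough sketch of the "respond first, process later" idea. This is not the project’s actual controller: the TicketDto type, the channel name and the injected Lettuce connection are illustrative assumptions, and imports are omitted as in the other snippets.

data class TicketDto(val name: String, val address: String) : Serializable

@Controller("/api")
class TicketReservationController(
    private val pubSub: StatefulRedisPubSubConnection<String, TicketDto>
) {
    @Post
    suspend fun reserve(@Body ticket: TicketDto): Map<String, UUID> {
        // The client gets its reference immediately...
        val reference = UUID.randomUUID()
        // ...while the ticket itself is published to Redis and processed
        // later by a listener subscribed to this channel.
        pubSub.reactive().publish("ticket-channel", ticket).awaitSingle()
        return mapOf("reference" to reference)
    }
}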

In order to understand how this all plays out in the database, it is also important to have a good look at the ER model:

[Image: ER model of the database]

This will allow us to collect all the data that is sent in the initial payload.

3. Implementation

Implementing this system did involve a lot of research, but I was able to implement a few things. One of them is the codec that is necessary to provide the payload both for our REST requests and for the data streaming in and out of the Redis Pub-Sub system. Because I wanted to make it as generic as possible, I created it with reified types. This was done for convenience and to explore the Kotlin programming language. We’ll see later why this could be a terrible idea:

abstract class BuyOycType(
    open val type: AuditLogType = RECEIPT
) : Serializable

abstract class BuyOycCodec<T> : RedisCodec<String, T> {

    override fun decodeKey(byteBuffer: ByteBuffer): String {
        return Charset.defaultCharset().decode(byteBuffer).toString()
    }

    override fun decodeValue(byteBuffer: ByteBuffer): T =
        ObjectInputStream(
            ByteArrayInputStream(byteArrayCodec.decodeValue(byteBuffer))
        ).use { readCodecObject(it) }

    abstract fun readCodecObject(it: ObjectInputStream): T

    override fun encodeKey(key: String): ByteBuffer {
        return Charset.defaultCharset().encode(key)
    }

    override fun encodeValue(ticketDto: T): ByteBuffer =
        ByteArrayOutputStream().use { baos ->
            ObjectOutputStream(baos).use { oos ->
                oos.writeObject(ticketDto)
                byteArrayCodec.encodeValue(baos.toByteArray())
            }
        }

    companion object {
        val byteArrayCodec = ByteArrayCodec()
    }
}

inline fun <reified T : Any> ObjectInputStream.readTypedObject(): T = readObject() as T

As we can see, there are two important interfaces that our DTO types need to follow. They need to be Serializable and they also need to be of type BuyOycType. This is just so that we can serialize them to Redis; the Serializable interface is normally not needed for plain REST controllers. For those of you who don’t know what use is, it is an extremely sugary version of the try-with-resources construct in Java. If you use it in Kotlin you might not even be aware that this is what use is doing, so beware if you didn’t already know.
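As a minimal, hypothetical example of how a concrete codec could look (the DTO name is made up and not necessarily one of the project’s actual types), the reified helper turns the override into a one-liner:

data class ExampleTicketDto(
    val name: String,
    val address: String
) : BuyOycType()

class ExampleTicketCodec : BuyOycCodec<ExampleTicketDto>() {
    // readTypedObject deserializes the next object and casts it to ExampleTicketDto
    override fun readCodecObject(it: ObjectInputStream): ExampleTicketDto = it.readTypedObject()
}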

Since we are also looking at how the data model is created in Micronaut, it is important to have a look at a few of the entities. In this case we’ll see how some of the relations have been implemented, starting with the TicketReservation entity:

@MappedEntity(namingStrategy = UnderScoreSeparatedLowerCase::class)
data class TicketReservation(
    @field: Id
    @field: AutoPopulated
    var id: UUID? = null,
    val reference: UUID? = UUID.randomUUID(),
    val name: String,
    val address: String,
    val birthDate: LocalDate,
    val parkingReservation: ParkingReservation? = null,
    @field:DateCreated
    val createdAt: LocalDateTime? = LocalDateTime.now(),
)

The MappedEntity annotation allows us to create entities. In short, entities represent the way we handle tables via code. Micronaut has provisions like this, and if you know Hibernate you are probably already familiar with Entity. For our purpose in this article, they are pretty much the same. Given that we are looking at a reactive model, actual database relations, especially complex ones like many-to-many, many-to-one, one-to-one and one-to-many, make almost no sense. However, we can do something else using the Join annotation, which seems to work quite well. One TicketReservation is personal, and so we can only associate one ParkingReservation with it. This is in every sense of the word a one-to-one relation. Once the car parking is reserved, only one customer can access it and only one parking spot can be used by one customer. This is a logical requirement, because that way we can promise fair car parking usage to the attendees. Let’s look at the ParkingReservation implementation:

@MappedEntity(namingStrategy = UnderScoreSeparatedLowerCase::class)
data class ParkingReservation(
    @field: Id
    @field: AutoPopulated
    var id: UUID? = null,
    val reference: UUID? = UUID.randomUUID(),
    var carParking: CarParking? = null,
    @field:DateCreated
    val createdAt: LocalDateTime? = LocalDateTime.now()
)

In this case, the ParkingReservation is just an intermediate table leading up to the actual CarParking table, which defines the parking number. This is how we allocate the parking space. The actual relation between the tables is maintained with the Join annotation:

@R2dbcRepository(dialect = Dialect.POSTGRES)
interface ParkingReservationRepository : CoroutineCrudRepository<ParkingReservation, UUID>,
    CoroutineJpaSpecificationExecutor<ParkingReservation> {
    @Join(value = "carParking", type = Join.Type.FETCH)
    override suspend fun findById(id: UUID): ParkingReservation
}

If you noticed, the ticket reservation entity itself does not have any reference to the drinks, meals or concert days reserved. These would be many-to-many relations: many ticket reservations to many concerts, many ticket reservations to many drinks and many ticket reservations to many meals. It’s the "many" keyword in these relations that does not work well with the reactive architecture. In a blocking architecture, we would have defined a list of meals, a list of drinks and a list of concert days, defining many-to-many relations bound to each other seamlessly with a join table. Drinks and meals are actually fixed items in a menu, and what we really need is their reservation counterparts. While the relation between a ticket reservation and drinks is a many-to-many relation, the relation between a ticket reservation and a drink reservation is a one-to-many relation. In other words, for each ticket reservation, we can make many drink reservations, but all of those reservations can only be associated with one ticket reservation, hence one-to-many. We can apply the same reasoning to the meal reservation and the concert reservation.
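As an illustration of this reasoning, and only as a hedged sketch rather than the project’s actual entity, a drink reservation could simply carry the id of its ticket reservation as a plain column, so that the TicketReservation entity itself never needs a collection:

@MappedEntity(namingStrategy = UnderScoreSeparatedLowerCase::class)
data class DrinkReservation(
    @field: Id
    @field: AutoPopulated
    var id: UUID? = null,
    // the "many" side points back to its single ticket reservation
    val ticketReservationId: UUID? = null,
    val drinkId: UUID? = null,
    @field:DateCreated
    val createdAt: LocalDateTime? = LocalDateTime.now()
)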
In this project, we create the database using scripts that do use foreign keys, although in our code we do not actually use them and instead perform joins in an eager fashion. The following is an example of this:

CREATE TABLE IF NOT EXISTS ticket.ticket_reservation
(
    id                     UUID      NOT NULL,
    reference              UUID      NOT NULL UNIQUE,
    name                   varchar,
    address                varchar,
    birth_date             date,
    parking_reservation_id UUID      NULL,
    created_at             TIMESTAMP NOT NULL DEFAULT LOCALTIMESTAMP,
    PRIMARY KEY (id),
    FOREIGN KEY (parking_reservation_id)
        REFERENCES ticket.parking_reservation (id)
);

In Micronaut’s terms, just as in Spring, a table is also called a relation. If you have noticed, all of the important tables contain a reference field. This reference field keeps the number given to the customer on the ticket, with which they can retrieve the vouchers and tickets at the entrance when the concert starts. This is not implemented in this project, but it is important to mention the intention. This last part is therefore not used in this first instance; the table above is just an example of the table style being used.

4. Setting up Kong

After learning a bit about how the application is set up, we can finally start looking at the way Kong is set up. I first learned to start Kong using a PostgreSQL database for persistence, a Kong migration image, and finally the Kong service. Let’s look at a code snippet from the actual docker-compose.yaml file located at the root of the project:

services:
  kong-migrations-up:
    container_name: kong-migrations
    image: "${KONG_DOCKER_TAG:-kong:latest}"
    command: kong migrations bootstrap && kong migrations up && kong migrations finish
    depends_on:
      yucca-db:
        condition: service_healthy
    environment:
      <<: *kong-env
    secrets:
      - kong_postgres_password
    networks:
      - yucca-net

  kong:
    hostname: kong
    container_name: kong
    image: "${KONG_DOCKER_TAG:-kong:latest}"
    depends_on:
        yucca-db:
          condition: service_healthy
        buy-oyc-api:
          condition: service_healthy
        buy-oyc-ticket:
          condition: service_healthy
        buy-oyc-concert:
          condition: service_healthy
        buy-oyc-catering:
          condition: service_healthy
        buy-oyc-parking:
          condition: service_healthy
        kong-migrations-up:
          condition: service_completed_successfully
    user: "${KONG_USER:-kong}"
    environment:
      <<: *kong-env
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_PROXY_LISTEN: "${KONG_PROXY_LISTEN:-0.0.0.0:8000}"
      KONG_ADMIN_LISTEN: "${KONG_ADMIN_LISTEN:-0.0.0.0:8001}"
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_PREFIX: ${KONG_PREFIX:-/var/run/kong}
      KONG_ADMIN_GUI_URL: "http://0.0.0.0:8002"
      KONG_PORTAL_GUI_HOST: "0.0.0.0:8003"
      KONG_PORTAL: "on"
      KONG_LUA_PACKAGE_PATH: "./?.lua;./?/init.lua;"
    restart: on-failure
    secrets:
      - kong_postgres_password
    ports:
      - "127.0.0.1:8001:8001/tcp"
      - "127.0.0.1:8444:8444/tcp"
      - "8002:8002"
      - "8003:8003"
    healthcheck:
      test: [ "CMD", "kong", "health" ]
      interval: 10s
      timeout: 10s
      retries: 20
      start_period: 0s
    volumes:
      - ./kong_prefix_vol:${KONG_PREFIX:-/var/run/kong}
      - ./kong_tmp_vol:/tmp
      - ./kong:/opt/kong
      - ./kong/kong-migration.sh:/opt/kong/kong-migration.sh
    security_opt:
      - no-new-privileges
    networks:
      yucca-net:

  kong-deck:
    hostname: kong-deck
    container_name: kong-deck
    image: kong/deck:v1.16.1
    volumes:
      - ${PWD}/kong:/deck
    command: "--kong-addr http://kong:8001 -s /deck/kong.yaml sync"
    networks:
      yucca-net:
    depends_on:
      kong:
        condition: service_healthy
      yucca-db:
        condition: service_healthy
      buy-oyc-api:
        condition: service_healthy
      buy-oyc-ticket:
        condition: service_healthy
      buy-oyc-concert:
        condition: service_healthy
      buy-oyc-catering:
        condition: service_healthy
      buy-oyc-parking:
        condition: service_healthy
      kong-migrations-up:
        condition: service_completed_successfully

  yucca-db:
    hostname: yucca-db
    container_name: yucca-db
    image: postgres
#    user: ${YUCCA_USER_DB}
    command: postgres -c listen_addresses='*' -c 'max_connections=400' -c 'shared_buffers=100MB'
    environment:
      POSTGRES_DB: ${KONG_PG_DATABASE:-kong}
      POSTGRES_USER: ${KONG_PG_USER:-kong}
      POSTGRES_PASSWORD_FILE: /run/secrets/kong_postgres_password
      POSTGRES_MULTIPLE_DATABASES: yucca
    secrets:
      - kong_postgres_password
    healthcheck:
      test: [ "CMD", "pg_isready", "-U", "${KONG_PG_USER:-kong}" ]
      interval: 30s
      timeout: 30s
      retries: 10
      start_period: 0s
    restart: on-failure
    stdin_open: true
    tty: true
    expose:
      - 5432
    volumes:
      - ./kong_data_vol:/var/lib/postgresql/data
      - ./docker-images/docker-psql:/docker-entrypoint-initdb.d
      - ./docker-images/docker-psql/multiple:/docker-entrypoint-initdb.d/multiple
    networks:
      yucca-net:

  buy-oyc-ticket:
    hostname: buy-oyc-ticket
    container_name: buy-oyc-ticket
    depends_on:
        yucca-db:
          condition: service_healthy
    build:
      context: buy-oyc-ticket-service/.
    environment:
      REDIS_HOST: redis
      POSTGRESQL_HOST: yucca-db
      KONG_SERVICE_IP: kong
    networks:
      yucca-net:
    healthcheck:
      test: ["CMD", "curl", "--silent", "http://127.0.0.1:8084/swagger/views/swagger-ui/index.html"]
      interval: 5s
      timeout: 240s
      retries: 60

  buy-oyc-concert:
    hostname: buy-oyc-concert
    container_name: buy-oyc-concert
    depends_on:
        yucca-db:
          condition: service_healthy
    build:
      context: buy-oyc-concert-service/.
    environment:
      REDIS_HOST: redis
      POSTGRESQL_HOST: yucca-db
    networks:
      yucca-net:
    healthcheck:
      test: ["CMD", "curl", "--silent", "http://127.0.0.1:8085/swagger/views/swagger-ui/index.html"]
      interval: 5s
      timeout: 240s
      retries: 60

  buy-oyc-parking:
    hostname: buy-oyc-parking
    container_name: buy-oyc-parking
    depends_on:
        yucca-db:
          condition: service_healthy
    build:
      context: buy-oyc-parking-service/.
    environment:
      REDIS_HOST: redis
      POSTGRESQL_HOST: yucca-db
    networks:
      yucca-net:
    healthcheck:
      test: ["CMD", "curl", "--silent", "http://127.0.0.1:8086/swagger/views/swagger-ui/index.html"]
      interval: 5s
      timeout: 240s
      retries: 60

  buy-oyc-catering:
    hostname: buy-oyc-catering
    container_name: buy-oyc-catering
    depends_on:
        yucca-db:
          condition: service_healthy
    build:
      context: buy-oyc-catering-service/.
    environment:
      REDIS_HOST: redis
      POSTGRESQL_HOST: yucca-db
    networks:
      yucca-net:
    healthcheck:
      test: ["CMD", "curl", "--silent", "http://127.0.0.1:8087/swagger/views/swagger-ui/index.html"]
      interval: 5s
      timeout: 240s
      retries: 60

  buy-oyc-api:
    hostname: buy-oyc-api
    container_name: buy-oyc-api
    depends_on:
        yucca-db:
          condition: service_healthy
    build:
      context: buy-oyc-api-service/.
    environment:
      REDIS_HOST: redis
      POSTGRESQL_HOST: yucca-db
      KONG_SERVICE_IP: kong
    networks:
      yucca-net:
    healthcheck:
      test: ["CMD", "curl", "--silent", "http://127.0.0.1:8088/swagger/views/swagger-ui/index.html"]
      interval: 5s
      timeout: 240s
      retries: 60

  buy-oyc-nginx:
    hostname: buy-oyc-nginx
    container_name: buy-oyc-nginx
    build:
      context: ./buy-odd-yucca-gui/.
    ports:
      - "8080:8080"
    restart: on-failure
    environment:
      - KONG_SERVICE_IP=kong
    deploy:
      resources:
        limits:
          memory: 300M
        reservations:
          memory: 300M
    networks:
      yucca-net:
    depends_on:
      kong:
        condition: service_healthy

  redis:
    container_name: redis
    image: redis
    ports:
       - "6379:6379"
    networks:
      yucca-net:

secrets:
  kong_postgres_password:
    file: ./password

When Kong starts, we need to let it run in the mode we prefer. Since I want the setup to be loaded at the startup of docker-compose, I have no choice but to say that I do not want to use the persistence model of Kong. The excerpt above is from the docker-compose-it.yaml that I use exclusively for the integration tests. So, from top to bottom, we first see the declaration of the kong-migrations-up service. This is actually another Kong image that we are using as a runnable to perform migrations to the database. As mentioned before, this isn’t technically necessary for the integration tests. The second service is another Kong container, which we use to actually run Kong. There is a lot to talk about in this setup, but for this article let’s just focus on port 8000. This port is what we are going to use to access all of our application APIs. Kong uses this port (or whatever we configure) as the gateway access point that redirects all traffic to the destination API. If you noticed in the previous diagrams, there is never direct access to any of the APIs on any occasion. We’ll talk shortly about the other ports available in Kong. But first, let’s look into the environment variables. The specific Kong variables allow for several configurations. The ones that allow automatic setup loading are:

  • KONG_DATABASE="off"
  • KONG_DECLARATIVE_CONFIG="/opt/kong/kong.yaml"

For KONG_DECLARATIVE_CONFIG, I’m providing the configuration through the volume ./kong:/opt/kong.
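The <<: *kong-env entries in the compose excerpt above point to a shared environment anchor whose body is not shown there. As a rough sketch (the exact keys in the project may differ), such an anchor typically looks like this:

x-kong-config: &kong-env
  KONG_PG_HOST: yucca-db
  KONG_PG_DATABASE: ${KONG_PG_DATABASE:-kong}
  KONG_PG_USER: ${KONG_PG_USER:-kong}
  KONG_PG_PASSWORD_FILE: /run/secrets/kong_postgres_password
  # For the declarative, database-less mode mentioned above:
  # KONG_DATABASE: "off"
  # KONG_DECLARATIVE_CONFIG: /opt/kong/kong.yaml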
Finally, we find the database. In this case, the created volumes allow the database to be initialised with a startup script originally available at https://github.com/mrts/docker-postgresql-multiple-databases/blob/master/create-multiple-postgresql-databases.sh

#!/bin/bash

# Credits to https://github.com/mrts/docker-postgresql-multiple-databases/blob/master/create-multiple-postgresql-databases.sh
# This is an improvement of that file
# To create an image, this file follows some rules
# It is inspired by the way Spring scans for script and data files
# The root folder of all script files must be docker-entrypoint-initdb.d/multiple/
# `multiple` is a choice.
# Originally, the docker entrypoint located in /usr/local/bin looks for all .sql, .gz and .xz files to import in docker-entrypoint-initdb.d/.
# There is no clean way around this. Check the entrypoint details on https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh.
# This script will search for sql files in the root folder docker-entrypoint-initdb.d/multiple/ prepended like schema-<database>.sql and data-<database>.sql
# You can disable this by setting variable POSTGRES_SCAN_DISABLED to true
# Databases are declared in POSTGRES_MULTIPLE_DATABASES as a comma separated string of database tags
# You can specify a bundle of scripts to execute in a certain folder per database like so <database>:<folder> as one of these elements
# If you wish to use folders with the same name as the database and don't want to use this notation then you have to set POSTGRES_FOLDER_MAPPING to true
# To specify the database user use POSTGRES_USER
# To specify the database password use POSTGRES_PASSWORD
# The script creates a user with the given name who has the given password
# It also creates another user with the database name and the given password per database created
# If both match, then only one user is created with the database name and the given password
# This script is available to download in a small example I've created in https://github.com/jesperancinha/project-signer/tree/master/docker-templates/docker-psql

set -e
set -u

POSTGRES_SCAN_DISABLED="${POSTGRES_SCAN_DISABLED:-false}"
POSTGRES_FOLDER_MAPPING="${POSTGRES_FOLDER_MAPPING:-false}"

function create_user_and_database() {
  database=$(echo "$command" | awk -F':' '{print $1}')
  echo "  Creating user and database '$database'"
  psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -tc "SELECT 1 FROM pg_user WHERE usename = '$database'" |
    grep -q 1 ||
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
        CREATE USER $database with PASSWORD '$POSTGRES_PASSWORD';
        CREATE DATABASE $database;
        GRANT ALL PRIVILEGES ON DATABASE $database TO $database;
EOSQL
}

function import_file_to_database() {
  echo "Importing file $2 to database $1..."
  psql -U "$POSTGRES_USER" -d "$1" -f "$2"
}

function import_files_from_folder() {
  directory=$1
  database=$2
  echo "Database bundle $directory for database $database requested"
  if [ -d "$directory" ]; then
    for script in "$directory"/*.sql; do
      echo "Request importing file $script to database $database"
      import_file_to_database "$database" "$script"
    done
  else
    echo "WARNING: No script bundle directory found for database $database"
  fi
}

function create_and_import_files() {
  local rootDir="docker-entrypoint-initdb.d/multiple/"
  if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
    echo "Multiple database creation requested: $POSTGRES_MULTIPLE_DATABASES"
    for command in $(echo "$POSTGRES_MULTIPLE_DATABASES" | tr ',' ' '); do
      create_user_and_database "$command"
      if [[ "$command" == *":"* ]]; then
        database=$(echo "$command" | awk -F':' '{print $1}')
        directory=$rootDir$(echo "$command" | awk -F':' '{print $2}')
        import_files_from_folder "$directory" "$database"
      elif [ "$POSTGRES_FOLDER_MAPPING" == true ]; then
        if [[ "$command" != *":"* ]]; then
          import_files_from_folder "$rootDir$database" "$database"
        fi
      fi
      if [ "$POSTGRES_SCAN_DISABLED" != true ]; then
        database=$(echo "$command" | awk -F':' '{print $1}')
        echo "Auto-scanning files for database $database"
        echo "Auto-scanning schema files"
        schema_file=$rootDir"schema-$database.sql"
        if [ -f "$schema_file" ]; then
          import_file_to_database "$database" "$schema_file"
          echo "Auto-scanning data files"
          schema_data=$rootDir"data-$database.sql"
          if [ -f "$schema_data" ]; then
            import_file_to_database "$database" "$schema_data"
          else
            echo "WARNING: No data file detected for database $database"
          fi
        else
          echo "WARNING: No schema file detected for database $database"
        fi
      fi
    done
    echo "Multiple databases created"
  fi
}

create_and_import_files

This script essentially allows for the creation of different databases based on their different names and folder names, each using its own data.sql and schema.sql scripts. It makes the setup cleaner and easier to use. The file I am using has been developed from the one linked above.
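As a hypothetical example (not the project’s actual configuration, which only declares the yucca database), the database container could request two databases, the second one using the <database>:<folder> bundle notation described in the script comments:

environment:
  POSTGRES_USER: kong
  POSTGRES_PASSWORD_FILE: /run/secrets/kong_postgres_password
  # "yucca" gets schema-yucca.sql and data-yucca.sql auto-scanned;
  # "reporting" imports every .sql file from docker-entrypoint-initdb.d/multiple/reporting-scripts
  POSTGRES_MULTIPLE_DATABASES: "yucca,reporting:reporting-scripts"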
Finally, we can look at the Kong file, kong.yaml, which sets up the ports, the URLs and the mappings:

_format_version: "3.0"
_transform: true
services:

# Ticket Service
- url: http://buy-oyc-ticket:8084/api/yucca-ticket/swagger/views/swagger-ui/res
  name: buy-oyc-ticket-swagger-ui
  protocol: http
  routes:
    - name: buy-oyc-ticket-service-route-swagger-ui
      paths:
        - /api/yucca-ticket/swagger-ui/res
      strip_path: true
- url: http://buy-oyc-ticket:8084/api/yucca-ticket/swagger
  name: buy-oyc-ticket-swagger
  protocol: http
  routes:
    - name: buy-oyc-ticket-service-route-swagger
      paths:
        - /api/yucca-ticket/swagger
      strip_path: true
- url: http://buy-oyc-ticket:8084/api/yucca-ticket/api
  name: buy-oyc-ticket
  protocol: http
  routes:
    - name: buy-oyc-ticket-service-route
      paths:
        - /api/yucca-ticket/api
      strip_path: true
- url: http://buy-oyc-ticket:8084/api/yucca-ticket
  name: buy-oyc-ticket-rest
  protocol: http
  routes:
    - name: buy-oyc-ticket-service-route-root
      paths:
        - /api/yucca-ticket
      strip_path: true

# Concert Service
- url: http://buy-oyc-concert:8085/api/yucca-concert/swagger/views/swagger-ui/res
  name: buy-oyc-concert-swagger-ui
  protocol: http
  routes:
    - name: buy-oyc-concert-service-route-swagger-ui
      paths:
        - /api/yucca-concert/swagger-ui/res
      strip_path: true
- url: http://buy-oyc-concert:8085/api/yucca-concert/swagger
  name: buy-oyc-concert-swagger
  protocol: http
  routes:
    - name: buy-oyc-concert-service-route-swagger
      paths:
        - /api/yucca-concert/swagger
      strip_path: true
- url: http://buy-oyc-concert:8085/api/yucca-concert/api
  name: buy-oyc-concert
  protocol: http
  routes:
    - name: buy-oyc-concert-service-route
      paths:
        - /api/yucca-concert/api
      strip_path: true
- url: http://buy-oyc-concert:8085/api/yucca-concert
  name: buy-oyc-concert-rest
  protocol: http
  routes:
    - name: buy-oyc-concert-service-route-root
      paths:
        - /api/yucca-concert
      strip_path: true

# Parking Service
- url: http://buy-oyc-parking:8086/api/yucca-parking/swagger/views/swagger-ui/res
  name: buy-oyc-parking-swagger-ui
  protocol: http
  routes:
    - name: buy-oyc-parking-service-route-swagger-ui
      paths:
        - /api/yucca-parking/swagger-ui/res
      strip_path: true
- url: http://buy-oyc-parking:8086/api/yucca-parking/swagger
  name: buy-oyc-parking-swagger
  protocol: http
  routes:
    - name: buy-oyc-parking-service-route-swagger
      paths:
        - /api/yucca-parking/swagger
      strip_path: true
- url: http://buy-oyc-parking:8086/api/yucca-parking/api
  name: buy-oyc-parking
  protocol: http
  routes:
    - name: buy-oyc-parking-service-route
      paths:
        - /api/yucca-parking/api
      strip_path: true
- url: http://buy-oyc-parking:8086/api/yucca-parking
  name: buy-oyc-parking-rest
  protocol: http
  routes:
    - name: buy-oyc-parking-service-route-root
      paths:
        - /api/yucca-parking
      strip_path: true

# Catering Service
- url: http://buy-oyc-catering:8087/api/yucca-catering/swagger/views/swagger-ui/res
  name: buy-oyc-catering-swagger-ui
  protocol: http
  routes:
    - name: buy-oyc-catering-service-route-swagger-ui
      paths:
        - /api/yucca-catering/swagger-ui/res
      strip_path: true
- url: http://buy-oyc-catering:8087/api/yucca-catering/swagger
  name: buy-oyc-catering-swagger
  protocol: http
  routes:
    - name: buy-oyc-catering-service-route-swagger
      paths:
        - /api/yucca-catering/swagger
      strip_path: true
- url: http://buy-oyc-catering:8087/api/yucca-catering/api
  name: buy-oyc-catering
  protocol: http
  routes:
    - name: buy-oyc-catering-service-route
      paths:
        - /api/yucca-catering/api
      strip_path: true
- url: http://buy-oyc-catering:8087/api/yucca-catering
  name: buy-oyc-catering-rest
  protocol: http
  routes:
    - name: buy-oyc-catering-service-route-root
      paths:
        - /api/yucca-catering
      strip_path: true

# API Service
- url: http://buy-oyc-api:8088/api/yucca-api/swagger/views/swagger-ui/res
  name: buy-oyc-api-swagger-ui
  protocol: http
  routes:
  - name: buy-oyc-api-service-route-swagger-ui
    paths:
    - /api/yucca-api/swagger-ui/res
    strip_path: true
- url: http://buy-oyc-api:8088/api/yucca-api/swagger
  name: buy-oyc-api-swagger
  protocol: http
  routes:
  - name: buy-oyc-api-service-route-swagger
    paths:
    - /api/yucca-api/swagger
    strip_path: true
- url: http://buy-oyc-api:8088/api/yucca-api/api
  name: buy-oyc-api
  protocol: http
  routes:
  - name: buy-oyc-api-service-route
    paths:
    - /api/yucca-api/api
    strip_path: true
- url: http://buy-oyc-api:8088/api/yucca-api
  name: buy-oyc-api-rest
  protocol: http
  routes:
    - name: buy-oyc-api-service-route-root
      paths:
        - /api/yucca-api
      strip_path: true

This mapping is loaded directly for the integration tests. If we just run our setup locally, we use a different approach.
First we run docker-compose up. This starts up everything and all the containers until the setup stabilises. Once the start-up has finished, we need to inject this configuration file manually into Kong. For that, Kong makes a small utility available called deck. Please check their website to find out more about how to install it. In the Makefile I’ve created, there is a target we can use to do this automatically, where I’m using a small bash polling trick to find out whether Kong has started or not. The polling itself is located in the kong_wait.sh file:

#!/bin/bash

function checkServiceByNameAndMessage() {
    name=$1
    message=$2
    printf "%s." "$name"
    docker-compose logs "$name" &> "logs"
    string=$(cat logs)
    echo "$string"
    counter=0
    while [[ "$string" != *"$message"* ]]
    do
      printf "."
      docker-compose logs "$name" &> "logs"
      string=$(cat logs)
      sleep 1
      counter=$((counter+1))
      if [ $counter -eq 200 ]; then
          echo "Failed after $counter tries! Cypress tests mail fail!!"
          echo "$string"
          exit 1
      fi
      if [[ "$string" = *"[PostgreSQL error] failed to retrieve PostgreSQL"* ]]; then
          echo "Failed PostgreSQL connection after $counter tries! Cypress tests mail fail!!"
          echo "$string"
          exit 1
      fi
    done
    counter=$((counter+1))
    echo "succeeded $name Service after $counter tries!"
}

checkServiceByNameAndMessage kong 'init_worker_by_lua'

Finally, in the Makefile we can see this snippet:

create-folders:
    mkdir -p kong_prefix_vol kong_tmp_vol kong_data_vol
set-permissions:
    if [[ -d kong_data_vol ]]; then sudo chmod -R 777 kong_data_vol; else mkdir kong_data_vol && sudo chmod -R 777 kong_data_vol; fi
    if [[ -d kong_tmp_vol ]]; then sudo chmod -R 777 kong_tmp_vol; else mkdir kong_tmp_vol && sudo chmod -R 777 kong_tmp_vol; fi
    if [[ -d kong_prefix_vol ]]; then sudo chmod -R 777 kong_prefix_vol; else mkdir kong_prefix_vol && sudo chmod -R 777 kong_prefix_vol; fi
docker: create-folders set-permissions
    docker-compose up -d --build --remove-orphans

What’s important to know is that the kong.yaml file is a configuration file for the endpoints. For rate limiting, we can use REST calls against Kong’s Admin API.
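Once everything is up and running (which we will do in the next section), the Admin API on port 8001 can be used to check what deck actually loaded. For example:

curl -s http://localhost:8001/services
curl -s http://localhost:8001/routes

Both endpoints are part of Kong’s Admin API and simply list the services and routes currently known to the gateway.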

5. Running Kong

As I mentioned before, port 8000 is crucial in order to access the different APIs and let Kong perform gateway filtering on the requests. In this whole project, we are looking at rate-limiting. Rate-limiting can be applied to paths and to the different APIs individually. We can see every definition of these APIs using another Kong API, the Admin API, which in our configuration is open on port 8001. In this section we are going to run our application, so for now please run this command at the root:

make docker

This will run the docker command we saw earlier. If all runs smoothly, you should get all containers running. Once this is done, please open your browser at http://localhost:8001. Here you’ll find a list of all sorts of configuration inside Kong. This is just a warm-up look. Now please have a look at this URL: http://localhost:8001/plugins. You should see a list of plugins. One of them could look something like this:

[Image: plugin list showing the rate-limiting plugin configuration]

This means that rate-limiting is active and, in this case, we have a limitation of 1 request per second and 10000 requests per hour. It is also important to know at this point that Kong will not allow you to make conflicting configurations. For example, if you only allow 1 request per hour, it makes no sense to allow more than that per second: even a single request per second could already add up to 60 * 60 = 3600 requests per hour. If you configure a per-second limit that conflicts with the hourly one, Kong will rightfully let you know that the second limitation conflicts with the hour limitation, and it will return a message like this one:

[Image: Kong error message about conflicting rate limits]

But for now, we have no plugin configured, so let’s start. In your command line, please run the following:

curl -i -X POST http://localhost:8001/services/buy-oyc-ticket-service/plugins \
    --data "name=rate-limiting"  \
    --data "config.second=1" \
    --data "config.hour=10000" \
    --data "config.policy=local" \
    --data "config.fault_tolerant=true" \
    --data "config.hide_client_headers=false"

This means that the API now has a plugin that, in practice, only allows up to 1 request per second to be pushed through the gateway. Essentially we are limiting our tickets to being created at a rate of 1 per second. All other requests will get an error. In this case, we want to be as transparent as possible in our analysis and in how we evaluate the requests. This is why we set the values local, true, and false for config.policy, config.fault_tolerant and config.hide_client_headers respectively.
Now we are ready to start Locust. On another command line, please run the following:

make locust-start

This will start Locust. If you do not know Locust, it is essentially a load-testing tool. Find out how to install it directly at the source. We aren’t interested in making performance tests here. Instead, we want to see in practice how rate-limiting works. Perhaps now is a good time to have a look at our test cases. Let’s first have a look at the task script in Python, located at /locust/welcome:

import requests
from locust import HttpUser, task, constant_throughput

class BuyOddYuccaConcert(HttpUser):
    wait_time = constant_throughput(1)

    @task
    def yucca_ticket_welcome_message(self):
        send_payload(
            name="будинок",
            address="будинок Адреса",
            birth_date="1979-01-01",
            concert_id="5359A368-CA49-4027-BC25-F375E3EA2463",
            drink_id="B2A5E349-76E7-4CD6-8105-308D1BC94953",
            meal_id="59B97053-37CF-4FAF-AB50-E77CEF8E8CC8")
        send_payload(
            name="Home",
            address="Home Address",
            birth_date="1979-01-01",
            concert_id="2E4522B1-D9FF-4B2B-9FFA-052CBAD9D5F2",
            drink_id="B2A5E349-76E7-4CD6-8105-308D1BC94953",
            meal_id="59B97053-37CF-4FAF-AB50-E77CEF8E8CC8")


These ids are references to data in our database. I am fixing the task to run roughly once per second. In Locust terms, this means that every Locust user performs this task about once per second. Since the task performs two POST requests, we are making roughly 2 requests per second per user. The requests are fixed. The function being called makes a call to the API:

def send_payload(name, address, birth_date, concert_id, drink_id, meal_id):
    payload = {
        "name": name,
        "address": address,
        "birthDate": birth_date,
        "concertDays": [
            {
                "concertId": concert_id
            }
        ],
        "meals": [
            {
                "mealId": meal_id
            }
        ],
        "drinks": [
            {
                "drinkId": drink_id
            }
        ],
        "parkingReservation": [
            {
                "carParkingId": 1
            }
        ]
    }
    r = requests.post('http://localhost:8000/api/yucca-api/api', json=payload)
    print(
        f"Person: {name} living in {address} born on {birth_date}, "
        f"just reserved concert {concert_id} with drink {drink_id} and meal {meal_id}, "
        f"Status Code: {r.status_code}, "
        f"Response: {r.json()}")

The payload is just a representation of the general format for one person making a reservation for one drink, one meal, one parking space and one concert day. We are going to use the following data to make sure we can perform our tests:

insert into ticket.car_parking (id, parking_number)
values ('E3BB8287-8F4F-477B-AFDF-44D78665A08C', 1);
insert into ticket.car_parking (id, parking_number)
values ('23745A3C-9426-4E92-A940-89E07C2ED24D', 2);
insert into ticket.car_parking (id, parking_number)
values ('27728F31-244A-4469-81AC-86B332E7BD1B', 3);
insert into ticket.car_parking (id, parking_number)
values ('0892A233-4973-4477-A6DA-91E44E1386D3', 4);
insert into ticket.car_parking (id, parking_number)
values ('61BF1E61-A7C2-4D6E-951A-8E97BD4E4FB3', 5);

insert into ticket.concert_day(id, name, description, concert_date)
values ('5359A368-CA49-4027-BC25-F375E3EA2463', 'Jamala', '', now());
insert into ticket.concert_day(id, name, description, concert_date)
values ('2E4522B1-D9FF-4B2B-9FFA-052CBAD9D5F2', 'Kalush Orchestra', '', now());

insert into ticket.drink(id, name, width, height, shape, volume, price)
values ('2377198D-9E41-4134-8E89-ABD66FE0C59B', 'Varenukha', 10, 10, 10, 10, 10);
insert into ticket.drink(id, name, width, height, shape, volume, price)
values ('B2A5E349-76E7-4CD6-8105-308D1BC94953', 'Uzvar', 10, 10, 10, 10, 10);

insert into ticket.meal(id, coupon , box_type, discount, price, processed)
values ('59B97053-37CF-4FAF-AB50-E77CEF8E8CC8', gen_random_uuid(), 'XS', 10, 10, false);
insert into ticket.meal(id, coupon , box_type, discount, price, processed)
values ('4581DECF-7740-44E8-8B2C-B7EC0FEE31C3', gen_random_uuid(), 'XS', 10, 10, false);

Now we are ready to fire our Locust tests. If you look at your command line, you will see logs like this one:

[Image: Locust start-up logs in the terminal]

If you are running a good terminal, you should be able to click on the link and go straight to http://localhost:8089:

[Image: the Locust web interface at http://localhost:8089]

Just leave the defaults the way they are. It will start one user and spawn more users at a rate of 1 per second. Since we configured Kong to accept at most 1 request per second on the yucca-api gateway, we are right at the edge of what the gateway accepts, and we will get errors. Our logs will look like this:

[Image: logs showing requests being rejected by the rate limit]

So Kong, now configured to accept only one request per second, won’t be able to accept 2 requests per second, which, if you noticed from before, is exactly what our Locust task generates. Let’s leave Locust running.
In your command line, you should have gotten something like this when you ran the first curl request that created the plugin:

[Image: response from the plugin creation request, including the plugin id]

What this means is that we have activated a plugin named rate-limiting; this name is also a key in the map of plugins that we can activate in Kong. We now want to change the rate, so we need to remove this plugin and create a new one. For this, we need to indicate the id of the plugin we want to remove. In our case, it is 0ff09073-3e16-4442-ac78-428b25818b7f:

curl -X DELETE http://localhost:8001/plugins/0ff09073-3e16-4442-ac78-428b25818b7f

Now I’ll show you how we can change this live, on the fly. Just make sure that Locust is still running. Run this command now:

curl -i -X POST http://localhost:8001/services/buy-oyc-ticket-service/plugins \
    --data "name=rate-limiting"  \
    --data "config.second=2" \
    --data "config.hour=10000" \
    --data "config.policy=local" \
    --data "config.fault_tolerant=true" \
    --data "config.hide_client_headers=false"

You should not be getting any errors now:

[Image: logs showing requests now succeeding]

From this point onwards, you should understand how to change the rate for other requests. Feel free to give it a go and try different combinations on all sub-requests.
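For instance, we established earlier that the parking segment should get a much lower rate. A hypothetical configuration for it could look like the one below; the service name buy-oyc-parking is illustrative, so check http://localhost:8001/services for the exact name in your setup:

curl -i -X POST http://localhost:8001/services/buy-oyc-parking/plugins \
    --data "name=rate-limiting"  \
    --data "config.second=1" \
    --data "config.hour=100" \
    --data "config.policy=local" \
    --data "config.fault_tolerant=true" \
    --data "config.hide_client_headers=false"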

6. Questions and Answers

"How do we know when the ticket is complete in the back end?"

Since we process the ticket asynchronously, there is no immediate way of knowing this. The last thing we want is to join all of these processes and wait for them to finish somewhere. It is also not really possible, since the different services should be allowed to run on different machines, areas, regions and domains, making multithreaded control impossible. What can be done is some sort of polling system that checks when all the parts of the ticket have been received and persisted correctly into the database. It could be a separate service that crawls through the database and verifies this information.
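A rough sketch of what such a verification service could look like follows. The repository names and the existsByReference query methods are hypothetical; the project does not implement this part:

@Singleton
class ReservationStatusChecker(
    private val concertDayReservations: ConcertDayReservationRepository,
    private val drinkReservations: DrinkReservationRepository,
    private val mealReservations: MealReservationRepository
) {
    // true only when every part of the ticket has reached the database
    suspend fun isComplete(reference: UUID): Boolean =
        concertDayReservations.existsByReference(reference) &&
            drinkReservations.existsByReference(reference) &&
            mealReservations.existsByReference(reference)

    // poll once per second until the ticket is complete or we give up
    suspend fun awaitCompletion(reference: UUID, attempts: Int = 30): Boolean {
        repeat(attempts) {
            if (isComplete(reference)) return true
            delay(1000)
        }
        return false
    }
}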

7. Conclusion

Thanks for following this project with me and for being open to seeing how everything works. I hope I have given you a good introduction to the basics of rate-limiting in Kong. With Kong we can do much more than just rate-limiting; there are endless choices and possibilities.
Regarding Micronaut, I truly believe that it is a great competitor to Spring, and I especially like that it brings my focus to engineering problems rather than the current hype around code beauty. This is something I didn’t really experience with Spring. With Micronaut, just as an example, I was forced to think about annotation processors and select which of them were important for my development. This is something I have not seen in other enterprise frameworks. Where Micronaut forces us to think about what we do at a very low level, other frameworks seem to do a lot of things for us, which can be a bittersweet present.
Regarding Kotlin, this project led me to further deepen my doubts about this new language and what seems to be a technology sweeping everyone off their feet. I have to be honest here: if I compare the way Java has been developing with the way Kotlin has burst onto the IT scene, my soul remains divided. Although there is a lot of beauty in this new language, it is starting to look more like the Beauty of the JVM world rather than an engineering-oriented language. Using things like by and reified, and not having to think about whether I’m implementing or extending, has removed quite a bit of the lexicon of the language, and we need lexicon and semantics in our thought process. This whole philosophy brought by Kotlin about "not having to" also spreads to the utility functions that we now write as extension functions. Sometimes you see them in interfaces, classes, companion objects, or wherever they can be, and you’ll find them very much compliant with Murphy’s Law, for better or worse. Now, is there a problem with this? Not directly, but I have already witnessed too many style discussions in the Kotlin world that I didn’t see in the Java world. For example, using findById() is "so ugly, how could you do that?", because "findByIdOrNull is more Kotlin-like". There are studies showing that shortcuts and fragmenting a language (not just a programming language) can potentially impair the way we think, because they limit our expression choices. But hey, I’m just not allowing myself to jump to the idea that Java is worse than Kotlin. In this project, there is a lot that made my life easier. But I noticed, for example, that I didn’t even have to think about primitives. No int, no long, no float, no double. It’s just a dynamic that, since I’ve been working in Kotlin for my projects, I don’t even think about. But I know what they are. Because of this, I always have this question: if I didn’t have a background in Java, would I understand what I now understand about Kotlin? I have more and more the feeling that I wouldn’t. I am, however, impressed with the idea of coroutines, but with Project Loom lingering for years and coming out now, what should I conclude about the very similar ideas the two share?
As a final note, I have to say that I am extremely pleased to have studied Micronaut and Kong for this article. I’m not sure what to think about Kotlin, though. I think my next project will be in the latest Java version, and maybe then I’ll compare Java back to Kotlin. Using Redis as a quick solution for a queue system in my project worked like a charm, and I am very happy with it.
I have placed all the source code of this application on GitLab.
I hope that you have enjoyed this article as much as I enjoyed writing it.
I’d love to hear your thoughts on it, so please leave your comments below.
Thank you for reading!
