João Esperancinha

How to Manage Your APIs Across Multi-Cloud Environments

1. Introduction

To manage APIs across multi-cloud environments, we need an API gateway. Kong provides this through Kong KONNECT. In this article, we'll look at how to make that a reality and how to work with it.

2. Configure API Gateways in Each Cloud Provider

I made a video about API gateways a while ago that you can find over here:

The video contains a detailed description of how to set up an API gateway that can serve as the entry point for your application, whether it contains many services or just one. Cloud providers may offer many alternatives for deploying an API gateway, but the principle is always the same. The idea is that you can use a gateway just like the one in my wildlife-safety-monitor project on GitHub. This is the diagram of that project:

[Diagram: the wildlife-safety-monitor project architecture]

What we see in this diagram is an internal network managed by Kuma that has a gateway. But we are probably more interested in a simpler example like this one:

[Diagram: a simpler setup with a single API gateway in front of a service]

These two examples are only here to show how an API gateway can be implemented. One such setup is the one I talk about in this video:

https://youtu.be/rJKbAzjb5lQ

To provision an API gateway via KONNECT, we first log into our console and create something called a data plane node, over here:

[Screenshot: creating a new data plane node in Kong KONNECT]

Then we just need to fill out some fields:

[Screenshot: the data plane node configuration form]

Once we have selected the version of the Kong Gateway we want to use and the platform we want to run it on, we just need to copy the generated script to our command line and run the container with the API gateway. The command passes a lot of inputs to the container, and this is what sets up the API gateway container within our cluster or environment.
When we run that script locally, it connects to KONNECT and we can then save that node.
We could use this with Kubernetes, docker-compose, or other container management software. In the video about gRPC that I mentioned above, I explain further how I achieve this, but what is essential is to understand how this works.
Kong KONNECT is essentially a cloud configuration web application that lets us configure pretty much anything related to our gateway, including request protocol translation, rate limiting, different security protections, load balancing, and so on. The point of Kong KONNECT is simply to provide an entry point to the settings of our running container. That container uses keys to know exactly which Kong KONNECT user and control plane it should take its requests and configuration from. This allows a seamless flow of changes from a user who wants to configure the gateway to the gateway itself. Without Kong KONNECT, the only way was to use the command line or to update the Kong Gateway config file and restart the container, which is not desirable, especially in high-availability systems. And this is where we can segue into what I did for the wildlife-safety-monitor project over here on GitHub:

KONG_ROLE=data_plane
KONG_DATABASE=off
KONG_VITALS=off
KONG_CLUSTER_MTLS=pki
KONG_CLUSTER_CONTROL_PLANE=4674f82943.eu.cp0.konghq.com:443
KONG_CLUSTER_SERVER_NAME=4674f82943.eu.cp0.konghq.com
KONG_CLUSTER_TELEMETRY_ENDPOINT=4674f82943.eu.tp0.konghq.com:443
KONG_CLUSTER_TELEMETRY_SERVER_NAME=4674f82943.eu.tp0.konghq.com
KONG_CLUSTER_CERT="-----BEGIN CERTIFICATE-----FAKE-----END CERTIFICATE-----"
KONG_CLUSTER_CERT_KEY="-----BEGIN PRIVATE KEY-----FAKE-----END PRIVATE KEY-----"
KONG_LUA_SSL_TRUSTED_CERTIFICATE=system
KONG_KONNECT_MODE=on
KONG_CLUSTER_DP_LABELS=created-by:quickstart,type:docker-linuxdockerOS

On that project, there is a Makefile with a target that generates this file from a script.sh file, where we only need to paste the contents given to us on the Kong KONNECT website. Once we generate the file, we see that connecting to Kong KONNECT from the Docker container running on our local machine isn't complicated at all: the container establishes a connection with KONNECT almost immediately, which means that just by logging into Kong KONNECT we are already able to configure our gateway online.
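As a minimal sketch of what that boils down to, assuming the variables above were saved to a file named kong-dp.env (a name chosen here purely for illustration), the data plane container could be started like this:

# Start a Kong data plane that registers itself with Kong KONNECT.
# kong-dp.env is assumed to contain the KONG_* variables shown above.
docker run -d --name kong-dp \
  --env-file kong-dp.env \
  -p 8000:8000 \
  -p 8443:8443 \
  kong/kong-gateway:3.7.0.0

If the variables are correct, the node should appear as connected in the KONNECT data plane nodes list a few seconds later.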

2.1. API Gateway Federation and Synchronization

API gateway federation is critical because it allows us to group data plane nodes into logical groups. We keep the same granular control over our data planes, but we gain the option of distributing configuration across different data plane nodes.
Nowadays, we need to ensure that our applications are maintained as uniformly as possible, whether between multiple replicas behind a load balancer or between separate APIs. Because our APIs almost always run in distributed systems, it is important to keep them uniform to minimize maintenance costs: a change in one API can then be applied to another API using the same process. This means making sure, as much as possible, that we keep the same versions of dependencies, databases, and so on. The process that allows this is known as API synchronization, and it is something that Kong KONNECT supports very well.
There are several techniques to do that with Kong KONNECT: Automated API Deployment, Service Mesh configuration, and Centralized management.
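As a side note, the Automated API Deployment route is usually scripted with Kong's decK CLI against KONNECT. A rough sketch, where kong.yaml is a hypothetical declarative configuration file, KONNECT_TOKEN is an access token created in KONNECT, and exact flag names vary between decK versions:

# Preview and then apply a declarative configuration to KONNECT.
deck diff -s kong.yaml --konnect-token "$KONNECT_TOKEN"   # show what would change
deck sync -s kong.yaml --konnect-token "$KONNECT_TOKEN"   # apply the configuration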
For Kong KONNECT, however, we are only interested in Centralized management at this point. To use it, we need to create a couple of data plane nodes. In this case, we first need to create a new Gateway Service. The Gateway Service controls data plane nodes:

[Screenshot: the Gateway Manager overview in Kong KONNECT]

Once we select the Kong Gateway option, we can create a new Kong Gateway service:

[Screenshot: creating a new Kong Gateway service]

Upon creating it, this will immediately lead us to create another data plane node. If we add one more, we can then create a data plane node group. In this case, make sure to change the host ports to 8001 and 8444 if you are running on your local machine:

docker run -d \
-e "KONG_ROLE=data_plane" \
-e "KONG_DATABASE=off" \
-e "KONG_VITALS=off" \
-e "KONG_CLUSTER_MTLS=pki" \
-e "KONG_CLUSTER_CONTROL_PLANE=c0088bf6d5.eu.cp0.konghq.com:443" \
-e "KONG_CLUSTER_SERVER_NAME=c0088bf6d5.eu.cp0.konghq.com" \
-e "KONG_CLUSTER_TELEMETRY_ENDPOINT=c0088bf6d5.eu.tp0.konghq.com:443" \
-e "KONG_CLUSTER_TELEMETRY_SERVER_NAME=c0088bf6d5.eu.tp0.konghq.com" \
-e "KONG_CLUSTER_CERT=-----BEGIN CERTIFICATE-----FAKE-----END CERTIFICATE-----" \
-e "KONG_CLUSTER_CERT_KEY=-----BEGIN PRIVATE KEY-----FAKE-----END PRIVATE KEY-----" \
-e "KONG_LUA_SSL_TRUSTED_CERTIFICATE=system" \
-e "KONG_KONNECT_MODE=on" \
-p 8001:8000 \
-p 8444:8443 \
kong/kong-gateway:3.7.0.0

Notice that the only ports changed are the host-side ports, 8001 and 8444.
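To quickly check that the new node is serving traffic on the remapped port, we can hit the proxy directly; until routes are configured, Kong is expected to answer with a 404 saying that no route matched (this assumes the first node kept the default 8000/8443 mapping from the quickstart script):

# First data plane node (default host ports)
curl -i http://localhost:8000/
# Second data plane node (remapped host ports)
curl -i http://localhost:8001/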
Once this is done, we should now have two different data plane nodes running, and we should be able to see that in KONNECT:

[Screenshot: two connected data plane nodes listed in KONNECT]

This means that we now have two gateways running locally. The next step is part of the reason why KONNECT can make life easier. At this point, the two containers aren't aware of each other locally, and they don't have to be. They can, however, be federated inside a data plane node group, and to do that we only have to create a group:

[Screenshot: creating a control plane group]

One important thing to remember here is that, while the services keep running on our machine and stay connected to KONNECT, we stop seeing them listed in the data plane nodes window as soon as they disconnect or stop running. The list is built from the connected data plane nodes, and those connect via certificates that are stored in KONNECT itself. This is where you can find them:

[Screenshot: the Gateway overview with the Data Plane Certificates action]

In the Gateway overview, there is an actions button with a list of Data Plane Certificates. That option leads us to:

[Screenshot: the Data Plane Certificates list]

This means that removing a data plane node from KONNECT only effectively happens when these certificates are removed. Beyond that, there is no control over how data plane nodes should stop connecting to KONNECT. We cannot ban a specific node directly, but if we remove the certificate from our KONNECT account, the node will no longer be able to connect. It works as a ban, and this is an important aspect, because many people instinctively assume that a node can be banned directly, when in fact that can only be achieved by removing the certificate from Kong KONNECT.
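If you keep a local copy of the certificate you pasted into KONG_CLUSTER_CERT (saved here, purely for illustration, as cluster.crt), a quick way to tell your certificates apart before deleting one is to print their details with openssl:

# Show the subject, validity dates and SHA-256 fingerprint of a data plane certificate
openssl x509 -in cluster.crt -noout -subject -dates -fingerprint -sha256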
Back to the federation: we need to click on Gateway Manager and then select the option to create a Control Plane Group:

[Screenshot: the Control Plane Group option in Gateway Manager]

This option allows us to federate gateway managers. We don't federate data plane nodes directly; it is the gateway manager that gets federated:

[Screenshot: adding gateway managers to the control plane group]

Once the federation has been created, the two gateway services are federated, and each of them still has a data plane node running. But we are not there yet, precisely because our nodes are still running:

[Screenshot: the data plane nodes still connected to their original services]

This means that we first need to stop them. With Docker, a docker stop command is enough, as sketched below; this step really depends on your platform. Once the nodes are stopped, we should be able to federate our services:

[Screenshot: the federated services inside the control plane group]
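For reference, stopping the two local data plane containers with Docker can look like this (the container IDs are placeholders for whatever docker ps reports on your machine):

# Find the running Kong data plane containers...
docker ps --filter "ancestor=kong/kong-gateway:3.7.0.0"
# ...and stop them, replacing the IDs with the ones listed above
docker stop <container-id-1> <container-id-2>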

We don’t see any data plan nodes running at the moment because we have stopped them. Let’s start them now. If you did this now, you probably noticed that you do not see any nodes being connected. This happens because those nodes now need to be associated with the federation and not with only the service.
The new nodes need to be connected with the federation and not the service and so to do that let’s first create a new data plane node in the default gateway manager that now exists under the federation:

[Screenshot: creating a data plane node under the control plane group]

When that node is created, we should see a new data plane node under the federation:

[Screenshot: the new data plane node connected under the federation]

This means that our services now serve only an organizational function in terms of configuration. We can still separate them by purpose, but the configuration is applied globally under the federation. We have to be careful here, because configuring the same plugin in two different services will result in a conflict. For example, if we enable a plugin such as Basic Auth in one service:

[Screenshot: enabling the Basic Auth plugin on one service]

If we do that for the other service, we will get a conflict like this:

[Screenshot: the conflict error when enabling the same plugin on the other service]

We cannot add plugins to the federation itself. Federations are read-only abstractions that give us a sense of unity between the different services and the federated nodes:

[Screenshot: the read-only plugin view at the federation level]

Federation in Kong KONNECT is a broad term, and it is what allows us to configure the different plugins centrally. The configuration of each plugin is beyond the scope of this article, but in terms of authentication there are also related concepts like federated identity, federated security, and cloud communication, and they all follow a similar principle.
Using Kong Mesh is another way to create a kind of federated environment, with the difference that the configuration does not have to be centralized.
I talk about Kuma Meshes over here:

Kong Mesh is an extension of Kuma: Kuma is open source, while Kong Mesh is an enterprise product, which means additional features are available. To start with it, let's first create, not a regular control plane, but a Global Control Plane:

[Screenshots: creating a Global Control Plane in Kong KONNECT]

The configuration of a global control plane is very straightforward and after configuring it we should come to this screen:

[Screenshot: the Global Control Plane overview]

Applying policies, and how to do that in a fine-grained way, is beyond the purpose of this article.

2.2. API Lifecycle Management

API Lifecycle Management in Kong KONNECT comprises eight phases: Design and Development, Deployment and Scaling, Operations and Monitoring, Security and Compliance, Versioning and Evolution, Decommissioning, Automation and CI/CD Integration, and Community and Ecosystem. These phases are controlled and configured through plugins and the use of federation. It is important to highlight that security, versioning, and keeping integrity across APIs matter a great deal, and, as we have seen above, this is straightforward to do in Kong as long as we apply the right plugins and create federations and services correctly segregated for their different purposes.

2.2.1. Security and Access Control

Security and access control can be implemented by applying different plugins to our gateway manager or federation. We have many to choose from in this list:

[Screenshot: the authentication plugins available in Kong KONNECT]

And we can also enforce security policies like the ones available from these plugins:

[Screenshot: the security plugins available in Kong KONNECT]
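For example, once an authentication plugin such as Basic Auth is enabled on a service or route, a quick check from the command line could look like this (the port, route path, and credentials are placeholders for whatever you configured):

# Without credentials, Kong should reject the request with a 401
curl -i http://localhost:8000/my-route
# With a consumer's credentials, the request should be proxied normally
curl -i -u my-user:my-password http://localhost:8000/my-route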

2.2.2. Monitoring and Analytics

We can configure analytics in Kong using these plugins:

[Screenshot: the analytics and monitoring plugins available in Kong KONNECT]

We can use these plugins to send monitoring data to a Datadog service, or to any other kind of service, using the StatsD plugin. The idea is to provide a universal way of exposing metrics for analysis that can be coupled with external tools.
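As a minimal local sketch, assuming we point the StatsD plugin at a collector running on our own machine, we could start the community statsd image from Docker Hub:

# Run a local StatsD collector listening on the default UDP port 8125
docker run -d --name statsd -p 8125:8125/udp statsd/statsd

The StatsD plugin would then be configured in KONNECT with this host and port; where the collector forwards the metrics (Graphite, a Datadog agent, the console, etc.) depends on its own backend configuration.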

2.2.3. Testing and Deployment

The deployment of the API gateway is much easier with Kong KONNECT because, once our API gateway is integrated into our system, which could be something like Kubernetes or Docker, as we have seen before, the updates and management are done in this centralized web application.

2.2.4. Disaster Recovery and High Availability

Disaster recovery is very important. Kong KONNECT acknowledges that, and that is why KONNECT never gives up on a node. Remember that, as mentioned above, a node never gets removed; we only remove its certificate to stop it from trying to connect. If we load balance across nodes, the system can keep waiting for that node to come back.
In terms of high availability, there are multiple strategies to achieve it, but we also need to make sure that our services do not suffer from a DDoS attack, which usually happens when a bad actor makes too many requests to a service. In that case, unprepared machines can be taken down, leading to service disruptions. To control traffic, there are many plugins to choose from:

[Screenshot: the traffic control plugins available in Kong KONNECT]

One of the most used plugins in this section is the rate-limiting plugin, which mitigates DDoS attacks by allowing traffic only up to a point. After that, a potential bad actor may insist as much as they want, but all of those requests will be denied until the request frequency drops back below the configured level. It still leads to some disruption, but the service doesn't go down, which eventually means a faster recovery of the system.
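As a quick way to see the plugin in action, assuming rate limiting is enabled on a route with a small per-minute limit, a simple loop against the proxy should start returning 429 responses once the limit is exceeded (the port, route path, and limit are placeholders):

# Fire 10 requests and print only the HTTP status codes;
# once the configured limit is hit, Kong answers with 429 Too Many Requests
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/my-route
done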

3. Conclusion

Kong KONNECT allows us to configure most of what we need for our API gateways directly and seamlessly.
You can learn more about Kong with some videos I have created on YouTube about these topics:

  1. Working with Insomnia and Inso

  2. gRPC with Kong Konnect

  3. Kuma

  4. Kong Konnect
