GitOps Approach to Configuration Management in Kong DB-less Mode

Suren Raju
6 min read · May 5, 2023


Kong Gateway is an open-source API gateway built on top of NGINX, which offers a rich set of features for managing, securing, and scaling API traffic. In this blog post, we will discuss configuration management for Kong in DB-less mode using a GitOps approach.

This blog post is based on the open-source Kong API Gateway version 3.0, leveraging its DB-less declarative configuration feature. The Kong API Gateway is deployed on Kubernetes using the official Kong Helm charts.

What is DB-less or declarative config mode in Kong?

  • Kong Gateway can run without a database, using only in-memory storage for entities; this is called DB-less mode.
  • When running Kong Gateway in DB-less mode, entities are configured in a separate configuration file, in YAML or JSON, using declarative configuration.
  • Declarative configuration is a way of defining the desired state of the system, as opposed to imperative configuration, which specifies how to achieve that state.

Declarative configuration format

The Kong Gateway declarative configuration format is based on YAML syntax and consists of lists of entities and their attributes. Here’s an example configuration that demonstrates the key features of this format:

_format_version: "3.0"
_transform: true
services:
- name: my-service
  url: https://example.com
  plugins:
  - name: key-auth
  routes:
  - name: my-route
    paths:
    - /

In this example, the configuration starts with two top-level fields _format_version and _transform. The _format_version field specifies the version of the format being used, while the _transform field enables automatic transformations of the configuration during the loading process.

The services section defines the API services that the gateway will proxy requests to. In this example, a single service named my-service is defined, with its URL set to https://example.com. The plugins field lists the plugins that are applied to the service. In this case, the key-auth plugin is configured to enable API key authentication.

The routes section defines how incoming requests are routed to the appropriate service. In the example, a single route named my-route is defined, with its path set to /. This means that any requests to the root URL of the API gateway will be forwarded to the my-service service.

GitOps Practice in Kong Gateway’s DB-less Mode

GitOps is a popular practice in modern DevOps workflows that involves using Git as the single source of truth for all infrastructure and application configuration. In the context of Kong Gateway’s DB-less mode, GitOps can be used to manage the declarative configuration file that defines the gateway’s entities and their attributes.

Here are the steps to apply GitOps practice in Kong DB-less mode:

Delivering routing configuration to the Kong pod using the GitOps approach
  1. Abstract the configuration into multiple YAML files, such as a global configuration file and a per-service configuration file. This way, developers can define minimal configuration for their services' routes and plugins separately from the global configuration.
  2. Use a centralized Git repository to store the configuration files. Developers commit their configuration changes to this repository, and the repository becomes the single source of truth for the configuration.
  3. Automatically synchronize the configuration files in the Git repository with Kong. This can be achieved by running a sidecar application alongside the Kong instances, which pulls the configuration from Git and syncs it with Kong using the Kong Admin API’s /config endpoint.
  4. To avoid unnecessary configuration pushes to the Kong Admin API, you can implement a comparison mechanism that fetches the configuration from Kong using the Admin API’s GET /config endpoint and compares it with the configuration in the Git repository. If there are any differences, the configuration is pushed to Kong. This comparison can run every few seconds or at any other configurable interval (for example, every 10 seconds); a simplified sketch of such a sync loop follows this list.
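
To make steps 3 and 4 concrete, here is a minimal sketch of such a sync sidecar in Python. It is an illustration, not the exact implementation: the Admin API address (http://localhost:8001), the path of the rendered configuration file (/config/kong.yml), and the use of a locally tracked hash instead of reading the live configuration back through GET /config are all assumptions made to keep the example self-contained.

# sync_sidecar.py - simplified sketch of the configuration sync loop.
# Assumptions: the merged declarative config is rendered by a Git checkout
# to /config/kong.yml, and the Kong Admin API listens on localhost:8001.
import hashlib
import time

import requests

ADMIN_API = "http://localhost:8001"
CONFIG_PATH = "/config/kong.yml"
POLL_INTERVAL_SECONDS = 10


def push_config(yaml_text: str) -> None:
    # POST the declarative configuration, as a string, to Kong's /config endpoint.
    resp = requests.post(f"{ADMIN_API}/config", json={"config": yaml_text}, timeout=30)
    resp.raise_for_status()


def main() -> None:
    last_pushed_hash = None
    while True:
        with open(CONFIG_PATH) as f:
            yaml_text = f.read()
        # Compare against the configuration pushed on the previous cycle.
        # The post describes comparing with the output of GET /config instead;
        # a local hash keeps this sketch short.
        current_hash = hashlib.sha256(yaml_text.encode()).hexdigest()
        if current_hash != last_pushed_hash:
            push_config(yaml_text)
            last_pushed_hash = current_hash
        time.sleep(POLL_INTERVAL_SECONDS)


if __name__ == "__main__":
    main()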

One example of how to abstract configuration is by defining a global configuration file for the authentication plugin. Developers can define their service-specific configuration in a separate YAML file. In this way, the authentication policy will be automatically applied to all services, while developers can specify additional routes for their services.

Let’s say that you want to enforce a global authentication policy across all your APIs, but you also want to allow developers to specify additional routes for their APIs. You can abstract the authentication configuration into two parts:

A global authentication configuration that applies to all APIs, defined in a YAML file called global.yml. This file might look something like this:

plugins:
- name: jwt
  config:
    header_names:
    - my-header

A per-service API route configuration that developers can define in a separate YAML file, called my-service.yml. This file might look something like this:

- name: my-route
  paths:
  - /my-route
  methods:
  - GET
  plugins:
  - name: cors
    config:
      origins:
      - example.com

Using this approach, developers can simply specify the routes for their service in the my-service.yml file, while the global authentication policy is applied automatically by your automation process.

In your automation process, you can merge these two configuration files into a single YAML file that can be applied to Kong using the declarative configuration format. Here’s an example of what that merged file might look like:

_format_version: "3.0"
plugins:
- name: jwt
  config:
    header_names:
    - my-header
services:
- name: my-service
  url: https://example.com
  routes:
  - name: my-route
    paths:
    - /my-route
    methods:
    - GET
    plugins:
    - name: cors
      config:
        origins:
        - example.com

In this merged configuration, the jwt plugin is defined globally, so it is applied to all services and routes by default, while the cors plugin is applied only to the my-route route.
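
The merge itself can be a small step in your automation pipeline. The following Python sketch illustrates one way to do it; the file layout (global.yml plus one routes file per service under services/), the SERVICE_URLS mapping, and the use of PyYAML are assumptions made for illustration rather than the exact process used in this post.

# merge_config.py - illustrative sketch of the merge step.
# Assumed layout: global.yml holds global plugins (e.g. jwt);
# services/<name>.yml holds the list of routes for one service.
import glob
import os

import yaml  # PyYAML

# Hypothetical mapping of service name to upstream URL; in practice this
# could live in another file in the repository.
SERVICE_URLS = {"my-service": "https://example.com"}


def merge(global_path: str = "global.yml", services_dir: str = "services") -> str:
    with open(global_path) as f:
        global_cfg = yaml.safe_load(f) or {}

    services = []
    for path in sorted(glob.glob(os.path.join(services_dir, "*.yml"))):
        name = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            routes = yaml.safe_load(f) or []
        services.append({
            "name": name,
            "url": SERVICE_URLS[name],  # raises if the service URL is unknown
            "routes": routes,
        })

    merged = {
        "_format_version": "3.0",
        "plugins": global_cfg.get("plugins", []),  # global plugins such as jwt
        "services": services,
    }
    return yaml.safe_dump(merged, sort_keys=False)


if __name__ == "__main__":
    print(merge())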

By using GitOps in Kong Gateway’s DB-less mode, you can simplify the process of managing the gateway’s configuration, enable collaboration between developers and operations teams, and ensure that changes to the gateway’s configuration are always tracked and version-controlled.

Challenges and Solutions

While applying the GitOps process to deliver routing config to Kong, we encountered two challenges related to memory usage and latency.

Memory Leak

One of the key pain points we encountered when running Kong at scale was related to memory usage. We noticed that the Nginx worker process memory usage would grow to the maximum allocated capacity, and the worker would get OOM-killed by the kernel. After investigating this issue, we discovered that calling the GET /config Admin API every 10 seconds was causing a memory leak: the memory usage of the workers grew without bound.

Memory Leak in Kong 3.0

https://github.com/Kong/kong/issues/10782

Reducing the interval from 10 seconds to a higher value improved the situation. Additionally, we learned that setting check_hash to 1 when pushing the configuration through the POST /config Admin API call fixed the problem, since it removed the need to poll GET /config at all. This parameter makes Kong compare the hash of the submitted configuration against that of the previous one. If the configuration is identical, Kong does not reload it and returns HTTP 304, avoiding the memory leak.

P99 Latency Issue

We observed a high global p99 Kong-added latency of 100 ms across the system, even for routes with very simple plugin configurations. We also noticed that when we applied the Redis-based rate-limiting plugin to our routes, Kong introduced 500 ms of p99 latency, which was not acceptable for our use case.

After spending a few weeks investigating the issue, we realized that polling the GET /config Admin API every 10 seconds was the root cause of the problem. Because of the event-driven architecture of Kong and Nginx, any operation that consumes significant time, even one unrelated to the current request, can add latency to the requests handled by the same worker.

One possible reason for the latency is that the GET /config API serializes a few megabytes of routing configuration from memory and writes it to the network. This work could delay other operations in the Nginx event loop, leading to the high latency we observed.

Instead of using the GET /config API to poll the configuration and check whether it differs from the configuration in Git, we set the check_hash flag to 1 in the POST /config Admin API call, letting Kong calculate the hash and compare the new and old configurations itself. This approach reduced the global p99 latency to under 10 ms and the Kong-based global rate-limiting p99 latency to under 50 ms.
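
For reference, here is a hedged sketch of that push in Python, reusing the assumed Admin API address from the earlier example. When the hash of the submitted configuration matches the one already loaded, Kong answers with HTTP 304 and skips the reload.

# Push the declarative configuration with check_hash=1. Kong compares the
# hash of the submitted config with the currently loaded one and returns
# HTTP 304 Not Modified when they match, so no reload takes place.
import requests


def push_config_if_changed(yaml_text: str, admin_api: str = "http://localhost:8001") -> bool:
    resp = requests.post(
        f"{admin_api}/config",
        params={"check_hash": "1"},
        json={"config": yaml_text},
        timeout=30,
    )
    if resp.status_code == 304:
        return False  # configuration unchanged; Kong did not reload
    resp.raise_for_status()
    return True  # new configuration accepted and loaded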

My Other Kong Blogs

Tick Tock Woes — Tackling Timer Troubles in Kong Production

Kong API Gateway Behind the Scenes: Overcoming Reliability Challenges

Optimizing Health Checks and Load Balancing in Kong API Gateway: Best Practices for Upstreams, Targets, and Active/Passive Health Checks

