Introduction
In today's interconnected world, where millions of users access the internet simultaneously, it is crucial to regulate the amount of traffic flowing through a network. The process of controlling this flow is called rate limiting, and it helps prevent overload and denial-of-service (DoS/DDoS) attacks. Several algorithms exist for regulating the amount of traffic that passes through a network. In this article, we will discuss some popular rate-limiting algorithms, with Go code using Redis as the backing store.
- Fixed Window Algorithm
- Sliding Logs Algorithm
- Leaky Bucket Algorithm
- Sliding Window Algorithm
- Token Bucket Algorithm
Let's look at them one by one.
Fixed Window Algorithm:
The fixed window algorithm is the simplest rate-limiting algorithm. In this algorithm, a predetermined amount of traffic can pass through a network in a specific time window. For example, if the rate limit is set at 10 requests per second, only ten requests will pass through the network in a one-second window. If the number of requests exceeds the set limit, the remaining requests are either dropped or put in a queue for later processing.
func fixedWindowAlgorithm(client *redis.Client, key string, limit int64, window time.Duration) bool {
    currentTime := time.Now()
    // Key for the current fixed window (the window index is appended to the key)
    keyWindow := fmt.Sprintf("%s_%d", key, currentTime.Unix()/int64(window.Seconds()))
    // How many requests have already been counted in this window?
    count, err := client.Get(keyWindow).Int64()
    if err != nil && err != redis.Nil {
        panic(err)
    }
    if count >= limit {
        return false
    }
    // Record this request and make sure the counter expires with the window
    pipe := client.TxPipeline()
    pipe.Incr(keyWindow)
    pipe.Expire(keyWindow, window)
    if _, err = pipe.Exec(); err != nil {
        panic(err)
    }
    return true
}
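To show where a limiter like this sits in a real service, here is a minimal sketch of an HTTP middleware that calls the function above. It assumes the fixedWindowAlgorithm defined above, an already-connected go-redis client, and the usual net/http import; the limit of 10 requests per second and the use of the remote address as the rate-limit key are illustrative choices only, not part of the original code.
// rateLimit wraps a handler and rejects requests once a caller exceeds the limit.
func rateLimit(client *redis.Client, next http.HandlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        // Use the caller's address as the rate-limit key (illustrative only).
        if !fixedWindowAlgorithm(client, r.RemoteAddr, 10, time.Second) {
            http.Error(w, "too many requests", http.StatusTooManyRequests)
            return
        }
        next(w, r)
    }
}

// Wiring it up (myHandler is any handler of your own):
//   http.HandleFunc("/", rateLimit(client, myHandler))
//   http.ListenAndServe(":8080", nil)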
Sliding Logs Algorithm:
The sliding logs algorithm is an improvement over the fixed window algorithm. Instead of resetting a counter at fixed boundaries, it keeps a timestamped log of every request and, on each new request, counts how many log entries fall within the trailing window. If that count exceeds the limit, the request is dropped. This gives finer control than fixed windows, which can admit up to twice the limit when a burst straddles a window boundary.
func slidingLogsAlgorithm(client *redis.Client, key string, limit int64, window time.Duration) bool {
    currentTime := time.Now().UnixNano()
    keyLogs := fmt.Sprintf("%s_logs", key)
    pipe := client.TxPipeline()
    // Drop log entries that have aged out of the trailing window
    pipe.ZRemRangeByScore(keyLogs, "0", fmt.Sprintf("%d", currentTime-int64(window)))
    // Log the current request, using its timestamp as the score
    pipe.ZAdd(keyLogs, &redis.Z{
        Score:  float64(currentTime),
        Member: currentTime,
    })
    // Count the requests that remain in the window
    countCmd := pipe.ZCard(keyLogs)
    // Let the whole log expire once traffic stops
    pipe.Expire(keyLogs, window)
    if _, err := pipe.Exec(); err != nil {
        panic(err)
    }
    if countCmd.Val() > limit {
        return false
    }
    return true
}
Leaky Bucket Algorithm:
The leaky bucket algorithm is a popular rate-limiting algorithm that permits short bursts of traffic while capping the sustained rate. Picture a bucket with a fixed capacity: every incoming request adds one unit of "water" to the bucket, and the bucket leaks at a constant rate. If the bucket is full when a request arrives, that request is dropped. In the snippet below, the bucket is approximated with a sorted set of recent request timestamps: entries older than the leak interval drop out of the set, and a request is rejected when the set already holds a full burst.
func leakyBucketAlgorithm(client *redis.Client, key string, burst int64, rate time.Duration) bool {
    keyBucket := fmt.Sprintf("%s_bucket", key)
    currentTime := time.Now().UnixNano()
    pipe := client.TxPipeline()
    // Requests older than the leak interval have drained out of the bucket
    pipe.ZRemRangeByScore(keyBucket, "0", fmt.Sprintf("%d", currentTime-int64(rate)))
    // Pour the current request into the bucket
    pipe.ZAdd(keyBucket, &redis.Z{
        Score:  float64(currentTime),
        Member: currentTime,
    })
    // Check how full the bucket is now
    countCmd := pipe.ZCard(keyBucket)
    if _, err := pipe.Exec(); err != nil {
        panic(err)
    }
    // Reject the request if the bucket has overflowed its burst capacity
    if countCmd.Val() > burst {
        return false
    }
    return true
}
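The Redis snippet above models the leak implicitly, by letting old timestamps age out of the set. For comparison, here is a minimal in-memory sketch of the textbook form with an explicit water level and drain rate; the struct and its field names are illustrative and are not part of the Redis code in this article.
// leakyBucket drains one request per leakEvery interval; requests that would overflow it are rejected.
type leakyBucket struct {
    capacity  int64         // maximum number of requests the bucket can hold
    level     int64         // current water level
    leakEvery time.Duration // one request drains out per interval
    lastLeak  time.Time
}

// newLeakyBucket returns a bucket that starts empty.
func newLeakyBucket(capacity int64, leakEvery time.Duration) *leakyBucket {
    return &leakyBucket{capacity: capacity, leakEvery: leakEvery, lastLeak: time.Now()}
}

func (b *leakyBucket) allow(now time.Time) bool {
    // Drain the bucket for the time that has passed since the last leak.
    drained := int64(now.Sub(b.lastLeak) / b.leakEvery)
    if drained > 0 {
        b.level -= drained
        if b.level < 0 {
            b.level = 0
        }
        b.lastLeak = b.lastLeak.Add(time.Duration(drained) * b.leakEvery)
    }
    // Reject the request if the bucket is already full.
    if b.level >= b.capacity {
        return false
    }
    b.level++
    return true
}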
Sliding Window Algorithm:
The sliding window algorithm also tracks the number of requests made over a recent time period, but the window slides continuously with the current time instead of resetting at fixed boundaries. If the number of requests inside the window exceeds the limit, the new request is dropped. Because the window moves with each request, it smooths out the edge-of-window bursts that the fixed window algorithm allows. The implementation below uses the same sorted-set bookkeeping as the sliding log; a lighter, counter-based variant is sketched after it.
func slidingWindowAlgorithm(client *redis.Client, key string, limit int64, window time.Duration) bool {
    keyWindow := fmt.Sprintf("%s_window", key)
    currentTime := time.Now().UnixNano()
    pipe := client.TxPipeline()
    // Remove old entries from the window
    pipe.ZRemRangeByScore(keyWindow, "0", fmt.Sprintf("%d", currentTime-int64(window)))
    // Add the current request to the window
    pipe.ZAdd(keyWindow, &redis.Z{
        Score:  float64(currentTime),
        Member: currentTime,
    })
    // Count the number of requests in the current window
    countCmd := pipe.ZCard(keyWindow)
    // Execute the pipeline
    if _, err := pipe.Exec(); err != nil {
        panic(err)
    }
    // Check if the number of requests is within the limit
    if countCmd.Val() > limit {
        return false
    }
    return true
}
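The sorted-set version above stores every request, just like the sliding log. In practice the sliding window is often approximated with only two counters, weighting the previous fixed window by how much of it still overlaps the sliding window. Here is a minimal sketch of that variant; it assumes the same go-redis client style as the snippets above, and the function and key names are illustrative.
// slidingWindowCounter weights the previous fixed window's count by the
// fraction of it that still overlaps the sliding window.
func slidingWindowCounter(client *redis.Client, key string, limit int64, window time.Duration) bool {
    now := time.Now().UnixNano()
    windowNanos := window.Nanoseconds()
    current := now / windowNanos
    currKey := fmt.Sprintf("%s_%d", key, current)
    prevKey := fmt.Sprintf("%s_%d", key, current-1)
    // Missing keys simply count as zero requests.
    currCount, _ := client.Get(currKey).Int64()
    prevCount, _ := client.Get(prevKey).Int64()
    // Fraction of the previous window that still falls inside the sliding window.
    carry := 1 - float64(now%windowNanos)/float64(windowNanos)
    estimated := float64(currCount) + float64(prevCount)*carry
    if int64(estimated) >= limit {
        return false
    }
    // Record this request and keep the counter around for two windows.
    pipe := client.TxPipeline()
    pipe.Incr(currKey)
    pipe.Expire(currKey, 2*window)
    if _, err := pipe.Exec(); err != nil {
        panic(err)
    }
    return true
}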
Token Bucket Algorithm:
The token bucket algorithm is related to the leaky bucket algorithm, but it regulates traffic with tokens. Tokens are added to a bucket at a fixed refill rate, up to the bucket's capacity, and each request must consume a token to pass through; if the bucket is empty, the request is dropped. For example, with a capacity of 5 tokens refilled at 1 token per second, a client can burst 5 requests at once but can then sustain at most 1 request per second. This makes the token bucket more flexible than the leaky bucket, since it allows bursts up to the capacity while still capping the average rate.
func tokenBucketAlgorithm(client *redis.Client, key string, capacity int64, rate time.Duration) bool {
    keyTokens := fmt.Sprintf("%s_tokens", key)
    keyRefill := fmt.Sprintf("%s_last_refill", key)
    currentTime := time.Now().UnixNano()
    // Read the current token count; a missing key means a full bucket
    tokens, err := client.Get(keyTokens).Int64()
    if err == redis.Nil {
        tokens = capacity
    } else if err != nil {
        panic(err)
    }
    lastRefill, err := client.Get(keyRefill).Int64()
    if err != nil && err != redis.Nil {
        panic(err)
    }
    if lastRefill == 0 {
        lastRefill = currentTime
    }
    // Add one token for every full refill interval that has elapsed, capped at capacity
    refilled := (currentTime - lastRefill) / int64(rate)
    if refilled > 0 {
        tokens += refilled
        if tokens > capacity {
            tokens = capacity
        }
        lastRefill += refilled * int64(rate)
    }
    // Reject the request if no tokens are available
    if tokens <= 0 {
        client.Set(keyTokens, tokens, 0)
        client.Set(keyRefill, lastRefill, 0)
        return false
    }
    // Consume one token for this request
    client.Set(keyTokens, tokens-1, 0)
    client.Set(keyRefill, lastRefill, 0)
    return true
}
All algorithms in a single program:
You can test all the algorithms in one place. Just run the code below with a Redis database connected.
Docker Compose file for Redis.
filename: docker-compose.yml
Command to start Redis: docker-compose up
version: '3'
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
volumes:
  redis-data:
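With Redis running, the program below can be built with the standard Go toolchain, for example by initialising a module (any module name works, e.g. go mod init ratelimiter), fetching the client library with go get github.com/go-redis/redis/v8, and then running go run main.go.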
package main
import (
"context"
"fmt"
"time"
"github.com/go-redis/redis/v8"
)
var (
ctx = context.Background()
)
// Fixed Window Algorithm using Redis
func fixedWindowAlgorithm(client *redis.Client, key string, limit int64, window time.Duration) bool {
currentTime := time.Now().UnixNano()
keyWindow := fmt.Sprintf("%s:%d", key, currentTime/window.Nanoseconds())
count, err := client.Incr(ctx, keyWindow).Result()
if err != nil {
panic(err)
}
if count > limit {
return false
}
// Expire the key after the fixed window duration
if err := client.Expire(ctx, keyWindow, window).Err(); err != nil {
panic(err)
}
return true
}
// Sliding Logs Algorithm using Redis
func slidingLogsAlgorithm(client *redis.Client, key string, limit int64, window time.Duration) bool {
keyWindow := fmt.Sprintf("%s_window", key)
currentTime := time.Now().UnixNano()
// Trim the old entries from the window
if _, err := client.ZRemRangeByScore(ctx, keyWindow, "0", fmt.Sprintf("%d", currentTime-int64(window))).Result(); err != nil {
panic(err)
}
// Add the current request to the window
if _, err := client.ZAdd(ctx, keyWindow, &redis.Z{
Score: float64(currentTime),
Member: currentTime,
}).Result(); err != nil {
panic(err)
}
// Get the count of requests within the window
count, err := client.ZCard(ctx, keyWindow).Result()
if err != nil && err != redis.Nil {
panic(err)
}
if count > limit {
return false
}
return true
}
// Leaky Bucket Algorithm using Redis: an explicit water level that drains at a fixed rate
func leakyBucketAlgorithm(client *redis.Client, key string, capacity int64, leakInterval time.Duration) bool {
    keyLevel := fmt.Sprintf("%s_level", key)
    keyLeak := fmt.Sprintf("%s_last_leak", key)
    currentTime := time.Now().UnixNano()
    // Read the current water level and the time of the last leak
    level, err := client.Get(ctx, keyLevel).Int64()
    if err != nil && err != redis.Nil {
        panic(err)
    }
    lastLeak, err := client.Get(ctx, keyLeak).Int64()
    if err != nil && err != redis.Nil {
        panic(err)
    }
    if lastLeak == 0 {
        lastLeak = currentTime
    }
    // Leak one request out of the bucket for every interval that has elapsed
    leaked := (currentTime - lastLeak) / leakInterval.Nanoseconds()
    if leaked > 0 {
        level -= leaked
        if level < 0 {
            level = 0
        }
        lastLeak += leaked * leakInterval.Nanoseconds()
    }
    // Accept the request only if the bucket still has room
    accepted := level < capacity
    if accepted {
        level++
    }
    // Persist the new level and the leak timestamp
    client.Set(ctx, keyLevel, level, 0)
    client.Set(ctx, keyLeak, lastLeak, 0)
    return accepted
}
// Sliding Window Algorithm using Redis
func slidingWindowAlgorithm(client *redis.Client, key string, limit int64, window time.Duration) bool {
keyWindow := fmt.Sprintf("%s_window", key)
currentTime := time.Now().UnixNano()
// Remove old entries from the window
if _, err := client.ZRemRangeByScore(ctx, keyWindow, "0", fmt.Sprintf("%d", currentTime-int64(window))).Result(); err != nil {
panic(err)
}
// Add the current request to the window
if _, err := client.ZAdd(ctx, keyWindow, &redis.Z{
Score: float64(currentTime),
Member: currentTime,
}).Result(); err != nil {
panic(err)
}
// Count the number of requests in the current window
count, err := client.ZCard(ctx, keyWindow).Result()
if err != nil && err != redis.Nil {
panic(err)
}
if count > limit {
return false
}
return true
}
// Token Bucket Algorithm using Redis: tokens refill at a fixed rate and each request consumes one
func tokenBucketAlgorithm(client *redis.Client, key string, capacity int64, refillTime time.Duration, refillAmount int64) bool {
    keyTokens := fmt.Sprintf("%s_tokens", key)
    keyRefill := fmt.Sprintf("%s_last_refill", key)
    currentTime := time.Now().UnixNano()
    // Read the current token count; a missing key means a full bucket
    tokens, err := client.Get(ctx, keyTokens).Int64()
    if err == redis.Nil {
        tokens = capacity
    } else if err != nil {
        panic(err)
    }
    lastRefill, err := client.Get(ctx, keyRefill).Int64()
    if err != nil && err != redis.Nil {
        panic(err)
    }
    if lastRefill == 0 {
        lastRefill = currentTime
    }
    // Add refillAmount tokens for every refillTime that has elapsed, capped at capacity
    intervals := (currentTime - lastRefill) / refillTime.Nanoseconds()
    if intervals > 0 {
        tokens += intervals * refillAmount
        if tokens > capacity {
            tokens = capacity
        }
        lastRefill += intervals * refillTime.Nanoseconds()
    }
    // Reject the request if no tokens are available
    if tokens <= 0 {
        client.Set(ctx, keyTokens, tokens, 0)
        client.Set(ctx, keyRefill, lastRefill, 0)
        return false
    }
    // Consume one token for this request
    client.Set(ctx, keyTokens, tokens-1, 0)
    client.Set(ctx, keyRefill, lastRefill, 0)
    return true
}
func main() {
// Connect to Redis
client := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
DB: 0,
})
// Test Fixed Window Algorithm
for i := 0; i < 10; i++ {
fmt.Printf("Fixed Window Algorithm request %d: %t\n", i+1, fixedWindowAlgorithm(client, "fixed_window", 5, time.Second))
time.Sleep(time.Millisecond * 100)
}
// Test Sliding Logs Algorithm
for i := 0; i < 10; i++ {
fmt.Printf("Sliding Logs Algorithm request %d: %t\n", i+1, slidingLogsAlgorithm(client, "sliding_logs", 5, time.Second))
time.Sleep(time.Millisecond * 100)
}
// Test Leaky Bucket Algorithm
for i := 0; i < 10; i++ {
fmt.Printf("Leaky Bucket Algorithm request %d: %t\n", i+1, leakyBucketAlgorithm(client, "leaky_bucket", 5, time.Second))
time.Sleep(time.Millisecond * 100)
}
// Test Sliding Window Algorithm
for i := 0; i < 10; i++ {
fmt.Printf("Sliding Window Algorithm request %d: %t\n", i+1, slidingWindowAlgorithm(client, "sliding_window", 5, time.Second))
time.Sleep(time.Millisecond * 100)
}
// Test Token Bucket Algorithm
for i := 0; i < 10; i++ {
fmt.Printf("Token Bucket Algorithm request %d: %t\n", i+1, tokenBucketAlgorithm(client, "token_bucket", 5, time.Second, 1))
time.Sleep(time.Millisecond * 100)
}
// Close the Redis connection
if err := client.Close(); err != nil {
panic(err)
}
}
Output: each printed line reports whether the corresponding request was allowed (true) or rate limited (false).
Conclusion
Rate-limiting algorithms are essential in today's interconnected world to regulate the flow of traffic and prevent overload and denial-of-service attacks. The five algorithms discussed in this article - fixed window, sliding logs, leaky bucket, sliding window, and token bucket - are widely used in the industry and provide varying degrees of flexibility and control. Choosing the right algorithm for your specific use case is essential for both performance and protection.