James

Kafka: Event-Driven Microservices

Introduction:
Microservices architecture has gained significant popularity because it helps teams build scalable and maintainable systems. In this article, we'll explore event-driven microservices using Apache Kafka as the central event bus. This approach enables the construction of highly scalable, loosely coupled, real-time systems.

What are Event-Driven Microservices in Kafka?

Event-driven microservices in Kafka refer to a software architecture pattern where individual microservices communicate asynchronously through events using Apache Kafka as the central event bus. In this pattern, services are decoupled and interact with each other by producing and consuming events.

How does it work?

  1. Event Production:
    Each microservice produces events when certain actions or state changes occur within its domain. These events represent meaningful occurrences or updates within the service. Microservices publish these events to Kafka topics, specifying the topic that corresponds to the type of event being produced.

  2. Event Consumption:
    Other microservices interested in specific types of events subscribe to the relevant Kafka topics and consume the events. Within each partition, they receive events in the order they were produced, and they process them independently. Consuming microservices can update their internal state, trigger further business logic, or produce new events in response (a minimal consumer sketch follows this list).

  3. Event Schema and Serialization:
    Kafka events are typically serialized in a specific format like JSON or Avro. Microservices need to agree on the schema and serialization format to effectively produce and consume events. Using a schema registry or versioning strategies helps maintain backward compatibility when evolving the event structure.

  4. Event Sourcing and Replay:
    Kafka's durability and retention capabilities make it suitable for event sourcing. Event sourcing involves persisting the state of an application as a sequence of events in Kafka. This enables auditing, rebuilding state, and maintaining a historical record of changes. Microservices can replay events to reconstruct their state at any point in time.

  5. Scalability and Fault Tolerance:
    Kafka's distributed nature allows for high scalability and fault tolerance. Multiple instances of a microservice can consume events from Kafka topics in parallel, enabling horizontal scaling. Kafka's replication provides data durability, so events are not lost even when individual brokers fail.

  6. Event-Driven Processing and Analytics:
    Event-driven microservices architecture allows for real-time processing and analytics on the event stream. Microservices can analyze patterns, generate insights, and trigger actions based on events they consume. For example, services might detect anomalies, generate alerts, update real-time dashboards, or feed data into machine learning models for predictions.
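
To make event consumption and deserialization concrete, here is a minimal consumer sketch using the kafka-python client (the same client used in the producer example later in this article). The topic name, group id, and broker address are illustrative assumptions:

from kafka import KafkaConsumer
import json

# Illustrative settings: adjust the broker address and topic for your setup
consumer = KafkaConsumer(
    'user_created',
    bootstrap_servers='localhost:9092',
    group_id='user-service',        # instances sharing a group_id split the partitions between them
    auto_offset_reset='earliest',   # with no committed offset, start from the oldest retained event (useful for replay)
    value_deserializer=lambda v: json.loads(v.decode('utf-8')),
)

for message in consumer:
    user = message.value
    # Update internal state, trigger business logic, or produce new events here
    print('Consumed user_created event:', user)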

Integrating with Webhooks and Databases

Webhooks and databases can be integrated into an event-driven microservices architecture to extend it with real-time notifications and data persistence. Here's how they can be used:

Webhooks:
Webhooks are a way for applications to receive real-time notifications or callbacks when specific events occur. They can be integrated into event-driven microservices as follows:
Event Notification: Instead of directly consuming events from Kafka topics, a microservice can register a webhook callback URL with another microservice or third-party service. When a relevant event occurs, the producing microservice publishes the event to Kafka and also triggers a webhook notification to the registered URL. The consuming microservice can then process the event by handling the webhook request (a registration sketch follows this section).

External Service Integration: Webhooks can be used to integrate with external services that don't natively support Kafka. For example, when an event occurs in your microservice, you can publish the event to Kafka and simultaneously send a webhook notification to an external service to keep them updated in real-time.

Decoupled Communication: Webhooks provide a loosely coupled communication mechanism between microservices. Instead of direct service-to-service communication, one microservice can notify another through webhooks, allowing the services to evolve independently and reducing tight coupling.
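
As a sketch of the event-notification pattern, a producing service might expose a registration endpoint and notify every registered URL after publishing to Kafka. The /webhooks route and the in-memory list are illustrative assumptions; a real service would persist registrations:

from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

# Illustrative in-memory registry; a real service would persist registrations
registered_webhooks = []

@app.route('/webhooks', methods=['POST'])
def register_webhook():
    # A consumer registers its callback URL here
    url = request.get_json()['url']
    registered_webhooks.append(url)
    return jsonify({'message': 'Webhook registered'}), 201

def notify_webhooks(event):
    # Call this after publishing the event to Kafka
    for url in registered_webhooks:
        try:
            requests.post(url, json=event, timeout=5)
        except requests.RequestException:
            pass  # a real service would log the failure and retry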

Database Integration:
Databases play a crucial role in event-driven microservices architecture for data persistence and maintaining application state. Here's how databases can be used:
Stateful Microservices: Some microservices may require maintaining their state for various reasons. Databases can be used to store and retrieve the state information of these microservices. When events are consumed, the microservice can update its state in the database accordingly.

Event Sourcing: Databases are often used for event sourcing, where events are stored in an event log or event store. Instead of relying solely on Kafka, events can be persisted in a database to support event replay, auditing, and rebuilding the state of microservices.

Data Enrichment: Microservices may need to enrich consumed events with additional data from external sources or reference data. Databases can store and serve this reference data, allowing microservices to enrich the event payload during processing (sketched below).

Caching: Databases can be used as a caching layer to improve performance and reduce the load on microservices. Microservices can cache frequently accessed data from events in the database, avoiding repeated processing of the same events.
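
As a sketch of the data-enrichment pattern, a consuming service might look up reference data for each order event before processing it. The products table, its columns, and the connection settings are illustrative assumptions:

import psycopg2

# Illustrative connection settings
conn = psycopg2.connect(host='localhost', dbname='shop',
                        user='postgres', password='postgres')

def enrich_order(order):
    # Attach reference data from the database to a consumed order event
    with conn.cursor() as cursor:
        cursor.execute('SELECT category, supplier FROM products WHERE name = %s',
                       (order['product'],))
        row = cursor.fetchone()
    if row:
        order['category'], order['supplier'] = row
    return order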

Creating an event-driven microservice with webhooks and a database

You can find the complete code at github.com/James-Wachuka/event-driven-microservices.

Example: Python code that publishes Kafka events and sends webhook notifications.

from kafka import KafkaProducer
import requests
import json

# Kafka producer configuration
bootstrap_servers = 'localhost:9092'

# Webhook URLs
user_created_webhook = 'http://localhost:5000/webhook/user_created'
order_placed_webhook = 'http://localhost:5000/webhook/order_placed'

# Create Kafka producer
producer = KafkaProducer(bootstrap_servers=bootstrap_servers,
                         value_serializer=lambda v: json.dumps(v).encode('utf-8'))

# Publish user created event
user = {'id': 11, 'name': 'King'}
producer.send('user_created', value=user)

# Send webhook notification for user created event
requests.post(user_created_webhook, json=user)

# Publish order placed event
order = {'id': 11, 'product': 'sofa', 'amount': 100000}
producer.send('order_placed', value=order)

# Send webhook notification for order placed event
requests.post(order_placed_webhook, json=order)

# Close the producer connection
producer.close()


A Flask app with two webhook endpoints handles the user_created and order_placed events.

from flask import Flask, request

app = Flask(__name__)

@app.route('/webhook/user_created', methods=['POST'])
def handle_user_created_webhook():
    payload = request.get_json()
    # Perform necessary actions or trigger other processes based on the user created event
    print('New user created:', payload)
    # ...
    return 'Webhook received and processed successfully', 200

@app.route('/webhook/order_placed', methods=['POST'])
def handle_order_placed_webhook():
    payload = request.get_json()
    # Perform necessary actions or trigger other processes based on the order placed event
    print('New order placed:', payload)
    # ...
    return 'Webhook received and processed successfully', 200

if __name__ == '__main__':
    app.run()
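
To try the two scripts together, assuming Kafka is running on localhost:9092 and topic auto-creation is enabled (or the topics already exist), start the Flask app first so the webhook URLs resolve, then run the producer script; each handler should print the payload it receives.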


Database example: the code below connects to a PostgreSQL database, creates tables for users and orders, and consumes the user_created and order_placed topics with a Kafka consumer. Each consumed message is inserted into the corresponding table and the change is committed.

from kafka import KafkaConsumer
import psycopg2
import json

# Connect to PostgreSQL (illustrative connection settings)
conn = psycopg2.connect(host='localhost', dbname='shop',
                        user='postgres', password='postgres')

# Create the tables if they don't exist yet
cursor = conn.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS users (id INT PRIMARY KEY, name TEXT)')
cursor.execute('CREATE TABLE IF NOT EXISTS orders (id INT PRIMARY KEY, product TEXT, amount INT)')
conn.commit()
cursor.close()

# A single consumer subscribed to both topics
consumer = KafkaConsumer('user_created', 'order_placed',
                         bootstrap_servers='localhost:9092',
                         value_deserializer=lambda v: json.loads(v.decode('utf-8')))

# Consume user_created and order_placed events
for message in consumer:
    event = message.value
    cursor = conn.cursor()

    if message.topic == 'user_created':
        # Parameterized query: the driver escapes the values
        cursor.execute('INSERT INTO users (id, name) VALUES (%s, %s)',
                       (event['id'], event['name']))
        print('New user created:', event)
    elif message.topic == 'order_placed':
        cursor.execute('INSERT INTO orders (id, product, amount) VALUES (%s, %s, %s)',
                       (event['id'], event['product'], event['amount']))
        print('New order placed:', event)

    conn.commit()
    cursor.close()

# Close the database connection when the consumer loop is stopped
conn.close()
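
The %s placeholders let the database driver escape values taken from event payloads, which protects against SQL injection, and subscribing a single consumer to both topics avoids stalling when one topic is idle.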

Using a Flask app to perform CRUD operations on the shared database:


# API endpoint to update user information
@app.route('/users/<user_id>', methods=['PUT'])
def update_user(user_id):
    try:
        # Extract updated user information from the request
        user_data = request.get_json()
        name = user_data['name']

        # Update user information in the database (parameterized to avoid SQL injection)
        cursor = conn.cursor()
        cursor.execute('UPDATE users SET name = %s WHERE id = %s',
                       (name, int(user_id)))
        conn.commit()
        cursor.close()

        # Publish user_updated event to Kafka
        event_data = {'id': int(user_id), 'name': name}
        producer.send('user_updated', value=event_data)

        return jsonify({'message': 'User updated successfully'})

    except Exception as e:
        return jsonify({'error': str(e)}), 500
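
Assuming the app runs on localhost:5000, the endpoint can be exercised with a short script (the user id matches the earlier producer example):

import requests

# Illustrative call against a locally running app
resp = requests.put('http://localhost:5000/users/11', json={'name': 'Queen'})
print(resp.status_code, resp.json())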

Event-driven microservices with Kafka offer several benefits that make them significant in real-world scenarios:

Scalability and Flexibility:
By decoupling services through events, the architecture becomes more scalable and flexible. Each service can be developed, deployed, and scaled independently, allowing teams to work on different services concurrently. Services can be added, modified, or removed without impacting the entire system. Kafka's distributed nature ensures that events are reliably delivered to all interested services, even in high-traffic scenarios.

Loose Coupling and Resilience:
The event-driven approach promotes loose coupling between services. Services only need to know the structure of the events they produce and consume, not the specific implementation details of other services. This loose coupling makes the system more resilient to changes, as services can evolve independently without disrupting others. If a service is temporarily unavailable, events can be stored in Kafka until the service recovers.

Real-Time Processing and Analytics:
With an event-driven architecture powered by Kafka, it becomes easier to perform real-time processing and analytics on the event stream. Services can consume events, analyze patterns, generate insights, and trigger actions in real-time. For example, a service might detect anomalies, generate alerts, update real-time dashboards, or feed data into machine learning models for predictions.

Integration and Ecosystem:
Kafka has a rich ecosystem and supports a wide range of connectors, frameworks, and tools. This makes it easier to integrate with other systems and services, such as databases, data warehouses, stream processing frameworks, and monitoring tools. Kafka Connect enables seamless integration with external systems, while Kafka Streams and other stream processing frameworks provide powerful capabilities for data processing and transformations.

Conclusion:
Event-driven microservices with Kafka provide a powerful approach to building scalable, resilient, and loosely coupled systems. This pattern is widely adopted in various domains, including e-commerce, finance, telecommunications, logistics, and IoT, where responsiveness, scalability, and adaptability are crucial for success.
