Demystifying Prometheus and PromQL: A Beginner's Guide to Monitoring and Querying Metrics

1. Introduction

Monitoring and alerting play a critical role in ensuring the reliability and performance of software systems. In today's complex and distributed environments, it is essential to collect, analyze, and visualize metrics to identify and resolve issues proactively. Prometheus, an open-source monitoring and alerting toolkit, offers powerful features to meet these needs effectively.

2. Importance of Monitoring and Alerting

Monitoring and alerting provide several benefits for software systems:

  • Proactive issue detection and resolution.
  • Performance optimization and resource allocation.
  • Capacity planning and scalability.
  • Troubleshooting and root cause analysis.

3. What is Prometheus?


3.1 Origin and History:
Prometheus was initially developed at SoundCloud in 2012 to monitor their dynamic, containerized infrastructure. It was later donated to the CNCF in 2016, gaining popularity and an active community.

3.2 Key Features and Benefits:

1. Time-series Data Model: Prometheus stores metrics with labels and timestamps, enabling efficient storage and analysis over time.
2. PromQL: A flexible querying language for extracting insights and performing operations on metrics.
3. Service Discovery and Dynamic Configuration: Built-in support for monitoring dynamic environments like Kubernetes.
4. Alerting and Notifications: Define alert rules and receive notifications via various channels.
5. Data Export and Integration: Export metrics to external systems and integrate with tools like Grafana.
6. Scalability and Performance: Designed to handle large-scale deployments with real-time monitoring capabilities.
7. Active Community and Ecosystem: Supported by a vibrant community, ensuring ongoing development and availability of extensions.

4. Prometheus Architecture


4.1 Components of Prometheus:

Prometheus Server: The central component responsible for data ingestion, storage, querying, and alerting. It scrapes metrics from configured targets and stores them in a time-series database. The server exposes an HTTP API for querying and retrieving metrics.
Exporters: Specialized components or libraries that expose metrics from various systems in a format that Prometheus can scrape. Exporters allow Prometheus to collect metrics from different sources such as web servers, databases, operating systems, and cloud platforms.
Pushgateway: A tool that allows pushing metrics from short-lived or batch jobs into Prometheus. This is useful for cases where scraping metrics periodically from these jobs is not feasible, such as cron jobs or ephemeral tasks.
Alertmanager: Handles alerts generated by Prometheus based on predefined rules. It allows operators to define alert rules and configure notification channels. Alertmanager coordinates the sending of alert notifications through email, Slack, PagerDuty, or other supported channels.
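
To make the wiring between these components concrete, here is a minimal, illustrative prometheus.yml sketch; the job names, hostnames, and ports are placeholders, not a recommendation for any real setup:

global:
  scrape_interval: 15s          # default scrape frequency for all jobs

scrape_configs:
  - job_name: "node"            # pull metrics from a Node Exporter
    static_configs:
      - targets: ["node-exporter:9100"]

  - job_name: "pushgateway"     # pull metrics that short-lived jobs pushed earlier
    honor_labels: true
    static_configs:
      - targets: ["pushgateway:9091"]

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager:9093"]   # where firing alerts are sent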

4.2 Pull-based Model Explained:
Prometheus follows a pull-based model for data collection. The Prometheus Server periodically scrapes metrics from configured targets by making HTTP requests to their endpoints. It retrieves the metrics in the Prometheus exposition format, which includes metric names, labels, and values (an explicit timestamp is optional; Prometheus normally records the time of the scrape). The server then stores the scraped samples in its time-series database for querying and analysis.
The pull-based model provides flexibility and resilience. Prometheus determines the frequency of scraping for each target, allowing customization based on the importance and stability of the metrics, and it copes gracefully when targets have different scrape intervals or are temporarily unavailable.
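
For reference, the text a target serves on its /metrics endpoint looks roughly like this (the metric names and values here are illustrative):

# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get", status_code="200"} 1027
http_requests_total{method="post", status_code="500"} 3
# HELP cpu_usage_percent Current CPU utilization in percent.
# TYPE cpu_usage_percent gauge
cpu_usage_percent{host="server1", region="us-east"} 75.6

Each line carries a metric name, optional labels, and a sample value; the # HELP and # TYPE comments describe the metric for humans and for Prometheus itself.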

4.3 Role of Exporters:
Exporters play a crucial role in the Prometheus ecosystem. They act as adapters between Prometheus and various systems, providing the capability to expose metrics in a format Prometheus can understand. Exporters can be official integrations, community-contributed projects, or custom-built solutions specific to the system being monitored. They enable Prometheus to collect metrics from a wide range of sources without requiring modifications to the source systems.

4.4 Service Discovery
Service discovery is another essential aspect of Prometheus architecture. It simplifies the process of dynamically identifying and monitoring targets in a dynamic environment like Kubernetes. Prometheus supports multiple service discovery mechanisms, such as DNS-based service discovery, Kubernetes service discovery, file-based discovery, and more. These mechanisms automate the process of target discovery and ensure that Prometheus can adapt to changes in the environment, allowing for seamless monitoring of dynamic infrastructures.
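
As an illustration, a Kubernetes discovery block in prometheus.yml might look like the following sketch (the role and relabeling rules vary per cluster; the prometheus.io/scrape annotation is just a common opt-in convention):

scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod                 # discover every pod in the cluster
    relabel_configs:
      # keep only pods that opt in via the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"

As pods come and go, Prometheus updates its target list automatically; no static target configuration is needed.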

5. Why Prometheus is worth considering

  • Dimensional data model
  • Powerful query language
  • Simple architecture and efficient server
  • Service discovery integration

5.1 Data model

The data model in Prometheus revolves around time series, which are sequences of data points representing the values of a specific metric over time. Each data point consists of a timestamp and a corresponding value. Time series in Prometheus are uniquely identified by metric names and a set of key-value pairs called labels.

What is a time series?


Identifier:
The identifier for a time series is formed by the metric name and its associated labels. It is the combination of these two elements that distinguishes one time series from another. For example, if we have a metric named cpu_usage_percent and two labels, host and region, the identifier for a specific time series would be the metric name along with the label values. These identifiers enable Prometheus to differentiate and query specific subsets of time series.
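
For instance, the following two selectors name two different time series of the same metric (server2 and us-west are made-up values for illustration):

cpu_usage_percent{host="server1", region="us-east"}
cpu_usage_percent{host="server2", region="us-west"}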

Timestamp:
The timestamp in a time series represents the point in time at which a data point was recorded. Prometheus stores timestamps as Unix timestamps with millisecond precision, i.e. the number of milliseconds elapsed since January 1, 1970 (UTC). The timestamp indicates when a specific value in the time series was measured, allowing for the analysis of time-based patterns and trends.

Values:
The values in a time series correspond to the recorded measurements or observations of the metric at specific timestamps. In Prometheus, every sample value is a 64-bit floating-point number representing the metric's magnitude at that instant. For example, in the case of cpu_usage_percent, the values could be decimals indicating the percentage of CPU utilization.


Example:

Let's consider an example time series for the metric cpu_usage_percent with labels host="server1" and region="us-east". Here's how it might look:

cpu_usage_percent{host="server1", region="us-east"}

Used as a PromQL query, this selector returns the time series for the cpu_usage_percent metric whose labels match host="server1" and region="us-east", i.e. the CPU usage data specifically for "server1" in the "us-east" region.
Output:
Timestamp: 1626937200
Value: 75.6


Timestamp: 1626938100
Value: 81.2


Timestamp: 1626939000
Value: 78.9

In this example, the output shows multiple data points representing the cpu_usage_percent metric for the time series with labels host="server1" and region="us-east". Each data point consists of a timestamp (e.g., 1626937200) and the corresponding value (e.g., 75.6). The timestamps represent specific points in time when the CPU usage percentage values were recorded for the given label combination.
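
If you want to inspect the raw samples and their timestamps yourself, you can turn the same selector into a range vector (the 5m window below is arbitrary):

cpu_usage_percent{host="server1", region="us-east"}[5m]

This returns every sample recorded for that series over the last five minutes as value/timestamp pairs, much like the output above.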

5.2 Querying

PromQL

PromQL is a functional query language designed for time series data in Prometheus. It excels at performing computations and transformations on time series, making it great for analyzing monitoring data. Unlike SQL-style languages, PromQL focuses on time series computations rather than structured tabular data. With PromQL, you can aggregate, filter, and perform mathematical calculations on time series, enabling you to derive meaningful insights and perform advanced analysis on your monitoring data efficiently. Its intuitive syntax and functional approach make it a preferred choice for working with time series data in Prometheus.
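
As a quick taste, here are three toy expressions built from the metrics used elsewhere in this post, showing filtering, aggregation, and arithmetic on a rate:

cpu_usage_percent{region="us-east"}     # filter: only series with this label value
avg by (region) (cpu_usage_percent)     # aggregate: average CPU usage per region
rate(http_requests_total[5m]) * 60      # arithmetic: requests per minute over the last 5 minutes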

Now let's work through a couple of fuller PromQL queries.

Example:

List all partitions in my infrastructure with more than 100 GB capacity that are not mounted on root

node_filesystem_size_bytes{mountpoint!="/"} / 1e9 > 100

Explanation:
node_filesystem_size_bytes: This metric represents the size of the filesystem in bytes.
{mountpoint!="/"}: This selector filters out the root filesystem, as indicated by the mountpoint label not equal to "/".
/ 1e9: This division converts the size from bytes to gigabytes.
> 100: This condition filters the time series based on a capacity threshold of 100GB.
The query selects all the time series that satisfy the condition of having a filesystem size greater than 100GB (> 100) and are not mounted on the root (mountpoint!="/") in your infrastructure.

Output (note that the arithmetic drops the metric name, so the resulting series are identified by their labels only):

{mountpoint="/home"}  150
{mountpoint="/data"}  250

For the second example, let's write a PromQL query that calculates the ratio of request errors across all service instances:

sum(rate(http_requests_total{status_code=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))


Explanation:

http_requests_total: This metric represents the total number of HTTP requests made to the service.
{status_code=~"5.."}: This selector filters the metric to include only requests with status codes starting with "5", indicating server errors.
rate(http_requests_total{status_code=~"5.."}[5m]): This calculates the per-second rate of HTTP requests with status codes indicating server errors over the past 5 minutes.
rate(http_requests_total[5m]): This calculates the per-second rate of all HTTP requests over the past 5 minutes.

The query divides the rate of HTTP requests with status codes indicating server errors by the rate of all HTTP requests to calculate the ratio of request errors across all service instances.
The result of this query will be a decimal value representing the ratio of request errors. For example, a value of 0.05 indicates that 5% of the requests across all service instances resulted in server errors.
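
If you want the same error ratio per instance rather than across the whole fleet (assuming each series carries the usual instance label that Prometheus attaches at scrape time), you can aggregate both sides by that label:

sum by (instance) (rate(http_requests_total{status_code=~"5.."}[5m]))
  / sum by (instance) (rate(http_requests_total[5m]))

One caveat: an instance that served no 5xx responses in the window produces no series on the left-hand side, so it simply disappears from the result instead of showing a ratio of 0.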
