From Zero to Observability: Your first steps sending OpenTelemetry data to an Observability backend

In formal terms, OpenTelemetry 🔭 is an open source framework for instrumenting, generating, collecting, and exporting telemetry data for applications, services, and infrastructure. It provides vendor-neutral tools, SDKs, and APIs for generating, collecting, and exporting telemetry data such as traces, metrics, and logs to any observability backend, including both open source and commercial tools.

While some concepts might seem straightforward to experienced engineers, I believe it’s important to share ideas in a way that’s inclusive and approachable. With that in mind, think of OpenTelemetry (a.k.a. OTel) as a universal translator for data from various applications and systems. Imagine you’re managing a group of machines or software programs, each speaking its own language. Clearly, you need to understand what they’re saying to monitor their performance and spot issues.

This is where OTel steps in to gather and standardize this data—things like error logs or performance metrics—and organizes it so you can send this data to a “central location” or backend for analysis. OTel transforms raw information into something clear and actionable, making it easier for users to gain deep visibility into their workloads, helping to observe, monitor, troubleshoot, and optimize software systems.

What to expect in this guide

This hands-on article will guide you through your first steps in observability: sending logs, metrics, and traces from Kubernetes-deployed applications to an observability backend/vendor using OTel. Whether you're a first-time user or an experienced engineer seeking a fast, hands-on setup, this is your chance to enhance your OTel and Kubernetes observability skills.

With OTel’s contributor base growing and the project ranking as the second-highest-velocity project in the CNCF ecosystem, there’s never been a better time to dive in and explore its potential for optimizing observability.

In this guide we’ll use the OpenTelemetry Demo App and the Logz.io exporter (I chose Logz.io as my observability backend, but you can choose any that supports OTel). It’s not mandatory to use the OTel Demo, but it’s a nice starting point if you don’t have a real-world implementation or if it’s your first time trying Logz.io and OTel.
The OpenTelemetry Demo includes microservices written in multiple programming languages that communicate over gRPC and HTTP, plus a load generator that uses Locust to simulate user traffic automatically, eliminating the need to create scenarios manually. You can check the Demo architecture here.

Prerequisites:

  • Logz.io account (I chose this one as my observability backend)
  • Any Kubernetes cluster 1.24+ with kubectl configured (for this guide I’m using EKS, but Minikube or kind is also welcome)
  • 6 GB of free RAM for the application
  • Helm 3.14+ (for the Helm installation method only)
  • OpenTelemetry Collector (for this guide, I’m using the official OpenTelemetry Demo for Kubernetes, which already provides the Collector)
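
Before moving on, you can sanity-check your environment with standard kubectl and Helm commands:

kubectl version --client
helm version --short
kubectl get nodes

If kubectl get nodes lists your cluster nodes, you’re ready to deploy.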

OpenTelemetry Core Components - quick explanation:

Instrumentation libraries: Tools and SDKs integrated into applications to automatically or manually generate telemetry data.

Collector: Vendor-agnostic proxy that can receive, process, and export telemetry data. It supports receiving telemetry data in multiple formats (for example, OTLP, Jaeger, Prometheus, as well as many commercial/proprietary tools) and sending data to one or more backends. It also supports processing and filtering telemetry data before it gets exported.
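
To make the receive/process/export flow concrete, here’s a minimal, hypothetical standalone Collector configuration: it accepts OTLP over gRPC, batches the data, and prints it using the Collector’s built-in debug exporter. This sketch is for illustration only and isn’t part of the demo setup we deploy below:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"

processors:
  batch:

exporters:
  debug:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]

Later in this guide we’ll use exactly this structure, swapping the debug exporter for the Logz.io exporters.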

Exporters: Exporters take the processed data and send it to your chosen observability platform, such as Logz.io, Prometheus, or Jaeger.

Context for this guide: The OTel Demo App will handle instrumentation, and the OTel Collector (which comes by default when deploying the OTel Demo Helm chart) will send telemetry data to Logz.io using the Logz.io exporter.

Demo Application → OTel SDK → OTel Collector with Logz.io Exporter → Logz.io Backend

Now let’s see how it works in practical terms…

Deploying the OTel Demo App:

  • Add the OpenTelemetry Helm chart repository:
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
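
If you’ve added this repository before, refresh your local chart index to make sure you pull the latest version of the demo chart (standard Helm usage):

helm repo update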
  • Deploy the app (in my case I deployed with the release name my-otel-demo):
helm install my-otel-demo open-telemetry/opentelemetry-demo
  • Verify that app pods are running:
kubectl get pods
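
The demo spins up quite a few pods, so this can take a minute or two. If you’d rather block until everything is ready, a standard kubectl wait works (assuming the demo is installed in your current namespace):

kubectl wait --for=condition=Ready pod --all --timeout=300s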

Accessing the OTel app:

After you’ve deployed the Helm chart, the demo application’s services need to be exposed outside of the Kubernetes cluster so you can use and navigate them. You can easily expose the services to your local system using the kubectl port-forward command, or by configuring service types (e.g., LoadBalancer) with optionally deployed ingress resources.
The easiest way to expose the services is with kubectl port-forward, which is what I use in this guide:

kubectl port-forward svc/my-otel-demo-frontendproxy 8080:8080

With the frontendproxy port-forward set up, you can access:
Web store: http://localhost:8080/
Grafana: http://localhost:8080/grafana/
Load Generator UI: http://localhost:8080/loadgen/
Jaeger UI: http://localhost:8080/jaeger/ui/
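
A quick way to confirm the frontend proxy is actually serving traffic is a plain curl request; it should return an HTTP 200 once the pods are ready:

curl -I http://localhost:8080/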

Bringing your own backend:

Now it’s time to configure the OTel Collector for Logz.io, using the Logz.io exporter and some additional Logz.io parameters. This will allow us to start sending telemetry from the OTel App to Logz.io.

The OpenTelemetry Collector’s configuration is exposed in the Helm chart we deployed in the previous steps. Any additions you make will be merged into the default configuration, and you can point it at any backend you like; that’s the main idea of using OTel: vendor neutrality.
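
If you want to see exactly what you’re merging into, you can dump the chart’s default values first (standard Helm; the output is long, so redirect it to a file):

helm show values open-telemetry/opentelemetry-demo > default-values.yaml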

Create a configuration file named my-values-file.yaml with the following content:

opentelemetry-collector:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
          http:
            endpoint: "0.0.0.0:4318"

    exporters:
      logzio/logs:
        account_token: "YOUR-LOGS-SHIPPING-TOKEN"
        region: "your-region-code"
        headers:
          user-agent: logzio-opentelemetry-logs
      prometheusremotewrite:
        endpoint: https://listener.logz.io:8053
        headers:
          Authorization: "Bearer YOUR-METRICS-SHIPPING-TOKEN"
          user-agent: logzio-opentelemetry-metrics
        target_info:
          enabled: false
      logzio/traces:
        account_token: "YOUR-TRACES-SHIPPING-TOKEN"
        region: "your-region-code"
        headers:
          user-agent: logzio-opentelemetry-traces
      prometheusremotewrite/spm:
        endpoint: "https://listener-uk.logz.io:8053"
        add_metric_suffixes: false
        headers:
          # Metrics account token for span metrics
          Authorization: "Bearer YOUR-METRICS-SHIPPING-TOKEN"
          user-agent: "logzio-opentelemetry-apm"

    processors:
      batch:
      tail_sampling:
        policies:
          [
            {
              name: policy-errors,
              type: status_code,
              status_code: {status_codes: [ERROR]}
            },
            {
              name: policy-slow,
              type: latency,
              latency: {threshold_ms: 1000}
            }, 
            {
              name: policy-random-ok,
              type: probabilistic,
              probabilistic: {sampling_percentage: 10}
            }        
          ]

    extensions:
      pprof:
        endpoint: :1777
      zpages:
        endpoint: :55679
      health_check:

    service:
      extensions: [health_check, pprof, zpages]
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [logzio/logs]
        metrics:
          receivers: [otlp, spanmetrics]
          exporters: [prometheusremotewrite]
        traces:
          receivers: [otlp]
          processors: [tail_sampling, batch]
          exporters: [logzio/traces, logzio/logs, spanmetrics]
      telemetry: # log verbosity for the Collector logs
        logs:
          level: "debug"


❗️Notes:

  • receivers: Defines how telemetry data is received.
  • otlp: Specifies the protocols (grpc and http) for receiving logs, metrics, or traces from applications.
  • exporters: Specifies where and how telemetry data is sent.
  • service: Defines the data flow pipelines for processing telemetry.
  • tail_sampling defines which traces to sample after all spans in a request are completed. With the policies above, it keeps all traces containing an error span, traces slower than 1000 ms, and 10% of all other traces.
  • The extensions section is optional.
  • When merging YAML values with Helm, objects are merged and arrays are replaced. If you override the traces pipeline, its array of exporters must still include the spanmetrics exporter; omitting it will result in an error.
  • You can find all your personal parameters and data shipping tokens by logging into the Logz.io platform and going to Settings > Data shipping tokens, or to Integrations > OpenTelemetry.
  • You can also find the full OTel configuration directly in the Logz.io platform, under Integrations (search for OpenTelemetry), or in the Logz.io exporter GitHub documentation.
  • To finalize, apply the YAML configuration changes to start sending telemetry to Logz.io:
helm upgrade my-otel-demo open-telemetry/opentelemetry-demo --values my-values-file.yaml

This command will apply the changes to the current OTel Helm release without requiring a fresh installation.
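
After the upgrade, it’s worth confirming that the Collector restarted cleanly and isn’t logging export errors. Here’s a quick sketch, assuming the chart follows its usual <release-name>-otelcol naming convention for the Collector deployment:

kubectl rollout status deploy/my-otel-demo-otelcol
kubectl logs deploy/my-otel-demo-otelcol | grep -i error

With the Collector log level set to debug in the configuration above, you should also see log lines confirming successful exports.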

Validating and exploring your OpenTelemetry data in the vendor backend:

After deploying the OTel Demo App and configuring the Collector to send data to your chosen backend, it’s important to validate that the telemetry data is flowing correctly. After a few seconds, you can start exploring all your logs, metrics, and traces within the vendor platform!
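
Before heading to the vendor UI, you can also inspect the Collector itself: the zpages extension we enabled on port 55679 exposes live pipeline and span information. Again assuming the my-otel-demo-otelcol deployment name:

kubectl port-forward deploy/my-otel-demo-otelcol 55679:55679

Then open http://localhost:55679/debug/tracez in your browser to see recently sampled spans passing through the Collector.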

  • In my case, I can go to the Logz.io Logs section to view the incoming log data and interact with it:

Log Management

  • The Logz.io App 360 menu is where you’ll find all the deployed OpenTelemetry microservices, letting you dive into a specific service for even more app-level details, traces, and app metrics.

Applications List

Application Overview

✅ Well done! Your OTel configuration is working, and your data was collected and exported to the vendor backend!

Wrapping Up:

By following the steps laid out in this guide, you've taken the critical first steps in using OpenTelemetry. You’ve learned how to collect telemetry data from applications deployed in a Kubernetes environment using the OTel Demo App, and how to send it to a vendor backend using the vendor’s native exporter. In just a few simple steps, you’ve set up logs, metrics, and traces streaming into a unified observability platform, enabling seamless monitoring and troubleshooting of your systems.

Appendix: Troubleshooting & references

  • Common issues and fixes:
    No data in Logz.io: Verify the shipping tokens and region codes you used in my-values-file.yaml, and check the Collector logs for export errors.

  • Further Reading:

  1. OpenTelemetry documentation
  2. OpenTelemetry Demo App for Kubernetes
  3. Sending OTel data to Logz.io documentation
  4. Send Kubernetes Data with Logz.io Telemetry Collector
  5. Logz.io Exporter GitHub
  6. Setting up your local Kubernetes environment
