Joel Gee Roy

Observability Made Easy: Adding Logs, Traces & Metrics to FastAPI with Logfire

Picture this: You deploy your shiny new application. Everything looks great in dev, the logs are clean, requests are snappy, and life is good. Then… disaster strikes. A user reports a bug. Another complains about slow response times. You check your logs—wait, where are they? You SSH into the server, tail some logs, guess what went wrong, and hope for the best. Sound familiar?

Observability—knowing what’s happening inside your app in real time—shouldn’t be this hard. But setting up an observability stack often feels like assembling IKEA furniture with missing instructions. That’s where Logfire comes in.

Pydantic Logfire is a platform that makes it ridiculously easy to add observability to your application, no matter the size. In this post, I’ll show you how to integrate Logfire into a FastAPI app to get instant insights into logs, traces, and metrics—without the usual setup headaches. By the end, you’ll have real-time visibility into what’s happening under the hood, so you can debug, optimize, and sleep better at night.

Let’s get started!

Setup

We will build two services that handle orders and shipping for BigHuge Corp Inc. To keep things straightforward, we'll put both services in the same repo and run a separate FastAPI app for each to emulate two services talking to each other.

mkdir fastapi-logfire
cd fastapi-logfire

We'll create a virtual environment and activate it:

python -m venv venv

On macOS/Linux, activation is usually source venv/bin/activate; on Windows it's venv\Scripts\activate. You can find detailed instructions for your OS here: https://www.geeksforgeeks.org/create-virtual-environment-using-venv-python/

Let's install the necessary packages:

pip install 'fastapi[standard]' 'logfire[fastapi]' requests

/orders and /shipping services

The ordering service will contain just two endpoints - one to place an order and one to get the details of a specific order. Similarly, the shipping service will have two endpoints - one to initiate the shipping process and one to get the status of a shipment.

The ordering service will invoke the shipping service in both its endpoints.
Here's the code for both the services:

# app/routers/shipping.py

from fastapi import APIRouter, HTTPException
from pydantic import BaseModel
import uuid
from typing import Optional


router = APIRouter(prefix="/shipping", tags=["Shipping"])


class ShippingOrder(BaseModel):
    order_id: str
    items: list[str]
    customer_id: int
    id: Optional[str] = None

# Highly scalable, available, durable and all the other cool words 
# Presenting....dictionary DB (/j). This will store our shipping data
shipping_db = {}


@router.post("/initiate")
async def initiate_shipping(shipping_order: ShippingOrder):
    shipment_id = str(uuid.uuid4())
    shipping_order_data = shipping_order.model_dump()
    shipping_order_data["id"] = shipment_id
    shipping_db[shipment_id] = shipping_order_data
    return {"message": "Shipping initiated", "order": shipping_order_data}


@router.get("/status/{shipment_id}")
async def get_shipping_status(shipment_id: str):
    shipping_order_data = shipping_db.get(shipment_id)
    if not shipping_order_data:
        raise HTTPException(status_code=404, detail="Shipping order not found")
    return shipping_order_data

# app/routers/order.py
import uuid
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel
from typing import Optional
import requests

router = APIRouter(prefix="/orders", tags=["Orders"])

# We'll just use a dictionary to store the orders for now
orders_db = {}


class Order(BaseModel):
    customer_id: int
    item: str
    quantity: int
    id: Optional[str] = None
    shipment_id: Optional[str] = None

@router.post("/")
async def place_order(order: Order):
    order_id = str(uuid.uuid4())
    order_data = order.model_dump()
    order_data["id"] = order_id
    shipping_data = requests.post(
        "http://127.0.0.1:8001/shipping/initiate",
        json={
            "order_id": order_id,
            "items": [order.item],
            "customer_id": order.customer_id,
        },
    )
    if shipping_data.status_code != 200:
        raise HTTPException(status_code=500, detail="Error initiating shipping")
    shipping_data = shipping_data.json()
    order_data["shipment_id"] = shipping_data["order"]["id"]
    orders_db[order_id] = order_data
    return {"message": "Order placed", "order": order_data}

@router.get("/{order_id}")
async def get_order(order_id: str):
    # Note: no response model here, since we tack shipping_status onto the response below
    order_data = orders_db.get(order_id)
    if not order_data:
        raise HTTPException(status_code=404, detail="Order not found")
    shipping_data = requests.get(
        f"http://127.0.0.1:8001/shipping/status/{order_data['shipment_id']}"
    )
    if shipping_data.status_code != 200:
        raise HTTPException(status_code=500, detail="Error fetching shipping status")
    shipping_data = shipping_data.json()
    order_data["shipping_status"] = shipping_data
    return order_data


Now we'll create main.py and main2.py, which will be used to start the two services.

# app/main.py

from fastapi import FastAPI
from app.routers import order

app = FastAPI()

app.include_router(order.router)

# app/main2.py

from fastapi import FastAPI
from app.routers import shipping

app = FastAPI()

app.include_router(shipping.router)


You can run both services in separate terminals using the fastapi dev command.

fastapi dev app/main.py
fastapi dev --port=8001 app/main2.py

You can go to localhost:8000/docs to try the ordering service out and see if everything is working as expected.
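If you'd rather try it from code than the docs UI, here's a minimal sketch using requests against the endpoints we just wrote (it assumes both services are already running on ports 8000 and 8001, and the file name is just a throwaway script):

# try_it_out.py
import requests

# Place an order against the orders service
resp = requests.post(
    "http://127.0.0.1:8000/orders/",
    json={"customer_id": 1, "item": "mechanical keyboard", "quantity": 2},
)
resp.raise_for_status()
order = resp.json()["order"]
print("Placed order:", order["id"])

# Fetch it back; the orders service calls the shipping service under the hood
print(requests.get(f"http://127.0.0.1:8000/orders/{order['id']}").json())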

Adding logging

Before we integrate Logfire, you need to create a token from Logfire so that you can write data to the Logfire dashboard. You can read about how to generate the token here. Once you have the token, save it as an environment variable in a .env file (you might need to install python-dotenv to load it).

LOGFIRE_TOKEN=YOUR_TOKEN
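To make the token available to os.getenv at runtime, load the .env file near the top of each entrypoint. A minimal sketch, assuming python-dotenv is installed:

# at the top of app/main.py and app/main2.py
import os

from dotenv import load_dotenv

# Pull LOGFIRE_TOKEN (and anything else in .env) into the process environment
load_dotenv()

assert os.getenv("LOGFIRE_TOKEN") is not None  # sanity check while setting things up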

We'll start small: we'll hook Logfire into Python's standard logging library so that all logs generated by the app are sent to the Logfire dashboard. (Tip: you can also emit logs directly using Logfire methods like logfire.info(); more on that below.)

# app/core/logger.py
from logging import basicConfig, getLogger

import logfire

# Adding the logfire handler
basicConfig(handlers=[logfire.LogfireLoggingHandler()])

def setup_logger(name):
    logger = getLogger(name)
    # sending all logs starting from the DEBUG level
    logger.setLevel("DEBUG")
    return logger

We've defined a setup_logger function that we can call anywhere in our project to get a logger whose records are forwarded to Logfire.
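As mentioned above, you can also skip the standard library entirely and log through Logfire's own API. A minimal sketch (it assumes logfire.configure() has already been called, which we do in the next step):

import logfire

# Structured attributes are passed as keyword arguments
logfire.info("Placing order", customer_id=42)

# Spans group related logs together and record how long the block took
with logfire.span("talking to the shipping service"):
    logfire.debug("sending request")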

# app/routers/order.py
from ..core.logger import setup_logger
# [...] Other imports [...]

# Create a module-level logger that forwards records to Logfire
logger = setup_logger(__name__)


@router.post("/")
async def place_order(order: Order):
    # log to indicate the start of the handler
    logger.info("Placing order")
    order_id = str(uuid.uuid4())
    order_data = order.model_dump()
    order_data["id"] = order_id
    # [...] Shipping initiation logic [...]
    orders_db[order_id] = order_data
    # log to indicate the order was placed successfully
    logger.info("Order placed")
    return {"message": "Order placed", "order": order_data}

You can add similar statements in the shipping handler as well.
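For instance, the initiate_shipping handler could look something like this (a sketch mirroring the orders handler above):

# app/routers/shipping.py
from ..core.logger import setup_logger
# [...] Other imports [...]

logger = setup_logger(__name__)


@router.post("/initiate")
async def initiate_shipping(shipping_order: ShippingOrder):
    logger.info("Initiating shipping")
    shipment_id = str(uuid.uuid4())
    shipping_order_data = shipping_order.model_dump()
    shipping_order_data["id"] = shipment_id
    shipping_db[shipment_id] = shipping_order_data
    logger.info("Shipping initiated")
    return {"message": "Shipping initiated", "order": shipping_order_data}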

In main.py and main2.py, we'll add the logic to configure Logfire.

# app/main.py
import os

import logfire
# [...]

app = FastAPI()

logfire.configure(token=os.getenv("LOGFIRE_TOKEN"), service_name="orders")
# [...]

Do the same for main2.py, but with service_name set to "shipping".
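For reference, main2.py ends up looking roughly like this (again assuming python-dotenv for loading the token):

# app/main2.py
import os

import logfire
from dotenv import load_dotenv
from fastapi import FastAPI

from app.routers import shipping

load_dotenv()

app = FastAPI()

logfire.configure(token=os.getenv("LOGFIRE_TOKEN"), service_name="shipping")

app.include_router(shipping.router)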

If you run both apps again and try out some requests, you will see logs show up on the Live tab of the Logfire dashboard:

[Screenshot: plain log lines in the Logfire Live view]

Great! Now we see our logging statements in the dashboard. But Logfire lets us instrument FastAPI directly to get even more data for each request. Let's implement that.

Under the logfire.configure() in both main.py and main2.py, add a new line of code:

logfire.instrument_fastapi(app, capture_headers=True)

Now all your requests to both servers are instrumented automatically.

[Screenshot: instrumented FastAPI requests in the Logfire Live view]

Now all the service-level logs are neatly nested under their respective requests. But get this - it can get even better.

Enter: tracing.

Distributed Tracing

We set up our application so that the POST /orders endpoint calls the POST /shipping/initiate endpoint internally. It would be really nice if our dashboard could display this sequential flow instead of showing the two calls as separate entries (like we saw before). We can do this using tracing.

For tracing to work, context has to be propagated across services. This "context" keeps track of the parent trace/span of a new span/log so that the two can be viewed in tandem. Thankfully, Logfire gives us an easy way to do this. Since we're using requests to call the /shipping service, we'll install the matching extra from Logfire.

pip install 'logfire[requests]'

# or pip install 'logfire[httpx]' if you're using httpx

We'll update the code in main.py to add logfire.instrument_requests()

# app/main.py
#[...]
logfire.configure(token=os.getenv("LOGFIRE_TOKEN"), service_name="orders")
logfire.instrument_requests() # NEW CODE
logfire.instrument_fastapi(app, capture_headers=True)
#[...]

We don't need to update main2.py (the shipping one) because we aren't making any outgoing requests from that service for now.

instrument_requests() makes sure the traceparent header is automatically set on outgoing requests, and instrument_fastapi() makes sure the traceparent header is correctly extracted from incoming requests. That's it. Context propagated!
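Under the hood, this is the W3C traceparent header. If you ever need to pass it along over a channel Logfire doesn't instrument for you, a rough sketch using the OpenTelemetry propagation API (which Logfire builds on) looks like this; the URL and shipment id are just placeholders:

import requests
from opentelemetry import propagate

# Copy the current trace context into a plain dict of headers
headers: dict[str, str] = {}
propagate.inject(headers)
# headers now holds something like {"traceparent": "00-<trace_id>-<span_id>-01"}

# Forwarding those headers keeps the downstream call in the same trace
requests.get(
    "http://127.0.0.1:8001/shipping/status/some-shipment-id",
    headers=headers,
)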

Run the servers again, and try sending some requests. You'll see something like this in your logfire dashboard:

[Screenshot: a distributed trace spanning the orders and shipping services]

And heaven forbid something goes wrong in one of the services, you'll now know exactly where it went wrong. Let's test this out:

# app/routers/shipping.py
#[...]

@router.get("/this-will-fail-just-because")
async def throw_error():
    raise Exception("Exception of my own making")
Enter fullscreen mode Exit fullscreen mode
# app/routers/orders.py
#[...]
@router.post("/")
async def place_order(order: Order):
    # [...]
    try:
        requests.post(
            "http://127.0.0.1:8001/shipping/this-will-fail-just-because"
        )
    except Exception as e:
        pass
    # [...]
Enter fullscreen mode Exit fullscreen mode

If we try POST /orders now, we'll see something like this:

[Screenshot: a trace with the shipping service error highlighted]

Metrics

Metrics measure how much of something there is. Paired with a time series, they tell us a metric's value at each point in time. This is useful for observing things like the number of requests over a period, or CPU utilisation over time.

Setting up system metric tracking is pretty straightforward with logfire, so we'll do that first.

pip install 'logfire[system-metrics]'

And now in main.py

# app/main.py
# [...]

# [...] logfire config [...]
logfire.instrument_system_metrics()
# [...]

Now open the Logfire platform in your browser and select the "Dashboards" tab. Click the "New Dashboard" button and choose "Basic System Metrics (Logfire)" from the dropdown.

Run the server again and you should see the graphs populate with your system data.

[Screenshot: the Basic System Metrics dashboard]

if you choose "Web Server Metrics" from the create dashboard drop down, you'll get another readymade dashboard containing useful metrics from our services.

Now let's add a custom metric to see the orders placed over time. We'll use a counter for this one.

# app/routers/order.py
# [...]
import logfire

orders_placed = logfire.metric_counter("orders_placed")

# [...]

@router.post("/")
async def place_order(order: Order):
    # [...]
    logger.info(
        "Order placed",
        extra={"order": order_data, "shipping_details": shipping_data},
    )
    # Increment the metric counter
    orders_placed.add(1) # NEW CODE
    return {"message": "Order placed", "order": order_data}

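Counters aren't the only instrument available. If you wanted, say, the distribution of order sizes rather than a running total, a histogram would be a better fit. A minimal sketch (the metric name and description here are made up; metric_histogram mirrors metric_counter in Logfire's metrics API):

import logfire

# Histogram instrument: records a distribution of values instead of a running total
order_quantity = logfire.metric_histogram(
    "order_quantity",
    description="Number of items in each order",
)

# Inside the place_order handler, where `order` is the incoming Order model:
order_quantity.record(order.quantity)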

Now, on the Logfire platform, create a new dashboard and choose "Start from scratch" from the dropdown, then click "Add Chart". We retrieve the data we need from Logfire using SQL; to get an idea of how to structure your queries, use the "Explore" tab on the Logfire platform.

To get the total orders placed in a time period we'll use the following query:

SELECT
    SUM(scalar_value) AS total_orders
FROM metrics
WHERE metric_name = 'orders_placed'

Set the visualisation type to "Values" and save the chart. The chart will now update based on the time period you select at the top.

[Screenshot: total orders shown as a "Values" chart]

To chart this in a time series, use the following query:

SELECT
    time_bucket('%time_bucket_duration%', start_timestamp) AS x,
    scalar_value
FROM metrics
WHERE metric_name = 'orders_placed';

Choose "Time Series" as the visualisation and set the metric to "scalar_value".

[Screenshot: orders placed charted as a time series]

Wrapping Up

In this post, we took a hands-on approach to setting up observability in a FastAPI app using Logfire, covering logging, distributed tracing, and real-time metrics with minimal setup. While we focused on automatic instrumentation, there’s even more you can explore—like manual traces, which give you fine-grained control over spans and logs for deeper insights. If you enjoyed this post, consider subscribing to my newsletter for more dev-focused deep dives.
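If you want a quick taste of manual tracing before you go, here's a minimal sketch (the span name and attributes are just placeholders):

import logfire

logfire.configure()  # picks up LOGFIRE_TOKEN from the environment

order_id = "demo-order-123"

# Everything logged inside the span shows up nested under it in the dashboard
with logfire.span("reconcile order {order_id}", order_id=order_id):
    logfire.info("starting reconciliation")
    # ... do the actual work ...
    logfire.info("done")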
