Overview

LiteLLM is a library for building LLM applications that provides a unified interface for calling many models across different providers.

Default configuration

LiteLLM integrates with OpenTelemetry out of the box, so you only need to provide the configuration through environment variables.

1. Install the OpenTelemetry packages

Laminar ships with the required OpenTelemetry packages, so you only need to install Laminar:

pip install lmnr

If you want to install the OpenTelemetry packages separately, follow the instructions below.

2. Install LiteLLM

For the otel callback to work, install LiteLLM with the proxy extra:

pip install 'litellm[proxy]'

3. Set the environment variables

LMNR_PROJECT_API_KEY="<your-project-api-key>"
OTEL_EXPORTER="otlp_grpc"
OTEL_ENDPOINT="https://api.lmnr.ai:8443"
OTEL_HEADERS="authorization=Bearer $LMNR_PROJECT_API_KEY"

The authorization header key must be all lowercase, because gRPC metadata keys are case-sensitive in the Python OpenTelemetry SDK.
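If you prefer to set this configuration from Python rather than the shell, a minimal sketch using only the standard library (the API key value is a placeholder, and the variables should be set before enabling the otel callback):

```python
import os

# Placeholder value; substitute your real Laminar project API key.
os.environ["LMNR_PROJECT_API_KEY"] = "<your-project-api-key>"

os.environ["OTEL_EXPORTER"] = "otlp_grpc"
os.environ["OTEL_ENDPOINT"] = "https://api.lmnr.ai:8443"
# The header key must be lowercase "authorization" (gRPC metadata
# keys are case-sensitive in the Python OpenTelemetry SDK).
os.environ["OTEL_HEADERS"] = (
    f"authorization=Bearer {os.environ['LMNR_PROJECT_API_KEY']}"
)
```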

4. Enable the otel callback in your code

import litellm
litellm.callbacks = ['otel']

5. Run your code and see traces in Laminar

Example code:

import litellm
litellm.callbacks = ['otel']

response = litellm.completion(
    model="gpt-4.1-nano",
    messages=[
      {"role": "user", "content": "What is the capital of France?"}
    ],
)

Using Laminar’s features

If you want to use Laminar’s features, such as sessions, manual spans, and the observe decorator, you will need to install and initialize Laminar alongside setting LiteLLM’s callback.

1. Install Laminar

pip install lmnr

2. Initialize Laminar

from lmnr import Laminar, observe
import litellm
import os

Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
litellm.callbacks = ['otel']

@observe
def completion(model, messages):
    response = litellm.completion(
        model=model,
        messages=messages,
    )
    return response

completion(
  "gpt-4.1-nano",
  [{"role": "user", "content": "What is the capital of France?"}]
)

This, however, will most likely result in your OpenAI calls being double-traced: once by LiteLLM and once by Laminar. This is because LiteLLM uses the OpenAI SDK under the hood to call some models, and Laminar instruments the OpenAI SDK.

To avoid this, disable OpenAI instrumentation when initializing Laminar:

from lmnr import Laminar, Instruments
import os

Laminar.initialize(
  project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
  disabled={Instruments.OPENAI},
)