Overview

LiteLLM is a library for building LLM applications that provides a unified interface for calling many models across different providers.

Default configuration

1. Ensure you have the latest version of Laminar

pip install -U 'lmnr[all]'
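If you prefer not to hard-code the project API key in your source, you can export it as an environment variable instead. This assumes Laminar.initialize() falls back to the LMNR_PROJECT_API_KEY environment variable when no key is passed, which matches the placeholder used in the snippets below; check your SDK version if in doubt.

```shell
# Assumption: Laminar.initialize() reads this variable when
# project_api_key is not passed explicitly.
export LMNR_PROJECT_API_KEY="<your-project-api-key>"
```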
2. Initialize Laminar and integrate the callback

You need to initialize Laminar before adding the callback to LiteLLM.

import litellm
from lmnr import Laminar, LaminarLiteLLMCallback

# 1. Initialize Laminar
Laminar.initialize(project_api_key="LMNR_PROJECT_API_KEY")

# 2. Integrate the callback
litellm.callbacks = [LaminarLiteLLMCallback()]
3. Run your code and see traces in Laminar

Example code:

import litellm
from lmnr import Laminar, LaminarLiteLLMCallback

Laminar.initialize(project_api_key="LMNR_PROJECT_API_KEY")
litellm.callbacks = [LaminarLiteLLMCallback()]

response = litellm.completion(
    model="gpt-4.1-nano",
    messages=[
      {"role": "user", "content": "What is the capital of France?"}
    ],
)

Disabling OpenAI double-instrumentation

If you call OpenAI models via LiteLLM, adding the Laminar LiteLLM callback may result in OpenAI calls being traced twice: once by the callback and once by Laminar's automatic instrumentation of the OpenAI SDK. This happens because LiteLLM uses the OpenAI SDK under the hood to call some models, and Laminar instruments that SDK by default.

To avoid this, you can disable OpenAI SDK instrumentation at Laminar initialization:

from lmnr import Laminar, Instruments
Laminar.initialize(
  project_api_key="LMNR_PROJECT_API_KEY",
  disabled_instruments={Instruments.OPENAI}
)