LiteLLM
Configure LiteLLM to send traces to Laminar
Overview
LiteLLM is a library for building LLM applications that provides a unified interface for calling many models across different providers.
Default configuration
LiteLLM is well integrated with OpenTelemetry, so you only need to provide the configuration through environment variables.
Install the OpenTelemetry packages
Laminar comes with the required OpenTelemetry packages, so you only need to install Laminar:
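A minimal install, assuming Laminar's Python SDK (the lmnr package on PyPI):

```bash
pip install lmnr
```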
If you want to install the OpenTelemetry packages separately, follow the instructions below.
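If you do install them separately, something like the following should cover the API, SDK, and OTLP exporter (these are the standard OpenTelemetry Python distributions, not Laminar-specific packages):

```bash
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```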
Install LiteLLM
For the OpenTelemetry callback to work, you need to install LiteLLM with the proxy extra.
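For example:

```bash
pip install 'litellm[proxy]'
```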
Set the environment variables
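A sketch of the variables LiteLLM's OpenTelemetry callback reads, assuming Laminar's gRPC endpoint at api.lmnr.ai:8443 and your project API key (replace the placeholder with your own key):

```bash
export OTEL_EXPORTER="otlp_grpc"
export OTEL_ENDPOINT="https://api.lmnr.ai:8443"
export OTEL_HEADERS="authorization=Bearer <your-project-api-key>"
```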
Note: the authorization header key must start with a lowercase "a", because gRPC headers are case-sensitive in the Python OpenTelemetry SDK.
Enable the otel callback in your code
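In Python, this is a one-liner on the litellm module:

```python
import litellm

# Register LiteLLM's OpenTelemetry callback so every LLM call emits a span
litellm.callbacks = ["otel"]
```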
Run your code and see traces in Laminar
Example code:
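A minimal end-to-end sketch, assuming the environment variables above are set and that gpt-4o-mini is available through your OpenAI API key:

```python
import litellm

# Send traces through LiteLLM's OpenTelemetry callback
litellm.callbacks = ["otel"]

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```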
Using Laminar’s features
If you want to use Laminar's features, such as sessions, manual spans, and the observe decorator, you will need to install and initialize Laminar alongside setting LiteLLM's callback.
Install Laminar
Initialize Laminar
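A sketch of initializing Laminar alongside the LiteLLM callback, assuming the lmnr package installed above and a project API key available as the LMNR_PROJECT_API_KEY environment variable (you can also pass project_api_key explicitly):

```python
import litellm
from lmnr import Laminar

# Initialize Laminar first so its exporter is ready; this reads
# LMNR_PROJECT_API_KEY from the environment if no key is passed.
Laminar.initialize()

# Keep LiteLLM's OpenTelemetry callback enabled as before
litellm.callbacks = ["otel"]
```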
This, however, will most likely result in your OpenAI calls being double-traced – once by LiteLLM and once by Laminar. This is because LiteLLM uses the OpenAI SDK under the hood to call some of the models, and Laminar instruments the OpenAI SDK.
To avoid this, you can disable OpenAI instrumentation at initialization.
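A hedged sketch, assuming the lmnr SDK exposes an Instruments enum and a disabled_instruments argument for this purpose (check your SDK version for the exact parameter name):

```python
from lmnr import Laminar, Instruments

# Disable Laminar's own OpenAI SDK instrumentation so that LiteLLM's
# OpenTelemetry callback is the only source of OpenAI spans.
Laminar.initialize(disabled_instruments={Instruments.OPENAI})
```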