LLM Observability for LiteLLM
Configure LiteLLM to send traces to Laminar
Overview
LiteLLM is a library for building LLM applications that provides a single, unified interface for calling models from many different providers.
Default configuration
Ensure you have the latest version of Laminar
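For example, with pip (assuming the Python SDKs; `lmnr` is Laminar's package name on PyPI):

```shell
# Upgrade both packages to their latest released versions
pip install -U lmnr litellm
```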
Initialize Laminar and integrate the callback
You need to initialize Laminar before adding the callback to LiteLLM.
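A minimal sketch of this order, assuming the Python SDKs and LiteLLM's string-based callback registration (`"lmnr"` is the callback name LiteLLM uses to refer to Laminar; check the current docs if it differs):

```python
import os

import litellm
from lmnr import Laminar

# Initialize Laminar first, before registering the callback,
# reading the project API key from the environment.
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

# Register Laminar as a LiteLLM callback so completion calls are traced.
litellm.callbacks = ["lmnr"]
```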
Run your code and see traces in Laminar
Example code:
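A sketch of an end-to-end example, assuming the Python SDKs; the model name and prompt are placeholders:

```python
import os

import litellm
from lmnr import Laminar

# Initialize Laminar before registering the callback.
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
litellm.callbacks = ["lmnr"]

# Any LiteLLM completion call is now traced and sent to Laminar.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

After running this, the call should appear as a trace in your Laminar project dashboard.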
Disabling OpenAI double-instrumentation
If you call OpenAI models through LiteLLM, adding the Laminar LiteLLM callback may cause those calls to be traced twice: once by the callback and once by Laminar's automatic instrumentation of the OpenAI SDK. This happens because LiteLLM uses the OpenAI SDK under the hood for some providers, and Laminar instruments that SDK automatically.
To avoid this, you can disable OpenAI SDK instrumentation at Laminar initialization:
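A sketch of this, assuming the Python SDK exposes a `disabled_instruments` parameter and an `Instruments` enum (verify these names against the SDK reference):

```python
import os

from lmnr import Instruments, Laminar

# Disable automatic OpenAI SDK instrumentation so calls made through
# LiteLLM are traced only once, by the LiteLLM callback.
Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    disabled_instruments={Instruments.OPENAI},
)
```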