Instrumentation
Details on code instrumentation and best practices for tracing with Laminar
Automatic instrumentation
By default, Laminar.initialize() will automatically instrument the majority of common LLM and VectorDB libraries for tracing, including OpenAI, Anthropic, Langchain, Pinecone, and many more.
Instrument all available libraries
See all available auto-instrumentable modules here.
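A minimal initialization that keeps all auto-instrumentation enabled might look like the sketch below. It assumes your project API key is stored in the LMNR_PROJECT_API_KEY environment variable:

```python
import os

from lmnr import Laminar

# Initializing without an `instruments` argument auto-instruments
# every supported library that is installed in your environment.
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
```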
Disable automatic instrumentation
initialize() also accepts an optional instruments parameter. If you explicitly pass an empty set, no automatic instrumentation will be applied.
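A sketch of disabling auto-instrumentation entirely (the environment variable name is an assumption):

```python
import os

from lmnr import Laminar

# An explicit empty set disables all automatic instrumentation;
# only spans you create manually will be recorded.
Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments=set(),
)
```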
Instrument specific modules only
You can also enable instrumentation for specific modules only. This is useful if you want more control over what is being instrumented. It is also the recommended approach if you are using Next.js and having issues with automatic instrumentation – learn more.
Let's say, for example, we call OpenAI and Anthropic models to perform the same task, and we only want to instrument the Anthropic calls, but not the OpenAI ones.
initialize() accepts an optional instruments parameter. Pass a set of the instruments you want to enable. In this case, we only want to pass Instruments.ANTHROPIC. See available instruments in the next subsection.
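For instance, a sketch of enabling only the Anthropic instrumentation (again, the environment variable name is an assumption):

```python
import os

from lmnr import Instruments, Laminar

# Only Anthropic calls will be traced; OpenAI calls in the same
# code will not produce spans.
Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments={Instruments.ANTHROPIC},
)
```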
Available instruments
See the available instruments by importing Instruments from lmnr, or view the source. These are exactly the modules that are auto-instrumented if you do not pass instruments to L.initialize().
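One quick way to list what your installed SDK version supports is to iterate over the enum; a minimal sketch, assuming Instruments behaves like a standard Python enum:

```python
from lmnr import Instruments

# Print every auto-instrumentable module supported by the
# installed version of the SDK.
for instrument in Instruments:
    print(instrument.name)
```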
Manual instrumentation
Use observe to group separate LLM calls in one trace
Automatic instrumentation creates spans for LLM calls within the current trace context. Unless you start a new trace before calling an LLM, each LLM call will create a new trace. If you want to group several auto-instrumented calls in one trace, simply observe the top-level function that makes these calls.
Example
In this example, request_handler makes a call to OpenAI to determine the user's intent. If the intent matches the expected one, the handler makes another call to OpenAI (possibly with additional RAG) to generate a response. request_handler is observed, so all calls to OpenAI inside it are grouped in one trace.
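A sketch of what such a handler might look like in Python; the model name, prompts, and intent check are placeholder assumptions:

```python
import os

from lmnr import Laminar, observe
from openai import OpenAI

Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
client = OpenAI()

@observe()  # groups every LLM call inside this function into one trace
def request_handler(user_message: str) -> str:
    # First call: classify the user's intent.
    intent = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Reply with one word: the user's intent."},
            {"role": "user", "content": user_message},
        ],
    ).choices[0].message.content

    if intent != "question":
        return "Sorry, I can only answer questions."

    # Second call: generate the response
    # (a RAG step could inject retrieved context here).
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_message}],
    )
    return response.choices[0].message.content
```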
As a result, you will get a nested trace with the request_handler span as the top-level span, and the OpenAI calls as child spans.
Observe specific code chunks
Also, in Python, you can use start_as_current_span if you want to record a chunk of your code in a span using a with statement.
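A minimal sketch; the span name and the do_work helper are hypothetical:

```python
from lmnr import Laminar

def process(user_input: str) -> str:
    # Everything inside the with block is recorded as a "process_chunk" span.
    with Laminar.start_as_current_span(name="process_chunk", input=user_input):
        result = do_work(user_input)  # hypothetical helper
        Laminar.set_span_output(result)  # attach the result as the span's output
        return result
```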