Manual instrumentation
Manual code instrumentation and best practices for tracing with Laminar
Calling raw LLM APIs, reporting token usage
This is helpful if any of the following applies to you:
- You are calling an LLM API, not using their client library/SDK.
- You are using a library that is not auto-instrumented by OpenLLMetry.
- You want to report token usage for a specific API call.
- You are using an open-source/self-hosted LLM.
There are several critical attributes that need to be set on a span to ensure it appears as an LLM span in the UI.
- `lmnr.span.type` – must be set to `'LLM'`.
- `gen_ai.response.model` – must be set to the model name returned by the LLM API (e.g. `gpt-4o-mini`).
- `gen_ai.system` – must be set to the provider name (e.g. 'openai', 'anthropic').
In addition, the following attributes can be manually added to report token usage and costs:
- `gen_ai.usage.input_tokens` – number of tokens used in the input.
- `gen_ai.usage.output_tokens` – number of tokens used in the output.
- `llm.usage.total_tokens` – total number of tokens used in the call.
- `gen_ai.usage.input_cost`, `gen_ai.usage.output_cost`, `gen_ai.usage.cost` – can all be reported explicitly. However, Laminar calculates the cost for the major providers using the values of `gen_ai.usage.input_tokens`, `gen_ai.usage.output_tokens`, and `gen_ai.response.model`.
All of these values can be set in Python and JavaScript/TypeScript using static methods on the `Laminar` class.
Example
Use `Laminar.start_as_current_span` to create a span and set its attributes.
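A minimal sketch of manually instrumenting a raw HTTP call to an LLM API. The endpoint, `requests` usage, and response field names follow OpenAI's chat completions format and are assumptions; adapt them to your provider:

```python
import requests
from lmnr import Laminar

def call_llm(messages: list[dict]) -> dict:
    # span_type="LLM" sets lmnr.span.type, so the span renders as an LLM span.
    with Laminar.start_as_current_span(
        name="my_llm_call", input=messages, span_type="LLM"
    ) as span:
        # Raw API call -- not auto-instrumented, so we report everything ourselves.
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": "Bearer <YOUR_API_KEY>"},
            json={"model": "gpt-4o-mini", "messages": messages},
        ).json()

        # Required attributes for the span to appear as an LLM span in the UI.
        span.set_attribute("gen_ai.response.model", resp["model"])
        span.set_attribute("gen_ai.system", "openai")

        # Optional: report token usage so Laminar can calculate cost.
        usage = resp.get("usage", {})
        span.set_attribute("gen_ai.usage.input_tokens", usage.get("prompt_tokens", 0))
        span.set_attribute("gen_ai.usage.output_tokens", usage.get("completion_tokens", 0))
        span.set_attribute("llm.usage.total_tokens", usage.get("total_tokens", 0))

        Laminar.set_span_output(resp["choices"][0]["message"]["content"])
        return resp
```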
Use `observe` to group separate LLM calls in one trace
Automatic instrumentation creates spans for LLM calls within the current trace context. By default there is no enclosing context, so each LLM call will create a new trace.
If you want to group several auto-instrumented calls in one trace, simply `observe` the top-level function that makes these calls.
Example
In this example, the `request_handler` makes a call to OpenAI to determine the user intent. If the intent matches the expected value, it makes another call to OpenAI (possibly with additional RAG) to generate a response. `request_handler` is observed, so all calls to OpenAI inside it are grouped in one trace.
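A sketch of what this could look like. The model name, prompts, and intent check are illustrative, and it assumes `Laminar.initialize()` has been called with your project API key:

```python
from lmnr import Laminar, observe
from openai import OpenAI

Laminar.initialize()  # assumes the project API key is set in the environment
client = OpenAI()

@observe()  # all auto-instrumented calls inside are grouped into one trace
def request_handler(user_message: str) -> str:
    intent = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Classify the intent of this message as 'question' or 'other': {user_message}",
        }],
    ).choices[0].message.content

    if "question" in intent:
        # Second call (possibly augmented with RAG context) to generate a response.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": user_message}],
        )
        return response.choices[0].message.content
    return "Sorry, I can only answer questions."
```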
As a result, you will get a nested trace with the `request_handler` span as the top level span, and the OpenAI calls as child spans.
`observe` in detail
This is a reference for the Python `@observe` decorator and the JavaScript `observe` function.
JS wrapper functions’ syntax is not as clean as Python decorators, but they are functionally equivalent. TypeScript has decorators, but they are (1) experimental, (2) only available on classes and methods, so we’ve decided to provide a wrapper function syntax. This is common for OpenTelemetry, but may not be as common for LLM observability, so we provide a reference here.
Parameters
- `name` (`str | None`): name of the span. If not provided, the name of the wrapped function will be used (see the example after this list).
- `session_id` (`str | None`): session ID for the current trace. If you know the session ID statically, you can pass it here.
- `user_id` (`str | None`): user ID for the current trace. If you know the user ID statically, you can pass it here.
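A minimal sketch of passing these parameters; the function, session ID, and user ID values are hypothetical:

```python
from lmnr import observe

# The span will be named "chat_turn" instead of "handle_message".
@observe(name="chat_turn", session_id="session-123", user_id="user-456")
def handle_message(message: str) -> str:
    return message.upper()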
Inputs and outputs
- Function parameters and their values are serialized to JSON and recorded as span input.
- Function return value is serialized to JSON and recorded as span output.
For example:
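A sketch consistent with the attributes listed below; the function name is hypothetical:

```python
from lmnr import observe

@observe()
def my_function(param1: int, param2: int) -> int:
    return param1 + param2

my_function(1, 2)
```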
In this case, the span will have the following attributes:
- Span input (`lmnr.span.input`) will be `{"param1": 1, "param2": 2}`
- Span output (`lmnr.span.output`) will be `3`
Notes
- `@observe` is a decorator factory, so it must always be used with parentheses: `@observe()`.
- This decorator can be used with both synchronous and asynchronous functions.
- Streaming responses are taken care of, so if your function returns a generator, it will be observed correctly.
Trace specific parts of code (Python only)
In Python, you can also use `Laminar.start_as_current_span` if you want to record a chunk of your code using a `with` statement.
Example
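A minimal sketch, assuming a hypothetical `do_something()` as the chunk of code you want to trace:

```python
from lmnr import Laminar

def my_function():
    with Laminar.start_as_current_span(name="my_span", input="some input"):
        result = do_something()          # the code chunk being traced
        Laminar.set_span_output(result)  # recorded as the span's output
        return result
```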
`Laminar.start_as_current_span` in detail
`Laminar.start_as_current_span` is a context manager that creates a new span and sets it as the current span. Under the hood it uses the bare OpenTelemetry `start_as_current_span` method, but it also sets some Laminar-specific attributes.
Parameters
- `name` (`str`): name of the span.
- `input` (`Any`): input to the span. It will be serialized to JSON and recorded as span input.
- `span_type` (`Literal['DEFAULT'] | Literal['LLM']`): type of the span. If not specified, it will be `'DEFAULT'`.
Returns
`Span`: the span object. It is a bare OpenTelemetry span, so you can use it to set attributes. We still recommend using `Laminar.set_span_output` and `Laminar.set_span_attributes` to set attributes, unless you want to set some custom attributes.
Examples
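A sketch of setting both a Laminar-managed output and a custom attribute on the returned span; the `process` function and the custom attribute name are hypothetical:

```python
from lmnr import Laminar

with Laminar.start_as_current_span(
    name="process_document", input={"doc_id": "123"}
) as span:
    # The returned span is a bare OpenTelemetry span,
    # so custom attributes can be set on it directly.
    span.set_attribute("my.custom.attribute", "value")

    output = process("123")          # hypothetical work being traced
    Laminar.set_span_output(output)  # recorded as the span output
```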
Continuing a trace
When you manually instrument your code, sometimes you may want to continue an existing trace. For example, a trace may start in one API route, and you may want to continue it in a different API route.
Passing span object between functions (Recommended)
It is helpful to pass the span object between functions, so that you can continue the same trace.
Example
We will use two main methods here – `Laminar.start_span()` and `use_span`. You can import `use_span` from `lmnr`, but it is just a re-export of `opentelemetry.trace.use_span`.
The type of the span is `Span`. You can import it directly from `opentelemetry.trace`.
Ending the span is required. If you don't end the span, the trace will not be recorded. Alternatively, you can pass `end_on_exit=True` to the last `use_span`, which will end the span when that context exits.
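A sketch under these assumptions: `Laminar.start_span` accepts the span name as its first argument, `foo` makes an auto-instrumented OpenAI call, and `bar` creates a custom span manually; the prompts are illustrative:

```python
from lmnr import Laminar, use_span
from openai import OpenAI
from opentelemetry.trace import Span

client = OpenAI()

def foo(span: Span):
    # Continue the trace started by the caller; the OpenAI call is
    # auto-instrumented and becomes a child of `outer`.
    with use_span(span):
        client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Hello"}],
        )

def bar(span: Span):
    # Continue the same trace and add a custom child span manually.
    with use_span(span):
        with Laminar.start_as_current_span(name="bar"):
            pass  # some custom work

def outer():
    span = Laminar.start_span("outer")
    foo(span)
    bar(span)
    # Ending the span is required; alternatively, pass end_on_exit=True
    # to the last use_span instead.
    span.end()
```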
As a result, you will get a nested trace with `outer` as the top level span, where any spans created with `use_span`/`Laminar.withSpan` will be children of `outer`. In this example, `outer` is the top level span, `foo` called OpenAI, which got auto-instrumented, and `bar` is a custom span that we created manually.
Setting trace ID manually
This is not completely compatible with OpenTelemetry, so use only when you have to. Please use the span object passing method above whenever possible.
If there is no way for you to pass the span object between functions, you can set the trace ID manually. In the backend, Laminar will associate all spans with the same trace ID. Laminar’s trace IDs are UUIDs, so if you want to set a trace ID manually, you must pass a valid UUID.
Example
Both spans, `foo` and `bar`, will be in the same trace, because they have the same trace ID.