Details on Spans and attributes in OpenTelemetry-compatible tracing with Laminar
| OTLP/gRPC | OTLP/HTTP/proto |
---|---|---|
Supported | ✅ | ✅ |
Recommended | ✅ | ❌ |
Underlying protocol | gRPC over HTTP/2 | HTTP/1.1 |
Encoding format | protobuf | protobuf |
Base URL for Laminar cloud | https://api.lmnr.ai | https://api.lmnr.ai |
Port at Laminar cloud | 8443 | 443 |
Default port for self-hosted backend | 8001 | 8000 |
Path | /v1/traces [1] | /v1/traces [1] |
Most OpenTelemetry client SDKs are initialized with a function like initTracer(). Functions like this usually accept a configuration object or a set of parameters, including the exporter configuration. To send traces to Laminar, you need to configure the endpoint and the authorization.

For Laminar cloud, the base URL is https://api.lmnr.ai, listening for gRPC traffic on port 8443.
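As a sketch, wiring the Python OpenTelemetry SDK's gRPC exporter to these values might look like the following. The API key value is a placeholder; initTracer()-style helpers in other SDKs accept equivalent options.

```python
# Sketch: pointing the Python OpenTelemetry gRPC exporter at Laminar cloud.
# The API key below is a placeholder; use your project API key.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://api.lmnr.ai:8443",  # gRPC port for Laminar cloud
    headers={"authorization": "Bearer <your-project-api-key>"},  # lowercase key
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

For a self-hosted backend, the endpoint would instead be http://&lt;your-self-hosted-backend-url&gt;:8001, per the table above.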
For the self-hosted backend, the base URL is http://<your-self-hosted-backend-url> and the default port is 8001, unless you have changed the configuration.

The /v1/traces path is the default OpenTelemetry trace endpoint, and Laminar listens at this path. In both the JavaScript (OpenTelemetry Node SDK) and Python OpenTelemetry SDKs, the gRPC implementation appends /v1/traces to the base URL if you don't specify it. Be careful, though, if you are using the HTTP exporter, as the HTTP implementation does not append it.
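A minimal sketch of the difference, using the Python OTLP exporters (endpoints follow the table above; the gRPC exporter fills in /v1/traces for you, the HTTP one does not):

```python
# Sketch: gRPC vs. HTTP exporter endpoints for Laminar cloud.
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter as GrpcExporter,
)
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter as HttpExporter,
)

# gRPC: /v1/traces is appended automatically if you omit it.
grpc_exporter = GrpcExporter(endpoint="https://api.lmnr.ai:8443")

# HTTP: you must include the full path yourself.
http_exporter = HttpExporter(endpoint="https://api.lmnr.ai:443/v1/traces")
```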
The authorization scheme is Bearer, and the token is your project API key.
The right way to set the headers for gRPC requests is to use the metadata object. Even though
gRPC is sent over HTTP/2, and metadata is sent as HTTP headers, it is different from
raw HTTP headers. Learn more about metadata in the gRPC documentation.
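For example, in the Python gRPC exporter the headers parameter is translated into gRPC metadata under the hood, so the key must be the lowercase authorization (the token value below is a placeholder):

```python
# Sketch: the headers tuple becomes gRPC metadata, not raw HTTP headers.
# gRPC requires lowercase metadata keys, hence "authorization".
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://api.lmnr.ai:8443",
    headers=(("authorization", "Bearer <your-project-api-key>"),),
)
```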
Some exporter implementations accept both metadata and headers parameters; the latter is effectively ignored by the Laminar backend. In gRPC metadata, the authorization key has to start with a lowercase a, unlike the conventional raw HTTP header Authorization with a capital A. If you are specifying the key as Authorization and not seeing an error saying TypeError: not all arguments converted during string formatting, you are probably using the HTTP exporter, which is not recommended.

Attribute | Description | Type | Laminar representation (if different) | Example |
---|---|---|---|---|
trace_id | Unique identifier for the trace | 16-bytes | UUID [1] | 01234567-89ab-4def-0123-456789abcdef |
span_id | Unique identifier for the span | 8-bytes | UUID [1] | 00000000-0000-0000-0123-456789abcdef |
parent_span_id | Unique identifier for the parent span | 8-bytes | UUID [1] | 00000000-0000-0000-0123-456789abcdef |
name | Name of the span | string | my_function | |
events | Events associated with the span | Array<Event> | ||
attributes | Attributes associated with the span. Fully compatible with the gen_ai semantic conventions | Key-value pair. Key must comply to semantic conventions, value must be of AttributeType [2] | {"gen_ai.usage.output_tokens": 369} | |
start_time_unix_nano | Start time of the span in nanoseconds [3] | number | timestamp with UTC timezone | 1630000000000000000 |
end_time_unix_nano | End time of the span in nanoseconds [3] | number | timestamp with UTC timezone | 1630000000000000000 |
[2] AttributeType is a union of string, number, boolean, Array<string>, Array<number>, Array<boolean>.
[3] In most OpenTelemetry client implementations, you don't have to convert the timestamp to nanoseconds manually; you can simply pass the Date / datetime object and the client will convert it to nanoseconds.
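As an illustrative sketch (the span name and the example.flags attribute are made up), setting attributes of the allowed AttributeType variants from Python looks like:

```python
# Sketch: attribute values must be AttributeType — primitives or
# homogeneous arrays of primitives. "example.flags" is an invented name.
from opentelemetry import trace

tracer = trace.get_tracer("example")
with tracer.start_as_current_span("my_function") as span:
    span.set_attribute("gen_ai.usage.output_tokens", 369)  # number
    span.set_attribute("gen_ai.system", "openai")          # string
    span.set_attribute("example.flags", [True, False])     # Array<boolean>
```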
Attribute | Description | Type | Example |
---|---|---|---|
lmnr.span.type | Type of the span | string (LLM or DEFAULT) | LLM |
lmnr.span.path | Path of the span | string | agent.generate.openai.chat |
gen_ai.system | Model provider | string | openai |
gen_ai.usage.output_tokens | Number of tokens the LLM produced | number | 369 |
gen_ai.usage.input_tokens | Number of tokens in the LLM input | number | 42 |
llm.usage.total_tokens | Total number of tokens in the LLM input and output. If not specified, sum of input_tokens and output_tokens | number | 411 |
gen_ai.usage.cost | Cost of the LLM call. If not specified, system, input_tokens, and output_tokens are used to estimate the cost Laminar-side in the background | number | 0.012 |
gen_ai.usage.input_cost | Cost of the inputs to the LLM call. If not specified, system and input_tokens are used to estimate the cost Laminar-side in the background | number | 0.003 |
gen_ai.usage.output_cost | Cost of the outputs of the LLM call. If not specified, system and output_tokens are used to estimate the cost Laminar-side in the background | number | 0.009 |
gen_ai.usage.request_model | Model name specified in the request | string | gpt-4o |
gen_ai.usage.response_model | Model name returned in the response | string | gpt-4o-2024-08-06 |
gen_ai.prompt.{i}.content | Content of the input message number i (starting from 0) | string | write a poem about laminar flow |
gen_ai.prompt.{i}.role | Role of the input message number i (starting from 0) | string | user |
These values are passed in the attributes field of the span.
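As a rough sketch, attaching some of the attributes from the table above to a span manually from Python might look like this (the span name and all values are illustrative):

```python
# Sketch: setting gen_ai / lmnr attributes so Laminar can treat this
# span as an LLM span. All values here are illustrative.
from opentelemetry import trace

tracer = trace.get_tracer("example")
with tracer.start_as_current_span("openai.chat") as span:
    span.set_attribute("lmnr.span.type", "LLM")
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.usage.input_tokens", 42)
    span.set_attribute("gen_ai.usage.output_tokens", 369)
    span.set_attribute("gen_ai.prompt.0.role", "user")
    span.set_attribute("gen_ai.prompt.0.content", "write a poem about laminar flow")
```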