Glossary

  • Span – a unit of work representing a single operation in your application. Typically, a span corresponds to a function invocation or an API call. It appears as a single “block” on the “waterfall” view of a trace.
  • Trace – a collection of spans involved in processing a request in your LLM application. It consists of one or more nested spans. The root span is the first span in a trace and marks the beginning and end of the trace. A trace holds its spans and metadata aggregated from them.
  • Event – a key-value pair of data with a timestamp, representing something that happened at a point in time in your application. An event must occur within a span.
  • Session – a collection of traces that served the same user or the same interaction.
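The relationship between these objects can be pictured with a toy data model. This is plain Python for illustration only, not the Laminar SDK; all class and field names here are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    timestamp: float  # events always belong to a span

@dataclass
class Span:
    name: str
    events: list = field(default_factory=list)
    children: list = field(default_factory=list)  # nested child spans

@dataclass
class Trace:
    root: Span  # the root span marks the trace's beginning and end

# A root span with two nested spans, mirroring the waterfall view.
root = Span("poem_writer", children=[Span("validate_topic"), Span("openai.chat")])
root.events.append(Event("topic_validated", 1700000000.0))
trace = Trace(root=root)

# A session groups traces serving the same user or interaction.
session = [trace]

print(len(trace.root.children))  # 2
```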

Concept

Laminar offers comprehensive tracing and analytics of your entire application. For every run, the full execution trace is logged, so the information you can see includes, but is not limited to:

  • Total execution time
  • Total execution tokens and cost
  • Span-level execution time and token counts
  • Inputs and outputs of each span
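To illustrate how span-level numbers roll up into trace-level totals, here is a minimal sketch in plain Python. The span records and field names are invented for the example and are not the Laminar data model:

```python
# Hypothetical span records, as they might appear within a single trace.
spans = [
    {"name": "validate_topic", "duration_ms": 15, "tokens": 0, "cost_usd": 0.0},
    {"name": "openai.chat", "duration_ms": 840, "tokens": 312, "cost_usd": 0.0047},
]

# Trace-level metadata is an aggregation over the trace's spans.
total_time_ms = sum(s["duration_ms"] for s in spans)
total_tokens = sum(s["tokens"] for s in spans)
total_cost = sum(s["cost_usd"] for s in spans)

print(total_time_ms, total_tokens, round(total_cost, 4))  # 855 312 0.0047
```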

Getting started

Installation

Install the package from PyPI.

pip install lmnr

Add two lines of code to instrument your application

from lmnr import Laminar as L
L.initialize("LMNR_PROJECT_API_KEY")

This will automatically instrument all major LLM providers, frameworks such as LangChain and LlamaIndex, and even vector database calls. When you execute your code, you will see traces in the Laminar dashboard.

See Configure automatic instrumentation to learn more about granular configuration or opting out.
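As a sketch, opting in to only specific instruments might look like the following. This assumes the SDK exposes an `instruments` parameter and an `Instruments` enum as described in its configuration docs; check the SDK for the exact names:

```python
from lmnr import Laminar as L, Instruments

# Only instrument OpenAI calls; everything else stays untraced.
# Passing an empty set would opt out of automatic instrumentation entirely.
L.initialize(
    project_api_key="LMNR_PROJECT_API_KEY",
    instruments={Instruments.OPENAI},
)
```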

Project API key

To get a project API key, go to the Laminar dashboard, open the project settings, and generate a project API key. Unless the key is specified at initialization, Laminar looks for it in the LMNR_PROJECT_API_KEY environment variable.
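For instance, you can export the key in your shell before running your application (the placeholder value is yours to fill in):

```shell
# Make the key available to Laminar via the environment.
export LMNR_PROJECT_API_KEY="<your-project-api-key>"
```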

Adding manual instrumentation with observe

You can instrument your code by adding the @observe() decorator to your functions.

import os
from openai import OpenAI

from lmnr import observe, Laminar as L

L.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
)
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()  # annotate all functions you want to trace
def validate_topic(topic: str):
    # assume you call the database to validate if the topic exists
    result = call_db(topic)
    if result:
        return result
    else:
        return "Topic not found"

# also `observe` the caller, so that the `observe`d function and the
# auto-instrumented OpenAI call are grouped into one trace
@observe()
def poem_writer(topic="turbulence"):
    validated_topic = validate_topic(topic)

    prompt = f"write a poem about {validated_topic}"

    # OpenAI calls are automatically instrumented
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    )

    poem = response.choices[0].message.content

    return poem

if __name__ == "__main__":
    print(poem_writer(topic="laminar flow"))

Learn more about instrumenting your code by checking our Python SDK and JavaScript/TypeScript SDK.

Accessing traces

  1. Go to the traces page from the navbar on the left side of the page.
  2. Click on a row to see the detailed breakdown and waterfall of that trace in the sidebar.
  3. Click “Filter” to filter by the required criteria.
(Screenshot: example traces page)

Viewing more details

Click on any row on the traces page and you will see the details in the side panel.

Click on each span to see its details, including inputs, outputs and metadata, as well as associated events.

OpenTelemetry compatibility

Laminar’s manual and automatic instrumentation is compatible with OpenTelemetry. Our backend is an OpenTelemetry-compatible destination that serves ingestion over gRPC. The majority of the OpenTelemetry-compatible instrumentations for LLM and vector DB libraries are provided by OpenLLMetry.

This means that you can use OpenTelemetry SDKs to send traces to Laminar, and they will be displayed in the Laminar UI.

To get started, in your application, set the OpenTelemetry exporter to the Laminar gRPC endpoint: https://api.lmnr.ai:8443/v1/traces.
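As a sketch, this is how the standard OpenTelemetry Python SDK could be pointed at that endpoint. The authorization header format shown here is an assumption, not confirmed by this page; check Laminar's OpenTelemetry documentation for the exact scheme:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Export spans to Laminar's OpenTelemetry-compatible gRPC endpoint.
exporter = OTLPSpanExporter(
    endpoint="https://api.lmnr.ai:8443/v1/traces",
    headers={"authorization": "Bearer <LMNR_PROJECT_API_KEY>"},  # assumed auth scheme
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```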

Read the OpenTelemetry section to learn in more detail about the OpenTelemetry objects and attributes that Laminar uses.