Overview

The observe decorator (Python) or function wrapper (JavaScript/TypeScript) is the primary way to structure your traces in Laminar. It allows you to:

  • Create a parent span that groups multiple LLM calls into a single trace
  • Capture inputs and outputs of your functions automatically
  • Structure your application’s tracing in a logical way

Basic Usage

You can instrument specific functions by wrapping them in observe(). This is especially helpful when you want to trace functions, or group separate functions into a single trace.

import { observe } from '@lmnr-ai/lmnr';
import { OpenAI } from 'openai';

const client = new OpenAI();

const myFunction = async () => observe(
  { name: 'myFunction' },
  async () => {
    const response = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "What is the capital of France?" }],
    });
    return response.choices[0].message.content;
  }
);

await myFunction();

We are now recording myFunction and the OpenAI call, which is nested inside it, in the same trace. Notice that the OpenAI span is a child of myFunction. Parent-child relationships are detected automatically and visualized as a tree hierarchy.

You can nest as many spans as you want inside each other. Observing both your functions and the LLM/vector DB calls they make gives you a clearer picture of the execution flow, which helps with debugging and understanding the application.

Input arguments to the function are automatically recorded as inputs of the span. The return value is automatically recorded as the output of the span.

Passing arguments to the observed function in TypeScript is slightly non-obvious: they are passed as trailing arguments to observe, after the function itself. Example:

const myFunction = async () => observe(
  { name: 'myFunction' },
  async (param1, param2) => {
    // ...
  },
  'argValue1',
  'argValue2'
);

Detailed Reference

General syntax

await observe({ ...options }, async (param1, param2) => {
  // your code here
}, arg1, arg2);

Parameters (ObserveOptions)

  • name (string): name of the span. If not passed, and the function to observe is not an anonymous arrow function, the function's name is used.
  • sessionId (string): session ID for the wrapped trace.
  • traceType ('DEFAULT'|'EVENT'|'EVALUATION'): type of the trace. Unless you are running within an evaluation, this must be 'DEFAULT'.
  • spanType ('DEFAULT'|'LLM'): type of the span. 'DEFAULT' is used if not specified. If the type is 'LLM', you must manually specify some attributes. This translates to the lmnr.span.type attribute on the span.
  • traceId (string): [experimental] trace ID for the current trace. This is useful if you want to continue an existing trace. IMPORTANT: it must be a valid UUID, i.e. 8-4-4-4-12 hex digits.
  • input: a dictionary of input parameters. If passed, it takes precedence over the function parameters as the recorded span input.
  • ignoreInput: if true, the input will not be recorded.
  • ignoreOutput: if true, the output will not be recorded.

Inputs and outputs

  • Function parameters and their values are serialized to JSON and recorded as span input.
  • Function return value is serialized to JSON and recorded as span output.

For example:

const result = await observe({ name: 'my_span' }, async (param1, param2) => {
    return param1 + param2;
}, 1, 2);

In this case, the span will have the following attributes:

  • Span input (lmnr.span.input) will be {"param1": 1, "param2": 2}
  • Span output (lmnr.span.output) will be 3

Use Cases

Grouping LLM Calls

One of the most common use cases for observe is to group multiple LLM calls into a single trace:

import { observe } from '@lmnr-ai/lmnr';
import { OpenAI } from 'openai';

const openai = new OpenAI();

const handle = async (userMessage: string) =>
    await observe({ name: 'requestHandler' }, async () => {
        // First LLM call
        const routerResponse = await openai.chat.completions.create({
            messages: [
                {role: 'user', content: 'First prompt' + userMessage}
            ],
            model: 'gpt-4o-mini',
        });
        
        // Second LLM call
        const modelResponse = await openai.chat.completions.create({
            messages: [
                {role: 'user', content: userMessage}
            ],
            model: 'gpt-4o',
        });

        return modelResponse.choices[0].message.content;
    });

Alternative Methods

In Python, you can also use Laminar.start_as_current_span if you want to trace a specific block of code:

from lmnr import Laminar

def request_handler(user_message: str):
    with Laminar.start_as_current_span(
        name="handler",
        input=user_message
    ) as span:
        result = ...  # your code here

        # Set the output of the current span
        Laminar.set_span_output(result)
        return result