Default automatic tracing

If you initialize Laminar, LLM calls will be automatically traced. Typically, one LLM call creates one span.

Here, openai.chat is a span. It is the only span in the trace.

Since these spans are created automatically, they are not grouped into traces. That is, you will see one span per trace.

For example, in this toy application, we make two LLM calls: the first generates random nouns, and the second generates a poem using those nouns.

Here, both entries in the table are traces, and each trace contains one span. On the right-hand side, you can see the expanded view of a trace; it shows its only span, openai.chat.

It makes sense to group these spans into a single trace, since they belong to the same user request.

Grouping spans into traces

The most common way to group spans into traces is to create one parent span before making the LLM calls. This parent span will contain all the child spans, each of which may be an LLM call or any piece of code you want to trace.

The concept of such a parent span is sometimes called a “root span” or a “top-level span”.

The recommended way to create a parent span is to use the observe decorator (Python) or observe function wrapper (JavaScript/TypeScript).

You can instrument specific functions by adding the @observe() decorator. This is especially helpful when you want to trace functions, or group separate functions into a single trace.

import os

from lmnr import Laminar, observe
from openai import OpenAI

Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
client = OpenAI()

@observe()  # annotate all functions you want to trace
def my_function():
    res = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    return res.choices[0].message.content

my_function()

We are now recording my_function and the OpenAI call nested inside it in the same trace. Notice that the OpenAI span is a child of my_function: parent-child relationships are detected automatically and visualized as a tree hierarchy.

You can nest as many spans as you want inside each other. By observing both your functions and the LLM/vector DB calls, you get a clearer visualization of the execution flow, which is useful for debugging and for understanding the application.

Input arguments to the function are automatically recorded as inputs of the span. The return value is automatically recorded as the output of the span.

Passing arguments to the function in TypeScript is slightly non-trivial, because they are passed to observe after the function itself. Example:

const myFunction = observe(
  { name: 'myFunction' },
  async (param1: string, param2: string) => {
    // ...
  },
  'argValue1',
  'argValue2'
);

Grouping traces into sessions

Sometimes, you may want to group traces into sessions. Sessions are useful when you want to group traces that are related to a single user interaction, such as a multi-turn conversation.

For example, imagine an advanced conversational chatbot agent that at every turn in the conversation does several things, such as processing the user input, calling the database, and generating a response. In that case, we may want to represent each turn of the conversation as a trace, and the whole conversation as a session.

Example. Associate a trace with a session

Simply call Laminar.set_session(session_id="session123") within or outside any span context. All subsequent spans will be associated with that session.

import os

from lmnr import Laminar, observe

Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

@observe()
def my_function():
    Laminar.set_session(session_id="session123")
    # your code here

    # optionally, at the end of the session, you can clear the session id
    Laminar.clear_session()

Viewing sessions

Head to the traces page and select the “Sessions” tab. You will see all the sessions, and if you click on each, it will expand to show all the traces within the session.

Each trace within a session is the same as if it were a standalone trace.

Dynamically disabling tracing

Sometimes, you may want to dynamically disable tracing for a particular span. For example, some of your customers need more privacy than others, and you only want to collect metadata for some of them.

To achieve this, we offer a wrapper that can set tracing to one of the following modes:

  • ALL – trace everything as normal
  • META_ONLY – do not trace inputs and outputs
  • OFF – do not trace anything within the wrapper

// withTracing is available since v0.4.25
import { withTracing, TracingLevel } from "@lmnr-ai/lmnr";

withTracing(TracingLevel.OFF, () => {
    // your code here
});

// code here is traced normally

Next Steps

Learn more about observe and its advanced alternatives in the section on Manual instrumentation.