Laminar is an open-source observability and analytics platform for complex LLM apps. It helps developers build better LLM applications by providing tools for observability, analytics, and prompt chain management.

Living at the intersection of tracing and event-based analytics, Laminar is like Datadog + PostHog for LLM applications.

Check out our GitHub repo to learn more about how it works or if you are interested in self-hosting.

Getting started

Installation

Install the package from PyPI.

pip install lmnr

Code change

Annotate your code with the @observe() decorator to start tracing, and turn on the OpenAI auto-instrumentation.

import os
from openai import OpenAI
from lmnr import Laminar as L, Instruments, observe

L.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments={Instruments.OPENAI},
)

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def poem_writer(topic: str):
    prompt = f"write a poem about {topic}"
    # OpenAI calls are automatically instrumented
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    poem = response.choices[0].message.content
    return poem

@observe() # annotate all functions you want to trace
def handle_user_request(...):
    ...

    poem = poem_writer(topic="laminar flow")
    
    ...

Learn more about instrumenting your code by checking our Python SDK and JavaScript/TypeScript SDK.

Project API key

To get a project API key, go to the Laminar dashboard, open the project settings, and generate a project API key. Unless a key is specified at initialization, Laminar looks for it in the LMNR_PROJECT_API_KEY environment variable.
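The lookup order above can be sketched as a small helper. Note that resolve_project_api_key is a hypothetical illustration of the described behavior, not a function in the Laminar SDK:

```python
import os
from typing import Optional

def resolve_project_api_key(explicit_key: Optional[str] = None) -> str:
    """Illustrates the key lookup order: an explicitly passed key wins;
    otherwise fall back to the LMNR_PROJECT_API_KEY environment variable."""
    if explicit_key:
        return explicit_key
    key = os.environ.get("LMNR_PROJECT_API_KEY")
    if not key:
        raise RuntimeError(
            "No project API key passed and LMNR_PROJECT_API_KEY is not set"
        )
    return key
```

In practice this means you can omit project_api_key from L.initialize() entirely as long as the environment variable is set.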

Features

Observability

With Laminar, you can instrument your entire LLM application and track the full execution trace. Adding instrumentation is as simple as adding a Python decorator to your function (or wrapping the function, in the case of JavaScript/TypeScript).

It is compatible with OpenTelemetry tracing via OpenLLMetry.

Get started with Tracing.

Screenshot of observability dashboard

Analytics

Laminar provides infrastructure to run LLM analysis to extract semantic events, such as “user sentiment” or “did my LLM agent upsell?”, and then turn them into trackable metrics. Combining these events with the trace data allows you to link back to the specific user interaction that caused the event and gives you a better understanding of the user experience.

In addition, you can track raw metrics by sending events with values directly to Laminar.

Learn more about Events extraction.

Prompt chain management

You can build and host chains of prompts and LLMs, and then call them as if each chain were a single function. This is especially useful when you want to experiment with techniques such as Mixture of Agents or self-reflecting agents, before (or instead of) hosting prompts and model configs in your own code.

Learn more about Pipeline builder for prompt chains.

Evaluations

In addition to semantic events, Laminar allows you to run evaluations and analyze their results in the dashboard.

You can use Laminar’s JavaScript and Python SDKs to set up and run your evaluations.
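At its core, an evaluator is just a function that scores a model output, often against an expected target. A minimal sketch of one such scorer (exact_match is our own illustrative name, not an SDK built-in; see the SDK docs for how evaluators are registered and run):

```python
def exact_match(output: str, target: str) -> int:
    """Score 1 if the model output matches the expected target exactly
    (ignoring surrounding whitespace), 0 otherwise."""
    return int(output.strip() == target.strip())

# Example: comparing a model's answer against a reference answer.
score = exact_match("laminar flow", "laminar flow ")
```

Scorers like this are typically passed to the SDK's evaluation harness, which runs them over a dataset and reports the results in the dashboard.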

Learn more about Evaluations.