Laminar is an open-source observability and analytics platform for complex LLM apps. It helps developers build better LLM applications by providing a comprehensive set of tools for observability, analytics, and prompt chain management.

Living at the intersection of tracing and event-based analytics, Laminar is like Datadog + PostHog for LLM applications.

Check out our GitHub repo to learn more about how it works or if you are interested in self-hosting.

Getting started

Installation

Install the package from PyPI.

pip install lmnr

Add two lines of code to instrument your application:

from lmnr import Laminar as L
L.initialize("LMNR_PROJECT_API_KEY")

This will automatically instrument all major LLM providers, frameworks such as LangChain and LlamaIndex, and even vector DB calls. When you run your code, you will see traces in the Laminar dashboard.

See Configure automatic instrumentation to learn more about granular configuration or opting out.

Project API key

To get a project API key, go to the Laminar dashboard, open the project settings, and generate a project API key. Unless the key is specified at initialization, Laminar will look for it in the LMNR_PROJECT_API_KEY environment variable.

Example: Instrument OpenAI calls with Laminar

import os
from openai import OpenAI
from lmnr import Laminar as L

# OpenAI calls will be automatically patched on initialization
L.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
)

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def poem_writer(topic: str):
    prompt = f"write a poem about {topic}"

    # OpenAI calls are automatically instrumented
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    poem = response.choices[0].message.content
    return poem

if __name__ == "__main__":
    print(poem_writer("laminar flow"))

Adding manual instrumentation

If you want to trace your own functions, capturing their durations, inputs, and outputs, or want to group calls into one trace, you can use the @observe decorator in Python or the async observe function in JavaScript/TypeScript.

import os
from openai import OpenAI
from lmnr import observe, Laminar as L

L.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()
def request_handler(data: dict):
    # some other logic, e.g. data preprocessing

    response1 = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": data["prompt_1"]},
        ],
    )

    # some other logic, e.g. a conditional DB call

    response2 = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": data["prompt_2"]},
        ],
    )

    return response2.choices[0].message.content

Next steps

Learn more about using observe.

For more language-specific instructions, check our Python SDK and JavaScript/TypeScript SDK.

Features

Observability

Track the full execution trace of your LLM application.

Laminar’s manual and automatic instrumentation is compatible with OpenTelemetry.

Get started with Tracing.

Screenshot of observability dashboard

Analytics

Laminar provides infrastructure to run LLM analysis that extracts semantic events, such as “user sentiment” or “did my LLM agent upsell?”, and turns them into trackable metrics. Combining these events with trace data lets you link back to the specific user interaction that caused an event, giving you a better understanding of the user experience.

In addition, you can track raw metrics by sending events with values directly to Laminar.

Learn more about Events extraction.

Prompt chain management

You can build and host chains of prompts and LLMs, then call each chain as if it were a single function. This is especially useful when you want to experiment with techniques such as Mixture of Agents or self-reflecting agents without hosting, or before you host, the prompts and model configs in your own code.

Learn more about Pipeline builder for prompt chains.

Evaluations

In addition to semantic events, Laminar allows you to run evaluations and analyze their results in the dashboard.

You can use Laminar’s JavaScript and Python SDKs to set up and run your evaluations.

Learn more about Evaluations.