Overview
Laminar is an open-source LLM engineering platform that powers the data flywheel for your LLM applications. With Laminar, you can trace, evaluate, annotate, analyze, and reuse your LLM data.
Check out our GitHub repo to learn more about how it works, or if you are interested in self-hosting.
Getting started
Installation
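Install the Python SDK; the package name `lmnr` matches the imports used throughout this page:

```sh
pip install lmnr
```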
Add 2 lines to instrument your code
```python
from lmnr import Laminar as L
L.initialize(project_api_key="LMNR_PROJECT_API_KEY")
```
This will automatically instrument all major LLM providers, LLM frameworks including LangChain and LlamaIndex, and even vector DB calls. Execute your code to see traces in the Laminar dashboard.
To disable or configure automatic instrumentation, see the section on Automatic instrumentation.
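As a sketch of what this can look like, instrumentation can be narrowed at initialization; the `instruments` parameter and the `Instruments` enum here are assumptions, so check the Automatic instrumentation section for the exact API:

```python
from lmnr import Laminar as L, Instruments  # Instruments enum is an assumption

# Hypothetical: instrument only OpenAI calls, leaving other
# providers and frameworks untouched.
L.initialize(
    project_api_key="LMNR_PROJECT_API_KEY",
    instruments={Instruments.OPENAI},
)
```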
Project API key
To get a project API key, open the Laminar dashboard, go to the project settings, and generate one. Specify the key at Laminar initialization. If not specified, Laminar will look for the key in the LMNR_PROJECT_API_KEY environment variable.
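For example, if the environment variable is already set, initialization needs no arguments:

```python
from lmnr import Laminar as L

# Assumes LMNR_PROJECT_API_KEY is set in the environment,
# e.g. via `export LMNR_PROJECT_API_KEY=...`
L.initialize()
```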
Example: Instrument OpenAI calls with Laminar
```python
import os
from openai import OpenAI
from lmnr import Laminar as L

L.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
)

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def poem_writer(topic: str):
    prompt = f"write a poem about {topic}"
    # OpenAI calls are automatically instrumented
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    poem = response.choices[0].message.content
    return poem


if __name__ == "__main__":
    print(poem_writer("laminar flow"))
```
Adding manual instrumentation
If you want to trace your own functions (their durations, inputs, and outputs) or group them into one trace, you can use the `@observe` decorator in Python or the async `observe` function in JavaScript/TypeScript.
```python
import os
from openai import OpenAI
from lmnr import observe, Laminar as L

L.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


@observe()
def request_handler(data: dict):
    # some other logic, e.g. data preprocessing
    response1 = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": data["prompt_1"]},
        ],
    )
    # some other logic, e.g. a conditional DB call
    response2 = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": data["prompt_2"]},
        ],
    )
    return response2.choices[0].message.content
```
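With the decorator in place, a call like the one below (the prompts are made up for illustration) produces a single trace: `request_handler` is the top-level span, and both OpenAI calls appear as child spans inside it.

```python
# Hypothetical invocation; both LLM calls above are grouped
# under one request_handler trace.
if __name__ == "__main__":
    print(request_handler({
        "prompt_1": "Summarize laminar flow in one sentence.",
        "prompt_2": "Now write a haiku about it.",
    }))
```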
Next steps
Learn more about using `observe` and manual instrumentation.
Features
Observability
Track the full execution trace of your LLM application.
Laminar’s instrumentation is compatible with OpenTelemetry.
Get started with Tracing.
Events and Analytics
Make sense of thousands of traces generated by your LLM application. Evaluate and catch failures of your agents in real-time. Gather metrics on user or agent behavior based on semantic meaning.
In addition, you can track raw metrics by sending events with values directly to Laminar.
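A loose sketch of what this could look like; the `L.event` method name and its signature here are assumptions, so check the Events documentation for the actual API:

```python
from lmnr import Laminar as L

# Hypothetical call: record a named event with a numeric value,
# attached to the current trace.
L.event("user_sentiment_score", 0.87)
```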
Learn more about Events extraction.
Data labeling and re-ingestion
Laminar provides you with a UI where you can label and annotate the traces of your LLM applications. Organize the labeled data into datasets and use them to update your prompts or fine-tune your models.
Perform advanced hybrid search on datasets to augment your prompts with the most semantically relevant examples from past traces.
Learn more about Data labeling and Datasets.
Evaluations
Laminar allows you to run your prompts and models against datasets, evaluate the performance, and analyze their results in the dashboard.
You can use Laminar’s JavaScript and Python SDKs to set up and run your evaluations.
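As a minimal sketch, assuming the SDK exposes an `evaluate` helper roughly like the one below (the names and signature are assumptions; see the Evaluations docs for the exact API):

```python
from lmnr import evaluate  # assumed entry point

# Hypothetical executor: stands in for a real prompt/model call.
def executor(data: str) -> str:
    return data.upper()

# Hypothetical evaluator: scores an output against its target.
def exact_match(output: str, target: str) -> int:
    return int(output == target)

evaluate(
    data=[{"data": "hello", "target": "HELLO"}],
    executor=executor,
    evaluators={"exact_match": exact_match},
)
```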
Learn more about Evaluations.
Prompt chain management
You can build and host chains of prompts and LLMs, then call each chain as if it were a single function. This is especially useful when you want to experiment with techniques such as Mixture of Agents and self-reflecting agents without (or before) having to manage prompts and model configs in your code.
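A hypothetical sketch of calling a hosted chain, assuming a chain named `my_chain` and a `run` method (both are placeholders; the Pipeline builder docs describe the real interface):

```python
from lmnr import Laminar as L

L.initialize(project_api_key="LMNR_PROJECT_API_KEY")

# Hypothetical: invoke a hosted prompt chain as if it were a single function.
result = L.run(
    pipeline="my_chain",
    inputs={"topic": "laminar flow"},
)
print(result)
```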
Learn more about Pipeline builder for prompt chains.