Quickstart Guide
This guide will walk you through running your first evaluation using Laminar’s evaluation system.
Laminar provides a structured approach to creating, running, and tracking evaluations of your AI system's performance through these key components:
- Executors - Functions that process inputs and produce outputs, such as prompt templates, LLM calls, or production logic
- Evaluators - Functions that assess outputs against targets or quality criteria, producing numeric scores
- Datasets - Collections of datapoints (test cases) with two key elements:
  - data - Required JSON input sent to the executor
  - target - Optional reference data sent to the evaluator, typically containing expected outputs
- Visualization - Tools to track performance trends and detect regressions over time
- Tracing - Automatic recording of execution flow and model invocations
Example datapoint:
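As a minimal sketch, a datapoint for a simple question-answering task might look like this (the field values are illustrative; only the data/target structure comes from Laminar):

```typescript
// Hypothetical datapoint: `data` goes to the executor, `target` to the evaluator.
const datapoint = {
  data: { question: "What is the capital of France?" }, // required JSON input
  target: { answer: "Paris" },                          // optional reference output
};
```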
Evaluation Groups bundle related evaluations that assess a single feature or component, with results aggregated for comparison.
Evaluation Lifecycle
For each datapoint in a dataset:
- The executor receives the data as input
- The executor runs and its output is stored
- Both the executor output and target are passed to the evaluator
- The evaluator produces either a numeric score or a JSON object with multiple numeric scores
- Results are stored and can be visualized to track performance over time
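As a sketch of the two return shapes an evaluator can produce (the names and scoring logic here are made up for illustration):

```typescript
// Returns a single numeric score.
const exactMatch = (output: string, target: { answer: string }) =>
  output === target.answer ? 1 : 0;

// Returns a JSON object containing multiple numeric scores.
const detailedScores = (output: string, target: { answer: string }) => ({
  exact: output === target.answer ? 1 : 0,
  length_ok: output.length < 100 ? 1 : 0,
});
```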
This approach helps you continuously measure your AI system’s performance as you make changes, showing the impact of model updates, prompt revisions, and code changes.
Create your first evaluation
Prerequisites
To get a project API key, go to the Laminar dashboard, open the project settings, and generate a project API key. This is available both in the cloud and in the self-hosted version of Laminar.
Specify the key at Laminar
initialization. If not specified,
Laminar will look for the key in the LMNR_PROJECT_API_KEY
environment variable.
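A minimal sketch of both options, assuming the TypeScript SDK (@lmnr-ai/lmnr) accepts the key through Laminar.initialize; check the SDK reference for your version:

```typescript
import { Laminar } from "@lmnr-ai/lmnr";

// Option 1: pass the key explicitly at initialization.
Laminar.initialize({ projectApiKey: "<your-project-api-key>" });

// Option 2: set LMNR_PROJECT_API_KEY in your environment and omit the argument;
// Laminar picks the key up automatically.
```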
Create an evaluation file
Create a file named my-first-evaluation.ts
and add the following code:
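The snippet below is a minimal sketch assuming the @lmnr-ai/lmnr package, an OpenAI-based executor, and a simple substring-match evaluator. The model, prompt, dataset, and the exact instrumentModules key are illustrative; consult the SDK reference for your version.

```typescript
import { evaluate } from "@lmnr-ai/lmnr";
import OpenAI from "openai";

const openai = new OpenAI();

// Executor: receives a datapoint's `data` and produces an output.
const getCapital = async ({ country }: { country: string }): Promise<string> => {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "user", content: `What is the capital of ${country}? Answer in one word.` },
    ],
  });
  return response.choices[0].message.content ?? "";
};

evaluate({
  data: [
    { data: { country: "France" }, target: { capital: "Paris" } },
    { data: { country: "Germany" }, target: { capital: "Berlin" } },
  ],
  executor: getCapital,
  // Evaluators map names to functions of (output, target) that return numeric scores.
  evaluators: {
    accuracy: (output: string, target: { capital: string }) =>
      output.includes(target.capital) ? 1 : 0,
  },
  config: {
    // Instrument the OpenAI client so LLM calls inside the executor are traced.
    // The key name here is assumed; verify it against the SDK reference.
    instrumentModules: { OpenAI },
  },
});
```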
It is important to pass the config object with instrumentModules to evaluate to ensure that the OpenAI client and any other supported modules are instrumented.
If you are using Python, create a file named my-first-evaluation.py and add the equivalent code using the lmnr Python SDK.
Run the evaluation
You can run evaluations in two ways: using the lmnr eval
CLI or directly executing the evaluation file.
Using the CLI
The Laminar CLI automatically detects top-level evaluate
function calls in your files - you don’t need to wrap them in a main
function or any special structure.
To run multiple evaluations, place them in an evals directory with the naming pattern *.eval.{ts,js}:
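For example, a layout like the following (file names are illustrative):

```
evals/
├── my-first-evaluation.eval.ts
└── summarization.eval.ts
```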
Then run all evaluations with a single command:
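Assuming the CLI bundled with the TypeScript SDK, the command is roughly:

```sh
# Discovers and runs all *.eval.{ts,js} files in the evals directory.
npx lmnr eval
```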
To run multiple evaluations, place them in an evals directory with the naming pattern eval_*.py or *_eval.py:
Then run all evaluations with a single command:
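With the Python SDK, the equivalent is roughly (assuming the lmnr package and its CLI are installed in the active environment):

```sh
# Discovers and runs all eval_*.py / *_eval.py files in the evals directory.
lmnr eval
```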
Running as a standalone script
You can also import and call evaluate
directly from your application code:
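A sketch of this, assuming the same @lmnr-ai/lmnr package; the function name and dataset are hypothetical:

```typescript
import { evaluate } from "@lmnr-ai/lmnr";

async function runNightlyEval() {
  // `evaluate` can be awaited like any other async function in your application code.
  await evaluate({
    data: [{ data: { country: "France" }, target: { capital: "Paris" } }],
    executor: async ({ country }: { country: string }) =>
      `The capital of ${country} is Paris`, // replace with your real logic
    evaluators: {
      correct: (output: string, target: { capital: string }) =>
        output.includes(target.capital) ? 1 : 0,
    },
  });
}

runNightlyEval();
```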
The evaluate
function is flexible and can be used both in standalone scripts processed by the CLI and integrated directly into your application code.
Evaluator functions must return either a single numeric score or a JSON object containing multiple numeric scores.
No need to initialize Laminar - evaluate
automatically initializes Laminar behind the scenes. All instrumented function calls and model invocations are traced without any additional setup.
View evaluation results
When you run an evaluation from the CLI, Laminar will output the link to the dashboard where you can view the evaluation results.
Laminar stores every evaluation result. Each datapoint run is represented as a trace, and you can view the results and corresponding traces on the evaluations page.
Tracking evaluation progress
To track the score progression over time or compare evaluations side-by-side, you need to group them together. This can be achieved by passing the groupName
parameter to the evaluate
function.
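For example (a sketch: the dataset, executor, and group name are illustrative; only the groupName parameter itself comes from the docs):

```typescript
import { evaluate } from "@lmnr-ai/lmnr";

evaluate({
  data: [{ data: { country: "France" }, target: { capital: "Paris" } }],
  executor: async ({ country }: { country: string }) =>
    `The capital of ${country} is Paris`, // stand-in executor
  evaluators: {
    accuracy: (output: string, target: { capital: string }) =>
      output.includes(target.capital) ? 1 : 0,
  },
  // Evaluations that share a groupName are aggregated on the dashboard,
  // so their scores can be tracked over time and compared side by side.
  groupName: "capital-cities",
});
```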