Getting Started in 2 Minutes

1. Get Your Project API Key

To get your Project API Key, navigate to your project settings page in the Laminar dashboard and create a new project API key.

Next, you’ll need to set this key as an environment variable in your project. Create a .env file in the root of your project (if you don’t have one already) and add the following line:

LMNR_PROJECT_API_KEY=your_project_api_key_here

Replace your_project_api_key_here with the actual key you copied.
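
Depending on your runtime, .env files may not be loaded automatically. A minimal sketch, assuming you use the dotenv package, is to load it at the very top of your entry file before initializing Laminar:

// Load variables from .env into process.env before anything reads them
import 'dotenv/config';

// process.env.LMNR_PROJECT_API_KEY is now available when you initialize Laminar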

2. Initialize Laminar in Your Application

Adding just two lines to your application enables comprehensive tracing:

import { Laminar } from '@lmnr-ai/lmnr';
import { OpenAI } from 'openai';
Laminar.initialize({
    projectApiKey: process.env.LMNR_PROJECT_API_KEY,
    instrumentModules: {
        openAI: OpenAI,
        // add other libraries as you need
    }
});

Laminar should be initialized once in your application, for example at server startup or in your application's entry point.
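
For example, in a Node.js server you might put the initialization at the top of your entry file so it runs before any instrumented code (a minimal sketch, assuming an Express server and a hypothetical server.ts entry file):

// server.ts (hypothetical entry file)
import { Laminar } from '@lmnr-ai/lmnr';
import { OpenAI } from 'openai';
import express from 'express';

// Initialize once, before any instrumented code runs
Laminar.initialize({
    projectApiKey: process.env.LMNR_PROJECT_API_KEY,
    instrumentModules: {
        openAI: OpenAI,
    }
});

const app = express();
// ... register routes that use the OpenAI client ...
app.listen(3000);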

This will automatically instrument all major LLM provider SDKs, LLM frameworks including LangChain and LlamaIndex, and calls to vector databases.

For Node.js setups, you need to manually pass the modules you want to instrument, such as OpenAI, via instrumentModules. See the section on manual instrumentation.

For more information, refer to the instrumentation docs.

3. That’s it! Your LLM API Calls Are Now Traced

Once initialized, Laminar automatically traces LLM API calls. For example, this standard OpenAI call:

import { OpenAI } from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "What is the capital of France?" }],
});

Will automatically create a span in your Laminar dashboard.

Laminar automatically captures important LLM metrics, including latency, token usage, and cost, calculated based on the specific model used.

Tracing Custom Functions

Beyond automatic LLM tracing, you can use the observe decorator/wrapper to trace specific functions in your application.

Wrapping a function in observe() is especially helpful when you want to trace arbitrary (non-LLM) code, or group separate functions into a single trace.

import { OpenAI } from 'openai';
import { observe } from '@lmnr-ai/lmnr';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const myFunction = async () => observe(
  { name: 'myFunction' },
  async () => {
    const response = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "What is the capital of France?" }],
    });
    return response.choices[0].message.content;
  }
);

await myFunction();

We are now recording myFunction and the OpenAI call, which is nested inside it, in the same trace. Notice that the OpenAI span is a child of myFunction. Parent-child relationships are automatically detected and visualized as a tree hierarchy.

You can nest as many spans as you want inside each other. By observing both your functions and the LLM/vector DB calls, you get a clearer picture of the execution flow, which is useful for debugging and understanding the application.
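
For example, you can wrap an outer workflow and an inner step in separate observe() calls; the inner span then appears as a child of the outer one. A minimal sketch, using hypothetical function names:

import { OpenAI } from 'openai';
import { observe } from '@lmnr-ai/lmnr';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Inner step: its span nests under whichever span is active when it is called
const fetchAnswer = async (question: string) =>
  observe({ name: 'fetchAnswer' }, async () => {
    const response = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: question }],
    });
    return response.choices[0].message.content;
  });

// Outer span: fetchAnswer (and the OpenAI call inside it) appear as its children
const answerPipeline = async () =>
  observe({ name: 'answerPipeline' }, async () =>
    fetchAnswer("What is the capital of France?")
  );

await answerPipeline();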

Input arguments to the function are automatically recorded as inputs of the span. The return value is automatically recorded as the output of the span.

Passing arguments to the observed function in TypeScript is slightly non-obvious: they are supplied as trailing arguments to observe(), after the callback. Example:

const myFunction = async () => observe(
  { name: 'myFunction' },
  async (param1, param2) => {
    // ...
  },
  'argValue1',
  'argValue2'
);
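
As a concrete sketch (with hypothetical names), the values after the callback are forwarded to it as arguments and recorded as the span's inputs, while the return value is recorded as the span's output:

const greet = async (name: string, language: string) => observe(
  { name: 'greet' },
  async (name: string, language: string) => {
    // name and language are recorded as the span's inputs
    return `Hello ${name}, I will reply in ${language}.`;
  },
  name,      // forwarded to the callback as its first argument
  language   // forwarded to the callback as its second argument
);

await greet('Ada', 'English');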

Next Steps