Overview

By initializing Laminar at the start of your application, you can trace prompts, responses, token usage, and costs of LLM calls from:
  • LLM provider SDKs (OpenAI, Anthropic, Gemini, etc.)
  • LLM frameworks (LangChain, LangGraph, Vercel AI SDK, Browser Use, etc.)
  • Vector database operations (Pinecone, Qdrant, etc.)
To learn more about integrations with LLM frameworks and SDKs, see the integrations section.
In JavaScript/TypeScript, the recommended approach is to specify which modules to instrument using the instrumentModules parameter.
import { Laminar } from '@lmnr-ai/lmnr';
import { OpenAI } from 'openai';

// Enable automatic instrumentation for specific modules
Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
  instrumentModules: {
    openai: OpenAI
  }
});

// All OpenAI calls are now automatically traced
const client = new OpenAI();
const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }]
});

Instrument all supported libraries

This approach instruments all supported libraries automatically.
import { Laminar } from '@lmnr-ai/lmnr';

// Initialize before importing LLM libraries
Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY
});

// Import after initialization
import { OpenAI } from 'openai';
import Anthropic from '@anthropic-ai/sdk';
This approach may not work with all bundlers or module systems: ES modules hoist static imports above other code, so initialization order is not guaranteed. If you encounter issues, use selective instrumentation instead.
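If you want to keep whole-module instrumentation in an ESM project, one hedged workaround is to load the LLM library with a dynamic import(), which is evaluated at runtime and therefore after initialization. This is a sketch based on standard JavaScript import semantics, not an officially documented Laminar pattern:

import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY
});

// Unlike static imports, dynamic import() is not hoisted,
// so 'openai' is loaded only after initialize() has run
const { OpenAI } = await import('openai');
const client = new OpenAI();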

Instrument specific libraries

For better control and compatibility, instrument only the libraries you need. This is the recommended approach for JavaScript/TypeScript applications:
import { OpenAI } from 'openai';
import Anthropic from '@anthropic-ai/sdk';
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
  instrumentModules: {
    openai: OpenAI,
    anthropic: Anthropic
  }
});

// Both OpenAI and Anthropic calls are now traced
const openaiClient = new OpenAI();
const anthropicClient = new Anthropic();

Disable Automatic Instrumentation

import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
  instrumentModules: {} // Empty object = no instrumentation
});

// No LLM calls will be automatically traced
// Use manual instrumentation instead
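With automatic instrumentation disabled, you can still trace the calls you care about yourself. Below is a minimal sketch assuming the observe wrapper exported by @lmnr-ai/lmnr and its (options, callback) signature; check the SDK reference to confirm the exact API:

import { OpenAI } from 'openai';
import { Laminar, observe } from '@lmnr-ai/lmnr';

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
  instrumentModules: {} // automatic instrumentation stays off
});

const client = new OpenAI();

// Wrap only the call you want traced; observe records it as a span
const greeting = await observe({ name: 'generateGreeting' }, async () => {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello!" }]
  });
  return response.choices[0].message.content;
});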

Supported Libraries

Laminar supports automatic instrumentation for a wide range of libraries:
LLM Providers:
  • OpenAI (openai)
  • Anthropic (@anthropic-ai/sdk)
  • Google AI (@google/generative-ai)
  • Cohere (cohere-ai)
Frameworks:
  • Vercel AI SDK (ai)
  • LangChain (langchain, @langchain/core)
Vector Databases:
  • Pinecone (@pinecone-database/pinecone)
  • Qdrant (@qdrant/js-client-rest)
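Any of these can be combined using the same instrumentModules pattern shown above. The sketch below assumes a pinecone key mirroring the documented openai and anthropic keys; verify the exact key names against the SDK types:

import { OpenAI } from 'openai';
import { Pinecone } from '@pinecone-database/pinecone';
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
  instrumentModules: {
    openai: OpenAI,
    pinecone: Pinecone // key name is an assumption; check the SDK types
  }
});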

Integration-Specific Guides

Some frameworks require additional configuration; see the integrations section for framework-specific guides.

What Gets Traced

When automatic instrumentation is enabled, you’ll see detailed traces including:

LLM Calls

  • Request parameters (model, messages, temperature, etc.)
  • Response content and metadata
  • Token usage (input, output, total)
  • Latency and performance metrics
  • Automatic cost calculation

Framework Operations

  • Chain executions in LangChain
  • Agent reasoning steps
  • Tool calls and results
  • Vector similarity searches

Error Handling

  • Exception details and stack traces
  • Retry attempts and failures
  • Rate limiting and quota errors

Next Steps

Once automatic instrumentation is working:
  1. Add structure with the observe wrapper to group related operations
  2. Organize traces into sessions for multi-turn conversations
  3. Add metadata for better filtering and analysis
  4. Set up evaluations to monitor quality and performance
Automatic instrumentation provides comprehensive observability with minimal setup, making it easy to understand and optimize your LLM applications.