Laminar automatically instruments the official OpenAI package with a single line of code, allowing you to trace and monitor all your OpenAI API calls without modifying your existing code. This provides complete visibility into your AI application’s performance, costs, and behavior.
Just add a single line at the start of your application or file to instrument OpenAI with Laminar.
```typescript
import { Laminar } from '@lmnr-ai/lmnr';
import OpenAI from 'openai';
import 'dotenv/config'; // Load environment variables

// This single line instruments all OpenAI API calls
Laminar.initialize({ instrumentModules: { OpenAI: OpenAI } });

// Initialize the OpenAI client as usual
const openai = new OpenAI();
```
It is important to pass `OpenAI` to `instrumentModules` as a key-value pair (`{ OpenAI: OpenAI }`); passing the module in any other shape will prevent instrumentation.
```typescript
// Make API calls to OpenAI as you normally would
const response = await openai.chat.completions.create({
  model: "gpt-4.1-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello, how are you?" },
  ],
});

console.log(response.choices[0].message.content);
```
All OpenAI API calls are now automatically traced in Laminar.
Beyond automatic instrumentation, Laminar's tracing features let you build more structured traces, add context to your LLM calls, and gain deeper insight into your AI application's performance.
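As one illustration, auto-instrumented OpenAI calls can be nested under a named span using Laminar's `observe` helper, so related calls appear grouped in the trace tree. This is a sketch, not a definitive recipe: the exact `observe` signature and behavior depend on your `@lmnr-ai/lmnr` version, and the span name `answerQuestion` is a hypothetical example.

```typescript
import { Laminar, observe } from '@lmnr-ai/lmnr';
import OpenAI from 'openai';
import 'dotenv/config';

Laminar.initialize({ instrumentModules: { OpenAI: OpenAI } });
const openai = new OpenAI();

// Sketch (assumed API): wrap a unit of work in a named span so the
// auto-instrumented OpenAI call below is nested under it in the trace.
const answer = await observe({ name: 'answerQuestion' }, async (question: string) => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4.1-mini',
    messages: [{ role: 'user', content: question }],
  });
  return response.choices[0].message.content;
}, 'What is distributed tracing?');

console.log(answer);
```

The span created by `observe` becomes the parent of the OpenAI call's span, which makes multi-step workflows easier to read in the trace view.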