Overview
Next.js is a popular React framework for building web applications.
Getting Started
1. Install Laminar
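For example, using npm (any Node package manager works):
npm install @lmnr-ai/lmnr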
2. Update your next.config.ts
Add the following to your next.config.ts file:
const nextConfig = {
  serverExternalPackages: ['@lmnr-ai/lmnr'],
};

export default nextConfig;
This is needed because Laminar depends on OpenTelemetry, which uses some Node.js-specific functionality, and Next.js must be told not to bundle those packages. Learn more in the Next.js docs.
3. Initialize Laminar
To instrument your entire Next.js app, place Laminar initialization in an instrumentation.{ts,js} file. Learn more about instrumentation.{ts,js} here.
export async function register() {
  // Only initialize in the Node.js runtime (not the Edge runtime).
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { Laminar } = await import('@lmnr-ai/lmnr');
    Laminar.initialize({
      projectApiKey: process.env.LMNR_API_KEY,
    });
  }
}
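Laminar.initialize reads the project API key from the LMNR_API_KEY environment variable used above. In Next.js, one common place to set it is an .env.local file (shown here as an assumption; any way of setting environment variables works):
LMNR_API_KEY=<your-project-api-key>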
instrumentation.ts is experimental in Next.js < 15. If you use Next.js < 15, add the following to your next.config.js:
module.exports = {
  experimental: { instrumentationHook: true },
};
4. Patch LLM SDKs
The AI SDK is already instrumented implicitly; you only need to direct it to use the Laminar tracer.
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

const { text } = await generateText({
  model: openai('gpt-4.1-nano'),
  prompt: 'What is Laminar flow?',
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
  },
});
Usually, Laminar.initialize can patch the LLM SDKs automatically. In Next.js, however, imports made in the instrumentation.{ts,js} file are not visible to the rest of the app, so you need to patch the LLM SDKs manually with the Laminar.patch function. For example, if you initialize your LLM clients in a lib/llm-clients.ts file, you can patch them like this:
import { OpenAI } from "openai";
import * as anthropic from "@anthropic-ai/sdk";
import { Laminar } from "@lmnr-ai/lmnr";

// Patch the SDKs before constructing any clients.
Laminar.patch({
  OpenAI: OpenAI,
  anthropic: anthropic,
});

const openaiClient = new OpenAI();
const anthropicClient = new anthropic.Anthropic();

export { openaiClient, anthropicClient };
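Once patched, calls made through these clients are traced automatically. Below is a minimal sketch of a route handler using the patched client; the file path, the @/ import alias, and the response shape are illustrative assumptions, not part of the Laminar API:
// app/api/completion/route.ts (hypothetical path)
import { NextResponse } from 'next/server';
import { openaiClient } from '@/lib/llm-clients';

export async function GET() {
  // This call is traced because the OpenAI SDK was patched in lib/llm-clients.ts.
  const completion = await openaiClient.chat.completions.create({
    model: 'gpt-4.1-nano',
    messages: [{ role: 'user', content: 'What is Laminar flow?' }],
  });
  return NextResponse.json({ text: completion.choices[0].message.content });
}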
5. Grouping traces within one route
If your app makes multiple LLM calls within one route, you may want to group them into a single trace. You might get this behavior by default if your app is already instrumented with OpenTelemetry through some Next.js integration, e.g. @vercel/otel or @sentry/nextjs. Otherwise, you can achieve it with the observe function wrapper, e.g. something like:
import { NextRequest, NextResponse } from 'next/server';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer, observe } from '@lmnr-ai/lmnr';

export const GET = async (req: NextRequest) => {
  const tracer = getTracer();
  // Both generateText calls become children of the 'GET /api/chat' span.
  const { firstText, secondText } = await observe(
    {
      name: 'GET /api/chat',
    },
    async () => {
      const { text: firstText } = await generateText({
        model: openai('gpt-4.1-nano'),
        prompt: 'What is Laminar flow?',
        experimental_telemetry: {
          isEnabled: true,
          tracer,
        },
      });
      const { text: secondText } = await generateText({
        model: openai('gpt-4.1-nano'),
        prompt: 'What is Laminar flow?',
        experimental_telemetry: {
          isEnabled: true,
          tracer,
        },
      });
      return { firstText, secondText };
    }
  );
  return NextResponse.json({ firstText, secondText });
};
Learn more about the observe function wrapper here.
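As a standalone sketch, observe takes a span options object and an async callback, runs the callback inside a new span, and passes its return value through (names here are illustrative):
import { observe } from '@lmnr-ai/lmnr';

// Creates a span named 'expensive-step' for the duration of the callback.
const result = await observe({ name: 'expensive-step' }, async () => {
  // ... any async work; LLM calls made here become children of this span
  return 42;
});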