This is because Laminar depends on OpenTelemetry, which uses some Node.js-specific functionality,
and we need to inform Next.js about it. Learn more in the Next.js docs.
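For example, you can mark the package as external to the server bundle. A sketch of `next.config.ts`, assuming Next.js 15's `serverExternalPackages` option (on Next.js 14 the equivalent lives under `experimental.serverComponentsExternalPackages`):

```typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Keep the Laminar package (and its OpenTelemetry dependencies) out of the
  // server bundle so its Node.js-specific code is resolved at runtime.
  serverExternalPackages: ["@lmnr-ai/lmnr"],
};

export default nextConfig;
```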
This file is crucial: it initializes Laminar for tracing.
Next.js automatically loads this file at startup.
instrumentation.ts
```typescript
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { Laminar } = await import('@lmnr-ai/lmnr')

    // Make sure to initialize Laminar **after** you initialize other
    // tracing libraries, e.g. `registerOTel` from `@vercel/otel`.
    Laminar.initialize({
      apiKey: process.env.LMNR_PROJECT_API_KEY,
    })
  }
}
```
Laminar only works in the `nodejs` runtime of Next.js. Learn more about the instrumentation.ts file in the Next.js docs.
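Route handlers use the Node.js runtime by default, but if a segment has been switched to the Edge runtime you can pin it back explicitly. A sketch using Next.js's `runtime` segment config option:

```typescript
// app/api/chat/route.ts (excerpt)
// Pin this route to the Node.js runtime; Laminar's tracing
// is unavailable on the Edge runtime.
export const runtime = "nodejs";
```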
Laminar must be initialized at the entry point of the application, but after other tracing libraries are initialized.
For Next.js, the instrumentation.ts file is ideal for this purpose as it’s loaded early in the application lifecycle.
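For example, if you also use `@vercel/otel`, the same `register` function can set it up first. A sketch, assuming `registerOTel`'s config-object form and a hypothetical service name:

```typescript
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    // Register other tracing first...
    const { registerOTel } = await import("@vercel/otel");
    registerOTel({ serviceName: "therapy-chat" });

    // ...then initialize Laminar so it attaches to the tracing
    // that is already in place.
    const { Laminar } = await import("@lmnr-ai/lmnr");
    Laminar.initialize({ apiKey: process.env.LMNR_PROJECT_API_KEY });
  }
}
```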
This file contains the main page layout for our chat application:
app/page.tsx
```tsx
import ChatUI from "@/components/chat-ui";

export default function Home() {
  return (
    <div className="grid grid-rows-[auto_1fr_auto] min-h-screen p-4 sm:p-6 font-[family-name:var(--font-geist-sans)]">
      <header className="py-4 text-center">
        <h1 className="text-2xl font-bold text-blue-600 mb-1">Therapy Chat</h1>
        <p className="text-sm text-gray-600 dark:text-gray-400 max-w-lg mx-auto">
          A safe space to share your thoughts and receive supportive guidance.
          Your conversation is private and confidential.
        </p>
      </header>
      <main className="w-full max-w-4xl mx-auto my-4">
        <ChatUI />
      </main>
      <footer className="py-4 text-center text-sm text-gray-500 dark:text-gray-400">
        <p>AI-powered support | Not a substitute for professional medical advice</p>
      </footer>
    </div>
  );
}
```
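The `ChatUI` component is where the conversation state lives. Its implementation is not shown here, but a minimal sketch of `components/chat-ui.tsx`, assuming it keeps the message history in client state and posts it to the `/api/chat` route defined below:

```tsx
"use client";

import { useState } from "react";

type Message = { role: "user" | "assistant"; content: string };

export default function ChatUI() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState("");

  async function send() {
    if (!input.trim()) return;
    const next = [...messages, { role: "user" as const, content: input }];
    setMessages(next);
    setInput("");

    // POST the full history; the route returns { message: string }.
    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: next }),
    });
    const data = await res.json();
    setMessages([...next, { role: "assistant", content: data.message }]);
  }

  return (
    <div className="flex flex-col gap-2">
      {messages.map((m, i) => (
        <p key={i} className={m.role === "user" ? "text-right" : "text-left"}>
          {m.content}
        </p>
      ))}
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === "Enter" && send()}
        placeholder="Type a message…"
        className="border rounded p-2"
      />
    </div>
  );
}
```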
This file contains the API route handler that processes chat messages and forwards them to the OpenAI or Anthropic API, depending on the selected provider. If you instead call the model through the Vercel AI SDK's generateText, make sure to enable experimental_telemetry there and pass the tracer to it.
app/api/chat/route.ts
```typescript
import { openai } from "@/lib/openai";
import { anthropic } from "@/lib/anthropic";
import { NextRequest, NextResponse } from "next/server";

export async function POST(request: NextRequest) {
  try {
    const body = await request.json();
    const { messages, provider } = body;
    const llmProvider = provider ?? process.env.LLM_PROVIDER ?? "openai";

    // Create system message with therapeutic instructions.
    // `as const` keeps the role narrowed to the literal "system",
    // which the OpenAI SDK's message types expect.
    const systemMessage = {
      role: "system" as const,
      content: `You are an AI-powered therapist assistant. Respond with empathy, understanding, and professionalism.
Your goal is to provide supportive responses that help the user process their feelings and thoughts.
Never give medical advice or diagnose conditions.`,
    };

    let response;
    if (llmProvider === "openai") {
      const completion = await openai.chat.completions.create({
        model: "gpt-4.1-nano",
        messages: [systemMessage, ...messages],
      });
      response = completion.choices[0].message.content;
    } else if (llmProvider === "anthropic") {
      const completion = await anthropic.messages.create({
        model: "claude-3-5-haiku-latest",
        system: systemMessage.content,
        messages: messages,
        max_tokens: 1000,
      });
      response =
        completion.content[0].type === "text" ? completion.content[0].text : "";
    } else {
      throw new Error(`Unsupported provider: ${llmProvider}`);
    }

    return NextResponse.json({ message: response });
  } catch (error) {
    console.error("Error in chat API:", error);
    return NextResponse.json(
      { error: "Failed to process request" },
      { status: 500 }
    );
  }
}
```
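For reference, the JSON contract this handler implements can be summarized with the following types (hypothetical names, derived from the code above):

```typescript
// Request body accepted by POST /api/chat.
type ChatRequest = {
  messages: { role: "user" | "assistant"; content: string }[];
  // Optional; falls back to process.env.LLM_PROVIDER, then "openai".
  provider?: "openai" | "anthropic";
};

// Response body: 200 on success, 500 on failure.
type ChatResponse = { message: string } | { error: string };
```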
These files create the LLM clients and, more importantly, patch them with Laminar so that their requests are traced.
lib/openai.ts
```typescript
import { OpenAI } from "openai";
import { Laminar } from "@lmnr-ai/lmnr";

// Patch the OpenAI module so Laminar traces every call made through it.
Laminar.patch({
  OpenAI: OpenAI
});

const openai = new OpenAI();

export { openai };
```
lib/anthropic.ts
```typescript
import * as anthropic from "@anthropic-ai/sdk";
import { Laminar } from "@lmnr-ai/lmnr";

// Patch the Anthropic module so Laminar traces every call made through it.
Laminar.patch({
  anthropic: anthropic
});

const anthropicClient = new anthropic.Anthropic();

export { anthropicClient as anthropic };
```
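Because patching happens at module load time, before the clients are constructed, every completion call made through these exports is traced automatically. If you also want a parent span that groups several calls under one trace, the Laminar SDK exports an `observe` helper. A sketch, assuming the `observe({ name }, fn)` form from the Laminar docs, with `tracedChat` as a hypothetical helper:

```typescript
import { observe } from "@lmnr-ai/lmnr";
import { openai } from "@/lib/openai";

// Hypothetical helper: wraps a completion call in a named span so the
// patched client's LLM span nests under it in the trace.
export async function tracedChat(userMessage: string) {
  return observe({ name: "chat" }, async () => {
    const completion = await openai.chat.completions.create({
      model: "gpt-4.1-nano",
      messages: [{ role: "user", content: userMessage }],
    });
    return completion.choices[0].message.content;
  });
}
```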