Overview
Simply by initializing Laminar at the start of your application, you can start tracing prompts, responses, token usage, and costs of LLM calls from:

- LLM provider SDKs (OpenAI, Anthropic, Gemini, etc.)
- LLM frameworks (LangChain, LangGraph, Vercel AI SDK, Browser Use, etc.)
- Vector database operations (Pinecone, Qdrant, etc.)
To learn more about the integrations with the LLM frameworks and SDKs, see the integrations section.
In JavaScript/TypeScript, the recommended approach is to specify which modules to instrument using the `instrumentModules` parameter.

Instrument all supported libraries
This approach instruments all supported libraries automatically. It may not work with all bundlers; if you encounter issues, use selective instrumentation instead.
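As a minimal sketch, assuming the `@lmnr-ai/lmnr` package and a `LMNR_PROJECT_API_KEY` environment variable (check the Laminar reference for the exact option names in your SDK version), initializing without restricting modules enables every supported instrumentation:

```typescript
// Sketch: initialize Laminar before importing or using any LLM SDKs,
// so the supported libraries can be patched automatically.
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
});

// From this point on, calls made through supported SDKs
// (OpenAI, Anthropic, LangChain, ...) are traced automatically.
```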
Instrument specific libraries
For better control and compatibility, instrument only the libraries you need. This is the recommended approach for JavaScript/TypeScript applications:
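A sketch of selective instrumentation, again assuming the `@lmnr-ai/lmnr` package; the `instrumentModules` key names shown here (`openAI`, `anthropic`) are illustrative, so verify them against the Laminar SDK reference for your release:

```typescript
// Sketch: only the modules passed in instrumentModules are patched.
import { Laminar } from '@lmnr-ai/lmnr';
import OpenAI from 'openai';
import * as anthropic from '@anthropic-ai/sdk';

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
  instrumentModules: {
    openAI: OpenAI,     // key names are assumptions; check the docs
    anthropic: anthropic,
  },
});
```

Passing the module objects explicitly also tends to be more bundler-friendly, since the SDK does not have to discover and patch packages on its own.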
Disable Automatic Instrumentation
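One hedged approach to disabling automatic instrumentation entirely, assuming the `@lmnr-ai/lmnr` SDK treats an empty `instrumentModules` object as "patch nothing" (verify this semantic against the Laminar reference):

```typescript
import { Laminar } from '@lmnr-ai/lmnr';

// With an empty object, no library modules are patched;
// only spans you create manually are recorded.
Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
  instrumentModules: {},
});
```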
Supported Libraries
Laminar supports automatic instrumentation for a wide range of libraries:

LLM Providers:
- OpenAI (`openai`)
- Anthropic (`@anthropic-ai/sdk`)
- Google AI (`@google/generative-ai`)
- Cohere (`cohere-ai`)

LLM Frameworks:
- Vercel AI SDK (`ai`)
- LangChain (`langchain`, `@langchain/core`)

Vector Databases:
- Pinecone (`@pinecone-database/pinecone`)
- Qdrant (`@qdrant/js-client-rest`)
Integration-Specific Guides
Some frameworks require additional configuration:

- Next.js applications: See the Next.js integration guide
- Vercel AI SDK: See the Vercel AI SDK guide
- LangChain: See the LangChain integration guide
What Gets Traced
When automatic instrumentation is enabled, you’ll see detailed traces including:

LLM Calls
- Request parameters (model, messages, temperature, etc.)
- Response content and metadata
- Token usage (input, output, total)
- Latency and performance metrics
- Automatic cost calculation
Framework Operations
- Chain executions in LangChain
- Agent reasoning steps
- Tool calls and results
- Vector similarity searches
Error Handling
- Exception details and stack traces
- Retry attempts and failures
- Rate limiting and quota errors
Next Steps
Once automatic instrumentation is working:

- Add structure with the `observe` decorator to group related operations
- Organize traces into sessions for multi-turn conversations
- Add metadata for better filtering and analysis
- Set up evaluations to monitor quality and performance
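As an illustrative sketch of the first step, grouping related calls under one named span — the `observe` wrapper and its `{ name }` option are assumed from the `@lmnr-ai/lmnr` JS SDK, so check the reference for the exact signature:

```typescript
import { Laminar, observe } from '@lmnr-ai/lmnr';

Laminar.initialize({ projectApiKey: process.env.LMNR_PROJECT_API_KEY });

// Wrap a function so the work inside it is grouped under one named span.
// Auto-instrumented LLM calls made inside the callback appear as
// children of that span in the trace.
async function answerQuestion(question: string): Promise<string> {
  return observe({ name: 'answerQuestion' }, async () => {
    // ... call your LLM client here ...
    return `answer to: ${question}`;
  });
}
```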