OpenAI
Instrument your OpenAI API calls with Laminar
Overview
Laminar automatically instruments the official OpenAI package with a single line of code, allowing you to trace and monitor all your OpenAI API calls without modifying your existing code. This provides complete visibility into your AI application’s performance, costs, and behavior.
Getting Started
TypeScript
1. Install Laminar and OpenAI
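For example, assuming the Laminar TypeScript SDK is published as @lmnr-ai/lmnr on npm:

```bash
npm install @lmnr-ai/lmnr openai
```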
2. Set up your environment variables
Store your API keys in a .env file:
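A sketch of the file, assuming the SDK reads the project key from the LMNR_PROJECT_API_KEY variable:

```
LMNR_PROJECT_API_KEY=<your-laminar-project-api-key>
OPENAI_API_KEY=<your-openai-api-key>
```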
Then load them in your application using a package like dotenv.
If you are using OpenAI with Next.js, please follow the Next.js integration guide for best practices and setup instructions.
3. Initialize Laminar
Add a single line at the start of your application (or the file where you create your OpenAI client) to instrument OpenAI with Laminar.
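A minimal sketch, assuming the Laminar.initialize entry point from @lmnr-ai/lmnr and dotenv for loading the .env file:

```typescript
import 'dotenv/config'; // loads .env before anything reads process.env
import { Laminar } from '@lmnr-ai/lmnr';
import OpenAI from 'openai';

// The single instrumentation line: passing the OpenAI module lets Laminar
// patch its methods so every API call is recorded as a span.
Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
  instrumentModules: { OpenAI },
});
```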
It is important to pass OpenAI to instrumentModules as a named export.
4. Use OpenAI as usual
All OpenAI API calls are now automatically traced in Laminar.
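For example, a standard chat completion call; the application code is unchanged, but the call now produces a trace:

```typescript
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'What is Laminar?' }],
  });
  console.log(response.choices[0].message.content);
}

main();
```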
Python
1. Install Laminar and OpenAI
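For example, assuming the Laminar Python SDK is published as lmnr on PyPI:

```bash
pip install lmnr openai
```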
2. Set up your environment variables
Store your API keys in a .env file:
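The file is the same as in the TypeScript steps, again assuming the SDK reads LMNR_PROJECT_API_KEY:

```
LMNR_PROJECT_API_KEY=<your-laminar-project-api-key>
OPENAI_API_KEY=<your-openai-api-key>
```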
To see an example of how to integrate Laminar within a FastAPI application, check out our FastAPI integration guide.
3. Initialize Laminar
Add a single line at the start of your application to instrument OpenAI with Laminar.
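A minimal sketch, assuming the SDK exposes Laminar.initialize and that python-dotenv is used to load the .env file:

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed
from lmnr import Laminar

load_dotenv()  # read LMNR_PROJECT_API_KEY and OPENAI_API_KEY from .env

# The single instrumentation line; OpenAI calls made after this are traced.
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
```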
4. Use OpenAI as usual
All OpenAI API calls are now automatically traced in Laminar.
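For example, a standard chat completion; nothing about the call changes, but it now appears as a trace in Laminar:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is Laminar?"}],
)

print(response.choices[0].message.content)
```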
Monitoring Your OpenAI Usage
After instrumenting your OpenAI calls with Laminar, you’ll be able to:
- View detailed traces of each OpenAI API call, including request and response
- Track token usage and cost across different models
- Monitor latency and performance metrics
- Open any LLM span in the Playground for prompt engineering
- Debug issues with failed API calls or unexpected model outputs
Visit your Laminar dashboard to view your OpenAI traces and analytics.
Advanced Features
- Sessions - Learn how to add session structure to your traces
- Metadata - Discover how to add additional context to your LLM spans
- Trace structure - Explore creating custom spans and more advanced tracing
- Realtime Monitoring - See how to monitor your OpenAI calls in real-time
These features allow you to build more structured traces, add context to your LLM calls, and gain deeper insights into your AI application’s performance.