Laminar automatically captures and stores image data sent to vision-capable LLMs, regardless of the SDK or framework you use. Whether you're calling OpenAI, Anthropic, Google, or any other provider's SDK, Laminar seamlessly:
- Detects and saves all image content in your LLM requests.
- Captures both Base64-encoded images and image URLs (see the sketch below).
- Operates without interrupting your main application flow or affecting performance.
This happens transparently in the background - no code changes required.
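As a minimal sketch of the URL case, the example below passes a publicly hosted image by URL instead of Base64 using the OpenAI Python SDK; assuming Laminar's auto-instrumentation is initialized as shown, the image URL would be captured in the trace the same way. The image URL here is a hypothetical placeholder.

```python
import os
from openai import OpenAI
from lmnr import Laminar

# Initialize Laminar once at startup - instrumentation is automatic from here on
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Send an image by URL instead of Base64 - Laminar records the URL in the trace
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    # Hypothetical placeholder URL, for illustration only
                    "image_url": {"url": "https://example.com/eiffel_tower.jpg"},
                },
            ],
        }
    ],
    max_tokens=500,
)
print(response.choices[0].message.content)
```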
Laminar automatically detects images when you send them using the standard OpenAI SDK patterns. No additional configuration is required - simply use images in your LLM calls as you normally would.
```javascript
import { OpenAI } from 'openai';
import { Laminar, observe } from '@lmnr-ai/lmnr';
import fs from 'fs';

// Initialize Laminar
Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
});

// Initialize OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const analyzeImage = async (imagePath, userQuestion) =>
  await observe({ name: 'analyzeImage' }, async () => {
    // Encode image to base64
    const imageBuffer = fs.readFileSync(imagePath);
    const base64Image = imageBuffer.toString('base64');

    // Make LLM call with image - Laminar automatically traces the image data
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [
        {
          role: 'user',
          content: [
            { type: 'text', text: userQuestion },
            {
              type: 'image_url',
              image_url: { url: `data:image/jpeg;base64,${base64Image}` }
            }
          ]
        }
      ],
      max_tokens: 500
    });

    return response.choices[0].message.content;
  });

// Example usage
const result = await analyzeImage('eiffel_tower.jpg', 'What information is shown in this image?');
console.log(result);
```
```python
import os
import base64
from openai import OpenAI
from lmnr import Laminar, observe
from dotenv import load_dotenv

load_dotenv()

# Initialize Laminar
Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"]
)

# Initialize OpenAI client
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()
def analyze_image(image_path: str, user_question: str):
    # Encode image to base64
    with open(image_path, "rb") as image_file:
        base64_image = base64.b64encode(image_file.read()).decode('utf-8')

    # Make LLM call with image - Laminar automatically traces the image data
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": user_question
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{base64_image}"
                        }
                    }
                ]
            }
        ],
        max_tokens=500
    )

    return response.choices[0].message.content

# Example usage
result = analyze_image("eiffel_tower.jpg", "What information is shown in this image?")
print(result)
```
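The same pattern applies to other providers. As a hedged sketch, here is an equivalent call with the Anthropic Python SDK, assuming Laminar's auto-instrumentation covers Anthropic as described above; the model name and file path are illustrative, and Anthropic's Messages API expects Base64 images as a dedicated `image` content block rather than a data URL.

```python
import os
import base64
import anthropic
from lmnr import Laminar

# Initialize Laminar - Anthropic calls are then traced automatically
Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Encode image to base64 (illustrative file path)
with open("eiffel_tower.jpg", "rb") as image_file:
    base64_image = base64.b64encode(image_file.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=500,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    # Anthropic uses an "image" block with a base64 source,
                    # not an "image_url" block with a data URL
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": base64_image,
                    },
                },
                {"type": "text", "text": "What information is shown in this image?"},
            ],
        }
    ],
)
print(message.content[0].text)
```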
Get started with automatic image tracing using any of our supported integrations. No configuration required - just install the SDK, and your images will be captured automatically: