Overview

Laminar automatically calculates costs for LLM calls when the correct provider and model names are set. This page lists the supported providers and their corresponding model names that Laminar recognizes for cost calculation.

Supported Providers

Laminar uses provider names consistent with OpenLLMetry standards. When manually instrumenting LLM calls, set the gen_ai.system attribute to one of these values:
| Provider | Provider Name | Example Model | Documentation |
|---|---|---|---|
| OpenAI | openai | gpt-4o, gpt-4o-2024-11-20 | platform.openai.com |
| Anthropic | anthropic | claude-3-5-sonnet, claude-3-5-sonnet-20241022 | docs.anthropic.com |
| Google Gemini | gemini, google-genai | models/gemini-1.5-pro | ai.google.dev |
| Azure OpenAI | azure-openai | gpt-4o-mini, gpt-4o-mini-2024-07-18 | learn.microsoft.com |
| AWS Bedrock | bedrock-anthropic | claude-3-5-sonnet-20241022-v2:0 | docs.aws.amazon.com |
| Mistral AI | mistral | mistral-large-2407 | docs.mistral.ai |
| Groq | groq | llama-3.1-70b-versatile | console.groq.com |
Missing a provider or can’t see cost information? Create an issue and we’ll add it.

Setting Provider Information

When manually instrumenting LLM calls, ensure you set the correct provider and model attributes:
import { Laminar, LaminarAttributes, observe } from '@lmnr-ai/lmnr';

await observe(
  { name: 'anthropicCall', spanType: 'LLM' },
  async () => {
    const response = await fetch('https://api.anthropic.com/v1/messages', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'x-api-key': process.env.ANTHROPIC_API_KEY,
      },
      body: JSON.stringify({
        model: 'claude-3-5-sonnet-20241022',
        messages: [{ role: 'user', content: 'Hello!' }],
        max_tokens: 100
      })
    }).then(res => res.json());

    // Set provider and model for cost calculation
    Laminar.setSpanAttributes({
      [LaminarAttributes.PROVIDER]: 'anthropic',
      [LaminarAttributes.RESPONSE_MODEL]: response.model,
      [LaminarAttributes.INPUT_TOKEN_COUNT]: response.usage.input_tokens,
      [LaminarAttributes.OUTPUT_TOKEN_COUNT]: response.usage.output_tokens,
    });

    return response;
  }
);

Model Name Formats

Different providers use different model name formats. Use the exact names as returned by the provider’s API:

OpenAI

// Standard models
"gpt-4o"
"gpt-4o-mini" 
"gpt-3.5-turbo"

// Versioned models  
"gpt-4o-2024-11-20"
"gpt-4o-mini-2024-07-18"

Anthropic

// Standard models
"claude-3-5-sonnet"
"claude-3-haiku"
"claude-3-opus"

// Versioned models
"claude-3-5-sonnet-20241022"
"claude-3-5-sonnet-20241022-v2:0"

Google Gemini

// Full model paths
"models/gemini-1.5-pro"
"models/gemini-1.5-flash"
"models/gemini-1.0-pro"

Azure OpenAI

// Same as OpenAI models
"gpt-4o"
"gpt-4o-mini-2024-07-18"

Custom Providers

For providers not listed above, you can still track usage by setting custom attributes:
// For custom or unsupported providers
Laminar.setSpanAttributes({
  [LaminarAttributes.PROVIDER]: 'custom-provider',
  [LaminarAttributes.RESPONSE_MODEL]: 'custom-model-v1',
  [LaminarAttributes.INPUT_TOKEN_COUNT]: response.usage.input_tokens,
  [LaminarAttributes.OUTPUT_TOKEN_COUNT]: response.usage.output_tokens,
  // Set explicit costs if known
  'gen_ai.usage.input_cost': 0.001,
  'gen_ai.usage.output_cost': 0.002,
  'gen_ai.usage.cost': 0.003
});

Cost Calculation

Laminar automatically calculates costs using:
  1. Token counts (gen_ai.usage.input_tokens, gen_ai.usage.output_tokens)
  2. Model name (gen_ai.response.model)
  3. Provider name (gen_ai.system)
Laminar also accounts for cached tokens when calculating costs for providers that support prompt caching, such as OpenAI and Anthropic.
The cost calculation uses current pricing from each provider. If explicit cost attributes are provided, they take precedence over calculated costs.
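The precedence described above can be sketched as follows. The price table and all numbers here are made-up placeholders for illustration, not Laminar's actual pricing data or implementation:

```typescript
// Illustrative sketch of the cost-calculation precedence described above.
// Prices are placeholder values, not Laminar's real pricing data.
type Usage = { inputTokens: number; outputTokens: number; cachedTokens?: number };

const PRICE_PER_1M: Record<string, { input: number; cachedInput: number; output: number }> = {
  'openai/gpt-4o': { input: 2.5, cachedInput: 1.25, output: 10 }, // placeholder prices
};

function estimateCost(
  provider: string,          // gen_ai.system
  model: string,             // gen_ai.response.model
  usage: Usage,              // gen_ai.usage.* token counts
  explicitCost?: number,     // gen_ai.usage.cost, if the caller set it
): number | undefined {
  // Explicit cost attributes take precedence over calculated costs.
  if (explicitCost !== undefined) return explicitCost;
  const price = PRICE_PER_1M[`${provider}/${model}`];
  if (!price) return undefined; // unknown provider/model: no cost computed
  // Cached input tokens are billed at a discounted rate.
  const cached = usage.cachedTokens ?? 0;
  const fresh = usage.inputTokens - cached;
  return (
    (fresh * price.input + cached * price.cachedInput + usage.outputTokens * price.output) /
    1_000_000
  );
}
```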

Viewing Costs

Costs appear in the Laminar UI on:
  • Trace details - Sum of all LLM calls within the trace
  • LLM spans - Individual LLM call costs
  • Analytics dashboard - Aggregated cost metrics by model

Pricing Data

Laminar maintains current pricing information for supported providers. For the complete list of supported models and their pricing, see the pricing data in our GitHub repository.

Best Practices

Always Set Provider Info

// ✅ Good - complete provider information
Laminar.setSpanAttributes({
  [LaminarAttributes.PROVIDER]: 'openai',
  [LaminarAttributes.RESPONSE_MODEL]: response.model,
  [LaminarAttributes.INPUT_TOKEN_COUNT]: response.usage.prompt_tokens,
  [LaminarAttributes.OUTPUT_TOKEN_COUNT]: response.usage.completion_tokens,
});

// ❌ Bad - missing provider or model
Laminar.setSpanAttributes({
  [LaminarAttributes.INPUT_TOKEN_COUNT]: response.usage.prompt_tokens,
});

Use Exact Model Names

// ✅ Good - exact model name from API response
[LaminarAttributes.RESPONSE_MODEL]: response.model

// ❌ Bad - hardcoded or modified model name  
[LaminarAttributes.RESPONSE_MODEL]: 'gpt-4'

Handle Missing Usage Data

// Gracefully handle missing usage information
const inputTokens = response.usage?.prompt_tokens || 0;
const outputTokens = response.usage?.completion_tokens || 0;

if (inputTokens > 0 || outputTokens > 0) {
  Laminar.setSpanAttributes({
    [LaminarAttributes.INPUT_TOKEN_COUNT]: inputTokens,
    [LaminarAttributes.OUTPUT_TOKEN_COUNT]: outputTokens,
  });
}
Proper provider configuration ensures accurate cost tracking and better insights into your LLM usage patterns.