Overview

Laminar automatically instruments LangChain and LangGraph: initialize Laminar once at the beginning of your Python application, and your LLM chains, agents, and graph-based workflows are traced and monitored from then on. You get full visibility into your AI application’s performance, costs, and behavior without modifying your existing LangChain/LangGraph code.

Getting Started

1. Install Laminar and LangChain/LangGraph

You’ll need Laminar, LangChain core, any specific LangChain LLM/tool integrations (e.g., for OpenAI), and LangGraph:

pip install 'lmnr[all]' langchain langchain-openai langgraph python-dotenv
# Add other packages like langchain-community for specific tools if needed

2. Set up environment variables & Initialize Laminar

Store your API keys in a .env file and initialize Laminar once at the start of your application, before any LangChain or LangGraph code is executed.
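For example, a minimal .env file might look like this (assuming OpenAI as the model provider; adjust the keys to whichever providers you use):

# .env - keep this file out of version control
LMNR_PROJECT_API_KEY=<your-laminar-project-api-key>
OPENAI_API_KEY=<your-openai-api-key>

Then, in your application code: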

from lmnr import Laminar
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated, Sequence
import operator

# Load environment variables (LMNR_PROJECT_API_KEY, OPENAI_API_KEY) from .env
load_dotenv()

# Initialize Laminar - this single step enables automatic tracing.
# The project API key is read from the LMNR_PROJECT_API_KEY environment variable.
Laminar.initialize()

To see an example of how to integrate Laminar within a FastAPI application, check out our FastAPI integration guide.
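As a rough sketch of that pattern (the /ask route and the inline chain here are illustrative, not taken from the guide), initialize Laminar at module import time, before any request handler that touches LangChain:

from fastapi import FastAPI
from lmnr import Laminar
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Initialize once, before any LangChain code runs
Laminar.initialize()

app = FastAPI()
chain = (
    ChatPromptTemplate.from_messages([("human", "{question}")])
    | ChatOpenAI()
    | StrOutputParser()
)

@app.post("/ask")
async def ask(question: str):
    # ainvoke is the async counterpart of invoke and is traced the same way
    return {"answer": await chain.ainvoke({"question": question})}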

3. Use LangChain and LangGraph as usual

LangChain Example (simple LCEL chain):

# Ensure Laminar.initialize() was called as shown in Step 2.

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}")
])
output_parser = StrOutputParser()

chain = prompt | model | output_parser

# Each step of the chain (prompt, model, parser) appears in the trace
response = chain.invoke({"question": "What is the capital of France?"})
print(response)
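Other Runnable entry points go through the same callback machinery, so streaming and batch calls should be captured as well; for example:

# stream() yields output chunks as the model generates them
for chunk in chain.stream({"question": "Name three EU capitals."}):
    print(chunk, end="", flush=True)

# batch() runs the chain once per input
answers = chain.batch([
    {"question": "What is the capital of France?"},
    {"question": "What is the capital of Japan?"},
])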

LangGraph Example (Simple Graph):

# Ensure Laminar.initialize() was called as shown in Step 2.

class AgentState(TypedDict):
    # BaseMessage covers both the human messages you pass in and the AI replies
    messages: Annotated[Sequence[BaseMessage], operator.add]

llm = ChatOpenAI()

def call_model(state: AgentState):
    messages = state['messages']
    response = llm.invoke(messages)
    return {"messages": [response]}  # operator.add appends this to state["messages"]

# Define a new graph
workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.set_entry_point("agent")
workflow.add_edge("agent", END)

app = workflow.compile()

inputs = {"messages": [HumanMessage(content="Hi there!")]}
result = app.invoke(inputs)
print(result["messages"][-1].content)
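Compiled graphs can also be streamed, which surfaces each node’s work as it happens (a small usage sketch):

# stream_mode="updates" yields a dict of state updates per executed node
for update in app.stream(inputs, stream_mode="updates"):
    print(update)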

All instrumentable LangChain and LangGraph operations are now automatically traced in Laminar.
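If you need to narrow what gets instrumented, Laminar.initialize() accepts an instruments argument. Treat the following as a sketch and check the SDK reference for the exact enum members available in your version:

from lmnr import Laminar, Instruments

# Instrument only LangChain; other supported libraries are left alone
Laminar.initialize(instruments={Instruments.LANGCHAIN})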

Monitoring Your LangChain Usage

After instrumenting your LangChain and LangGraph applications with Laminar, you’ll be able to:

  1. View detailed traces of each chain, agent step, tool usage, and LLM call.
  2. Track token usage and cost across different models used within LangChain.
  3. Monitor latency and performance metrics for individual components and overall workflows.
  4. Analyze prompt engineering by inspecting inputs/outputs at each step.
  5. Debug issues with complex chains or graphs by visualizing their execution flow.

Visit your Laminar dashboard to view your LangChain traces and analytics.

Advanced Features

Leverage Laminar’s advanced features to get more out of your LangChain instrumentation (a combined sketch follows the list):

  • Sessions - Group related LangChain executions (e.g., a user conversation).
  • Metadata - Add custom context (e.g., user IDs, environment details) to your LangChain traces.
  • Trace structure - Create custom spans to instrument parts of your application logic outside of LangChain.
  • Realtime Monitoring - Observe your LangChain applications in real-time.
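As a hedged sketch of the first three features (handle_user_message is a hypothetical handler, and chain is the LCEL chain from Step 3; verify the exact method names against the pages linked above, as SDK signatures may change):

from lmnr import Laminar, observe

@observe()  # creates a parent span around everything inside, including LangChain calls
def handle_user_message(user_id: str, session_id: str, text: str):
    # Group this trace into a session and attach custom metadata
    Laminar.set_session(session_id=session_id)
    Laminar.set_metadata({"user_id": user_id, "env": "production"})
    # chain is the LCEL chain defined in Step 3
    return chain.invoke({"question": text})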