Laminar automatically instruments LangChain and LangGraph operations once you initialize Laminar at the beginning of your Python application. You can then trace and monitor your LLM chains, agents, and graph-based workflows, gaining complete visibility into your AI application's performance, costs, and behavior without modifying your existing LangChain/LangGraph code.
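For reference, initialization (Step 2) is a single call at startup. The sketch below assumes Laminar's `lmnr` Python package and an API key supplied via an environment variable; see Step 2 for the exact setup.

```python
from lmnr import Laminar  # assumes Laminar's `lmnr` Python package

# Call once at application startup, before any LangChain/LangGraph code runs.
# Assumes the project API key is available, e.g. via the LMNR_PROJECT_API_KEY
# environment variable; it can also be passed as project_api_key="...".
Laminar.initialize()
```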
LangChain Example:

```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Ensure Laminar.initialize() was called as shown in Step 2.
model = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}")
])
output_parser = StrOutputParser()
chain = prompt | model | output_parser

# response = chain.invoke({"question": "What is the capital of France?"})
# print(response)
```
LangGraph Example (Simple Graph):
```python
import operator
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

# Ensure Laminar.initialize() was called as shown in Step 2.
class AgentState(TypedDict):
    messages: Annotated[Sequence[HumanMessage], operator.add]

llm = ChatOpenAI()

def call_model(state: AgentState):
    messages = state['messages']
    response = llm.invoke(messages)
    return {"messages": [response]}  # Append new message

# Define a new graph
workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.set_entry_point("agent")
workflow.add_edge("agent", END)
app = workflow.compile()

# inputs = {"messages": [HumanMessage(content="Hi there!")]}
# result = app.invoke(inputs)
# print(result['messages'][-1].content)
```
All instrumentable LangChain and LangGraph operations are now automatically traced in Laminar.