## Overview
We’ll create a FastAPI application that:
- Accepts support ticket submissions
- Uses an LLM to classify the ticket
- Returns the classification result
- Traces the entire process with Laminar
## Setup
You can use the example app from our GitHub repo, or follow the steps below for a clean install:
### Install Dependencies

First, let's install the required packages:

```bash
pip install fastapi uvicorn openai lmnr python-dotenv
```
### Environment Setup

Create a `.env` file with your API keys:

```bash
LMNR_PROJECT_API_KEY="your-laminar-project-api-key"
OPENAI_API_KEY="your-openai-api-key"
```
### Project Structure

Create the following project structure:

```
fastapi-app/
├── .env
├── requirements.txt
└── src/
    ├── main.py
    ├── llm.py
    └── schemas.py
```
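If you use the `requirements.txt` file shown in the tree, it can simply list the same packages installed above (pinning versions is up to you):

```text
fastapi
uvicorn
openai
lmnr
python-dotenv
```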
## Implementation
Let’s create our FastAPI application with Laminar tracing. We’ll split the code into three files:
### 1. schemas.py

This file will contain the Pydantic models for the request and response.

```python
from enum import Enum

from pydantic import BaseModel


class TicketCategory(str, Enum):
    REFUND = "REFUND"
    BUG = "BUG"
    QUESTION = "QUESTION"
    OTHER = "OTHER"


class Ticket(BaseModel):
    title: str
    description: str
    customer_email: str


class TicketClassification(BaseModel):
    category: TicketCategory
    reasoning: str
```
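As a quick sanity check of these models (a sketch, assuming Pydantic v2's `model_validate`), you can see how FastAPI will treat incoming JSON: a valid payload parses into a `Ticket`, while a payload missing required fields raises `ValidationError`, which FastAPI surfaces to the client as a 422 response:

```python
from enum import Enum

from pydantic import BaseModel, ValidationError


class TicketCategory(str, Enum):
    REFUND = "REFUND"
    BUG = "BUG"
    QUESTION = "QUESTION"
    OTHER = "OTHER"


class Ticket(BaseModel):
    title: str
    description: str
    customer_email: str


class TicketClassification(BaseModel):
    category: TicketCategory
    reasoning: str


# A well-formed payload parses into a Ticket instance.
ticket = Ticket.model_validate({
    "title": "Can't access my account",
    "description": "Login fails with an error",
    "customer_email": "user@example.com",
})

# A payload missing required fields raises ValidationError;
# FastAPI turns this into a 422 response automatically.
try:
    Ticket.model_validate({"title": "incomplete"})
    valid = True
except ValidationError:
    valid = False
```

Note that `TicketCategory` subclasses `str`, so plain strings like `"BUG"` coerce cleanly into enum members during validation.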
### 2. llm.py

This file will contain the logic that handles the LLM. In a production app, this will likely be a much bigger module with classes, routing, etc. For now, we'll just have a function that handles an OpenAI call.

```python
import os

from dotenv import load_dotenv
from lmnr import observe
from openai import OpenAI

from schemas import Ticket, TicketCategory, TicketClassification

load_dotenv(override=True)

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


@observe()
def model_classify_ticket(ticket: Ticket) -> TicketClassification:
    response = client.beta.chat.completions.parse(
        model="gpt-4.1-nano",
        response_format=TicketClassification,
        messages=[
            {
                "role": "system",
                "content": """You are a support ticket classifier.
                Classify the ticket that a user sends you""",
            },
            {
                "role": "user",
                "content": f"""Title: {ticket.title}
                Description: {ticket.description}
                Customer Email: {ticket.customer_email}""",
            },
        ],
    )
    # .parsed is None if the output could not be validated against the
    # schema; fall back to OTHER and keep the raw text as the reasoning.
    return response.choices[0].message.parsed or TicketClassification(
        category=TicketCategory.OTHER,
        reasoning=response.choices[0].message.content,
    )
```
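One detail worth noting: `message.parsed` is `None` whenever the model's output could not be validated against `TicketClassification`, so the `or` expression falls back to a catch-all result. The same pattern in isolation (a simplified sketch using plain dicts instead of the Pydantic model):

```python
def classify_with_fallback(parsed, raw_content):
    # Stands in for the return expression in model_classify_ticket:
    # use the structured result when parsing succeeded, otherwise wrap
    # the raw model text under the catch-all OTHER category.
    return parsed or {"category": "OTHER", "reasoning": raw_content}
```

This keeps the endpoint's response shape stable even when structured parsing fails.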
### 3. main.py

This file will contain the logic that handles the FastAPI app. Laminar must be initialized once at the entry point of the application. For FastAPI, this is typically `main.py`, right before creating `app`.

```python
from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException
from lmnr import Laminar

from llm import model_classify_ticket
from schemas import Ticket, TicketClassification

load_dotenv(override=True)

# Laminar must be initialized once, preferably at the entry point of the application
Laminar.initialize()

app = FastAPI()


@app.post("/api/v1/tickets/classify", response_model=TicketClassification)
async def classify_ticket(ticket: Ticket):
    try:
        classification = model_classify_ticket(ticket)
        print(classification)
        return classification
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```
## Running the Application

Start the FastAPI server. Because `llm.py` and `schemas.py` use flat imports (`from schemas import ...`), pass `--app-dir src` so the `src` directory is on the import path:

```bash
uvicorn main:app --app-dir src --reload --port 8011
```
## Testing the API

You can test the API using curl:

```bash
curl --location 'localhost:8011/api/v1/tickets/classify' \
--header 'Content-Type: application/json' \
--data-raw '{
    "title": "Can'\''t access my account",
    "description": "I'\''ve been trying to log in for the past hour but keep getting an error message",
    "customer_email": "user@example.com"
}'
```
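The endpoint responds with a `TicketClassification`. For the request above it might look like the following (illustrative only: the category and reasoning come from the model, so the exact text will vary):

```json
{
  "category": "BUG",
  "reasoning": "The customer reports a recurring login error, which points to a defect rather than a refund request or a general question."
}
```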
## Viewing Traces
After making a request, you can view the traces in your Laminar dashboard at https://www.lmnr.ai. The trace will show:
- The ticket classification function execution
- The OpenAI API call
## Key Features Demonstrated

- Automatic Tracing: Laminar automatically traces OpenAI calls and functions decorated with `@observe`
- Token Usage: Laminar automatically calculates the tokens used for each OpenAI call
- Cost Estimation: Laminar automatically estimates the cost of each OpenAI call
## Troubleshooting

If you encounter issues:

- Check that your API keys are correctly set in the `.env` file and that it is loaded correctly
- Verify that Laminar is properly initialized
- Ensure all dependencies are installed
- Review the FastAPI logs for any application errors