POST /v1/agent/run
curl --request POST \
  --url https://api.lmnr.ai/v1/agent/run \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "prompt": "<string>",
  "spanContext": {
    "traceId": "<string>",
    "spanId": "<string>",
    "isRemote": true
  },
  "modelProvider": "anthropic",
  "model": "<string>",
  "stream": true,
  "enableThinking": true
}'
{
  "result": {
    "isDone": true,
    "content": "<string>",
    "error": "<string>"
  }
}

Description

Run the Laminar agent with the given prompt. The response can be streamed or returned as a single response.
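As a rough sketch of a non-streaming call from Python, the request above can be reproduced with the standard library alone. The endpoint URL, header names, and body fields come from this page; the `run_agent` helper and the `LMNR_API_KEY` environment variable are illustrative, not part of the API.

```python
import json
import urllib.request

AGENT_RUN_URL = "https://api.lmnr.ai/v1/agent/run"


def build_payload(prompt: str,
                  model_provider: str = "anthropic",
                  stream: bool = False,
                  enable_thinking: bool = True) -> dict:
    """Assemble the JSON body for /v1/agent/run using the documented fields."""
    return {
        "prompt": prompt,
        "modelProvider": model_provider,
        "stream": stream,
        "enableThinking": enable_thinking,
    }


def run_agent(prompt: str, api_key: str) -> dict:
    """POST the prompt to the agent endpoint and return the 'result' object."""
    req = urllib.request.Request(
        AGENT_RUN_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]
```

With `stream` left at its default of `false`, the call blocks until the agent finishes and the `result` object (with `isDone`, `content`, and `error`) is returned in one response.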

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
prompt
string
required

The instructions to be used for the agent run.

spanContext
object

Laminar OpenTelemetry span context. Useful to embed agent spans in a larger trace.
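OpenTelemetry SDKs expose trace and span IDs as integers, while this endpoint takes them as strings. A small sketch of the conversion, assuming the usual W3C text encoding (32 lowercase hex characters for the trace ID, 16 for the span ID) is what the API expects; the helper name is illustrative:

```python
def span_context_payload(trace_id: int, span_id: int,
                         is_remote: bool = True) -> dict:
    """Encode OpenTelemetry integer IDs as zero-padded lowercase hex strings,
    shaped like the spanContext object documented for /v1/agent/run."""
    return {
        "traceId": format(trace_id, "032x"),  # 128-bit trace ID -> 32 hex chars
        "spanId": format(span_id, "016x"),    # 64-bit span ID  -> 16 hex chars
        "isRemote": is_remote,
    }
```

In an instrumented service, the integer IDs would typically come from the current span's context (e.g. `trace.get_current_span().get_span_context()` in the OpenTelemetry Python API), so the agent's spans nest under the enclosing trace.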

modelProvider
enum<string>

LLM provider. Defaults to anthropic.

Available options: anthropic, bedrock

model
string

LLM model name. Must match the provider API model name, e.g. 'claude-3-7-sonnet-latest'.

stream
boolean

If true, the response is streamed, including the intermediate 'thinking' steps. If false, the response is returned as a single message. Defaults to false.

enableThinking
boolean

Whether to enable thinking on the underlying LLM; passed through as the provider's thinking parameter. Defaults to true.

Response

200
application/json
Agent run result
result
object
required

The result of the agent run.