Index SDK Reference

Laminar provides SDKs for JavaScript/TypeScript and Python. The Index SDK is the SDK for calling the Index API. All Index calls made through the SDKs can be traced as described in the Index tracing docs.

The SDKs always call the streaming API, regardless of the stream parameter in the request, because the non-streaming API is subject to a timeout limitation. When stream is false, the SDK collapses the streaming response into a single non-streaming response.
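Conceptually, the collapse from a stream of chunks into a single result can be sketched as follows. This is a simplified illustration using the chunk shapes from the Response section, not the SDKs' actual internals:

```typescript
// Simplified sketch: collapse a stream of agent chunks into one result.
// The chunk shapes mirror the Response section of this reference; this is
// not the actual SDK implementation.
type ActionResult = { isDone: boolean; content?: string | null; error?: string | null };
type AgentOutput = { result: ActionResult; agentState?: string | null };
type Chunk =
  | { chunkType: "step"; summary: string }
  | { chunkType: "finalOutput"; content: AgentOutput }
  | { chunkType: "error"; error: string };

function collapse(chunks: Chunk[]): AgentOutput {
  for (const chunk of chunks) {
    // Intermediate "step" chunks are simply dropped in non-streaming mode.
    if (chunk.chunkType === "finalOutput") return chunk.content;
    if (chunk.chunkType === "error") throw new Error(chunk.error);
  }
  throw new Error("stream ended without a final output chunk");
}
```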

Agent Run

Request

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt | string | Yes | - | The prompt text to send to the agent |
| stream | boolean | No | false | Whether to stream the response from the agent |
| parentSpanContext / parent_span_context | string | No | - | Stringified Laminar span context for tracing |
| modelProvider / model_provider | Enum | No | - | The model provider to use (anthropic, gemini, openai) |
| model | string | No | - | The specific model to use (must set modelProvider) |
| enableThinking / enable_thinking | boolean | No | false | Whether to enable thinking (passed to LLM provider) |
| timeout | integer | No | - | Soft timeout in seconds for the agent run |
| maxSteps / max_steps | integer | No | 100 | Maximum number of steps the agent will take |
| cdpUrl / cdp_url | string | No | - | URL to a running browser with CDP |
| thinkingTokenBudget / thinking_token_budget | integer | No | - | Maximum tokens for agent thinking |
| startUrl / start_url | string | No | - | Starting URL for the agent to visit |
| agentState / agent_state | string | No | - | Stringified agent state from previous run |
| storageState / storage_state | string | No | - | Stringified browser storage state |
| returnScreenshots / return_screenshots | boolean | No | false | Whether to return screenshots with each step |
| returnAgentState / return_agent_state | boolean | No | false | Whether to return agent state after run |
| returnStorageState / return_storage_state | boolean | No | false | Whether to return browser storage state |

prompt

The prompt to the agent. Required.

| Type | Required |
|---|---|
| string | Yes |
import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

stream

Whether to stream the response from the agent. This affects the response shape.

| Type | Required | Default |
|---|---|---|
| boolean | No | false |
import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    stream: true,
  });
  for await (const chunk of result) {
    console.log(chunk);
  }
};

main().then(() => {
  console.log('Done');
});

parentSpanContext / parent_span_context

The stringified Laminar span context, used to place the agent run's spans within an existing trace.

| Type | Required |
|---|---|
| string | No |
import { LaminarClient, Laminar } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    // This will serialize the current span context, assuming there is an active
    // span.
    parentSpanContext: Laminar.serializeSpanContext(),
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

modelProvider / model_provider

The model provider to use for the agent.

| Type | Required |
|---|---|
| Enum['anthropic', 'gemini', 'openai'] | No |

If you set model, you must also set modelProvider.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    modelProvider: 'anthropic',
    model: 'claude-3-7-sonnet-20250219',
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

model

The model to use for the agent. Must match an available model (with vision) in the modelProvider's API.

See models for available models.

| Type | Required |
|---|---|
| string | No |

If you set model, you must also set modelProvider.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    modelProvider: 'anthropic',
    model: 'claude-3-7-sonnet-20250219',
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

enableThinking / enable_thinking

Whether to enable thinking. This parameter is passed through to the underlying LLM provider and is currently only used for Anthropic.

| Type | Required | Default |
|---|---|---|
| boolean | No | false |

For OpenAI, reasoning_effort currently defaults to low regardless of this parameter. For Gemini models, thinking is always enabled.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    modelProvider: 'anthropic',
    model: 'claude-3-7-sonnet-20250219',
    enableThinking: true,
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

timeout

Timeout in seconds. This is a soft timeout: after it elapses, the agent still runs its current step to completion. Agent initialization is not counted toward the timeout.

| Type | Required |
|---|---|
| integer | No |

In non-streaming mode, an agent timeout causes an error to be thrown.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    timeout: 600, // 10 minutes
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

maxSteps / max_steps

The maximum number of steps the agent will take.

| Type | Required | Default |
|---|---|---|
| integer | No | None (currently defaults to 100 in the backend) |
import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    maxSteps: 10,
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

cdpUrl / cdp_url

If you have a running browser with CDP, you can pass the URL to the browser here. By default, Laminar will start and manage its own browser instance.

| Type | Required |
|---|---|
| string | No |
import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    cdpUrl: 'wss://localhost:9222',
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

thinkingTokenBudget / thinking_token_budget

The maximum number of tokens the agent will use for thinking. Passed to the underlying LLM provider.

Currently, there is a heuristic that converts the token budget to a reasoning effort parameter for OpenAI.

| Type | Required |
|---|---|
| integer | No |
import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    thinkingTokenBudget: 2048,
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});
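The exact budget-to-effort mapping is not documented. As a purely hypothetical illustration of what such a heuristic could look like (the thresholds below are invented for illustration only):

```typescript
// Hypothetical illustration only: the real thresholds the backend uses to
// map a thinking token budget to an OpenAI reasoning effort are not
// documented here.
type ReasoningEffort = "low" | "medium" | "high";

function effortFromBudget(thinkingTokenBudget: number): ReasoningEffort {
  if (thinkingTokenBudget < 1024) return "low";
  if (thinkingTokenBudget < 4096) return "medium";
  return "high";
}
```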

startUrl / start_url

The URL to start the agent on. If not specified, the agent infers the URL from the prompt.

| Type | Required |
|---|---|
| string | No |
import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Summarize this page.',
    startUrl: 'https://www.lmnr.ai',
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

agentState / agent_state

The stringified agent state as returned by a previous agent run. This is useful for continuing the agent run in a subsequent call.

| Type | Required |
|---|---|
| string | No |

Agent state is a very large object.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    agentState: '...',
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

storageState / storage_state

The stringified browser storage state (auth, cookies, etc.) as returned by a previous agent run. This is useful for continuing the agent run in a subsequent call.

| Type | Required |
|---|---|
| string | No |

This may be a very large object.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    storageState: '...',
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

returnScreenshots / return_screenshots

Whether to return a screenshot of the page with each step.

| Type | Required | Default |
|---|---|---|
| boolean | No | false |

This has no effect on non-streaming runs.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    stream: true,
    returnScreenshots: true,
  });
  for await (const chunk of result) {
    console.log(chunk);
  }
};

main().then(() => {
  console.log('Done');
});

returnAgentState / return_agent_state

Whether to return the agent state after the run. This is useful for continuing the agent run in a subsequent call.

| Type | Required | Default |
|---|---|---|
| boolean | No | false |

Agent state is a very large object.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    returnAgentState: true,
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

returnStorageState / return_storage_state

Whether to return the browser storage state (auth, cookies, etc.) after the run. This is useful for continuing the agent run in a subsequent call.

| Type | Required | Default |
|---|---|---|
| boolean | No | false |

Storage state may be a very large object.

import { LaminarClient } from '@lmnr-ai/lmnr';

const client = new LaminarClient({
  projectApiKey: 'lmnr-project-api-key',
});

const main = async () => {
  const result = await client.agent.run({
    prompt: 'Go to www.lmnr.ai and summarize their homepage.',
    returnStorageState: true,
  });
  console.log(result);
};

main().then(() => {
  console.log('Done');
});

Response

Non-streaming (AgentOutput)

type AgentOutput = {
    result: ActionResult;
    agentState?: string | null;
    storageState?: string | null;
};

type ActionResult = {
    isDone: boolean;
    content?: string | null;
    error?: string | null;
};
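When consuming the non-streaming response, check result.error before using result.content. A minimal sketch with a hypothetical unwrap helper (not part of the SDK):

```typescript
type ActionResult = { isDone: boolean; content?: string | null; error?: string | null };
type AgentOutput = { result: ActionResult; agentState?: string | null; storageState?: string | null };

// Hypothetical helper: return the final content, or throw if the agent
// reported an error in its result.
function unwrap(output: AgentOutput): string {
  if (output.result.error) {
    throw new Error(`Agent run failed: ${output.result.error}`);
  }
  return output.result.content ?? "";
}
```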

Streaming

type RunAgentResponseChunk = RunAgentStepChunk
  | RunAgentFinalChunk | RunAgentErrorChunk | RunAgentTimeoutChunk;

type RunAgentStepChunk = {
    chunkType: "step";
    messageId: string; // UUID
    actionResult: ActionResult;
    summary: string;
    screenshot?: string | null;
};

type ActionResult = {
    isDone: boolean;
    content?: string | null;
    error?: string | null;
};

type RunAgentFinalChunk = {
    chunkType: "finalOutput";
    messageId: string; // UUID
    content: AgentOutput;
};

type AgentOutput = {
    result: ActionResult;
    agentState?: string | null;
    storageState?: string | null;
};

type RunAgentErrorChunk = {
    chunkType: "error";
    messageId: string; // UUID
    error: string;
};

type RunAgentTimeoutChunk = {
    chunkType: "timeout";
    messageId: string; // UUID
    actionResult: ActionResult;
    summary: string;
    screenshot?: string | null;
};
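Since RunAgentResponseChunk is a discriminated union on chunkType, a switch statement handles each case exhaustively. A sketch modeling only the fields needed for dispatch:

```typescript
// Sketch of dispatching on chunkType when consuming the stream; only the
// fields needed for this example are modeled.
type Chunk =
  | { chunkType: "step"; summary: string }
  | { chunkType: "finalOutput"; content: { result: { isDone: boolean; content?: string | null } } }
  | { chunkType: "error"; error: string }
  | { chunkType: "timeout"; summary: string };

function describe(chunk: Chunk): string {
  switch (chunk.chunkType) {
    case "step":
      return `step: ${chunk.summary}`;
    case "finalOutput":
      return `final: ${chunk.content.result.content ?? ""}`;
    case "error":
      return `error: ${chunk.error}`;
    case "timeout":
      return `timeout after step: ${chunk.summary}`;
  }
}
```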