This is the first tutorial (1/2) in a series on developing and deploying your pipeline on Laminar. It shows how you can fully develop and orchestrate an LLM pipeline or agent on Laminar, and then host it entirely on Laminar by running just a few commands.

Intro

In this tutorial, we will build a pipeline that calls your own code.

The demo application will:

  1. Take a user’s question and a website URL as input.
  2. Read and parse the website using Beautiful Soup.
  3. Call an LLM with the input question and the content of the website.
  4. Return the result to the user.
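
These steps map naturally to plain Python. As a rough sketch of what we are about to build (illustrative only, not part of the project code; answer_with_llm is a hypothetical stand-in for the LLM node we will configure in the builder):

# conceptual sketch of the pipeline; illustrative, not part of the project code
import requests
from bs4 import BeautifulSoup


def answer_with_llm(prompt: str) -> str:
    # hypothetical stand-in: in the real pipeline, Laminar's LLM node does this
    raise NotImplementedError


def website_qa(question: str, url: str) -> str:
    html = requests.get(url).text  # fetch the website
    content = BeautifulSoup(html, "html.parser").get_text()  # parse it
    prompt = f"User's question:\n{question}\n\nWebsite content:\n{content}"
    return answer_with_llm(prompt)  # answer grounded in the page content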

Prerequisites

  • Python - Install Python 3.9+
  • poetry - Install poetry to manage the dependencies in your project (Read more)

Create a project

When you first register, one workspace is created for you by default. Workspaces contain projects. Click “New project” to create a project and name it “my-first-project”.

Click “Create”. Once the project is created, you’ll be redirected to the “Pipelines” page.

Create a pipeline

Now that you have created a project, let’s create a Laminar pipeline.

First, click “New pipeline”, then enter the name “website_qa”, select “Blank” from the templates, and click “Create”.

Once the pipeline is created, you’ll be redirected to the Laminar pipeline builder.

Build the pipeline

Place a Function node and define the signature of your function.

To do that, add url to “Parameter names” in the node’s settings and name the node parse_website. This function will take a URL and return the content of the website.

Then, place an LLM node and prompt it with the following prompt:

You are a question answering machine that responds to users' questions about the contents of a website. You must ground your response in the content of the website and try not to add your own knowledge. If the provided content does not contain the answer, acknowledge it.

User's question:
{{question}}

Website content:
{{parsed_content}}

The mustache variables will create input handles on the left of the node.

Finally, place an additional Input node, and rename the Input nodes to question and url.

Connect the url Input node to the parse_website node, the question Input node to the question handle on the LLM node, the output of the parse_website node to the parsed_content handle on the LLM node, and the LLM node to the Output node.

The final flow will look like this:

Implementing Function node

Now, we need to actually add code for the Function node called parse_website.

You’ll need an IDE of your choice and a terminal.

Run the following commands in your terminal:

mkdir website_qa
cd website_qa
poetry init --no-interaction
poetry add lmnr requests bs4
poetry shell

We’ve created a project and installed the lmnr library in it, along with the requests and bs4 libraries for fetching and parsing websites. We also activated our poetry environment so that we can call lmnr CLI commands later.
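
After these commands, the dependencies section of the generated pyproject.toml should look roughly like this (the exact version constraints depend on when you run poetry add; the ones below are illustrative):

# pyproject.toml (excerpt)
[tool.poetry.dependencies]
python = "^3.9"
lmnr = "^0.2.0"      # illustrative version constraint
requests = "^2.31.0" # illustrative version constraint
bs4 = "^0.0.2"       # illustrative version constraint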

Now create a pipeline.py file in your IDE next to the generated pyproject.toml and poetry.lock files. Also, create a src folder with __init__.py and request.py files in it.

Your directory structure should look like this:

website_qa/
├── .env
├── pipeline.py
├── pyproject.toml
├── poetry.lock
└── src/
    ├── __init__.py
    └── request.py

Let’s define a function in src/request.py that makes the request and parses the response.

# src/request.py
import requests
from bs4 import BeautifulSoup


def parse_website(url: str) -> str:
    # Fetch the raw HTML of the page
    html_content = requests.get(url).text
    # Strip the markup and keep only the visible text
    soup = BeautifulSoup(html_content, 'html.parser')
    text_content = soup.get_text()

    return text_content
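
This minimal version is fine for the tutorial. If you want the node to fail fast on unreachable pages instead of passing an error page to the LLM, a slightly more defensive sketch could look like this (same name and signature, so the rest of the tutorial is unchanged):

# src/request.py - a more defensive variant (optional)
import requests
from bs4 import BeautifulSoup


def parse_website(url: str) -> str:
    # Bound the request time and raise on non-2xx responses
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, 'html.parser')
    # A newline separator keeps text from adjacent tags from running together
    return soup.get_text(separator='\n')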

You can write and store code of any complexity in src/. Just make sure it’s properly structured according to Python’s module import rules. Place __init__.py files in all subdirectories.
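
For example, if you later split the parsing logic into its own subpackage, a layout like this would work (the parsers package is hypothetical):

src/
├── __init__.py
├── request.py
└── parsers/
    ├── __init__.py
    └── html.py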

Now, let’s go to pipeline.py and register the function for the parse_website node.

pipeline.py must contain a variable of type Pipeline, on which you register the functions to be used by your nodes.

# pipeline.py
from lmnr import Pipeline

from src.request import parse_website


app = Pipeline()


# The name passed to app.func must match the Function node's name in the builder
@app.func("parse_website")
def parse_url(url: str) -> str:
    return parse_website(url)
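
Before connecting anything to Laminar, you can sanity-check the function with a quick local run. This is plain Python and entirely optional; smoke_test.py is a hypothetical file, not something Laminar requires:

# smoke_test.py - optional local check; run with: poetry run python smoke_test.py
from src.request import parse_website

if __name__ == "__main__":
    text = parse_website("https://docs.lmnr.ai/tutorials/getting-started")
    print(text[:300])  # print the first 300 characters of the extracted text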

Connecting your browser to your local code

At this point, we need to connect the local code to our pipeline.

We will need two things:

  • A project API key. These are shared among all users within the project and are used to authenticate requests to the backend.
  • A dev session ID. This is stored in your browser and is sent to Laminar backend, so it knows where to call for local functions.

Get the project API key and dev session ID from the project settings page. You can find the settings in the navbar in the UI.

Create a .env file beside pipeline.py, pyproject.toml, and poetry.lock. Enter the following there:

# .env
LMNR_PROJECT_API_KEY={your project api key}
LMNR_DEV_SESSION_ID={your dev session id}

Then run lmnr dev to establish a connection with the Laminar server.

You should see the following output:

========================================
Dev Session ID:
{DEV_SESSION_ID}
========================================

Registered functions:
- parse_website

========================================

You don’t need to reconnect the debugger when you change your code. In addition, this command will try its best to persist the session.

Now we’re ready to test the flow.

Testing the flow

Let’s test the end-to-end flow with a couple of simple inputs.

Fill in the url input on the left with https://docs.lmnr.ai/tutorials/getting-started.

Ask a question about this page in the question input on the left, e.g. “What is a Laminar workspace?”.

Then press Cmd + Enter (or Ctrl + Enter), or click the run button.

You’ll see the output of the pipeline and the full trace, similar to the picture. You can play around with the pipeline and, when you’re ready to deploy it, move on to the next tutorial.