Unify node

The Unify node executes a prompt and optional chat messages against a selected model using Unify’s chat completions API.

Unify is an LLM routing service that gives you access to a broad range of LLMs hosted by different providers. You can select the desired model and any of the providers available for that model to execute your requests. A model can be chosen by checking Unify’s model benchmarks.

A cool feature of Unify is dynamic routing. Instead of selecting a provider, you can select a metric to optimize for, such as lowest-ttft (lowest time to first token), and Unify will select the right provider for you.

On top of that, the only API key you need is UNIFY_API_KEY. Generate an API key after registering at Unify and then add it on Laminar’s env variables page. After that, you can access any model simply by selecting it on the node. There is no need to register and issue API keys at various providers each time you want to switch the model or provider.

How it works

The node follows the standard practice of sending messages to an LLM’s chat completion (or similar) API, where the first message is the (optional) "system" message and the remaining messages alternate between the "user" and "assistant" roles.

The message with the "system" role comes from the “Prompt” field on the node.

To send "user" and "assistant" messages, you can enable “Chat messages” and connect a node whose output is of the “ChatMessageList” type. A “ChatMessageList” is expected to contain messages with "user" and "assistant" roles. (Read more)
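To make the assembly concrete, here is a minimal sketch of how the final message list could be put together. The `build_messages` helper is hypothetical (it is not part of Laminar or Unify); it only illustrates the convention described above: the prompt becomes the "system" message, and the connected ChatMessageList supplies the "user"/"assistant" turns.

```python
# Hypothetical sketch: assemble the message list the way the node's
# convention describes. Not an actual Laminar or Unify API.
def build_messages(prompt, chat_messages=None):
    # The "Prompt" field becomes the leading "system" message.
    messages = [{"role": "system", "content": prompt}]
    for msg in chat_messages or []:
        # ChatMessageList entries are expected to use these two roles.
        assert msg["role"] in ("user", "assistant")
        messages.append(msg)
    return messages
```

With a prompt of "You are a florist." and a single user message, this yields a two-element list: the system message followed by the user turn.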

Unify node with Chat messages

Prompt

The prompt can be templated by adding dynamic input variables inside double curly braces.

Any node whose output is of string type can be connected to a template variable’s handle.

For example, the prompt can be "Name a random flower" (without any dynamic inputs) or "Write a poem about {{subject}}, use the following words in a poem: {{words}}" (with subject and words as dynamic inputs).
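The substitution itself can be sketched in a few lines. The `render_prompt` helper below is illustrative only (Laminar performs this internally); it replaces each `{{name}}` placeholder with the string value wired into the matching handle.

```python
import re

# Illustrative sketch of {{variable}} substitution in a prompt template.
def render_prompt(template, **variables):
    # Replace each {{name}} (whitespace inside the braces tolerated)
    # with the corresponding string input.
    def substitute(match):
        return str(variables[match.group(1)])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)
```

For instance, rendering the second example prompt with `subject="the sea"` and `words="salt, wind"` fills both placeholders.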

Model

Unify offers a really wide range of models, and you can select one by setting the following configuration options right on the node:

  • Model name: Select any model from the list, or enter one manually after selecting -type manually-.
  • Provider: Select any provider from the list, or enter one manually after selecting -type manually-. A dynamically routed provider can also be selected here.
  • Uploaded by: Enter the uploader of the model, if applicable for the provider.
  • Thresholds: Add thresholds for dynamic routing by entering a metric name and its numeric value.
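Under the hood, these settings correspond to Unify’s endpoint strings of the form "model@provider". The sketch below assumes that convention (the exact strings are an assumption based on Unify’s documentation, and threshold syntax is omitted since it is configured on the node): passing a routing metric such as lowest-ttft in place of a provider asks Unify to pick the provider dynamically.

```python
# Assumed Unify convention: endpoints are addressed as "model@provider".
def unify_endpoint(model: str, provider_or_metric: str) -> str:
    """Compose a Unify endpoint string.

    provider_or_metric may be a concrete provider (e.g. "fireworks-ai")
    or a routing metric such as "lowest-ttft" for dynamic routing.
    """
    return f"{model}@{provider_or_metric}"

# Fixed provider vs. dynamic routing on time to first token:
fixed = unify_endpoint("llama-3-8b-chat", "fireworks-ai")
routed = unify_endpoint("llama-3-8b-chat", "lowest-ttft")
```

The model and provider names here are examples, not a guarantee of what is available; the node’s dropdowns reflect the current Unify catalog.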