Basic correctness evaluation

In this example our executor function calls an LLM to get the capital of a country. We then evaluate the correctness of the prediction by checking for exact match with the target capital.

1. Define an executor function

The executor function calls OpenAI to get the capital of a country. The prompt asks the model to name only the city and nothing else. In a real scenario, you will likely want to use structured output to get just the city name.

import OpenAI from 'openai';

const openai = new OpenAI({apiKey: process.env.OPENAI_API_KEY});

const getCapital = async (
    {country}: {country: string}
): Promise<string> => {
    const response = await openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [
            {
                role: 'system',
                content: 'You are a helpful assistant.'
            }, {
                role: 'user',
                content: `What is the capital of ${country}?` +
                ' Just name the city and nothing else'
            }
        ],
    });
    return response.choices[0].message.content ?? ''
}

2. Define an evaluator function

The evaluator function checks for exact match and returns 1 if the executor output matches the target, and 0 otherwise.

const evaluator = async (
    output: string,
    target: { capital: string }
): Promise<number> => output === target.capital ? 1 : 0
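Exact string match is brittle to whitespace and casing. If you need a slightly more forgiving check, a normalized variant might look like the sketch below (lenientEvaluator is an illustrative name, not part of the Laminar SDK):

```typescript
// Illustrative variant: normalize whitespace and case before comparing.
const lenientEvaluator = async (
    output: string,
    target: { capital: string }
): Promise<number> =>
    output.trim().toLowerCase() === target.capital.trim().toLowerCase() ? 1 : 0
```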

3. Define data and run the evaluation

my-eval.ts
import { evaluate } from '@lmnr-ai/lmnr';

const evaluationData = [
    { data: { country: 'Canada' }, target: { capital: 'Ottawa' } },
    { data: { country: 'Germany' }, target: { capital: 'Berlin' } },
    { data: { country: 'Tanzania' }, target: { capital: 'Dodoma' } },
]

evaluate({
    data: evaluationData,
    executor: async (data) => await getCapital(data),
    evaluators: { checkCapitalCorrectness: evaluator },
    config: {
        projectApiKey: process.env.LMNR_PROJECT_API_KEY
    }
})

And then run either ts-node my-eval.ts or npx lmnr eval my-eval.ts.

Running evaluations on a previously collected dataset

It is quite common to run evaluations on datasets that were previously collected and may contain LLM inputs, LLM outputs, and additional custom data, e.g. human labels.

The interesting bit here is that you have to define the executor function to extract the LLM output from the dataset.

Let’s assume we have a dataset with the following structure:

[
    {
        "data": {
            "country": "Germany",
            "llm_output": "Berlin",
            "llm_input": "What is the capital of Germany?",
            "human_label": "correct"
        },
        "target": {
            "capital": "Berlin"
        }
    },
    {
        "data": {
            "country": "Canada",
            "llm_output": "Ottawa",
            "llm_input": "What is the capital of Canada?",
            "human_label": "correct"
        },
        "target": {
            "capital": "Ottawa"
        }
    },
    {
        "data": {
            "country": "Kazakhstan",
            "llm_output": "Nur-Sultan",
            "llm_input": "What is the capital of Kazakhstan?",
            "human_label": "incorrect"
        },
        "target": {
            "capital": "Astana"
        }
    }
]

* It is common for LLMs of roughly the GPT-4 and Claude 3 generation to name the capital of Kazakhstan as “Nur-Sultan” instead of “Astana”, because, for a few years prior to their training data cut-off, the capital was indeed called Nur-Sultan.
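Because this dataset also carries human labels, you can sanity-check the exact-match metric against them before trusting it. The helper below is a sketch (checkAgreement and the Datapoint type are hypothetical names, not part of the Laminar SDK):

```typescript
type Datapoint = {
    data: { llm_output: string; human_label: string };
    target: { capital: string };
};

// Fraction of datapoints where the exact-match verdict agrees with
// the human label ('correct' / 'incorrect').
const checkAgreement = (points: Datapoint[]): number =>
    points.filter(p =>
        (p.data.llm_output === p.target.capital) ===
        (p.data.human_label === 'correct')
    ).length / points.length
```

On the three datapoints above, exact match and the human labels agree on every row, so the agreement is 1.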

1. Define an executor function

Since the dataset already contains the LLM output, we can simply extract it from the dataset instead of calling the LLM.

const getCapital = async (data: Record<string, string>): Promise<string> => {
    return data.llm_output
}

2. Define an evaluator function

The evaluator function checks for exact match and returns 1 if the executor output matches the target, and 0 otherwise.

const evaluator = async (
    output: string,
    target: { capital: string }
): Promise<number> => output === target.capital ? 1 : 0

3. Define data and run the evaluation

my-eval.ts
import { evaluate } from '@lmnr-ai/lmnr';

const evaluationData = [
    // ... your dataset here
]

evaluate({
    data: evaluationData,
    executor: async (data) => await getCapital(data),
    evaluators: { checkCapitalCorrectness: evaluator },
    config: {
        projectApiKey: process.env.LMNR_PROJECT_API_KEY
    }
})

And then run either ts-node my-eval.ts or npx lmnr eval my-eval.ts.

LLM as a judge offline evaluation

In this example, our executor will write short summaries of news articles, and the evaluator will check whether each summary is accurate and grade it from 1 to 5.

1. Prepare your data

The trick here is that the evaluator function needs to see the original article to evaluate the summary. That is why we have to duplicate the article from data into target prior to running the evaluation.

The data may look something like the following:

[
    {
        "data": {
            "article": "Laminar has released a new feature. ...",
        },
        "target": {
            "article": "Laminar has released a new feature. ...",
        }
    }
]
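If your raw dataset only has the article under data, the duplication can be done with a small mapping step before calling evaluate (withTargets and RawPoint are hypothetical names, used here only for illustration):

```typescript
type RawPoint = { data: { article: string } };

// Copy the article from data into target so evaluators can see it.
const withTargets = (points: RawPoint[]) =>
    points.map(p => ({
        data: p.data,
        target: { article: p.data.article },
    }))
```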

2. Define an executor function

An executor function calls OpenAI to summarize a news article. It returns a single string, the summary.

import OpenAI from 'openai';

const openai = new OpenAI({apiKey: process.env.OPENAI_API_KEY});

const getSummary = async (data: {article: string}): Promise<string> => {
    const response = await openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [
            {
                role: "system",
                content: "Summarize the articles that the user sends you"
            }, {
                role: "user",
                content: data.article,
            },
        ],
    });
    return response.choices[0].message.content ?? ''
}

3. Define an evaluator function

An evaluator function grades the summary from 1 to 5. It returns an integer. We’ve simply asked OpenAI to respond in JSON, but you may want to use structured output or BAML instead.

We also ask the LLM to give a comment on the summary. Even though we don’t use it in the evaluation, it may be useful for debugging or further analysis. In addition, LLMs are known to perform better when given a chance to explain their reasoning.

import OpenAI from 'openai';

const openai = new OpenAI({apiKey: process.env.OPENAI_API_KEY});

const gradeSummary = async (
    summary: string,
    target: {article: string}
): Promise<number> => {
    const response = await openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [{
            role: "user",
            content: "Given an article and its summary, grade the " +
                "summary from 1 to 5. Answer in json. For example: " +
                '{"grade": 3, "comment": "Summary is missing key points"}. ' +
                `Article: ${target['article']}. Summary: ${summary}`
        }],
    });
    return JSON.parse(response.choices[0].message.content ?? '')["grade"]
}
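LLM judges occasionally return malformed JSON or out-of-range grades. A defensive parsing step, sketched below with a hypothetical parseGrade helper, clamps the grade to the 1–5 range and falls back to the lowest grade when parsing fails:

```typescript
// Hypothetical helper: parse the judge's reply, clamp the grade to 1-5,
// and fall back to 1 when the reply is not valid JSON or has no grade.
const parseGrade = (raw: string): number => {
    try {
        const grade = Number(JSON.parse(raw).grade)
        if (Number.isNaN(grade)) return 1
        return Math.min(5, Math.max(1, Math.round(grade)))
    } catch {
        return 1
    }
}
```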

4. Run the evaluation

my-eval.ts
import { evaluate } from '@lmnr-ai/lmnr';

const evaluationData = [
    { data: { article: '...' }, target: { article: '...' } },
    { data: { article: '...' }, target: { article: '...' } },
    { data: { article: '...' }, target: { article: '...' } },
]

evaluate({
    data: evaluationData,
    executor: async (data) => await getSummary(data),
    evaluators: { gradeSummary: gradeSummary },
    config: {
        projectApiKey: process.env.LMNR_PROJECT_API_KEY
    }
})

And then run either ts-node my-eval.ts or npx lmnr eval my-eval.ts.

Evaluation with no target

Sometimes you may want to run evaluations on the output of the executor without a target. This can be useful, for example, to check if the output of the executor is in the correct format or if you want to use an LLM as a judge evaluator that generally evaluates the output.

This is as simple as not passing target to your evaluator functions.

function isOutputLongEnough(output: string) {
    return output.length > 100 ? 1 : 0
}

And for your dataset, you can just remove the target field. For example:

[
    { "data": { "article": "..." } },
    { "data": { "article": "..." } },
    { "data": { "article": "..." } },
]
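Another common target-free check is output format. For example, if your executor is supposed to return JSON, an evaluator can score whether the output parses (a sketch; isValidJson is an illustrative name):

```typescript
// Returns 1 when the executor output parses as JSON, 0 otherwise.
function isValidJson(output: string): number {
    try {
        JSON.parse(output)
        return 1
    } catch {
        return 0
    }
}
```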