In this example, our executor function calls an LLM to get the capital of a country.
We then evaluate the correctness of the prediction by checking for an exact match with the target capital.
1. Define an executor function
The executor function calls OpenAI to get the capital of a country.
The prompt also asks the model to name only the city and nothing else. In a real scenario,
you will likely want to use structured output to get the city name only.
```typescript
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const getCapital = async (
  { country }: { country: string }
): Promise<string> => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      {
        role: 'user',
        content: `What is the capital of ${country}?` +
          ' Just name the city and nothing else',
      },
    ],
  });
  return response.choices[0].message.content ?? '';
};
```
```python
import os

from openai import AsyncOpenAI

openai_client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def get_capital(data):
    country = data["country"]
    response = await openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {
                "role": "user",
                "content": f"What is the capital of {country}? "
                "Just name the city and nothing else",
            },
        ],
    )
    return response.choices[0].message.content.strip()
```
2. Define an evaluator function
The evaluator function checks for an exact match: it returns 1 if the executor output
matches the target, and 0 otherwise.
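A minimal exact-match evaluator might look like the following sketch. It assumes each datapoint's target is a dict with a `capital` key; adjust the key to match your own dataset.

```python
def exact_match(output: str, target: dict) -> int:
    # Return 1 if the predicted capital matches the target exactly, 0 otherwise.
    return 1 if output == target["capital"] else 0
```
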
In this example, our executor will write short summaries of news articles,
and the evaluator will check whether each summary is correct and grade it from 1 to 5.
1. Prepare your data
The trick here is that the evaluator function needs to see the original article to evaluate the summary.
That is why we duplicate the article from data into target before running the evaluation.
The data may look something like the following:
```json
[
  {
    "data": {
      "article": "Laminar has released a new feature. ..."
    },
    "target": {
      "article": "Laminar has released a new feature. ..."
    }
  }
]
```
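If your raw dataset is just a list of articles, you can build this duplicated structure with a small helper. This is a sketch assuming the raw data is a list of article strings:

```python
def build_datapoints(articles: list[str]) -> list[dict]:
    # Copy each article into both `data` and `target` so the evaluator
    # can compare the summary against the original text.
    return [
        {"data": {"article": a}, "target": {"article": a}}
        for a in articles
    ]
```
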
2. Define an executor function
An executor function calls OpenAI to summarize a news article. It returns a single string, the summary.
```typescript
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const getSummary = async (
  data: { article: string }
): Promise<string> => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content: 'Summarize the articles that the user sends you',
      },
      {
        role: 'user',
        content: data.article,
      },
    ],
  });
  return response.choices[0].message.content ?? '';
};
```
```python
import os

from openai import AsyncOpenAI

openai_client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def get_summary(data: dict[str, str]) -> str:
    response = await openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Summarize the articles that the user sends you",
            },
            {
                "role": "user",
                "content": data["article"],
            },
        ],
    )
    return response.choices[0].message.content.strip()
```
3. Define an evaluator function
An evaluator function grades the summary from 1 to 5. It returns an integer.
We’ve simply asked OpenAI to respond in JSON, but you may want to use
structured output or BAML instead.
We also ask the LLM to give a comment on the summary. Even though we don’t use it in the evaluation,
it may be useful for debugging or further analysis. In addition, LLMs are known to perform better
when given a chance to explain their reasoning.
```typescript
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const gradeSummary = async (
  summary: string,
  target: { article: string }
): Promise<number> => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'user',
        content:
          'Given an article and its summary, grade the ' +
          'summary from 1 to 5. Answer in json. For example: ' +
          '{"grade": 3, "comment": "Summary is missing key points"} ' +
          `Article: ${target.article}. Summary: ${summary}`,
      },
    ],
  });
  return JSON.parse(response.choices[0].message.content ?? '')['grade'];
};
```
```python
import json

async def grade_summary(summary: str, target: dict[str, str]) -> int:
    response = await openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": "Given an article and its summary, grade the "
                "summary from 1 to 5. Answer in json. For example: "
                '{"grade": 3, "comment": "Summary is missing key points"} '
                f"Article: {target['article']}. Summary: {summary}",
            },
        ],
    )
    return int(
        json.loads(response.choices[0].message.content.strip())["grade"]
    )
```
Sometimes you may want to run evaluations on the output of the executor without a target.
This can be useful, for example, to check whether the output of the executor is in the correct format,
or if you want to use an LLM-as-a-judge evaluator that assesses the output on its own.
This is as simple as not passing target to your evaluator functions.
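For instance, a target-less evaluator might simply check that the executor output parses as JSON. This is a sketch; the only change from the earlier evaluators is that the signature omits `target`:

```python
import json

def is_valid_json(output: str) -> int:
    # Return 1 if the executor output is well-formed JSON, 0 otherwise.
    try:
        json.loads(output)
        return 1
    except json.JSONDecodeError:
        return 0
```
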