Teacher evaluation

Teacher evaluation is a critical step in the distil labs training pipeline that happens before the actual SLM training begins. It serves two main purposes:

Feasibility Check: It validates whether a large language model (LLM) can accurately solve your task. If the teacher model can solve the task, the student model can learn it effectively; if the teacher model cannot solve the task, you have an opportunity to refine your inputs before investing time in full SLM training.

Performance Benchmark: It establishes a performance expectation for your SLM. The accuracy of the teacher LLM is a first approximation of the performance you can expect from your trained SLM.

Initiating teacher evaluation

After uploading your data, you can start teacher evaluation using the API as follows (get your token):

import requests
from pprint import pprint

# Start teacher evaluation using the upload_id from your data upload
response = requests.post(
    f"https://api.distillabs.ai/teacher-evaluations/{upload_id}",
    headers={"Authorization": f"Bearer {token}"},
)

# Store the teacher evaluation ID for checking status later
teacher_evaluation_id = response.json()["id"]
pprint(response.json())

Checking evaluation status and results

You can check the status of your teacher evaluation and retrieve results using:

# Check status and get results
response = requests.get(
    f"https://api.distillabs.ai/teacher-evaluations/{teacher_evaluation_id}/status",
    headers={"Authorization": f"Bearer {token}"},
)
pprint(response.json())
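The evaluation runs asynchronously and may take some time to finish. If you want to wait for it to complete before reading the results, the sketch below polls the status endpoint in a loop; it assumes the status response includes a `status` field with a terminal value such as "completed" or "failed", so inspect the printed payload above to confirm the exact field names and values.

import time

# Poll the status endpoint until the evaluation reaches a terminal state.
# NOTE: the status values ("completed", "failed") are assumptions; check
# the actual payload returned for your evaluation.
while True:
    response = requests.get(
        f"https://api.distillabs.ai/teacher-evaluations/{teacher_evaluation_id}/status",
        headers={"Authorization": f"Bearer {token}"},
    )
    status = response.json().get("status")
    print(f"Current status: {status}")
    if status in ("completed", "failed"):
        break
    time.sleep(60)  # wait a minute before checking again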

In a Jupyter notebook, you can display the results with:

import pandas as pd

print(pd.DataFrame(response.json()["results"]).transpose())

High accuracy in the teacher evaluation indicates that your task is well defined and you can move on to training. When training an SLM for this task, the teacher evaluation serves as the quality benchmark for the trained model.

However, if teacher performance is low, consider:

  1. Revising your task description to be more specific
  2. Improving the quality of your example data
  3. Checking for inconsistencies in your dataset (see the sketch after this list)
  4. Ensuring your task is well-defined and solvable
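As a quick sanity check for point 3, you can look for identical inputs that carry conflicting labels. The sketch below is a minimal example and makes several assumptions: the training examples live in a hypothetical local file named train.jsonl and have hypothetical `text` and `label` columns, so adjust the file name and column names to match your own upload.

import pandas as pd

# Hypothetical file and column names; replace with the fields used in your data.
examples = pd.read_json("train.jsonl", lines=True)

# Count how many distinct labels each unique input received.
counts = examples.groupby("text")["label"].nunique()
conflicts = counts[counts > 1]

print(f"{len(conflicts)} inputs have conflicting labels")
print(conflicts.head())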

Retrieving predictions

For a more in-depth analysis, you can download the teacher's predictions for the individual data points of the test dataset. The download URL points to a JSON file that contains the predictions along with additional information, depending on which task type you selected (i.e. classification or question-answering).

The URL of this file can be found using:

print(response.json()["evaluation_predictions_download_url"])

You can then download this file from the terminal using:

curl -o teacher_evaluation_predictions.json "<DOWNLOAD_URL>"
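Alternatively, you can download the file directly in Python instead of via curl. The minimal sketch below reads the URL from the same status response as above:

# Download the predictions file in Python instead of via curl
download_url = response.json()["evaluation_predictions_download_url"]
file_response = requests.get(download_url)
file_response.raise_for_status()

with open("teacher_evaluation_predictions.json", "wb") as f:
    f.write(file_response.content)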

Note that the file is in JSON Lines format and can be read using:

df = pd.read_json("teacher_evaluation_predictions.json", lines=True)
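With the predictions loaded into a DataFrame, you can drill into individual errors. The sketch below is for a classification task and assumes hypothetical `prediction` and `label` columns; the actual field names depend on your task type, so inspect the columns first.

# Inspect the available fields before relying on specific column names.
print(df.columns)

# Hypothetical column names; adjust to what the file actually contains.
mismatches = df[df["prediction"] != df["label"]]
print(f"{len(mismatches)} of {len(df)} test examples were predicted incorrectly")
print(mismatches.head())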