Teacher evaluation
Teacher evaluation is a critical step in the distil labs training pipeline that happens before SLM training begins. It serves two important purposes:
Feasibility Check: It validates whether a large language model (LLM) can accurately solve your task. If the teacher model can solve the task, the student model will be able to learn it effectively. If the teacher model cannot solve the task, you have an opportunity to refine your inputs before investing time in full SLM training.
Performance Benchmark: It establishes a performance expectation for your SLM. The accuracy of the teacher LLM provides a first approximation of the performance you can expect from your trained SLM.
Initiating teacher evaluation
After uploading your data, you can start teacher evaluation using the API (you will need your API token).
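The snippet below is a minimal sketch using Python's `requests` library. The base URL, the `/teacher-evaluation` endpoint, and the payload fields are assumptions for illustration; substitute the values from your own workspace and the platform's API reference.

```python
import requests

API_BASE = "https://api.distillabs.ai"  # assumed base URL; check the API reference
TOKEN = "YOUR_API_TOKEN"                # the token from your account
headers = {"Authorization": f"Bearer {TOKEN}"}

# Start a teacher evaluation for a previously uploaded dataset.
# The endpoint path and payload fields are illustrative placeholders.
response = requests.post(
    f"{API_BASE}/teacher-evaluation",
    headers=headers,
    json={"project_id": "YOUR_PROJECT_ID"},
)
response.raise_for_status()
job = response.json()
print(job)  # typically includes an evaluation ID you can poll later
```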
Checking evaluation status and results
You can check the status of your teacher evaluation and retrieve the results through the same API.
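A sketch of the polling call, with the same caveats as above (the endpoint path and response field names are assumptions):

```python
import requests

API_BASE = "https://api.distillabs.ai"  # assumed base URL
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Poll the evaluation; replace the ID with the one returned when you started it.
response = requests.get(
    f"{API_BASE}/teacher-evaluation/YOUR_EVALUATION_ID",
    headers=headers,
)
response.raise_for_status()
results = response.json()
print(results.get("status"))   # e.g. "running" or "completed" (assumed field names)
print(results.get("metrics"))  # teacher accuracy once the evaluation has finished
```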
Once the evaluation has completed, you can display the results as a table in a Jupyter notebook.
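A minimal sketch with pandas, assuming the `metrics` dictionary from the previous step:

```python
import pandas as pd

# Flatten the (assumed) metrics dictionary into a one-row table.
# In a notebook, the last expression of a cell is rendered automatically.
pd.DataFrame([results["metrics"]])
```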
High accuracy in the LLM evaluation indicates that the task is well defined and you can move on to training. When training an SLM for this task, the LLM evaluation serves as the quality benchmark for the trained model.
However, if teacher performance is low, consider:
- Revising your task description to be more specific
- Improving the quality of your example data
- Checking for inconsistencies in your dataset
- Ensuring your task is well-defined and solvable
Retrieving predictions
For a more in-depth analysis, you can download the predictions for individual data points of the test dataset. The file contains the predictions along with other information that depends on the task type you selected (classification or question answering).
The URL of this file can be found in the evaluation results retrieved above.
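Assuming the results payload includes a link to the predictions file (the field name `predictions_url` below is an assumption), you can extract it like this:

```python
# "predictions_url" is an assumed field name; check the actual response payload.
predictions_url = results["predictions_url"]
print(predictions_url)
```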
You can then download the file from the terminal.
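For example, with curl (substituting the URL printed in the previous step):

```bash
# Download the predictions file; replace <PREDICTIONS_URL> with the printed URL.
curl -L -o predictions.jsonl "<PREDICTIONS_URL>"
```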
Note that the file is in JSON Lines format, with one JSON object per line.
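A minimal way to read it with Python's standard `json` module, assuming the file was saved as `predictions.jsonl`:

```python
import json

# JSON Lines: one JSON object (one test-set prediction) per line.
with open("predictions.jsonl") as f:
    predictions = [json.loads(line) for line in f]

print(len(predictions))  # number of test data points
print(predictions[0])    # inspect a single prediction
```

Alternatively, pandas can load the same file directly with `pd.read_json("predictions.jsonl", lines=True)`.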