Teacher evaluation
Teacher evaluation is a critical step in the distil labs training pipeline that happens before actual SLM training begins. It serves several important purposes:
Feasibility Check: It validates whether a large language model (LLM) can accurately solve your task. If the teacher model can solve the task, the student model can typically learn it effectively. If the teacher model cannot solve the task, you have an opportunity to refine your inputs before investing time in full SLM training.
Performance Benchmark: It establishes a performance expectation for your SLM. The accuracy of the teacher LLM provides the first approximation of the performance you can expect from your trained SLM.
Initiating teacher evaluation
After uploading your data, start the teacher evaluation:
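A minimal sketch in Python's standard library, assuming a REST-style endpoint. The base URL, route, and authentication scheme below are placeholders, not the real API; substitute the values from the distil labs API reference:

```python
import json
import urllib.request

# Placeholder base URL -- replace with the actual distil labs API host.
API_BASE = "https://api.example.com/v1"

def evaluation_url(model_id: str) -> str:
    """Build the (hypothetical) teacher-evaluation endpoint URL."""
    return f"{API_BASE}/models/{model_id}/teacher-evaluation"

def start_teacher_evaluation(api_key: str, model_id: str) -> dict:
    """POST to the endpoint to kick off a teacher evaluation run."""
    req = urllib.request.Request(
        evaluation_url(model_id),
        method="POST",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```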
Interpreting results
Check the status and results of your teacher evaluation:
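One way to do this programmatically is to poll the evaluation resource until it reaches a terminal state. This is an illustrative sketch: the endpoint path and the `status` field values (`"completed"`, `"failed"`) are assumptions, so check the API reference for the actual names:

```python
import json
import time
import urllib.request

API_BASE = "https://api.example.com/v1"  # placeholder; use your real base URL

def is_terminal(status: str) -> bool:
    """True once the evaluation has finished, successfully or not."""
    return status in ("completed", "failed")  # assumed status values

def get_evaluation(api_key: str, model_id: str) -> dict:
    """GET the (hypothetical) teacher-evaluation resource as parsed JSON."""
    req = urllib.request.Request(
        f"{API_BASE}/models/{model_id}/teacher-evaluation",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def wait_for_evaluation(api_key: str, model_id: str, poll_seconds: int = 30) -> dict:
    """Poll until the evaluation reaches a terminal state, then return it."""
    while True:
        result = get_evaluation(api_key, model_id)
        if is_terminal(result.get("status", "")):
            return result
        time.sleep(poll_seconds)
```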
The evaluation returns multiple scores to help you understand how well the teacher model answers your test questions. See Metrics for details on each metric and how to interpret them.
High accuracy in the LLM evaluation indicates your task is well defined and you can move on to training. When you then train an SLM for this task, the LLM evaluation score serves as the quality benchmark for the trained model.
However, if teacher performance is low, consider:
- Revising your task description to be more specific
- Improving the quality of your example data
- Checking for inconsistencies in your dataset
- Ensuring your task is well-defined and solvable
Retrieving predictions (API only)
For more in-depth analysis, you can use the API to download the teacher's predictions for individual data points in the test dataset. The returned URL links to a JSON Lines file containing the predictions, along with other information that depends on the task type you selected.
Download and read the predictions file:
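For example, the file can be fetched with Python's standard library. The `predictions_url` argument is the URL returned by the API; the exact response field that carries it depends on the API, so treat the name here as illustrative:

```python
import urllib.request

def download_predictions(predictions_url: str, path: str = "predictions.jsonl") -> str:
    """Save the predictions file linked from the API response to a local path.

    `predictions_url` is the URL returned by the evaluation endpoint
    (the exact response field name depends on the API).
    """
    urllib.request.urlretrieve(predictions_url, path)
    return path
```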
The file is in JSON Lines format and can be read using:
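For instance, a generic JSONL reader with Python's standard library (the field names inside each record depend on your task type):

```python
import json

def read_predictions(path: str) -> list[dict]:
    """Parse a JSON Lines file: one JSON object per line."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines
                records.append(json.loads(line))
    return records
```

Each element of the returned list is one test-set prediction, which you can then filter or aggregate for error analysis.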
