Model training
After validating your teacher model’s performance, the next step is to train your small language model (SLM) using distil labs’ knowledge distillation approach.
Understanding knowledge distillation
Knowledge distillation is the core technology behind distil labs’ ability to create high-performing small models with minimal training data. The process works as follows:
- Synthetic Data Generation: The large “teacher” model generates synthetic training data based on your problem definition, task description, and provided examples.
- Synthetic Data Validation: The generated data is validated to ensure the synthetic set is diverse and of high quality.
- Knowledge Transfer: The synthetic data is used to train the smaller “student” model with a loss function aligned with your specific task. This process enables the student model to emulate the teacher’s capabilities while maintaining a much smaller size.
Initiating model training
After completing teacher evaluation and confirming satisfactory performance, you can start the training process using the API (get your token):
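The exact request is shown in the API reference; as a stand-in, the Python sketch below shows the general shape of such a call. The base URL, endpoint path, payload fields, and response field are assumptions for illustration, not the documented distil labs API.

```python
import os
import requests

# Assumed base URL and auth header -- substitute the values from the API reference.
API_BASE = "https://api.distillabs.ai"
HEADERS = {"Authorization": f"Bearer {os.environ['DISTIL_API_TOKEN']}"}

# Start a training job (hypothetical endpoint and payload).
response = requests.post(
    f"{API_BASE}/jobs/training",
    headers=HEADERS,
    json={"teacher_job_id": "YOUR_TEACHER_JOB_ID"},  # placeholder identifier
)
response.raise_for_status()
job_id = response.json()["job_id"]  # assumed response field
print(f"Training job started: {job_id}")
```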
Monitoring training status
The training process typically takes several hours to complete. You can check the current status of your training job:
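For example, a polling sketch along these lines, reusing API_BASE, HEADERS, and job_id from the snippet above (the endpoint and response field are again assumptions):

```python
import requests

# API_BASE, HEADERS, and job_id as defined in the previous sketch.
status_response = requests.get(f"{API_BASE}/jobs/{job_id}/status", headers=HEADERS)
status_response.raise_for_status()
status = status_response.json()["status"]
print(status)
```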
Possible status values include:
- PENDING - Job is waiting to start
- RUNNING - Job is currently running
- SUCCESS - Job has finished successfully
- FAILURE - Job encountered an error
Retrieving evaluation results
When the training is complete (status=SUCCESS), you can retrieve detailed evaluation results to compare the performance of your trained SLM against the teacher model:
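A hypothetical sketch of fetching these results, with the endpoint and response shape assumed rather than taken from the API reference:

```python
import requests

# API_BASE, HEADERS, and job_id as defined in the earlier sketches.
results_response = requests.get(f"{API_BASE}/jobs/{job_id}/evaluation", headers=HEADERS)
results_response.raise_for_status()
evaluation = results_response.json()
```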
Display the results with:
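For instance, assuming the response contains a per-model mapping of metric names to scores (an illustrative shape, not the documented one):

```python
# "models" and the nested metric mapping are assumed field names.
for model_name, metrics in evaluation["models"].items():
    print(model_name)
    for metric, value in metrics.items():
        print(f"  {metric}: {value:.3f}")
```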
Retrieving predictions
For a more in-depth analysis, you can download the predictions for individual data points in the test dataset. These predictions are generated with the fine-tuned student model. The download URL points to a JSON Lines file containing the predictions along with other information, depending on the task type you selected (classification or question-answering).
The URL of this file can be found using:
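For example (hypothetical endpoint and response field, reusing the variables from the sketches above):

```python
import requests

# API_BASE, HEADERS, and job_id as defined in the earlier sketches.
predictions_response = requests.get(f"{API_BASE}/jobs/{job_id}/predictions", headers=HEADERS)
predictions_response.raise_for_status()
predictions_url = predictions_response.json()["url"]  # assumed response field
print(predictions_url)
```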
You can then download this file from the terminal using:
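In place of a terminal command such as curl, an equivalent Python sketch using the predictions_url from the previous step:

```python
import requests

# predictions_url as obtained in the previous step.
download = requests.get(predictions_url)
download.raise_for_status()
with open("predictions.jsonl", "wb") as f:
    f.write(download.content)
```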
Note that the file is in JSON Lines format and can be read using:
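For example, with the standard library (assuming the file was saved as predictions.jsonl, as in the download sketch above):

```python
import json

# Each line of a JSON Lines file is one standalone JSON record.
with open("predictions.jsonl") as f:
    predictions = [json.loads(line) for line in f]

print(len(predictions), "predictions")
print(predictions[0])  # inspect the first record
```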
What makes a training run successful?
- Comparison to Teacher: Your SLM should achieve performance reasonably close to the teacher model (typically within one standard deviation)
- Task Requirements: The absolute performance should meet your specific application needs
If your SLM performance is significantly below the teacher model, consider:
- Increasing the number of training examples
- Adjusting your task description to be more specific
- Modifying your configuration parameters (like increasing training epochs)
- Using a slightly larger student model (both adjustments are sketched below)
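As a sketch of the last two adjustments, a new training request might override these settings; the field names below are illustrative assumptions, not the documented configuration schema:

```python
import os
import requests

API_BASE = "https://api.distillabs.ai"  # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['DISTIL_API_TOKEN']}"}

# Hypothetical retraining request with an adjusted configuration.
retry_response = requests.post(
    f"{API_BASE}/jobs/training",
    headers=HEADERS,
    json={
        "teacher_job_id": "YOUR_TEACHER_JOB_ID",  # placeholder identifier
        "config": {
            "num_train_epochs": 5,  # e.g. more epochs than the default
            "student_model": "LARGER_STUDENT_MODEL_ID",  # placeholder for a bigger base model
        },
    },
)
retry_response.raise_for_status()
```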