Model training with the distil labs platform
The distil labs platform allows anyone to benefit from state-of-the-art methods for model fine-tuning. You don’t need to be a machine learning expert to get a highly performant model customized to your needs within a day.
Overview
In this notebook, we will train a small language model (SLM) with the distil labs platform. We will follow a three-step process and, at the end, download our own SLM for local deployment.
In practice, you will transform a compact “student” model into a domain expert without writing a single training loop yourself; distil labs takes care of all the heavy lifting.
Registration
The first step towards model distillation is creating an account at app.distillabs.ai. Once you sign up, you can use your email/password combination in the authentication section below.
Notebook Setup
Copy over necessary data
distil labs authentication
To begin, we need to authenticate. Use your distil labs login and password to generate a temporary API key that will be used for authentication during this tutorial; the key is valid for 1 hour, so please re-authenticate if a `403 Forbidden` error appears.
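The cell below is a minimal sketch of this step, assuming a hypothetical `/auth/token` endpoint, payload, and response field; substitute the exact names from the distil labs API docs:

```python
import requests

API_BASE = "https://api.distillabs.ai"  # assumed base URL; check the platform docs

# Exchange email/password for a temporary API key
# (hypothetical endpoint, payload, and response field)
resp = requests.post(
    f"{API_BASE}/auth/token",
    json={"email": "you@example.com", "password": "your-password"},
)
resp.raise_for_status()
API_KEY = resp.json()["token"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
```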
Register a new model
The first component of the workflow is registering a new model; this helps us keep track of all our experiments down the line.
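A sketch of the registration call, again with an assumed endpoint (`/models`), payload, and response field; adapt the names to the actual API:

```python
# Register a new model under a memorable name
# (hypothetical endpoint and fields)
resp = requests.post(
    f"{API_BASE}/models",
    headers=HEADERS,
    json={"name": "my-classifier", "task": "classification"},
)
resp.raise_for_status()
model_id = resp.json()["id"]
```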
Inspect our models
Now that the model is registered, we can take a look at all the models in our repository.
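A sketch of listing the models, assuming a `GET /models` endpoint that returns a JSON array:

```python
# List all models registered under this account (hypothetical endpoint)
resp = requests.get(f"{API_BASE}/models", headers=HEADERS)
resp.raise_for_status()
for model in resp.json():
    print(model)
```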
Data Validation
To get started with model training, we need to upload the necessary data components. The details of formatting are discussed in Data Preparation Guidelines for Classification, but if you don’t have a dataset ready, you can follow one of the data preparation notebooks to prepare an example dataset. Each distil labs training relies on:
- Job description that explains the classification task and describes all classes
- Train and test datasets (tens of examples each) that demonstrate our expected inputs and outputs
- (optional) Unstructured dataset with unlabelled data points related to the problem
The data for this example should be stored in the `data_location` directory. Let’s first take a look at the current directory to make sure all files are available. Your current directory should look like:
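A quick way to list the files, assuming the tutorial data lives in a local `data` directory (adjust the path to your setup):

```python
from pathlib import Path

data_location = Path("data")  # adjust to where you stored the tutorial files

# Print every file in the data directory so we can confirm nothing is missing
for path in sorted(data_location.iterdir()):
    print(path.name)
```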
Job Description
A job description explains the classification task in plain English and follows the general structure below:
For this problem, we use the job description stored in `data_location/`. Let’s inspect the `job_description` prepared for our problem:
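For example, assuming the job description is stored as a plain-text file named `job_description.txt` (match the filename to your setup):

```python
# Print the job description (filename is an assumption)
print((data_location / "job_description.txt").read_text())
```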
Train and test data
We need a small training dataset to begin distil labs training and a test dataset that we can use to evaluate the performance of the fine-tuned model. Here, we use the train and test datasets from the `data_location` directory, where each is a JSON-lines file with fewer than 100 (`question`, `answer`) pairs.
Let’s inspect the available datasets to see the format and a few examples.
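A small helper for peeking at JSON-lines files; the `train.jsonl`/`test.jsonl` filenames are assumptions, so match them to the files in your `data_location` directory:

```python
import json

def peek_jsonl(path, n=3):
    """Print the first n records of a JSON-lines file."""
    with open(path) as f:
        for _, line in zip(range(n), f):
            print(json.dumps(json.loads(line), indent=2))

# Filenames are assumptions; use the names in your data_location directory
peek_jsonl(data_location / "train.jsonl")
peek_jsonl(data_location / "test.jsonl")
```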
Unstructured dataset
The unstructured dataset is used to guide the teacher model in generating diverse, domain-specific data. It can be documentation, unlabelled examples, or even industry literature that contains such information. Here, we use the unstructured dataset from the `data_location/` directory, a JSON-lines file with a single column (`context`).
Let’s inspect the available datasets to see the format and a few examples.
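The same helper works here; the `unstructured.jsonl` filename is again an assumption:

```python
# Peek at the unstructured dataset (filename assumed)
peek_jsonl(data_location / "unstructured.jsonl")
```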
Upload and Validate data
We upload all data elements to the distil labs platform and use the data validation API to check that everything is in order for our jobs.
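A sketch of the upload-and-validate flow, with assumed endpoints (`/models/{id}/data`, `/models/{id}/data-validation`) and assumed filenames; the real API may differ:

```python
# Upload each data file, then trigger validation
# (hypothetical endpoints and filenames)
for name in ["job_description.txt", "train.jsonl", "test.jsonl", "unstructured.jsonl"]:
    with open(data_location / name, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/models/{model_id}/data",
            headers=HEADERS,
            files={"file": (name, f)},
        )
    resp.raise_for_status()

# Ask the platform to validate the uploaded data
resp = requests.post(f"{API_BASE}/models/{model_id}/data-validation", headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```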
Teacher evaluation
In the teacher evaluation stage, we will use our test set to validate whether our chosen ‘teacher’ LLM can solve the task well enough.
If a large model can solve a problem, we can then distil the problem-solving ability of the larger model into a small model. The accuracy of the teacher LLM will give us an idea of the performance to expect from our SLM.
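A sketch of starting the teacher evaluation, assuming a generic `jobs` endpoint that accepts a job type; the actual endpoint, type name, and response field may differ:

```python
# Start a teacher-evaluation job (hypothetical endpoint and job type)
resp = requests.post(
    f"{API_BASE}/models/{model_id}/jobs",
    headers=HEADERS,
    json={"type": "teacher-evaluation"},
)
resp.raise_for_status()
teacher_job = resp.json()["job_tag"]  # assumed response field
```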
Check status and results
Run the cell below to check the status and results of the LLM evaluation.
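A status check might look like the following, assuming a `GET /jobs/{tag}` endpoint:

```python
# Check the status and results of the teacher evaluation
resp = requests.get(f"{API_BASE}/jobs/{teacher_job}", headers=HEADERS)
resp.raise_for_status()
print(resp.json())  # expect a status field, plus accuracy once complete
```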
High accuracy on LLM evaluation indicates our task is well defined and we can move on to training. When training an SLM for this task, we can use the LLM evaluation as the quality benchmark for the trained model.
SLM Training
Now that we are satisfied with the LLM evaluation, we will start the distil labs training process where the SLM learns to mimic the LLM’s behavior on your specific task. Once the training is complete, we will review the SLM’s performance against the LLM’s benchmark and decide if the quality meets your requirements.
To kick off the training job, we can use the code snippet below, which starts the training loop and returns the job tag of the initialized training job. In the subsequent steps, the job tag will be used to manage the lifecycle of the training job.
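A sketch of that snippet under the same assumed `jobs` endpoint; the `job_tag` response field is also an assumption:

```python
# Start the SLM training job and keep its tag for later calls
resp = requests.post(
    f"{API_BASE}/models/{model_id}/jobs",
    headers=HEADERS,
    json={"type": "training"},
)
resp.raise_for_status()
job_tag = resp.json()["job_tag"]  # assumed response field
print(job_tag)
```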
Training status and evaluation results
We can analyze the status of the training job using the `jobs` API. The following code snippet displays the current status of the job we started before.
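For example, with the assumed `GET /jobs/{tag}` endpoint:

```python
# Display the current status of the training job
resp = requests.get(f"{API_BASE}/jobs/{job_tag}", headers=HEADERS)
resp.raise_for_status()
print(resp.json()["status"])
```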
When the job is finished (`status=complete`), we can use the `jobs` API again to get the benchmarking results: the accuracy of the LLM and the accuracy of the fine-tuned SLM. We can achieve this using:
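A sketch with an assumed `/results` sub-endpoint and assumed field names for the two accuracies:

```python
# Fetch benchmark results once status == "complete"
# (endpoint and field names assumed)
resp = requests.get(f"{API_BASE}/jobs/{job_tag}/results", headers=HEADERS)
resp.raise_for_status()
results = resp.json()
print("Teacher LLM accuracy:", results["teacher_accuracy"])
print("Fine-tuned SLM accuracy:", results["student_accuracy"])
```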
Interpreting results
Inspecting the classification results, we can compare the accuracy of the small model (1B parameters) to that of the teacher model, which is roughly 70x larger. In most cases, the accuracies should be comparable, indicating successful training.
SLM Ready
You can list all of your models using the cell below. Once the model is fully trained, we can share the model binaries with you, so you can deploy it on your own infrastructure and have full control. The model binaries can be downloaded using the `model` API: download the tarball and extract it into the `model` directory. A trained model can later be deployed for inference; this is explained in the next tutorial: classification_model_deployment.ipynb
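A sketch of the download step, assuming a `/download` endpoint that returns a gzipped tarball:

```python
import tarfile

# Download the model tarball and extract it into ./model
# (download endpoint is an assumption)
resp = requests.get(f"{API_BASE}/models/{model_id}/download", headers=HEADERS)
resp.raise_for_status()
with open("model.tar.gz", "wb") as f:
    f.write(resp.content)

with tarfile.open("model.tar.gz") as tar:
    tar.extractall("model")
```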