Model training with the distil labs platform

The distil labs platform allows anyone to benefit from state-of-the-art methods for model fine-tuning. You don't need to be a machine learning expert to get a highly performant model customized to your needs within a day.

Overview

In this notebook, we will train a small language model (SLM) with the distil labs platform. We will follow a three-step process and, at the end, download our own SLM for local deployment.

In practice, you will transform a compact “student” model into a domain expert—without writing a single training loop yourself. Distil Labs takes care of every heavy-lifting step:

| Stage | What happens under the hood | Why it matters |
| --- | --- | --- |
| Data upload & validation | You submit a job description, tiny train/test CSVs, and (optionally) an unstructured corpus. The platform checks the schema, finds label mistakes, and estimates achievable accuracy. | Catches data bugs before you waste compute. |
| LLM evaluation | A large foundation model (the "teacher") answers your test questions. Distil Labs measures accuracy and shows a pass/fail report. | If the teacher can't solve the task, small models won't either; stop here instead of two hours later. |
| SLM training (synthetic generation + distillation) | Automatically generates additional Q&A pairs from your corpus to fill knowledge gaps, then fine-tunes the 135M student with LoRA/QLoRA adapters while distilling the teacher's reasoning. A lightweight hyper-parameter search runs in the background. | Produces a model up to 70× smaller than the teacher yet usually within a few percentage points of its accuracy, ready for CPU-only devices. |
| Benchmarking & packaging | Once training finishes, Distil Labs re-evaluates both teacher and student on your held-out test set, generates a side-by-side metrics report, and bundles the weights in an Ollama-ready tarball. | You get hard numbers and a model you can run locally in one command. |

Registration

The first step towards model distillation is creating an account at app.distillabs.ai. Once you sign up, you can use your email/password combination in the authentication section below.

Notebook Setup

Copy over necessary data
%%bash
# Check if the directory exists
if [ -d "data-mental-health" ]; then
  echo "Data directory does exist, nothing to do"
else
  echo "Data directory does not exist, cloning from a repository"

  # Clone the repo to a temp location
  git clone https://github.com/distil-labs/distil-labs-examples.git distil-labs-examples

  # Copy the example data directories
  cp -r distil-labs-examples/classification-tutorial/data-mental-health data-mental-health
  cp -r distil-labs-examples/classification-tutorial/data-injury data-injury
  cp -r distil-labs-examples/classification-tutorial/data-ecommerce data-ecommerce
  cp -r distil-labs-examples/classification-tutorial/data-banking-routing data-banking-routing

  # Delete the cloned repo
  rm -rf distil-labs-examples

  echo "Subdirectories copied and repo removed."
fi
! pip install pandas requests rich torch transformers

import pandas
pandas.set_option("display.max_rows", 10)

distil labs authentication

To begin, we need to authenticate. Use your distil labs login and password to generate a temporary API key that will be used for authentication during this tutorial; the key is valid for one hour, so please re-authenticate if 403 Forbidden errors appear.

import getpass
import json
import requests


def distil_bearer_token(DL_USERNAME: str, DL_PASSWORD: str) -> str:
    # Exchange the email/password combination for a temporary Cognito access token
    response = requests.post(
        "https://cognito-idp.eu-central-1.amazonaws.com",
        headers={
            "X-Amz-Target": "AWSCognitoIdentityProviderService.InitiateAuth",
            "Content-Type": "application/x-amz-json-1.1",
        },
        data=json.dumps({
            "AuthParameters": {
                "USERNAME": DL_USERNAME,
                "PASSWORD": DL_PASSWORD,
            },
            "AuthFlow": "USER_PASSWORD_AUTH",
            "ClientId": "4569nvlkn8dm0iedo54nbta6fd",
        })
    )
    response.raise_for_status()
    return response.json()["AuthenticationResult"]["AccessToken"]


DL_USERNAME = "YOUR_EMAIL"
DL_PASSWORD = getpass.getpass()

AUTH_HEADER = {"Authorization": distil_bearer_token(DL_USERNAME, DL_PASSWORD)}
print("Success")
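
Since the token expires after about an hour, it can be convenient to wrap API calls in a helper that refreshes AUTH_HEADER and retries once on a 403. This is an optional convenience sketch built on the distil_bearer_token function above, not part of the platform API:

def request_with_refresh(method: str, url: str, **kwargs) -> requests.Response:
    """Send a request; on a 403 Forbidden, refresh the bearer token and retry once."""
    global AUTH_HEADER
    extra_headers = kwargs.pop("headers", {})
    response = requests.request(method, url, headers={**extra_headers, **AUTH_HEADER}, **kwargs)
    if response.status_code == 403:
        # The token has likely expired: fetch a fresh one and retry
        AUTH_HEADER = {"Authorization": distil_bearer_token(DL_USERNAME, DL_PASSWORD)}
        response = requests.request(method, url, headers={**extra_headers, **AUTH_HEADER}, **kwargs)
    return response

For example, request_with_refresh("GET", "https://api.distillabs.ai/models") behaves like the plain requests calls used below, minus the manual re-authentication.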

Register a new model

The first component of the workflow is registering a new model; this helps us keep track of all our experiments down the line.

from pprint import pprint

# Register a model
data = {"name": "testmodel-1234"}
response = requests.post(
    "https://api.distillabs.ai/models",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)
pprint(response.json())
model_id = response.json()["id"]
print(f"Registered a model with ID={model_id}")

Inspect our models

Now that the model is registered, we can take a look at all the models in our repository.

from pprint import pprint

# Retrieve all models
response = requests.get(
    "https://api.distillabs.ai/models",
    headers=AUTH_HEADER,
)
pprint(response.json())

Data Validation

To get started with model training, we need to upload the necessary data components. The details of formatting are discussed in Data Preparation Guidelines for Classification, but if you don't have a dataset ready, you can follow one of the data preparation notebooks to prepare an example dataset. Each distil labs training relies on:

  1. A job description that explains the classification task and describes all classes
  2. Train and test datasets (tens of examples) that demonstrate our expected inputs and outputs
  3. (optional) An unstructured dataset with unlabelled data points related to the problem
from pathlib import Path

data_location = Path("data-banking-routing")
assert data_location.exists()

The data for this example should be stored in the data_location directory. Let's first take a look at the current directory to make sure all files are available. Your current directory should look like:

├── README.md
├── classification-training.ipynb
└── <data_location>
    ├── job_description.json
    ├── test.csv
    ├── train.csv
    └── unstructured.csv

Job Description

A job description explains the classification task in plain English and follows the general structure below:

{
  "task_description": "<Enter job description here>",
  "classes_description": {
    "class A": "<Enter class A description here>",
    "class B": "<Enter class B description here>",
    ...
  }
}

For this problem, we use the job description stored in data_location/. Let's inspect the job_description prepared for our problem:

import json
import rich.json

with open(data_location.joinpath("job_description.json")) as fin:
    rich.print(rich.json.JSON(fin.read()))

Train and test data

We need a small training dataset to begin distil labs training and a test dataset that we can use to evaluate the performance of the fine-tuned model. Here, we use the train and test datasets from the data_location directory; each is a CSV file with fewer than 100 (question, answer) pairs.
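
For orientation, a train.csv for a banking-routing task could look like the snippet below. The rows and class labels are purely illustrative, not taken from the actual dataset; follow the Data Preparation Guidelines for your real file:

question,answer
"How do I reset the PIN on my card?",card_management
"Why was my transfer to another bank rejected?",transfers
"Can I increase my daily withdrawal limit?",account_limits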

Let’s inspect the available datasets to see the format and a few examples.

from pathlib import Path
from IPython.display import display

import pandas

print("# --- Train set")
train = pandas.read_csv(data_location.joinpath("train.csv"))
display(train)

print("# --- Test set")
test = pandas.read_csv(data_location.joinpath("test.csv"))
display(test)

Unstructured dataset

The unstructured dataset is used to guide the teacher model in generating diverse, domain-specific data. It can be documentation, unlabelled examples, or even industry literature that contains such information. Here, we use the unstructured dataset from the data_location/ directory, a CSV file with a single column (context).

Let’s inspect the available datasets to see the format and a few examples.

unstructured = pandas.read_csv(data_location.joinpath("unstructured.csv"))
display(unstructured)

Upload and Validate data

We upload all data elements to the distil labs platform and use the data validation API to check that everything is in order for our jobs.

import json
from pathlib import Path

import requests
import yaml

# Specify the config
config = {
    "base": {
        "task": "classification",
    }
}

# Package your data
data = {
    "job_description": {
        "type": "json",
        "content": open(data_location / "job_description.json", encoding="utf-8").read(),
    },
    "train_data": {
        "type": "csv",
        "content": open(data_location / "train.csv", encoding="utf-8").read(),
    },
    "test_data": {
        "type": "csv",
        "content": open(data_location / "test.csv", encoding="utf-8").read(),
    },
    "unstructured_data": {
        "type": "csv",
        "content": open(data_location / "unstructured.csv", encoding="utf-8").read(),
    },
    "config": {
        "type": "yaml",
        "content": yaml.dump(config),
    },
}

# Upload data
response = requests.post(
    f"https://api.distillabs.ai/models/{model_id}/uploads",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)
print(response.json())
upload_id = response.json()["id"]

Teacher evaluation

In the teacher evaluation stage, we will use our test set to validate whether our chosen ‘teacher’ LLM can solve the task well enough.

If a large model can solve a problem, we can then distil the problem-solving ability of the larger model into a small model. The accuracy of the teacher LLM will give us an idea of the performance to expect from our SLM.

from pprint import pprint

# Start teacher evaluation
data = {"upload_id": upload_id}
response = requests.post(
    f"https://api.distillabs.ai/models/{model_id}/teacher-evaluations",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)

pprint(response.json())
teacher_evaluation_id = response.json().get("id")

Check status and results

Run the cell below to check the status and results of the LLM evaluation.

High accuracy on LLM evaluation indicates our task is well defined and we can move on to training. When training an SLM for this task, we can use the LLM evaluation as the quality benchmark for the trained model.

from pprint import pprint

import pandas as pd

response = requests.get(
    f"https://api.distillabs.ai/teacher-evaluations/{teacher_evaluation_id}/status",
    headers=AUTH_HEADER,
)
pprint(response.json()["message"])

try:
    # Results are only present once the evaluation has finished
    display(pd.DataFrame(response.json().get("results")).transpose())
except Exception:
    pass
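
If you would rather wait programmatically than re-run the cell by hand, a minimal polling sketch is shown below. The status field name and the terminal values ("complete", "failed") are assumptions based on the status=complete value mentioned later in this tutorial; adjust them to what the API actually returns:

import time

while True:
    response = requests.get(
        f"https://api.distillabs.ai/teacher-evaluations/{teacher_evaluation_id}/status",
        headers=AUTH_HEADER,
    )
    current = response.json().get("status", "")
    print(current)
    if current in ("complete", "failed"):
        break
    time.sleep(30)  # poll every 30 seconds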

SLM Training

Now that we are satisfied with the LLM evaluation, we will start the distil labs training process, in which the SLM learns to mimic the LLM's behavior on your specific task. Once training is complete, we will review the SLM's performance against the LLM's benchmark and decide whether the quality meets your requirements.

To kick off the training job, we can use the code snippet below, which starts the training loop and returns the ID of the initialized training job. In the subsequent steps, this job ID will be used to manage the training job's lifecycle.

from pprint import pprint

# Start SLM training
data = {"upload_id": upload_id}
response = requests.post(
    f"https://api.distillabs.ai/models/{model_id}/training",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)

pprint(response.json())
slm_training_job_id = response.json().get("id")

Training status and evaluation results

We can analyze the status of the training job using the jobs API. The following code snippet displays the current status of the job we started before.

response = requests.get(
    f"https://api.distillabs.ai/trainings/{slm_training_job_id}/status",
    headers=AUTH_HEADER,
)
response.json()

When the job is finished (status=complete), we can use the jobs API again to get the benchmarking results: the accuracy of the LLM and the accuracy of the fine-tuned SLM. We can achieve this using:

from pprint import pprint

response = requests.get(
    f"https://api.distillabs.ai/trainings/{slm_training_job_id}/evaluation-results",
    headers=AUTH_HEADER,
)

pprint(response.json()["message"])
for model in ["teacher", "student-base", "student-tuned"]:
    try:
        print(model)
        print("Accuracy:", response.json().get("evaluation_results").get(model).get("accuracy"))
        display(pd.DataFrame(response.json().get("evaluation_results").get(model)).transpose())
    except Exception:
        # Skip models whose results are not available yet
        pass

Interpreting results

Inspecting the classification results, we can compare the accuracy of the small model (1B parameters) to that of the teacher model, roughly 70× its size. In most cases, the accuracies should be comparable, indicating successful training.
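
To put a number on that comparison, you can pull both accuracies out of the evaluation payload. A small sketch, assuming the evaluation_results structure used in the cell above and numeric accuracy values:

results = response.json().get("evaluation_results", {})
teacher_acc = results.get("teacher", {}).get("accuracy")
student_acc = results.get("student-tuned", {}).get("accuracy")
if teacher_acc is not None and student_acc is not None:
    print(f"Teacher accuracy: {teacher_acc}")
    print(f"Tuned student accuracy: {student_acc}")
    print(f"Gap (teacher - student): {teacher_acc - student_acc:.3f}")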

SLM Ready

You can list all of your models using the cell below. Once the model is fully trained, we share the model binaries with you, so you can deploy it on your own infrastructure and retain full control. The model binaries can be downloaded using the model API: download the tarball and extract it into the model directory. A trained model can later be deployed for inference; this is explained in the next tutorial: classification_model_deployment.ipynb

from pprint import pprint
import requests

response = requests.get(
    "https://api.distillabs.ai/models",
    headers=AUTH_HEADER,
)
pprint(response.json())
from pprint import pprint

# Select the training job whose model binaries you want to download
slm_training_job_id = "SELECTED-MODEL"
response = requests.get(
    f"https://api.distillabs.ai/trainings/{slm_training_job_id}/model",
    headers=AUTH_HEADER,
)
pprint(response.json())
import tarfile
import urllib.request


def status(count, block, total):
    # Progress callback for urlretrieve
    print("\r", f"Downloading: {count * block / total:.1%}", end="")


# s3url must hold the tarball download URL returned by the previous cell;
# inspect the pprinted response for the link
print("Downloading …")
urllib.request.urlretrieve(
    s3url,
    "model.tar",
    reporthook=status,
)

print("\nUnpacking …")
with tarfile.open("model.tar", mode="r:*") as tar:
    tar.extractall(path=".")
!ls -lt

Model Deployment

import torch
import pandas

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline

# Load the fine-tuned classifier and tokenizer from the extracted "model" directory
model = AutoModelForSequenceClassification.from_pretrained("model")
tokenizer = AutoTokenizer.from_pretrained("model", padding_side="left")
llm = TextClassificationPipeline(model=model, tokenizer=tokenizer, top_k=None)

answer = llm("I have a charge for cash withdrawal that I want to learn about")
pandas.DataFrame(answer.pop()).sort_values(by="score", ascending=False)
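
The pipeline returns a score for every class; to keep only the predicted label, take the highest-scoring entry:

# Pick the top-scoring class from the pipeline output
scores = llm("I have a charge for cash withdrawal that I want to learn about")[0]
top = max(scores, key=lambda item: item["score"])
print(top["label"], top["score"])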