Introduction

Welcome to the Distil Labs hands-on tutorial for fine-tuning and deploying your own domain-specialized model. In this tutorial, you'll learn how to fine-tune a small language model (SLM) for a custom open-book question-answering task using the Distil Labs platform.

Despite its compact size, the fine-tuned SLM will deliver performance close to much larger models—demonstrating how domain specialization and efficient distillation can unlock powerful capabilities on resource-constrained hardware. By the end, you’ll have a functional, local QA assistant—built with minimal data, no ML expertise, and zero dependency on cloud-based LLMs.

Registration

The first step towards model distillation is creating an account at app.distillabs.ai. Once you sign up, you can use your email/password combination in the authentication section below.

Notebook Setup

Copy over necessary data
%%bash
# Check if the directory exists
if [ -d "data" ]; then
  echo "Data directory does exist, nothing to do"
else
  echo "Data directory does not exist, cloning from a repository"

  # Clone the repo to a temp location
  git clone https://github.com/distil-labs/distil-labs-examples.git distil-labs-examples

  # Copy the specific subdirectory to the data directory
  cp -r distil-labs-examples/rag-tutorial/data data

  # Delete the cloned repo
  rm -rf distil-labs-examples

  echo "Subdirectory copied and repo removed."
fi
Install Python libraries

! pip install langchain-core langchain_community langchain-openai langchain-huggingface langchain-ollama
! pip install wikipedia pandas numpy requests rich pyyaml rouge_score ollama

%env TOKENIZERS_PARALLELISM=false

Specialize a Question-Answering Model with distil labs

In this chapter you will transform a compact 1B-parameter “student” model into a domain expert—without writing a single training loop yourself. Distil Labs takes care of every heavy-lifting step:

Stage: Data upload & validation
  What happens under the hood: You submit a job description, tiny train / test CSVs, and (optionally) an unstructured corpus. The platform checks the schema, finds label mistakes, and estimates achievable accuracy.
  Why it matters: Catches data bugs before you waste compute.

Stage: Teacher evaluation
  What happens under the hood: A large foundation model ("teacher") answers your test questions. Distil Labs measures accuracy and shows a pass/fail report.
  Why it matters: If the teacher can't solve the task, small models won't either—stop here instead of two hours later.

Stage: SLM training (synthetic generation + distillation)
  What happens under the hood: Automatically generates additional Q&A pairs from your corpus to fill knowledge gaps, then fine-tunes the compact student with LoRA/QLoRA adapters while distilling the teacher's reasoning. A lightweight hyper-parameter search runs in the background.
  Why it matters: Produces a model up to 70× smaller than the teacher yet usually within a few percentage points of its accuracy—ready for CPU-only devices.

Stage: Benchmarking & packaging
  What happens under the hood: Once training finishes, Distil Labs re-evaluates both teacher and student on your held-out test set, generates a side-by-side metrics report, and bundles the weights in an Ollama-ready tarball.
  Why it matters: You get hard numbers and a model you can run locally in one command.

What you need to supply

  • A concise job description that tells the platform what “good” looks like
  • Roughly 20–100 labeled (question, answer) pairs for train / test
  • Any domain documents you want the teacher to read while inventing synthetic Q&A pairs

Everything else (synthetic generation, distillation, evaluation, and packaging) is automated.
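
To make the labeled pairs concrete, here is a rough sketch of what a couple of rows could look like. The rows below are invented placeholders, and the column layout (question, context, answer) is assumed from the test example used later in this notebook; the real files ship with the tutorial as data/train.csv and data/test.csv.

import pandas as pd

# Invented placeholder rows illustrating the assumed open-book layout;
# the real train/test files live in the data/ directory.
example_rows = pd.DataFrame([
    {
        "question": "Which river is longer, the Rhine or the Elbe?",
        "context": "The Rhine is about 1,230 km long. [...] The Elbe runs for roughly 1,094 km.",
        "answer": "the Rhine",
    },
    {
        "question": "In which country does the Elbe reach the sea?",
        "context": "The Elbe rises in the Czech Republic. [...] It flows into the North Sea near Cuxhaven, Germany.",
        "answer": "Germany",
    },
])
print(example_rows)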
Let’s dive in and see how that looks in practice.

Authentication

The first step towards model distillation is logging into the distil labs account you created at the beginning of this notebook. If you have already registered, you can use your email/password combination in the authentication section below.

import getpass
import json
import requests


def distil_bearer_token(DL_USERNAME: str, DL_PASSWORD: str) -> str:
    response = requests.post(
        "https://cognito-idp.eu-central-1.amazonaws.com",
        headers={
            "X-Amz-Target": "AWSCognitoIdentityProviderService.InitiateAuth",
            "Content-Type": "application/x-amz-json-1.1",
        },
        data=json.dumps({
            "AuthParameters": {
                "USERNAME": DL_USERNAME,
                "PASSWORD": DL_PASSWORD,
            },
            "AuthFlow": "USER_PASSWORD_AUTH",
            "ClientId": "4569nvlkn8dm0iedo54nbta6fd",
        })
    )
    response.raise_for_status()
    return response.json()["AuthenticationResult"]["AccessToken"]


DL_USERNAME = "YOUR_EMAIL"
DL_PASSWORD = getpass.getpass()

AUTH_HEADER = {"Authorization": distil_bearer_token(DL_USERNAME, DL_PASSWORD)}
print("Success")

Register a new model

The first component of the workflow is registering a new model - this helps us keep track of all our experiments down the line.

from pprint import pprint

# Register a model
data = {"name": "testmodel-123"}
response = requests.post(
    "https://api.distillabs.ai/models",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)
pprint(response.json())
model_id = response.json()["id"]
print(f"Registered a model with ID={model_id}")

Inspect our models

Now that the model is registered, we can take a look at all the models in our repository.

from pprint import pprint

# Retrieve all models
response = requests.get(
    "https://api.distillabs.ai/models",
    headers=AUTH_HEADER
)
pprint(response.json())

Data Upload

The data for this example should be stored in the data_location directory. Let's first take a look at the current directory to make sure all files are available. Your current directory should look like:

├── README.md
├── rag-tutorial.ipynb
└── data
    ├── job_description.json
    ├── test.csv
    ├── train.csv
    └── unstructured.csv

import json
from pathlib import Path
import rich.json

with open(Path("data").joinpath("job_description.json")) as fin:
    rich.print(rich.json.JSON(fin.read()))

Train/test set

We need a small training dataset to start distil labs training and a test dataset to evaluate the performance of the fine-tuned model. Here, we use the train and test sets from the data_location directory; each is a CSV file with fewer than 100 (question, answer) pairs.

from pathlib import Path
from IPython.display import display

import pandas as pd

print("# --- Train set")
train = pd.read_csv(Path("data").joinpath("train.csv"))
display(train)

print("# --- Test set")
test = pd.read_csv(Path("data").joinpath("test.csv"))
display(test)

Unstructured dataset

The unstructured dataset is used to guide the teacher model in generating diverse, domain-specific data. For this open-book example, we need to provide realistic documents that can serve as context for question answering. Here, we use the unstructured dataset from the data_location/ directory, a CSV file with a single column (context).

Let's inspect the dataset to see the format and a few examples.

from pathlib import Path

import pandas as pd

unstructured = pd.read_csv(Path("data").joinpath("unstructured.csv"))
display(unstructured)

Data upload

We upload our dataset by attaching it to the model we created; this keeps all the artifacts in one place.

import json
from pathlib import Path

import requests
import yaml

# Specify the config
config = {
    "base": {
        "task": "question-answering-open-book",
    },
    "tuning": {
        "num_train_epochs": 4,
    },
}

# Package your data
data_dir = Path("data")
data = {
    "job_description": {
        "type": "json",
        "content": open(data_dir / "job_description.json", encoding="utf-8").read()
    },
    "train_data": {
        "type": "csv",
        "content": open(data_dir / "train.csv", encoding="utf-8").read()
    },
    "test_data": {
        "type": "csv",
        "content": open(data_dir / "test.csv", encoding="utf-8").read()
    },
    "unstructured_data": {
        "type": "csv",
        "content": open(data_dir / "unstructured.csv", encoding="utf-8").read()
    },
    "config": {
        "type": "yaml",
        "content": yaml.dump(config)
    },
}

# Upload data
response = requests.post(
    f"https://api.distillabs.ai/models/{model_id}/uploads",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)
print(response.json())
upload_id = response.json()["id"]

Teacher Evaluation

Before training an SLM, distil labs validates whether a large language model can solve your task:

from pprint import pprint

# Start teacher evaluation
data = {"upload_id": upload_id}
response = requests.post(
    f"https://api.distillabs.ai/models/{model_id}/teacher-evaluations",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)

pprint(response.json())
teacher_evaluation_id = response.json().get("id")

Poll the status endpoint until the evaluation completes, then inspect the quality of the generated answers. distil labs reports four scores that tell you how well the "teacher" model answers your test questions. Think of them as different lenses on the same picture; together they give a fuller view than any single number.

Metric: Exact-Match (Binary)
  What it really asks: "Did the model give exactly the same words as the reference answer?"
  How to read it: 1 = perfect match, 0 = anything else. Great for facts that have one correct phrasing, harsh on synonyms. (Wikipedia)

Metric: LLM-as-a-Judge
  What it really asks: "If we let a large language model act as a human grader, does it say this answer is good?"
  How to read it: Scores reflect semantic quality even when wording differs; handy when many answers are possible. (Evidently AI, arXiv)

Metric: ROUGE-L
  What it really asks: "How much word overlap is there between answer and reference?" (counts the longest common subsequence)
  How to read it: Higher = more shared wording; favours longer answers that reuse reference phrases. Widely used in text-summarisation tests. (Wikipedia)

Metric: METEOR
  What it really asks: "Do the two answers share words or close synonyms/stems, and is the wording fluent?"
  How to read it: Balances precision and recall, rewards correct synonyms, penalises word salad; often tracks human judgements better than pure overlap metrics. (Wikipedia)

How to interpret a scorecard
  • If Exact-Match is low but LLM-as-a-Judge is high, the answers are probably right but paraphrased—consider adding those paraphrases to your reference set.
  • If all four numbers sag, revisit your job description or give the model more context; the task may be under-specified.

Follow the links above for deeper dives if you want to explore the math or research behind each metric.
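
If you want to build intuition for the overlap-based metrics, you can try them locally with the rouge_score package installed at the top of the notebook. The snippet below is just an illustration with invented strings and a naive exact-match check; it is not the platform's implementation.

from rouge_score import rouge_scorer

reference = "Paris"
prediction = "The capital of France is Paris"

# Exact match is all-or-nothing: any paraphrase scores 0.
exact_match = int(prediction.strip().lower() == reference.strip().lower())

# ROUGE-L gives partial credit for shared wording (longest common subsequence).
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, prediction)["rougeL"].fmeasure

print(f"Exact match: {exact_match}")
print(f"ROUGE-L F1:  {rouge_l:.2f}")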

from pprint import pprint

response = requests.get(
    f"https://api.distillabs.ai/teacher-evaluations/{teacher_evaluation_id}/status",
    headers=AUTH_HEADER
)
pprint(response.json())
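
The cell above checks the status a single time. If you prefer to block until the evaluation is done, a minimal polling sketch is shown below. The helper name and the terminal status values ("complete", "failed") are assumptions you should verify against the responses you actually receive; the same pattern can be reused for the training status endpoint later.

import time
import requests

def wait_for_completion(url: str, interval_s: int = 30) -> dict:
    """Poll a status endpoint until it reports an (assumed) terminal state."""
    while True:
        result = requests.get(url, headers=AUTH_HEADER).json()
        status = result.get("status")
        print(f"status: {status}")
        if status in ("complete", "failed"):  # assumed terminal values
            return result
        time.sleep(interval_s)

# Example usage (uncomment to block until the teacher evaluation finishes):
# wait_for_completion(f"https://api.distillabs.ai/teacher-evaluations/{teacher_evaluation_id}/status")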

SLM Training

Once the teacher evaluation completes successfully, start the SLM training:

import time
from pprint import pprint

# Start SLM training
data = {"upload_id": upload_id}
response = requests.post(
    f"https://api.distillabs.ai/models/{model_id}/training",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)

pprint(response.json())
slm_training_job_id = response.json().get("id")

We can analyze the status of the training job using the jobs API. The following code snippet displays the current status of the job we started above:

import json
from pprint import pprint
import requests

response = requests.get(
    f"https://api.distillabs.ai/trainings/{slm_training_job_id}/status",
    headers=AUTH_HEADER,
)
pprint(response.json())
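
Training can take a while, so if you defined the wait_for_completion helper sketched in the teacher-evaluation section, you can reuse it here to block until the job reaches a terminal state:

# Assumes the (hypothetical) wait_for_completion helper sketched earlier was defined.
wait_for_completion(f"https://api.distillabs.ai/trainings/{slm_training_job_id}/status")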

When the job is finished (status=complete), we can use the jobs API again to get the benchmarking result for the base and fine-tuned SLM, using the same four metrics as for the teacher evaluation. We can achieve this using:

from pprint import pprint

response = requests.get(
    f"https://api.distillabs.ai/trainings/{slm_training_job_id}/evaluation-results",
    headers=AUTH_HEADER,
)

pprint(response.json())

Download Your Model

You can list all of your models using the cell below. Once training is complete, download the selected model for deployment.

import json
from pprint import pprint
import requests

response = requests.get(
    "https://api.distillabs.ai/models",
    headers=AUTH_HEADER,
)
pprint(response.json())

from pprint import pprint

# Get model download URL
slm_training_job_id = "SELECTED-MODEL"  # replace with the ID of the training job you want to download
response = requests.get(
    f"https://api.distillabs.ai/trainings/{slm_training_job_id}/model",
    headers=AUTH_HEADER
)

s3url = response.json()["s3_url"]
pprint(response.json())

import tarfile
import urllib.request

print("Downloading …")
def status(count, block, total):
    print("\r", f"Downloading: {count * block / total:.1%}", end="")


urllib.request.urlretrieve(
    s3url,
    "model.tar",
    reporthook=status,
)

print("\nUnpacking …")
with tarfile.open("model.tar", mode="r:*") as tar:
    tar.extractall(path=".")

! ls -lt

Deploy your fine‑tuned model

Now that we have a small language model fine-tuned specifically for HotpotQA, we can launch it locally as a lightweight chat model with ollama.

Install ollama on your own system

To install ollama, follow the instructions from https://ollama.com/download and make sure to enable the serving daemon (via ollama serve). Once ready, make sure the app is running by executing the following command (the list should be empty since we have not loaded any models yet):

! ollama list

(Optional) Install ollama for Google Colab

If you are running this notebook in Google Colab, you can install Ollama with the following command:

! curl -fsSL https://ollama.com/install.sh | sh

Once ollama is installed, we should start the application. You can launch the daemon with ollama serve, using nohup so that it keeps running in the background.

! nohup ollama serve &

Make sure the app is running by executing the following command (the list should be empty since we have not loaded any models yet):

! ollama list
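
You can also run the same check from Python with the ollama client library installed at the top of the notebook; this is optional and equivalent to the shell command above.

import ollama

# Lists locally available models via the Python client
# (should be empty before we register our fine-tuned model).
print(ollama.list())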

Register and test the downloaded model

Once your model is trained, it should be unpacked and registered with ollama. The downloaded model directory already contains everything that is needed, and the model can be registered with the command below. Once it is ready, we can test the model through the standard OpenAI-compatible interface.

! ollama create model-distillabs -f model/Modelfile

from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required, but unused
)

response = client.chat.completions.create(
    model="model-distillabs",
    messages=[
        {"role": "user", "content": "What day is it?"},
    ],
)
print(response.choices[0].message.content)

SYSTEM_PROMPT = """
You are a problem solving model working on task_description XML block:

<task_description>
Answer the question using information in the context. Questions require information from more than one paragraph from the context to answer.
</task_description>

You will be given a single task with context in the context XML block and the task in the question XML block
Solve the task in question block based on the context in context block.
Generate only the answer, do not generate anything else
"""

PROMPT_TEMPLATE = """
Now for the real task, solve the task in question block based on the context in context block.
Generate only the solution, do not generate anything else.

<context>
{context}
</context>

<question>
{question}
</question>
"""

def get_prompt(question: str, context: str) -> list:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": PROMPT_TEMPLATE.format(context=context, question=question)},
    ]

Test our model

import pandas
from pathlib import Path

data_location = Path("data")
test = pandas.read_csv(data_location.joinpath("test.csv"))

example = test.loc[3]
print(f"context:\n{example['context']}")
print(f"question:\n{example['question']}")
print(f"answer:\n{example['answer']}")


response = client.chat.completions.create(
    model="model-distillabs",
    messages=get_prompt(question=example['question'], context=example['context']),
)
print("\n\nPrediction:", response.choices[0].message.content)
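
As a final sanity check, you can score the local model on the full test set. The loop below is a rough, invented sketch that uses a plain exact-match comparison, so the number will not exactly match the benchmarking report returned by the API:

# Rough local evaluation: exact match over every row of the test set.
correct = 0
for _, row in test.iterrows():
    response = client.chat.completions.create(
        model="model-distillabs",
        messages=get_prompt(question=row["question"], context=row["context"]),
    )
    prediction = response.choices[0].message.content.strip()
    correct += int(prediction.lower() == str(row["answer"]).strip().lower())

print(f"Exact match on {len(test)} test questions: {correct / len(test):.1%}")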