Introduction

Welcome to the Distil Labs hands‑on tutorial for fine-tuning and deploying your own domain-specialized assistant.

In this tutorial, you’ll learn how to:

  1. Fine-tune a small language model (SLM) for a custom open-book question answering task using the Distil Labs platform.
  2. Deploy a fully local Retrieval-Augmented Generation (RAG) system, where the fine-tuned model answers questions based on an external knowledge source.

Our focus is on building an assistant that can answer questions about the Roman Empire using just a single Wikipedia page as context. You will walk through the full lifecycle—from understanding your dataset, to fine-tuning a 135M-parameter model, to deploying a self-contained RAG pipeline that runs entirely on your machine. To put the model size in perspective, take a look at the following comparison between a frontier model such as GPT-4, Llama 8B (which is normally considered a “small language model”), and the 135M-parameter model we will be training in this tutorial.

Despite its compact size, the fine-tuned SLM will deliver performance close to much larger models—demonstrating how domain specialization and efficient distillation can unlock powerful capabilities on resource-constrained hardware.

By the end, you’ll have a functional, local QA assistant—built with minimal data, no ML expertise, and zero dependency on cloud-based LLMs.

Registration

The first step towards model distillation is creating an account at app.distillabs.ai. Once you sign up, you can use your email/password combination in the authentication section below.

Notebook Setup

Copy over necessary data
%%bash
# Check if the directory exists
if [ -d "data" ]; then
  echo "Data directory does exist, nothing to do"
else
  echo "Data directory does not exist, cloning from a repository"

  # Clone the repo to a temp location
  git clone https://github.com/distil-labs/distil-labs-examples.git distil-labs-examples

  # Copy the specific subdirectory to the data directory
  cp -r distil-labs-examples/rag-tutorial/data data

  # Delete the cloned repo
  rm -rf distil-labs-examples

  echo "Subdirectory copied and repo removed."

fi
Install Python libraries
! pip install langchain-core langchain_community langchain-openai langchain-huggingface langchain-ollama
! pip install wikipedia pandas numpy requests rich pyyaml rouge_score ollama

%env TOKENIZERS_PARALLELISM=false

Step 1: Understand your data

Before we can specialize a model or build a retrieval‑augmented generation (RAG) pipeline, we need to inspect the knowledge source we’ll be working with. In this tutorial, our task is: answer questions about the Roman Empire.

Why bother looking at the raw data first?
• It clarifies the scope (what’s in and what’s out of domain).
• It helps us spot formatting issues or noisy sections.
• It lets us craft realistic evaluation questions early on.

Retrieve a reference article

To keep things quick, we’ll use a single reference document: the English Wikipedia page for the Roman Empire. In a production system you’d likely combine multiple sources, but one page is enough to demo the workflow.

import wikipedia, textwrap

# Disable auto-suggest so we get the exact page
page = wikipedia.page("Roman Empire", auto_suggest=False)
wikipedia_text = page.content

print("First 1200 characters of the article:\n")
print(textwrap.fill(wikipedia_text[:1200], 110))

Sample questions we want to answer

Let’s jot down a few questions that our finished system should handle. Capturing these early gives us a mini test‑set for later.

sample_qa = [
    {"question": "When did the Roman Empire collapse?", "answer": "The west Roman Empire fell in 476 CE and the East lasted until the fall of Constantinople in 1453."},
    {"question": "Who was the first emperor of the Roman Empire?", "answer": "Augustus (formerly Octavian)"},
    {"question": "What were the main languages spoken across the Roman Empire?", "answer": "Latin in the West and Greek in the East"},
    {"question": "What monumental arena in Rome hosted gladiatorial games?", "answer": "The Colosseum"},
]

for qa in sample_qa:
    print(f"Q: {qa['question']}\nA: {qa['answer']}\n")

Why do we specialize the SLM

Here are the answers generated by the base HuggingFaceTB/SmolLM2-135M-Instruct model when given access to the Wikipedia context. Below you will see the questions (Q) and reference answers (A) that we use for testing, along with predictions (P) generated by the model. The generated answers are not unreasonable, but there is a long way to go before they can serve as a useful source of knowledge. One good way to improve their quality is to fine-tune the base model with distil labs.

Q: When did the Roman Empire collapse?
  • A: The west Roman Empire fell in 476 CE and the East lasted until the fall of Constantinople in 1453.
  • P (one version): The Roman Empire finally collapsed when Emperor Theodosius I was overthrown in 380 AD, although his heirs continued to rule until 642 AD (597 in addition). The fall is attributed to a combination of internal strife and external factors.
  • P (second version): The Roman Empire collapsed in the West and survived in the East

===== End of the Western Roman Empire in 476 AD == ====== Fall in the West and survival in the East ===== <context> The Roman Empire ruled the Mediterranean and much of Europe, Western Asia and North Africa. The Romans conquered most of this during the Republic, and it was ruled by emperors following Octavian’s assumption of effective sole rule in 27 BC. The western empire collapsed in 476 AD, but the eastern empire lasted until the fall of Constantinople in 1453. </context>


Q: Who was the first emperor of the Roman Empire?
  • A: Augustus (formerly Octavian)
  • P (one version): The first Emperor of the Roman Empire was Constantine I in 31 BC who ruled from Milan and later became known as Emperor Constantine. He restored the Julian Code to its original form after the death of Trajan in AD 109 or possibly in 75 CE after Trajan’s death in AD 98 that led to him becoming the first Emperor of the Roman Empire after his appointment by Emperor Titus in 12,651 (AD 336) from his father, Antoninus Pius.
  • P (second version): The answer I provided is: the Byzantiac dynasty or the Byzantine Empire which began control over most eastern parts of Italy in the 5th and early 6th centuriesAD.

Q: What were the main languages spoken across the Roman Empire?
  • A: Latin in the West and Greek in the East
  • P (one version): There were not two dominant languages by the Imperial Era throughout, Rome’s history it was a multi-peopelanguage that emerged. In Roman history the Empire dominated all the major languages and dialects including Latin, which is Latin still commonly used today in many countries of the world. It wasn’t until the Roman Empire collapsed after a brief period during the early Republic and the Principality (27 BC - AD 476) during the middle period that vernaculars gradually declined with the rise of Early Christianity. </content>
  • P (second version):

There are several different languages spoken across the Roman Empire, each with its own unique characteristics and cultural impacts. The languages include:

In the West, Latin (Celtic) was in widespread use throughout much of the Empire’s territory. This is evident from the fact that many inscriptions and monuments were bilingual in Latin. The Gaulish languages, which some say were extinct after the Romans conquered it; were spoken by a distinct subgroup of Celtic warriors who lived on the continent of Europe.

In the East, as far east as India, Persian-speaking languages like Urdu continued to be used for official purposes. Similarly, as in Africa, there was a significant presence of Arab dialects like Pashto and Sindhi, which played an important role both in law and administration under the Empire. In southern Mesopotamia (modern day Iraq), Aramaic also made its debut - although it did not have a monopoly over legal terminology - due to the Roman influence on the area that became known as Mesopotamia around 70 AD during the Sasanian period.

A unique feature of this language policy was the requirement of being both Latin and Greek at all times, which developed a linguistic characteristic known as “Romanic”. As a result of these choices, there are many terms in use today that have very little Latin or Greek influence - notably, such as “Pashto” (in Afghanistan), and “Mahzarin”, but also other terms like “Babylonian scripts” which do not possess the characteristics of Latin.

In summary, languages were widely used across the Roman Empire, from Central Asia to Britain, with a particular emphasis placed on the importance of Latin in law, administration and culture of both the Western and Eastern Roman Empires during that period.


Q: What monumental arena in Rome hosted gladiatorial games?
  • A: The Colosseum
  • P: The city’s famous Circus Maximus was the venue for numerous theatrical events at which gladiators fought, competed with animals such as bulls and donkeys etc., alongside other spectacular presentations.
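For reference, predictions like the ones above can be reproduced with a few lines of transformers code (the transformers and torch libraries should already be available via the langchain-huggingface install; if not, pip install them). Treat the sketch below as an approximation: the prompt wording, the choice of context chunk, and the decoding settings are illustrative assumptions rather than the exact setup used for the answers above.

import textwrap
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "HuggingFaceTB/SmolLM2-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Use the opening of the article as a stand-in context chunk (illustrative choice)
context = wikipedia_text[:1500]
question = "When did the Roman Empire collapse?"

messages = [{
    "role": "user",
    "content": f"<context>\n{context}\n</context>\n\n<question>\n{question}\n</question>",
}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens
print(textwrap.fill(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True), 110))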

Step 2: Specialize a Question-Answering Model with distil labs

In this chapter you will transform a compact 135M-parameter “student” model into a domain expert—without writing a single training loop yourself. Distil Labs takes care of every heavy-lifting step:

  • Data upload & validation. Under the hood: you submit a job description, tiny train/test CSVs, and (optionally) an unstructured corpus; the platform checks the schema, finds label mistakes, and estimates achievable accuracy. Why it matters: it catches data bugs before you waste compute.
  • Teacher evaluation. Under the hood: a large foundation model (the “teacher”) answers your test questions; Distil Labs measures accuracy and shows a pass/fail report. Why it matters: if the teacher can’t solve the task, small models won’t either—stop here instead of two hours later.
  • SLM training (synthetic generation + distillation). Under the hood: the platform automatically generates additional Q&A pairs from your corpus to fill knowledge gaps, then fine-tunes the 135M student with LoRA/QLoRA adapters while distilling the teacher’s reasoning; a lightweight hyper-parameter search runs in the background. Why it matters: it produces a model up to 70× smaller than the teacher yet usually within a few percentage points of its accuracy—ready for CPU-only devices.
  • Benchmarking & packaging. Under the hood: once training finishes, Distil Labs re-evaluates both teacher and student on your held-out test set, generates a side-by-side metrics report, and bundles the weights in an Ollama-ready tarball. Why it matters: you get hard numbers and a model you can run locally in one command.

What you need to supply

  • A concise job description that tells the platform what “good” looks like
  • Roughly 20–100 labeled (question, answer) pairs for train / test
  • Any domain documents you want the teacher to read while inventing synthetic Q&A pairs

Everything else (synthetic generation, distillation, evaluation, and packaging) is automated.
Let’s dive in and see how that looks in practice.
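To make the second item concrete, the labeled pairs are just a small two-column table. The snippet below builds a toy version; the "question"/"answer" column names mirror the bundled files in the data directory, but treat them as an assumption and copy whatever schema data/train.csv actually uses.

import pandas as pd

# Toy illustration of the labeled-pair format -- in this tutorial the real
# files already live in data/train.csv and data/test.csv.
toy_train = pd.DataFrame([
    {"question": "Who was the first emperor of the Roman Empire?",
     "answer": "Augustus (formerly Octavian)"},
    {"question": "What monumental arena in Rome hosted gladiatorial games?",
     "answer": "The Colosseum"},
])
toy_train.to_csv("toy_train.csv", index=False)
print(toy_train)

For this tutorial you will not need to build these files yourself; they are already prepared in the data directory copied at the start of the notebook.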

Authentication

The first step towards model distillation is logging into the distil labs account you created at the beginning of the notebook. If you registered already, you can use your email/password combination in the authentication section below.

import getpass
import json
import requests


def distil_bearer_token(DL_USERNAME: str, DL_PASSWORD: str) -> str:
    response = requests.post(
        "https://cognito-idp.eu-central-1.amazonaws.com",
        headers={
            "X-Amz-Target": "AWSCognitoIdentityProviderService.InitiateAuth",
            "Content-Type": "application/x-amz-json-1.1",
        },
        data=json.dumps({
            "AuthParameters": {
                "USERNAME": DL_USERNAME,
                "PASSWORD": DL_PASSWORD,
            },
            "AuthFlow": "USER_PASSWORD_AUTH",
            "ClientId": "4569nvlkn8dm0iedo54nbta6fd",
        })
    )
    response.raise_for_status()
    return response.json()["AuthenticationResult"]["AccessToken"]


DL_USERNAME = "YOUR_EMAIL"
DL_PASSWORD = getpass.getpass()

AUTH_HEADER = {"Authorization": distil_bearer_token(DL_USERNAME, DL_PASSWORD)}
print("Success")

Register a new model

The first component of the workflow is registering a new model - this helps us keep track of all our experiments down the line.

from pprint import pprint

# Register a model
data = {"name": "testmodel-123"}
response = requests.post(
    "https://api.distillabs.ai/models",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)
pprint(response.json())
model_id = response.json()["id"]
print(f"Registered a model with ID={model_id}")

Inspect our models

Now that the model is registered, we can take a look at all the models in our repository.

from pprint import pprint

# Retrieve all models
response = requests.get(
    "https://api.distillabs.ai/models",
    headers=AUTH_HEADER
)
pprint(response.json())

Data Upload

The data for this example should be stored in the data directory. Let's first take a look at the current directory to make sure all files are available. Your current directory should look like:

├── README.md
├── rag-tutorial.ipynb
└── data
    ├── job_description.json
    ├── test.csv
    ├── train.csv
    └── unstructured.csv

import json
from pathlib import Path
import rich.json

with open(Path("data").joinpath("job_description.json")) as fin:
    rich.print(rich.json.JSON(fin.read()))

Train/test set

We need a small training dataset to start the distil labs training run and a test dataset that we can use to evaluate the performance of the fine-tuned model. Here, we use the train and test datasets from the data directory, where each is a CSV file with fewer than 100 (question, answer) pairs.

from pathlib import Path
from IPython.display import display

import pandas as pd

print("# --- Train set")
train = pd.read_csv(Path("data").joinpath("train.csv"))
display(train)

print("# --- Test set")
test = pd.read_csv(Path("data").joinpath("test.csv"))
display(test)

Unstructured dataset

The unstructured dataset is used to guide the teacher model in generating diverse, domain-specific data. Here, we will use chunks of the Wikipedia article as the unstructured data for our problem.

import pandas as pd
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Split the document into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
all_splits = text_splitter.split_text(wikipedia_text)

# Save the documents into local storage
contexts_dataframe = pd.DataFrame([{"context": split} for split in all_splits])
contexts_dataframe.to_csv("data/unstructured.csv")
contexts_dataframe

Data upload

We upload our dataset by attaching it to the model we created; this lets us keep all the artifacts in one place.

import json

import requests
import yaml
from pathlib import Path

# Specify the config
config = {
    "base": {
        "task": "question-answering-open-book",
        "student_model_name": "SmolLM2-135M-Instruct",
    },
    "synthgen": {
        "num_distractor_context_blocks": 2,
    },
    "tuning": {
        "num_train_epochs": 4,
    },
}

# Package your data
data_dir = Path("data")
data = {
    "job_description": {
        "type": "json",
        "content": open(data_dir / "job_description.json", encoding="utf-8").read()
    },
    "train_data": {
        "type": "csv",
        "content": open(data_dir / "train.csv", encoding="utf-8").read()
    },
    "test_data": {
        "type": "csv",
        "content": open(data_dir / "test.csv", encoding="utf-8").read()
    },
    "unstructured_data": {
        "type": "csv",
        "content": open(data_dir / "unstructured.csv", encoding="utf-8").read()
    },
    "config": {
        "type": "yaml",
        "content": yaml.dump(config)
    },
}

# Upload data
response = requests.post(
    f"https://api.distillabs.ai/models/{model_id}/uploads",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)
print(response.json())
upload_id = response.json()["id"]

Teacher Evaluation

Before training an SLM, distil labs validates whether a large language model can solve your task:

from pprint import pprint

# Start teacher evaluation
data = {"upload_id": upload_id}
response = requests.post(
    f"https://api.distillabs.ai/models/{model_id}/teacher-evaluations",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)

pprint(response.json())
teacher_evaluation_id = response.json().get("id")

Poll the status endpoint until it completes, then inspect the quality of the generated answers. distil labs shows four scores to tell you how well the “teacher” model answers your test questions. Think of them as different lenses on the same picture—together they give a fuller view than any single number.

  • Exact-Match (Binary). What it really asks: “Did the model give exactly the same words as the reference answer?” How to read it: 1 = perfect match, 0 = anything else; great for facts that have one correct phrasing, harsh on synonyms. (Wikipedia)
  • LLM-as-a-Judge. What it really asks: “If we let a large language model act as a human grader, does it say this answer is good?” How to read it: scores reflect semantic quality even when wording differs; handy when many answers are possible. (Evidently AI, arXiv)
  • ROUGE-L. What it really asks: “How much word overlap is there between answer and reference?” (counts the longest common subsequence). How to read it: higher = more shared wording; favours longer answers that reuse reference phrases; widely used in text-summarisation tests. (Wikipedia)
  • METEOR. What it really asks: “Do the two answers share words or close synonyms/stems, and is the wording fluent?” How to read it: balances precision + recall, rewards correct synonyms, penalises word salad; often tracks human judgements better than pure overlap metrics. (Wikipedia)

How to interpret a scorecard
  • If Exact-Match is low but LLM-as-a-Judge is high, the answers are probably right but paraphrased—consider adding those paraphrases to your reference set.
  • If all four numbers sag, revisit your job description or give the model more context; the task may be under-specified.

Follow the links above for deeper dives if you want to explore the math or research behind each metric.
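To make the first bullet concrete, here is a tiny local illustration using the rouge_score package installed at the top of the notebook (METEOR and LLM-as-a-Judge are computed on the platform side, so this sketch only covers the overlap-based view). A paraphrased but correct answer gets zero exact-match credit while ROUGE-L still rewards the shared wording:

from rouge_score import rouge_scorer

reference = "Latin in the West and Greek in the East"
prediction = "The empire spoke Latin in the West and Greek in the East."

# Exact match is all-or-nothing: any paraphrase scores 0
exact_match = int(prediction.strip().lower() == reference.strip().lower())

# ROUGE-L credits the shared wording via the longest common subsequence
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, prediction)["rougeL"].fmeasure

print(f"Exact match: {exact_match}, ROUGE-L F1: {rouge_l:.2f}")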

from pprint import pprint

response = requests.get(
    f"https://api.distillabs.ai/teacher-evaluations/{teacher_evaluation_id}/status",
    headers=AUTH_HEADER
)
pprint(response.json())
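Rather than re-running the cell by hand, you can wrap the same request in a small polling loop. This is only a sketch: it assumes the response contains a "status" field that eventually reaches a terminal value such as "complete" or "failed" (the training section below uses status=complete), so adapt the check to the payload you actually see printed above.

import time

import requests

# Poll the teacher-evaluation status every 30 seconds until it reaches a
# terminal state. The field name and status values are assumptions -- adjust
# them to match the payload printed by the previous cell.
while True:
    response = requests.get(
        f"https://api.distillabs.ai/teacher-evaluations/{teacher_evaluation_id}/status",
        headers=AUTH_HEADER,
    )
    current_status = response.json().get("status")
    print(f"Teacher evaluation status: {current_status}")
    if current_status in ("complete", "failed"):
        break
    time.sleep(30)

The same pattern works for the training status endpoint in the next section, where runs typically take longer.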

SLM Training

Once the teacher evaluation completes successfully, start the SLM training:

from pprint import pprint

# Start SLM training
data = {"upload_id": upload_id}
response = requests.post(
    f"https://api.distillabs.ai/models/{model_id}/training",
    data=json.dumps(data),
    headers={"content-type": "application/json", **AUTH_HEADER},
)

pprint(response.json())
slm_training_job_id = response.json().get("id")

We can analyze the status of the training job using the jobs API. The following code snippet displays the current status of the job we started above:

import json
from pprint import pprint
import requests

response = requests.get(
    f"https://api.distillabs.ai/trainings/{slm_training_job_id}/status",
    headers=AUTH_HEADER,
)
pprint(response.json())

When the job is finished (status=complete), we can use the jobs API again to get the benchmarking results for the base and fine-tuned SLM, scored with the same four metrics as the teacher evaluation:

from pprint import pprint

response = requests.get(
    f"https://api.distillabs.ai/trainings/{slm_training_job_id}/evaluation-results",
    headers=AUTH_HEADER,
)

pprint(response.json())

Download Your Model

You can list all of your models using the cell below. Once training is complete, download the selected model for deployment.

import json
from pprint import pprint
import requests

response = requests.get(
    "https://api.distillabs.ai/models",
    headers=AUTH_HEADER,
)
pprint(response.json())

from pprint import pprint

# Get model download URL
slm_training_job_id = "SELECTED-MODEL"
response = requests.get(
    f"https://api.distillabs.ai/trainings/{slm_training_job_id}/model",
    headers=AUTH_HEADER
)

s3url = response.json()["s3_url"]
pprint(response.json())

import tarfile
import urllib.request


def status(count, block, total):
    # Report download progress as a percentage
    print("\r", f"Downloading: {count * block / total:.1%}", end="")


print("Downloading …")
urllib.request.urlretrieve(
    s3url,
    "model.tar",
    reporthook=status,
)

print("\nUnpacking …")
with tarfile.open("model.tar", mode="r:*") as tar:
    tar.extractall(path=".")

!ls -lt

Step 3: Build a local RAG system with your fine‑tuned model

Now that we have a small language model fine‑tuned specifically for Roman‑Empire question‑answering, we can build our RAG pipeline around it. This domain‑specialized model will provide more accurate, context‑aware answers than our baseline while still running entirely on local hardware. The main objectives for us are as follows:

  • Launch a lightweight chat model locally with ollama.
  • Chunk a Wikipedia article, embed the chunks with HuggingFace sentence‑transformers, and store them in an in‑memory vector store.
  • Glue retrieval and generation together in a minimal RAG class, then test the loop end‑to‑end.

Install ollama in your own system

To install ollama, follow the instructions at https://ollama.com/download and start the serving daemon (via ollama serve). Once ready, confirm the app is running by executing the following command (the list should be empty since we have not loaded any models yet):

! ollama list

(Optional) Install ollama for Google Colab

If you are running this notebook in Google Colab, you can install Ollama with the following command:

! curl -fsSL https://ollama.com/install.sh | sh

Once ollama is installed, we should start the application. You can start the daemon with ollama serve using nohup to make sure it stays in the background.

! nohup ollama serve &

Make sure the app is running by executing the following command (the list should be empty since we have not loaded any models yet):

! ollama list

Register and test the downloaded model

Once your model is trained, it should be unpacked and registered with ollama. The downloaded model directory already contains everything that is needed, and the model can be registered with the command below. Once it is ready, we can test the model through the standard OpenAI-compatible interface.

! ollama create model-distillabs -f model/Modelfile

from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required, but unused
)

response = client.chat.completions.create(
    model="model-distillabs",
    messages=[
        {"role": "user", "content": "What day is it?"},
    ],
)
print(response.choices[0].message.content)

Index our target dataset

This section walks through loading the Wikipedia article on the Roman Empire into an in‑memory vector store (adapted from https://python.langchain.com/docs/tutorials/rag/):

from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

# Split the document into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
text_splits = text_splitter.split_text(wikipedia_text)
document_splits = text_splitter.create_documents(text_splits)

# Embed and index the chunks
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L12-v2")
vector_store = InMemoryVectorStore(embeddings)
indexed = vector_store.add_documents(documents=document_splits)

Define the RAG logic

Now that our dataset is indexed and the chat model is live, we can wire retrieval and generation together. In this section we implement a bite‑sized RAG helper class that

  1. fetches the top‑k passages most similar to the user’s question,
  2. feeds those passages and the question into the language model via a structured prompt, and
  3. returns a concise answer.

With this plumbing in place, answering a question becomes a single‑function call.

from langchain_openai import ChatOpenAI
from langchain_core.vectorstores import InMemoryVectorStore


class RAG:
    def __init__(self, vector_store: InMemoryVectorStore, llm: ChatOpenAI):
        self.vector_store = vector_store
        self.llm = llm

        self.SYSTEM_PROMPT = """
You are a problem solving model working on task_description XML block:

<task_description>
Answer the question using information from one of the context paragraphs.
The answer should contain all important information from the context paragraph but stay short, one sentence maximum.
</task_description>

You will be given a single task with context in the context XML block and the task in the question XML block
Solve the task in question block based on the context in context block.
Generate only the answer, do not generate anything else
"""

        self.PROMPT_TEMPLATE = """
Now for the real task, solve the task in question block based on the context in context block.
Generate only the solution, do not generate anything else.

<context>
{context}
</context>

<question>
{question}
</question>
"""

    def retrieve(self, question: str, k: int = 3):
        # Fetch the k chunks most similar to the question
        return self.vector_store.similarity_search(question, k=k)

    def generate(self, question: str, context_docs):
        # Concatenate the retrieved chunks and pass them to the model with the question
        context = "\n\n".join(doc.page_content for doc in context_docs)
        messages = [
            {"role": "system", "content": self.SYSTEM_PROMPT},
            {"role": "user", "content": self.PROMPT_TEMPLATE.format(context=context, question=question)},
        ]
        return self.llm.invoke(messages).content

    def answer(self, question: str):
        # Retrieval + generation in a single call
        return self.generate(question, self.retrieve(question))

Plug the new model into RAG

With the fine‑tuned weights now running locally, the last step is to plug the specialized model into our existing RAG helper class. The retrieval component fetches the most relevant passages about the Roman Empire, while the generation step leverages a model that has been fine-tuned on our domain‑specific data.

from langchain_openai import ChatOpenAI

tuned_llm = ChatOpenAI(
    base_url='http://localhost:11434/v1',
    api_key="EMPTY",
    model="model-distillabs",
    temperature=0,
)
tuned_rag = RAG(vector_store=vector_store, llm=tuned_llm)
print(tuned_rag.answer("When did the roman empire collapse?"))

Test our RAG system

sample_qa = [
    {"question": "When did the Roman Empire collapse?", "answer": "The west Roman Empire fell in 476 CE and the East lasted until the fall of Constantinople in 1453."},
    {"question": "Who was the first emperor of the Roman Empire?", "answer": "Augustus (formerly Octavian)"},
    {"question": "What were the main languages spoken across the Roman Empire?", "answer": "Latin in the West and Greek in the East"},
    {"question": "What monumental arena in Rome hosted gladiatorial games?", "answer": "The Colosseum"},
]

for qa in sample_qa:
    print(f"Q: {qa['question']}\nA: {qa['answer']}\nP: {tuned_rag.answer(qa['question'])}\n")