Overview
Whether you are looking to classify text, answer questions, interact with internal tools, or solve other language tasks, our step-by-step workflow will take you from initial concept to production-ready model. Let’s dive in!
Authentication
First, authenticate with the distil labs platform:
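The authentication call itself is not shown on this page, so here is a minimal Python sketch. The base URL, the `DISTIL_API_KEY` environment variable, and the bearer-token scheme are all assumptions; substitute the values from your distil labs account settings.

```python
import os
import urllib.request

API_BASE = "https://api.distillabs.ai"          # assumed base URL
API_KEY = os.environ.get("DISTIL_API_KEY", "")  # assumed env-var name

def authed_request(path: str, method: str = "GET") -> urllib.request.Request:
    """Build a request carrying the bearer token that later calls will need."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        method=method,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
```

Keeping the key in an environment variable rather than in code is the safer default for anything you might commit.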
Step 1: Create a model
Register a new model to track your experiment:
You can list all your models with:
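The registration and listing calls can be sketched as below. The `/models` path, the field names, and the task value are illustrative assumptions rather than the documented distil labs API; only the request shapes are meant to carry over.

```python
import json
import urllib.request

API_BASE = "https://api.distillabs.ai"  # assumed base URL
API_KEY = "<your-api-key>"

def _request(path, method="GET", body=None):
    """Assemble an authenticated API request, JSON-encoding any body."""
    data = json.dumps(body).encode() if body is not None else None
    headers = {"Authorization": f"Bearer {API_KEY}"}
    if data:
        headers["Content-Type"] = "application/json"
    return urllib.request.Request(
        f"{API_BASE}{path}", data=data, method=method, headers=headers
    )

# Register a new model; the name and task type are illustrative values.
create = _request("/models", "POST",
                  {"name": "ticket-classifier", "task": "classification"})

# List every model registered under your account.
list_models = _request("/models")
```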
Step 2: Task selection and data preparation
Begin by identifying the specific task you want your model to perform. Different tasks require different approaches to data preparation and model configuration.
Learn more about task selection →
Prepare your data and configuration files according to your chosen task type. A training job expects all of its required files together in a single directory.
Learn more about data preparation →
Upload your data to the model (from your local ./data directory):
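As a sketch of the upload step, the helper below plans one request per file found in the local ./data directory. The endpoint path, the per-file scheme, and the model id are assumptions; the real flow is covered in the data-preparation guide.

```python
from pathlib import Path

MODEL_ID = "model-123"  # placeholder: use the id returned at registration

def upload_plan(data_dir="./data"):
    """Describe one upload request per file in the local data directory."""
    return [
        {"method": "POST",
         "path": f"/models/{MODEL_ID}/data",
         "filename": p.name}
        for p in sorted(Path(data_dir).glob("*")) if p.is_file()
    ]
```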
Step 3: Teacher evaluation
Before training your specialized small model, validate that a large language model can solve your task accurately on the provided examples. If the teacher model solves the task reliably, the student model can learn from it effectively.
Learn about teacher evaluation →
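The evaluation call itself is not shown here, so the sketch below only captures the gating logic this step implies: run the teacher on your examples, then proceed to training only if it clears an accuracy bar. The endpoint path and the threshold are assumptions, not the documented API.

```python
def teacher_eval_endpoint(model_id):
    """Assumed path for starting a teacher evaluation; check the docs."""
    return ("POST", f"/models/{model_id}/teacher-evaluation")

def teacher_good_enough(accuracy, threshold=0.9):
    """Gate before training; the threshold here is illustrative, not official."""
    return accuracy >= threshold
```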
Step 4: Model training
Once your teacher evaluation shows satisfactory results, train your specialized small language model using knowledge distillation.
Understand the model training process →
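Training is asynchronous, so a client typically starts the job and then polls for a terminal status. The endpoint path and status names below are assumptions; the polling pattern is the part meant to carry over.

```python
import time

def start_training(model_id):
    """Assumed path for kicking off a distillation training job."""
    return ("POST", f"/models/{model_id}/training")

def poll_until_done(get_status, interval_s=30.0, max_polls=120):
    """Call get_status() until it reports a terminal state or we give up."""
    for _ in range(max_polls):
        status = get_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(interval_s)
    return "timed-out"
```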
Step 5: Download your model
Once training is complete, download your model:
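A download sketch follows, with the artifact URL and the local filename as assumptions; the commented lines show where the actual transfer would happen once you have valid credentials.

```python
import urllib.request
from pathlib import Path

API_BASE = "https://api.distillabs.ai"  # assumed base URL

def download_plan(model_id, dest="./model/model.gguf"):
    """Pair the assumed artifact URL with a local target path."""
    return f"{API_BASE}/models/{model_id}/download", Path(dest)

url, target = download_plan("model-123")
# target.parent.mkdir(parents=True, exist_ok=True)
# urllib.request.urlretrieve(url, target)  # requires your API credentials
```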
Step 6: Model deployment
Deploy your trained model locally or using distil labs inference for immediate integration with your applications.
If you decide to deploy locally, download the model and set up inference with Ollama:
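Ollama loads local weights through a Modelfile. The sketch below assumes the download produced a GGUF file at ./model/model.gguf and picks an arbitrary model name; the actual commands are commented out because they require Ollama to be installed.

```python
import subprocess
from pathlib import Path

WEIGHTS = Path("./model/model.gguf")  # assumed location of the downloaded weights
MODEL_NAME = "my-distilled-model"     # arbitrary local name

# A Modelfile tells Ollama where the weights live.
modelfile_text = f"FROM {WEIGHTS}\n"

create_cmd = ["ollama", "create", MODEL_NAME, "-f", "Modelfile"]
run_cmd = ["ollama", "run", MODEL_NAME]

# Path("Modelfile").write_text(modelfile_text)
# subprocess.run(create_cmd, check=True)  # builds the local model
# subprocess.run(run_cmd, check=True)     # starts an interactive session
```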
Query the model with the OpenAI-compatible API:
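Ollama serves an OpenAI-compatible endpoint at http://localhost:11434/v1 by default. The sketch below builds a chat-completion request with the standard library; the model name must match the one used with `ollama create`, and the example prompt is illustrative.

```python
import json
import urllib.request

def build_chat_request(model, prompt):
    """Assemble a chat-completion call for the local Ollama server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("my-distilled-model", "Classify: refund not received")
# with urllib.request.urlopen(req) as resp:  # requires the server to be running
#     print(json.load(resp)["choices"][0]["message"]["content"])
```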
Next steps
You’ve successfully trained and deployed a specialized small language model! For more details, explore:
- Tutorials for complete end-to-end examples
- Deployment options for production deployment
