Model deployment
Once your model is trained, you can download and deploy it locally using the inference framework of your choice. Alternatively, you can push it to Hugging Face Hub for easy sharing and deployment.
Deploying with distil CLI
The distil CLI provides a built-in deployment command that handles model download and serving for you.
Local deployment (experimental)
Deploy your trained model locally using llama-cpp as the inference backend:
This downloads the model and starts a local llama-server on port 8000. You can customize the port and enable server logs:
Once running, query the model using the OpenAI-compatible API at http://localhost:<port>/v1.
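For example, a minimal request with curl, assuming the default port 8000 (replace it with the port you chose); the model field is a placeholder, since the local server serves a single model:

```bash
# Query the locally deployed model via the OpenAI-compatible chat completions endpoint.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local-model",
        "messages": [{"role": "user", "content": "Hello, what can you do?"}]
      }'
```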
To get the command to invoke your locally deployed model:
This outputs a ready-to-run command using uv. Copy and run it directly:
For question answering models that require context, use the --context flag:
Local deployment requires llama-cpp to be installed on your machine.
Downloading your model
Download your trained model using the CLI or API:
After downloading, extract the tarball. You will have a directory containing your trained SLM with all necessary files for deployment:
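A minimal extraction step, assuming the downloaded archive is named model.tar.gz (the actual filename depends on your training ID):

```bash
# Extract the downloaded archive; the archive name is a placeholder for the file you downloaded.
tar -xzf model.tar.gz
```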
Deploying with vLLM
vLLM is a high-performance inference engine for LLMs. To get started, set up a virtual environment and install dependencies:
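For example, using a standard Python virtual environment (any additional packages you need beyond vllm will depend on your setup):

```bash
# Create and activate a virtual environment, then install vLLM.
python -m venv .venv
source .venv/bin/activate
pip install vllm
```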
Start the vLLM server:
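A minimal invocation, assuming the extracted model directory is ./model:

```bash
# Serve the local model directory with vLLM's OpenAI-compatible server on port 8000.
vllm serve ./model --port 8000
```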
For tool calling models:
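A sketch of the tool-calling variant; the correct --tool-call-parser value depends on the base model family, so treat hermes below as an assumption:

```bash
# Enable automatic tool choice; pick the parser that matches your base model.
vllm serve ./model --port 8000 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```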
Note that model refers to the directory containing your model weights. The server runs in the foreground, so run it in a separate terminal window or as a background process.
Query the model using the OpenAI-compatible API:
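For example, with curl; the model field should match the path or name the server was started with (here assumed to be ./model):

```bash
# List the models the server exposes, then send a chat completion request.
curl http://localhost:8000/v1/models

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "./model",
        "messages": [{"role": "user", "content": "Answer briefly: what does this model specialize in?"}]
      }'
```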
Or use the provided client script:
For question answering models that require context, use the --context flag:
Pushing to Hugging Face Hub (API only)
You can upload your model directly to your private Hugging Face repository. Once pushed, you can deploy it directly from Hugging Face using various inference frameworks.
Requirements:
- Training ID of the model (YOUR_TRAINING_ID)
- Hugging Face user access token with write privileges (YOUR_HF_TOKEN)
- Repository name for your model (YOUR_USERNAME/MODEL_NAME)
We push models to two repositories on Hugging Face, one in GGUF format and one in safetensors format. Once your model is on Hugging Face, you can deploy it directly using vLLM:
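For example, serving the safetensors repository directly from the Hub; since the repository is private, vLLM needs a Hugging Face token with read access (exported here as HF_TOKEN):

```bash
# Authenticate for the private repository, then serve the model straight from the Hub.
export HF_TOKEN=YOUR_HF_TOKEN
vllm serve YOUR_USERNAME/MODEL_NAME --port 8000
```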
Production considerations
When deploying your model to production, consider:
- Resource Requirements: Even small models benefit from GPU acceleration for high-throughput applications.
- Security: Apply appropriate access controls, especially if your model handles sensitive information.
- Container Deployment: Consider packaging your model with Docker for consistent deployment across environments, as sketched below.
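A minimal containerized-serving sketch using the official vllm/vllm-openai image, assuming the extracted model directory is ./model and a GPU is available on the host:

```bash
# Run vLLM's OpenAI-compatible server in a container, mounting the local model directory.
docker run --gpus all -p 8000:8000 \
  -v "$(pwd)/model:/model" \
  vllm/vllm-openai:latest \
  --model /model
```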
