Model deployment
Once your model is trained, you can download and deploy it locally using the inference framework of your choice. Alternatively, you can push it to Hugging Face Hub for easy sharing and deployment.
Downloading your model
Download your trained model using the CLI or API:
After downloading, extract the tarball. You will have a directory containing your trained SLM with all necessary files for deployment:
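For example (the archive and directory names below are placeholders; substitute the filename from your download):

```bash
# Extract the downloaded tarball (filename is a placeholder)
tar -xzf model.tar.gz

# The extracted directory holds the model weights, tokenizer, and config files
ls model/
```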
Deploying with vLLM
vLLM is a high-performance inference engine for LLMs. To get started, set up a virtual environment and install dependencies:
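For example, using Python's built-in venv (the exact package versions you need may vary for your hardware):

```bash
# Create and activate an isolated Python environment
python -m venv .venv
source .venv/bin/activate

# Install vLLM; this pulls in a compatible PyTorch build
pip install vllm
```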
Start the vLLM server:
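A minimal invocation, assuming your extracted model directory is named model, might look like this:

```bash
# Serve the local model directory with an OpenAI-compatible API on port 8000
vllm serve model --port 8000
```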
Note that model refers to the directory containing your model weights. The server runs in the foreground, so run it in a separate terminal window or as a background process.
Query the model using the OpenAI-compatible API:
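For example, with curl against the default port (here the model name matches the path passed to vllm serve):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "model",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```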
Or use the provided client script:
For question answering models that require context, use the --context flag:
Deploying with Ollama
Ollama makes it easy to run LLMs locally.
Install Ollama following the instructions at ollama.com, then set up your environment:
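On Linux, for example, the official install script can be used (macOS and Windows use the desktop installer from ollama.com); the version check is just a sanity test:

```bash
# Install Ollama via the official script (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the installation
ollama --version
```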
Create and run the model:
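A minimal sketch, assuming your download includes a GGUF file (the paths and the local model name are placeholders):

```bash
# Write a minimal Modelfile pointing at the GGUF weights
cat > Modelfile <<'EOF'
FROM ./model/model.gguf
EOF

# Register the model under a local name, then start an interactive session
ollama create my-slm -f Modelfile
ollama run my-slm
```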
Query the model using the OpenAI-compatible API:
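For example, with curl against Ollama's default port (the model name is whatever you chose when creating it):

```bash
# Ollama serves an OpenAI-compatible API on port 11434 by default
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "my-slm",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```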
Or use the provided client script:
For question answering models that require context, use the --context flag:
Pushing to Hugging Face Hub (API only)
You can upload your model directly to your private Hugging Face repository. Once pushed, you can deploy it directly from Hugging Face using various inference frameworks.
Requirements:
- Training ID of the model (YOUR_TRAINING_ID)
- Hugging Face user access token with write privileges (YOUR_HF_TOKEN)
- Repository name for your model (YOUR_USERNAME/MODEL_NAME)
Note that for Ollama, your model needs to be in GGUF format. As such, we push models to two repositories on Hugging Face: one for the GGUF format and one for the safetensors format. Once your model is on Hugging Face, you can deploy it directly using vLLM or Ollama:
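For example (the repository names below are placeholders; set the HF_TOKEN environment variable if the repositories are private):

```bash
# vLLM can load the safetensors repository directly from the Hub
vllm serve YOUR_USERNAME/MODEL_NAME

# Ollama can run a GGUF repository hosted on the Hub
ollama run hf.co/YOUR_USERNAME/MODEL_NAME-gguf
```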
Note: When using Ollama with private models, you may need to upload your Ollama SSH key to Hugging Face. See the Hugging Face documentation for instructions.
Production considerations
When deploying your model to production, consider:
- Resource Requirements: Even small models benefit from GPU acceleration for high-throughput applications.
- Security: Apply appropriate access controls, especially if your model handles sensitive information.
- Container Deployment: Consider packaging your model with Docker for consistent deployment across environments.
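As an illustration of the container approach, the official vLLM server image can serve the extracted model directory directly (paths and the image tag are placeholders, and GPU access requires the NVIDIA container toolkit):

```bash
# Run the OpenAI-compatible vLLM server from the official container image,
# mounting the local model directory into the container
docker run --gpus all -p 8000:8000 \
  -v "$(pwd)/model:/model" \
  vllm/vllm-openai:latest \
  --model /model
```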
