Podman Quadlets¶
Bazzite AI workloads run as systemd user services using Podman Quadlets. The ujust <workload> config command automatically generates quadlet files.
How Quadlets Work¶
When you run ujust <workload> config, a quadlet file is generated in ~/.config/containers/systemd/. Once the service is started, systemd manages the container:
- Auto-start on login - Containers start automatically
- Restart policies - Automatic restart on failure
- Service management - Standard systemctl commands work
- GPU passthrough - Automatically configured
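The quadlet file itself is not a systemd unit; Podman ships a systemd generator that translates it into a regular .service unit the next time systemd reloads or you log in. A minimal way to confirm the generated unit exists, assuming the ollama service name used in the examples below:
# Re-run systemd's generators so new quadlet files are picked up
systemctl --user daemon-reload
# The generated service should now be listed
systemctl --user list-units --all 'ollama*'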
Quick Start¶
| Step | Command | Description |
|---|---|---|
| 1 | ujust ollama config | Generate quadlet |
| 2 | ujust ollama start | Start service |
| 3 | ujust ollama status | Check status |
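Once status reports the service as active, a quick sanity check is to query the container directly. The port and endpoint below assume the default Ollama setup shown in the quadlet example later on this page:
# Ollama answers on port 11434 by default
curl http://localhost:11434/api/version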
Workload Lifecycle¶
All workloads follow the config → start pattern:
| Command | Description |
|---|---|
| ujust <workload> config | Configure and generate quadlet |
| ujust <workload> start | Start via systemctl |
| ujust <workload> status | Check service status |
| ujust <workload> logs | View container logs |
| ujust <workload> stop | Stop service |
| ujust <workload> restart | Restart service |
| ujust <workload> delete | Remove quadlet and config |
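For example, a typical session with the Jupyter workload follows the same pattern (workload-specific flags are omitted here):
# Generate the quadlet, start the service, then watch it
ujust jupyter config
ujust jupyter start
ujust jupyter status
ujust jupyter logs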
Multiple Instances¶
Run multiple instances with separate quadlets:
# First instance (default)
ujust ollama config
ujust ollama start
# Second instance (different port)
ujust ollama config -n 2 --port=11435
ujust ollama start -n 2
Each instance has:
- Separate quadlet file: ~/.config/containers/systemd/ollama-<n>.container
- Separate config: ~/.config/ollama/<n>/config
- Separate container name: ollama-<n>
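Because each instance gets its own quadlet and container name, each also gets its own systemd unit. To see them side by side, assuming the ollama-<n>.service naming used in the next section:
# Both generated services should be listed, one per instance
systemctl --user list-units 'ollama-*.service'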
Manual Systemctl Commands¶
After configuration, standard systemctl commands work:
# Check status
systemctl --user status ollama-1.service
# View logs
journalctl --user -u ollama-1.service -f
# Stop/Start
systemctl --user stop ollama-1.service
systemctl --user start ollama-1.service
# Disable auto-start
systemctl --user disable ollama-1.service
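If you edit a generated quadlet file by hand (for example, to change a published port), systemd only picks up the change after its generators run again, so reload and then restart. A minimal sketch, assuming the instance-1 unit name used above:
# Re-run the quadlet generator, then restart with the new settings
systemctl --user daemon-reload
systemctl --user restart ollama-1.service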
Generated Quadlet Location¶
Quadlet files are stored in ~/.config/containers/systemd/.
Example ollama-1.container:
[Unit]
Description=Ollama LLM Server (Instance 1)
After=network-online.target
[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-ollama:stable
PublishPort=11434:11434
Volume=%h/.config/ollama/1:/home/jovian/.ollama
AddDevice=nvidia.com/gpu=all
Network=bazzite-ai.network
[Service]
Restart=on-failure
[Install]
WantedBy=default.target
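The .container file above is not executed by systemd directly; Podman's quadlet generator turns it into an ordinary .service unit at daemon reload. To inspect exactly what systemd will run (the generator binary path below is Podman's usual location and may differ by distribution):
# Show the .service unit generated from ollama-1.container
systemctl --user cat ollama-1.service
# Preview what the generator would produce for all user quadlets
/usr/libexec/podman/quadlet -dryrun -user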
Available Workloads¶
| Workload | Config Command |
|---|---|
| Ollama | ujust ollama config |
| Jupyter | ujust jupyter config |
| ComfyUI | ujust comfyui config |
| OpenWebUI | ujust openwebui config |
| FiftyOne | ujust fiftyone config |
| Jellyfin | ujust jellyfin config |
| Portainer | ujust portainer config |
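The quadlet example above attaches the container to bazzite-ai.network; on a shared network like this, dependent workloads can reach each other, so a frontend such as OpenWebUI is typically run alongside its Ollama backend. A minimal sketch of bringing both up (whether the generated configs point OpenWebUI at Ollama automatically is an assumption here):
# Configure and start the backend, then the web frontend
ujust ollama config
ujust ollama start
ujust openwebui config
ujust openwebui start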
See Also¶
- Bazzite AI OS Deployment - Native ujust deployment
- Commands Reference - All ujust commands
- Recordings - Watch command demos