Workloads

Bazzite AI provides containerized workloads for AI/ML development. All workloads are OCI containers available at `ghcr.io/atrawog/bazzite-ai-pod-*:stable`.
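
The `ujust` quick-start commands below manage these containers for you, but you can also fetch an image directly. A minimal sketch, assuming Podman is installed and that `ollama` is a valid substitution for the `*` in the image name:

```bash
# Pull one workload image directly (the "ollama" name is assumed from the pattern above)
podman pull ghcr.io/atrawog/bazzite-ai-pod-ollama:stable

# Inspect the downloaded image's size in bytes
podman image inspect ghcr.io/atrawog/bazzite-ai-pod-ollama:stable --format '{{.Size}}'
```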

AI/ML Workloads

| Workload | Size | GPU | Description | Quick Start |
|---|---|---|---|---|
| Ollama | ~11GB | Yes | LLM inference server | `ujust ollama start` |
| JupyterLab | ~17GB | Yes | Interactive notebooks | `ujust jupyter start` |
| ComfyUI | ~26GB | Yes | AI image generation | `ujust comfyui start` |
| nvidia-python | ~14GB | Yes | ML/AI with PyTorch | `ujust apptainer shell` |
| Open WebUI | ~2GB | No | Chat interface for Ollama | `ujust openwebui start` |
| FiftyOne | ~3GB | No | Dataset visualization | `ujust fiftyone start` |
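
For example, to bring up the LLM inference server and check that it responds (a sketch assuming Ollama listens on its default port 11434, which the multi-instance example below implies):

```bash
# Start the Ollama workload
ujust ollama start

# Verify the API answers; /api/tags lists locally available models
curl http://localhost:11434/api/tags
```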

Service Workloads

| Workload | Size | GPU | Description | Quick Start |
|---|---|---|---|---|
| Jellyfin | ~1GB | Yes | Media server | `ujust jellyfin start` |
| Portainer | ~200MB | No | Container management | `ujust portainer start` |
| Runners | ~5GB | Yes | GitHub self-hosted runners | `ujust runners start` |
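
Service workloads follow the same pattern as the AI/ML workloads; for instance, starting Portainer and confirming it came up:

```bash
# Start the Portainer container-management UI
ujust portainer start

# Confirm the service is running
ujust portainer status
```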

GPU Support

All GPU-enabled workloads automatically detect and use available GPUs:

| Vendor | Support | Notes |
|---|---|---|
| NVIDIA | Yes | RTX 20+ with CUDA |
| AMD | Yes | RX 5000+ via ROCm |
| Intel | Yes | Gen 7+ / Arc via Vulkan |

For GPU setup, see GPU Setup.
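
To verify detection yourself, you can open a shell in the nvidia-python workload and query the GPU directly. A sketch for NVIDIA hardware only; `nvidia-smi` assumes the NVIDIA driver is set up on the host:

```bash
# Open an interactive shell in the nvidia-python workload
ujust apptainer shell

# Inside the shell: list the GPUs the container can see (NVIDIA only)
nvidia-smi
```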

Common Operations

All workloads follow the same lifecycle pattern:

| Command | Description |
|---|---|
| `ujust <workload> config` | Configure settings |
| `ujust <workload> start` | Start service |
| `ujust <workload> status` | Check status |
| `ujust <workload> logs` | View logs |
| `ujust <workload> stop` | Stop service |
| `ujust <workload> delete` | Remove config |
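
Putting the lifecycle together for a single workload, using JupyterLab as the example:

```bash
ujust jupyter config   # configure settings
ujust jupyter start    # start the service
ujust jupyter status   # check that it is running
ujust jupyter logs     # view the service logs
ujust jupyter stop     # stop the service
```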

Multi-Instance Support

Run multiple instances of any workload using the `-n` flag:

```bash
# First instance (default ports)
ujust ollama start

# Second instance (different port)
ujust ollama start -n 2 --port=11435

# Third instance
ujust ollama start -n 3 --port=11436
```
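
Each instance then listens on its own port. A quick check, assuming 11434 is the default Ollama port as the flags above suggest:

```bash
# Query the default instance and the second instance
curl http://localhost:11434/api/tags
curl http://localhost:11435/api/tags
```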

See Also