
Pod Inheritance

Standard OCI Containers

This hierarchy shows build inheritance, not deployment requirements. All workloads are published to ghcr.io/atrawog/bazzite-ai-pod-*:stable and run on any OCI-compatible container runtime.

Inheritance Tree

```mermaid
graph TD
    base[pod-base<br/>~7GB<br/>Fedora 43 + Dev Tools]

    base --> nvidia[pod-nvidia<br/>~8GB<br/>CUDA + cuDNN + TensorRT]
    base --> devops[pod-devops<br/>~10GB<br/>AWS + gcloud + kubectl]
    base --> runner[pod-githubrunner<br/>~8GB<br/>GitHub Actions Runner]

    nvidia --> python[pod-nvidia-python<br/>~14GB<br/>PyTorch ML via pixi]
    nvidia --> ollama[pod-ollama<br/>~11GB<br/>LLM Inference Server]

    python --> jupyter[pod-jupyter<br/>~17GB<br/>JupyterLab Server]
    python --> comfyui[pod-comfyui<br/>~26GB<br/>AI Image Generation]

    style python fill:#4CAF50,color:#fff
    style jupyter fill:#4CAF50,color:#fff
    style devops fill:#4CAF50,color:#fff
    style ollama fill:#4CAF50,color:#fff
    style comfyui fill:#4CAF50,color:#fff
```

Green nodes = Core workloads (recommended for most workflows)

Layer Structure

Each workload inherits tools from its parent, adding specialized functionality:

Layer 1: Base Foundation

pod-base (~7GB) - Clean Fedora 43 with development essentials

  • Build toolchain (gcc, make, cmake, ninja)
  • Language runtimes (Python 3.13, Node.js 23+, Go, Rust)
  • VS Code, Docker CLI, Podman
  • kubectl, Helm, Claude Code
  • Modern shell tools (fzf, ripgrep, bat, eza)

Layer 2: Specializations

From base:

| Pod | Adds | Use Case |
|-----|------|----------|
| nvidia | CUDA 13.0, cuDNN, TensorRT | Custom GPU setups |
| devops | AWS, gcloud, Firebase, Grafana tools | Cloud infrastructure |
| githubrunner | GitHub Actions runner agent | CI/CD pipelines |

Layer 3: ML/AI & LLM

From nvidia:

| Pod | Adds | Use Case |
|-----|------|----------|
| nvidia-python | PyTorch, torchvision, torchaudio via pixi | ML/AI development |
| ollama | Ollama LLM server, model management | LLM inference |

Layer 4: Interactive

From nvidia-python:

| Pod | Adds | Use Case |
|-----|------|----------|
| jupyter | JupyterLab server | Interactive notebooks |
| comfyui | ComfyUI, Stable Diffusion | AI image generation |

Image Registry

All workloads are published to GitHub Container Registry:

```
ghcr.io/atrawog/bazzite-ai-pod-<variant>:<tag>
```
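In a script, the full image reference can be assembled from a variant and tag; a minimal sketch (the `VARIANT`/`TAG` variable names are illustrative, not part of the project):

```shell
# Compose a full image reference from a variant and tag
# (variable names here are illustrative)
VARIANT=nvidia-python
TAG=stable
IMAGE="ghcr.io/atrawog/bazzite-ai-pod-${VARIANT}:${TAG}"
echo "$IMAGE"   # → ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
```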

Available Tags

| Tag | Description |
|-----|-------------|
| stable | Production-ready release |
| latest | Most recent build |
| `<version>` | Specific version (e.g., 1.0.0) |

Pull Examples

```shell
# Docker
docker pull ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

# Podman
podman pull ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

# Apptainer (converts to SIF format)
apptainer pull docker://ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
```

Build System

Workloads are built using a unified buildcache for efficient multi-variant builds:

```
pods/
├── base/Containerfile           # Base layer
├── nvidia/Containerfile         # CUDA layer
├── nvidia-python/Containerfile  # PyTorch layer
├── jupyter/Containerfile        # JupyterLab layer
├── comfyui/Containerfile        # AI image generation layer
├── ollama/Containerfile         # LLM inference layer
├── devops/Containerfile         # DevOps tools
└── githubrunner/Containerfile   # CI/CD runner
```
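Each Containerfile builds on its parent image from the tree above. As a rough sketch of the pattern (the actual files in pods/ may differ), a child layer might start like this:

```dockerfile
# Hypothetical sketch -- the real pods/nvidia-python/Containerfile may differ
FROM ghcr.io/atrawog/bazzite-ai-pod-nvidia:stable

# Layer-specific tooling is added here
# (e.g. PyTorch via pixi for the nvidia-python layer)
USER jovian
WORKDIR /workspace
```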

Build Commands

```shell
# Build specific workload
just pod build nvidia-python

# Build all workloads
just pod build all

# Push to registry
just pod push nvidia-python
```

Common Base Components

All workloads include (inherited from base):

Languages & Runtimes

| Language | Version |
|----------|---------|
| Python | 3.13 |
| Node.js | 23+ |
| Go | Latest |
| Rust | Latest |
| .NET | 8.0 |
| PHP | Latest |
| Java | OpenJDK |
| Ruby | Latest |

Development Tools

| Category | Tools |
|----------|-------|
| Build | gcc, g++, make, cmake, ninja, meson |
| Version Control | git, gh CLI |
| Containers | Docker CLI, Podman |
| Kubernetes | kubectl, Helm |
| Editor | VS Code (code-server) |
| AI | Claude Code CLI |

Shell Environment

| Tool | Purpose |
|------|---------|
| Starship | Modern shell prompt |
| fzf | Fuzzy finder |
| zoxide | Smart directory navigation |
| ripgrep | Fast search |
| bat | Better cat |
| eza | Better ls |

Container User

All workloads run as user jovian (UID 1000) by default:

  • Username: jovian
  • UID: 1000
  • Home: /home/jovian
  • Workspace: /workspace (mounted from host)
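Matching the user and workspace conventions above, a typical interactive run might look like the following (the flags are standard Docker options; adjust the mounted path to your setup):

```shell
# Run interactively with GPU access, mounting the current directory
# as /workspace and matching the jovian UID (1000)
docker run --rm -it \
  --gpus all \
  --user 1000:1000 \
  -v "$PWD":/workspace \
  ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
```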

See Also