GPU Compatibility

Bazzite AI OS supports modern NVIDIA, AMD, and Intel GPUs through open-source drivers. This guide covers compatibility, setup, and troubleshooting.

Docker/Podman Users

For GPU setup outside Bazzite AI OS, see Docker/Podman Deployment.

Supported GPUs

Vendor | Supported GPUs             | Driver                     | Features
NVIDIA | RTX 20+ (Turing and newer) | Open kernel modules 580.95 | CUDA, Ray Tracing, DLSS, NVENC
AMD    | RX 400+ / Vega (GCN 4+)    | AMDGPU + Mesa RADV         | Vulkan RT, FSR, VCN encoding
Intel  | Gen 7+ / Arc A/B-series    | i915 / xe + Mesa ANV       | XeSS, AV1 encoding, Quick Sync
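
Not sure which GPU a machine has? A quick way to check from the host (a minimal sketch; lspci ships with pciutils and should be available on the host):

# List GPUs and their vendors
lspci | grep -iE "vga|3d|display"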

Graphics Stack

Component | Version
Mesa      | 25.2.4
Vulkan    | 1.4
OpenGL    | 4.6
Kernel    | 6.16.4+
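
To confirm these versions on a running system, the usual userspace tools can be queried directly (a hedged sketch; glxinfo and vulkaninfo come from mesa-demos and vulkan-tools and may need to be installed if they are not already present):

# Kernel version
uname -r

# OpenGL / Mesa version
glxinfo | grep "OpenGL version"

# Vulkan API version per device
vulkaninfo --summary | grep -i apiVersion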

NVIDIA Setup

Supported Cards

  • RTX 40-series (Ada Lovelace)
  • RTX 30-series (Ampere)
  • RTX 20-series (Turing)

Older cards (GTX 10-series and earlier) are not supported because the NVIDIA open kernel modules require Turing or newer GPUs.

Verify GPU Detection

# On host
nvidia-smi

# Expected output shows your GPU model, driver version, CUDA version
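
For scripted checks, nvidia-smi can also emit just the fields of interest in CSV form (standard nvidia-smi query options):

# Machine-readable GPU name, driver version, and memory
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv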

Pod GPU Access

First-time setup required:

# Run once on host
ujust setup-gpu-pods

# This configures nvidia-container-toolkit

Verify Pod GPU Access

# Run GPU pod with Apptainer
apptainer shell --nv bazzite-ai-pod-nvidia-python_stable.sif

# Inside pod - verify CUDA
nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"
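
The same check can be run non-interactively with apptainer exec, which is convenient for scripts (same image as above; adjust the path to wherever the .sif file is stored):

# One-shot CUDA check without opening a shell
apptainer exec --nv bazzite-ai-pod-nvidia-python_stable.sif \
  python -c "import torch; print(torch.cuda.get_device_name(0))"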

AMD Setup

Supported Cards

  • RX 7000-series (RDNA 3)
  • RX 6000-series (RDNA 2)
  • RX 5000-series (RDNA)
  • Vega (GCN 5)
  • RX 500/400-series (GCN 4)

No Setup Required

AMD GPUs work automatically via the AMDGPU driver and Mesa RADV.

Verify GPU Detection

# On host
ls /dev/dri/
# Should show: card0, renderD128

# Check GPU info
glxinfo | grep "OpenGL renderer"
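
To confirm the Vulkan side as well (vulkaninfo comes from vulkan-tools; on AMD the device should be reported through the RADV driver):

# Vulkan device as seen through RADV
vulkaninfo --summary | grep -i deviceName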

Pod GPU Access

AMD GPUs work automatically via /dev/dri passthrough:

# With Podman
podman run --device=/dev/dri -it ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

Intel Setup

Supported GPUs

  • Arc B-series (Battlemage)
  • Arc A-series (Alchemist)
  • Iris Xe (Gen 12)
  • UHD Graphics (Gen 7+)

No Setup Required

Intel GPUs work automatically via i915/xe drivers and Mesa ANV.

Verify GPU Detection

# Check GPU
ls /dev/dri/
glxinfo | grep "OpenGL renderer"

# For Arc GPUs
vainfo

Pod GPU Access

As with AMD, access is automatic via /dev/dri:

podman run --device=/dev/dri -it ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

Multi-GPU Systems

NVIDIA + Intel (Common Laptop Config)

Bazzite AI OS handles hybrid graphics automatically. For GPU pods:

# Force NVIDIA GPU when running Apptainer
__NV_PRIME_RENDER_OFFLOAD=1 apptainer shell --nv bazzite-ai-pod-nvidia-python_stable.sif

# Or inside pod
export __NV_PRIME_RENDER_OFFLOAD=1
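
To confirm that render offload actually targets the discrete NVIDIA GPU, the standard PRIME offload variables can be combined with glxinfo (these variables are NVIDIA's generic PRIME mechanism, not specific to Bazzite AI OS):

# Should report the NVIDIA GPU rather than the integrated GPU
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"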

Multiple NVIDIA GPUs

# Use specific GPU
CUDA_VISIBLE_DEVICES=0 python train.py
CUDA_VISIBLE_DEVICES=1 python train.py

# Use all GPUs
CUDA_VISIBLE_DEVICES=0,1 python train.py
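
To see which index maps to which card before setting CUDA_VISIBLE_DEVICES:

# List GPUs with their indices and UUIDs
nvidia-smi -L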

Troubleshooting

NVIDIA: "CUDA not available"

  1. Check driver loaded:

    lsmod | grep nvidia
    # Should show nvidia, nvidia_modeset, nvidia_uvm
    
  2. Verify nvidia-smi:

    nvidia-smi
    # If this fails, the driver is not loaded properly
    
  3. Run GPU setup:

    ujust setup-gpu-pods
    
  4. Reboot and retry:

    systemctl reboot
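
Steps 1 and 2 can also be combined into a quick scripted check (a convenience sketch equivalent to the commands above):

# Prints nothing on success, reports what failed otherwise
lsmod | grep -q nvidia || echo "NVIDIA kernel modules are NOT loaded"
nvidia-smi > /dev/null 2>&1 || echo "nvidia-smi failed: driver not working"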
    

NVIDIA: "Driver/library version mismatch"

After a kernel update, the loaded NVIDIA kernel module can get out of sync with the new kernel and userspace libraries until the system is rebooted:

# Check current kernel vs driver kernel
uname -r
modinfo nvidia | grep vermagic

# If mismatched, reboot to load new drivers
systemctl reboot
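
The loaded kernel module and the userspace library can also be compared directly (the /proc entry exists whenever the NVIDIA module is loaded):

# Version of the loaded kernel module
cat /proc/driver/nvidia/version

# Version of the userspace driver library
nvidia-smi --query-gpu=driver_version --format=csv,noheader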

AMD/Intel: "No GPU found in pod"

Ensure /dev/dri is mounted:

# Check host
ls -la /dev/dri/

# Run pod with device access
podman run --device=/dev/dri -it ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

# Inside pod
ls /dev/dri/

Permission Denied on /dev/dri

Add your user to the video and render groups:

# On host
sudo usermod -aG video,render $USER
# Log out and back in
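
With rootless Podman, supplementary groups are not mapped into the container by default; one option is Podman's keep-groups value for --group-add (a hedged workaround, shown with the same image as the other examples):

# Preserve the host's video/render group membership inside the container
podman run --device=/dev/dri --group-add keep-groups -it ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable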

GPU Not Detected After Update

# Check OSTree status
rpm-ostree status

# If issues, rollback
rpm-ostree rollback
systemctl reboot

Low Performance

  1. Check power mode:

    # NVIDIA
    nvidia-smi -q -d POWER
    
    # Set performance mode
    nvidia-smi -pm 1
    
  2. Check thermal throttling:

    nvidia-smi dmon -s pucvmet
    
  3. Verify PCIe bandwidth:

    nvidia-smi -q -d PCIE
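
For continuous monitoring while a workload runs, the values from steps 1-3 can be sampled once per second (standard nvidia-smi query fields):

# Log clocks, temperature, power draw, and utilization every second
nvidia-smi --query-gpu=clocks.sm,temperature.gpu,power.draw,utilization.gpu --format=csv -l 1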
    

Testing GPU Access

Quick Tests

# NVIDIA - on host
nvidia-smi

# All GPUs - OpenGL
glxgears

# Vulkan
vkcube

In Pods

# PyTorch
import torch
print(f"CUDA: {torch.cuda.is_available()}")
print(f"Devices: {torch.cuda.device_count()}")
print(f"Name: {torch.cuda.get_device_name(0)}")

# Quick benchmark
import time
x = torch.randn(10000, 10000, device='cuda')
torch.matmul(x, x)          # warm-up so CUDA initialization is not timed
torch.cuda.synchronize()
start = time.time()
y = torch.matmul(x, x)
torch.cuda.synchronize()
print(f"Matrix multiply: {time.time() - start:.3f}s")

See Also