GPU Compatibility¶
Bazzite AI OS supports modern NVIDIA, AMD, and Intel GPUs through open-source drivers. This guide covers compatibility, setup, and troubleshooting.
Docker/Podman Users
For GPU setup outside Bazzite AI OS, see Docker/Podman Deployment.
Supported GPUs¶
| Vendor | Supported GPUs | Driver | Features |
|---|---|---|---|
| NVIDIA | RTX 20+ (Turing and newer) | Open kernel modules 580.95 | CUDA, Ray Tracing, DLSS, NVENC |
| AMD | RX 400+ (GCN 4 and newer) | AMDGPU + Mesa RADV | Vulkan RT, FSR, VCN encoding |
| Intel | Gen 7+ / Arc A/B-series | i915 / xe + Mesa ANV | XeSS, AV1 encoding, Quick Sync |
Graphics Stack¶
| Component | Version |
|---|---|
| Mesa | 25.2.4 |
| Vulkan | 1.4 |
| OpenGL | 4.6 |
| Kernel | 6.16.4+ |
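To confirm these versions on a running system, standard query tools can be used. This is a minimal sketch; glxinfo and vulkaninfo come from the mesa-demos and vulkan-tools packages, which are assumed to be installed on the host.
# Mesa / OpenGL version
glxinfo -B | grep "OpenGL version"
# Vulkan API version
vulkaninfo --summary | grep -i apiVersion
# Running kernel
uname -r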
NVIDIA Setup¶
Supported Cards¶
- RTX 40-series (Ada Lovelace)
- RTX 30-series (Ampere)
- RTX 20-series (Turing)
Older cards (GTX 10-series and earlier) are not supported because the NVIDIA open kernel modules require Turing or newer.
Verify GPU Detection¶
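A quick host-side check, assuming the bundled open kernel modules are in use:
# Confirm the kernel modules are loaded
lsmod | grep nvidia
# Query the GPU through the driver
nvidia-smi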
Pod GPU Access¶
First-time setup required:
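The exact setup command ships with the image. As an illustration only, on hosts that use the NVIDIA Container Toolkit to expose GPUs to Podman, the usual first-time step is generating a CDI specification; the nvidia-ctk tool and the output path below are assumptions about the host configuration.
# Generate a CDI spec so container runtimes can expose the GPU (assumed toolkit-based setup)
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# List the devices the spec exposes
nvidia-ctk cdi list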
Verify Pod GPU Access¶
# Run GPU pod with Apptainer
apptainer shell --nv bazzite-ai-pod-nvidia-python_stable.sif
# Inside pod - verify CUDA
nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"
AMD Setup¶
Supported Cards¶
- RX 7000-series (RDNA 3)
- RX 6000-series (RDNA 2)
- RX 5000-series (RDNA)
- Vega (GCN 5)
- RX 500/400-series (GCN 4)
No Setup Required¶
AMD GPUs work automatically via the AMDGPU driver and Mesa RADV.
Verify GPU Detection¶
# On host
ls /dev/dri/
# Should show: card0, renderD128
# Check GPU info
glxinfo | grep "OpenGL renderer"
Pod GPU Access¶
AMD GPUs work automatically via /dev/dri passthrough:
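A minimal sketch with Podman; the image name is reused from the troubleshooting section below and stands in for whichever pod you run.
# Pass the DRI render nodes into the pod
podman run --device=/dev/dri -it ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
# Inside the pod, the render node should be visible
ls /dev/dri/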
Intel Setup¶
Supported GPUs¶
- Arc B-series (Battlemage)
- Arc A-series (Alchemist)
- Iris Xe (Gen 12)
- UHD Graphics (Gen 7+)
No Setup Required¶
Intel GPUs work automatically via i915/xe drivers and Mesa ANV.
Verify GPU Detection¶
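The same host-side checks used for AMD apply here:
# On host
ls /dev/dri/
# Should show: card0, renderD128
# Check GPU info
glxinfo | grep "OpenGL renderer"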
Pod GPU Access¶
Same as AMD - automatic via /dev/dri:
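A minimal sketch, identical to the AMD case; the image name is reused from the troubleshooting section as a stand-in for the pod you actually run.
# Pass the DRI render nodes into the pod (same flag as for AMD)
podman run --device=/dev/dri -it ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable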
Multi-GPU Systems¶
NVIDIA + Intel (Common Laptop Config)¶
Bazzite AI OS handles hybrid graphics automatically. For GPU pods:
# Force NVIDIA GPU when running Apptainer
__NV_PRIME_RENDER_OFFLOAD=1 apptainer shell --nv bazzite-ai-pod-nvidia-python_stable.sif
# Or inside pod
export __NV_PRIME_RENDER_OFFLOAD=1
Multiple NVIDIA GPUs¶
# Use specific GPU
CUDA_VISIBLE_DEVICES=0 python train.py
CUDA_VISIBLE_DEVICES=1 python train.py
# Use all GPUs
CUDA_VISIBLE_DEVICES=0,1 python train.py
Troubleshooting¶
NVIDIA: "CUDA not available"¶
- Check the driver is loaded (commands for each step are sketched below)
- Verify nvidia-smi responds
- Run the first-time GPU setup again
- Reboot and retry
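A sketch of these steps as host commands; the first-time setup command itself is image-specific and is covered in the NVIDIA Setup section above.
# 1. Check the driver modules are loaded
lsmod | grep nvidia
# 2. Verify the driver responds
nvidia-smi
# 3. Re-run the first-time GPU setup for pods, then reboot and retry
systemctl reboot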
NVIDIA: "Driver/library version mismatch"¶
After kernel updates, NVIDIA modules may need rebuilding:
# Check current kernel vs driver kernel
uname -r
modinfo nvidia | grep vermagic
# If mismatched, reboot to load new drivers
systemctl reboot
AMD/Intel: "No GPU found in pod"¶
Ensure /dev/dri is mounted:
# Check host
ls -la /dev/dri/
# Run pod with device access
podman run --device=/dev/dri -it ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
# Inside pod
ls /dev/dri/
Permission Denied on /dev/dri¶
Add user to video and render groups:
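A typical way to do this; log out and back in afterwards so the new group membership applies.
# Add the current user to the video and render groups
sudo usermod -aG video,render $USER
# Confirm membership (takes effect after re-login)
id $USER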
GPU Not Detected After Update¶
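If a GPU stops being detected after a system update, the usual cause is that the newly staged image has not been booted yet, so userspace and the kernel driver no longer match. A minimal check, assuming an image-based (rpm-ostree/bootc) deployment:
# Check whether a newer deployment is staged but not yet booted
rpm-ostree status
# Reboot into the new deployment so kernel and driver match
systemctl reboot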
Low Performance¶
- Check the power mode (commands for each check are sketched below)
- Check for thermal throttling
- Verify PCIe bandwidth
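These checks can be sketched as follows; the nvidia-smi queries apply to NVIDIA cards, and powerprofilesctl and sensors are assumed to be available on the host.
# Power mode / profile
powerprofilesctl get
nvidia-smi -q -d POWER
# Thermal throttling
sensors
nvidia-smi -q -d TEMPERATURE
# PCIe link speed and width
nvidia-smi --query-gpu=pcie.link.gen.current,pcie.link.width.current --format=csv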
Testing GPU Access¶
Quick Tests¶
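Quick host-level checks; vulkaninfo and glxinfo are assumed to be installed, and nvidia-smi applies to NVIDIA only.
# NVIDIA: driver and GPU visible
nvidia-smi
# Any vendor: Vulkan device summary
vulkaninfo --summary
# Any vendor: active OpenGL renderer
glxinfo -B | grep "OpenGL renderer"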
In Pods¶
# PyTorch
import torch
print(f"CUDA: {torch.cuda.is_available()}")
print(f"Devices: {torch.cuda.device_count()}")
print(f"Name: {torch.cuda.get_device_name(0)}")
# Quick benchmark
import time
x = torch.randn(10000, 10000, device='cuda')
start = time.time()
y = torch.matmul(x, x)
torch.cuda.synchronize()
print(f"Matrix multiply: {time.time() - start:.3f}s")
See Also¶
- System Requirements - Hardware requirements
- nvidia-python Pod - ML/AI development
- Deployment Guide - Run pods on other platforms