
CoreWeave GPU Cluster Troubleshooting Guide

v20260423
coreweave-common-errors
A comprehensive guide to diagnosing and resolving common CoreWeave infrastructure errors. It covers GPU scheduling issues, Pods stuck in Pending, CUDA out-of-memory errors, NCCL timeouts, and Kubernetes resource allocation within AI/ML deployment pipelines.

CoreWeave Common Errors

Error Reference

1. Pod Stuck Pending -- No GPU Available

kubectl describe pod <pod-name> | grep -A5 Events
# "0/N nodes are available: insufficient nvidia.com/gpu"

Fix: Check whether any nodes of the requested GPU class exist: kubectl get nodes -l gpu.nvidia.com/class=A100_PCIE_80GB. If none are available, try a different GPU class or region.
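
A matching node can also exist but have all of its GPUs already allocated. A minimal per-node check of allocatable GPU counts (the label value is just the example class from above):

kubectl get nodes -l gpu.nvidia.com/class=A100_PCIE_80GB \
  -o custom-columns='NODE:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu'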

2. CUDA Out of Memory

torch.cuda.OutOfMemoryError: CUDA out of memory

Fix: Reduce batch size, enable gradient checkpointing, or use a larger GPU (A100-80GB instead of 40GB).
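
If resizing the GPU is not an option, one mitigation sketch, assuming the job runs as a Deployment (the name below is a placeholder) on PyTorch 2.x, is to enable the allocator's expandable segments. This reduces fragmentation-driven OOMs but does not add memory:

# Placeholder Deployment name; expandable segments only help with fragmentation
kubectl set env deployment/<your-deployment> PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True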

3. Image Pull BackOff

Fix: Verify the image name and tag exist, then create an imagePullSecret for private registries:

kubectl create secret docker-registry regcred \
  --docker-server=ghcr.io \
  --docker-username=$GH_USER \
  --docker-password=$GH_TOKEN
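
The secret must also be referenced by the Pod. Rather than editing every manifest, one option is to attach it to the namespace's default ServiceAccount so that Pods in the namespace inherit it:

# Pods using the default ServiceAccount will now pull with regcred
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'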

4. NCCL Timeout (Multi-GPU)

NCCL error: unhandled system error

Fix: Prefer scheduling all GPUs onto a single node so communication stays on NVLink. For multi-node jobs, use InfiniBand-connected nodes.
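
Before rescheduling, it helps to see which transport or rank NCCL is failing on. A sketch, assuming a Deployment-managed training job (name is a placeholder):

# INIT,NET narrows the logs to bootstrap and network-transport messages
kubectl set env deployment/<your-deployment> NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=INIT,NET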

5. PVC Not Mounting

Fix: Check storage class availability: kubectl get sc. Use CoreWeave storage classes like shared-hdd-ord1 or shared-ssd-ord1.
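
To confirm whether the claim actually bound, check its phase and events (the claim name is a placeholder):

kubectl get pvc
kubectl describe pvc <pvc-name> | grep -A5 Events

A claim stuck in Pending usually names the missing or misspelled storage class in its events.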

6. Node Affinity Mismatch

Fix: List valid GPU class labels:

kubectl get nodes -o json | jq -r '.items[].metadata.labels["gpu.nvidia.com/class"]' | sort -u
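
Then compare those labels against what the Pod is actually requesting (the pod name is a placeholder):

kubectl get pod <pod-name> -o jsonpath='{.spec.nodeSelector}{"\n"}{.spec.affinity}'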

7. Service Not Reachable

Fix: Check the Service and its Endpoints. An empty Endpoints list means the Service selector does not match any Pod labels.

kubectl get svc,endpoints <service-name>
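
If Endpoints are populated, test reachability from inside the cluster with a throwaway Pod (curlimages/curl is a public image; the service name and port are placeholders):

kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -sv http://<service-name>:<port>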

Resources

Next Steps

For diagnostics, see coreweave-debug-bundle.
