Canvas Cloud AI

GPU

Difficulty: intermediate
Category: hardware

Definition

Graphics Processing Unit: specialized hardware designed for parallel processing that accelerates AI training and graphics rendering. It is like having thousands of workers doing calculations simultaneously instead of one at a time.
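As a minimal sketch of that parallelism (assuming Python with PyTorch and an NVIDIA GPU, neither of which the definition prescribes), a single operation on a large tensor fans out across thousands of GPU cores instead of looping element by element:

import torch

# Pick the GPU if one is available; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# One million elements, but a single call: the GPU applies the
# operation to many elements in parallel rather than one at a time.
x = torch.rand(1_000_000, device=device)
y = torch.rand(1_000_000, device=device)
z = x * 2.0 + y  # runs as a parallel kernel when device is "cuda"

print(f"Computed {z.numel():,} results on {device}")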

Real-World Example

Training large AI models on GPUs can be as much as 100x faster than using regular CPUs, because a GPU processes many calculations at once; the exact speedup depends on the model and the hardware.
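A rough way to see the gap yourself is to time the same large matrix multiplication on both devices. This is only a sketch, assuming PyTorch and a CUDA-capable GPU; the measured ratio will vary with your hardware and matrix size:

import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one n x n matrix multiplication on the given device."""
    a = torch.rand(n, n, device=device)
    b = torch.rand(n, n, device=device)
    _ = a @ b  # warm-up pass so one-time setup cost is not counted
    if device == "cuda":
        torch.cuda.synchronize()  # wait for pending GPU work to finish
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the timed kernel to finish
    return time.perf_counter() - start

cpu_time = time_matmul("cpu")
if torch.cuda.is_available():
    gpu_time = time_matmul("cuda")
    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s  speedup: {cpu_time / gpu_time:.0f}x")
else:
    print(f"CPU: {cpu_time:.3f}s (no GPU available)")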

Cloud Provider Equivalencies

All major clouds provide GPUs primarily through virtual machines: you choose a GPU-capable VM/shape (or attach a GPU) and run CUDA/ROCm-enabled workloads such as AI training, inference, rendering, and HPC. Naming differs (instance types/VM series/shapes), but the concept is the same: pay for GPU time plus supporting CPU, memory, and storage.

AWS: Amazon EC2 (GPU instances, e.g., P5/P4d/G5/G6)
Azure: Azure Virtual Machines (GPU VMs, e.g., ND/NC/NV series)
GCP: Compute Engine (GPU accelerators, e.g., NVIDIA L4/T4/A100/H100 attached to VMs)
OCI: OCI Compute (GPU shapes, e.g., NVIDIA A10/A100/H100, depending on region)
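To illustrate the "pay for GPU time through a VM" model described above, here is a hedged sketch using AWS's boto3 SDK to request a single GPU instance. The region and instance type are assumptions, the AMI ID is a placeholder rather than a real image, and your account needs quota for GPU instance types; the other providers' SDKs follow the same pattern with their own type/shape names.

import boto3

# Region and instance type are assumptions; check availability and pricing
# in your own account before running this.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: substitute a GPU-ready AMI
    InstanceType="g5.xlarge",         # one NVIDIA A10G GPU plus supporting CPU/memory
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched GPU instance {instance_id}; terminate it when done to stop billing.")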
