llamaperf

H100 80GB vs M5 Max 64GB

For running local LLMs · 5 reports across 2 models

            H100 80GB (Side A)    M5 Max 64GB (Side B)
Vendor      NVIDIA                Apple
VRAM        80GB                  64GB
Memory      Discrete              Unified

Tokens per second by model

(tok/s; n = number of reports)

Model                  H100 80GB     M5 Max 64GB
Qwen3.6 (up to 35B)    45.0 (n=1)    63.0 (n=3)
Gemma 4 (up to 31B)    97.0 (n=1)    —
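Figures like these are conventionally computed as generated tokens divided by wall-clock decode time, with multi-report cells averaged. A minimal sketch of that arithmetic — the function names and the assumption that an "n=3" cell is a mean of 3 reports are illustrative, not llamaperf's actual pipeline:

```python
def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    # Throughput = generated tokens / wall-clock generation time.
    return n_tokens / elapsed_s

def mean_tps(reports: list[float]) -> float:
    # Assumption: a cell like "63.0 (n=3)" is the mean of 3 reports.
    return sum(reports) / len(reports)

print(tokens_per_second(256, 4.0))   # → 64.0
print(mean_tps([60.0, 63.0, 66.0]))  # → 63.0
```

Note that single-report cells (n=1) carry no variance information, so small differences between the two columns may not be meaningful.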