llamaperf

H100 80GB vs M2 Max 96GB

For running local LLMs · 4 reports across 1 model

Side A: H100 80GB
  Vendor: NVIDIA
  VRAM: 80GB
  Memory: Discrete

Side B: M2 Max 96GB
  Vendor: Apple
  VRAM: 96GB
  Memory: Unified
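
The Memory row is the key architectural difference: the H100's 80GB is discrete VRAM dedicated to the GPU, while the M2 Max shares a single 96GB pool between CPU and GPU, so nearly all of it can hold model weights. A back-of-envelope fit check, as a minimal sketch (the ~4.5 bits/weight quantization figure and 10% overhead allowance are assumptions; KV cache and activations are ignored):

```python
# Weights-only fit check. The ~4.5 bits/weight figure (typical of
# Q4_K_M-style GGUF quantization) and the 10% overhead allowance are
# assumptions; KV cache and activations are ignored.
def fits(params_billions: float, bits_per_weight: float, memory_gb: float) -> bool:
    weights_gb = params_billions * bits_per_weight / 8  # 1B params at 8 bits = 1 GB
    return weights_gb * 1.10 <= memory_gb               # leave ~10% headroom

# A 32B model at ~4.5 bits/weight needs roughly 18 GB of weights,
# well inside both 80 GB of discrete VRAM and 96 GB of unified memory.
print(fits(32, 4.5, 80), fits(32, 4.5, 96))  # True True
```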

Tokens per second by model

Model                     H100 80GB     M2 Max 96GB
Qwen3 (0.6B up to 35B)    45.0 (n=1)    28.0 (n=3)
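
Numbers like these can be reproduced with a timed generation run. A minimal sketch using llama-cpp-python follows; the model filename is a placeholder, and this is not necessarily llamaperf's actual measurement method:

```python
# Time a single generation and report tokens per second.
import time
from llama_cpp import Llama

llm = Llama(model_path="qwen3-32b-q4_k_m.gguf", n_gpu_layers=-1)  # offload all layers

start = time.perf_counter()
out = llm("Explain unified memory in one paragraph.", max_tokens=128)
elapsed = time.perf_counter() - start

tokens = out["usage"]["completion_tokens"]
# elapsed includes prompt processing, so this slightly understates pure decode speed
print(f"{tokens / elapsed:.1f} tok/s")
```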
