llamaperf

M2 Max 96GB vs RTX 4090

For running local LLMs · 5 reports across 2 models

Side A: M2 Max 96GB
  Vendor: Apple
  VRAM:   96GB
  Memory: Unified

Side B: RTX 4090
  Vendor: NVIDIA
  VRAM:   24GB
  Memory: Discrete

Tokens per second by model

Model                  M2 Max 96GB        RTX 4090
Qwen3.6 (up to 35B)    28.0 tok/s (n=2)   25.0 tok/s (n=1)
Gemma 4 (up to 31B)    149.6 tok/s (n=2)  —
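The per-model figures above appear to be mean tokens-per-second across user-submitted reports, with n giving the report count. A minimal sketch of that aggregation, using hypothetical report values (illustrative only, not the site's raw data):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical reports: (model, device, tokens_per_second).
# Values are made up to illustrate the aggregation, chosen so the
# means match the table above.
reports = [
    ("Qwen3.6", "M2 Max 96GB", 27.5),
    ("Qwen3.6", "M2 Max 96GB", 28.5),
    ("Qwen3.6", "RTX 4090",    25.0),
    ("Gemma 4", "M2 Max 96GB", 150.0),
    ("Gemma 4", "M2 Max 96GB", 149.2),
]

# Group raw reports by (model, device), then summarize each group
# as a mean with its sample count.
groups = defaultdict(list)
for model, device, tps in reports:
    groups[(model, device)].append(tps)

for (model, device), values in sorted(groups.items()):
    print(f"{model} on {device}: {mean(values):.1f} tok/s (n={len(values)})")
```

Averaging across reports smooths over differences in quantization, context length, and runtime settings, which is why the per-cell sample counts matter when comparing devices.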
