llamaperf

M3 16GB vs RTX 4090

For running local LLMs · 4 reports across 2 models

            Side A: M3 16GB    Side B: RTX 4090
Vendor      Apple              NVIDIA
VRAM        16GB               24GB
Memory      Unified            Discrete

Tokens per second by model

Model    Sizes           M3 16GB            RTX 4090
Gemma    4 up to 31B     21.0 tok/s (n=1)   149.6 tok/s (n=2)
Qwen3    .6 up to 35B    25.0 tok/s (n=1)   n/a
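Raw speed is only half the picture for local LLMs: whether a model fits in the card's memory at all matters first. A minimal back-of-envelope sketch (the 4-bit quantization default and the flat 1.5GB overhead for KV cache and runtime buffers are assumptions for illustration, not llamaperf data):

```python
def model_mem_gb(params_b: float, bits_per_weight: int = 4,
                 overhead_gb: float = 1.5) -> float:
    # Weight storage at the given quantization (params_b billion params
    # * bits/8 bytes each), plus a flat allowance for KV cache and
    # runtime buffers (assumed figure, not measured).
    return params_b * bits_per_weight / 8 + overhead_gb

def fits(params_b: float, mem_gb: float, **kw) -> bool:
    # Does a params_b-billion-parameter model fit in mem_gb of (V)RAM
    # or unified memory?
    return model_mem_gb(params_b, **kw) <= mem_gb

# A 4-bit 14B model squeezes into the M3's 16GB of unified memory;
# a 31B model at the same quantization would not.
print(fits(14, 16))   # True
print(fits(31, 16))   # False
```

By this estimate the RTX 4090's 24GB only buys a few extra billion parameters at 4-bit over the M3's 16GB; the measured tokens-per-second gap above is the bigger practical difference.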
