llamaperf

Best GPUs for running local LLMs

Picking a GPU for local LLM inference comes down to VRAM (does the model fit?), memory bandwidth (how fast it generates), and software support. The list below is ranked by how many community reports each card has on llamaperf — a rough proxy for how heavily it gets used in practice — and surfaces the fastest tokens-per-second observed on each.

Ranked by community reports

Compiled from 73 community reports on llamaperf.

| # | GPU | Vendor | VRAM | Reports | Fastest t/s |
|---:|---|---|---:|---:|---:|
| 1 | RX 7900 XTX | AMD | 24GB | 6 | 58.0 |
| 2 | RTX 5090 | NVIDIA | 32GB | 4 | 106.5 |
| 3 | RTX 4060 Ti 16GB | NVIDIA | 16GB | 4 | 45.0 |
| 4 | RTX A6000 48GB | NVIDIA | 48GB | 4 | 16.9 |
| 5 | AMD Threadripper 256GB | AMD | 256GB | 4 | 8.8 |
| 6 | RTX 4090 | NVIDIA | 24GB | 3 | 149.6 |
| 7 | RTX 3090 | NVIDIA | 24GB | 3 | 66.0 |
| 8 | RTX 3060 12GB | NVIDIA | 12GB | 3 | 60.0 |
| 9 | Instinct MI300X 192GB | AMD | 192GB | 2 | 60.0 |
| 10 | RTX 4070 | NVIDIA | 12GB | 2 | 55.0 |
| 11 | RX 7900 XT | AMD | 20GB | 2 | 38.0 |
| 12 | Instinct MI250X 128GB | AMD | 128GB | 2 | 35.0 |
| 13 | Intel Arc B580 12GB | Intel | 12GB | 2 | 30.0 |
| 14 | RX 7800 XT 16GB | AMD | 16GB | 2 | 27.0 |
| 15 | RX 7600 8GB | AMD | 8GB | 2 | 25.0 |
| 16 | M5 Max 128GB | Apple | 128GB | 2 | 7.5 |
| 17 | H100 80GB | NVIDIA | 80GB | 1 | 45.0 |
| 18 | RTX 5060 Ti 16GB | NVIDIA | 16GB | 1 | 45.0 |
| 19 | M5 Max 64GB | Apple | 64GB | 1 | 32.0 |
| 20 | M4 Max 64GB | Apple | 64GB | 1 | 23.0 |
| 21 | M4 16GB | Apple | 16GB | 1 | 23.0 |
| 22 | M4 Max 36GB | Apple | 36GB | 1 | 21.0 |
| 23 | M3 16GB | Apple | 16GB | 1 | 21.0 |
| 24 | M1 Pro 16GB | Apple | 16GB | 1 | 20.0 |
| 25 | M4 Pro 24GB | Apple | 24GB | 1 | 19.0 |
| 26 | M3 Max 48GB | Apple | 48GB | 1 | 18.0 |
| 27 | M2 16GB | Apple | 16GB | 1 | 18.0 |
| 28 | M3 8GB | Apple | 8GB | 1 | 18.0 |
| 29 | M1 8GB | Apple | 8GB | 1 | 17.5 |
| 30 | M3 Max 36GB | Apple | 36GB | 1 | 16.0 |
| 31 | M2 Max 32GB | Apple | 32GB | 1 | 16.0 |
| 32 | M2 8GB | Apple | 8GB | 1 | 16.0 |
| 33 | M2 Ultra 64GB | Apple | 64GB | 1 | 14.0 |
| 34 | M3 Pro 18GB | Apple | 18GB | 1 | 14.0 |
| 35 | M1 16GB | Apple | 16GB | 1 | 14.0 |
| 36 | M1 Ultra 64GB | Apple | 64GB | 1 | 12.0 |
| 37 | M2 Pro 16GB | Apple | 16GB | 1 | 12.0 |
| 38 | M1 Max 32GB | Apple | 32GB | 1 | 10.0 |
| 39 | AMD MI50 32GB | AMD | 32GB | 1 | 9.7 |
| 40 | M3 Max 128GB | Apple | 128GB | 1 | 5.5 |
| 41 | M2 Ultra 192GB | Apple | 192GB | 1 | |
| 42 | DGX Spark | NVIDIA | 128GB | 1 | |

What to look for

VRAM is the gating constraint

Whether a model runs at all is decided by memory. A Q4_K_M quant of a 7B model needs ~5GB; a 13B needs ~8GB; a 30B needs ~20GB; a 70B needs ~40GB — plus headroom for context and KV cache. If the weights don't fit, generation either crawls (CPU offload) or fails outright.
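That rule of thumb fits in a few lines of Python. The constants below are assumptions, not measurements: roughly 0.6 bytes per parameter (about 4.8 bits per weight for Q4_K_M) plus a flat allowance for context and KV cache, so treat the output as a sizing sketch rather than an exact footprint.

```python
# Back-of-envelope VRAM estimate for a Q4_K_M-quantized model.
# ASSUMPTIONS: ~0.6 bytes/parameter and a flat ~1 GB allowance for
# context + KV cache; real usage varies with context length and engine.
def estimate_vram_gb(params_billion: float,
                     bytes_per_param: float = 0.6,
                     overhead_gb: float = 1.0) -> float:
    return params_billion * bytes_per_param + overhead_gb

for size_b in (7, 13, 30, 70):
    print(f"{size_b}B -> ~{estimate_vram_gb(size_b):.0f} GB")
# 7B -> ~5 GB, 13B -> ~9 GB, 30B -> ~19 GB, 70B -> ~43 GB
```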

Bandwidth determines tokens-per-second

Once weights fit, throughput is dominated by memory bandwidth, not raw FLOPs. An RTX 3090 (936 GB/s) and an RTX 4090 (1008 GB/s) are within ~10% of each other on inference-bound workloads despite the 4090's much larger compute budget. M-series Macs trade off here: massive memory pool, but Pro-tier bandwidth is closer to a midrange discrete card.
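The arithmetic behind that claim: generating each token streams roughly the full set of weights through the GPU once, so bandwidth divided by model size gives a hard ceiling on decode speed. A sketch, using the vendors' theoretical peak bandwidth figures and an assumed 4.7 GB for a 7B model at Q4_K_M:

```python
# Bandwidth-bound ceiling on decode speed: each token reads roughly all
# weights once, so tokens/s <= bandwidth / model size. Real results land
# below this ceiling (KV-cache reads, kernel overhead, etc.).
def decode_ceiling_tps(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 4.7  # assumed size of a 7B model at Q4_K_M
for name, bw in [("RTX 3090", 936), ("RTX 4090", 1008), ("M4 Pro", 273)]:
    print(f"{name}: <= {decode_ceiling_tps(bw, MODEL_GB):.0f} tok/s")
# RTX 3090 <= ~199, RTX 4090 <= ~214 (about 8% apart, tracking bandwidth)
```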

Software support gates which engines you can use

NVIDIA has CUDA kernels in every major engine (llama.cpp, vLLM, exllamav2, TensorRT-LLM). AMD support has improved sharply via ROCm but still trails on engine coverage. Apple Silicon is best-in-class for MLX and llama.cpp Metal but unsupported by vLLM. Match the engine you want to use to the hardware ecosystem.
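In practice, wiring the engine to the hardware is often a one-line setting. A minimal sketch with the llama-cpp-python bindings (the model path is a placeholder): passing n_gpu_layers=-1 offloads every layer to whichever GPU backend the library was built against (CUDA, ROCm, or Metal), which is what makes the VRAM and bandwidth numbers above matter.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path; n_gpu_layers=-1 offloads all layers to the GPU
# backend the wheel was compiled for (CUDA, ROCm, or Metal).
llm = Llama(model_path="models/llama-7b.Q4_K_M.gguf", n_gpu_layers=-1)

out = llm("List three uses for a local LLM:", max_tokens=64)
print(out["choices"][0]["text"])
```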

Frequently asked

What is the best GPU for running local LLMs?

There is no single answer — it depends on which model size you want to run. For 7B–13B models, an RTX 3060 12GB or RTX 4060 Ti 16GB is enough. For 30B-class models, an RTX 3090 or 4090 (24GB) is the sweet spot. For 70B-class, you need 40GB+ of VRAM (RTX A6000, dual 3090s, or an M-series Mac with 64GB+ unified memory).

Is more VRAM or more compute better for local LLMs?

VRAM, by a wide margin. Inference throughput is memory-bandwidth bound, not compute bound. A card with enough VRAM to fit your model and decent bandwidth will outperform a faster GPU that has to offload weights to system memory.

Do I need an NVIDIA GPU for local LLMs?

No. AMD GPUs work via ROCm with most major engines, and Apple Silicon Macs run llama.cpp Metal and MLX natively. NVIDIA still has the broadest engine support and best out-of-the-box experience, but it's no longer the only option.

How is this list ranked?

By the number of community submissions on llamaperf for each GPU. More reports indicate a GPU is widely used in practice for local LLM inference. The fastest tokens-per-second observed on each is shown alongside as a quality signal.

How we rank

Hardware is sorted by the number of community submissions on llamaperf — a proxy for how widely each card is used in practice for local LLM inference. Within that, we surface the fastest tokens-per-second observed on each as a quality signal. Submissions come primarily from r/LocalLLaMA discussions and direct user uploads. Nothing here is sponsored or affiliate-driven.
