Best local LLMs by hardware tier
Rankings only make sense once you fix the hardware. Pick a tier below and the leaderboard re-sorts to show the models the community actually runs there, weighted by report count, fastest observed tokens per second, and recency.
Apple 128 GB+ & Strix Halo
Large unified-memory rigs. Big MoE models, very long contexts. Ranked from 7 reports.
| # | Model family | Best variant tested | Reports | Fastest t/s |
|---|---|---|---|---|
| 1 | Gemma 4 (Google DeepMind) | 31B Instruct · 31B · GGUF · on AMD Threadripper 256GB | 5 | 8.8 |
| 2 | Qwen3.6 (Alibaba) | Q8 · on M5 Max 128GB | 2 | 5.5 |
How we rank
A single global "best models" list doesn't exist: what runs well on a 5090 is often unrunnable on a 4060, and a 7B that screams on an M3 Max is usually a poor pick on an H100. So we fix the hardware first, then rank only the families that actually have community reports on that tier. The score blends popularity (log-scaled report count), speed (fastest observed tokens per second, normalized within the bucket), recency (exponential decay with a 90-day half-life), and a small bonus for rows where the variant, quant, and GPU are all cleanly identified. Click into a family for the full breakdown of records.
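To make the blend concrete, here is a minimal sketch of that score. Only the ingredients come from the description above; the field names, the 0.5/0.3/0.2 weights, and the 0.1 clean-metadata bonus are assumptions for illustration, since the actual coefficients aren't published here.

```python
import math
from dataclasses import dataclass

@dataclass
class FamilyStats:
    reports: int            # community report count in this hardware bucket
    fastest_tps: float      # fastest observed tokens/sec in this bucket
    days_since_last: float  # days since the most recent report
    clean_metadata: bool    # variant + quant + GPU all cleanly identified

def score(f: FamilyStats, bucket_max_tps: float) -> float:
    """Blend popularity, speed, recency, and a clean-metadata bonus.
    Weights here are hypothetical, not the leaderboard's real ones."""
    popularity = math.log1p(f.reports)           # log-scaled report count
    speed = f.fastest_tps / bucket_max_tps       # normalized within the bucket
    recency = 0.5 ** (f.days_since_last / 90.0)  # 90-day half-life decay
    bonus = 0.1 if f.clean_metadata else 0.0     # small bias for clean rows
    return 0.5 * popularity + 0.3 * speed + 0.2 * recency + bonus

# Applied to the two rows above (report ages are made up for the example):
gemma = FamilyStats(reports=5, fastest_tps=8.8, days_since_last=10, clean_metadata=True)
qwen = FamilyStats(reports=2, fastest_tps=5.5, days_since_last=45, clean_metadata=True)
bucket_max = max(gemma.fastest_tps, qwen.fastest_tps)
print(score(gemma, bucket_max), score(qwen, bucket_max))  # Gemma ranks first
```

Note that because popularity is log-scaled, the 5-report family doesn't simply drown out the 2-report one; speed and recency can still flip the order when report counts are close.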