H100 80GB vs M2 Max 96GB
For running local LLMs · 4 reports across 1 model
Tokens per second by model
| Model | H100 80GB (tok/s) | M2 Max 96GB (tok/s) |
|---|---|---|
| Qwen3 (up to 35B) | 45.0 (n=1) | 28.0 (n=3) |