H100 80GB vs Instinct MI250X 128GB
For running local LLMs · 1 report across 1 model
Tokens per second by model
| Model | H100 80GB | Instinct MI250X 128GB |
|---|---|---|
| Qwen3 (up to 35B) | 45.0 (n=1) | — |
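The tokens-per-second figure above is conventionally computed as generated tokens divided by wall-clock generation time. A minimal sketch of that calculation; the `timed_generate` helper and its `generate` callable are illustrative assumptions, not the benchmark tool's actual code:

```python
import time

def tokens_per_second(token_count: int, elapsed_seconds: float) -> float:
    """Throughput: generated tokens divided by wall-clock time."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return token_count / elapsed_seconds

def timed_generate(generate, prompt):
    """Time a generation call; `generate` is assumed to return a token list."""
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return tokens, tokens_per_second(len(tokens), elapsed)

# e.g. 90 tokens generated in 2.0 s
print(tokens_per_second(90, 2.0))  # → 45.0
```

The `n=1` annotation in the table indicates the value comes from a single report, so it reflects one configuration (quantization, batch size, runtime) rather than an averaged benchmark.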