A100 40GB vs H100 80GB
For running local LLMs · 1 report across 1 model
Tokens per second by model
| Model | A100 40GB (tok/s) | H100 80GB (tok/s) |
|---|---|---|
| Qwen3 (up to 35B) | — | 45.0 (n=1) |