llamaperf

A100 40GB vs RTX 3090

For running local LLMs · 3 reports across 2 models

|        | Side A: A100 40GB | Side B: RTX 3090 |
|--------|-------------------|------------------|
| Vendor | nvidia            | nvidia           |
| VRAM   | 40 GB             | 24 GB            |
| Memory | Discrete          | Discrete         |

Tokens per second by model

| Model             | A100 40GB   | RTX 3090 |
|-------------------|-------------|----------|
| Qwen3 (up to 35B) | 66.0 (n=2)  | n/a      |
| Qwen2.5           | 28.0 (n=1)  | n/a      |
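Each cell in the table above is a mean over n user-submitted reports for that model/GPU pair. A minimal sketch of that aggregation step, assuming a simple list-of-tuples report format (the sample values and the `aggregate` helper are hypothetical, not taken from the site's raw data):

```python
from collections import defaultdict

# Hypothetical reports: (model, gpu, tokens per second).
# Two Qwen3 reports averaging to the 66.0 shown above.
reports = [
    ("Qwen3", "A100 40GB", 64.0),
    ("Qwen3", "A100 40GB", 68.0),
    ("Qwen2.5", "A100 40GB", 28.0),
]

def aggregate(reports):
    """Group reports by (model, gpu); return mean tok/s and sample count n."""
    grouped = defaultdict(list)
    for model, gpu, tps in reports:
        grouped[(model, gpu)].append(tps)
    return {
        key: (sum(vals) / len(vals), len(vals))  # (mean, n)
        for key, vals in grouped.items()
    }

print(aggregate(reports))
# {('Qwen3', 'A100 40GB'): (66.0, 2), ('Qwen2.5', 'A100 40GB'): (28.0, 1)}
```

Reporting n alongside the mean matters here: with n=1 or n=2, a single outlier run (different quantization, context length, or batch size) can move the headline number substantially.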
