Ampere (microarchitecture)

GPU microarchitecture by Nvidia

Ampere is the codename for a graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to both the Volta and Turing architectures. It was officially announced on May 14, 2020, and is named after French mathematician and physicist André-Marie Ampère.[1][2]

Ampere
Launched: May 14, 2020
Designed by: Nvidia
Manufactured by: TSMC (professional), Samsung (consumer)
Fabrication process: TSMC N7 (professional); Samsung 8N (consumer)
Codename(s): GA10x
Product Series
  • Desktop: GeForce 30 series
  • Professional/workstation: RTX A series
  • Server/datacenter: A100
Specifications
  • L1 cache: 192 KB per SM (professional); 128 KB per SM (consumer)
  • L2 cache: 2 MB to 6 MB
  • PCIe support: PCIe 4.0
Supported Graphics APIs
  • DirectX: DirectX 12 Ultimate (Feature Level 12_2)
  • Direct3D: Direct3D 12.0
  • Shader Model: 6.8
  • OpenCL: 3.0
  • OpenGL: 4.6
  • CUDA: Compute Capability 8.6
  • Vulkan: 1.3
Media Engine
  • Color bit-depth: 8-bit, 10-bit
  • Encoder(s) supported: NVENC
History
  • Predecessor: Turing (consumer); Volta (professional)
  • Successor: Ada Lovelace (consumer); Hopper (datacenter)
Support status: Supported

Nvidia announced the Ampere architecture GeForce 30 series consumer GPUs at a GeForce Special Event on September 1, 2020.[3][4] Nvidia announced the A100 80 GB GPU at SC20 on November 16, 2020.[5] Mobile RTX graphics cards and the RTX 3060 based on the Ampere architecture were revealed on January 12, 2021.[6]

Nvidia announced Ampere's successor, Hopper, at GTC 2022; the architecture after that, referred to as "Ampere Next Next" at GPU Technology Conference 2021 and later named Blackwell, was announced there for a 2024 release.

Details


Architectural improvements of the Ampere architecture include the following:

  • CUDA Compute Capability 8.0 for A100 and 8.6 for the GeForce 30 series[7] (see the query sketch after this list)
  • TSMC's 7 nm FinFET process for A100
  • Custom version of Samsung's 8 nm process (8N) for the GeForce 30 series[8]
  • Third-generation Tensor Cores with FP16, bfloat16, TensorFloat-32 (TF32) and FP64 support and sparsity acceleration.[9] At 256 FP16 FMA operations per clock, each Tensor Core delivers 4× the processing power of the previous Tensor Core generation on GA100 (2× on GA10x), while the Tensor Core count is reduced to one per SM.
  • Second-generation ray tracing cores; concurrent ray tracing, shading, and compute for the GeForce 30 series
  • High Bandwidth Memory 2 (HBM2) on A100 40 GB & A100 80 GB
  • GDDR6X memory for GeForce RTX 3090, RTX 3080 Ti, RTX 3080, RTX 3070 Ti
  • Double FP32 cores per SM on GA10x GPUs
  • NVLink 3.0 with a 50 Gbit/s per pair throughput[9]
  • PCI Express 4.0 with SR-IOV support (SR-IOV is reserved only for A100)
  • Multi-instance GPU (MIG) virtualization and GPU partitioning feature in A100 supporting up to seven instances
  • PureVideo feature set K hardware video decoding with AV1 hardware decoding[10] for the GeForce 30 series and feature set J for A100
  • Five NVDEC units for A100
  • New hardware-based 5-core JPEG decode engine (NVJPG) with YUV420, YUV422, YUV444, YUV400 and RGBA support; not to be confused with Nvidia NVJPEG, the GPU-accelerated library for JPEG encoding and decoding
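
The compute capability values above can be queried at runtime through the standard CUDA runtime API. The following is a minimal sketch written for this article (not from the source) that reports each device's compute capability and flags the two Ampere values named in the list; the output format is illustrative only.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: report each device's compute capability and flag the two
// Ampere values cited above (8.0 for GA100/A100, 8.6 for GA10x/GeForce 30).
int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA device found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, dev);
        // Other 8.x values exist (e.g. the Orin-class GA10B reports 8.7),
        // so this check only covers the two values named in the article.
        bool listedAmpere = (prop.major == 8) && (prop.minor == 0 || prop.minor == 6);
        std::printf("Device %d: %s, compute capability %d.%d%s\n",
                    dev, prop.name, prop.major, prop.minor,
                    listedAmpere ? " (Ampere, as listed above)" : "");
    }
    return 0;
}
```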

Chips

  • GA100[11]
  • GA102
  • GA103
  • GA104
  • GA106
  • GA107
  • GA10B

Comparison of Compute Capability: GP100 vs GV100 vs GA100[12]

| GPU features | Nvidia Tesla P100 | Nvidia Tesla V100 | Nvidia A100 |
| --- | --- | --- | --- |
| GPU codename | GP100 | GV100 | GA100 |
| GPU architecture | Pascal | Volta | Ampere |
| Compute capability | 6.0 | 7.0 | 8.0 |
| Threads / warp | 32 | 32 | 32 |
| Max warps / SM | 64 | 64 | 64 |
| Max threads / SM | 2048 | 2048 | 2048 |
| Max thread blocks / SM | 32 | 32 | 32 |
| Max 32-bit registers / SM | 65536 | 65536 | 65536 |
| Max registers / block | 65536 | 65536 | 65536 |
| Max registers / thread | 255 | 255 | 255 |
| Max thread block size | 1024 | 1024 | 1024 |
| FP32 cores / SM | 64 | 64 | 64 |
| Ratio of SM registers to FP32 cores | 1024 | 1024 | 1024 |
| Shared memory size / SM | 64 KB | Configurable up to 96 KB | Configurable up to 164 KB |
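
The last row of the table (shared memory configurable up to 164 KB per SM on GA100) is exposed to CUDA programs as an opt-in limit rather than a default. The sketch below, written for this article, queries the opt-in maximum on device 0 and raises a placeholder kernel's dynamic shared memory limit accordingly; scratchKernel is a hypothetical example, not an Nvidia-provided kernel.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel that uses dynamically sized shared memory.
__global__ void scratchKernel(float *out) {
    extern __shared__ float scratch[];   // size is set at launch time
    scratch[threadIdx.x] = static_cast<float>(threadIdx.x);
    __syncthreads();
    out[threadIdx.x] = scratch[threadIdx.x];
}

int main() {
    // Query how much shared memory a block may opt in to on this device.
    // On GA100 this reflects the "configurable up to 164 KB" figure above,
    // minus a small amount reserved by the driver.
    int maxOptin = 0;
    cudaDeviceGetAttribute(&maxOptin, cudaDevAttrMaxSharedMemoryPerBlockOptin, 0);
    std::printf("Max opt-in shared memory per block: %d bytes\n", maxOptin);

    // Kernels must explicitly opt in to use more than the default 48 KB.
    cudaFuncSetAttribute(scratchKernel,
                         cudaFuncAttributeMaxDynamicSharedMemorySize, maxOptin);

    float *out = nullptr;
    cudaMalloc(&out, 256 * sizeof(float));
    scratchKernel<<<1, 256, maxOptin>>>(out);   // request the full opt-in amount
    cudaDeviceSynchronize();
    cudaFree(out);
    return 0;
}
```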

Comparison of Precision Support Matrix[13][14]

Supported CUDA core precisions

| GPU | FP16 | FP32 | FP64 | INT1 | INT4 | INT8 | TF32 | BF16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Nvidia Tesla P4 | No | Yes | Yes | No | No | Yes | No | No |
| Nvidia P100 | Yes | Yes | Yes | No | No | No | No | No |
| Nvidia Volta | Yes | Yes | Yes | No | No | Yes | No | No |
| Nvidia Turing | Yes | Yes | Yes | No | No | No | No | No |
| Nvidia A100 | Yes | Yes | Yes | No | No | Yes | No | Yes |

Supported Tensor core precisions

| GPU | FP16 | FP32 | FP64 | INT1 | INT4 | INT8 | TF32 | BF16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Nvidia Tesla P4 | No | No | No | No | No | No | No | No |
| Nvidia P100 | No | No | No | No | No | No | No | No |
| Nvidia Volta | Yes | No | No | No | No | No | No | No |
| Nvidia Turing | Yes | No | No | Yes | Yes | Yes | No | No |
| Nvidia A100 | Yes | No | Yes | Yes | Yes | Yes | Yes | Yes |

Legend:

  • FPnn: floating point with nn bits
  • INTn: integer with n bits
  • INT1: binary
  • TF32: TensorFloat32
  • BF16: bfloat16
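
As a rough illustration of the formats in the legend, the host-side sketch below truncates an FP32 value to the mantissa widths used by TF32 (10 bits) and BF16 (7 bits). It is a bit-level approximation written for this article: Tensor Core hardware rounds rather than truncates, and truncate_mantissa is a hypothetical helper, not a CUDA API.

```cuda
#include <cstdio>
#include <cstdint>
#include <cstring>

// Illustrative truncation of an IEEE FP32 bit pattern to reduced-precision
// mantissa widths. Bit layouts:
//   FP32: 1 sign, 8 exponent, 23 mantissa bits
//   TF32: 1 sign, 8 exponent, 10 mantissa bits (FP32 range, FP16-like precision)
//   BF16: 1 sign, 8 exponent,  7 mantissa bits
static float truncate_mantissa(float x, int kept_mantissa_bits) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits &= ~((1u << (23 - kept_mantissa_bits)) - 1u);  // zero the dropped mantissa bits
    float y;
    std::memcpy(&y, &bits, sizeof y);
    return y;
}

int main() {
    float x = 3.14159265f;
    std::printf("FP32:             %.9f\n", x);
    std::printf("TF32 (truncated): %.9f\n", truncate_mantissa(x, 10));
    std::printf("BF16 (truncated): %.9f\n", truncate_mantissa(x, 7));
    return 0;
}
```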

Comparison of Decode Performance

| Concurrent streams | H.264 decode (1080p30) | H.265 (HEVC) decode (1080p30) | VP9 decode (1080p30) |
| --- | --- | --- | --- |
| V100 | 16 | 22 | 22 |
| A100 | 75 | 157 | 108 |

Ampere dies

| Die | GA100[15] | GA102[16] | GA103[17] | GA104[18] | GA106[19] | GA107[20] | GA10B[21] | GA10F |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Die size | 826 mm² | 628 mm² | 496 mm² | 392 mm² | 276 mm² | 200 mm² | 448 mm² | ? |
| Transistors | 54.2B | 28.3B | 22B | 17.4B | 12B | 8.7B | 21B | ? |
| Transistor density | 65.6 MTr/mm² | 45.1 MTr/mm² | 44.4 MTr/mm² | 44.4 MTr/mm² | 43.5 MTr/mm² | 43.5 MTr/mm² | 46.9 MTr/mm² | ? |
| Graphics processing clusters | 8 | 7 | 6 | 6 | 3 | 2 | 2 | 1 |
| Streaming multiprocessors | 128 | 84 | 60 | 48 | 30 | 20 | 16 | 12 |
| CUDA cores | 12288 | 10752 | 7680 | 6144 | 3840 | 2560 | 2048 | 1536 |
| Texture mapping units | 512 | 336 | 240 | 192 | 120 | 80 | 64 | 48 |
| Render output units | 192 | 112 | 96 | 96 | 48 | 32 | 32 | 16 |
| Tensor cores | 512 | 336 | 240 | 192 | 120 | 80 | 64 | 48 |
| RT cores | N/A | 84 | 60 | 48 | 30 | 20 | 8 | 12 |
| L1 cache | 24 MB (192 KB per SM) | 10.5 MB (128 KB per SM) | 7.5 MB (128 KB per SM) | 6 MB (128 KB per SM) | 3 MB (128 KB per SM) | 2.5 MB (128 KB per SM) | 3 MB (192 KB per SM) | 1.5 MB (128 KB per SM) |
| L2 cache | 40 MB | 6 MB | 4 MB | 4 MB | 3 MB | 2 MB | 4 MB | ? |
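
The transistor density row is simply the transistor count divided by the die area. A one-line check against the GA102 figures in the table, written for this article:

```cuda
#include <cstdio>

// Sanity check of the density row above: density (MTr/mm^2) = transistors / area.
int main() {
    const double transistors  = 28.3e9;   // GA102 transistor count, from the table
    const double die_area_mm2 = 628.0;    // GA102 die size in mm^2, from the table
    std::printf("GA102: %.1f MTr/mm^2\n", transistors / die_area_mm2 / 1e6);  // ~45.1
    return 0;
}
```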

A100 accelerator and DGX A100


The Ampere-based A100 accelerator was announced and released on May 14, 2020.[9] The A100 offers 19.5 teraflops of FP32 performance, 6912 FP32/INT32 CUDA cores, 3456 FP64 CUDA cores, 40 GB of graphics memory, and 1.6 TB/s of graphics memory bandwidth.[22] The A100 accelerator was initially available only in the third-generation DGX server, which includes eight A100s.[9] The DGX A100 also includes 15 TB of PCIe Gen 4 NVMe storage,[22] two 64-core AMD Rome 7742 CPUs, 1 TB of RAM, and a Mellanox-powered HDR InfiniBand interconnect. The initial price for the DGX A100 was $199,000.[9]
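
The quoted memory bandwidth follows from the width of the HBM2 interface and the per-pin data rate listed in the comparison table below. A back-of-the-envelope sketch, using the A100 40 GB SXM4 figures from that table (the result of roughly 1.5 TB/s is rounded to 1.52-1.6 TB/s in different published figures):

```cuda
#include <cstdio>

// Rough check of the A100 40 GB memory bandwidth from bus width and data rate.
int main() {
    const double bus_width_bits = 5120.0;  // HBM2 interface width (bits)
    const double data_rate_gbps = 2.4;     // effective per-pin data rate (Gbit/s)
    const double bandwidth_gbs  = bus_width_bits * data_rate_gbps / 8.0;
    std::printf("Theoretical bandwidth: %.0f GB/s\n", bandwidth_gbs);  // ~1536 GB/s
    return 0;
}
```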

Comparison of accelerators used in DGX:[23][24][25]

| Model | Architecture | Socket | FP32 CUDA cores | FP64 cores (excl. tensor) | Mixed INT32/FP32 cores | INT32 cores | Boost clock | Memory clock | Memory bus width | Memory bandwidth | VRAM | Single precision (FP32) | Double precision (FP64) | INT8 (non-tensor) | INT8 dense tensor | INT32 | FP4 dense tensor | FP16 | FP16 dense tensor | bfloat16 dense tensor | TensorFloat-32 (TF32) dense tensor | FP64 dense tensor | Interconnect (NVLink) | GPU | L1 cache | L2 cache | TDP | Die size | Transistor count | Process | Launched |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| P100 | Pascal | SXM/SXM2 | N/A | 1792 | 3584 | N/A | 1480 MHz | 1.4 Gbit/s HBM2 | 4096-bit | 720 GB/sec | 16 GB HBM2 | 10.6 TFLOPS | 5.3 TFLOPS | N/A | N/A | N/A | N/A | 21.2 TFLOPS | N/A | N/A | N/A | N/A | 160 GB/sec | GP100 | 1344 KB (24 KB × 56) | 4096 KB | 300 W | 610 mm² | 15.3 B | TSMC 16FF+ | Q2 2016 |
| V100 16GB | Volta | SXM2 | 5120 | 2560 | N/A | 5120 | 1530 MHz | 1.75 Gbit/s HBM2 | 4096-bit | 900 GB/sec | 16 GB HBM2 | 15.7 TFLOPS | 7.8 TFLOPS | 62 TOPS | N/A | 15.7 TOPS | N/A | 31.4 TFLOPS | 125 TFLOPS | N/A | N/A | N/A | 300 GB/sec | GV100 | 10240 KB (128 KB × 80) | 6144 KB | 300 W | 815 mm² | 21.1 B | TSMC 12FFN | Q3 2017 |
| V100 32GB | Volta | SXM3 | 5120 | 2560 | N/A | 5120 | 1530 MHz | 1.75 Gbit/s HBM2 | 4096-bit | 900 GB/sec | 32 GB HBM2 | 15.7 TFLOPS | 7.8 TFLOPS | 62 TOPS | N/A | 15.7 TOPS | N/A | 31.4 TFLOPS | 125 TFLOPS | N/A | N/A | N/A | 300 GB/sec | GV100 | 10240 KB (128 KB × 80) | 6144 KB | 350 W | 815 mm² | 21.1 B | TSMC 12FFN | |
| A100 40GB | Ampere | SXM4 | 6912 | 3456 | 6912 | N/A | 1410 MHz | 2.4 Gbit/s HBM2 | 5120-bit | 1.52 TB/sec | 40 GB HBM2 | 19.5 TFLOPS | 9.7 TFLOPS | N/A | 624 TOPS | 19.5 TOPS | N/A | 78 TFLOPS | 312 TFLOPS | 312 TFLOPS | 156 TFLOPS | 19.5 TFLOPS | 600 GB/sec | GA100 | 20736 KB (192 KB × 108) | 40960 KB | 400 W | 826 mm² | 54.2 B | TSMC N7 | Q1 2020 |
| A100 80GB | Ampere | SXM4 | 6912 | 3456 | 6912 | N/A | 1410 MHz | 3.2 Gbit/s HBM2e | 5120-bit | 1.52 TB/sec | 80 GB HBM2e | 19.5 TFLOPS | 9.7 TFLOPS | N/A | 624 TOPS | 19.5 TOPS | N/A | 78 TFLOPS | 312 TFLOPS | 312 TFLOPS | 156 TFLOPS | 19.5 TFLOPS | 600 GB/sec | GA100 | 20736 KB (192 KB × 108) | 40960 KB | 400 W | 826 mm² | 54.2 B | TSMC N7 | |
| H100 | Hopper | SXM5 | 16896 | 4608 | 16896 | N/A | 1980 MHz | 5.2 Gbit/s HBM3 | 5120-bit | 3.35 TB/sec | 80 GB HBM3 | 67 TFLOPS | 34 TFLOPS | N/A | 1.98 POPS | N/A | N/A | N/A | 990 TFLOPS | 990 TFLOPS | 495 TFLOPS | 67 TFLOPS | 900 GB/sec | GH100 | 25344 KB (192 KB × 132) | 51200 KB | 700 W | 814 mm² | 80 B | TSMC 4N | Q3 2022 |
| H200 | Hopper | SXM5 | 16896 | 4608 | 16896 | N/A | 1980 MHz | 6.3 Gbit/s HBM3e | 6144-bit | 4.8 TB/sec | 141 GB HBM3e | 67 TFLOPS | 34 TFLOPS | N/A | 1.98 POPS | N/A | N/A | N/A | 990 TFLOPS | 990 TFLOPS | 495 TFLOPS | 67 TFLOPS | 900 GB/sec | GH100 | 25344 KB (192 KB × 132) | 51200 KB | 1000 W | 814 mm² | 80 B | TSMC 4N | Q3 2023 |
| B100 | Blackwell | SXM6 | N/A | N/A | N/A | N/A | N/A | 8 Gbit/s HBM3e | 8192-bit | 8 TB/sec | 192 GB HBM3e | N/A | N/A | N/A | 3.5 POPS | N/A | 7 PFLOPS | N/A | 1.98 PFLOPS | 1.98 PFLOPS | 989 TFLOPS | 30 TFLOPS | 1.8 TB/sec | GB100 | N/A | N/A | 700 W | N/A | 208 B | TSMC 4NP | Q4 2024 (expected) |
| B200 | Blackwell | SXM6 | N/A | N/A | N/A | N/A | N/A | 8 Gbit/s HBM3e | 8192-bit | 8 TB/sec | 192 GB HBM3e | N/A | N/A | N/A | 4.5 POPS | N/A | 9 PFLOPS | N/A | 2.25 PFLOPS | 2.25 PFLOPS | 1.2 PFLOPS | 40 TFLOPS | 1.8 TB/sec | GB100 | N/A | N/A | 1000 W | N/A | 208 B | TSMC 4NP | |

Products using Ampere

  • GeForce MX series
    • GeForce MX570 (mobile) (GA107)
  • GeForce 20 series
    • GeForce RTX 2050 (mobile) (GA107)
  • GeForce 30 series
    • GeForce RTX 3050 Laptop GPU (GA107)
    • GeForce RTX 3050 (GA106 or GA107)[26]
    • GeForce RTX 3050 Ti Laptop GPU (GA107)
    • GeForce RTX 3060 Laptop GPU (GA106)
    • GeForce RTX 3060 (GA106 or GA104)[27]
    • GeForce RTX 3060 Ti (GA104 or GA103)[28]
    • GeForce RTX 3070 Laptop GPU (GA104)
    • GeForce RTX 3070 (GA104)
    • GeForce RTX 3070 Ti Laptop GPU (GA104)
    • GeForce RTX 3070 Ti (GA104 or GA102)[29]
    • GeForce RTX 3080 Laptop GPU (GA104)
    • GeForce RTX 3080 (GA102)
    • GeForce RTX 3080 12 GB (GA102)
    • GeForce RTX 3080 Ti Laptop GPU (GA103)
    • GeForce RTX 3080 Ti (GA102)
    • GeForce RTX 3090 (GA102)
    • GeForce RTX 3090 Ti (GA102)
  • Nvidia Workstation GPUs (formerly Quadro)
    • RTX A1000 (mobile) (GA107)
    • RTX A2000 (mobile) (GA106)
    • RTX A2000 (GA106)
    • RTX A3000 (mobile) (GA104)
    • RTX A4000 (mobile) (GA104)
    • RTX A4000 (GA104)
    • RTX A5000 (mobile) (GA104)
    • RTX A5500 (mobile) (GA103)
    • RTX A4500 (GA102)
    • RTX A5000 (GA102)
    • RTX A5500 (GA102)
    • RTX A6000 (GA102)
    • A800 Active
  • Nvidia Data Center GPUs (formerly Tesla)
    • Nvidia A2 (GA107)
    • Nvidia A10 (GA102)
    • Nvidia A16 (4 × GA107)
    • Nvidia A30 (GA100)
    • Nvidia A40 (GA102)
    • Nvidia A100 (GA100)
    • Nvidia A100 80 GB (GA100)
    • Nvidia A100X
    • Nvidia A30X
  • Tegra SoCs
    • AGX Orin (GA10B)
    • Orin NX (GA10B)
    • Orin Nano (GA10B)
Products using Ampere (per chip)

| Chip | Products |
| --- | --- |
| GA10B | AGX Orin, Orin NX, Orin Nano |
| GA107 | GeForce MX570 (mobile), GeForce RTX 2050 (mobile), GeForce RTX 3050 Laptop, GeForce RTX 3050, GeForce RTX 3050 Ti Laptop, RTX A1000 (mobile), Nvidia A2, Nvidia A16 |
| GA106 | GeForce RTX 3050, GeForce RTX 3060 Laptop, GeForce RTX 3060, RTX A2000 (mobile), RTX A2000 |
| GA104 | GeForce RTX 3060, GeForce RTX 3060 Ti, GeForce RTX 3070 Laptop, GeForce RTX 3070, GeForce RTX 3070 Ti Laptop, GeForce RTX 3070 Ti, GeForce RTX 3080 Laptop, RTX A3000 (mobile), RTX A4000 (mobile), RTX A4000, RTX A5000 (mobile) |
| GA103 | GeForce RTX 3060 Ti, GeForce RTX 3080 Ti Laptop, RTX A5500 (mobile) |
| GA102 | GeForce RTX 3070 Ti, GeForce RTX 3080, GeForce RTX 3080 Ti, GeForce RTX 3090, GeForce RTX 3090 Ti, RTX A4500, RTX A5000, RTX A5500, RTX A6000, Nvidia A10, Nvidia A40 |
| GA100 | Nvidia A30, Nvidia A100 |

See also

References
