Website Coelacanth's Dream located a GitHub commit that may signal a future configuration for the upcoming AMD Aldebaran GPU-based Instinct accelerator. The new GPU, codenamed "GFX90A," will utilize the CDNA 2 architecture, a derivative of the GFX9 family (Vega) architecture.
AMD Instinct MI200 Could Feature Two 110 Compute Units CDNA 2 GPU Dies
There are three codes, GFX906_60, GFX908_120, and GFX90A_110, each tied to a different accelerator. GFX906_60 is speculated to refer to the Instinct MI60, GFX908_120 to the Instinct MI100, and GFX90A_110 may be used for the newer-generation AMD accelerator. In each code, the number after the underscore refers to the compute unit count.
For instance, the MI60 utilizes 60 compute units and the MI100 uses 120, while the newest code points to 110 compute units. What is interesting is that the next-gen accelerator from AMD would thus use fewer compute units than the MI100.
The Aldebaran GPU is said to feature 128 compute units, which does not match the 110 found in the code for the new AMD accelerator. However, GPUs typically ship with some clusters deactivated for yield reasons, which, if correct here, would drop the count to 110 active compute units.
Considering the possible Shader Engine and CU configurations, Aldebaran / MI200 is an MCM design with two GPU dies, so if the layout is symmetric per die rather than per Shader Engine, each die would have 4 SEs. Each SE could carry 14 CUs (56 CUs per die), and disabling one CU on each die would make a total of 110 CUs.
— Coelacanth’s Dream
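The yield-harvesting arithmetic in the quote above can be sketched as follows; the 4-SE-per-die, 14-CU-per-SE layout and the one-CU-per-die salvage are Coelacanth's Dream's speculation, not a confirmed specification:

```python
# Speculative Aldebaran CU count per Coelacanth's Dream (assumed layout).
DIES = 2
SE_PER_DIE = 4          # if the config is symmetric per die
CU_PER_SE = 14          # yields the 56 CUs per die mentioned in the quote
DISABLED_PER_DIE = 1    # one CU fused off per die for yield

physical_cus = DIES * SE_PER_DIE * CU_PER_SE
active_cus = physical_cus - DIES * DISABLED_PER_DIE
print(physical_cus, active_cus)  # 112 110
```

Disabling one CU per die rather than a whole SE keeps the configuration symmetric while still hitting the 110 seen in the GFX90A_110 code.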
Website VideoCardz states,
It is unclear if AMD is planning to double the FP32 core count on the CDNA 2 architecture, but assuming that it does, with a theoretical 1500 MHz GPU clock the accelerator would offer single-precision compute performance of 42.2 TFLOPs, 1.82x more than MI100. If that isn't the case, then MI200 would need at least a 1650 MHz clock to reach the same FP32 throughput of 23 TFLOPs.
In the case of HPC accelerators such as MI200, the FP64 performance is far more important. According to previous leaks, MI200 is to feature full-rate FP64 performance, which means either doubling or quadrupling the performance over MI100, depending on the architecture.
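VideoCardz's estimates can be reproduced with the usual GCN/CDNA peak-throughput relation; the 64 stream processors per CU and 2 FLOPs per SP per clock (one FMA) are standard assumptions, not figures confirmed for MI200:

```python
# Peak FP32 estimator: CUs * 64 SPs/CU * 2 FLOPs/clock (FMA) * clock.
def fp32_tflops(cus: int, clock_mhz: float) -> float:
    sps = cus * 64
    return sps * 2 * clock_mhz * 1e6 / 1e12

# Dual-die MI200 with 220 active CUs at a theoretical 1500 MHz:
print(round(fp32_tflops(220, 1500), 2))  # 42.24, the quoted ~42.2 TFLOPs
# A single 110-CU die would need ~1650 MHz to match MI100's ~23 TFLOPs:
print(round(fp32_tflops(110, 1650), 2))  # 23.23
```

The same formula with MI100's 120 CUs and ~1500 MHz clock lands at its ~23.1 TFLOPs figure, which is the baseline the 1.82x comparison uses.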
AMD's MI200 is set to release before the end of 2021. It is the company's revolutionary multi-chip graphics processor, constructed with two active dies and 128 gigabytes of HBM2e memory.
Here's What To Expect From AMD Instinct MI200 'CDNA 2' GPU Accelerator
Inside the AMD Instinct MI200 is an Aldebaran GPU featuring two dies, a primary and a secondary. Each die consists of 8 shader engines, for a total of 16 SEs. Each shader engine packs 16 CUs with full-rate FP64, packed FP32, and a 2nd-generation Matrix Engine for FP16 and BF16 operations. Each die, as such, physically carries 128 compute units, or 8,192 stream processors. With 110 CUs enabled per die, that works out to a total of 220 active compute units, or 14,080 stream processors, for the entire chip. The Aldebaran GPU is also powered by a new XGMI interconnect, and each chiplet features a VCN 2.6 engine along with the main IO controller.
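The topology described above can be tallied directly; note these figures come from leaks, and the 110-active-CUs-per-die count is the article's inference from the GFX90A_110 code:

```python
# Reported MI200 / Aldebaran topology (leaked, unconfirmed).
DIES = 2
SE_PER_DIE = 8
CU_PER_SE = 16
ACTIVE_CU_PER_DIE = 110   # of the 128 physical CUs on each die
SP_PER_CU = 64

physical_cus = DIES * SE_PER_DIE * CU_PER_SE   # 256 physical CUs
active_cus = DIES * ACTIVE_CU_PER_DIE          # 220 active CUs
active_sps = active_cus * SP_PER_CU            # 14,080 stream processors
```

The 220-CU total is therefore an active count; the full silicon holds 256 CUs across both dies.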
As for DRAM, AMD has gone with an 8-channel memory interface, each channel 1024 bits wide, for an 8192-bit bus. Each interface can support 2 GB HBM2e DRAM modules. This should give us up to 16 GB of HBM2e memory capacity per stack, and since there are eight stacks in total, the total capacity would be a whopping 128 GB. That's 48 GB more than the A100, which houses 80 GB of HBM2e memory.
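The memory math works out as follows; the 3.2 Gbps per-pin data rate is taken from the spec table below, and the 8-high stack composition is an assumption consistent with the quoted 16 GB per stack:

```python
# HBM2e capacity and bandwidth math from the article's figures.
STACKS = 8
BUS_PER_STACK_BITS = 1024
CAPACITY_PER_STACK_GB = 16        # eight 2 GB devices per stack (assumed 8-Hi)
DATA_RATE_GBPS = 3.2              # per pin, per the spec table

bus_width_bits = STACKS * BUS_PER_STACK_BITS          # 8192-bit bus
capacity_gb = STACKS * CAPACITY_PER_STACK_GB          # 128 GB total
bandwidth_gbs = bus_width_bits / 8 * DATA_RATE_GBPS   # 3276.8 GB/s, ~3.2 TB/s
```

That ~3.2 TB/s result matches the bandwidth figure listed for MI250X/MI250 in the table below.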
AMD Radeon Instinct Accelerators
| Accelerator Name | AMD Instinct MI300 | AMD Instinct MI250X | AMD Instinct MI250 | AMD Instinct MI210 | AMD Instinct MI100 | AMD Radeon Instinct MI60 | AMD Radeon Instinct MI50 | AMD Radeon Instinct MI25 | AMD Radeon Instinct MI8 | AMD Radeon Instinct MI6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CPU Architecture | Zen 4 (Exascale APU) | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| GPU Architecture | TBA (CDNA 3) | Aldebaran (CDNA 2) | Aldebaran (CDNA 2) | Aldebaran (CDNA 2) | Arcturus (CDNA 1) | Vega 20 | Vega 20 | Vega 10 | Fiji XT | Polaris 10 |
| GPU Process Node | 5nm + 6nm | 6nm | 6nm | 6nm | 7nm FinFET | 7nm FinFET | 7nm FinFET | 14nm FinFET | 28nm | 14nm FinFET |
| GPU Chiplets | 4 (MCM / 3D Stacked), 1 (Per Die) | 1 (Per Die) | 1 (Per Die) | 1 (Per Die) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) |
| GPU Clock Speed | TBA | 1700 MHz | 1700 MHz | 1700 MHz | 1500 MHz | 1800 MHz | 1725 MHz | 1500 MHz | 1000 MHz | 1237 MHz |
| FP16 Compute | TBA | 383 TOPs | 362 TOPs | 181 TOPs | 185 TFLOPs | 29.5 TFLOPs | 26.5 TFLOPs | 24.6 TFLOPs | 8.2 TFLOPs | 5.7 TFLOPs |
| FP32 Compute | TBA | 95.7 TFLOPs | 90.5 TFLOPs | 45.3 TFLOPs | 23.1 TFLOPs | 14.7 TFLOPs | 13.3 TFLOPs | 12.3 TFLOPs | 8.2 TFLOPs | 5.7 TFLOPs |
| FP64 Compute | TBA | 47.9 TFLOPs | 45.3 TFLOPs | 22.6 TFLOPs | 11.5 TFLOPs | 7.4 TFLOPs | 6.6 TFLOPs | 768 GFLOPs | 512 GFLOPs | 384 GFLOPs |
| VRAM | 192 GB HBM3? | 128 GB HBM2e | 128 GB HBM2e | 64 GB HBM2e | 32 GB HBM2 | 32 GB HBM2 | 16 GB HBM2 | 16 GB HBM2 | 4 GB HBM1 | 16 GB GDDR5 |
| Memory Clock | TBA | 3.2 Gbps | 3.2 Gbps | 3.2 Gbps | 1200 MHz | 1000 MHz | 1000 MHz | 945 MHz | 500 MHz | 1750 MHz |
| Memory Bus | 8192-bit | 8192-bit | 8192-bit | 4096-bit | 4096-bit | 4096-bit | 4096-bit | 2048-bit | 4096-bit | 256-bit |
| Memory Bandwidth | TBA | 3.2 TB/s | 3.2 TB/s | 1.6 TB/s | 1.23 TB/s | 1 TB/s | 1 TB/s | 484 GB/s | 512 GB/s | 224 GB/s |
| Form Factor | OAM | OAM | OAM | Dual Slot Card | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Half Length | Single Slot, Full Length |
| Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling |