SK Hynix Announces Development of HBM3 DRAM: Up To 24 GB Capacities, 12-Hi Stacks & 819 GB/s Bandwidth

Hassan Mujtaba
SK Hynix First To Complete HBM3 Development: Up To 24 GB In 12-Hi Stack, 819 GB/s Bandwidth

SK Hynix has announced that it has become the first in the industry to develop the next-generation High Bandwidth Memory standard, HBM3.

According to SK Hynix, the company has successfully developed its HBM3 DRAM, the next generation of High Bandwidth Memory. The new standard not only improves bandwidth but also increases capacity by stacking multiple DRAM dies vertically.

SK Hynix began developing its HBM3 DRAM after starting mass production of HBM2E memory in July last year. The company announced today that its HBM3 DRAM will be available in two capacities: a 24 GB variant, the industry's largest capacity for this type of DRAM, and a 16 GB variant. The 24 GB variant features a 12-Hi stack of 2 GB DRAM dies, while the 16 GB variant uses an 8-Hi stack. The company also notes that the individual DRAM dies have been thinned to a height of 30 micrometers (μm, 10⁻⁶ m).
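The capacity figures follow directly from the per-die capacity and the stack height. A minimal sketch of that arithmetic (function and constant names are illustrative, not from SK Hynix):

```python
# Each HBM3 stack is built from 2 GB DRAM dies, so total stack
# capacity scales with the number of dies ("Hi") in the stack.
DIE_CAPACITY_GB = 2  # per-die capacity stated by SK Hynix

def stack_capacity_gb(stack_height: int) -> int:
    """Total capacity of one HBM3 stack with the given number of dies."""
    return DIE_CAPACITY_GB * stack_height

print(stack_capacity_gb(12))  # 12-Hi stack -> 24 GB
print(stack_capacity_gb(8))   # 8-Hi stack  -> 16 GB
```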

“Since its launch of the world’s first HBM DRAM, SK hynix has succeeded in developing the industry’s first HBM3 after leading the HBM2E market,” said Seon-yong Cha, Executive Vice President in charge of the DRAM development. “We will continue our efforts to solidify our leadership in the premium memory market and help boost the values of our customers by providing products that are in line with the ESG management standards.”

via SK Hynix

As for performance, SK Hynix's HBM3 DRAM is expected to deliver 819 GB/s of bandwidth per stack, a 78% improvement over the company's HBM2E DRAM, which delivers 460 GB/s. A GPU like the NVIDIA A100, which features six HBM2E stacks, would be able to deliver nearly 5 TB/s of bandwidth versus the 2.0 TB/s it currently delivers with the existing DRAM standard. Using 24 GB stacks, memory capacity should also theoretically reach up to 120 GB with five of six stacks enabled (due to yields) and 144 GB with all six stacks enabled. Successors to NVIDIA's Ampere (Ampere Next) and AMD's CDNA 2 (CDNA 3) are likely to be the first to utilize the HBM3 memory standard.
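These headline numbers can be checked with back-of-the-envelope arithmetic; the sketch below reproduces them from the per-stack figures (constant names are illustrative, and the six-stack configuration is assumed from the A100 example above):

```python
# Per-stack bandwidth figures from the announcement
HBM2E_BW_GBPS = 460   # SK Hynix HBM2E, GB/s per stack
HBM3_BW_GBPS = 819    # SK Hynix HBM3, GB/s per stack
STACKS = 6            # stacks on an A100-class GPU
STACK_CAPACITY_GB = 24  # 12-Hi HBM3 stack

# Per-stack bandwidth uplift: 819 / 460 - 1 ≈ 0.78
uplift_pct = (HBM3_BW_GBPS / HBM2E_BW_GBPS - 1) * 100

# Aggregate bandwidth across six stacks: 819 * 6 = 4914 GB/s
total_bw_tbps = HBM3_BW_GBPS * STACKS / 1000

# Capacity with five stacks (yield harvesting) vs all six
capacity_5_stacks = STACK_CAPACITY_GB * 5
capacity_6_stacks = STACK_CAPACITY_GB * STACKS

print(f"{uplift_pct:.0f}%")         # 78%
print(f"{total_bw_tbps:.1f} TB/s")  # 4.9 TB/s
print(capacity_5_stacks, capacity_6_stacks)  # 120 144
```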

The new memory type is expected to be adopted by high-performance data centers and machine-learning platforms in the coming year. Just recently, Synopsys also announced that it is expanding its multi-die design offerings with HBM3 IP and verification solutions.
