SK Hynix HBM3 Memory Module Revealed During OCP Summit 2021 – 12-Hi Stack, 24GB Module With 6400 Mbps Transfer Speeds

SK Hynix First To Complete HBM3 Development: Up To 24 GB In 12-Hi Stack, 819 GB/s Bandwidth

Less than a month ago, SK Hynix confirmed the development of a new 24 GB HBM3 memory stack with an immensely high bandwidth of 819 GB/s per stack, introducing its next-gen high-bandwidth memory. Since next-gen CPUs and GPUs will demand faster and larger memory, HBM3 may well be the answer to those needs.

SK Hynix Shows Off HBM3 Memory Module With 12-Hi 24 GB Stack Layout & 6400 Mbps Speeds

Now, during the OCP Summit 2021, SK Hynix officially released details of its next-gen memory modules. JEDEC, the standards body responsible for HBM3, has still not finalized the specification. However, SK Hynix has published figures from its initial tests, showing pin speeds of 5.2 Gbps to 6.4 Gbps. It is not yet clear which of the two speeds will end up in volume production for next-gen accelerators.


The module demonstrated at 5.2 to 6.4 Gbps uses a 12-Hi stack of DRAM dies, with each stack connected over a 1024-bit interface. Since the bus width is unchanged from HBM3's predecessor, the higher pin speeds alone lift per-stack bandwidth, from HBM2e's 460.8 GB/s up to as much as 819.2 GB/s.
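The per-stack figures follow directly from pin speed and bus width. A minimal sketch of the arithmetic (the helper name is ours, not SK Hynix's):

```python
# Hypothetical helper: per-stack HBM bandwidth from pin speed and bus width.
def hbm_stack_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int = 1024) -> float:
    """Bandwidth in GB/s = pin speed (Gbps) x bus width (bits) / 8 bits per byte."""
    return pin_speed_gbps * bus_width_bits / 8

# SK Hynix's quoted HBM3 test range:
print(hbm_stack_bandwidth_gbs(5.2))  # 665.6 GB/s at the lower bin
print(hbm_stack_bandwidth_gbs(6.4))  # 819.2 GB/s at the top bin
```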

SK Hynix shows off 24 GB HBM3 package with 12-Hi stack and 6400 Mbps transfer speeds. (Image Credits: ServeTheHome)

AnandTech has recently published a comparison chart covering the HBM memory generations, from the original HBM through the new HBM3 modules:

HBM Memory Specifications Comparison

DRAM                  HBM1            HBM2            HBM2e           HBM3
I/O (Bus Interface)   1024            1024            1024            1024
Prefetch (I/O)        2               2               2               2
Maximum Bandwidth     128 GB/s        256 GB/s        460.8 GB/s      819.2 GB/s
DRAM ICs Per Stack    4               8               8               12
Maximum Capacity      4 GB            8 GB            16 GB           24 GB
tRC                   48ns            45ns            45ns            TBA
tCCD                  2ns (=1tCK)     2ns (=1tCK)     2ns (=1tCK)     TBA
VPP                   External VPP    External VPP    External VPP    External VPP
VDD                   1.2V            1.2V            1.2V            TBA
Command Input         Dual Command    Dual Command    Dual Command    Dual Command
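Since the 1024-bit interface has stayed constant across generations, the bandwidth column above is set entirely by each generation's top pin speed. A short sketch reproducing that column (the nominal pin speeds are our assumption, inferred from the bandwidth figures):

```python
# Sketch: reproduce the peak-bandwidth column of the table above from
# each generation's nominal max pin speed over the fixed 1024-bit bus.
BUS_WIDTH_BITS = 1024  # unchanged from HBM1 through HBM3

max_pin_speed_gbps = {  # assumed nominal top data rates per pin
    "HBM1": 1.0,
    "HBM2": 2.0,
    "HBM2e": 3.6,
    "HBM3": 6.4,
}

for gen, gbps in max_pin_speed_gbps.items():
    print(f"{gen}: {gbps * BUS_WIDTH_BITS / 8:.1f} GB/s")
```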

With Monday's announcement of AMD's new Instinct MI250X accelerator, we learned that the company plans to pair a whopping 8 HBM2e stacks clocked at up to 3.2 Gbps. Each stack offers 16 GB of capacity, for 128 GB in total. TSMC previously announced its Chip-on-Wafer-on-Substrate packaging, also known as CoWoS-S, which can integrate as many as 12 HBM stacks. Companies and consumers should start seeing the initial products that use this tech beginning in 2023.

Once the first products using the new memory technology arrive, it is speculated that HBM3 will become globally available, and that we may see SK Hynix offer designs combining twelve 12-Hi HBM3 stacks, giving customers 288 GB of memory capacity and as much as 9.8 TB/s of total bandwidth.
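The aggregate figures for both configurations fall out of the same stack-level math. A minimal sketch (the helper name is ours; the inputs are the article's stack counts, capacities, and pin speeds):

```python
# Sketch of the package-level totals behind the MI250X configuration and
# the speculative "twelve 12-Hi HBM3 stacks" figure.
def package_totals(stacks: int, gb_per_stack: int, pin_gbps: float,
                   bus_bits: int = 1024) -> tuple[int, float]:
    """Return (total capacity in GB, total bandwidth in TB/s)."""
    capacity_gb = stacks * gb_per_stack
    bandwidth_tbs = stacks * pin_gbps * bus_bits / 8 / 1000  # GB/s -> TB/s
    return capacity_gb, bandwidth_tbs

print(package_totals(8, 16, 3.2))   # MI250X: 128 GB, ~3.28 TB/s
print(package_totals(12, 24, 6.4))  # speculative HBM3: 288 GB, ~9.83 TB/s
```

The article's "9.8 TB/s" is simply twelve stacks at the 819.2 GB/s top bin, rounded down.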

Source: ServeTheHome, Andreas Schilling, AnandTech
