SK Hynix and Samsung Talk HBM at Hot Chips 28 – Low Cost HBM, HBM2 and HBM3 In The Roadmap

The semiconductor industry has witnessed a massive departure from conventional technologies in the last few years. The emergence of new memory standards and SOC architectures has paved the way for modern designs in upcoming products. High-Bandwidth Memory, or HBM, may have first arrived in graphics products, but it has the potential to be part of a much bigger ecosystem.

SK Hynix and Samsung Discuss HBM Roadmap - HBM3 and Low Cost HBM Under Development

In 2015, AMD unveiled their Radeon R9 Fury X graphics card, the first product to feature SK Hynix HBM memory. It delivered up to 512 GB/s of bandwidth, which was unmatched by any other graphics card on the market. A year later, Samsung ramped up volume production of their own HBM2 DRAM, which went on to become part of NVIDIA's Tesla P100 hyperscale chip. The supercomputing chip has been shipping to HPC and cloud data centers since Q2 2016.

While conventional DRAM models such as GDDR5 are still the go-to solution for many high-end cards, the future belongs to HBM. Not only is HBM faster, it's also more efficient, consuming less power while delivering higher performance. Another advantage of HBM is that it doesn't require a lot of space alongside the host chip (CPU or GPU). But since HBM2 is new, it comes at a higher cost, and being new also means that the overall quantity of chips being produced is not enough to cater to a broad audience of consumers.

Samsung initiated production for HBM2 memory back in Q1 2016 while SK Hynix plans to begin production for their HBM2 chips this quarter. At Hot Chips 28, both companies brought forward their catalogs for HBM2 and future roadmaps for HBM.

HBM2 Specifications Comparison:

                     GDDR5           GDDR5X          HBM1            HBM2
I/O (Bus Interface)  32              64              1024            1024
Prefetch (I/O)       8               16              2               2
Maximum Bandwidth    32 GB/s         64 GB/s         128 GB/s        256 GB/s
                     (8Gbps per pin) (16Gbps per pin)(1Gbps per pin) (2Gbps per pin)
tCCD                 2ns (=4tCK)     2ns (=4tCK)     2ns (=1tCK)     2ns (=1tCK)
VPP                  Internal VPP    Internal VPP    External VPP    External VPP
VDD                  1.5V, 1.35V     1.35V           1.2V            1.2V
Command Input        Single Command  Single Command  Dual Command    Dual Command
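The bandwidth figures in the table follow directly from bus width and per-pin data rate. As a quick back-of-the-envelope check, using only numbers from the table (peak bandwidth = bus width in bits × per-pin rate in Gbps ÷ 8):

```python
# Peak bandwidth per chip/stack = bus width (bits) * per-pin rate (Gbps) / 8 bits-per-byte.
# All input figures are taken from the comparison table above.

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a single chip or stack."""
    return bus_width_bits * pin_rate_gbps / 8

print(peak_bandwidth_gbs(32, 8.0))    # GDDR5 chip: 32.0 GB/s
print(peak_bandwidth_gbs(1024, 1.0))  # HBM1 stack: 128.0 GB/s
print(peak_bandwidth_gbs(1024, 2.0))  # HBM2 stack: 256.0 GB/s
```

Four HBM1 stacks at 128 GB/s each are exactly the 512 GB/s the Fury X shipped with.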

Low Cost HBM - Faster and Cheaper Than HBM1, Built For The Mass Market

So there are at least two solutions in the works after HBM2: HBMx (HBM3) and low cost HBM. The low cost HBM solution was presented by Samsung and is pitched as the more cost-effective option. It is faster than HBM1 but slower than HBM2; however, it's supposed to be significantly cheaper. Comparing HBM2 and low cost HBM, the latter comes with fewer TSVs. TSV stands for through-silicon via; TSVs are used to handle the I/O on the DRAM die. Their number is reduced from 1024 on an HBM2 stack to 512 on a low cost stack.


The end result is a faster pin speed of 3 Gbps (+) that can deliver around 200 GB/s per stack, compared to 256 GB/s on HBM2. The narrower 512-bit interface across 2 / 4 stacks would equate to a 1024-bit / 2048-bit aggregate bus. Samsung believes that they can easily produce these chips in larger quantities and ship them to a mass market.
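The same arithmetic as above shows where those numbers come from: 512 bits at 3 Gbps per pin works out to 192 GB/s, i.e. roughly the "200 GB/s" figure quoted, and each extra stack adds another 512 bits to the aggregate bus:

```python
# Low Cost HBM sketch: a 512-bit interface at 3+ Gbps per pin.
LOW_COST_BUS_BITS = 512
PIN_RATE_GBPS = 3.0

per_stack_gbs = LOW_COST_BUS_BITS * PIN_RATE_GBPS / 8
print(per_stack_gbs)  # 192.0 GB/s per stack, vs 256 GB/s for an HBM2 stack

# Aggregate bus width and bandwidth when placing several stacks beside the host chip:
for stacks in (2, 4):
    print(stacks, "stacks ->", stacks * LOW_COST_BUS_BITS, "bit,",
          stacks * per_stack_gbs, "GB/s")
# 2 stacks -> 1024 bit, 384.0 GB/s
# 4 stacks -> 2048 bit, 768.0 GB/s
```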

xHBM or HBM3 - The Next Generation of High-Bandwidth Memory Chips

With HBM2 hitting production, SK Hynix and Samsung are already prepping the next iteration of HBM memory, as demand for bandwidth and efficiency is ever increasing. SK Hynix terms their next solution HBM3 or HBMx, while Samsung calls it xHBM, or Extreme HBM.

The specifications for HBM3 have not been finalized yet and are mostly under consideration at this moment. But two key points discussed during Hot Chips and highlighted by ComputerBase reveal that HBM3 would offer twice the bandwidth and feature a very attractive price. We are talking about 512 GB/s of bandwidth from these chips, compared to the 256 GB/s offered by HBM2. Four of these stacks would result in over 2 TB/s of bandwidth. Sounds juicy, but we sure don't expect a mainstream graphics card to get that any time soon. Maybe after Volta?

Some points under consideration for next-gen HBM are cost-effectiveness, form factors, power, density and bandwidth. Currently, HBM2 can go as high as 48 GB in terms of capacity, so expect around 64 GB when HBM3 arrives.

Micron Also Brings DRAM Talk To The Table - Plans DDR5 For 2019, Calls HBM a Bad Copy of HMC

Micron DDR5 DRAM

There's a boom in the DRAM industry these days, and Micron also discussed their future DRAM roadmap at Hot Chips. The company revealed that they plan to sample DDR5 DRAM in 2018, followed by production in 2019. The key purpose behind DDR5 DRAM is to bring twice the bandwidth at just 1.1V. This would mean an increase in clock speeds, while capacities would stick to 8 - 32 GB. Rated frequencies for DDR5 memory would start at DDR5-3200 and reach DDR5-6400 once production and yields catch up.
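To put the DDR5-3200 to DDR5-6400 jump in perspective, here is a rough sketch of per-module bandwidth, assuming a standard 64-bit (8-byte) wide DIMM channel; the "twice the bandwidth" claim falls straight out of the doubled transfer rate:

```python
# Rough peak bandwidth of a standard 64-bit DIMM:
# transfers per second * 8 bytes per transfer.

def dimm_bandwidth_gbs(mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for a 64-bit wide module at a given MT/s rating."""
    return mt_per_s * 8 / 1000  # MT/s * 8 bytes -> MB/s -> GB/s

print(dimm_bandwidth_gbs(3200))  # DDR5-3200 at launch: 25.6 GB/s
print(dimm_bandwidth_gbs(6400))  # DDR5-6400 later on:  51.2 GB/s
```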


Micron also talked about their own high-bandwidth solution, HMC (Hybrid Memory Cube). The company calls HBM a bad copy of HMC, since HMC has many features that HBM cannot offer outside of bandwidth. Micron also worked on the GDDR5X solution, which entered the market with NVIDIA's Pascal-based cards, and is actively working with Intel on prepping next-generation 3D XPoint memory. You can learn more about that here. There's definitely a lot of buzz surrounding different memory architectures these days. HBM, HMC and DDR5 are going to be real game changers as we approach 2019.
