SK Hynix To Commence Mass Production of 4 GB HBM2 DRAM In Q3 2016 – Aiming at NVIDIA Pascal and AMD Polaris GPUs

Hassan Mujtaba
Posted Mar 7, 2016

SK Hynix is planning to commence production of its HBM2 DRAM in Q3 2016, as reported by Golem.de. The source reveals that SK Hynix will initiate full-fledged mass production of 4 GB HBM2 stacks in Q3 '16, followed quickly by 8 GB HBM2 stacks in Q4 '16. SK Hynix will be aiming to supply NVIDIA's Pascal and AMD's Polaris GPUs, both of which are expected to arrive in mid-2016.

SK Hynix To Commence 4 GB HBM2 DRAM Production in Q3 2016, 8 GB HBM2 Stacks in Q4 2016

SK Hynix seems to be aiming for a steady pace with its HBM2 DRAM production. Just like Samsung, which went with 4 GB HBM2 stacks first, SK Hynix is choosing a similar path, with mass production commencing in Q3 2016. That does make SK Hynix a little late, given that Samsung has already initiated production; 8 GB HBM2 stacks are planned for this year as well, although no specific date has been given yet.

SK Hynix shipped the very first HBM dies with AMD's Radeon R9 Fury series cards last year. AMD was the first GPU company to incorporate the first generation of the high-bandwidth memory standard on its cards, and this year will see an even greater influx of HBM-powered graphics cards from both NVIDIA and AMD. Samsung has already started mass production, whereas SK Hynix plans to begin in Q3 2016, which gives it time to gauge market trends for high-performance and mid-range GPUs.

In the interview with Golem.de, an SK Hynix representative stated that the company will begin HBM2 DRAM production in the second half of 2016 (Q3 2016, to be precise). The HBM2 DRAM from SK Hynix will be offered as an alternative to Samsung's HBM2 stacks and is hence expected to be used on both AMD's 14nm FinFET based Polaris GPUs and NVIDIA's 16nm FinFET based Pascal GPUs.

Image Credits: Golem.de

The new HBM2 memory makes use of beefier 8 Gb (gigabit) dies which are stacked vertically and connected through TSVs (through-silicon vias). Each package of these dies delivers 256 GB/s of bandwidth, which is twice the 128 GB/s of current-generation HBM1 memory and more than a 7x increase over a 4 Gb GDDR5 DRAM chip (36 GB/s). Several of these 4 GB HBM2 packages can be placed on a single interposer, offering higher memory capacity at better efficiency than current-generation DRAM solutions. Two of these packages on an interposer would yield 8 GB of VRAM along with 512 GB/s of total bandwidth, while four packages would mean 16 GB of VRAM and 1 TB/s of bandwidth.
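As a rough illustration of the arithmetic behind those figures, the sketch below (in Python, with illustrative function and variable names that are our own, not SK Hynix's) derives the per-stack and aggregate numbers from the 1024-bit interface and the quoted per-pin data rates.

```python
# Minimal sketch of the HBM2 bandwidth arithmetic described above.
# Assumes a 1024-bit interface per stack and the quoted per-pin data rates;
# names and structure are illustrative, not taken from any vendor documentation.

def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Bandwidth of one memory stack/chip in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8  # divide by 8 to convert bits to bytes

hbm2 = stack_bandwidth_gbs(1024, 2.0)       # 256 GB/s per HBM2 stack
hbm1 = stack_bandwidth_gbs(1024, 1.0)       # 128 GB/s per HBM1 stack
gddr5_chip = stack_bandwidth_gbs(32, 9.0)   # ~36 GB/s for a 32-bit GDDR5 chip at 9 Gbps

print(f"HBM2 stack: {hbm2:.0f} GB/s ({hbm2 / gddr5_chip:.1f}x a GDDR5 chip)")
for stacks in (2, 4):
    print(f"{stacks} stacks: {stacks * 4} GB VRAM, {stacks * hbm2:.0f} GB/s total")
```

Running it reproduces the figures quoted above: 256 GB/s per stack, 512 GB/s with two stacks (8 GB) and roughly 1 TB/s with four stacks (16 GB).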

HBM is a leading-edge memory solution which was first adopted by high-end graphics cards. It is known to have provided the best solutions possible for the implementation of 3D, 4K display, virtual reality and other upgraded functions.

The usage of HPC (high-performance computing) has extended to various applications for the purpose of processing and storing huge amounts of big data. As a high-density memory using TSV stacking technology, HBM answers the call for cost-effective memory solutions, securing its reputation as "the most effective" memory for HPC and related products.

In other words, HBM, which has advantages in high performance, low power and small form factor, will help overcome DRAM speed and density limitations, which will result in an explosive expansion of HBM demand. – via SK Hynix

Four 4-Hi HBM2 stacks of 4 GB each will give 16 GB of VRAM; increase the density of the stacks to 8 GB and you get 32 GB of VRAM off the same layout. HBM2 is very scalable in nature, allowing a wider range of SKUs than HBM1. Not only can you have different stack counts and layouts, but HBM2 also allows each stack to run at a specific speed to conserve power. All HBM2 dies from SK Hynix will run at 2.0 Gbps per pin but can also be clocked down to 1.6 Gbps or 1.0 Gbps; it all depends on the application and usage of the solution.
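To make that scaling concrete, here is a small, hypothetical Python sketch that enumerates stack counts, stack capacities and the per-pin speed grades mentioned above; the capacity and bandwidth it prints follow directly from the 1024-bit interface per stack, and the configuration list is illustrative rather than an official SKU lineup.

```python
# Illustrative enumeration of HBM2 configurations (assumed names and
# combinations, not an official SK Hynix product list).

BUS_WIDTH_BITS = 1024  # interface width per HBM2 stack

def total_config(stacks: int, gb_per_stack: int, pin_rate_gbps: float):
    """Return (capacity in GB, aggregate bandwidth in GB/s) for a layout."""
    capacity_gb = stacks * gb_per_stack
    bandwidth_gbs = stacks * BUS_WIDTH_BITS * pin_rate_gbps / 8
    return capacity_gb, bandwidth_gbs

for stacks in (1, 2, 4):
    for gb_per_stack in (4, 8):          # 4 GB (4-Hi) or 8 GB stacks
        for rate in (2.0, 1.6, 1.0):     # speed grades in Gbps per pin
            cap, bw = total_config(stacks, gb_per_stack, rate)
            print(f"{stacks} x {gb_per_stack} GB @ {rate} Gbps -> "
                  f"{cap} GB VRAM, {bw:.0f} GB/s")
```

For example, four 4 GB stacks at 2.0 Gbps come out to 16 GB and 1024 GB/s, while the same four stacks at 8 GB each yield 32 GB off the same layout, matching the figures above.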


It will be interesting to see if cards aside from high-end, enthusiast-grade parts get to use HBM2. Micron's GDDR5X solution is set to appear on several graphics cards, which we speculated about in detail in our previous article. Samsung certainly has the edge when it comes to HBM2 production since it started earlier; SK Hynix will be shipping its first samples in Q3 2016, while Samsung is expected to be mass-producing 8 GB HBM2 stacks by then. Nevertheless, 2016 seems to be a great year for memory makers as higher-bandwidth solutions remain in the spotlight.


HBM2 Specifications Comparison:

| DRAM | GDDR5 | GDDR5X | HBM1 | HBM2 |
|---|---|---|---|---|
| I/O (Bus Interface) | 32 | 64 | 1024 | 1024 |
| Prefetch (I/O) | 8 | 16 | 2 | 2 |
| Maximum Bandwidth | 32 GB/s (8 Gbps per pin) | 64 GB/s (16 Gbps per pin) | 128 GB/s (1 Gbps per pin) | 256 GB/s (2 Gbps per pin) |
| tRC | 40 ns (=1.5V) | 48 ns (=1.35V) | 48 ns | 45 ns |
| tCCD | 2 ns (=4 tCK) | 2 ns (=4 tCK) | 2 ns (=1 tCK) | 2 ns (=1 tCK) |
| VPP | Internal VPP | Internal VPP | External VPP | External VPP |
| VDD | 1.5V, 1.35V | 1.35V | 1.2V | 1.2V |
| Command Input | Single Command | Single Command | Dual Command | Dual Command |
