JEDEC Publishes HBM2 Specifications – Will Scale Up To 32GB, 8-Hi Stacks, with 1 TB/s Bandwidth
High Bandwidth Memory needs no introduction. Debuting to great acclaim in the AMD Fury series, the availability of high bandwidth to ever more powerful GPUs has been a much needed reprieve. HBM1, however, had its own limitations – chief among them a maximum capacity of 4GB and a maximum bandwidth of 512 GB/s. All this is set to change with HBM2, which will introduce capacities of up to 32 GB at 1 TB/s – more than enough to satiate the hunger of next generation graphics cards from both Nvidia and AMD.
Next generation graphics cards will have up to 32 GB of memory and 1 TB/s of bandwidth
HBM2 will essentially double the bandwidth offered by HBM1 – quite an impressive feat considering that HBM1 is already around 4 times faster than GDDR5. Not only that, power consumption will be reduced by another 8% – on top of HBM1's existing 48% reduction over GDDR5. But perhaps the most significant development is that it will allow GPU manufacturers to seamlessly scale vRAM from 2GB to 32GB – which covers pretty much all the bases. As our readers are no doubt aware, HBM is 2.5D stacked DRAM (mounted on an interposer), which means that the punch offered by any HBM memory is directly related to its stack height (number of layers).
The impact of memory bandwidth on GPU performance has been underrated in the past – something that has finally started to change with the advent of High Bandwidth Memory. Where HBM1 could go as high as a 4-Hi stack (4 layers), HBM2 can go up to 8-Hi (8 layers). The 4GB present on the AMD Fury series is a combination of four 4-Hi HBM1 stacks, each contributing 1GB to the 4GB grand total. In comparison, a single 4-Hi HBM2 stack will offer 4GB – so the Fury X layout repeated with HBM2 would actually net 16GB of HBM2 with 1 TB/s of bandwidth. Needless to say, these are very nice numbers, both in terms of real-estate utilization and the raw bandwidth offered by the medium.
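The stack arithmetic above can be sanity-checked with a quick sketch. The per-stack figures used here are the ones quoted in the article (a 4-Hi HBM1 stack provides 1GB at 128 GB/s, a 4-Hi HBM2 stack provides 4GB at 256 GB/s); the helper function is purely illustrative.

```python
# Back-of-the-envelope totals for a GPU built from N identical HBM stacks.
# Per-stack figures are taken from the article: HBM1 4-Hi = 1 GB at 128 GB/s,
# HBM2 4-Hi = 4 GB at 256 GB/s.

def board_totals(stacks, gb_per_stack, gbs_per_stack):
    """Return (total capacity in GB, total bandwidth in GB/s)."""
    return stacks * gb_per_stack, stacks * gbs_per_stack

# Fury X layout: four 4-Hi HBM1 stacks
print(board_totals(4, 1, 128))   # -> (4, 512)

# The same four-stack layout with 4-Hi HBM2
print(board_totals(4, 4, 256))   # -> (16, 1024)
```

The second result is the 16GB / 1 TB/s configuration the article attributes to Pascal.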
Of course, HBM2 is only as good as the graphics cards it's featured in. As far as use-case confirmations go, Nvidia at least, speaking at the Japanese edition of GTC, confirmed that it will be utilizing HBM2 technology in its upcoming Pascal GPUs. Interestingly, however, the amount of vRAM revealed was 16GB at 1 TB/s, not 32 GB. The 1 TB/s figure shows that Nvidia is going to be using 4 stacks of HBM2, and the capacity tells us that these will be 4-Hi stacks. Nvidia did mention, however, that as the memory standard matures it might eventually start rolling out 32GB HBM2 graphics cards. This isn't really surprising, considering that 8-Hi HBM would almost certainly have more yield complications than 4-Hi HBM. Given below is the official press release as well as the specs table for HBM2:
JEDEC Updates Groundbreaking High Bandwidth Memory (HBM) Standard
JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of an update to JESD235 High Bandwidth Memory (HBM) DRAM standard. HBM DRAM is used in Graphics, High Performance Computing, Server, Networking and Client applications where peak bandwidth, bandwidth per watt, and capacity per area are valued metrics to a solution’s success in the market. The standard was developed and updated with support from leading GPU and CPU developers to extend the system bandwidth growth curve beyond levels supported by traditional discrete packaged memory. JESD235A is available for free download from the JEDEC website.
JESD235A leverages Wide I/O and TSV technologies to support up to 8 GB per device at speeds up to 256 GB/s. This bandwidth is delivered across a 1024-bit wide device interface that is divided into 8 independent channels on each DRAM stack. The standard supports 2-high, 4-high and 8-high TSV stacks of DRAM at full bandwidth to allow systems flexibility on capacity requirements from 1 GB – 8 GB per stack.
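The 256 GB/s per-device figure in the press release follows directly from the interface width and the per-pin data rate given in the specs table below: 1024 bits per transfer at 2 Gb/s per pin, divided by 8 bits per byte. A minimal sketch of that arithmetic (the function name is illustrative, not part of the standard):

```python
# Peak device bandwidth = bus width (bits) x per-pin rate (Gb/s) / 8 bits per byte.

def device_bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    """Peak bandwidth in GB/s for a device interface of the given width."""
    return bus_width_bits * pin_rate_gbps / 8

# Full 1024-bit HBM2 interface at 2 Gb/s per pin
print(device_bandwidth_gbs(1024, 2))       # -> 256.0

# One of the 8 independent channels (128 bits each)
print(device_bandwidth_gbs(1024 // 8, 2))  # -> 32.0
```

The same formula at HBM1's 1 Gb/s pin rate yields the 128 GB/s per stack that produced Fury X's 512 GB/s total across four stacks.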
Additional improvements in the recent update include a new pseudo channel architecture to improve effective bandwidth, and clarifications and enhancements to the test features. JESD235A also defines a new feature to alert controllers when DRAM temperatures have exceeded a level considered acceptable for reliable operation so that the controller can take appropriate steps to return the system to normal operation.
“GPUs and CPUs continue to drive demand for more memory bandwidth and capacity, amid increasing display resolutions and the growth in computing datasets. HBM provides a compelling solution to reduce the IO power and memory footprint for our most demanding applications,” said Barry Wagner, JEDEC HBM Task Group Chairman.
HBM2 Specification Comparison
| WCCFTech | DDR3 | GDDR5 | 4-Hi HBM1 | 4-Hi HBM2 |
|---|---|---|---|---|
| Prefetch (per I/O) | 8 | 8 | 2 | 2 |
| Data Rate | 2133 Mbps per pin | 8 Gbps per pin | 1 Gbps per pin | 2 Gbps per pin |
| tRC | 4x–5x ns | 40 ns (=1.5 V) | | |
| tCCD | 4 ns (=4 tCK) | 2 ns (=4 tCK) | 2 ns (=1 tCK) | 2 ns (=1 tCK) |
| VPP | Internal VPP | Internal VPP | External VPP | External VPP |
| VDD | 1.5V, 1.35V | 1.5V, 1.35V | 1.2V | 1.2V |
| Command Input | Single Command | Single Command | Dual Command | Dual Command |