JEDEC Updates HBM Standard With Up To 24 GB Memory & 307 GB/s Bandwidth Support Per Stack
JEDEC has published an update to the high-bandwidth memory (HBM) standard that allows higher capacities and faster pin speeds. With the update, system manufacturers adopting HBM can take advantage of higher memory capacities and a wider range of device configurations, as HBM serves several applications including graphics, high-performance computing, servers, networking, and client devices.
JEDEC Published New Update For High-Bandwidth Memory Standard – Higher Memory Capacities of Up To 24 GB, Faster Pin Speeds
The existing HBM2 standard allows for up to 8 GB of memory per device in 4-Hi or 8-Hi stacks. The highest-profile HBM2 products in the graphics industry are currently the Tesla V100 produced by NVIDIA and the upcoming Instinct MI60 from AMD. Both cards feature 32 GB of HBM2 memory with around a terabyte per second of bandwidth (900 GB/s in the Tesla V100's case). The key here is that these cards use four HBM2 stacks, each with 8 GB of memory per stack. These stacks can be either 4-Hi or 8-Hi, depending on the memory configuration (the Tesla V100 is also available in a 16 GB HBM2 variant).
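The aggregate figures for such a four-stack HBM2 card follow directly from the per-stack numbers. A small sketch (the per-pin speed here is an assumption back-derived from the V100's quoted 900 GB/s, not an officially published figure):

```python
# Aggregate capacity and interface width for a 4-stack HBM2 card.
stacks = 4
gb_per_stack = 8
width_bits = stacks * 1024            # four 1024-bit stacks -> 4096-bit aggregate

total_gb = stacks * gb_per_stack      # 32 GB total

# Assumption: back-deriving the per-pin data rate from the Tesla V100's
# quoted 900 GB/s aggregate bandwidth (900 GB/s * 8 bits / 4096 pins).
pin_speed_gbps = 900 * 8 / width_bits # ~1.76 Gbps per pin

print(total_gb)                       # 32
print(round(pin_speed_gbps, 2))       # 1.76
```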
JEDEC standard JESD235B for HBM leverages Wide I/O and TSV technologies to support densities up to 24 GB per device at speeds up to 307 GB/s. This bandwidth is delivered across a 1024-bit wide device interface that is divided into 8 independent channels on each DRAM stack. The standard can support 2-high, 4-high, 8-high, and 12-high TSV stacks of DRAM at full bandwidth to allow systems flexibility on capacity requirements from 1 GB – 24 GB per stack.
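The 1 GB to 24 GB per-stack range falls out of die density times stack height. A minimal sketch, assuming a 4 Gb die at the low end (the standard's 16 Gb layers set the high end; the 4 Gb figure is an illustrative assumption for the 1 GB floor):

```python
# Per-stack HBM capacity in GB = die density (Gb) x stack height / 8.
def stack_capacity_gb(die_density_gb: int, stack_height: int) -> int:
    """Capacity of one TSV DRAM stack in gigabytes (8 Gb = 1 GB)."""
    return die_density_gb * stack_height // 8

# Low end: 2-high stack of 4 Gb dies (assumed density) -> 1 GB
print(stack_capacity_gb(4, 2))    # 1
# High end: 12-high stack of 16 Gb dies -> 24 GB
print(stack_capacity_gb(16, 12))  # 24
```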
This update extends the per-pin bandwidth to 2.4 Gbps, adds a new footprint option to accommodate the 16 Gb layers and 12-high configurations for higher-density components, and updates the MISR polynomial options for these new configurations. Additional clarifications are provided throughout the document to address test features and compatibility across generations of HBM components.
With the updated JEDEC standard, HBM will support up to 24 GB of memory per stack. The top-end capacity is built from 16 Gb dies, and capacities can scale all the way from 1 GB to 24 GB per stack depending on die density and the height of the stacks themselves. The standard supports 2-Hi, 4-Hi, 8-Hi, and 12-Hi TSV stacks, each with a 1024-bit wide bus interface.
A single stack will thus offer up to 307 GB/s of bandwidth. In the topmost configuration the new standard allows, four such stacks deliver 96 GB of memory and 1.2 TB/s of bandwidth across a 4096-bit wide interface. That is an unprecedented amount of memory and bandwidth from a single chip package, though it would obviously come at a significant cost.
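Those headline numbers can be checked directly from the per-pin rate and bus width given in the standard:

```python
# Per-stack HBM bandwidth in GB/s = pin speed (Gbps) x interface width (bits) / 8.
def stack_bandwidth_gbs(pin_speed_gbps: float, width_bits: int = 1024) -> float:
    return pin_speed_gbps * width_bits / 8

per_stack = stack_bandwidth_gbs(2.4)   # 2.4 Gbps x 1024 bits / 8 -> 307.2 GB/s
four_stack = 4 * per_stack             # ~1228.8 GB/s, i.e. ~1.2 TB/s
total_capacity = 4 * 24                # 96 GB across four 12-Hi stacks

print(round(per_stack, 1))             # 307.2
print(total_capacity)                  # 96
```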