GDDR6 Memory Being Developed by Micron, Coming to Mid-Range Graphics Cards in 2016 – Report Alleges [Updated]
Update: Micron contacted us shortly after this article was published and has debunked the GDDR6 story. Looks like Fudzilla got this one wrong. The company is only working on GDDR5X which is intended to provide significant performance improvements to designs that are currently using GDDR5, therefore giving system designers the option of delivering enhanced performance without dramatically altering current architectures.
The next generation of graphics cards will get a big boost in raw processing power thanks to the jump to FinFET technology. However, as any enthusiast knows, memory bandwidth can quickly become a very undesirable bottleneck. To solve this problem, AMD and JEDEC introduced HBM 1.0 to the market this year, and the much more flexible HBM 2.0 standard is expected to debut next year in flagship products. But what about the mid-range lineup? The answer to that, according to an exclusive report by Fudzilla, is an upcoming GDDR6 standard.
GDDR6 memory allegedly landing in 2016, developed by Micron
Now here is the thing: reports of an upcoming GDDR6 standard go back as far as 2012 (example from VR Zone here). There are also a few oddities in what Fudzilla is reporting, which I would like to point out here for clarity’s sake. Firstly, the source attributes the GDDR6 standard to Micron, while in actuality GDDR is a JEDEC standard and no GDDR6 specification currently exists. The new initiative is also slightly confusing because Micron is already working on the JEDEC-approved GDDR5X standard, which will offer around 2x the bandwidth of GDDR5.
GDDR5X is based on the GDDR5 standard and primarily doubles its prefetch while preserving “most of the command protocols of GDDR5”. What that means is that while the bandwidth has been doubled, it is not, strictly speaking, an improvement of the GDDR5 standard but rather a new branch of it – arguably a completely new technology (contrary to what the ‘GDDR’X name might suggest). The jump from DDR3 to DDR4 is a good approximate analogy for the GDDR5 to GDDR5X transition. Unfortunately, we do not know at this point what difference, if any, there is between GDDR6 and GDDR5X.
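As a rough illustration of what doubling the prefetch (and therefore the per-pin data rate) means in practice, here is a back-of-the-envelope bandwidth calculation. The 7 Gb/s GDDR5 per-pin rate and the 256-bit bus width are illustrative assumptions typical of mid-range cards, not figures from the report:

```python
# Back-of-the-envelope comparison of GDDR5 vs. GDDR5X peak bandwidth.
# All figures below are illustrative assumptions, not Micron specifications.

def bandwidth_gb_s(data_rate_gb_per_pin: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate (Gb/s) times bus width, over 8 bits/byte."""
    return data_rate_gb_per_pin * bus_width_bits / 8

BUS_WIDTH = 256  # bits; a common mid-range bus width (assumption)

gddr5 = bandwidth_gb_s(7.0, BUS_WIDTH)    # ~7 Gb/s per pin, a common GDDR5 speed
gddr5x = bandwidth_gb_s(14.0, BUS_WIDTH)  # doubled prefetch -> roughly 2x per-pin rate

print(f"GDDR5:  {gddr5:.0f} GB/s")   # 224 GB/s
print(f"GDDR5X: {gddr5x:.0f} GB/s")  # 448 GB/s
```

The doubling comes entirely from the per-pin data rate; the bus width stays the same, which is why GDDR5X can slot into existing board designs without dramatic changes.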
The source cited “internal sources” for the exclusive report, so it is possible that GDDR6 is actually GDDR5X rebranded (since both come from the same company). It is also possible (though not probable) that Micron is working on two different standards with JEDEC, namely GDDR5X and GDDR6, but that seems unlikely to me. At any rate, GDDR5 has evolved from its 60nm debut in 2007 to a much sleeker and more efficient 20nm version in 2015. I wouldn’t be surprised if JEDEC finally decided to define the GDDR6 standard on the 20nm process in 2016. This transition to a lower node allows much higher clocks and lower operating voltages – something that was not possible in the early days of the standard.
In any case, the adoption of GDDR5X or GDDR6 remains a question of probability. The simple fact of the matter is that most mid-range graphics cards do not need more bandwidth than the modern form of GDDR5 can already provide. And since all concerned parties have already achieved economies of scale with GDDR5, there would be very little reason to shift to a brand new standard such as GDDR5X/GDDR6 – HBM 2.0 will more than cater for the high end, where bandwidth can quickly become an actual issue.
That said, high bandwidth memory in economical packaging (as low as 2GB of HBM) will also arrive by 2016. Companies like SK Hynix have been getting over the initial learning curve, and yields are maturing by the month. Soon enough, we will see low-end HBM in mobile devices such as laptops, where the energy efficiency of the standard can do wonders. So even in the mid-range, a shift to GDDR5X/GDDR6 remains a dubious proposition at best.