TSMC CoWoS Production At Full Capacity As Demand Skyrockets – Nvidia, AMD, And More Trying To Get Their Hands On Interposers
Last month, we saw TSMC unveil the world's largest Chip-on-Wafer-on-Substrate (CoWoS) interposer. With the COVID-19 situation gripping the world, you might expect production to grind to a halt, but for TSMC it is quite the opposite. Demand for TSMC's CoWoS packaging has skyrocketed, according to DigiTimes' unnamed source. Over the past two weeks, big names such as Nvidia, AMD, HiSilicon, Xilinx, and Broadcom have all been knocking on TSMC's door for CoWoS packaging capacity to use in high-bandwidth AI accelerators and ASICs. This has led TSMC to ramp production at its fabs to full capacity.
Almost Triple The Performance Of The Previous Generation With A Bandwidth Of 2.7 TBps
As I mentioned earlier, we saw TSMC unveil the world's largest CoWoS interposer, but to appreciate that, we have to understand what CoWoS is. CoWoS is a 2.5D packaging technology that places individual dies side-by-side on a single silicon interposer. The benefit of this configuration is that you can increase density in smaller devices as you approach the practical limit of how big each individual die can be. It also delivers better power efficiency, and therefore lower power consumption, along with better connectivity between the dies.
This latest interposer, the largest ever made at 1,700 mm², allows for massive performance increases, with bandwidth of up to 2.7 TBps, a 2.7x boost over the technology TSMC made available in 2016. It supports up to 6 HBM stacks offering up to 96 GB of memory, which is leaps and bounds ahead of any other card on the market. Along with being well suited for GPU solutions, it is also well suited for 5G networking, power-efficient data centers, and more. If you would like to read more about the largest CoWoS interposer, coverage is available through this link.
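The capacity and bandwidth figures above are simple multiples of the per-stack numbers. As a rough sanity check, here is a minimal sketch, assuming typical HBM2E per-stack specs of 16 GB and roughly 460 GB/s (these per-stack values are assumptions, not TSMC-confirmed figures):

```python
# Back-of-the-envelope check of the interposer's headline numbers.
# Per-stack capacity and bandwidth below are assumed HBM2E-class values.
STACKS = 6              # HBM stacks supported on the interposer
GB_PER_STACK = 16       # assumed capacity per stack (GB)
GBPS_PER_STACK = 460    # assumed bandwidth per stack (GB/s)

capacity_gb = STACKS * GB_PER_STACK              # total memory capacity
bandwidth_tbps = STACKS * GBPS_PER_STACK / 1000  # aggregate bandwidth in TB/s

print(f"Capacity: {capacity_gb} GB")             # 96 GB
print(f"Bandwidth: {bandwidth_tbps:.2f} TB/s")   # 2.76 TB/s
```

Six stacks at those assumed rates lands right at the quoted 96 GB and roughly 2.7 TBps.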
Products that currently feature this technology include the Nvidia V100 and the AMD Radeon VII, which carry HBM2 memory on the same silicon interposer as the GPU rather than on the PCB like other cards. Placing the memory much closer to the GPU allows the bandwidth to be far higher than on GDDR6 cards, where the memory sits farther from the GPU on the PCB. This also allows for a smaller PCB, ultimately making the card more compact.
It is exciting to see where this technology is going and how it will change the future of graphics cards with higher memory capacity, higher memory bandwidth, smaller PCBs, and better power efficiency. Although this technology is quite impressive, it isn't going to be cheap, as demonstrated by both of the examples above costing a decent amount.