Intel’s First Confirmed Xe GPU Product: The Aurora Exascale Supercomputer
Intel yesterday announced its partnership with Cray to build the Aurora Exascale supercomputer, which is planned to enter service by 2021. The company did not comment on whether the supercomputer will use integrated graphics, but judging by the timeline and the workloads in question, this will almost certainly be the discrete variety of the Xe GPU. This is essentially the embodiment of the “Teraflops to Petaflops” vision revealed at the Intel Architecture Day event.
Intel’s Xe GPUs will power the world’s first Exascale Supercomputer
The Aurora supercomputer was commissioned by the U.S. Department of Energy (DoE) and is being built in conjunction with Cray. The project is planned for completion in the 2021 timeframe and will be capable of a quintillion sustained operations per second. To put that into perspective, that’s a million times a teraflop – and the average desktop processor manages around 200 GFLOPS. The deal is valued at $500 million, of which Cray takes a $146 million chunk and Intel the remaining $354 million.
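As a quick sanity check on those scales (the quintillion figure and the 200 GFLOPS desktop estimate both come from the numbers above), here is the arithmetic spelled out:

```python
# FLOPS scale sanity check for the figures quoted above.
EXA = 10**18   # 1 exaflop = a quintillion operations per second
TERA = 10**12  # 1 teraflop
GIGA = 10**9   # 1 gigaflop

aurora_flops = 1 * EXA        # Aurora's sustained target
desktop_flops = 200 * GIGA    # rough figure for an average desktop CPU

# An exaflop is a million teraflops, not "a million more than" one.
print(f"{aurora_flops // TERA:,} teraflops")          # 1,000,000 teraflops
# And it equals roughly five million of those desktop CPUs combined.
print(f"{aurora_flops // desktop_flops:,} desktop CPUs")  # 5,000,000 desktop CPUs
```

In other words, Aurora's sustained target works out to about five million average desktop processors' worth of compute.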
The supercomputer is expected to be several times faster than the ones in operation today (the best current systems deliver a sustained performance of around 200 petaflops). The peak performance of Aurora should be even higher than the quintillion mark. It will utilize Intel Xeon processors and Optane persistent memory along with Cray’s custom interconnect, called Slingshot. This announcement is particularly impressive because it would mark the first time a Top-10-capable supercomputer utilizes Intel GPUs (I know what you are thinking, and coprocessors don’t count!).
The vast majority of computing power in today’s supercomputers comes from dGPUs, so it is unclear at the moment how the quintillion-ops figure will be distributed, but I think it’s safe to assume that Intel’s Xe GPU will be capable of delivering competitive performance. Here’s the thing, though: NVIDIA is currently unmatched in the AI department. Its CUDA ecosystem has a very strong foothold in the industry, and it would take an incredibly motivated entity – such as the DoE – to unseat it.
Those familiar with systems like this know that most of the data is usually stored in vast arrays of RAM that act as de facto hard disks so the processors can have quick and easy access to it. With Intel’s Optane memory in the equation, this will become not only cheaper but faster as well (transfers from long-term storage to RAM will be much quicker), allowing for much faster turnaround between workloads.
Notably, the DoE did not comment on what the energy requirements would be, nor on any other specs (such as the number of discrete GPUs – and good on them for that, because we would undoubtedly have attempted to reverse engineer the per-card performance from that metric). Needless to say, from the looks of it, Intel isn’t having any trouble getting customers to leave the red and green pie and bite into some Xe.