This post and all materials contained inside are part of our annual April Fools joke.
The mother of all leaks from Intel has just happened. Intel recently held a high-profile "Xe Unleashed" event internally, where the GPU leadership presented their finalized Xe methodology to Bob Swan and some other key people (I am told representatives from certain AIBs, like ASUS, were also present) and, needless to say, one of them thought we should know about it as well. We were able to get our hands on some presentation slides and even footage of the actual teaser! You know the pesky little "e" in Intel Xe? Well, it represents the number of GPUs.
Intel Xe Unleashed: e denotes GPUs, Xe 2 flagship GPU will be a 'seamless' dual GPU, landing on 6/31 next year
The Intel Xe philosophy holds that innovation needs to happen on three main fronts: process, microarchitecture and "e". We are already familiar with the first two, but 'e' is something that has never been successfully implemented so far. Sure, there have been dual GPUs, but they all had to trade off some part of the functionality and never scaled linearly. Intel's graphics team believes it has solved exactly that. With a brand new architectural approach (Xe) and a software layer (OneAPI) that can scale across any number of GPUs, it's ready to remedy the years of neglect that 'e' has faced in the industry.
We managed to get our hands on three slides from the presentation that Raja Koduri gave:
This slide is the cornerstone of the Xe philosophy and the big reveal of what e actually denotes. It also reveals the existence of the X4 class of GPUs, which, as you will see, is just one step in Intel's plan to dominate the GPU market.
They have designed OneAPI to act as an intermediary between the Direct3D layer and the GPU(s) (I am told a Linux solution is in the works as well) and allow the user to scale across multiple GPUs seamlessly. Seamless is the keyword here, as a multi-GPU card that performs cohesively as a single GPU has never been made. According to the presentation shown at the Xe Unleashed event, the card will register as essentially one large GPU. This will allow it to work with applications that lack multi-GPU support and retain almost complete backwards compatibility.
Developers won't need to worry about optimizing their code for multi-GPU; OneAPI will take care of all that. This will also allow the company to beat the foundry's usual lithographic (reticle) limit on die size, which is currently around ~800mm². Why have one 800mm² die when you can have two 600mm² dies (the smaller the die, the higher the yield) or four 400mm² ones? Armed with OneAPI and the Xe macroarchitecture, Intel plans to ramp all the way up to octa GPUs by 2024. From this roadmap, it seems the first Xe class of GPUs will be X2.
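The "smaller dies yield better" argument can be illustrated with a simple defect-density model. Here is a minimal sketch using the classic Poisson yield model (per-die yield = e^(−D·A)); the defect density is an assumed, purely illustrative figure, not anything from Intel's presentation:

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_mm2: float) -> float:
    """Poisson yield model: fraction of dies expected to have zero defects."""
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

D = 0.001  # assumed defect density (defects per mm^2) -- illustrative only

# One big die vs. the multi-die splits discussed above
for count, area in [(1, 800), (2, 600), (4, 400)]:
    y = poisson_yield(area, D)
    print(f"{count} x {area} mm^2: per-die yield = {y:.1%}")
```

With these assumed numbers, a 400mm² die yields roughly 67% against about 45% for an 800mm² die, which is the whole economic case for stitching small dies together instead of building one reticle-buster.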
The tentative launch date for the first X2 class of GPUs was also revealed: June 31st, 2020. This will be followed by the X4 class sometime in 2021. It looks like Intel plans to add two more GPUs every year, so we should have the X8 class by 2024. Assuming Intel has the scaling solution down pat, it should actually be very easy to scale these up. The only concern here would be packaging yield, which Intel should be more than capable of handling, and binning should take care of any wastage issues quite easily. Neither NVIDIA nor AMD has gone down the MCM path yet, and if Intel can truly deliver on this design then the sky's the limit.
Without any further ado, here's the footage of the teaser our spy managed to take:
There you go folks, here's your very first look at the official Intel Xe GPU, or more accurately, the Intel X2 GPU. This short but rather spicy trailer gives a lot away. The design rocks a carbon fibre aesthetic with blue accents (from what I have been told, the blue stripes will glow in the dark!) and the first reference design will be made in partnership with ASUS. You can also quite clearly see two intake pipes for what appears to be an internal water loop.
My source has told me that the card will actually have two modes: a standard mode, which lets the dual GPU run at moderate clock speeds for most users, and a turbo boost mode, which, when the AIO upgrade is connected, allows the user to reach clock speeds exceeding 2.7 GHz (well, 2.71828 to be exact) on both GPUs! This is an absolutely astonishing feat that allows Intel to reduce the upfront cost of their GPU. You can either buy the card with the AIO as a package or pay less and upgrade later.
I am told that Intel is planning to be very competitive in pricing and, when asked, hinted that their flagship would be more affordable than any counterpart on the market. That means we are looking at a maximum MSRP of $699 for the X2 flagship. As far as hardware goes, the X2 GPU will be based on the new 4D XPoint memory and feature the Direct3D 14_2 feature level. Here are the complete specs that were discussed during the event:
| Wccftech | Intel Xe 2 GPU |
| --- | --- |
| Stream Processors | 12288 (6144 x 2) |
| Core Clock (Boost) | 1600 MHz (2718 MHz) |
| Memory | 32 GB 4D XPoint |
| Peak Performance | 66.8 TFLOPs |
| Die Size | 600mm² (x2) |
| Interconnect | Unknown (TSV-based) |
| Feature Level | Direct3D 14_2 |
| Power Connectors | 3x 8-pin |
| Expected MSRP | $699 (TBC) |
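For what it's worth, the peak-performance figure in those specs is internally consistent with the usual shader throughput formula (stream processors × boost clock × 2 FLOPs per clock, assuming one fused multiply-add per SP per cycle):

```python
stream_processors = 12288   # 6144 per GPU x 2
boost_clock_ghz = 2.718     # the 'e' boost clock from the spec sheet
flops_per_clock = 2         # one fused multiply-add (FMA) per SP per cycle

peak_tflops = stream_processors * boost_clock_ghz * flops_per_clock / 1000
print(f"Peak: {peak_tflops:.1f} TFLOPs")  # prints "Peak: 66.8 TFLOPs"
```

So whoever mocked up these slides at least did the FMA math.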