Exploring AMD’s And Intel’s Architectural Philosophies – What Does The Future Hold? (Part I)
[Editorial] Today we’re going to take you on a journey that will get you one step closer to seeing the world through the eyes of AMD and Intel, and to reflecting on how different, yet quite similar, those two views truly are. To understand the philosophies of both companies we also have to understand the architectures inside the products they make, and get to know intimately the reasons behind the decisions made in designing those architectures.
Intel and AMD Microarchitectures – Exploring the Past, Present and the Future (Part I)
Let’s start from the beginning. In 2006 AMD made the decision to buy ATi with the vision of fusing GPU cores and CPU cores into a single piece of silicon called the Accelerated Processing Unit, or APU for short. AMD called this effort “Fusion”, and it was the company’s primary motivation for buying ATi. In 2010 Intel released its first processor to feature integrated graphics, but just like the company’s first dual and quad core processors, the resulting product was a multi-chip module, or MCM for short. Intel called this design Clarkdale, and it was not a fully integrated monolithic solution: the graphics portion had its own separate die that sat alongside the main CPU die on the same package.
A truly integrated solution did not come from Intel until Sandy Bridge was released a year later in January 2011.
In the same year and month AMD released its first processor with integrated graphics, code-named Brazos. Unlike Intel’s first effort with Clarkdale, however, the graphics portion of the chip was integrated into a single die of silicon. With Brazos and Sandy Bridge, the era of Fusion began.
Fast forward to today and you’ll find that almost all processors have some sort of integrated graphics solution. Intel’s entire range of consumer processors has integrated graphics, except for the niche that socket LGA 2011 addresses. All of AMD’s processors launched in the past two years have integrated graphics, and the company has stated multiple times that its future is all about APUs. The only AMD products without some sort of integrated graphics solution are those on the AM3+ socket, and all of those processors are based on the same Piledriver server design from 2012.
All mobile processors, from notebooks to handhelds, have integrated graphics as well. So it’s becoming strikingly clear that integrated graphics is here to stay.
Evolution Of Integrated Graphics
Today integrated graphics processors take up a considerable percentage of the chip real estate.
The GPU in Kaveri, AMD’s latest APU, takes up 47% of the chip’s real estate: that’s almost half the die.
GPU real estate has been growing consistently for the past several years, with AMD doubling the GPU portion from Llano, its first-generation high-performance APU, to the latest Kaveri parts.
We see a very similar trend with Intel, which has increased the GPU portion of the die with each generation.
Going from Sandy Bridge, Intel’s first processor with integrated graphics built on a single monolithic die, to mainstream GT2 Haswell parts, the GPU real estate nearly doubled. And with Intel’s “Iris” GT3 graphics parts, the real estate quadrupled compared to Sandy Bridge.
Integrated graphics processors are no longer responsible for graphics alone; they can accelerate a plethora of workloads using OpenCL. QuickSync is a great example of an iGPU being used for something other than graphics processing. The same goes for AMD, whose iGPUs can be used to accelerate photo and video editing, stock analytics and media conversion.
With both AMD and Intel dedicating such a significant number of transistors in each chip to graphics, and with the steady increase in research and development resources both parties are pouring into graphics solutions, it’s evident that both companies see great value and business opportunity in the pursuit of graphics.
So why the relatively sudden and invested interest in graphics from both companies?
For the past several decades engineers relied primarily on Moore’s Law to get better performance out of their designs. Each year engineers were handed more transistors to play with, transistors that were faster and consumed less power. But in recent years Moore’s Law has been much slower to progress.
Power consumption continued to decline under the recent progression of Moore’s Law, but frequency stopped scaling as it used to, and it became exponentially more difficult to squeeze more performance out of a single serial processing unit (CPU). So engineers resorted to adding more CPU cores. But the more cores they added, the more difficult it became to write code that distributes the workload evenly across all of them; the more cores you pile up, the more severely the problem is compounded.
You end up with a multicore CPU in which only a limited number of resources see any use while running the majority of code. As a designer, that means wasted transistors, inflating the cost of manufacturing the processor without any tangible gain.
Engineers hit a brick wall trying to improve the performance of CPU designs without blowing their power budgets or resorting to extremely complex and difficult design methods. If we were to get faster processors, the processor design game had to change, and we had to turn to a technology that did not rely primarily on frequency or complexity to scale.
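The diminishing returns described above are commonly captured by Amdahl’s Law: if only a fraction of a program can run in parallel, the serial remainder caps the speedup no matter how many cores you add. A minimal sketch in Python (the 90% parallel figure here is purely an illustrative assumption, not a number from the article):

```python
def amdahl_speedup(p, n):
    """Speedup on n cores when fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Assume 90% of a program parallelizes cleanly (illustrative figure).
p = 0.90
for cores in (2, 4, 8, 16, 64):
    print(f"{cores:>2} cores -> {amdahl_speedup(p, cores):.2f}x speedup")

# However many cores we add, the serial 10% caps the speedup
# below 1 / (1 - p) = 10x.
```

Running this shows the curve flattening quickly: doubling from 8 to 16 cores buys far less than doubling from 2 to 4, which is exactly why piling on cores alone stopped paying off.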
The Solution To The Problem
The answer was GPUs: parallel processors which we can easily continue to scale with Moore’s Law.
We can continue to spend transistors adding more parallel processors to each design, instead of trying to push the frequency or complexity of a handful of CPU cores. This ensured that we could continue to scale the performance of our designs for the foreseeable future.
Parallel processors have existed for years but were limited to a few applications; they have long been used in High Performance Computing (HPC) and graphics processing, among other fields. Graphics processing was an obvious target for parallel processors: if you need to process the colors of millions of pixels dozens of times every second, a CPU is simply not going to cut it, while thousands of smaller, slower and more efficient processors are perfect for such an application.
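To see why pixel work parallelizes so naturally, here is a minimal sketch in Python. Each pixel’s grayscale value depends only on that pixel, so the work splits cleanly across any number of workers; the thread pool below is merely a stand-in for the thousands of lanes on a GPU, and the grayscale weights are the standard Rec. 601 luma coefficients:

```python
from concurrent.futures import ThreadPoolExecutor

def luma(pixel):
    """Grayscale value of one (R, G, B) pixel, using Rec. 601 weights."""
    r, g, b = pixel
    return int(0.299 * r + 0.587 * g + 0.114 * b)

# A tiny 4-pixel "frame"; a real frame holds millions of pixels.
frame = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]

# Every pixel is independent of every other, so the conversion maps
# cleanly onto parallel workers with no coordination between them.
with ThreadPoolExecutor(max_workers=4) as pool:
    gray = list(pool.map(luma, frame))

print(gray)  # one grayscale value per pixel, order preserved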
But there was a catch: not all code maps well to GPUs, and the grand majority of programmers in the world were either used to programming for a single fast serial processor or learned programming from such a person. After all, the entire industry had relied on CPUs for several decades. If we were going to turn to parallel computing in the mainstream to continue scaling performance, we had to figure out how to make it less challenging and more accessible for programmers to write code for these types of processors.
AMD made it very clear from the beginning that its goal was to build the ultimate heterogeneous processor, and all the evidence from Intel’s past actions and future roadmap suggests that Intel is pursuing the same.
AMD, being the smaller and more agile company, was faster to respond to the changes in the industry.
It was able to put its vision into practice more quickly and mold it into a strategy that began to bear fruit.
The result was the development of HSA (Heterogeneous System Architecture), the product was Kaveri, and the goal was to chase the untapped potential of heterogeneous designs and begin a new era of computing in which performance would once again scale at the rate of golden-age Silicon Valley.