[Update]: A good many people have been confused by the 128 GB/s speed of the stacked DRAM mentioned in our table, and some have conflated it with the 1 terabyte per second figure quoted by Nvidia's CEO. The speed mentioned here is not the total memory bandwidth of a GPU once all memory chips are in place; it is the bandwidth of one chip only. Notice that the table lists GDDR5's maximum as 28 GB/s, when GDDR5 GPUs clearly deliver far more than 28 GB/s in total.

[Editorial] Before I begin, a humble warning that this post might get a little technical. This generation of graphics cards is not about brute power but about efficiency and intelligent design: achieving maximum throughput while maintaining a very small footprint. Basically, true progress, and not just more transistors on a die. Nvidia demoed two critical technologies at GTC this year, namely NVLink and Stacked DRAM, aka '3D Memory'. Understandably, they gave few technical details since the demo was aimed at a general audience, but I will try to take care of that today, albeit slightly late.

NVIDIA Pascal GPU Chip Module

Nvidia Pascal: Using CoW (Chip-on-Wafer) based 3D Memory to Achieve the Next Gen GPU

Let's begin with 3D Memory. Most of you know what SoC (System-on-Chip) means, but there is a less common term I will take the opportunity to explain: CoW (cue the mundane bovine jokes), or Chip-on-Wafer. It is a technique used to place a single logic circuit directly over or under a stack of wafers. The chips are stacked and the silicon is punched through with vertical pillars called TSVs (Through-Silicon Vias) running down to the control die. In this case, it means the stacked DRAMs will be controlled by a single logic circuit, hence the name 'Chip-on-Wafer'. In all probability, Nvidia's 3D RAM will use the HBM standard, which, funnily enough, was developed by JEDEC and AMD, though the actual production will most likely be carried out by SK Hynix. Pascal's stacked DRAM design has two configurations:

Configuration 1: 2x stack (512 Gb/s) + 1 control die. This is called 2-Hi HBM.
Configuration 2: 4x stack (1024 Gb/s) + 1 control die. This is called 4-Hi HBM.

Nvidia might even introduce a configuration between these two 'traditional' configs (3 stacks) in its Pascal architecture, but that is unlikely. It could theoretically reach speeds of 2 to 4 terabytes per second by ramping up to 16-Hi or 32-Hi HBM with multiple chips on the GPU. So here we have an interesting question. Green has promised us speeds of up to 1 terabyte per second, so there is little doubt that the high-end GPUs will ship with very tall stacks; but what about the middle and lower orders? Will they ship with the same stack height or a lesser configuration? If I were to make an educated speculation, I would put my money on multiple configurations scaled across the spectrum of GPUs: the lower order with 2 + 1 layers, while the top order could have 8 + 1 layers (or more). Continuing the same speculation: HBM uses a low operating frequency and has low power requirements, so Nvidia's stacked DRAM will most probably operate at around 1.2 V with a frequency of around 1 GHz. Here is a comparison chart between traditional GDDR5 RAM and x2 and x4 stacks of DRAM with their control dies.
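To make the update note at the top concrete, here is a minimal sketch of the arithmetic. The 128 GB/s per-chip figure is from our table; the chip counts are illustrative assumptions on my part, not announced configurations:

```python
# Per-chip bandwidth of the stacked DRAM, as listed in the table above.
PER_CHIP_GBPS = 128

def total_bandwidth_gbps(chip_count):
    """Aggregate memory bandwidth once all stacked-DRAM chips are in place."""
    return PER_CHIP_GBPS * chip_count

# Illustrative chip counts only; Nvidia has not confirmed any of these.
for chips in (2, 4, 8):
    print(chips, "chips ->", total_bandwidth_gbps(chips), "GB/s")
```

With eight such chips the total comes to 1024 GB/s, which is where the roughly 1 TB/s figure quoted by Nvidia's CEO would come from.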

AMD Radeon R9 295X2 Side

AMD today officially launches their flagship dual-chip Radeon R9 295X2 graphics card, powered by two beastly Hawaii cores. The Radeon R9 295X2 is also the first reference card from a GPU manufacturer to adopt a hybrid cooler as a reference design, which adds to both the aesthetics and the cooling of the card.

AMD’s Radeon R9 295X2 ‘Vesuvius’ Bursts Vulcan-Power With Two Full Hawaii Cores

The naming of the card alone suggests the amount of power and performance it holds. Featuring a tremendous dual Hawaii chip configuration, the Radeon R9 295X2 is a feat of engineering from the AMD team. The AMD Radeon R9 295X2 'Vesuvius' features Hawaii XT cores with the GCN 1.1 architecture. Each of the two Hawaii XT GPUs on the board brings 2816 stream processors, 176 texture mapping units, 64 raster operators and 4 GB of GDDR5 memory, for a total of 8 GB of GDDR5. Combined, that's an impressive 5632 stream processors, 352 TMUs and 128 ROPs for an unprecedented amount of gaming and compute performance.

Clock speeds are maintained at 1018 MHz for the core, which is impressive considering AMD managed to clock the chip 18 MHz faster than the stock variant without worrying about heat; the new cooling design clearly keeps things well under control. The 8 GB of GDDR5 memory runs across a 512-bit x 2 memory bus clocked at 1250 MHz (5.00 GHz effective), which pumps out a total of 640 GB/s of memory bandwidth.
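That bandwidth figure follows directly from the bus width and the effective clock; here is a quick sanity check using the standard peak-bandwidth formula (nothing AMD-specific about it):

```python
def memory_bandwidth_gbps(bus_width_bits, effective_clock_ghz):
    """Peak bandwidth = bus width in bytes x effective transfer rate."""
    return (bus_width_bits / 8) * effective_clock_ghz

per_gpu = memory_bandwidth_gbps(512, 5.0)  # one Hawaii chip
print(per_gpu, per_gpu * 2)  # 320 GB/s per GPU, 640 GB/s for the card
```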


NVIDIA has officially launched their performance-boosting GeForce 337.50 BETA driver which, as mentioned, optimizes DirectX 11 performance in a wide selection of gaming titles. Geared specifically towards GeForce graphics cards, the GeForce 337.50 BETA offers both SLI and single-GPU performance boosts of 10-20% over the previous GeForce driver.

DirectX 11 API Enhancements GeForce 337.50 Driver

NVIDIA GeForce 337.50 BETA Performance Boosting Driver Official

Now, what's really interesting about the GeForce 337.50 BETA driver is that it is specifically geared to enhance performance in the DirectX 11 API. It takes a well-known approach to reducing processor overhead without moving to a new API (DX12, for example). The approach is called 'Tiled Resources', and a weak variant of the same is also used in the Mantle API. This will also grant developers more control at the assembly level, allowing them to tweak performance for specific sets of hardware. Just like AMD's Mantle API, which runs on GCN-based graphics units, the GeForce 337.50 driver will improve the performance of GeForce 500 series cards and up without the need for a separate API.

Goat Simulator logo

This Review is Not Spoiler Free.
Goat Simulator Cover Hi Res

First of all, a big thanks to Anton Westbergh (@AntonWestbergh) for handing us the review copy of Goat Simulator. Goat Simulator is an indie game developed by Coffee Stain Studios to instant critical acclaim. The game's humble origins lie in a game jam, and it was only after the Internet's enthusiastic demand that it turned into a full IP. When we heard of this game a few weeks back, we knew we had to get our hands on it.

The Goat Simulator Review: You won't believe the stuff you can do in this Baaah-rilliant Indie Masterpiece

1. Preliminary

Normally, what happens with a super-hyped game is that the studio shamelessly paints it as the Next Big Thing and the critics (mostly) shoot it down. However, just as everything about Goat Simulator is one-of-a-kind, it takes the exact opposite approach: the devs don't take it seriously, yet we think it's nothing short of epic. Coffee Stain Studios posts a witty comment in the official description:

“Goat Simulator is a small, broken and stupid game. It was made in a couple of weeks so don’t expect a game in the size and scope of GTA with goats. In fact, you’re better off not expecting anything at all actually. To be completely honest, it would be best if you’d spend your $10 on a hula hoop, a pile of bricks, or maybe a real-life goat.”


[Editorial] Skynet has been a prevalent icon in tech culture for the past decade. The fear of machines rising against humanity and of robotic overlords are frequently visited territories. Interestingly, one would expect all this technophobic media to deter us from A.I. work, but the actual effect seems to be the opposite: scientists have become more or less obsessed with achieving a thinking, rationalizing, sentient intelligence, simulated or otherwise. The Skynet of legend, I remember, was a neural-net-based artificial intelligence; it worked on the concept of "machine learning". It so happens that Nvidia has showcased what appears to be the first fully scalable Deep Neural Network based (primitive) intelligence system: a system that can deploy "machine learning" and actually learn much like a human.

SKYNET Powered by Nvidia

Disclaimer: I am about 90% sure that I am joking about Skynet.

Deep Neural Network Intelligence demonstrates Unsupervised Machine Learning at Nvidia’s GTC

I think a basic introduction to how neural nets work is in order. Of course, the way a neural net actually functions is low-level code, so I can only provide a very simplified explanation. Neural networks were first conceived as a way to simulate the human and animal nervous system, where a neuron fires for any object 'recognized'. The reasoning went that if we could replicate this trigger process with virtual 'neurons', we should be able to achieve 'true' machine learning and eventually even artificial intelligence. Now, the thing is, Nvidia (working in collaboration with Stanford University) isn't exactly the first company to achieve a working Deep Neural Network. The first DNN was created by Google. That's right, the one company powerful and ambitious enough to pioneer something like true A.I. capabilities.
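To make the 'virtual neuron' idea tangible, here is a toy sketch of a single perceptron, the simplest trigger-and-learn unit. Real deep neural networks stack millions of far more sophisticated neurons, so treat this purely as an illustration of the fire-and-adjust loop, not of anything Nvidia or Google actually built:

```python
def fire(weights, bias, inputs):
    """The neuron 'fires' (returns 1) when its weighted input crosses the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train(samples, epochs=10):
    """Classic perceptron rule: nudge the weights whenever the neuron misfires."""
    weights, bias = [0, 0], 0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - fire(weights, bias, inputs)
            weights = [w + error * x for w, x in zip(weights, inputs)]
            bias += error
    return weights, bias

# Teach the neuron the logical AND function from labelled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([fire(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

After a handful of passes over the examples, the neuron's weights settle into values that reproduce AND, which is the 'learning' part in miniature.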

AMD FirePro W9100 Official

AMD has announced their latest FirePro series, which includes the flagship FirePro W9100 graphics card based on the Hawaii GPU architecture. The Hawaii GPU leverages the performance of the FirePro graphics cards over the previous generation, which utilized the Tahiti GPU architecture.

AMD Unleashes Hawaii Based FirePro W9100 Professional Graphics Card

The AMD FirePro W9100 is the next-generation Hawaii-based professional graphics card, featuring over 5.24 TFLOPs of single-precision and 2.67 TFLOPs of double-precision performance. It's no surprise that we are looking at the full Hawaii GPU for the FirePro W9100, which means the card features 2816 stream processors, 176 TMUs and 64 ROPs. With a 1/2 FP64 rate, the card churns out impressive compute performance, enabling content creators and professionals to harness the full power of the Hawaii chip. These figures imply a core clock in the region of 930 MHz, but we will have to wait to confirm the core and memory frequencies and TDP.
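For context, peak single-precision throughput on GCN follows from the shader count and the clock, since each stream processor retires one fused multiply-add (two FLOPs) per cycle. Working backwards from the quoted 5.24 TFLOPs is a back-of-the-envelope estimate, not a confirmed spec:

```python
def peak_tflops(stream_processors, clock_mhz, flops_per_cycle=2):
    """Peak FP32 throughput in TFLOPs for a GCN-style GPU."""
    return stream_processors * flops_per_cycle * clock_mhz / 1e6

# Invert the formula: what clock does 5.24 TFLOPs imply for 2816 SPs?
implied_clock_mhz = 5.24 * 1e6 / (2816 * 2)
print(round(implied_clock_mhz), "MHz")  # ~930 MHz
```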

The AMD FirePro W9100 is also the first graphics card to standardize 4K resolution for the server and professional market by bringing 16 GB of GDDR5 memory to professionals. Display output is provided through six DisplayPort connectors, and the larger memory buffer will allow better content creation and visualization.

The performance slides presented at the event showcase several advantages over NVIDIA's Quadro K6000: 1.5 times the double-precision performance, more display outputs and a larger 16 GB frame buffer compared to 12 GB on NVIDIA's counterpart. The card pumps out over 300 GB/s of bandwidth. The rest of the card's specs are not known at the moment, and neither is the price, but we can speculate a price tag of around $4000 US, since that's what AMD sold their previous FirePro W9000 at.

NVIDIA GeForce GTX Titan Z Official

NVIDIA has just announced the GeForce GTX Titan Z, their highest-performance graphics card ever built on the GK110 core. The GeForce GTX Titan Z is a pure beast, featuring NVIDIA's flagship GK110 core architecture and a massive core count backed by 12 GB of GDDR5 VRAM.

NVIDIA Announces World's Fastest GeForce GTX Titan Z With Dual GK110 Cores

The NVIDIA GeForce GTX Titan Z is an engineering beauty powering the next iteration of enthusiast and high performance gaming PCs. The GeForce GTX Titan Z isn’t specifically aimed at gamers and is also available for developers and content creators who can now use the power of dual GK110 cores to develop rich content.

The flagship GeForce GTX Titan Z will replace the GeForce GTX 690, boasting dual GK110 cores compared to the dual GK104 cores of its predecessor. The GeForce GTX Titan Z will feature two GK110 cores with 5760 CUDA cores, 448 TMUs and 96 ROPs in total. This means the GeForce GTX Titan Z features the full GK110 specifications, as opposed to the rumors which said it would use a cut-down GK110 core with 2496 cores, 208 TMUs and 40 ROPs per chip. Other specs include a 384-bit x 2 bus running across a massive 12 GB of VRAM, an impressive feature giving developers and gamers an unprecedented amount of memory to use. The memory is clocked at a 7 GHz effective clock speed. The core clock speeds are not confirmed, but the GeForce GTX Titan Z offers a maximum single-precision performance of 8 TFLOPs, which is really impressive.


At GTC 2014, NVIDIA announced their next-generation high-performance Pascal GPU, which will feature new innovations in addition to the high-performance graphics core. The NVIDIA Pascal GPU will launch in 2016 and feature next-generation technologies to solve the bandwidth issues faced by graphics processing units.

NVIDIA Next Generation Pascal GPU To Feature 3D Stacked Memory and NVLINK

NVIDIA's Pascal GPU will replace Maxwell in 2016 and will feature NVIDIA's latest core architecture, using 3D stacked memory that allows memory to be stacked on the GPU die and enables bandwidth speeds of up to 1 TB/s. This 3D chip-on-wafer integration will not only enable much more bandwidth but will also deliver up to 4 times the efficiency and 2.5 times the VRAM capacity, for amazing performance on higher-resolution screens.

The Pascal GPU will also introduce NVLINK, the next-generation Unified Virtual Memory link with Gen 2.0 cache coherency features and 5 to 12 times the bandwidth of a regular PCIe connection. This will solve many of the bandwidth issues that high-performance GPUs currently face.
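For a sense of scale, here is that claim worked against a PCI Express 3.0 x16 baseline. The roughly 16 GB/s per-direction baseline is the standard's nominal rate and an assumption on my part; the 5x and 12x multipliers are NVIDIA's:

```python
PCIE3_X16_GBPS = 16  # ~1 GB/s per lane x 16 lanes, nominal

for multiple in (5, 12):
    print(f"{multiple}x PCIe -> ~{multiple * PCIE3_X16_GBPS} GB/s")
```

So even the low end of the range would put the GPU-to-CPU link in the same neighborhood as today's GDDR5 memory buses.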

DX12 logo

Microsoft has popped the lid on their upcoming DirectX 12 API, which aims to enhance GPU performance with low overhead and new rendering features. Most of us were expecting the new DirectX 12 API to require new hardware, but as revealed during the GDC event, that's not the case at all: DirectX 12 will support all current hardware from NVIDIA, AMD and Intel.

Microsoft’s DirectX 12 API Supports All Current GPU Architectures

This is great news for desktop users, as the majority of the PC audience uses a variety of hardware configurations from three main vendors: Intel, NVIDIA and AMD. All major hardware companies have announced that their current generation of products will fully support the new Direct3D 12 API, which will improve hardware utilization to its maximum potential so that applications get improved gaming performance, along with easier development on the developers' end.

AMD's Raja Koduri was the first to announce DirectX 12 support for their GCN (Graphics Core Next) hardware. AMD has always been early to adopt the latest DirectX APIs due to their partnership with Microsoft on GPU and API development; since DirectX 7, their GPUs have been the first to support Microsoft's new APIs, and they have been working closely with Microsoft on the development of DirectX 12. Bear in mind that AMD has also developed their own proprietary API, known as Mantle, which is available in DICE's Battlefield 4 and came to Square Enix's THIEF a few days ago.


NVIDIA has officially launched their GeForce GTX 800M mobility family, which introduces new and improved GPUs based on the Kepler and Maxwell core architectures. The GeForce GTX 800M series also supports NVIDIA GeForce Experience utilities such as GameStream and ShadowPlay, which deliver better tools for game streaming and video recording using NVIDIA's GPU core architecture.

NVIDIA Unleashes The GeForce GTX 800M Mobility GPU Lineup

The initial launch lineup of the GeForce GTX 800M series includes a mix of both Kepler and Maxwell chips. The top GeForce GTX 880M, GeForce GTX 870M and GeForce GTX 860M (Kepler variant) feature the Kepler core architecture, while the GeForce GTX 860M (Maxwell variant), GeForce GTX 850M and the yet-to-be-introduced GTX 840M and GTX 830M models feature the first-generation Maxwell core architecture.

The GeForce GTX 880M is technically the same chip as the GeForce GTX 680MX and GTX 780M, with 1536 CUDA cores, 128 TMUs and 32 ROPs, but it has some key improvements. The VRAM has been expanded to 8 GB of GDDR5, and the clock speeds have been bumped to 933 MHz base and 954 MHz boost, while the memory is clocked at a 5 GHz effective rate. The memory runs along a 256-bit interface, pumping out 160 GB/s of bandwidth, though we don't know how the larger VRAM will benefit users in terms of raw performance numbers.

mwc 2014 logo

Mobile World Congress 2014 Barcelona

The Mobile World Congress happens to be one of the biggest events of note in the tech sector. As the name suggests, it covers everything and anything to do with mobile; notably, the launches of major smartphones and tablets occur here. Of course, the major players are now drifting towards a more independent launch style for flagship products, the Apple Event being a prime example, but nonetheless MWC 2014 will be the place to be for mobile headlines and enthusiasts alike. MWC 2014 will take place in Barcelona and will begin on Monday the 24th of this month, i.e. two days from the time of writing. An important thing to note this year is the keynote address by Facebook's founder Mark Zuckerberg. Now let's take a look at what to expect from the Mobile World Congress 2014.

GeForce GTX 750 Ti

Today, NVIDIA officially announces their GeForce GTX 750 Ti and GeForce GTX 750 graphics cards, which are based on the first-generation Maxwell core architecture. With almost twice the performance per watt of its predecessors, the NVIDIA Maxwell core architecture is set to become one of the most innovative GPU architectures since Kepler in terms of efficiency.

NVIDIA Launches GeForce GTX 750 Ti and GeForce GTX 750

The first-generation Maxwell core is integrated into two graphics cards, the GeForce GTX 750 Ti and the GeForce GTX 750. These are the first mid-range cards NVIDIA has launched in their GeForce 700 series lineup, since the rest of the graphics cards were based on the GK110 or GK104 Kepler cores. The Maxwell GPU for both cards is codenamed GM107, so we will talk about it first before moving on to the specifications of the cards.

The NVIDIA Maxwell GM107 core is the first of two Maxwell-based chips built on the 28nm process. The GM107 is already available in the GeForce GTX 750 Ti and the GeForce GTX 750, while the other chip, codenamed GM108, is the base Maxwell model and will be available as part of NVIDIA's GeForce 800M mobility lineup. The GM108 will be fused into entry-level mobility chips, which include the GeForce GT 840M and possibly a few more models.


NVIDIA will also unleash their GeForce GTX Titan Black graphics card today, featuring the full compute power of the GK110 core. The GeForce GTX Titan Black is the second-generation Titan graphics card, featuring the GK110 core with its double-precision cores unlocked and faster clock speeds compared to the GeForce GTX 780 Ti.

NVIDIA GeForce GTX Titan Black Delivers Unprecedented Amounts of Power to Developers and Enthusiasts

NVIDIA initially launched their GeForce GTX Titan back in February 2013; it was the world's first graphics card to feature the GK110 core, which at the time was being mass-produced specifically for server needs. Supercomputers such as the Titan, after which the card was named, were the first to utilize the new core with its high-performance CUDA cores, 2688 in count at the time. The GeForce GTX Titan was a great card but came at a high price, so it was mostly an enthusiast-only product; even so, it outsold the GeForce GTX 690, NVIDIA's dual-GK104 card available at the same $999 US price.

Moving forward, NVIDIA introduced several GK110-based cards: the GeForce GTX 780 and GeForce GTX 780 Ti. The Ti model was the first to bring the full GK110 core to the masses (though still with limited double-precision performance). The card came in at a lower $699 US price and bested the Titan in performance. AMD, on the other hand, had some interesting Hawaii models in the pipeline with highly competitive pricing, which was a reason for their popularity among buyers. The fight between the GeForce GTX 780 Ti and the Radeon R9 290X remains unsettled, since both cards are good on their own terms: the GTX 780 Ti is faster in NVIDIA-optimized games and runs cooler with a lower TDP, but costs more; the R9 290X performs fantastically for its price and even manages to outperform the 780 Ti in AMD-optimized titles, but runs really hot and loud, and you can hardly find one on the market due to the crypto-mining craze.

AMD Mantle Logo

AMD has officially launched their 14.1 BETA driver, which adds support for the new Mantle API on GCN-enabled graphics processing units. The latest driver enables Mantle in Battlefield 4, the first game to adopt the new API, delivering faster performance and improved frame pacing.

AMD Catalyst 14.1 BETA Finally Launched – Enables Mantle API

The Mantle drivers have been released and are now available to download (please visit this page for download links and the changelog for the Catalyst 14.1 BETA driver). The patch for Battlefield 4, which added Mantle support to the game, was released a few days ago, but the public was unable to use the API until AMD's drivers were out. We stated in our previous article that AMD would give the press exclusive testing time ahead of the official launch; AMD provided us with the BETA drivers a few hours ago, so we had only limited time for testing Mantle in a few games with AMD's GCN-enabled graphics cards, which include the Radeon R7 260X, R9 270X and R9 280X.

So how does Mantle work?

The Mantle API is geared towards AMD's GCN hardware, which includes the Radeon HD 7000, HD 8000 and R200 series cards. While Mantle's stated focus is graphics processing unit utilization, from recent benchmarks we have come to the conclusion that its initial focus is to eliminate scenarios where the processor becomes the bottleneck of the entire system, causing performance degradation. The Mantle API helps by reducing the CPU load for better utilization of the system's hardware.
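A toy model of that bottleneck logic (the millisecond figures are invented purely for illustration; Mantle's real gains vary per game and system): frame rate is set by whichever side, CPU submission or GPU rendering, takes longer per frame, so trimming CPU overhead only helps while the CPU is the slower side.

```python
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    """Frame rate is capped by the slower of the two pipeline stages."""
    return 1000 / max(cpu_ms_per_frame, gpu_ms_per_frame)

print(fps(20, 12))  # CPU-bound: 50 fps
print(fps(10, 12))  # halve the CPU overhead -> ~83 fps, now GPU-bound
print(fps(5, 12))   # further CPU cuts change nothing: still ~83 fps
```

This is why the gains show up most dramatically on weaker processors paired with strong GPUs.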

top 10 logo


[Editorial] Before we begin, let me clarify the ruling criteria. None of the games chosen has been released yet; all have a tentative release date of 2014. It also goes without saying that only the media released so far could be considered. I will look not only at the graphical quality of a game but also at its scale, i.e. impressive graphical quality without enough scale doesn't win, and likewise for massive scale without enough graphical quality. To judge graphical quality, we have taken into account the art style, environment detail, post-processing, atmospheric effects, asset detail, character detail and general eye candy. All screenshots and images have been (down)scaled to 1920x1080 for comparison purposes; click the thumbnails to enlarge them. Finally, remember that beauty is relative, as is this editorial, so just sit back, relax and enjoy the ride.
