Games Hardware

Videocardz has released some pretty interesting information regarding AMD’s upcoming Never Settle Forever bundle (2014). The information mentions not only the exact games but also the cards supported. Many modern games are not included, although the bundle does contain its fair share of AAA titles from Square Enix and Eidos.

AMD’s Buyer’s Choice Game Offers

AMD Never Settle Forever 2014 Bundle – 3 Tiers scaled along the AMD Radeon GPU Lineup

Anyone familiar with AMD’s past Never Settle promotions will recognize the three-tiered reward mechanism of the offer. Basically, if you buy the low end, i.e. an R7 250 or an R7 240, you get only one game. If you buy the middle of the range, i.e. the R7 260, R9 270 or R9 270X, you get two games. And if you buy the high end of the AMD Radeon lineup, namely the R9 280, R9 280X, R9 290, R9 290X or R9 295X2, you get three games along with your brand new GPU. Here is the complete list of games that will be given to you courtesy of the AMD team, along with any restrictions or indie packages that apply.

Hardware

At the Red Hat Summit today, AMD’s server group gave the first public demonstration of its second-generation AMD Opteron X-Series APU, codenamed ‘Berlin’. The APU runs in a Linux environment based on Fedora. The Fedora Project was founded in 2003 as a result of a merger between the Red Hat Linux and Fedora Linux projects and is a community-driven Linux distribution that aims to provide developers with a familiar, enterprise-class operating environment. According to AMD, this is an important development for developers looking to migrate to x86 APU servers but unwilling to introduce new tools and software platforms into their IT environments.

Demo showcases the world’s first HSA (Heterogeneous System Architecture) featured server APU.

The Berlin demo is the world’s first showcase of Heterogeneous System Architecture in a server APU, the Opteron X-Series. HSA aims to blend scalar processing on the CPU, parallel processing on the GPU and optimized processing on the DSP, and through this it intends to expose the capabilities of mainstream programmable computing elements. The demo also highlighted a variety of new advancements, including Project ‘Sumatra’, which allows Java applications to take advantage of the GPU inside the APU. According to AMD, this combination of Linux and Java on ‘Berlin’ creates a variety of possibilities: a server platform better optimized for multimedia workloads, and new levels of workload efficiency in data centers.
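
To make the Sumatra angle concrete, here is a minimal sketch (assuming a plain Java 8 JVM; the class name, array size and kernel are illustrative, not taken from AMD’s demo) of the kind of data-parallel Stream code Sumatra is meant to offload:

```java
import java.util.stream.IntStream;

public class SumatraSketch {
    public static void main(String[] args) {
        float[] in  = new float[1_000_000];
        float[] out = new float[in.length];
        for (int i = 0; i < in.length; i++) in[i] = i * 0.5f;

        // A data-parallel kernel expressed through the Java 8 Stream API.
        // A stock JVM spreads this across CPU cores; Sumatra's goal is to
        // let the JVM offload exactly this pattern to the APU's GPU.
        IntStream.range(0, in.length)
                 .parallel()
                 .forEach(i -> out[i] = in[i] * in[i] + 2.0f);

        System.out.println("out[42] = " + out[42]);
    }
}
```

The appeal is that nothing in the source marks this as GPU code; the decision to offload the lambda would be left entirely to the runtime.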

Hardware

You thought the Geforce 700 Series was over, didn’t you? Well, think again. I am also willing to bet that if asked about an upcoming 700 Series GPU you would have answered: GTX 790. Well, Nvidia has other plans (which does not mean the 790 isn’t coming). We have just received confirmation that Nvidia is adding not one but three GPUs to the Geforce 700 Series. Namely, and in ascending order, these are the GT 705, GT 710 and GT 720.

Nvidia Geforce GT 705, GT 710 and GT 720 to Debut as OEM-Only Cards – To Have Fermi and Kepler Cores

So basically what we have here is Nvidia rolling out some low-end (and when I say low end, I mean really low end) and almost certainly OEM-only products in the Geforce 700 series. Although the GT 720 might get to see the light of retail, the GT 705 and GT 710 will probably only ever appear in pre-built OEM PCs that charge people extra for a ‘dedicated GPU’; and before this results in a flame war, the same thing happens with low-end Radeons. Pardon my presumptuousness, but some marketing tactics are borderline illegal. Rant aside, here is the spec sheet of the upcoming arrivals to the Nvidia lineup, which will probably launch completely stealthily:

GPU | Core | Core Clock | Memory Clock (Effective) | Memory | Memory Bus | CUDA Cores | TMUs | ROPs | Transistors | TDP
Geforce GT 705 | GF119-300-A1 | 810 MHz | 1796 MHz | 1 GB DDR3 | 64-bit | 48 | 8 | 4 | 292 Million | 29W
Geforce GT 710 | GK208-301-A1 | 823 MHz | 1800 MHz | 512 MB DDR3 | 64-bit | 192 | 16 | 8 | Unknown | Unknown
Geforce GT 720 | GK208-301-A1 | 902 MHz | 1800 MHz | 1 GB DDR3 | 64-bit | 384 | 16 | 8 | Unknown | Unknown
Hardware

Update - MSI has contacted us to say that the bug has been fixed through a new BIOS that was rolled out at the start of this month. The issue was related to AMD drivers and MSI’s double PWM fan control system on the TriFrozr graphics cooler. If you are running MSI’s Radeon R9 290X Lightning, you can download the following BIOS file and update your card using the LiveBIOS utility to ensure optimal performance.

http://www.msi.com/product/vga/R9_290X_LIGHTNING.html#/?div=Utility&os=Win8.1

[Report] At Hawaii’s launch there were multiple jokes about it running too hot. Well, if you are an MSI R9 290X Lightning owner, the joke might become a harsh reality for you. It turns out that a bug that ails only the 290X Lightning stops both side fans from working, causing severe heating issues. We were first notified of this critical malfunction by reader Greg Mann, and a simple forum search revealed that the problem was rampant in practically every MSI Lightning (R9 290X) out there. At the time of writing, AMD has been notified of the problem.

Catalyst 14.3 Driver Issue Causes Side Fans of the MSI R9 290X Lightning to Stop Working

Thankfully, from what we can see in the forum responses at least, all owners noticed the problem quickly enough and stopped any GPU-intensive activity while downgrading to a safe Catalyst version. It is worth noting that Catalyst 14.3 is a BETA driver and AMD has warned about bugs like this in practically every piece of documentation available. However, from a cursory glance, the issue was present even in the 14.2 BETA drivers. Here is the hierarchy of the problem:

  • Catalyst 14.3: Critical malfunction; the side fans do not work at all and the middle fan is locked at 30%. Manual override using MSI Afterburner or Catalyst Control Center is not possible. The result is almost certain damage to the GPU if it is put under continuous, heavy load.
  • Catalyst 14.2: The automatic fan feature malfunctions, so the side fans do not spin up and increase RPM when the GPU is put under load. The result is inadequate cooling and overheating. Manual override is possible.
  • Catalyst 13.12: Everything works fine.
Hardware

AMD partners are releasing their more or less similar-looking R9 295X2s in an orderly fashion, and today XFX has marked its entry with the XFX R9 295X2 ‘Core Edition’, which is more or less your reference R9 295X2 with the AMD sticker removed and a bordered XFX sticker in its place (it looks like there’s nothing there, but it’s there). It will ship in a cardboard box and will retail for about $1499 MSRP.

XFX R9 295X2 Core Edition Revealed – A Reference R9 295X2 in a Box

XFX deserves a special mention in the tech world because its story is a sad one. It used to be a prominent Nvidia partner back in the day (before 2009, to be exact) and rolled out some amazing cards; I still have my XFX GTS 250 Core Edition to this day. However, it decided to start manufacturing AMD Radeon cards as well, which was against the exclusivity understanding it had with Nvidia. The result? XFX got the boot from Nvidia’s partner program and ended up making AMD cards exclusively. Sadly, corporate ethics is a debate better left untouched. Moving on.

The XFX R9 295X2 will be your standard Vesuvius, but for those who, against all odds, do not know, here are the official specs: 28nm Hawaii XT cores with 2816 stream processors per GPU and 8 GB of memory (4 GB per GPU) paired with a dual 512-bit memory interface. The GPUs are clocked at 1018 MHz while the memory runs at a lazy 5 GHz (effective). It uses Asetek’s custom cooling solution with water blocks and a 120mm radiator fan.

Games Hardware

Sid Meier’s Civilization series has been part and parcel of the childhood of many gamers around the world, myself included. If you consider yourself a fan of the RTS and turn-based strategy genres, then the Civilization series is a rite of passage, along with other classics such as Age of Empires and Command & Conquer. Before I descend too far into nostalgia, let me clear something up: Civilization: Beyond Earth is definitely not a sequel, but technically speaking it hasn’t been stated to be a reboot (of the highly acclaimed Alpha Centauri) either. So calling “Civilization: Beyond Earth” a reboot might be a factual fallacy; a re-visitation would be more appropriate, but that would make for a shallow headline. Moving on.

Mantle API will Power Sid Meier’s Civilization: Beyond Earth – Coming to Windows, Steam OS/Linux and Mac OS

I will tell you one thing though: Sid Meier’s team has left no stone unturned in an attempt to distribute the game to the widest audience possible. It not only supports every major platform, including Windows, Steam OS, Linux and Mac OS, but will also have Mantle API support hardwired into the code. Expect Mantle-fueled performance out of the box, without the after-the-fact patches that Battlefield 4 and Thief required. It goes without saying that DirectX is supported, and unless I am mistaken, Nvidia will be working on a DX11 optimization driver for the game in tit-for-tat fashion.

And in all honesty, I heartily approve of this cat-and-mouse chase between Red and Green, because in the end the only people benefiting from this topsy-turvy race are the gamers. Here is the official statement from the PR department over at Firaxis Games about their upcoming IP:

Hardware

Gigabyte has sent out press briefings for a card that will raise many an eyebrow. The reason? It is a custom-cooled GTX Titan Black, and no, it is not a water-cooled card (the only kind that does not violate Nvidia’s AIB restriction). Gigabyte instead steps around the restriction by including the custom cooling solution alongside a full reference card and selling the bundle at slightly above MSRP. The result? You get a massive 600W WindForce 3X GPU cooler along with a reference GTX Titan Black for only 910 euros.

Gigabyte’s 600W Windforce 3X GTX TITAN BLACK – A Beast of a Cooler that will Excel at Overclocking

I must say that AIBs should be commended for circumventing Nvidia’s AIB restriction (innovation and ingenuity are the hallmarks of progress). The price bump is somewhat hard to stomach, though I guess if you are already dishing out 1k, another 200 dollars should not make a difference. Still, this goes to show that gaming companies might be looking out for gamers along with their own business motives after all. Anyway, moving on to the actual news: this particular cooler series from Gigabyte is nothing new; a common example is the 450W WindForce 3X. Gigabyte hasn’t changed much apart from the actual heat sink, which can now dissipate 200W more. The design remains more or less the same, with copper heat pipes delivering the load directly to two heat sinks separated by a few centimeters of air.

Hardware

When Nvidia showed us official slides for its upcoming DX11 ‘Optimization’ drivers, we saw massive performance increases across the board in nearly all situations. Of course, Nvidia’s 337.50 Beta driver was meant to reduce CPU overhead, so when we saw that it also gave a frames-per-second increase with single-GPU setups, we were pretty impressed. However, it turns out that the numbers portrayed, although ‘technically’ true, don’t really apply with the same magnitude to average scenarios.

Nvidia 337.50 Beta Drivers Give You a Massive 71% Increase – Just Throw in Another GPU

To put it bluntly, Nvidia’s official slides are a hair short of bad faith. See, you do get a 71% increase with these drivers in Rome Total War compared to the 335 driver set, but that’s only because the 335 drivers did not support SLI and the 337.50 Beta drivers do. Don’t get me wrong, the new drivers do give you a significant increase, but they do so very selectively, and Nvidia conveniently failed to mention that. If you play Metro 2033 you will see a significant increase in FPS, but if you are playing, for example, Crysis 3, do not expect much of an increase.

Hardware

[Update]: A good many people have been confused by the 128 GB/s speed of the stacked DRAM mentioned in our table, and have conflated it with the 1 terabyte-per-second achievable speed mentioned by the CEO of Nvidia. The speed mentioned here is not the total memory bandwidth of a GPU once all memory chips are in place; it is the bandwidth of one stack only. Notice that the table lists GDDR5’s maximum as 28 GB/s per chip, when we clearly get more than 28 GB/s out of GDDR5-equipped GPUs.

[Editorial] Before I begin, a humble warning that this post might get a little technical. This generation of graphics cards is not about brute power but about efficiency and intelligent design: achieving maximum throughput while maintaining a very small footprint. Basically, true progress, and it’s not just about adding more transistors to a die. Nvidia demoed two critical technologies at GTC this year, namely NVLink and stacked DRAM, aka ‘3D Memory’. Understandably, they did not give many technical details, since the demo was aimed at a general audience, but I will try to take care of that today, albeit slightly late.

Nvidia Pascal: Using CoW (Chip-on-Wafer) based 3D Memory to Achieve the Next Gen GPU

Let’s begin with 3D Memory. Most of you know what SoC (System-on-Chip) means, but now we have a slightly less used term which I will take the opportunity to explain. The CoW (cue mundane bovine jokes) or Chip-on-Wafer design is a technique used to place a single logic circuit directly over or under a stack of memory dies. Basically, the chips are stacked and the silicon is punched through with vertical pillars called TSVs (Through-Silicon Vias) down to the control die. In this case it means that the stacked DRAM dies are controlled by a single logic circuit, hence the ‘Chip-on-Wafer’ name. In all probability Nvidia’s 3D RAM will use the JEDEC HBM standard, which, funnily enough, was developed by JEDEC and AMD, though the actual production will most likely be carried out by SK Hynix. Pascal’s stacked DRAM design comes in two configurations:

  • Configuration 1: 2x stack (512 Gb/s) + 1 control die – known as 2-Hi HBM.
  • Configuration 2: 4x stack (1024 Gb/s) + 1 control die – known as 4-Hi HBM.

Nvidia might even introduce a configuration in its Pascal architecture that sits between these two ‘traditional’ configs (a 3-high stack), but that is unlikely. It could theoretically reach speeds of 2–4 terabytes per second by ramping up to 16-Hi or 32-Hi HBM with multiple stacks on the GPU. So here we have an interesting question. Green has promised us speeds of up to 1 terabyte per second, so there is more or less no question that the high-end GPUs will ship with very tall stacks; but what about the middle and lower orders? Will they ship with the same stack height or a lesser configuration? If I were to make an educated speculation, I would put my money on multiple configurations scaled across the spectrum of GPUs: the lower order with 2 + 1 layers, while the top order could have 8 + 1 layers (or more). Continuing the same speculation: HBM runs at a low operating frequency with low power requirements, so Nvidia’s stacked DRAM will most probably operate at around 1.2V and a frequency of around 1 GHz. Here is a comparison chart between our traditional GDDR5 RAM and x2 and x4 stacks of DRAM with their control dies.
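
As a back-of-envelope check on the per-stack figures quoted above, here is a small sketch. It assumes the first-generation JEDEC HBM organization of two 128-bit channels per DRAM die at roughly 1 Gb/s per pin, and the four-stack GPU at the end is purely hypothetical:

```java
public class HbmBandwidth {
    // Assumed first-gen HBM organization: two 128-bit channels per DRAM die,
    // each pin transferring ~1 Gb/s.
    static final int CHANNELS_PER_DIE = 2;
    static final int BITS_PER_CHANNEL = 128;
    static final double GBIT_PER_PIN = 1.0; // Gb/s per pin

    // Aggregate bandwidth of one stack with the given number of DRAM dies.
    static double stackGbps(int dies) {
        return dies * CHANNELS_PER_DIE * BITS_PER_CHANNEL * GBIT_PER_PIN;
    }

    public static void main(String[] args) {
        System.out.printf("2-Hi stack: %.0f Gb/s (%.0f GB/s)%n",
                stackGbps(2), stackGbps(2) / 8);   //  512 Gb/s =  64 GB/s
        System.out.printf("4-Hi stack: %.0f Gb/s (%.0f GB/s)%n",
                stackGbps(4), stackGbps(4) / 8);   // 1024 Gb/s = 128 GB/s
        // A hypothetical GPU carrying four 4-Hi stacks on the interposer:
        System.out.printf("4 x 4-Hi:   %.0f GB/s total%n",
                4 * stackGbps(4) / 8);             // 512 GB/s
    }
}
```

Raise the stack height or the stack count and the quoted 1 TB/s ceiling stops looking far-fetched.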

Hardware

3DMark’s Hall of Fame is one of the most prestigious proving grounds for overclockers all over the world, and recently it has been dominated by Nvidia products. However, the pros over at OcUK have landed the top spot with AMD’s R9 290X. There are four categories in total: single-GPU, dual-GPU, triple-GPU and quad-GPU configurations. OcUK’s 8Pack overclocking team is now top dog in the quad-GPU section with four R9 290Xs.

OcUK’s 8Pack Overclocking Team Lands the Top Position in the 3DMark Hall of Fame with 4 R9 290Xs

Overclockers UK’s 8Pack team is a well-known bunch of enthusiasts, and they have proven their mettle (and that of the R9 290X) once again by smashing the quad-config world record with their AMD-powered rig. For those interested, they used retail MSI Lightning R9 290Xs to get into the quad-GPU Hall of Fame of 3DMark. The complete specs were a 4-way R9 290X Lightning setup, a 4930K, Corsair Dominator Platinum C9 2730+ memory and EK water cooling.

This adds another badge to 8Pack’s growing list of achievements this year:

Hardware

*Nothing mentioned in this leak is set in stone; AMD is bound to change a few things here and there.

[Exclusive] [Editorial]: I have just recently gotten my hands on somewhat outdated, alleged AMD GPU specification details, the essentials of which should nonetheless hold true. The specifics mentioned in this post are liable to change, and the primary purpose of this leak is to serve as the basis of all future Pirate Islands speculation. This is going to be a very, very interesting post, so buckle up, but keep that pinch of salt handy.

AMD Pirate Islands GPUs: R9 390X ‘Bermuda’, R9 380X ‘Fiji’, R9 370X ‘Treasure Island’

So let’s begin. The AMD details show three GPUs, the R9 390X, R9 380X and R9 370X, namely the Pirate Islands series. According to these details, all three GPUs will be based on TSMC’s 20nm node and will be true Pirate Islands cores, which is why the nomenclature of the dies will be derivative in nature. Another thing mentioned is that all Pirate Islands GPUs will have the DirectX 12 hardware feature set. The R9 370X is slated for announcement/arrival sometime around July–August of this year. It will feature a ‘Treasure Island XTX’ core and supposedly has 1536 stream processors, 96 texture units and 48 ROPs on the uncut die. Of course, this roadmap precedes the report we got of TSMC having a little trouble with 20nm, so I am not sure how valid the time frame is anymore.

Hardware

AMD today officially launches its flagship dual-chip Radeon R9 295X2 graphics card, powered by two beastly Hawaii cores. The Radeon R9 295X2 is also the first reference card from a GPU manufacturer to adopt a hybrid cooler as the reference design, which adds to both the aesthetics and the cooling of the card.

AMD’s Radeon R9 295X2 ‘Vesuvius’ Bursts Vulcan-Power With Two Full Hawaii Cores

The naming of the card is enough to suggest the amount of power and performance it holds. Featuring a tremendous dual-Hawaii chip configuration, the Radeon R9 295X2 is a feat of engineering from the AMD team. The AMD Radeon R9 295X2 ‘Vesuvius’ features Hawaii XT cores with the GCN 1.1 architecture. With two Hawaii XT GPUs on the same board, the specifications equate to 2816 x 2 stream processors, 176 x 2 texture mapping units, 64 x 2 raster operators and 4 GB of GDDR5 memory per chip, for a total of 8 GB of GDDR5 memory. Double the per-chip numbers and you get an impressive 5632 stream processors, 352 TMUs and 128 ROPs for an unprecedented amount of gaming and compute performance.

Clock speeds are maintained at 1018 MHz for the core, which is impressive considering AMD managed to clock its chips 18 MHz faster than the stock Hawaii variant without worrying about the heat; the new cooling design clearly keeps things well under control. The 8 GB of GDDR5 memory runs across a 512-bit x 2 memory bus at 1250 MHz actual (5.00 GHz effective) clock speed. This pumps out a total of 640 GB/s of memory bandwidth.
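
The 640 GB/s figure follows directly from the bus width and the effective memory clock; here is a quick sketch of the arithmetic (class and variable names are just illustrative):

```java
public class VesuviusBandwidth {
    public static void main(String[] args) {
        int busBits = 512;         // memory bus width per GPU
        double effectiveGhz = 5.0; // 1250 MHz GDDR5, quad-pumped
        int gpus = 2;

        // Bytes moved per clock are the bus width divided by 8.
        double perGpuGBs = (busBits / 8.0) * effectiveGhz; // 320 GB/s
        System.out.printf("Per GPU: %.0f GB/s, card total: %.0f GB/s%n",
                perGpuGBs, perGpuGBs * gpus);              // 320 and 640
    }
}
```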

Hardware

We have received word that VESA has accepted AMD’s proposal and FreeSync will become a standard for DisplayPort 1.2a. FreeSync, which was brought to life to rival Nvidia’s G-Sync, will now be a much greater force to be reckoned with than it was before, when it had just a prototype to support it.

DisplayPort 1.2a to Have FreeSync Standard – Proposal Accepted by VESA

Nvidia made headlines when it launched its new monitor-to-GPU syncing technology, G-Sync. Currently, however, G-Sync-enabled monitors are pretty pricey. One thing could not be denied: if G-Sync were successful, it would almost completely (and subtly) shift the PC market to Green’s side. AMD seemed to think so as well, because it started working on its own free alternative, whose very name is a pun at Nvidia’s expense: FreeSync.

Now it seems that, with one brilliant move, AMD might just start shifting the odds in its favor again. A report by the aptly named French hardware site Hardware.fr states that VESA (the Video Electronics Standards Association) has approved AMD’s proposal, and FreeSync will now become part of the DisplayPort 1.2a standard. Do note, however, that VESA has tagged it as ‘optional’ in the standard (contradictory and oxymoronic, I know), which means it will be up to individual manufacturers whether or not to implement it in their monitor models.

Hardware Software

NVIDIA has officially launched its performance-boosting GeForce 337.50 BETA driver which, as mentioned, optimizes DirectX 11 performance in a wide selection of gaming titles. Geared specifically towards GeForce graphics cards, the GeForce 337.50 BETA offers both SLI and single-GPU performance boosts, with 10–20% enhancements versus the previous GeForce driver.

NVIDIA GeForce 337.50 BETA Performance Boosting Driver Official

Now, what’s really interesting about the GeForce 337.50 BETA driver is that it is specifically geared to enhance performance in the DirectX 11 API. The driver takes a well-known approach to solving processor overhead without moving to a new API (DX12, for example). The approach is called ‘Tiled Resources’, and a weak variant of the same is also used in the Mantle API. This will also grant developers more control at the assembly level, allowing them to tweak performance for a specific set of hardware. Just like AMD’s Mantle API, which runs on GCN-based graphics units, the GeForce 337.50 driver will improve the performance of GeForce 500 series cards and up without the need for a separate API.

Hardware

NVIDIA has revealed on its official GeForce website that the upcoming GeForce 337.50 BETA driver will launch on 7th April. The driver will bring several DirectX optimizations that should significantly boost the performance of GeForce-based graphics cards.

NVIDIA GeForce 337.50 Driver – 64-Bit Download Link (Will Be Available on 7th April)

NVIDIA GeForce 337.50 BETA Driver Available On 7th April

With several gaming titles already running on the DirectX 11 API, we can see performance enhancements across a wide variety of games, as opposed to AMD’s Mantle, which is currently enabled in only two titles (Battlefield 4 and Thief). The Mantle API is said to be featured in several upcoming gaming titles, but as long as there’s DirectX 11 in a game, NVIDIA GeForce graphics cards will get the same, if not better, performance enhancements. This does show that both GPU manufacturers are gearing up with their own experiments for the upcoming DirectX 12 API, which will enable all the low-level CPU-overhead optimizations and additional features for next-generation graphics hardware, giving both companies ample time to develop hardware that developers and programmers can utilize to the fullest.


