Friday, June 26th 2015
AMD Didn't Get the R9 Fury X Wrong, but NVIDIA Got its GTX 980 Ti Right
This has been a roller-coaster month for high-end PC graphics. The timing of NVIDIA's GeForce GTX 980 Ti launch had us putting the finishing touches on its review before our bags for Taipei were even packed. When it launched, the GTX 980 Ti set AMD a performance target and a price target. Then began a three-week wait for AMD to launch its Radeon R9 Fury X graphics card. The dance is done, the dust has settled, and we know who has won - nobody. AMD didn't get the R9 Fury X wrong, but NVIDIA got its GTX 980 Ti right. At best, this stalemate yielded a 4K-capable single-GPU graphics option from each brand at $650. You already had 4K-capable options in the form of the $650-ish Radeon R9 295X2, or a pair of GTX 970 cards. Those with no plans for a 4K display already had great options in the form of the GTX 970 and the price-cut R9 290X.
The Radeon R9 290 series launch from Fall 2013 stirred up the high-end graphics market in a big way. The $399 R9 290 made NVIDIA look comically evil for asking $999 for the card it beat, the GTX TITAN; the R9 290X, meanwhile, remained the fastest single-GPU option, at $550, until NVIDIA launched the $699 GTX 780 Ti to get people back to paying through their noses for the extra performance. Then there were two UFO sightings in the form of the GTX TITAN Black and the GTX TITAN-Z, which made no tangible contributions to consumer choice. Sure, they gave you full double-precision floating point (DPFP) performance, but DPFP is of no use to gamers. So what could have been the calculation at AMD and NVIDIA as June 2015 approached? Here's a theory.

Image credit: Mahspoonis2big, Reddit
AMD's HBM Gamble
The "Fiji" silicon is formidable. It made performance/Watt gains over "Hawaii," despite a lack of significant shader architecture performance improvements between GCN 1.1 and GCN 1.2 (at least nowhere of the kind between NVIDIA's "Kepler" and "Maxwell.") AMD could do a 45% increase in stream processors for the Radeon R9 Fury X, at the same typical board power as its predecessor, the R9 290X. The company had to find other ways to bring down power consumption, and one way to do that, while not sacrificing performance, was implementing a more efficient memory standard, High Bandwidth Memory (HBM).
Implementing HBM, right now, is not as easy as GDDR5 was when it was new. HBM is more efficient than GDDR5, but it trades clock speed for bus width, and a wider bus entails more pins (connections), which would have meant an insane amount of PCB wiring around the GPU in AMD's case. The company had to co-develop the industry's first mass-producible interposer (a silicon die that acts as a substrate for other dies), relocate the memory to the GPU package, and still make do with the design limitation of first-generation HBM capping out at 8 Gb (1 GB) per stack, or 4 GB across the four stacks on AMD's silicon, after having laid down a 4096-bit wide memory bus. This was a bold move.
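To illustrate that clock-speed-for-bus-width trade, here is a rough comparison of peak theoretical bandwidth; a simplified sketch using the widely reported figures (512-bit GDDR5 at 5 Gbps per pin on "Hawaii," versus 4096-bit HBM at 1 Gbps per pin on "Fiji"):

```python
# Peak theoretical bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

hawaii_gddr5 = bandwidth_gb_s(512, 5.0)   # R9 290X: 512-bit GDDR5 @ 5 Gbps/pin
fiji_hbm = bandwidth_gb_s(4096, 1.0)      # R9 Fury X: 4096-bit HBM @ 1 Gbps/pin

print(f"Hawaii, 512-bit GDDR5: {hawaii_gddr5:.0f} GB/s")  # 320 GB/s
print(f"Fiji, 4096-bit HBM:    {fiji_hbm:.0f} GB/s")      # 512 GB/s

# HBM runs at a fifth of GDDR5's per-pin data rate, yet the
# eight-times-wider bus yields 60% more bandwidth, at lower power.
```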
Reviews show that 4 GB of HBM isn't Fiji's Achilles' heel. The card still competes in the same league as the GTX 980 Ti and its 6 GB of memory at 4K Ultra HD (the resolution most taxing on video memory), trailing the GTX 980 Ti by just 2% there. Its performance/Watt is significantly higher than the R9 290X's. We reckon this outcome would have been impossible had AMD never gambled on HBM and instead stuck to the 512-bit wide GDDR5 interface of "Hawaii," just as it stuck to a similar front-end and render back-end configuration (the front-end is similar to that of "Tonga," while the ROP count is the same as "Hawaii's").
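For a sense of why 4K is so taxing on video memory, here's a deliberately simplified sketch of per-frame render-target footprints (only one color buffer and one depth buffer are counted; real engines allocate far more for G-buffers, anti-aliasing surfaces, shadow maps, and textures):

```python
# Rough per-frame render-target footprint at common resolutions.
# Simplified: one 32-bit color buffer + one 32-bit depth/stencil buffer.
def render_targets_mb(width: int, height: int,
                      buffers: int = 2, bytes_per_pixel: int = 4) -> float:
    return width * height * buffers * bytes_per_pixel / (1024 ** 2)

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K UHD": (3840, 2160)}.items():
    print(f"{name}: {render_targets_mb(w, h):.0f} MB")

# 4K UHD needs 4x the render-target memory of 1080p; scale that across
# every buffer an engine keeps resident, and a 4 GB card feels the
# squeeze well before a 6 GB one does.
```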
NVIDIA Accelerated GM200
NVIDIA's big "Maxwell" silicon, the GM200, wasn't expected to come out as soon as it did. The GTX 980 and the 5 billion-transistor GM204 silicon are just 9 months old in the market, NVIDIA has sold a lot of these; and given how the company milked its predecessor, the GK104, for a year in the high-end segment before bringing out the GK110 with the TITAN; something similar was expected of the GM200. Its March 2015 introduction - just six months following the GTX 980 - was unexpected. What was also unexpected, was NVIDIA launching the GTX 980 Ti, as early as it did. This card has effectively cannibalized the TITAN X, just 3 months post its launch. The GTX TITAN X is a halo product, overpriced at $999, and hence not a lot of GM200 chips were expected to be in production. We heard reports throughout Spring, that launch of a high-volume, money-making SKU based on the GM200 could be expected only after Summer. As it turns out, NVIDIA was preparing a welcoming party for the R9 Fury X, with the GTX 980 Ti.
The GTX 980 Ti was more likely designed with R9 Fury X performance, rather than a target price, as the pivot. The $650 price tag is likely something NVIDIA came up with later, after having achieved a performance lead over the R9 Fury X by stripping the GM200 down as far as it could while still getting there. How NVIDIA figured out the R9 Fury X's performance in advance is anybody's guess. It's more likely that the R9 Fury X's price would have been different had the GTX 980 Ti not been around, than the other way around.
Who Won?
Short answer - nobody. The high-end graphics card market isn't as shaken up as it was right after the R9 290 series launch. The "Hawaii" twins held their own, and continued to offer great bang for the buck, until NVIDIA stepped in with the GTX 970 and GTX 980 last September. $300 doesn't get you much more than it did a month ago. At least now you have a choice between the GTX 970 and the R9 390 (which appears to have caught up); at $430, the R9 390X offers competition to the $499 GTX 980; and then there are leftovers from the previous generation, such as the R9 290 series and the GTX 780 Ti, but these aren't really the high-end we were looking for. It was a gleeful spectacle watching the $399 R9 290 dethrone the $999 GTX TITAN in Fall 2013, as people upgraded their rigs for Holiday 2013. We didn't see that kind of spectacle this month. There is a silver lining, though: a rather big gap between the GTX 980 and the GTX 980 Ti, just waiting to be filled.
Hopefully, July will churn out something exciting (and bona fide high-end) around the $500 mark.
223 Comments on AMD Didn't Get the R9 Fury X Wrong, but NVIDIA Got its GTX 980 Ti Right
Most people I've met who admit to being in the PR/marketing business don't seem overly interested in anything except their own great ideas - any conversation that veers towards the technical, and their eyes glaze over. It's like trying to explain football tactics to your non-sports-minded girlfriend.
I'm not even sure that those (I'm talking PR people, not GFs, though YMMV) who shut up long enough to allow information to be imparted actually take in anything that can't be expressed in buzzwords and slogans.
Kinda like what Microsoft does when they get one of their people out to bang the drum for their 'love of PC gaming' and how they have 'lost their way' and 'faltered in the past' but are willing to try to make things right again. About 6 months down the road, that particular guy who was the face of Microsoft's PC gaming PR campaign has left the company, and absolutely fuck-all progress has been made on the promises that were made in the interview. In fact, M$ would probably disavow any knowledge of that guy being employed by them and the message that he was sent to deliver to the PC gaming community.
-----
IMO, unless AMD gets their shit together, there is no point calling them up for interviews and shit about their new products. It's just wasting everyone's time.
When NVIDIA launched the GTX 980 and GTX 970, they talked about the tech behind them, the new features, their future plans. None of that happened for the AMD 300 series release. And I don't think I'm the only one who shares this opinion...
Why on earth would people want to buy a Fury X, other than because they're dumb or blind AMD fanboys?
It's just an opinion, let's keep it civil :)
Also, it's kinda odd to ask people to keep it civil when you basically said everyone who doesn't agree with you is either dumb or a blind AMD fanboy.
EDIT: If AMD could have shown what a great overclocker it is, I'd be buying one. That, for me, was its only real weakness. It trades blows with the 980 Ti at my current gaming resolution, but the overclocking killed it for me.
Having an AIO water cooler setup is a real burden. The least they could have done was make it single slot, put some holes on top, put the pump outside, and let you just hook up your own setup if you have one already. The least...
They could have, but then again it would cost more, and AMD's only real advantage at the moment is its price compared to the 980 Ti.
Cards with full waterblocks pre-installed will be available shortly for the Fury X. I heard AMD didn't allow the other vendors to make their own custom cooling solutions, so that's why a lot of the cards are physically exactly the same.
RE: AMD's PR dept. They shit the bed when Steamroller tiptoed into the limelight. They were promising the Moon leading up to that launch, and we all know what they delivered. As I remember, most of them (PR flacks) were canned soon afterwards.
Now, I hear that they were giving some review sites crap about past negative comments concerning AMD. They refused Fury samples after not-so-good reviews of their R9 390 cards.
AMD needs to understand that we need to be able to trust our favorite reviewers to tell the truth about these products. Especially when you consider the price that we're expected to pay for them.
If they continue down the path of information suppression, I'll never buy from them again.
Yeah, the Fury line looks good to me and I want one or two in the future, but I'll go with their competition if they're playing reindeer games.