Friday, June 26th 2015
AMD Didn't Get the R9 Fury X Wrong, but NVIDIA Got its GTX 980 Ti Right
This has been a roller-coaster month for high-end PC graphics. The timing of NVIDIA's GeForce GTX 980 Ti launch had us putting finishing touches on its review with our bags for Taipei still not packed. When it launched, the GTX 980 Ti set AMD both a performance target and a price target. Then began a three-week wait for AMD to launch its Radeon R9 Fury X graphics card. The dance is done, the dust has settled, and we know who has won - nobody. AMD didn't get the R9 Fury X wrong, but NVIDIA got its GTX 980 Ti right. At best, this stalemate yielded a 4K-capable single-GPU graphics option from each brand at $650. You already had those in the form of the $650-ish Radeon R9 295X2, or a pair of GTX 970 cards. Those with no plans for a 4K display already had great options in the GTX 970 and the price-cut R9 290X.
The Radeon R9 290 series launch of Fall 2013 stirred up the high-end graphics market in a big way. The $399 R9 290 made NVIDIA look comically evil for asking $999 for the card it beat, the GTX TITAN; the R9 290X remained the fastest single-GPU option, at $550, until NVIDIA launched the $699 GTX 780 Ti to get people back to paying through their noses for the extra performance. Then there were two UFO sightings in the form of the GTX TITAN Black and the GTX TITAN-Z, which made no tangible contributions to consumer choice. Sure, they gave you full double-precision floating point (DPFP) performance, but DPFP is of no use to gamers. So what could have been the calculation at AMD and NVIDIA as June 2015 approached? Here's a theory.
Image credit: Mahspoonis2big, Reddit
AMD's HBM Gamble
The "Fiji" silicon is formidable. It made performance/Watt gains over "Hawaii," despite a lack of significant shader architecture performance improvements between GCN 1.1 and GCN 1.2 (at least nowhere of the kind between NVIDIA's "Kepler" and "Maxwell.") AMD could do a 45% increase in stream processors for the Radeon R9 Fury X, at the same typical board power as its predecessor, the R9 290X. The company had to find other ways to bring down power consumption, and one way to do that, while not sacrificing performance, was implementing a more efficient memory standard, High Bandwidth Memory (HBM).
Implementing HBM right now is not as easy as GDDR5 was when it was new. HBM is more efficient than GDDR5, but it trades clock speed for bus width, and a wider bus entails more pins (connections), which would have meant an insane amount of PCB wiring around the GPU in AMD's case. The company had to co-develop the industry's first mass-producible interposer (a silicon die that acts as a substrate for other dies), relocate the memory onto the GPU package, and still make do with the design limitation of first-generation HBM capping out at 8 Gb (1 GB) per stack, or 4 GB across the four stacks on AMD's silicon - all after having laid down a 4096-bit wide memory bus. This was a bold move.
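To put the "clock speed for bus width" trade-off in numbers, here is a small sketch of the bandwidth and capacity arithmetic. The per-pin data rates and stack sizes are the commonly quoted first-generation HBM and "Hawaii" GDDR5 figures, assumed here rather than taken from the article:

```python
# Rough bandwidth and capacity arithmetic behind the "wide but slow" trade-off.
# Per-pin data rates and stack sizes are commonly quoted HBM1 / Hawaii-GDDR5
# figures - assumptions, not numbers stated in the article.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: bus width x per-pin data rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

fiji_hbm = peak_bandwidth_gbs(4096, 1.0)     # 4096-bit bus, ~1 Gbps per pin (500 MHz DDR)
hawaii_gddr5 = peak_bandwidth_gbs(512, 5.0)  # 512-bit bus, 5 Gbps per pin

print(f"Fiji HBM:     ~{fiji_hbm:.0f} GB/s")      # ~512 GB/s
print(f"Hawaii GDDR5: ~{hawaii_gddr5:.0f} GB/s")  # ~320 GB/s

# First-generation HBM capacity limit: four stacks of 8 Gb (1 GB) each.
stacks, gb_per_stack = 4, 1
print(f"HBM1 capacity: {stacks * gb_per_stack} GB")  # 4 GB
```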
Reviews show that 4 GB of HBM isn't Fiji's Achilles' heel. The card competes in the same league as the 6 GB GTX 980 Ti at 4K Ultra HD, the resolution that taxes video memory the most, and is just 2% slower than the GTX 980 Ti there. Its performance/Watt is significantly higher than the R9 290X's. We reckon this outcome would have been impossible if AMD had never gambled on HBM and had stuck with the 512-bit GDDR5 interface of "Hawaii," just as it stuck with a similar front-end and render back-end configuration (the front-end resembles that of "Tonga," while the ROP count is the same as "Hawaii's").
NVIDIA Accelerated GM200
NVIDIA's big "Maxwell" silicon, the GM200, wasn't expected to come out as soon as it did. The GTX 980 and the 5 billion-transistor GM204 silicon are just 9 months old in the market, NVIDIA has sold a lot of these; and given how the company milked its predecessor, the GK104, for a year in the high-end segment before bringing out the GK110 with the TITAN; something similar was expected of the GM200. Its March 2015 introduction - just six months following the GTX 980 - was unexpected. What was also unexpected, was NVIDIA launching the GTX 980 Ti, as early as it did. This card has effectively cannibalized the TITAN X, just 3 months post its launch. The GTX TITAN X is a halo product, overpriced at $999, and hence not a lot of GM200 chips were expected to be in production. We heard reports throughout Spring, that launch of a high-volume, money-making SKU based on the GM200 could be expected only after Summer. As it turns out, NVIDIA was preparing a welcoming party for the R9 Fury X, with the GTX 980 Ti.
The GTX 980 Ti was more likely designed with R9 Fury X performance, rather than a target price, as the pivot. The $650 price tag is likely something NVIDIA settled on later, after achieving a performance lead over the R9 Fury X by cutting the GM200 down only as far as it could while staying ahead. How NVIDIA figured out R9 Fury X performance in advance is anybody's guess. It's more likely that the price of the R9 Fury X would have been different if the GTX 980 Ti weren't around, than the other way around.
Who Won?
Short answer - nobody. The high-end graphics card market isn't as shaken up as it was right after the R9 290 series launch. The "Hawaii" twins held their own and continued to offer great bang for the buck until NVIDIA stepped in with the GTX 970 and GTX 980 last September. $300 gets you not much more than it did a month ago. At least now you have a choice between the GTX 970 and the R9 390 (which appears to have caught up); at $430, the R9 390X offers competition to the $499 GTX 980; and then there are leftovers from the previous generation, such as the R9 290 series and the GTX 780 Ti, but these aren't really the high end we were looking for. It was a joy to watch the $399 R9 290 dethrone the $999 GTX TITAN in Fall 2013, as people upgraded their rigs for the holidays. We didn't see that kind of spectacle this month. There is a silver lining, though: there is a rather big gap between the GTX 980 and GTX 980 Ti just waiting to be filled.
Hopefully, July will churn out something exciting (and bona fide high-end) around the $500 mark.
223 Comments on AMD Didn't Get the R9 Fury X Wrong, but NVIDIA Got its GTX 980 Ti Right
Let me explain to you what you just posted
I see people waving flags and pointing fingers at a specific company all day, every day, for any reason or no reason. My apologies if I'm saying that it is not that bad to point in the opposite direction occasionally, when there is a true reason to do so.
Yes, it happened to me 100s of times, over a little over a month - that's 40 days. It amounts to maybe 2-3 times a day, and I'm in front of my computer way more than most people, probably at least 12 hours a day. Furthermore, it was far less common on single-GPU systems. Again, it happened ONCE on my GTX 960 rig (which I actually use more than my SLI GTX 970 rig). So the problem is even more localized to a very small subset of users, as multi-GPU setups are still not very common. So, yeah, it really wasn't a big deal.
And, again, they fixed it in just about a month. I have yet to see AMD fix any bug in a month. It took them years to fix the HDMI scaling bug in their drivers, and that was just a registry setting that their drivers wrote wrong. And when they released the Omega drivers they actually made the bug worse for a lot of users.
At this point you're just trying to make a mountain out of a mole hill.
People wanted some pictures, so here. Yeah, I know with the flash on it looks kinda dirty, lol... I live with smokers... it isn't that bad in person.
I have to remove the Zalman, mount the radiator, put the Zalman back in, then install the card into the slot.
www.techpowerup.com/forums/threads/amd-radeon-r9-fury-x-4096-mb.213728/page-8#post-3304205
Executives like Richard Hundley don't help (I personally think he's a terrible "face" for promoting gaming for AMD), while CTO Joe Macri is a fool for opening his mouth. Again, controlling the message continues to be AMD's biggest issue!
Moving on, the thing that's not discussed/covered here is that there doesn't appear to be a Fiji FirePro professional variant coming, which means AMD has only gamers to spread costs over, and that's a hard pill. I've also understood that the GM200 is only for gaming, or semi-pro duty as the Titan X, and even that is less relevant this time around as the DP compute isn't there, so again they're selling more or less just to gamers (not going to be easy to hold at $999).
So we have two maxed-out 28nm chips that both need to sell, and both just to gamers. The big issue is that in their current configuration and at their current price, they're primarily meant for 4K. 4K isn't truly rolling heavily into the "Enthusiast" crowd yet, while for 1440p either card is a pricey price of admission. Plus, 1440p monitors aren't exactly priced to light a fire under the "Mainstream," especially the ones guys are hoping for with the FreeSync and G-Sync stuff. So both made something, and I'm not sure the market is there to sustain it.
While AMD is saying they have two lower parts to come, is there a GM200 that Nvidia intends to bring at a lower price? This might be the crux of what we have: AMD can bring Fiji/HBM to a lower price point to get volume pricing, but it hardly matters if monitor pricing is still the barrier. And there lies another problem... FreeSync to the masses! AMD can say it's out there, that it doesn't take special circuitry or cost, but so far the panel manufacturers aren't moving toward it, or if they are, they're demanding an extortionate up-charge. Why is that?
Oh, and well done @john_ . Nice derail of the thread. What started out as a discussion about the Fury X and 980 Ti included an innocuous post about performance being held back by drivers, which you promptly turned into a one-man crusade regarding non-performance TDR issues.
To bring it back on topic: does this make the Fury X a grand success, and the 980 Ti a failure?
And if we hold Nvidia to a higher standard on software because it makes up a substantial part of their products, do we not hold AMD's hardware to the same higher standard? There was considerable uproar over Nvidia capping voltage limits on their Kepler/Maxwell cards, so how much more scrutiny should AMD be under for locking it down completely? Moreover, if this is just a temporary measure, why not allow it from the start? Board partners had access to the card for some time, yet are totally unable to work voltage control into their OC utilities? A sceptic might surmise that allowing voltage control for OC'ing might just have impacted reviews negatively. Overclocking doesn't seem to yield high real-world returns, but I'm pretty sure turning up the voltage would add more power, increase heat, and in turn require that Gentle Typhoon fan to spin a little faster than the 1500 rpm it was sent out with.
Ha, if it were only that simple. Success for a consumer product is usually defined by sales, by feedback from both the industry and consumers, and by whether the product realized all or most of its goals. By all accounts the Fury's launch was delayed considerably by performance issues with software. If the company had launched the card in a timely manner when the hardware was ready and pre-empted the 980 Ti (and possibly the Titan X), it would have presented a completely different picture of the card to both the masses and consumers. A Fury X with no GM200 competition would have been raved about in every public forum imaginable. The card now suffers not from its own image, but by comparison with the competition. Would AMD judge what would have been a PR bonanza, but is now "nice try" territory, a success? Sales are brisk, but I doubt there is any significant volume to speak of, and how many people who held their breath waiting for the Fury X exhaled when it dropped, and opted to either sit it out or buy a 980 Ti?
So, regarding your (I'm presuming largely rhetorical) question, I would point you towards the TPU poll, and ask whether the voting would be the same if the 980 Ti option were replaced with just the Titan X at $1K, or with the GTX 980 in its place.
So it probably requires a definition of success. Yes, the card is selling, but there also seems to be a general air of "meh" and deflation. I'm betting that AMD banked on a little more than "meh" for the largest GPU in their history by a considerable margin, one utilizing a revolutionary memory technology (discounting that Intel got 2.5D stacked memory into a product some time ago). Personally, it is a storm in a teacup. The card isn't bad, but from my viewpoint it isn't great either. Reference only and voltage-locked doesn't scream "BUY ME!" to the enthusiast tinkerer in me. In the greater scheme, I suspect that the card will sell, and will continue to do so when AMD institutes its inevitable price cuts, and in a year we'll be debating the veracity of Arctic Islands and Pascal benchmarks popping up on gossip sites reprinting some Chinese adolescent's first attempt at making a bar chart in Excel.
This is what you just posted
PS: From the driver download page - it does not say "multi-GPU setups," because it happens with single-GPU setups also. You may continue trying to lie about that; it's your right. OK, then go to the other thread where you tried to make me look bad and instead managed to humiliate yourself by proving all my points.
Let me help you find the post
AMD Doesn't Trust its Own Processors - Project Quantum Driven by Intel Core i7-4790K | Page 8 | TechPowerUp Forums
Keep up the capslock, bolding, and mouth foaming - it really adds to your argument.
BTW: My bad on the other thread. I should have done some more research. Feel better now?