
AMD Radeon R9 Fury X 4 GB

Umm, that's the same story as this review. In some tests the Fury X came out on top of the 980 Ti, but when it did it was only by 1-2%; in the tests where the 980 Ti came out on top, the margin was larger, tilting the overall average in the 980 Ti's favour. Even in the games where the Fury X is faster, the 980 Ti delivers almost exactly the same gaming experience. The reverse isn't true, however: the Fury X is significantly slower in several tests, so much so that it might affect detail level or even playability in some cases.
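To see the averaging effect, here's a throwaway Python sketch with made-up per-game deltas (not the review's actual numbers): a couple of 1-2% wins get swamped by a few double-digit losses.

```python
# Toy per-game deltas for Fury X vs. 980 Ti (made-up numbers, not review data):
# small wins one way, larger losses the other.
fury_x_vs_980ti = [+0.02, +0.01, -0.10, -0.15, +0.01, -0.08]

average = sum(fury_x_vs_980ti) / len(fury_x_vs_980ti)
print(f"average delta across games: {average:+.1%}")  # -4.8%
```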
If you count those Gameworks titles, I don't think we would have a nice discussion.
On another note, here is the latest driver for the Fury X:
http://www2.ati.com/drivers/amd-catalyst-15.15.1004-software-suite-win8.1-win7-64bit-june20.exe
Notice the june20 part.
 

No doubt the drivers will help performance, but you're really quite 'graspy' when it comes to propping up AMD. They don't need your help - the card's good, just not what (people like you) hyped it up to be.

EDIT:

Your link doesn't work.
 
Someone at AMD doesn't know the difference between GB (gigabyte) and Gb (gigabit)...

[image: AMD marketing material writing "Gb" where "GB" is meant]
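For anyone who wants the difference spelled out: a byte is eight bits, so the units are a factor of eight apart. A quick sanity-check sketch (the 4 GB / four 1 GB stacks figure is the card's spec; the rest is illustration):

```python
# GB (gigabytes) vs Gb (gigabits): 1 byte = 8 bits.
capacity_gb = 4                  # Fury X ships with 4 GB of HBM (four 1 GB stacks)
capacity_gbit = capacity_gb * 8  # = 32 Gb

print(f"{capacity_gb} GB = {capacity_gbit} Gb")
# A "4 Gb" label would claim only 512 MB, an eighth of the real capacity.
```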
 

Not sure what the situation is there. Warner Brothers has pulled the PC version of the game for the time being. I don't think at this point that they're just having a hissy fit because they got called out for releasing a buggy game, but it's not like they didn't know from beta testing what the issues with the game were to begin with. Hopefully they will do their job and fix the game.

http://arstechnica.com/gaming/2015/...-pulled-from-steam-and-retailers-due-to-bugs/
 
Technical question based on statement:
If AMD designed it to work with an AIO water cooler from the start (and not air cooling), does this mean the chip produces a lot of heat? On such a small PCB, would this create heat-dissipation problems?
I think that about sums it up. The big problem, it seems, is that the HBM stacks sit in close proximity to the GPU (and under the same heatsink), while the PCB acts as a heatsink itself for the VRMs. As someone else noted, the high localised temperature is the same scenario, albeit not as drastic, as found on the 295X2: heat from all sides, with minimal internal airflow, reliant upon a cold plate on one side to heatsink the entire card. The heat buildup is, by my reckoning, largely behind the decision to voltage-lock the GPU and clock-lock the HBM at Hynix's default settings. We may never know unless PowerColor or some other vendor gets a full-cover waterblock version of the card out, with sanction from AMD to relax the voltage lock (BTW: wasn't there a huge furore when Nvidia voltage-locked their cards?).
I genuinely think Fury (non X) is air cooled on slower clocks because it thermally will struggle.
I concur. AMD's GPUs have about the same power requirement as Nvidia's, but over the last few generations they have had greater issues with thermal dissipation (maybe the higher transistor density?). The ideal situation would be (TECs aside) a heatsink fed directly to a vapour chamber, with the HBM stacks cooled by fan air and ramsinks, but I think the HBM stacks' proximity to the GPU makes that a tricky assembly job, as well as adding some unwelcome (for the vendors) expense in machining tolerances.
I really don't think there is any headroom for air cooling, and even then, the water cooling isn't allowing higher clocks. It could be an HBM integration issue, with heat affecting performance? Who knows.
A bad IC on a GDDR5 card means the RMA involves removal and replacement. An RMA of a Fury X means the whole package probably gets binned and the GPU salvaged for assembly onto another interposer package by Amkor. Warranty returns mean bad PR in general, but the physical cost on a small-volume halo product like the Fury X could also be prohibitive.
But isn't that more of a built-in of Nvidia GameWorks, i.e. ShadowWorks? Honestly, I personally find Very High looks very... unnatural.
Doesn't matter in the greater scheme of things. How many AMD users vehemently deride, ignore, and refuse to buy any Nvidia-sponsored title? Yet the titles still arrive, and if they are AAA titles, they sell. If they sell, tech sites are bound, by common sense and page views if nothing else, to do performance reviews of them (and, if popular enough, include them in their benchmark suites). Benchmarking involves the highest image quality at playable settings for the most part, and the highest game i.q. in general.
Now, bearing that in mind, and seeing the Dying Light numbers, what do you suppose Nvidia are likely to shoehorn into any of their upcoming sponsored games for maximum quality settings?
Medium is about the top limit where you have defined but softened shadows that are more true to life. Even the differences between low and medium are hardly noticeable.
In practical terms, no it doesn't. In marketing terms? ...well, that's an entirely different matter. DiRT Showdown's advanced lighting enabled lens flares to warm the heart of J.J. Abrams. Minimal advancement in gaming enjoyment (less, if overdone lens flare is a distraction), but it's the default testing scenario.
There have been comparisons to the Gigabyte GeForce GTX 980 Ti G1 review W1zzard did prior to this. Looking at that on the subject of power under gaming, it's revealing.
The G1 came in 23% higher than the reference 980 Ti in both average and peak draw. Avg: 211 vs. 259 W; Peak: 238 vs. 293 W.
The Fury X numbers... Avg: 246 W; Peak: 280 W.
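For what it's worth, those 23% deltas fall straight out of the review figures; a quick Python check:

```python
# Re-deriving the quoted power deltas from the two TPU reviews' numbers.
ref_avg_w,  g1_avg_w  = 211, 259   # average gaming power: reference 980 Ti vs. G1
ref_peak_w, g1_peak_w = 238, 293   # peak gaming power

print(f"avg power delta:  {g1_avg_w / ref_avg_w - 1:+.0%}")    # +23%
print(f"peak power delta: {g1_peak_w / ref_peak_w - 1:+.0%}")  # +23%
```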
One is a non-reference card sporting a very high 20% overclock out of the box; one is a reference card. A 20% overclock equates to 23% more power usage (and 27% higher performance than the Fury X at 4K). What kind of wattage do you suppose a Fury X would consume at 1260 MHz? Now, this observation aside, what the hell does your long-winded power usage diatribe have to do with my quote that you used to launch into it?
 
what do you suppose Nvidia are likely to shoehorn into any of their upcoming sponsored games for maximum quality settings?
IDK, I suppose we need to see what Warner Bros. can do to fix the PC version of Batman: Arkham Knight?

In practical terms, no it doesn't.
Like belly buttons, we all get an opinion. Mine is that running the very-high shadow map adds no visual realism for me.

and 27% higher performance than the Fury X at 4K
Odd, since the 980 Ti and Fury X are close at 4K, ~2%... W1zzard's review of that G1 appears to indicate it's more like 15% over a reference 980 Ti.

Now, this observation aside, what the hell does your long-winded power usage diatribe have to do with my quote that you used to launch into it?
Nothing; that was a new paragraph and a new topic, not directed at you, just comparing published data. And that's the part you "decry" as long-winded?
 
Oh. Oh my. This is beautiful. It's exactly what I expected. A card that underperformed, didn't OC for crap, and was overpriced because of the CLC. I can't believe people really expected a card that shipped stock with an AIO water-cooling solution would OC well. Of course it gets good temps; the cooler is like $100 tacked onto the price tag. The power consumption is still garbage, and the performance at sub-4K resolutions is pretty bad. If this were priced around $500 or even $549.99, it would be a decent product, but at $650 I see literally no reason to buy this thing.



TPU already kind of debunked the idea of games using 4GB+ at higher resolutions. For some reason people refused to believe that games wouldn't quickly climb to using 6GB or 8GB. Then, almost immediately after that article was published here, they reviewed The Witcher 3, which used ~2GB of VRAM at 4K, which kind of solidified the idea that VRAM saturation wasn't a real problem; lazy optimization was. A lot of games can use 6-8GB of VRAM, but most of them don't need to. There are plenty of games that adapt to the amount of VRAM available, too. I remember seeing reviews of BF3 and BF4 where, on cards with more than 4GB of VRAM, usage would be 3.5GB give or take, while 3GB and 2GB cards would use nearly all of it, with no real performance loss.

But you are considering only stupid old polygonal approximative graphics, without even raytracing. Next-gen graphics with a lot of recursion is where you can start needing quite an ugly amount of memory, and that makes hardware with plenty of it more future-proof. You may have noticed that Nvidia kind of made the first step towards monetizing voxelized graphics with their VXGI two-bounce illumination model (with a $hitload of pep-talk trying to suggest that Nvidia actually invented everything that spells graphics). What I wanted to say is that waiting for HBM v2 is just more waiting and losing time, and Fury is a product that's a year late. It's been at least two years of prolonging, rebranding old stuff and losing time from both camps (while Intel comfortably waits in hiding with their shoddy IGPs integrated into CPUs, mostly raising prices to little use).

What strikes me, though, is that many people have jumped on the train of believing that competition is a needed thing. In fact it is needed only by a minority of creatures (or groups thereof) of a predatory nature, who use it to eliminate or control a subject that poses a threat, because it's basically a war: whoever runs out of resources first loses, or at least isn't allowed the originally intended share of the outcome by the will of the stronger entity. On the other hand, there's a much greater power called synergy, which is usually beneficial for all partakers thanks to unity and coherency. I wonder when this world will finally realize that and start spelling this word. Probably not before there are fewer than half a billion people left after the mutual wars we are heading towards, as members of society keep preying on each other because of competition, and because we do what politicians backed by enforcers say via media outlets, not what we would stand for ourselves with our free but scattered, disunited, incoherent minds. Whatever... the price is unbearable for me anyway, regardless of the hardware's performance. See ya in the next discussion. :ohwell:

Compassion and Science are the only things that can save mankind.
true
 
Odd, since the 980 Ti and Fury X are close at 4K, ~2%... W1zzard's review of that G1 appears to indicate it's more like 15% over a reference 980 Ti.
My bad on using a wrong baseline figure, but your 15% is wrong.
[chart: relative performance at 3840×2160 from each of the two reviews]


100 / 85 = 1.1765, or 17.65% higher. To place the G1 in the Fury X chart, you have to normalize it against the 102% shown for the reference card there: 102% × 1.1765 ≈ 120% for the G1, which is the same 17.65% over the reference. 120% against the Fury X's 100% makes the G1 20% faster than the Fury X.
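The same normalization as a small Python sketch, using only the chart percentages above (the 100/85 baseline from the G1 review and the 102% reference figure from the Fury X review):

```python
# Rebasing the G1's result: the two charts use different 100% baselines.
g1_vs_ref = 100 / 85           # G1 review: G1 = 100%, reference 980 Ti = 85%
ref_in_fury_chart = 102        # Fury X review: reference 980 Ti = 102%

g1_in_fury_chart = ref_in_fury_chart * g1_vs_ref   # ~120%
print(f"G1 over reference 980 Ti: {g1_vs_ref - 1:+.2%}")               # +17.65%
print(f"G1 in the Fury X chart:   {g1_in_fury_chart:.0f}%")            # 120%
print(f"G1 over Fury X:           {g1_in_fury_chart / 100 - 1:+.0%}")  # +20%
```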
Nothing; that was a new paragraph and a new topic, not directed at you, just comparing published data. And that's the part you "decry" as long-winded?
Just seemed a bit defensive, and hinged largely on the one review that had the Fury X pulling less power than the 980 Ti/Titan X (although compute is a totally different story).
While the differences are marginal in a lot of cases, the consensus is that the card (and system - which would also bring AMD's driver overhead / CPU load into the reckoning) isn't more frugal than the Titan X/980 Ti overall - although with certain game titles that may be the case. Hardware France, true to Damien's exacting test protocol, sourced a second Fury X in addition to the AMD-provided review sample. The differences are quite marked:
[image: Hardware France power consumption comparison of the two Fury X samples]


I might also add the sites that measured the Fury X's power demand as lower than the 980 Ti/Titan X:
Tom's Hardware, Hardware Heaven (355W 980Ti, 350W Fury X)
Sites that tested the Fury X's power demand to be greater than the 980 Ti/Titan X
TechPowerUp, Hardware France, Guru3D, bit-tech, Hot Hardware, ComputerBase, PC Perspective, Tech Report, HardOCP, Digital Storm, Tweaktown, Hexus, HardwareBG, SweClockers, Legit Reviews, Hardware.info, Eurogamer, Overclock3D, Forbes, Hardware Canucks, PC World, Hardwareluxx, and PCGH (who also test power consumption with two games)
 
but your 15% is wrong.
When you started with 27%.... I'm less wrong!

Just seemed a bit defensive
Nope, not me... I'm not someone who's saying the Fury X is ever going to be seen as "pulling less power", but it is an improvement (yes, some or a good portion of that is HBM) over Hawaii. Not sure why you seem to put me in that group?

Hardwareluxx has in the past done some sophisticated power testing, and I would concur with their finding of 13% above a 980 Ti. That's a lot better than W1zzard's data of 18%, though if I had to put a round number to it I'd say 15%. Looking at Hardwareluxx and their OC, they got a really good 1185 MHz (13%) and saw a 9-10% bump in FPS in several titles, though power went up 17%. So yes, Fiji should not have had an executive shooting his mouth off about OC'ing.

Like I said earlier, I can't find that Tom's tested a reference 980 Ti, so I'm not sure how they arrived at that.
Day off tomorrow (so my Friday night), and a pleasant Southern California evening... done.
 
Given that Furmark was still used in this bench, what would you expect here? Do you guys even realize that the nVidia card consumes less power in Furmark than in games? Does that really suit a maximum-consumption test?

And even in the game test, what is the point of comparing peak power of different architectures? It should be the average consumption, as at Tom's. It's similar to max fps and avg fps; the latter is what matters.
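The avg-vs-peak point is easy to see with a toy example (entirely hypothetical wattage samples, not measured data): one brief spike sets the peak while barely moving the average.

```python
# Hypothetical one-second power samples for a card under load.
samples_w = [240, 245, 238, 310, 242, 247, 239, 241]

avg_w = sum(samples_w) / len(samples_w)
peak_w = max(samples_w)
print(f"average: {avg_w:.0f} W, peak: {peak_w} W")  # average: 250 W, peak: 310 W
```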

Then people proceed to hail nVidia and boo AMD based on those fail numbers, what a joke.
 
1st World issues, why do they amuse me?

:confused:

:lovetpu:
 
Isn't it possible that you are the one who suffers from fanboyism and is spreading FUD? That table can't be right... Maxwell 2 supports it for sure; more to that, IIRC it's mandatory for every DX12 GPU to support it.


Yes, AMD cards will gain a bigger speed increase from DX12, but we don't know how that will actually impact real-life gaming performance, and I'm so tired of this benchmark being linked on every tech site. Do you even realize it's an API OVERHEAD(!) test, or do you just see the bigger number and the longer bar?
3DMark's API overhead benchmark also tests the GPU's command I/O ports and pathways.
 
AMD loses to NVIDIA again. Again it's weaker than the two high-end GeForces.
I'm an NVIDIA fan, but I'm a little sad for AMD. Somehow, no matter what they try, they can't offer customers the same performance as NVIDIA.
Even several months after NVIDIA, they are unable to bring the same performance.
I would really like AMD to at least sell enough graphics cards to cover production and earn something to keep existing.
NVIDIA's dominance is huge... and they know everything. The whole time they knew that AMD couldn't offer the same performance as even the slower GM200.
One more thing is very bad: customers love to OC; cards like this are popular largely because of overclocking. The Fury X doesn't need 8+8 pins; it could work with 6+6.
Anyway, customers can't OC it more than 5-6%, maybe 10%, and probably less in games. Meanwhile the GTX 980 Ti goes up 200-250 MHz, and the difference between them grows and grows...
Don't even talk about an overclocked Titan X vs. an overclocked Fury X... I'm afraid that when everything settles down, the Fury X will sit exactly midway between the GTX 980 and GTX 980 Ti.
It's 30-40% stronger than the R9 290X. People expect drivers to bring a total of 10% improvement in every situation... that's impossible even in NVIDIA's best moments, let alone AMD's.
Customers expected one clear win, at least 2-3% more powerful than the Titan X, or maybe even level, just to stop anyone from saying "yes, the Titan X is stronger"... I mean, NVIDIA is better, and because I love EVGA I will continue with GeForce; I said that even in the moments when I thought AMD would win. But there's no reason for anybody to wish bad things on AMD; I remember they served me well for 10 years, until Cayman.
I still have good memories of Radeon, from before they started to lose to NVIDIA and GeForce became the better option for gaming, no doubt about that.
 
AMD loses to NVIDIA again. Again it's weaker than the two high-end GeForces.


Nvidia has what, 20+ times the budget AMD has? And AMD has to divide their budget between more things than Nvidia, with fewer engineers?

I'd say AMD is doing well to offer what they do, all things considered.
 
AMD loses to NVIDIA again. Again it's weaker than the two high-end GeForces.


The Fury X is competitive against the 980 Ti at very high resolutions. Remember, AMD hasn't enabled DX11 MT (multi-threaded) drivers in Windows 8.1.

I'd rather see benchmarks done on Windows 10, e.g. Project Cars gets a frame-rate uplift on the R9 280 under Windows 10. Windows 10 forces DX11 MT.

 
Nvidia has what, 20+ times the budget AMD has? And AMD has to divide their budget between more things than Nvidia
No and Yes.
No, Nvidia doesn't have 20+ times the budget. Nvidia's R&D for the past year amounted to $1.36bn; AMD's R&D came to $1.04bn.
Yes, the lion's share of AMD's R&D should be going into K12 and Zen (especially the latter) and the platform as a whole, but the split wouldn't be anywhere close to 10:1 in favour of processors.
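From those two figures the actual ratio is easy to check, and it's nowhere near 20:1:

```python
# R&D spend ratio from the trailing-year figures quoted above ($bn).
nvidia_rd_bn, amd_rd_bn = 1.36, 1.04
print(f"Nvidia/AMD R&D ratio: {nvidia_rd_bn / amd_rd_bn:.2f}x")  # ~1.31x
```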
I'd say AMD is doing well to offer what they do, all things considered.
Too early to say. Fiji isn't going to make or break the company. Sales volumes for $650 cards aren't particularly high, and I'd estimate that it costs AMD a hell of a lot more to build a Fury X than it does Nvidia and its partners to churn out GTX 980 Tis. With the Fury X missing the halo of "world's fastest GPU", they will need to get a dual-GPU card up and running very quickly - but that again will be a double-edged sword. Two expensive high-performance GPUs will need at least a 240mm radiator - it all adds to the bill of materials. Somehow I don't see the company turning their market share around much unless they sacrifice the average selling prices of the rest of the Fury line - and the product stack under it.
If the company gets Zen out the door in a timely manner, and the platform lives up to AMD's PR (not a given based on recent history), they should/could be OK. If Zen flops and/or is late out of the gate, the AMD Financial Analyst Day in 2017 might look like this:
[image: a homeless man]
 
If Zen flops and/or is late out of the gate, the AMD Financial Analyst Day in 2017 might look like this:
According to http://www.streetwisereport.com/adv...t-intel-nvidia-corporation-nasdaqnvda/120113/

Qualcomm expects to make an M&A (merger or acquisition) offer for AMD. Qualcomm already kicked NVIDIA out of mobile phones.
 
Qualcomm expects to make an M&A (merger or acquisition) offer for AMD.
Qualcomm could fund the buyout from pocket change, but I'll believe it when I see it. Seems like every bad financial quarter brings a rumour of someone buying AMD (Samsung, BLX, Xilinx; the list goes on). AMD's stock price then magically firms up just before the earnings call.
 
Lots of negative comments here, I might add.
What I haven't seen are DX12 benchmarks, because I have a feeling this little pinky is going to blow everything out of the water. In the end DX12 is the future, not DX11 (which is what these benchmarks are).
Although, to be sincere, I might add that we must wait at least a generation before DX12 becomes mainstream.
 
Given that Furmark was still used in this bench, what would you expect here? Do you guys even realize that the nVidia card consumes less power in Furmark than in games?
whaaat ?? o_Oo_Oo_O
 
And even in the game test, what is the point of comparing peak power of different architectures? It should be the average consumption, as at Tom's. It's similar to max fps and avg fps; the latter is what matters.
You do realize that we have "average" gaming power draw numbers in our reviews? They're clearly marked as "average", and we provide additional data points for interested readers.
 
What I haven't seen are DX12 benchmarks, because I have a feeling this little pinky is going to blow everything out of the water.
It's probably the lousy 64 compute units (when as many as twice that were expected by the crowd). *shrug*
AMD ends up creating average console hardware in the place where PC enthusiast hardware is craved. :cry:
So for the true technology-progress enthusiasts, it's a bit of a letdown after what Nvidia pulled off with their flexible overclocking, and after so much waiting for AMD to deliver on their heap of promises.
 
AMD ends up creating average console hardware in the place where PC enthusiast hardware is craved. :cry:

The Fury X is not average console hardware, if that's what you mean.
 