Thursday, July 2nd 2015

AMD Revises Pump-block Design for Radeon R9 Fury X

AMD appears to have reacted swiftly to feedback from reviewers and owners of the initial batches of its Radeon R9 Fury X over a noisy pump-block, and revised its design. According to owners, the revised pump-block lacks the "high pitched whine" that users were reporting. At this point there are no solid visual cues for identifying a card with the new block; however, a user with a revised card (or at least one that lacks the whine) pointed out a two-color chrome Cooler Master (OEM) badge on the pump-block, compared to the multi-color sticker on pump-blocks from the initial batches. You can remove the front-plate covering the card without voiding the warranty.
Source: AnandTech Forums

87 Comments on AMD Revises Pump-block Design for Radeon R9 Fury X

#76
midnightoil
MxPhenom 216: I could say the same thing to you in reply to 90% of your posts. Oh the irony.

I pointed it out the day reviews flooded the internet: the ROP count does not make any sense for a card with ~4000 SPs. It has a similar "bottlenecking" issue to the one the 7970/280X has.
#1 - Can a mod please purge all the troll posts, particularly the half dozen or so that start the thread?

#2 - If ROPs are such a huge bottleneck as you claim, then why is scaling so ridiculously good in CF? It's better than on any previous AMD card, and miles ahead of any SLI setup. Even in games it loses badly in single-card config (even at 4K), it shits on the TX in 2x FX vs. 2x TX, and this is with very early drivers. 2x FX at 4K Ultra in BF4 gets ~120 FPS, and this is a game where the drivers seem to be really bad for the FX. I expect new drivers to extend these gains (in single & CF) immensely ... and DX12 will, in my opinion, be a complete whitewash for GCN, and Fiji particularly, vs. Maxwell or Kepler.
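Rough math on what that scaling claim amounts to (just a sketch - the single-card number below is an assumed placeholder, not a benchmark result):

```python
# Back-of-the-envelope multi-GPU scaling efficiency: actual FPS vs. ideal linear scaling.
def scaling_efficiency(fps_single: float, fps_multi: float, n_gpus: int) -> float:
    return fps_multi / (n_gpus * fps_single)

fps_single_4k = 65.0   # hypothetical single Fury X figure at 4K Ultra (placeholder)
fps_cf_4k = 120.0      # the ~120 FPS 2x FX figure mentioned above
print(f"{scaling_efficiency(fps_single_4k, fps_cf_4k, 2):.0%} of perfect 2x scaling")
# -> ~92% under these assumed numbers; anything much above 80% is usually called "good" scaling
```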

If you're going for more than one high-end card, it looks like Fiji is the only thing you should consider right now. Also, if you intend to buy a VR headset, GCN (and more particularly Fiji / 380 / 285) is the only game in town ... the hardware is far more suitable than Maxwell (v1 or v2), and LiquidVR is miles ahead of GWVR.
Posted on Reply
#77
MxPhenom 216
ASIC Engineer
midnightoil: #1 - Can a mod please purge all the troll posts, particularly the half dozen or so that start the thread?

#2 - If ROPs are such a huge bottleneck as you claim, then why is scaling so ridiculously good in CF? It's better than on any previous AMD card, and miles ahead of any SLI setup. Even in games it loses badly in single-card config (even at 4K), it shits on the TX in 2x FX vs. 2x TX, and this is with very early drivers. 2x FX at 4K Ultra in BF4 gets ~120 FPS, and this is a game where the drivers seem to be really bad for the FX. I expect new drivers to extend these gains (in single & CF) immensely ... and DX12 will, in my opinion, be a complete whitewash for GCN, and Fiji particularly, vs. Maxwell or Kepler.

If you're going for more than one high-end card, it looks like Fiji is the only thing you should consider right now. Also, if you intend to buy a VR headset, GCN (and more particularly Fiji / 380 / 285) is the only game in town ... the hardware is far more suitable than Maxwell (v1 or v2), and LiquidVR is miles ahead of GWVR.
Thanks, I'd still get a 980 Ti though. Not the biggest fan of multi-GPU, and even if it scales well, the frame times are still lousy.

Scaling is good on the 7970/280X, but those cards are a bit bottlenecked by the 32 ROPs.
Posted on Reply
#78
midnightoil
MxPhenom 216: Thanks, I'd still get a 980 Ti though. Not the biggest fan of multi-GPU, and even if it scales well, the frame times are still lousy.

Scaling is good on the 7970/280X, but those cards are a bit bottlenecked by the 32 ROPs.
Frame times for CF are often half those of SLI, or even less. That's why NVIDIA withdrew permission for comparative SLI / CF FCAT tests from the sites it had sent the FCAT equipment and software to. The last major test was early this year by SweClockers, and since then nothing - that test showed a gigantic lead in frame times for CF over SLI.

Forgive me if I'm skeptical about that ROP bottlenecking claim ... because you're claiming the same thing about Fiji, and it's total bollocks.
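On the frame-time point, here's a minimal sketch of how FCAT-style per-frame data typically gets summarized for those comparisons (the input lists are made-up placeholders, not captures from any of these cards):

```python
import statistics

# Summarize a list of per-frame render times (milliseconds), the kind of data an
# FCAT capture produces. Values below are placeholders for illustration only.
def summarize_frame_times(frame_times_ms: list[float]) -> dict:
    frame_times_ms = sorted(frame_times_ms)
    p99_index = int(0.99 * (len(frame_times_ms) - 1))
    return {
        "avg_fps": 1000.0 / statistics.mean(frame_times_ms),
        "99th_pct_frame_time_ms": frame_times_ms[p99_index],  # stutter shows up here
    }

# Two hypothetical captures with similar average FPS but very different consistency:
smooth  = [16.7] * 99 + [20.0]
stutter = [15.0] * 90 + [33.0] * 10
print(summarize_frame_times(smooth))   # ~60 FPS average, 99th percentile ~16.7 ms
print(summarize_frame_times(stutter))  # ~60 FPS average, 99th percentile ~33 ms
```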
Posted on Reply
#79
MxPhenom 216
ASIC Engineer
midnightoil: Frame times for CF are often half those of SLI, or even less. That's why NVIDIA withdrew permission for comparative SLI / CF FCAT tests from the sites it had sent the FCAT equipment and software to. The last major test was early this year by SweClockers, and since then nothing - that test showed a gigantic lead in frame times for CF over SLI.

Forgive me if I'm skeptical about that ROP bottlenecking claim ... because you're claiming the same thing about Fiji, and it's total bollocks.
Dude, single-GPU frame time issues with AMD cards are there, and that's single GPU. Someone posted the graph in the Fury X review thread. Compared directly to the 980 Ti / Titan X, the Nvidia cards look to have much better single-GPU frame times. And just to remind you, until now CrossFire frame times have been total rubbish. I still hear about cases where stuttering is pretty bad with CF.

Even the review at PCPer brings up the low ROP count that could be hindering the card's performance. It's the same ROP count as the 290/290X from nearly two years ago.
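For reference, the back-of-the-envelope pixel fill-rate math behind the ROP argument (ROP counts and clocks are the published reference figures, rounded, so treat the numbers as approximate):

```python
# Rough theoretical pixel fill rate: ROPs x core clock (GHz) = Gpixels/s.
cards = {
    "HD 7970":     (32, 0.925),
    "R9 290X":     (64, 1.000),
    "R9 Fury X":   (64, 1.050),
    "GTX 980 Ti":  (96, 1.000),
    "GTX Titan X": (96, 1.000),
}

for name, (rops, clock_ghz) in cards.items():
    print(f"{name:12s} {rops * clock_ghz:5.1f} Gpixel/s")
# Fury X lands around 67 Gpixel/s vs. ~96 Gpixel/s for the 96-ROP Maxwell cards,
# despite Fiji's much larger shader array - that's the claimed bottleneck.
```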
Posted on Reply
#80
Haytch
I think the Fury X is a fantastic card. I love HBM.

I have decided to give it a miss this time 'round and see what AMD have in store for the next series.
I am looking forward to 8+ GB of HBM with further refined technology. I love the concept of the GPU and memory being closer together, and obviously this is something Nvidia will surely need to adopt to stay competitive in the future.

Sure, the Fury X might not beat the current Nvidia range, or maybe it does. The point is, if Nvidia don't start improving their architecture they will soon fall behind with no real hope of ever catching up. The Fury X might as well be considered the turn-around point for AMD.
As for the coil whine ... I seem to hear more whining coming from people than from the actual GPU package. You can always fix GPU whine, but you can't fix people whining!

For those that have an issue with CrossFire performance, let me assure you, CrossFire has no issues except with the end-user. Sure, my 3x 290Xs don't scale 100%, but it's close enough for me not to complain. In fact, 2x 290Xs scale better than my 2x Titans from 1080p through 4K. Only beyond 4K are my Titans better, which I don't even use anymore, and I am sure 99% of you don't either.
Posted on Reply
#81
arbiter
Haytch: Sure, the Fury X might not beat the current Nvidia range, or maybe it does. The point is, if Nvidia don't start improving their architecture they will soon fall behind with no real hope of ever catching up. The Fury X might as well be considered the turn-around point for AMD.
Fury is only 2 cards overall. You say it's a turn-around, but it's 3-4 generations from that being possible, with how AMD loves to rebrand everything lately. Pascal, which will be Nvidia's next GPU, will have HBM2. Maxwell is plenty good, as it was able to keep up with and in most cases beat AMD's super-hyped card. So I wouldn't say Nvidia needs to improve; it's AMD that needs to improve a lot more yet.
Posted on Reply
#82
Haytch
arbiter: Fury is only 2 cards overall. You say it's a turn-around, but it's 3-4 generations from that being possible, with how AMD loves to rebrand everything lately. Pascal, which will be Nvidia's next GPU, will have HBM2. Maxwell is plenty good, as it was able to keep up with and in most cases beat AMD's super-hyped card. So I wouldn't say Nvidia needs to improve; it's AMD that needs to improve a lot more yet.
Thank you, I was not aware that Nvidia were going to use HBM2. I thought that AMD had some licensing rights to it, or something like that. That's why I assumed that Nvidia needed to get their act together.
HBM1 is all good, but it's just a minor stepping stone in my book; the real fun won't start until HBM2/3+.
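Rough numbers on that HBM1 -> HBM2 step, using the published per-pin data rates (back-of-the-envelope only):

```python
# Rough memory bandwidth: bus width (bits) x data rate (Gbps per pin) / 8 = GB/s.
# Figures are the published HBM spec numbers, rounded.
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

hbm1_4_stacks = bandwidth_gbs(4096, 1.0)   # Fury X config: 4 x 1024-bit stacks @ 1 Gbps/pin
hbm2_4_stacks = bandwidth_gbs(4096, 2.0)   # same width at HBM2's 2 Gbps/pin
print(hbm1_4_stacks, hbm2_4_stacks)        # 512.0 GB/s vs 1024.0 GB/s
```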
Posted on Reply
#83
arbiter
Haytch: Thank you, I was not aware that Nvidia were going to use HBM2. I thought that AMD had some licensing rights to it, or something like that. That's why I assumed that Nvidia needed to get their act together.
HBM1 is all good, but it's just a minor stepping stone in my book; the real fun won't start until HBM2/3+.
www.techpowerup.com/213254/nvidia-tapes-out-pascal-based-gp100-silicon.html
It's only a prototype, but you can see that whatever HBM lead AMD has won't last long. I expect HBM will be for the highest-end GPUs only, since mid-range really doesn't need it yet.
Posted on Reply
#84
Freebird
TheGuruStud: I've been buying Radeons since the 4890 b/c they have been a better buy for me all the way up to the 290X.

I bought a 980 Ti this week. Everyone knows I hate Nvidia. FURY X Fing SUCKS!

If they can get their shit together, then I will GLADLY buy their new card on launch day and disown the GTX.
As with everything AMD, a little waiting is required...

I believe we will find that the Fury X is a fine card once its OCing abilities are "found", i.e. overvolting the GPU & HBM. It should have some decent headroom once this is accomplished... In addition, I speculate that there may be more performance to squeeze out of the Fury/X when DX12 arrives in about a month... and then another boost when "true" DX12 games appear that leverage asynchronous shaders...

You stated in another post that AMD needs more ROPs to take advantage of those 4096 shaders... maybe, just maybe, AMD built the FURY/X this way because DX12 can use it to its full potential... MAYBE AMD built the FURY/X for the FUTURE (DX12, which is only a month away). So would you rather have a card that excels in the near future, or one that only performs well in the past, pre-DX12 release...

Which leads me to another reason AMD PUSHED out Mantle when they did... if they hadn't pushed it out with the R9 290X, DX12 might still be a year or two away... and still hampering AMD's GPU design decisions. In my OWN opinion, DX9-11 has been hamstringing GPU performance for WAY TOO LONG by not fully utilizing the available CPU cores.

Now, if we could just get someone to develop a game or two that uses more than 4 GB of SYSTEM memory, I would be ECSTATIC... :D I'm tired of Fallout NV crashing to the desktop after running out of memory with 20+ GB free, and the Fallout NV 4GB patch isn't much help. (Yeah, I know it's about 7 years old... I'm looking forward to Fallout 4 and the graphics detail in Star Wars Battlefront; YEA!!)
Posted on Reply
#85
arbiter
Freebird: Which leads me to another reason AMD PUSHED out Mantle when they did... if they hadn't pushed it out with the R9 290X, DX12 might still be a year or two away... and still hampering AMD's GPU design decisions. In my OWN opinion, DX9-11 has been hamstringing GPU performance for WAY TOO LONG by not fully utilizing the available CPU cores.
Only thing tyhat was hampering AMD's design decisions was AMD.
Posted on Reply
#86
Freebird
Apparently spell-checker doesn't "hamper" your misspellings... but thanks for your thoughts.
Posted on Reply
#87
AsRock
TPU addict
arbiter: Only thing tyhat was hampering AMD's design decisions was AMD.
Only thing that was hampering AMD's design decisions is money \ shrink.
Posted on Reply