Thursday, July 2nd 2015
AMD Revises Pump-block Design for Radeon R9 Fury X
AMD appears to have reacted swiftly to feedback from reviewers and owners of initial batches of its Radeon R9 Fury X over a noisy pump-block, and revised its design. According to owners, the revised pump-block lacks the "high pitched whine" that users were reporting. At this point there are no solid visual cues for identifying a card with the new block; however, a user with a revised card (or at least one that lacks the whine) pointed out a 2-color chrome Cooler Master (OEM) badge on the pump-block, compared to the multi-color sticker on pump-blocks from the initial batches. You can remove the front-plate covering the card without voiding the warranty.
Source:
AnandTech Forums
87 Comments on AMD Revises Pump-block Design for Radeon R9 Fury X
#2 - If ROPs are such a huge bottleneck as you claim, then why is scaling so ridiculously good in CF? It's better than on any previous AMD card, and miles ahead of any SLI setup. Even in games it loses badly in single-card config (even at 4K), it shits on the TX in 2x FX vs 2x TX. This is with very early drivers. 2x FX @4K Ultra in BF4 gets ~120 FPS, and this is a game where the drivers seem to be really bad for the FX. I expect new drivers to extend these gains (in single & CF) immensely ... and DX12 will in my opinion be a complete whitewash for GCN, and Fiji particularly, vs Maxwell or Kepler.
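The scaling claims above boil down to simple arithmetic: efficiency is the measured multi-GPU frame rate divided by the ideal (linear) frame rate. A minimal sketch, using hypothetical FPS numbers (only the ~120 FPS dual-card figure comes from the post; the single-card figure below is a made-up placeholder):

```python
# Illustrative multi-GPU scaling-efficiency calculation.
# The single-card FPS figure is a hypothetical placeholder, not benchmark data.

def scaling_efficiency(single_gpu_fps: float, multi_gpu_fps: float, num_gpus: int) -> float:
    """Return scaling efficiency as a fraction of ideal linear scaling."""
    ideal_fps = single_gpu_fps * num_gpus
    return multi_gpu_fps / ideal_fps

# Example: if a single card managed 65 FPS and a 2-card setup hits 120 FPS,
# the pair is running at roughly 92% of ideal linear scaling.
eff = scaling_efficiency(single_gpu_fps=65.0, multi_gpu_fps=120.0, num_gpus=2)
print(f"CrossFire scaling efficiency: {eff:.0%}")
```

Anything much above ~90% is generally considered excellent for a two-card setup, which is why the figure is used as a counter-argument to the ROP-bottleneck claim.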
If you're going for more than one high-end card, it looks like Fiji is the only thing you should consider right now. Also, if you intend to buy a VR headset, GCN (& more particularly Fiji / 380 / 285) is the only game in town ... the hardware is far more suitable than Maxwell (v1 or v2), and LiquidVR is miles ahead of GWVR.
Scaling is good on 7970/280x, but those cards are a bit bottlenecked by the 32 ROPs.
Forgive me if I'm skeptical about that ROP bottlenecking claim ... because you're claiming the same thing about Fiji, and it's total bollocks.
Even the review at PCPer brings up the low ROP count that could be hindering the card's performance. This is the same ROP count as the 290/290X from about two years ago.
I have decided to give it a miss this time 'round and see what AMD have in store for the next series.
I am looking forward to 8+ GB HBM with further refined technology. I love the concept of the GPU and memory being closer together, and obviously this is something Nvidia will surely need to adapt to in order to stay competitive in the future.
Sure, Fury X might not beat the current Nvidia range, or maybe it does. The point is, if Nvidia don't start improving their architecture, they will soon fall behind with no real hope of ever catching up. The Fury X might as well be considered the turn-around point for AMD.
As for the coil whining ... I seem to see more whining coming from people than from the actual GPU package. You can always fix a whining GPU, but you can't fix whining people!
For those that have an issue with CrossFire performance, let me assure you, CrossFire has no issues except with the end-user. Sure, my 3x 290Xs don't scale 100%, but it's close enough for me not to complain. In fact, 2x 290Xs scale better than my 2x Titans from 1080p up to 4K. It's only beyond 4K where my Titans are better, which I don't even use anymore, and I am sure 99% of you don't either.
HBM1 is all good, but just a minor stepping stone in my book; the real fun won't start until HBM2/3+.
It's only a prototype, but you can see that whatever HBM lead AMD has won't last long. I expect HBM will be for the highest-end GPUs only, since mid-range really doesn't need it yet.
I believe we will find that the Fury X is a fine card once its OCing abilities are "found", i.e. over-volting the GPU & HBM. It should have some decent headroom once this is accomplished... in addition, I speculate that there may be more performance to squeeze out of the Fury/X when DX12 arrives in about a month... and then another boost when "true" DX12 games appear that leverage asynchronous shaders...
you stated in another post that AMD needs more ROPs to take advantage of those 4096 shaders... maybe, just maybe, AMD built the FURY/X this way because DX12 can use it to its full potential... MAYBE AMD built the FURY/X for the FUTURE (DX12, which is only a month away). So would you rather have a card that excels in the near future, or one that only performs well in the past, pre-DX12 release...
which leads me to another reason AMD PUSHED out Mantle when they did... if they hadn't pushed it out with the R9 290X, DX12 might still be a year or two away... and still hampering AMD's GPU design decisions. In my OWN opinion, DX9-11 has been hamstringing GPU performance for WAY TOO LONG, due to not fully utilizing the CPU cores available.
Now, if we could just get someone to develop a game or two that uses more than 4 GB of SYSTEM memory, I would be ECSTATIC... :D I'm tired of Fallout NV crapping out to the desktop after running out of memory with 20+ GB free, and Fallout 4Gnv isn't much help. (Yeah, I know it's about 7 years old... I'm looking forward to Fallout 4 and the graphics detail in Star Wars Battlefront; YEA!!)