Thursday, July 2nd 2015

AMD Revises Pump-block Design for Radeon R9 Fury X

AMD seems to have reacted swiftly to feedback from reviewers and owners of initial batches of its Radeon R9 Fury X over a noisy pump-block, and has revised its design. According to owners, the revised pump-block lacks the "high pitched whine" that users were reporting. At this point there are no solid visual cues for identifying a card with the new block; however, a user with a revised card (or at least one that lacks the whine) pointed out a two-color chrome Cooler Master (OEM) badge on the pump-block, compared to the multi-color sticker on pump-blocks from the initial batches. You can open up the front-plate covering the card without voiding any warranties.
Source: AnandTech Forums

87 Comments on AMD Revises Pump-block Design for Radeon R9 Fury X

#2
R-T-B
TheGuruStud: More ROPs or throw it away.
It's not THAT bad. It's not good compared to a Ti or something, but you can't seriously expect AMD to go back to the drawing board at this point.

Their most competitive move would be to put it in the 980's price bracket, IMO. It would absolutely be competitive then.
Posted on Reply
#3
TheGuruStud
R-T-B: It's not THAT bad. It's not good compared to a Ti or something, but you can't seriously expect AMD to go back to the drawing board at this point.

Their most competitive move would be to put it in the 980's price bracket, IMO. It would absolutely be competitive then.
It is bad. It's terrible.

They had it. They got the power consumption down to a decent level, crammed a shit load of shaders in there, crazy fast RAM....and then bottlenecked the whole goddamn card. Screw them. They're running out of things to screw up.

It is a monumental failure b/c their profitability relied on it and now they're going to lose their ass even more.

Fire every goddamn exec and lead engineer that allowed this to happen.
Posted on Reply
#4
ZoneDymo
TheGuruStud: More ROPs or throw it away.
You could say "more performance or throw it away" about every video card out today, tbh, because honestly they are not where they should be.
Posted on Reply
#5
ZoneDymo
Fantastic, but what I don't understand, yet again, is how they did not catch this during development.
Is everyone on the team deaf or something? Why not address it from the start instead of letting it get some bad press at release?
Posted on Reply
#6
Xzibit
TheGuruStud: It is bad. It's terrible.

They had it. They got the power consumption down to a decent level, crammed a shit load of shaders in there, crazy fast RAM....and then bottlenecked the whole goddamn card. Screw them. They're running out of things to screw up.

It is a monumental failure b/c their profitability relied on it and now they're going to lose their ass even more.

Fire every goddamn exec and lead engineer that allowed this to happen.
Scott explains it well without the tinfoil hats from either side.

The TR Podcast 178: Going deep with the Radeon Fury X

1:09:00+
Posted on Reply
#7
chinmi
useless change... no one in their sane mind wanna buy a weak, failed card when with the same money they can have a much stronger, cooler, overclockable, quieter, more power efficient card with great driver support and an awesome green color, the 980 Ti !!
Posted on Reply
#8
SetsunaFZero
Fury isn't even launched and ppl are already whining. Give it some months after launch day (16.6) and the price will drop. What bugs me more are the DX12 benches.
Posted on Reply
#9
ZoneDymo
chinmi: useless change... no one in their sane mind wanna buy a weak, failed card when with the same money they can have a much stronger, cooler, overclockable, quieter, more power efficient card with great driver support and an awesome green color, the 980 Ti !!
the fanboy is strong with this one
Posted on Reply
#10
techy1
I just hope (for everybody) that AMD can make this crappy Fury dirt cheap - and there is a lot of room for price cuts, so AMD can still make a profit... at this price, the Fury's price/performance sucks so bad that I can not even believe it is 2015 out there... but we all need AMD alive, or else NVIDIA can and will go crazy with prices... so let us all pray that AMD gets some cash from red fanatics now, with this stupid price tag, and in a few weeks has plenty of room for price cuts; then that GPU will not look so bad after all (it will still sound bad and heat your room - but it must be dirt cheap)
Posted on Reply
#11
Ferrum Master
Very pleased to see such a fast-paced reaction to the issue... I bet they knew it already before the launch.
Posted on Reply
#12
FordGT90Concept
"I go fast!1!11!1!"
Xzibit: Scott explains it well without the tinfoil hats from either side.

The TR Podcast 178: Going deep with the Radeon Fury X

1:09:00+
Very interesting. AMD was limited by the size of the interposer, which handicapped the GPU against NVIDIA but provided a huge memory boon. And while I watched that, I was thinking "what is really stopping AMD from making these Fiji chips swappable?" Since virtually all of the magic is happening at the interposer level and above, and not on the card anymore, are we quickly reaching the point where we can swap GPUs like we swap CPUs? Also, what does this mean for CPUs (especially the embedded variety)? If GPU memory can be set into an interposer to vastly increase bandwidth and response time, why can't CPUs do the same? I think we know what next-generation console processors (perhaps even the next Nintendo console) will look like: imagine a Jaguar CPU, a smaller version of Fiji (maybe 1024-2048 stream processors), and 8-16 GiB of HBM all on one interposer...
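To put the "vastly increase bandwidth" point in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The interface figures (Fury X HBM1: 4096-bit bus at 1 Gbps per pin; R9 290X GDDR5: 512-bit bus at 5 Gbps per pin) are the commonly published specs, used here as illustrative assumptions rather than anything from the thread:

    # Peak theoretical memory bandwidth in GB/s:
    # bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte
    def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
        return bus_width_bits * gbps_per_pin / 8

    # Assumed HBM1 specs (Fury X): 4 stacks x 1024-bit at 1 Gbps/pin (500 MHz DDR)
    hbm1 = bandwidth_gb_s(4096, 1.0)   # -> 512.0 GB/s

    # Assumed GDDR5 specs (R9 290X): 512-bit bus at 5 Gbps/pin
    gddr5 = bandwidth_gb_s(512, 5.0)   # -> 320.0 GB/s

    print(f"HBM1: {hbm1:.0f} GB/s vs GDDR5: {gddr5:.0f} GB/s")

The wide-but-slow HBM interface is what only becomes practical once the memory sits on the interposer next to the GPU; routing 4096 traces across a PCB would not be feasible.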

Fury X performs similarly to the 980 Ti, and they are priced similarly as well. Sure, Fury X might be a few percentage points slower, but the trade-off at the purchase price is water cooled versus air cooled. In my opinion, that trade-off more than offsets the price/performance difference.

The next node of GPUs, whatever they are so long as they aren't 28nm, will be very, very interesting.


On topic: I'm glad they got it fixed. Having the advantage of water cooling (quiet operation) swept away by a noisy pump would be a deal breaker for that aforementioned edge Fury X has over the 980 Ti.
Posted on Reply
#13
RejZoR
A smaller version of Fiji with 2048 shaders would basically make an R9-290X with GCN 1.2. Not sure if you could stuff that into an APU just yet...
Posted on Reply
#14
Ferrum Master
FordGT90Concept: "what is really stopping AMD from making these Fiji chips swappable?"
A 300 W thing in a socket? Remember early Socket 1155 boards burning out due to bad pins? And that was only a 130 W thermal package.

It would need to be bigger, then designed for the added capacity and resistance; more added cost, testing and such... bigger RMA rates, etc...
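For a rough sense of why socket power delivery is the sticking point, here is a minimal sketch in Python; the ~1 V core rail and the ~0.5 A per-pin current rating are my own illustrative assumptions, not published socket specs:

    import math

    def supply_pins_needed(watts, rail_volts=1.0, amps_per_pin=0.5):
        # Current drawn at the core rail, divided across socket pins
        # (ground return pins roughly double the total count).
        return math.ceil(watts / rail_volts / amps_per_pin)

    print(supply_pins_needed(130))  # ~260 pins for a 130 W CPU package
    print(supply_pins_needed(300))  # ~600 pins for a 300 W GPU package

Low voltage and high wattage multiply the pin count quickly, which is the "added capacity" cost in a nutshell.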
Posted on Reply
#15
Easo
chinmi: useless change... no one in their sane mind wanna buy a weak, failed card when with the same money they can have a much stronger, cooler, overclockable, quieter, more power efficient card with great driver support and an awesome green color, the 980 Ti !!
Fury X is a weak card to you?
Really?
Posted on Reply
#16
mroofie
ZoneDymo: the fanboy is strong with this one
Not really, it's the truth, and the truth hurts obviously :rolleyes:

But yes he is a fanboy by looking at his previous posts.
Posted on Reply
#17
buggalugs
TheGuruStud: It is bad. It's terrible.

They had it. They got the power consumption down to a decent level, crammed a shit load of shaders in there, crazy fast RAM....and then bottlenecked the whole goddamn card. Screw them. They're running out of things to screw up.

It is a monumental failure b/c their profitability relied on it and now they're going to lose their ass even more.

Fire every goddamn exec and lead engineer that allowed this to happen.
AMD slotted the card in exactly where they wanted it.

The Fury X is selling out; they can't make enough of them, and they are selling at 30% higher than the recommended price in some places.

I wasn't going to buy a Fury X just because of the closed-loop cooler, but what the heck, I'm going to buy one (when I can buy one; they are sold out). I might get 2. Thanks for the advice.
Posted on Reply
#18
FordGT90Concept
"I go fast!1!11!1!"
Ferrum Master: A 300 W thing in a socket? Remember early Socket 1155 boards burning out due to bad pins? And that was only a 130 W thermal package.

It would need to be bigger, then designed for the added capacity and resistance; more added cost, testing and such... bigger RMA rates, etc...
But most of those pins in a CPU connect to very high frequency DIMMs. With Fiji, those are in the interposer--not routed out to the socket. Yes, because of the 250+ watt requirement, it would still have to be on a daughter board or accept PCIe power directly on the motherboard close to the socket. I still think it may be feasible. Basically, all this chip needs is power, PCI Express lanes, and pins to wire up the DisplayPort connectors.

No one can deny HBM and the interposer open up possibilities that did not exist with GDDR5. They could theoretically even move some logic to the interposer freeing up even more die space for the GPU.
Posted on Reply
#19
Unregistered
What I think, after a few days of digging, is that 28nm is obviously not the right point in time for HBM. You see, with GCN, if you cram in a few more ROPs, engines, etc., it can beat the TITAN X. But they can't cram in any more. Why? Because of the limitations of HBM and, of course, the interposer. I guess there are some shortcomings you can't figure out until they're implemented.

Whatever the reason, I think Fury X can outperform a 980 Ti with a small clock boost after new driver releases and the voltage unlocks.

Anyway, glad AMD is being swift on user reactions; that's quite rare. You don't see every day a big company, with thick walls between itself and end users & enthusiasts, reaching out and hearing them out so quickly. I guess AMD is trying to respond to every little detail possible, and regain their position.
#20
Aquinus
Resident Wat-man
FordGT90Concept: No one can deny HBM and the interposer open up possibilities that did not exist with GDDR5. They could theoretically even move some logic to the interposer freeing up even more die space for the GPU.
Like what? Latency is still a thing, and for it to exist outside the GPU, it would need to be connected to the memory controller or the PCI-E controller... both of which are relatively slow latency-wise versus having something on the same GPU die like cache. I don't think there would be much benefit to doing this, as the point of HBM was to move components closer to the core, not further away from it. I honestly think that last sentence makes very little sense from a technical perspective.
Posted on Reply
#21
Bytales
I'm not going to deny myself the pleasure of owning 2 Fury X cards just because it was supposed to be the fastest GPU and sits 3-4% beneath the 980 Ti/Titan X. Do remember that we are yet to see DX12 benchmarks of games, where I have a feeling the Fury X might come out on top of NVIDIA.

I'm getting the Fury X because I'm supporting FreeSync (clearly NVIDIA could have made G-Sync without a G-Sync module, as there are laptops now on the market with G-Sync screens without the expensive module, yet they chose to sell us an expensive extra that's not really needed, and continue not to support a free standard which would cost them nothing - the DisplayPort 1.2a spec). After I sold my ASUS ROG Swift, I now own the 32-inch 4K IPS from Samsung.

And the Fury X is the best card AMD has for 4K; 4 GB is clearly enough for 4K, and I'm not going to use 32x AA at 4K anyway.
Apart from that, I'm mostly choosing the Fury X because of its small PCB that can be made single-slot with a waterblock; it's not extra wide and it's not extra long.
And I could fit 4 of them with custom waterblocks and still have a place for an Areca RAID card and a PCI Express USB 3.0 adapter. The second USB 3.0 adapter I can take out (thus freeing the slot for the 4th card) - I can do this because, with the short length of the card, I can use the built-in USB 3.0 19-pin connectors. Which would be impossible with the long Titans or 980 Ti, or 390X for that matter.

I cannot take wider cards, like the 980 Ti Kingpin would be, which would be the only other competitive card that can be made single-slot through water cooling, as I have 40 mm fans installed on the side.

So these are the reasons I am choosing the Fury X; the 1 or 2 frames are not going to make or break a game compared to the 980 Ti or Titan X, but its size and build characteristics, as well as the support for FreeSync, are what compel me to choose the Fury X over its direct competitors.

Not to mention I want to help AMD get back on their feet a bit. Competition is good for us all.
Posted on Reply
#22
RejZoR
buggalugs: AMD slotted the card in exactly where they wanted it.

The Fury X is selling out; they can't make enough of them, and they are selling at 30% higher than the recommended price in some places.

I wasn't going to buy a Fury X just because of the closed-loop cooler, but what the heck, I'm going to buy one (when I can buy one; they are sold out). I might get 2. Thanks for the advice.
If I had plenty of money (which I don't), I'd get the R9 Fury X just because it's such a novelty. It's revolutionary on several levels, and some want exotic stuff even if it's not the best at raw framerate. It's like choosing between the fastest possible car on roads that never let you use that speed, and another car that looks real good and is sublime to drive. That's how I see Fury. It's a beautiful, exotic card, and some people just want that over raw performance.
Posted on Reply
#23
FordGT90Concept
"I go fast!1!11!1!"
Aquinus: Like what? Latency is still a thing, and for it to exist outside the GPU, it would need to be connected to the memory controller or the PCI-E controller... both of which are relatively slow latency-wise versus having something on the same GPU die like cache. I don't think there would be much benefit to doing this, as the point of HBM was to move components closer to the core, not further away from it. I honestly think that last sentence makes very little sense from a technical perspective.
e.g. the embedded DisplayPort chips and the PCI Express controller. It could be designed in a way that the interposer is actually closer (think 3D) than the component in the GPU. There are only two problems with moving components to the interposer:
1) it is 65nm so they'll require more power
2) heat dissipation is a problem so it can't be too intensive

Let me put it this way: most GPUs are designed with the PCI Express controller off to the side and logic branches out from there. The controller could instead be in the center of the interposer and connect directly up to the GPU. The distance from the PCI Express controller would be equally short to all compute units.


The purpose of HBM was to stack memory. Because that resulted in many, many pins for each stack, the interposer was the only reasonable solution to connect everything. Think of embedding logic in the interposer as stacking the GPU. The gains wouldn't be as massive as HBM, but there would still be gains.
Posted on Reply
#24
SonicZap
I like how AMD lately seems to react more quickly to issues than before. While the Fury X was already released, their response time for this issue (a bit over a week) wasn't that bad, and they also released drivers for the latest Batman game before its flopped release. I hope they keep it up. It's hard to avoid issues with hardware as complex as graphics cards (as evidenced by the Chrome TDRs; NVIDIA isn't perfect either), but how fast you solve those issues is critical.
Posted on Reply