Thursday, July 2nd 2015

AMD Revises Pump-block Design for Radeon R9 Fury X

AMD appears to have reacted swiftly to feedback from reviewers and owners of initial batches of its Radeon R9 Fury X over a noisy pump-block, and has revised its design. According to owners, the revised pump-block lacks the "high-pitched whine" that users were reporting. At this point there are no solid visual cues for identifying a card with the new block; however, a user with a revised card (or at least one that lacks the whine) pointed out a two-color chrome Cooler Master (OEM) badge on the pump-block, compared to the multi-color sticker on pump-blocks from the initial batches. You can remove the front-plate covering the card without voiding the warranty.
Source: AnandTech Forums

87 Comments on AMD Revises Pump-block Design for Radeon R9 Fury X

#26
jboydgolfer
Before I ask this question, please excuse my ignorance when it comes to this particular subject.

After watching the video posted by someone above, I took it that the AMD engineers were bound by the nature of HBM and its positioning on the interposer, which ties their hands as far as making a bigger/beefier chip to "blow away" Nvidia's higher-end Ti's, etc., meaning they are stuck since there is only so much room...

My question/point is this: since the HBM must sit on an interposer, could the engineers come out with a "dual-chip" card, like the 7990 or 6990, but instead of two graphics processors it would be one large "kill 'em all" GPU, with the other chip's interposer dedicated solely to housing the HBM?

Or, put another way: one large GPU-only chip, plus a smaller chip with, say, 8 GB of HBM?

I understand there are inherent limitations when CrossFiring on one PCB (or two), especially (I would imagine) with such high-bandwidth DRAM, but I can't help but wonder if that would be a viable direction to explore...
Posted on Reply
#27
ShurikN
TheGuruStud: It is bad. It's terrible.
The last truly terrible card from AMD was the HD 2900 XT. Every card after it was more than competitive with NV.
Posted on Reply
#29
Initialised
R-T-B: It's not THAT bad. It's not good compared to a Ti or something, but you can't seriously expect AMD to go back to the drawing board at this point.

Their most competitive move would be to put it in the 980s price bracket, IMO. It would absolutely be competitive then.
I disagree; to make it competitive they need to lose the pump and ship a single-slot version of the card with a full-cover block.
Posted on Reply
#31
the54thvoid
Super Intoxicated Moderator
jboydgolfer: Before I ask this question, please excuse my ignorance when it comes to this particular subject.

After watching the video posted by someone above, I took it that the AMD engineers were bound by the nature of HBM and its positioning on the interposer, which ties their hands as far as making a bigger/beefier chip to "blow away" Nvidia's higher-end Ti's, etc., meaning they are stuck since there is only so much room...

My question/point is this: since the HBM must sit on an interposer, could the engineers come out with a "dual-chip" card, like the 7990 or 6990, but instead of two graphics processors it would be one large "kill 'em all" GPU, with the other chip's interposer dedicated solely to housing the HBM?

Or, put another way: one large GPU-only chip, plus a smaller chip with, say, 8 GB of HBM?

I understand there are inherent limitations when CrossFiring on one PCB (or two), especially (I would imagine) with such high-bandwidth DRAM, but I can't help but wonder if that would be a viable direction to explore...
Forgive me if I'm wrong, but isn't the interposer required for the HBM to talk to the chip, removing the need for extremely complex PCB traces?
A separate HBM area would require PCB traces that would add latency. I think the interposer is required, and the only current issue is that it's manufactured on a far larger process than the chip it's linked to.
Posted on Reply
#32
the54thvoid
Super Intoxicated Moderator
jigar2speed: This is exactly what is being discussed here; AMD has a fix...
Sorry for the double post; I'm on my phone, and it's harder to copy-paste quotes into an edit.

The fix was meant to appear in retail (as reviewers were told by AMD). I'm sure those review sites with retail versions could easily remove the cover to see what version they have.
AMD will have some whiny versions in the channel but I imagine they'll sort it out.
Posted on Reply
#33
AsRock
TPU addict
RejZoR: If I had plenty of money (which I don't), I'd get the R9 Fury X just because it's such a novelty. It's revolutionary on several levels, and some people want exotic stuff even if it's not the best at raw framerate. It's like picking the fastest possible car for a road that might not let you use it anyway; but you look really good in the other one, and the drive itself is sublime. That's how I see Fury. It's a beautiful, exotic card, and some people just want that over raw performance.
Wait for the Fury; that one should be even better, as custom designs will be allowed. Whether it will be nerfed, we will have to see.
Posted on Reply
#34
RejZoR
I've had enough of waiting. At first it was March/April, then it was June, and now it's July. No. Enough is enough. Why the fuck they couldn't release the Fury X and the normal Fury together is just out-of-this-world idiotic.
Posted on Reply
#35
MxPhenom 216
ASIC Engineer
ZoneDymo: the fanboy is strong with this one
I could say the same thing to you in reply to 90% of your posts. Oh the irony.

I pointed it out the day reviews flooded the internet: the ROP count does not make any sense for a card with ~4000 SPs. It has a "bottlenecking" issue similar to the one the 7970/280X has.
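As a rough back-of-the-envelope illustration (using the commonly quoted reference specs, so treat the exact figures as approximate):

# Ballpark pixel fill rate and shader-to-ROP ratio, reference clocks assumed.
def fillrate_gpix(rops, clock_mhz):
    return rops * clock_mhz / 1000.0  # Gpixels/s

cards = {
    "Fury X (4096 SPs, 64 ROPs, ~1050 MHz)": (4096, 64, 1050),
    "980 Ti (2816 SPs, 96 ROPs, ~1000 MHz)": (2816, 96, 1000),
    "280X   (2048 SPs, 32 ROPs, ~1000 MHz)": (2048, 32, 1000),
}
for name, (sps, rops, mhz) in cards.items():
    print(name, "->", fillrate_gpix(rops, mhz), "Gpix/s,", sps // rops, "SPs per ROP")

The Fury X ends up with the same 64-SPs-per-ROP ratio as the 280X, while the 980 Ti sits at roughly 29 SPs per ROP with a noticeably higher pixel fill rate, which is the imbalance being described.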
Posted on Reply
#36
Aquinus
Resident Wat-man
jboydgolfer: Or, put another way: one large GPU-only chip, plus a smaller chip with, say, 8 GB of HBM?
Memory latency. If you do something like that you're still stuck with the same limitations as GDDR5. HBM can be really wide because there are no traces going through the PCB (the graphics card itself) to connect memory to the GPU; you only have the interposer. If you have two different ICs on the same PCB, you need traces going through the PCB to connect the two, and the wider you make the bus, the more wires you need. Also, because of the longer distance, latency goes up.

The point of HBM was to put memory closer to the compute cores, because that results in lower latency and a (relatively) easier implementation of wider data buses. Moving HBM to a different IC goes against the reasoning for making HBM in the first place, IMHO: getting memory closer to the compute cores and taking up less space. Adding dedicated ICs doesn't accomplish either of those goals.
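To put some rough numbers on the "wide and slow beats narrow and fast" idea (the commonly quoted HBM1 and GDDR5 figures, so approximate):

# Peak-bandwidth comparison: wide/slow interposer bus vs. narrow/fast PCB bus.
def peak_bw_gb_s(bus_width_bits, per_pin_gbps):
    return bus_width_bits * per_pin_gbps / 8.0  # GB/s

hbm1  = peak_bw_gb_s(4096, 1.0)  # Fury X: 4096-bit bus at ~1 Gb/s per pin -> 512 GB/s
gddr5 = peak_bw_gb_s(384, 7.0)   # typical 384-bit GDDR5 at 7 Gb/s per pin -> 336 GB/s
print(hbm1, gddr5)

The interposer is what makes a 4096-bit bus practical at all; routing that many traces through the PCB to a separate memory chip would be the hard part, on top of the extra latency.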
Posted on Reply
#37
Fluffmeister
jigar2speed: This is exactly what is being discussed here; AMD has a fix...
AMD: The issue is limited to a very small batch of initial production samples and we have worked with the manufacturer to improve the acoustic profile of the pump. This problem has been resolved and a fix added to production parts and is not an issue.
They said retail wouldn't be affected, but they clearly are.
Posted on Reply
#38
ironwolf
So if I get this correctly, there is no indicator on the box itself that the card is the revised version with the updated pump-block? Does this mean people might play video-card roulette, and might even return working cards just to roll again? :confused:
Posted on Reply
#39
the54thvoid
Super Intoxicated Moderator
ironwolf: So if I get this correctly, there is no indicator on the box itself that the card is the revised version with the updated pump-block? Does this mean people might play video-card roulette, and might even return working cards just to roll again? :confused:
Yes, but when you buy a graphics card from either vendor these days, you're also rolling the dice on coil whine.
Posted on Reply
#40
R-T-B
TheGuruStud: It is bad. It's terrible.

They had it. They got the power consumption down to a decent level, crammed a shitload of shaders in there, crazy-fast RAM... and then bottlenecked the whole goddamn card. Screw them. They're running out of things to screw up.

It is a monumental failure b/c their profitability relied on it, and now they're going to lose their ass even more.

Fire every goddamn exec and lead engineer that allowed this to happen.
If you're approaching it from a design and profitability perspective it's certainly a failure.

I'm just pointing out from a consumer perspective the card can still be made competitive.
ShurikN: The last truly terrible card from AMD was the HD 2900 XT. Every card after it was more than competitive with NV.
Speaking as a sad owner of said card, I'll admit that card was about as bad as they came...
Posted on Reply
#41
ZoneDymo
MxPhenom 216: I could say the same thing to you in reply to 90% of your posts. Oh the irony.

I pointed it out the day reviews flooded the internet: the ROP count does not make any sense for a card with ~4000 SPs. It has a "bottlenecking" issue similar to the one the 7970/280X has.
I dare you to find me three posts of mine that show pure fanboyism.
I'll also have you know that three of my previous cards were Nvidia.

Apart from that, that post of his was not just borderline trolling fanboyism, it also made no sense as a point.
For example, calling Fury weak and the 980 Ti much stronger: apart from that being just wrong, how is being much stronger than "weak" anything special? That would barely make it "good".
Posted on Reply
#42
v12dock
Block Caption of Rainey Street
I love that we're all engineers/scientists.
Posted on Reply
#43
TheGuruStud
ShurikN: The last truly terrible card from AMD was the HD 2900 XT. Every card after it was more than competitive with NV.
I've been buying Radeons since the 4890 b/c they have been a better buy for me all the way up to the 290x.

I bought a 980 Ti this week. Everyone knows I hate Nvidia. FURY X Fing SUCKS!

If they can get their shit together, then I will GLADLY buy their new card on launch day and disown the GTX.
Posted on Reply
#44
INSTG8R
Vanguard Beta Tester
R-T-B: If you're approaching it from a design and profitability perspective it's certainly a failure.

I'm just pointing out from a consumer perspective the card can still be made competitive.



Speaking as a sad owner of said card, I'll admit that card was about as bad as they came...
It was pretty though, wasn't it? I had one and had insane dreams of CrossFire; thankfully that moment passed, though I did briefly own a 2900 Pro just to give CrossFire a shot. Total failure; I sold the card to a buddy the next day.
Let's not forget that the 3xxx series wasn't any better, being just a refresh of an already horrid chip. The 4xxx series was when they came back in force.

As for Fury, I am very anxiously awaiting Sapphire's take on the Fury (non-X). That is the card I am more interested in.
Posted on Reply
#45
haswrong
TheGuruStud: It is bad. It's terrible.

They had it. They got the power consumption down to a decent level, crammed a shitload of shaders in there, crazy-fast RAM... and then bottlenecked the whole goddamn card. Screw them. They're running out of things to screw up.

It is a monumental failure b/c their profitability relied on it, and now they're going to lose their ass even more.

Fire every goddamn exec and lead engineer that allowed this to happen.
I think they decided to work around it with a dual-chip version once again. For DX12 titles there should be 4+4 GB of VRAM; only the power draw and pricing will be funny again. I think the $650 price tag is more suitable for a dual-chip card than a single-chip one. Whatever... as someone noted later, all video cards are kinda meh performance- and price-wise.
Posted on Reply
#46
Ikaruga
Hey, I'm a top technology decision-maker at AMD, and now that we have also tested our top-of-the-line, world's fastest and coolest card ever, we actually found out that the pump is indeed whining like hell, just as our customers said, so we fixed it for you. Thanks.

/sarcasm
Posted on Reply
#47
haswrong
Bytales: ...
So these are the reasons I am choosing the Fury X: one or two frames are not going to make or break a game compared to the 980 Ti or Titan X, but its size and build characteristics, as well as the support for FreeSync, are what compel me to choose the Fury X instead of its direct competitors.

Not to mention I want to help AMD get back on their feet a bit. Competition is good for us all.
You mean never-ending competition without a clear winner... right?
Posted on Reply
#48
Casecutter
It's sad how vehemently people are deploring the Fury X, almost as if they have to spew condemnation, having never really studied more than one review before running to the conclusion page.

The Fury X is aimed at 4K, and at that it matches the reference 980 Ti, reference vs. reference (sure, there are OC'd 980 Tis). They are as close to identical as possible while being totally different, as two companies should be. It's good for us that AMD isn't trying to clone the competition; it's nice to have some choice... And all things considered it's more than a valiant effort. AMD has delivered a competitive card, and while both have pluses and minuses, why does AMD have to tender a lower price and succumb to less profit? As the mantra went back when someone set an inordinate price... "they're not a charity".

Sure, there were missteps by executives who don't know about "loose lips...". AMD negated the whole overclocking issue in a day or two, but no one seems to have heard about that part. Pump whine... is that any different from coil whine? Some surmise they could/should have thrown more ROPs/SPs at it, though they have no idea whether technical challenges or development realities played into those decisions; if you do, you might send in a resume for such a job.

Was it the "win" the forums and rumor mill had been drumming up for months ahead? No, but it hardly ever is. Were there executives prior to launch who weren't using (or given) properly vetted talking points? :shadedshu: Honestly, executives and engineers should never speak on behalf of the company unless truly tasked to do so... STFU. AMD needs credible young spokespersons... I envision a team of, say, three: a fun/hip gamer (the speaker), an easy-talking but strong technical person, and a "Luther" (aka anger translator) who hangs back, adds fun, debunks sh#t, yet keeps it real. That would be the all-encompassing AMD PR front, the media face that handles both social campaigns and announcements, tapped collectively at media/stage events or, more often, as individuals working a "measured cadence" for releasing information.
And lastly... just dump Richard Huddy; he's just not cutting it!
Posted on Reply
#49
GorbazTheDragon
Casecutter: It's sad how vehemently people are deploring the Fury X, almost as if they have to spew condemnation, having never really studied more than one review before running to the conclusion page.
...
TL;DR People believed the marketing hype.

That never happened before, did it??? Oh, wait: Devil's Canyon...
(Was I the only one screaming at all the pre-release presentations that they should just solder the damn chip to the IHS, or drop the IHS altogether on unlocked chips...?)
Posted on Reply
#50
arbiter
Bytales: I'm getting the Fury X because I'm supporting FreeSync (clearly Nvidia could have made G-Sync without a G-Sync module, as there are laptops now on the market with G-Sync screens and no expensive G-Sync module, yet they chose to sell us an expensive extra that isn't really needed, and continue not to support a free standard that would cost them nothing, the DisplayPort 1.2a spec). After I sold my Asus ROG Swift, I now own the 32-inch 4K IPS from Samsung.
The G-Sync module does a ton of work and IS needed. There is a difference between a desktop monitor and a laptop monitor. In a laptop the GPU is connected DIRECTLY to the panel, so the GPU itself can control VRR on the panel. In a desktop monitor it's not; it's connected by a small cable, and the monitor has its own hardware inside. The G-Sync module handles all the VRR for the panel and does the panel overdrive itself. The faster you drive the panel, the more likely you are to get ghosting, which is an issue FreeSync has and which is non-existent on G-Sync panels. (Image: www.pcper.com/files/review/2015-06-26/ghost1.jpg) You can see the ghosting issues on the two FreeSync monitors in the middle and on the right, whereas there is virtually nothing wrong with G-Sync on the left. FreeSync has problems that need to be fixed. As for the expensive module: FreeSync monitors needed new hardware to do the same job, which it was claimed wouldn't be needed.

As for why FreeSync isn't a free standard: it's proprietary software AMD wrote that uses the Adaptive-Sync standard. It's far from perfect, and AMD left it to the monitor makers to fix themselves, whereas Nvidia with G-Sync did all the legwork to get it right from the start, with no ghosting and a list of panels that THEY tested to work well with VRR.

The best way monitor makers have found to fix ghosting, or at least minimize it, is to lock the VRR range down: you buy a 144 Hz panel, but enable FreeSync and you are limited to 90 Hz. Really? That kind of makes it a bad buy when you can't use what you paid for.
(cue all the flame posts now)
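To make the VRR-window point concrete, here's a rough sketch of the decision a driver faces each frame when a panel only refreshes within a given range (illustrative only; the range numbers and the frame-doubling branch are assumptions, not any vendor's actual logic):

# Illustrative only: how a variable-refresh window constrains frame presentation.
def present(frame_time_s, min_hz=40, max_hz=90):
    lo, hi = 1.0 / max_hz, 1.0 / min_hz   # shortest and longest allowed refresh intervals
    if frame_time_s < lo:
        return "hold until %.1f ms, then refresh (capped at max_hz)" % (lo * 1e3)
    if frame_time_s <= hi:
        return "refresh exactly when the frame is ready (inside the VRR window)"
    repeats = int(frame_time_s / hi)      # re-show the previous frame to stay above min_hz
    return "repeat previous frame %d time(s), then refresh" % repeats

print(present(1 / 120))  # 120 fps against a 40-90 Hz window -> capped at 90 Hz
print(present(1 / 60))   # 60 fps -> handled natively
print(present(1 / 25))   # 25 fps -> needs frame repetition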
GorbazTheDragon: TL;DR People believed the marketing hype.
That never happened before, did it??? Oh, wait: Devil's Canyon...
Intel was up front about it being a refresh. AMD said the problem was fixed in retail cards, but it looks like some retail cards still have the issue. Look at the 390/390X cards: AMD has said they are not re-brands even though there is ample proof that they are. AMD isn't doing themselves any favors by lying to everyone and, when caught, keeping quiet like a three-year-old. If AMD wants to get on more stable financial ground, they can't keep saying one thing while doing another.

The reasons they gave for the 390/390X not being a re-brand are that they increased the memory, increased the memory clock speed, and rewrote the power-management microcode. So they claim it's not a re-brand even though it's the same chip...
Posted on Reply