Saturday, June 13th 2015

AMD Radeon R9 Fury X Pictured Some More

Here are some of the clearest pictures of AMD's next-generation flagship graphics card, the Radeon R9 Fury X. Much like the R9 295X2, this card features an AIO liquid cooling solution. With the relocation of memory from chips surrounding the GPU to the GPU package as stacked HBM, the resulting PCB space savings translate into a very compact card. Under its hood is a full-coverage liquid cooling pump-block, which is plumbed to a thick 120 mm x 120 mm radiator. The card draws power from a pair of 8-pin PCIe power connectors. Display outputs include three DisplayPort 1.2a connectors and one HDMI 2.0 port.
Source: PC Perspective

98 Comments on AMD Radeon R9 Fury X Pictured Some More

#76
BiggieShady
mirakul: Not to mention that with HBM's monstrous bandwidth, the streaming delay will be nothing to speak of.
This is not 4 GB of GDDR5, it's 4 GB of HBM.
People whining about Fury's 4 GB seem to forget the time when we moved from GDDR3 to GDDR5.
That monstrous bandwidth is between the GPU and the HBM memory. Texture streaming happens over PCI-E.
mirakul: You need to shoot 240 bullets and have two guns: a Titan X with a massive capacity of 120 and a 4 s reload time, and a Fury X with a capacity of 40 and just a 1 s reload time. Assuming the two guns have the same firing rate, which gun finishes the job first?

Maybe we need a 5th grader here.
Yes, the Fury X has more memory bandwidth at lower memory clocks thanks to HBM. Thanks to that, the GPU will fetch texture samples faster and do MSAA faster, and those are two parts of the rendering pipeline. If the shaders are complex enough (or suboptimal), the memory bus may sit unused some of the time. It's all about the balance between shading power and memory bandwidth.
Capacity is a completely different story; your example only tests how fast each GPU can read through its entire VRAM.
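For what it's worth, here's a rough back-of-envelope sketch of the analogy's arithmetic. Only the capacities and reload times come from the quoted post; the shared firing rate is an arbitrary assumption:

```python
# Rough sketch of the "two guns" analogy. Capacities and reload times are from the
# quoted post; the shared firing rate (60 rounds/s) is an arbitrary assumption.
def time_to_fire(total_rounds, capacity, reload_s, rounds_per_s=60):
    """Seconds to fire total_rounds, reloading between full magazines."""
    firing = total_rounds / rounds_per_s
    # Reloads needed: ceil(total/capacity) - 1 (no reload before the first magazine).
    reloads = max(0, -(-total_rounds // capacity) - 1)
    return firing + reloads * reload_s

print("Titan X-style gun:", time_to_fire(240, capacity=120, reload_s=4.0), "s")  # 4 s firing + 1 x 4 s reload = 8 s
print("Fury X-style gun: ", time_to_fire(240, capacity=40, reload_s=1.0), "s")   # 4 s firing + 5 x 1 s reloads = 9 s
```

With those numbers the reload overhead is 4 s for the big magazine versus 5 s for the small one, so even the analogy's own arithmetic doesn't clearly favour either gun; what actually matters is the balance above between shading power, bandwidth, and how often the "magazine" has to be refilled.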
Posted on Reply
#77
RejZoR
And history has shown again and again that VRAM capacity plays a WAY smaller role in performance than the bandwidth between GPU and VRAM. One would expect people to have learned this by now. As for "but we didn't have 4K back then": back then, 1024x768 was 4K for graphics cards that only had 32 MB of VRAM...
Posted on Reply
#78
Assimilator
newtekie1: Why? There are plenty of good 1440p monitors with DVI only. Why replace a monitor you likely paid $400+ for a couple of years ago just to use DisplayPort?
If by "good" you mean "cheap South Korean imports" then yes. You'd be hard pressed to find ANY 1080p+ monitor released in the last half decade that doesn't have HDMI or DisplayPort or both... except the aforementioned South Korean monitors. What's ironic is that those same South Korean models do come in versions that support HDMI and DP for just a few bucks more, so the only ones to blame are the consumers who cheaped out when purchasing.

After all, why pay $450 for a monitor that's future-proof when you can pay $400 for one that isn't, then whine about how it's not your fault on the internet?
Posted on Reply
#79
Aquinus
Resident Wat-man
RejZoR: People are so gladly forgetting about texture streaming. You don't need 300 gazillion gigaterabytes of VRAM to play things at max possible settings, even with cards that have less than the absolutely ideal amount of VRAM.

When a game requires 6 GB of VRAM and you only have 4 GB, it'll stream half of the textures.
When a game requires 6 GB of VRAM and you have 8 GB available, it'll just store everything in VRAM.

The end result is a game that essentially runs equally fast on both graphics cards and looks identical on both. You may experience texture pop-in with streaming in certain situations, but that really depends on the game...
That's not entirely true. Once I go over about 300 MB of shared memory on my 6870s, they can't maintain a steady frame rate. If it goes north of 500 MB, it's choppiness interspersed with smooth gameplay. I would say streaming textures are okay if they amount to less than 20% of your total VRAM, provided your system memory is fast. I bet the width of the PCI-E link makes a difference too, but when push comes to shove, latency for streamed textures is pretty bad. As a result, the GPU (like a CPU core) spends more time waiting for data and less time actually doing work.
RejZoR: And history has shown again and again that VRAM capacity plays a WAY smaller role in performance than the bandwidth between GPU and VRAM. One would expect people to have learned this by now. As for "but we didn't have 4K back then": back then, 1024x768 was 4K for graphics cards that only had 32 MB of VRAM...
Back then, the things being rendered were far simpler than they are now. You're comparing apples and oranges there, I think, because how stuff gets rendered has changed a lot since then. Heck, what gets rendered has changed a lot too. Too little VRAM is like having a squirrel stuck in the airbox of your car. Yeah, the car runs, but it won't run well because the engine is being starved of air.

Remember when hardware T&L support was revolutionary? :p

In summary: streaming textures is fine, but it has a point of diminishing returns that comes around very quickly.
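A quick sketch of why that point of diminishing returns arrives so fast: local VRAM bandwidth is one to two orders of magnitude higher than the PCIe link, so anything streamed across the bus every frame costs far more than data already resident in VRAM. The bandwidth figures below are ballpark assumptions (roughly PCIe 3.0 x16 and a high-end card's local memory), not numbers from the article:

```python
# Ballpark cost of fetching texture data from local VRAM vs. streaming it over PCIe.
# Both bandwidth figures are assumptions, not measured values.
VRAM_BW_GBPS = 500.0    # assumed local memory bandwidth of a high-end card
PCIE_BW_GBPS = 16.0     # assumed PCIe 3.0 x16, one direction
FRAME_BUDGET_MS = 16.7  # one frame at 60 fps

def transfer_ms(megabytes, bandwidth_gbps):
    """Milliseconds to move `megabytes` at `bandwidth_gbps` GB/s."""
    return megabytes / 1024.0 / bandwidth_gbps * 1000.0

for mb in (64, 256, 512):
    local = transfer_ms(mb, VRAM_BW_GBPS)
    streamed = transfer_ms(mb, PCIE_BW_GBPS)
    share = streamed / FRAME_BUDGET_MS * 100.0
    print(f"{mb:3d} MB/frame: {local:5.2f} ms from VRAM, {streamed:6.2f} ms over PCIe "
          f"({share:5.1f}% of a 60 fps frame budget)")
```

Which lines up with the experience above: a few hundred megabytes crossing the bus per frame already eats a whole frame's worth of time, before latency even enters the picture.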
Posted on Reply
#80
RejZoR
If you're relying on streaming textures from a pathetic WD Green shit drive, then shitty results are to be expected...
Posted on Reply
#81
Brusfantomet
RejZoR: If you're relying on streaming textures from a pathetic WD Green shit drive, then shitty results are to be expected...
The limiting factor will be the PCI-e bus, as the textures will be stored in RAM (if the game engine is worth a damn). The PCI-e bus is just as fast for an R9 290X as for a 980 Ti, so the memory bandwidth of the card will NOT help there (it will, however, help with MSAA performance and other effects requiring massive GPU-VRAM bandwidth).

That being said, my two 290X cards have no trouble with modern games at 2560x1600, and by the looks of things two 290Xs (with 4 GB of VRAM) have no trouble with 4K either (source).

Add in that the Fury will get the memory compression from GCN 1.2, making both the memory bandwidth AND the effective storage for textures roughly 40% bigger (since the compression gives about 40% more effective memory bandwidth) than the same memory on a GCN 1.1 card, if AMD's claims for their compression algorithm hold.

All in all, I think the memory will be enough in modern games until HBM2 becomes available.
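To put some rough numbers on that compression claim, here's a quick sketch. The 512 GB/s raw HBM figure is the commonly quoted Fury X spec, not something from this article, and the 40% gain simply takes AMD's own marketing number at face value:

```python
# What a ~40% effective gain from GCN 1.2's colour compression would mean.
# 512 GB/s is the commonly quoted Fury X HBM figure (an assumption here),
# and the 1.4x factor takes AMD's "40% more" claim at face value.
RAW_BANDWIDTH_GBPS = 512.0
RAW_CAPACITY_GB = 4.0
COMPRESSION_GAIN = 1.4

print(f"Effective bandwidth: ~{RAW_BANDWIDTH_GBPS * COMPRESSION_GAIN:.0f} GB/s")
print(f"Effective capacity:  ~{RAW_CAPACITY_GB * COMPRESSION_GAIN:.1f} GB "
      "(only where the data actually compresses that well)")
```

Whether the capacity side of that really materialises depends on how much of what sits in VRAM is compressible at all, so treat it as an upper bound rather than a given.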
Posted on Reply
#82
RejZoR
Really? PCIe 3.0 x16 has a max bandwidth of 16 GB/s. That means it can fill 8 GB of VRAM in 0.5 seconds. You people are looking at the wrong potential bottleneck, and it isn't the PCIe bus. It's the HDD feeding textures to the graphics card...
Posted on Reply
#83
Lagittaja
RejZoR: It's a premium product, it comes with premium cooling. If even an AIO is on the edge, then it really is too hot. But if it's able to run at reasonably low fan speeds, then it's a big plus, because it'll be quieter than any NVIDIA card while performing with excellence. I mean, just look at the specs of that monster...

As for the "drivers conundrum", has anyone had any serious problems with a single card? I haven't. But I admit CrossFireX isn't the best. Then again, no multi-card setup ever was, so there's that...
Dude. That thing has a high-speed Nidec D1225C (Gentle Typhoon) on it. (!!)
The tamest model is 3000 rpm (which I guess is the one they're using); there are also 4250 rpm and 5400 rpm models.
Too bad the images are a bit shaky, so I can't tell whether it's DC or PWM. On DC, the 3000 rpm model starts at 4 V, but that's not really a usable speed (<400 rpm), as it's barely going to move any air given that the radiator they're using must be high-FPI.
At 5 V it's already spinning at ~1250 rpm. By 6 V it's at 1600 rpm.

If it needs that kind of fan...

We've previously seen it with a low-speed D1225C. Now, this close to the paper launch, a high-speed version? Oh dear.
Posted on Reply
#84
RejZoR
Well, the fans they stick on these coolers are absolutely ridiculous most of the time. The ones on my Antec H2O 920 were like jet turbines. The Noiseblockers, however, I run at a really low fixed RPM and they are super silent, while the CPU stays cool enough to operate at 4.2 GHz and never goes past 75°C, which is an excellent noise/heat ratio. But that's a push/pull configuration; here there's just one fan, so I don't know... We'll see...
Posted on Reply
#85
Lagittaja
I found another picture of it. The fan cable is PWM, so at least it will be reasonable at idle :D

Yeah, sure, the H2O 920 fans are crap. I'm not saying the Gentle Typhoon is a shit fan because it's a high-speed fan; on the contrary, the Gentle Typhoons are great fans on a radiator, pretty much some of the best.
But the fact they've swapped to a high-speed version of it... that's what's concerning.
Posted on Reply
#86
ensabrenoir
Must admit... I thought there would be more leaks of the Fury's performance by now. The truth will be known in a few more days...
Posted on Reply
#87
bug
ensabrenoir: Must admit... I thought there would be more leaks of the Fury's performance by now. The truth will be known in a few more days...
Historically, when AMD has kept mum about performance, they have always underwhelmed. I really hope this will be an exception.
Posted on Reply
#88
AsRock
TPU addict
Luka KLLP: That is a really good looking card
It'll look even better naked! However, there isn't long to wait for that :).
Posted on Reply
#89
Athlonite
newtekie1: Two 8-pins? So we can expect the power draw to surpass the 300 W mark.
How else do you think that AIO is being powered? If it were just air-cooled, it'd probably only be 1x 8-pin and 1x 6-pin.
Posted on Reply
#90
techtard
That fan looks like the high-RPM Gentle Typhoons I've got lying around. Hopefully it isn't the 3k RPM version like mine; they're not quiet, but they push a ton of air.
Posted on Reply
#91
Prima.Vera
RejZoR: Really? PCIe 3.0 x16 has a max bandwidth of 16 GB/s. That means it can fill 8 GB of VRAM in 0.5 seconds. You people are looking at the wrong potential bottleneck, and it isn't the PCIe bus. It's the HDD feeding textures to the graphics card...
The textures are loaded into system RAM and then into VRAM as required. The HDD/SSD is only used for the initial loading.
Posted on Reply
#92
chinmi
Interesting... too bad the 980 Ti will still be faster...
Posted on Reply
#93
Vayra86
Imagine this kind of GPU in a Mini-ITX build. Yummy.

As far as placing the card in the current market, the form factor is definitely OK by today's standards. AMD will easily have the most powerful *short* card on the market, perhaps not the most powerful across the board, but that's something at least. I can definitely see why they went for the AIO solution.

At this point the thing I'm most interested in is how much noise it generates at full load. A single fan is still a single-fan solution, and as a high-speed fan it may very well top the 290X's stock blower, which could destroy this card's desirability...
Posted on Reply
#94
ensabrenoir
Vayra86: Imagine this kind of GPU in a Mini-ITX build. Yummy.

As far as placing the card in the current market, the form factor is definitely OK by today's standards. AMD will easily have the most powerful *short* card on the market, perhaps not the most powerful across the board, but that's something at least. I can definitely see why they went for the AIO solution.

At this point the thing I'm most interested in is how much noise it generates at full load. A single fan is still a single-fan solution, and as a high-speed fan it may very well top the 290X's stock blower, which could destroy this card's desirability...
The Mini-ITX application is why I'm interested in this card. Power draw will be a factor for me in a Mini-ITX format.
Posted on Reply
#95
xfia
Prima.Vera: The textures are loaded into system RAM and then into VRAM as required. The HDD/SSD is only used for the initial loading.
That's not true for every game and program. If you monitor the pagefile, you'd notice that it gets used a lot, and of course if RAM fills up the pagefile gets used very rapidly.
It's no secret that an HDD can be a bottleneck and leave you waiting; that's a big reason the SSD was developed.
Posted on Reply
#96
ironwolf
chinmi: Interesting... too bad the 980 Ti will still be faster...
Does your crystal ball back that up? :wtf:
Posted on Reply
#97
fullinfusion
Vanguard Beta Tester
ironwolf: Does your crystal ball back that up? :wtf:
I was thinking the same damn thing. Where's the troll spray?

I was looking forward to the 390X, but it didn't offer me a damn thing...

Hello Fury, I'll be sure to give you a good home... Now hurry the F up and bring these out already.

Edit: oh, it's going to be so much fun re-pasting the GPU and its 4 little buddies ;)
Posted on Reply
#98
Brusfantomet
RejZoR: Really? PCIe 3.0 x16 has a max bandwidth of 16 GB/s. That means it can fill 8 GB of VRAM in 0.5 seconds. You people are looking at the wrong potential bottleneck, and it isn't the PCIe bus. It's the HDD feeding textures to the graphics card...
It's not the one-time loading of the textures that is the problem, but the constant swapping to and from system RAM; 500 milliseconds is ages in terms of computer latency.

Let's say a card has the performance to draw a scene at 50 fps; that is a frame time of 20 ms.

If it has to fetch 1 GB of data from RAM, a PCI-e x16 bus will take about 62 ms (1 GB / 16 GB/s = 0.062 s = 62 ms).

If the drivers are well made and the data allows it, the transfer can be done in parallel with the render job, giving a frame time of 62 ms, which works out to 16.1 FPS (1000 ms / 62 ms per frame = 16.1 fps).

If the GPU is not able to overlap that work, one frame will take 82 ms (computing plus waiting for data): 1000 ms / 82 ms per frame = 12.2 FPS.
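The same arithmetic as a tiny sketch, for anyone who wants to plug in their own numbers. The 1 GB per frame and 16 GB/s figures are the same assumptions as above; the exact transfer time is 62.5 ms before rounding:

```python
# Frame-time arithmetic from above: 20 ms of GPU work per frame plus 1 GB of data
# pulled across an assumed 16 GB/s PCIe x16 link every frame.
RENDER_MS = 20.0                      # 50 fps worth of GPU work
TRANSFER_MS = 1.0 / 16.0 * 1000.0     # 1 GB / 16 GB/s = 62.5 ms

overlapped_ms = max(RENDER_MS, TRANSFER_MS)  # transfer runs in parallel with rendering
serialised_ms = RENDER_MS + TRANSFER_MS      # GPU stalls while waiting for the data

print(f"Overlapped: {overlapped_ms:.1f} ms/frame -> {1000.0 / overlapped_ms:.1f} fps")
print(f"Serialised: {serialised_ms:.1f} ms/frame -> {1000.0 / serialised_ms:.1f} fps")
```

Either way, the card ends up far below the 50 fps it could manage if everything already sat in VRAM, which is the whole point.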

That being said, some games will load as many textures and models into VRAM as they can and never clean them out when they're not needed, netting higher VRAM usage than necessary.
xfia: That's not true for every game and program. If you monitor the pagefile, you'd notice that it gets used a lot, and of course if RAM fills up the pagefile gets used very rapidly.
It's no secret that an HDD can be a bottleneck and leave you waiting; that's a big reason the SSD was developed.
Remember that a game is more than its graphics; it's entirely possible for the engine or other parts of the game to use the pagefile without the textures ever going anywhere near it.
Posted on Reply