# AMD Radeon R9 Fury X Pictured Some More



## btarunr (Jun 13, 2015)

Here are some of the clearest pictures of AMD's next-generation flagship graphics card, the Radeon R9 Fury X. Much like the R9 295X2, this card features an AIO liquid cooling solution. With the relocation of memory from chips surrounding the GPU to the GPU package as stacked HBM, the resulting PCB space savings translate into a card that's very compact. Under its hood is a full-coverage liquid cooling pump-block, which is plumbed to a thick 120 mm x 120 mm radiator. The card draws power from a pair of 8-pin PCIe power connectors. Display outputs include three DisplayPort 1.2a, and one HDMI 2.0.






*View at TechPowerUp Main Site*


----------



## Aceman.au (Jun 13, 2015)

Jesus it's tiny.


----------



## newtekie1 (Jun 13, 2015)

Two 8-Pins?  So we can expect the power draw to surpass the 300w mark.


----------



## kiddagoat (Jun 13, 2015)

I like it, picking up one myself when they become available.


----------



## Marian Popa (Jun 13, 2015)

As a 780 Ti owner, and having been through the 270X, 280X and 780, I gotta say kudos to team Red. Can't wait to check this model out in a store soon, or even better: in my own system.


----------



## jigar2speed (Jun 13, 2015)

newtekie1 said:


> Two 8-Pins?  So we can expect the power draw to surpass the 300w mark.



Yep, but it's still just 50 watts up from the TITAN X...


----------



## newtekie1 (Jun 13, 2015)

jigar2speed said:


> Yap but its still just 50 Watts up from TITAN X...



Assuming it only hits 300w, with 2 8-pins AMD has 375w to play with.
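
For reference, the arithmetic behind that 375 W figure is just the PCIe spec's per-connector limits (75 W from the x16 slot plus 150 W per 8-pin connector); a minimal sketch:

```python
# Spec-compliant board power budget per the PCIe spec:
# 75 W from the x16 slot, 150 W per 8-pin auxiliary connector.
SLOT_W = 75
EIGHT_PIN_W = 150

def board_power_budget(num_8pin: int) -> int:
    """Total spec power available to a card with the given 8-pin count."""
    return SLOT_W + num_8pin * EIGHT_PIN_W

print(board_power_budget(2))  # 2x 8-pin, as on the Fury X -> 375
```

As arbiter notes later in the thread, cards like the R9 295X2 have exceeded these connector ratings in practice, so the spec budget is a floor for headroom rather than a hard ceiling.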


----------



## SimpleTECH (Jun 13, 2015)

Interesting that AMD went with a Gentle Typhoon (Nidec Servo) as their fan choice.


----------



## btarunr (Jun 13, 2015)

Liquid cooling people should be excited, because this card is truly single-slot capable (unlike the Titan X) if you give it a custom full-coverage block. The reference card is 2-slot only because the reference cooler has a pump+block inside. If you have your own loop with the pump and res sitting outside, then this card will look epic. With custom (real) liquid cooling and unlocked voltage limits, it could be a monster.


----------



## ZoneDymo (Jun 13, 2015)

Love it, love the fan, love the little box that makes the fan and radiator seem like one item, love the sleeved tubes for the water, love the card's look.

Now I just need it to perform properly.


----------



## xfia (Jun 13, 2015)

Just one thing: 4 GB of VRAM, yet the 390X comes standard with 8 GB. It could use more in CrossFire, but HBM1 can't go beyond 4 GB.


----------



## bogami (Jun 13, 2015)

So short, and this radiator is very capable, as is the fan. A 2x Fury X won't be long in coming, but it will need at least 3x 8-pin power inputs; a liquid-cooled card like this consumes a lot more, I think, and the TDP will unfortunately be high. Regardless, I think this will be my next choice.


----------



## damric (Jun 13, 2015)

ZoneDymo said:


> Love it, love the fan, love the little box that makes the fan and radiator seem one item, love the sleeved tubes for the water, love the cards look.
> 
> Now I just need it to preform properly



I agree. This does look fantastic.


----------



## RejZoR (Jun 13, 2015)

Who cares how many watts, it's water cooled. It's not like you're running this thing 24/7 anyway. Besides, whoever buys this sort of stuff doesn't give a flying f about power consumption.


----------



## lZKoce (Jun 13, 2015)

For a serial-production item, they really went all out: sleeving, design and all. Looks like a great card, hands down.

Edit: I want to see it disassembled. How did they cool the VRMs? Maybe some sort of modified AIO that covers them as well? Or a fan below the dotted top?


----------



## BiggieShady (Jun 13, 2015)

RejZoR said:


> Who cares how many watts, it's water cooled. It's not like you're running this thing 24/7 anyway, besides, who buys this sort of stuff doesn't give a flying f about power consumption.


I don't think people care about power consumption so much as overclocking headroom.


----------



## GreiverBlade (Jun 13, 2015)

Damn, it almost tempts me... but the AIO is a bit of a put-off. I would gladly put one of these with a custom block in place of my actual 290. That one or a 390X... if the 390X has a more standard size... hell, I don't even have an ITX case.

Short cards are cute and all, and fit well in ITX/mATX cases, but...

Judging by the length of my card's backplate, the Fury should be about the same size as the first part before the groove for the heatpipe (bracket to groove), well, maybe a bit longer, since the groove is right at the end of the PCIe connector...

Remove the back part (groove to back) and... eurgh, I don't even want to think of it...


----------



## arbiter (Jun 13, 2015)

newtekie1 said:


> Two 8-Pins?  So we can expect the power draw to surpass the 300w mark.





jigar2speed said:


> Yap but its still just 50 Watts up from TITAN X...





newtekie1 said:


> Assuming it only hits 300w, with 2 8-pins AMD has 375w to play with.



I guess it's worth pointing out that the R9 295X2 had 2x 8-pin as well, and that card could draw as much as 600 watts at max, so it would be a safe bet to say 300 watts is probably a tad low for this card. The 150-watt spec per 8-pin kinda goes out the window, IMO, when the same company has pulled almost 2x the spec on a recent card.


----------



## the54thvoid (Jun 13, 2015)

RejZoR said:


> Who cares how many watts, it's water cooled. It's not like you're running this thing 24/7 anyway, besides, who buys this sort of stuff doesn't give a flying f about power consumption.



But some of us do. What is relevant, though, is that if it pulls 350 watts but gives 295X performance, then that is a win. It's perf/watt that matters. I can now get rid of my SLI 780 Tis and replace them with a single one of these (or a 980 Ti). Some people do care about power usage beyond $$ reasons as well, such as environmental responsibility. Given we're all selfish anyway, it'd be better to have our awesome PCs consuming a little less power.

Truly excited to read reviews real soon.


----------



## Joss (Jun 13, 2015)

To use an AIO on a reference card means it heats too much.
To use one as thick as this means it heats like hell.
And this comes right after Nvidia took heat and consumption by the horns with the results we know.

If this GPU doesn't perform very, very well _and_ is cheaper than the direct alternative, then AMD is screwed because the rest of the new cards are nothing but re-brands.


Oh... and there's the drivers conundrum as well.


----------



## RejZoR (Jun 13, 2015)

Joss said:


> To use an AIO on a reference card means it heats too much.
> To use one as thick as this means it heats like hell.
> And this comes right after Nvidia took heat and consumption by the horns with the results we know.
> 
> ...



It's a premium product, so it comes with premium cooling. If even the AiO is on the edge, then it'll be too hot indeed. But if it's able to run at reasonably low fan speeds, then that's a big plus, because it'll be quieter than any NVIDIA card while performing with excellence. I mean, just look at the specs of that monster...

As for the "drivers conundrum", has anyone had any serious problems with a single card? I haven't. But I admit CrossFireX isn't the best. Then again, no multi-card setup ever was, so there's that...


----------



## the54thvoid (Jun 13, 2015)

RejZoR said:


> It's a premium product, it comes with premium cooling. If even AiO will be on the edge, then it'll be too hot indeed. But if it will be able to run at reasonably low fan speeds, then it's a big plus, because it'll be quieter than any NVIDIA card while performing with excellence. I mean, just look at the specs of that monster...
> 
> As for the "drivers conundrum", has anyone had any serious problems with a single card? I haven't. But I admit CrossfireX isn't the best. Then again no multicard setup ever was, so there's that...



Yes, exactly this, apart from "quieter than any Nvidia card": a few 980 Tis are sold with blocks, and EVGA do a hybrid water-cooled one: https://www.overclockers.co.uk/showproduct.php?prodid=GX-283-EA&groupid=701&catid=1914&subcat=1402

The fact is, the 290X was slated at launch for its noise and heat issues (throttling etc.). AMD listened and have looked realistically at how to bring out a top-end card without flaws. I think they should be applauded for having the balls to do things this way. This will be competing with NV's top end (we just don't know which way it'll go yet), and this time they've created a pretty sterling-looking package.


----------



## HumanSmoke (Jun 13, 2015)

Joss said:


> If this GPU doesn't perform very, very well _and_ is cheaper than the direct alternative, then AMD is screwed because the rest of the new cards are nothing but re-brands.


It should perform well enough. There is enough headroom with input power for AMD to dial in (up to a point) the performance they need. Having said that, recent history tends to indicate that "winning" will be on a case-by-case, title-by-title basis. I could well see the Fury X and 980 Ti splitting benchmarks, so it will probably come down to overclocking headroom as a decider. By the time the Fury X makes the site reviews, I'm guessing there will also be a whole range of custom vendor 980 Tis hitting the review circuit.


Joss said:


> To use an AIO on a reference card means it heats too much.


Well, I'm sure they didn't add it because the company is so flush with cash that they thought they'd give consumers an early Christmas present.


Joss said:


> And this comes right after Nvidia took heat and consumption by the horns with the results we know.


That came at the cost of stripping out a lot of compute functionality. Double precision took a hit, the register file size is no bigger than GK110's (and half that of the reworked GK210), and the L1 memory cache is slightly reduced as well. Now, if the Fury X also has double precision crippled down from Hawaii's 1/8 rate, some questions will have to be answered, especially if the Fiji GPU turns out to be a doubled-in-size Tonga (which has a native FP64 rate of 1/16).


Joss said:


> Oh... and there's the drivers conundrum as well.


Well, the card has been supposedly done for around six months, so I don't think that should be an issue - although launching flagship cards with immature drivers isn't an unknown occurrence.


----------



## Luka KLLP (Jun 13, 2015)

That is a really good looking card


----------



## BiggieShady (Jun 13, 2015)

HumanSmoke said:


> That came at a cost of stripping out a lot of compute functionality. Double precision took a hit, the register file size is no bigger than GK110 (and half that of the reworked GK210), and the L1 memory cache is slightly reduced as well.


I'd say cutting out the dedicated double-precision CUDA cores was a worthwhile trade-off when you consider what GPU compute is actually used for, and how much they gained overall with simplified instruction scheduling. Furthermore, the cache system went up a whole level with Maxwell as opposed to Kepler/Fermi, so I'm not surprised the L1 cache size was adjusted. Same goes for the register file size; I think all these changes were for the better, even for compute. After all, these GPUs work in clusters for compute applications.


HumanSmoke said:


> Now, if the Fury X also has a crippled double precision from the Radeon Hawaii's 1/8th rate, some questions then have to be answered - especially if the Fiji GPU turns out to be a doubled in size Tonga ( which has a native FP64 rate of 1/16)


You may be onto something since Tonga is GCN 1.2


----------



## Aquinus (Jun 13, 2015)

I don't know; I'm on the fence. The issue I'm having now with my 6870s isn't a lack of GPU power but not enough video memory. Fury (X) might be good, but the real question remains: will 4GB limit it for multi-GPU setups down the line? If the 980 Ti and Fury X are on par, there might be a slight incentive to go with the 980 Ti because of the 6GB.

Once again, this is all speculation on top of what has already been said. I suspect reviews will put the matter to rest once the card comes out.

Side note: That's a damn small card.


----------



## Prima.Vera (Jun 13, 2015)

This is probably the best card for playing at 1440p. Not for 4K, or at least it's not future-proof for 4K.


----------



## RejZoR (Jun 13, 2015)

BiggieShady said:


> I'd say cutting out special double precision cuda cores was a worthwhile trade off when you consider for what gpu compute is used and how much overall they gained with simplified instruction scheduling. Furthermore cache system got entire another level with maxwell as opposed to kepler/fermi, so I'm not surprised level 1 cache size was adjusted. Same goes for register file size, I think all these changes were for the better even for compute - after all, these GPUs work in clusters for compute applications
> 
> You may be onto something since Tonga is GCN 1.2



Then again, how many users use them for compute tasks? These are gaming cards, and while they can do compute, they are not meant for it. That's why both AMD and NVIDIA have the more expensive FireGL and Quadro lines, so they can sell dedicated cards that are great at compute and sort of suck at gaming...



Prima.Vera said:


> This is probably the best card to play on 1440p. Not for 4K. Or at least is not future proof for 4K.



Based on what? VRAM capacity? While that may be true to some extent, 4GB is still a lot, and with the enormous bandwidth it has, I don't think that's really a problem at this moment. And we are talking just raw bandwidth here; we aren't even taking into account framebuffer compression, which basically adds another ~40% of bandwidth.

Let's be realistic: the majority of people who buy Titan X and Fury X cards buy them every year. They want the best of the best, zero compromises. These kinds of people don't care how future-proof the card is; they just want it to perform great NOW. And the Fury X will do that without any doubt.

People like me, and probably you, buy them so that they remain useful for 2, maybe 3 years, and then we buy a new high-end (not enthusiast!) one. For us, buying such a card is a double-edged thing. Yes, it's expensive, but in the long run, maybe it's not that future-proof. Still, there are just a few games that really need so much VRAM; 3/4 of the others are happy with only around 2-3GB of VRAM...
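
To put that raw-bandwidth point in numbers, here is a rough sketch using the Fury X's quoted 512 GB/s of HBM bandwidth and the ~40% compression uplift claimed above (the uplift is workload-dependent, not a guaranteed figure):

```python
# Back-of-the-envelope effective bandwidth: raw HBM throughput plus the
# claimed ~40% gain from framebuffer (delta color) compression.
RAW_BANDWIDTH_GBPS = 512      # Fury X HBM1: 4 stacks x 128 GB/s
COMPRESSION_GAIN = 0.40       # the ~40% figure from the post; varies by workload

effective = RAW_BANDWIDTH_GBPS * (1 + COMPRESSION_GAIN)
print(f"Effective bandwidth: ~{effective:.0f} GB/s")  # ~717 GB/s
```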


----------



## Prima.Vera (Jun 13, 2015)

RejZoR said:


> Based on what? VRAM capacity? While that may be true to some extent, 4GB is still a lot and with enormous bandwidth it has, I don't think that's really a problem this moment. And we are talking just raw bandwidth here, we aren't even taking into account framebuffer compression which basically adds additional ~40% of bandwidth.
> 
> Lets be realistic, majority of people who buy Titan-X and Fury X cards buy them every year. They want best of the best, zero compromises. These kind of people don't care how future proof the card is, they just want it to perform great NOW. And Fury X will do that without any doubt.
> 
> People like me and probably you, we buy them so that they remain useful for 2, maybe 3 years and then we buy a new high end (not enthusiast!) one. For us, buying such card is a double edged thing. Yes, it's expensive, but in a long run, maybe it's not that future proof. Still, there are just few games that really need so much VRAM, 3/4 of others are happy with VRAM even at around only 2-3GB...



Yeah, I was talking about the VRAM capacity. Games like GTA V, COD and Watch Dogs already go beyond 4GB usage at 4K, so the target for the next games is clear...
You are right; personally I play at 1080p, and I always upgrade my card every 3rd generation. But I always buy the top card of the moment.


----------



## RejZoR (Jun 13, 2015)

I prefer to stick with a certain resolution, because it's just the smart thing to do. In the past it was 1280x1024 on a 19-inch 5:4 display. Sure, it was a bit small and not widescreen, but it had superb latency and it was a 75Hz panel, so it served me well for gaming in particular. The image was super sharp and responsive. Now I've gone to a 1080p 144Hz ASUS, and I'm planning to stick with it for quite a while. It's big enough, it's widescreen, and I'll be able to run all games at max possible settings with the same high-end card for a very long time before it actually becomes too slow.

I mean, my HD7950 is just now starting to show some age in maybe 2 or 3 games. The rest still runs at max with FSAA! So, if I buy a vanilla Fury or even maybe just an R9-390X, I should be fine again for quite some time.


----------



## R-T-B (Jun 13, 2015)

RejZoR said:


> I prefer to stick with certain resolution, because it's just a smart thing to do. In the past it was 1280x1024 on a 19 inch 5:4 display. Sure it was a bit small and not wide screen, but it had superb latency and it was a 75Hz one so it served me well for gaming in particular. Image was super sharp and responsive. Now I've gone to 1080p 144Hz ASUS and I'm planning to stick with it for quite a while. It's big enough, it's widescreen and I'll be able to run all games at max possible settings with same high end card for very long time before it actually becomes too slow.
> 
> I mean, my HD7950 just now starting to show some age in maybe 2 or 3 games. The rest still runs on max with FSAA! So, if I buy vanilla Fury or even maybe just R9-390X, I should be fine again for quite some time.



As much as I want to buy a Fury to support AMD, RejZoR, I'm with you. I play at 1080p 60Hz and don't plan to upgrade for some time... I have an R9 290X right now. That's probably future-proof for most games for a fair bit, which makes upgrading now kinda silly.

I'm actually considering even sidegrading to Tonga, but it's honestly not worth the effort for just a small energy saving.


----------



## Assimilator (Jun 13, 2015)

Gigabyte's GeForce 900 G1 Gaming series has 6 display outputs (Flex Display Technology). I feel like AMD missed a trick here by not having an add-on bracket that gives additional outputs, similar to the way some low-end HTPC cards come with a full-height bracket with 2 outputs and a half-height one with only 1. That way AMD could cater to people who want single-slot cards (standard bracket, as in the picture) while allowing partners to choose whether they want to offer SKUs with more than 4 outputs.

Granted, not that many people will be using more than 4 outputs anyway.


----------



## john_ (Jun 13, 2015)

newtekie1 said:


> Two 8-Pins?  So we can expect the power draw to surpass the 300w mark.



Or it will have plenty of room for overclocking. It is a water-cooled high-end card. Everyone seems to forget that.


----------



## FreedomEclipse (Jun 13, 2015)

Aceman.au said:


> Jesus it's tiny.



thats what she said. ( ͡° ͜ʖ ͡°)


----------



## lZKoce (Jun 13, 2015)

FreedomEclipse said:


> thats what she said. ( ͡° ͜ʖ ͡°)



BUUUURN!


----------



## RejZoR (Jun 13, 2015)

What I wish for is BIOS editing tools. I've been using them on my HD6950 and now my HD7950, and it's stuff sent from heaven. All this overclocking software like MSI Afterburner delivers unstable overclocking; in my case it was constantly failing and also not switching 2D/3D properly.

But with a flashed BIOS, the fan curve, voltage, clocks, everything is rock solid at much higher values than I could ever use in Afterburner. And best of all: system format, bootup, another OS, it's ready for performance without fiddling. Seeing how the R9-290X didn't have that makes me sad. It'll be hard to go back to crappy software overclocking once you've tasted the brilliance of a flashed BIOS...


----------



## Devon68 (Jun 13, 2015)

I just have to leave this here.
JESUS CHRIST IT'S UGLY.


----------



## xfia (Jun 13, 2015)

RejZoR said:


> Then again, how many users use them for compute tasks? These are gaming cards and while they can do compute, they are not meant for this. That's why both, AMD and NVIDIA have more expensive FireGL and Quadro. So they can sell dedicated cards that are great at compute and sort of suck at gaming...
> 
> 
> 
> ...


It is true that we have not seen this much memory bandwidth before, along with the architecture itself and the compression... the page file might be used a little differently, for all we know at the moment.


----------



## BiggieShady (Jun 13, 2015)

RejZoR said:


> Then again, how many users use them for compute tasks? These are gaming cards and while they can do compute, they are not meant for this. That's why both, AMD and NVIDIA have more expensive FireGL and Quadro. So they can sell dedicated cards that are great at compute and sort of suck at gaming...


The Quadro M6000 has the same GM200 GPU as the Titan X; the only difference is ECC memory, AFAIK. Those are used in production where no mistakes can happen (hence ECC), but I'm sure many GPGPU devs use mid-range gaming cards for development. Although I have a feeling that's nothing compared to the total number of PC gamers that play games on a Titan X.


----------



## R-T-B (Jun 13, 2015)

RejZoR said:


> What I wish is BIOS editing tools. I've been using this on my HD6950 and now HD7950 and it's stuff sent from heaven. All this overclocking software like MSI Afterburner delivers unstable overclocking. In my case it was constantly failing and it was also not switching 2D/3D properly.
> 
> But with flashed bios, fan curve, voltage, clocks, everything is rock solid at much higher values that I could ever use in Afterburner. And best of all, system format, bootup, other OS, it's ready for performance without fiddling. Seeing how R9-290X didn't have that, it makes me sad. It'll be hard to go back on crappy software overclocking once you taste the brilliance of flashed BIOS...




I feel your pain man.  Felt it very vividly when I dumped my old 7970...  The bios editor tools for that were awesome.


----------



## ensabrenoir (Jun 13, 2015)

that 390x unboxing video is back up again


----------



## R-T-B (Jun 13, 2015)

My only concern is that the added compression in GCN 1.2 isn't going to do much. It didn't from Tahiti to Tonga; just read the TPU review. 5% or less in most cases.

That said, it has so much bandwidth it will still kick ass, IMO.


----------



## bug (Jun 13, 2015)

Aceman.au said:


> Jesus it's tiny.



Only if you don't factor in that giant fan.
I'm curious what it can do (a few days to wait, I guess), but as an nvidia made man, it won't matter much to me till next year.


----------



## Ja.KooLit (Jun 13, 2015)

Man... I think I need to start courting my wife so I can buy 2 of these.


----------



## RejZoR (Jun 13, 2015)

R-T-B said:


> My only concern is that the GCN 1.2 added compression isn't going to so much.  It didn't from Tahiti->Tonga, just read the TPU review.  5% or less in most cases.
> 
> That said, it has so much bandwidth it still will kick ass IMO.



Wait a second, you're comparing apples with oranges here! Tonga, compared to Tahiti, has fewer shaders, fewer TMUs, a narrower memory bus, and identical clocks. Where did you get that 5%, when the card is significantly less powerful on paper but performs on par with its larger "brother"? It has to gain performance from somewhere, and trust me, it ain't just 5%...


----------



## R-T-B (Jun 13, 2015)

RejZoR said:


> Wait a second, you're comparing apples with oranges here! Tonga compared to Tahiti has less shaders, less TMU units and narrower memory bus and identical clocks. Where did you get those 5% when card is significantly less powerful on paper, but performs on par with the larger "brother". It has to gain performance from somewhere and trust me, it ain't just 5%...



I just skimmed the TPU review tbh, I assumed it to be a respin of Tahiti silicon directly.  My bad.


----------



## RejZoR (Jun 13, 2015)

Tonga wasn't a success; it was a tech preview after all. But they proved their point with it: it can be underpowered, and the optimizations in the GPU can make up the difference quite well. How that extrapolates to a card with twice the physical memory bus bandwidth and a lot more shaders, no one but AMD really knows. But it has to do something there as well, particularly at higher resolutions, where these cards should shine.


----------



## OneMoar (Jun 13, 2015)

Wouldn't use one if AMD gave one to me... no place to mount that rad either...


----------



## BiggieShady (Jun 13, 2015)

OneMoar said:


> would't use one if AMD gave one to me ... no place to mount that rad either ...


Rubbish, you're not fooling anyone ... in fact no one here can say that and mean it. 
Even with no place to mount the rad, we'd all learn to live with the rad hanging out of the open case


----------



## Loosenut (Jun 13, 2015)

OneMoar said:


> would't use one if AMD gave one to me ... no place to mount that rad either ...



I'll PM you my address if it ever happens.


----------



## RejZoR (Jun 13, 2015)

I have a mini-ATX case with one AiO already in it, and I'm quite confident I could fit a second one in there. If I've taught myself anything, it's that a case is never too small for anything. You should see my old box, a TT Lanbox with the same Core i7 920 in it. This Lian Li is luxury.

I'd have to move the CPU AiO to the exhaust port and put the graphics card's AiO in the front...


----------



## newtekie1 (Jun 13, 2015)

BiggieShady said:


> I don't think people care about power consumption, rather overclocking headroom



If two cards are pretty equal in all other areas, I'm picking the one that uses less power.  So it isn't something to be ignored.


----------



## MxPhenom 216 (Jun 13, 2015)

Prima.Vera said:


> This is probably the best card to play on 1440p. Not for 4K. Or at least is not future proof for 4K.


My 780 plays every game I own at 1440p maxed out flawlessly. I'd put this card in the realm of 1600p-4K.


----------



## Uplink10 (Jun 13, 2015)

RejZoR said:


> What I wish is BIOS editing tools. I've been using this on my HD6950 and now HD7950 and it's stuff sent from heaven. All this overclocking software like MSI Afterburner delivers unstable overclocking. In my case it was constantly failing and it was also not switching 2D/3D properly.
> 
> But with flashed bios, fan curve, voltage, clocks, everything is rock solid at much higher values that I could ever use in Afterburner. And best of all, system format, bootup, other OS, it's ready for performance without fiddling. Seeing how R9-290X didn't have that, it makes me sad. It'll be hard to go back on crappy software overclocking once you taste the brilliance of flashed BIOS...



I cannot even overclock my Radeon HD 7650M, because every time I have to redo the settings and enable the mode in MSI Afterburner that raises the overclocking limit. Too bad the BIOS editor does not support my card, so I cannot change the frequency in the card's BIOS.

*But with a working BIOS editor there is also a vulnerability, namely the option for malware like a RAT to change the BIOS and destroy the components on your graphics card by overclocking. I think this option should be limited to the motherboard's firmware, or to some external access to the graphics card, as if the graphics card had a USB port through which you could change the BIOS.*



newtekie1 said:


> If two cards are pretty equal in all other areas, I'm picking the one that uses less power. So it isn't something to be ignored.


They are rarely equal, and the price is often associated with features like that. But I am looking forward to the time when Nvidia will not have leverage over AMD through these "side" features.


----------



## RejZoR (Jun 13, 2015)

That limit increase is total bullshit. I got everything from a twitching display to lockups just because of it. Absolute rubbish. Whereas with the BIOS, I could freely clock my core to 1200 and the memory even up to 7000 MHz. It did lock up in certain games, but the majority were quite happy with it. I can't even dream about such clocks using shitty OC software; everything goes to shit after 1150/5400 basically, regardless of the temperatures or voltages used...

As for malware, forget it. Destroying someone's hardware is not profitable for malware writers. They need functional machines so they can milk money from them; killing them is bad for business. Unless AMD was doing it, but I don't think they are a) that stupid or b) that desperate.


----------



## btarunr (Jun 13, 2015)

ensabrenoir said:


> that 390x unboxing video is back up again



Sauce?


----------



## R-T-B (Jun 13, 2015)

Uplink10 said:


> *But with working BIOS editor there is also a vulnerability namely an option for malware like RAT to change BIOS and destroy your components on graphic card by overclocking. I think this option should be limited to motherboard's firmware or some external acces to graphic card like if graphic card had USB acces through which you could change BIOS.*



Nonsense.  This has never made sense for any malware writer.  Nearly any component in your PC can ALREADY be flashed with all 0's, effectively destroying it without having to overclock or anything.  No one has bothered to date.


----------



## 2big2fail (Jun 13, 2015)

HumanSmoke said:


> That came at a cost of stripping out a lot of compute functionality. Double precision took a hit, the register file size is no bigger than GK110 (and half that of the reworked GK210), and the L1 memory cache is slightly reduced as well. Now, if the Fury X also has a crippled double precision from the Radeon Hawaii's 1/8th rate, some questions then have to be answered - especially if the Fiji GPU turns out to be a doubled in size Tonga ( which has a native FP64 rate of 1/16)



I concur. What saved AMD over the last few years was the fact that the Tahiti architecture had the 1/4 rate and superior OpenCL performance. I would have considered a Titan X until I saw the horrendous 1/32 rate. It defeats the purpose of an entry-level professional card, which was the reasoning behind the $1k MSRP; now it's just an insanely overpriced gaming card. I'm really hoping for a 1/4 rate on Fiji, but I'd settle for maintaining the 1/8 rate considering how crippled Maxwell is in that respect.
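
To see why those ratios matter, here is a minimal sketch of how an FP64 rate translates into double-precision throughput via the usual peak-FLOPS formula (shaders x 2 ops per FMA x clock, scaled by the ratio). The shader count and clock below are illustrative Hawaii-class assumptions, not quoted specs:

```python
# Peak FP64 throughput from an FP32 configuration and an FP64 rate ratio.
# Example numbers are illustrative (roughly Hawaii-class), not official specs.
def peak_gflops(shaders: int, clock_ghz: float, fp64_ratio: float) -> float:
    """Peak GFLOPS: shaders x 2 ops per FMA x clock, scaled by the FP64 rate."""
    return shaders * 2 * clock_ghz * fp64_ratio

print(f"FP32: {peak_gflops(2816, 1.0, 1.0):.0f} GFLOPS")
for ratio in (1/4, 1/8, 1/16, 1/32):
    print(f"1/{int(1/ratio)} rate -> {peak_gflops(2816, 1.0, ratio):.0f} GFLOPS FP64")
```

The spread makes the complaint concrete: at the same FP32 configuration, a 1/32 rate delivers an eighth of the double-precision throughput of a 1/4 rate.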


----------



## ensabrenoir (Jun 13, 2015)

btarunr said:


> Sauce?


----------



## BiggieShady (Jun 13, 2015)

newtekie1 said:


> If two cards are pretty equal in all other areas, I'm picking the one that uses less power.  So it isn't something to be ignored.


What I meant is that people care more about whether the card is cooled well and can keep maximum boost clocks under load. That's where the priorities are.
No one should ignore power consumption; it directly translates into heat and noise. The only thing that trumps power consumption is price... again, priorities.


----------



## happita (Jun 13, 2015)

I want to see a Fury and Fury X unboxing....go find it...NAOWWWWW!!!!


----------



## Bansaku (Jun 13, 2015)

No DVI? They bloody well better throw in an active DP-DVI adaptor!


----------



## damric (Jun 14, 2015)

Bansaku said:


> No DVI? They bloody well better throw in an active DP-DVI adaptor!



If you are buying a card like this you should really be using display port.


----------



## Bansaku (Jun 14, 2015)

damric said:


> If you are buying a card like this you should really be using display port.



If I were in the market for 3 new monitors, you bet. But people are not necessarily going to go out and replace their monitors at the same time they are upgrading their GPU(s).


----------



## hero1 (Jun 14, 2015)

I like the PCB being short and single-slot. This will be awesome when combined with a different waterblock from EK, BP, AC etc. for a single-slot setup. I will be waiting for the reviews and go from there.


----------



## newtekie1 (Jun 14, 2015)

Bansaku said:


> No DVI? They bloody well better throw in an active DP-DVI adaptor!



It has HDMI, so they'll include an HDMI to DVI adapter, I'm guessing.



damric said:


> If you are buying a card like this you should really be using display port.



Why? There are plenty of good 1440p monitors with DVI only. Why replace the monitor that you likely paid $400+ for a couple years ago just to use displayport?


----------



## FYFI13 (Jun 14, 2015)

btarunr said:


> The only moving part in this entire product is the 15 mm-thick 120 mm fan


Really? How about the pump?


----------



## damric (Jun 14, 2015)

Bansaku said:


> If I were in the market for 3 new monitors, you bet. But people are not necessarily going to go out and replace their monitors at the same time they are upgrading their GPU(s).



I advise you to upgrade your 1200p monitor before you upgrade your graphics card. It would be a waste of GPU power otherwise.


----------



## Prima.Vera (Jun 14, 2015)

MxPhenom 216 said:


> My 780 plays every game I own at 1440p maxed out flawlessly. I'd put this card in a realm of 1600p-4k.


Do you also own the latest Witcher, Watch Dogs, GTA V, or the latest Assassin's Creed?


----------



## MxPhenom 216 (Jun 14, 2015)

Prima.Vera said:


> Do you also own  last Witcher, Watch Dogs, GTA V, last A$$asin's Creed?


The Witcher. I haven't checked performance in that one, but Fury would be overkill for 1440p when the 290/290X, 780/780 Ti, and 970/980 already push 1440p damn well.


----------



## newtekie1 (Jun 14, 2015)

MxPhenom 216 said:


> Witcher. That I haven't checked performance, but, Fury would be overkill for 1440p when the 290/290x 780/780ti/970/980 already push 1440p damn well.



I'm going with the assumption that Fury will be close to 980Ti performance, and I have to agree that it will be overkill for 1440p, and should handle 4k pretty well with just a few settings lowered from max.


----------



## RejZoR (Jun 14, 2015)

People so gladly forget about texture streaming. You don't need 300 gazillion gigaterabytes of VRAM to play things at max possible settings, even with cards that have less than the absolutely ideal amount of VRAM.

When a game requires 6GB of VRAM and you only have 4GB, it'll stream half of the textures.
When a game requires 6GB of VRAM and you have 8GB available, it'll just store everything in VRAM.

The end result is a game that essentially runs equally fast on both graphics cards and looks identical on both. You may experience texture pop-in with streaming in certain situations, but that really depends on the game...


----------



## the54thvoid (Jun 14, 2015)

RejZoR said:


> People are so gladly forgetting about texture streaming. You don't need 300 gazillion gigaterabytes of VRAM to play things and max possible settings even with cards that have less than absolutely ideal capacity of VRAM.
> 
> When game requires 6GB of VRAM and you only have 4GB it'll stream half of the textures.
> When game requires 6GB of VRAM and you have 8GB available, it'll just store everything in VRAM.
> ...


I'm quite sure the bandwidth will adequately compensate for the 4GB of RAM, but my 3GB cards suffered in several recent games, including Shadow of Mordor, because of VRAM usage. Inadequate VRAM has a hefty impact on gaming performance.


----------



## mirakul (Jun 14, 2015)

RejZoR said:


> People are so gladly forgetting about texture streaming. You don't need 300 gazillion gigaterabytes of VRAM to play things and max possible settings even with cards that have less than absolutely ideal capacity of VRAM.
> 
> When game requires 6GB of VRAM and you only have 4GB it'll stream half of the textures.
> When game requires 6GB of VRAM and you have 8GB available, it'll just store everything in VRAM.
> ...


This.

Not to mention that with HBM's monstrous bandwidth, the streaming delay will be nowhere to be seen.

And this is not 4 GB of GDDR5, it's 4 GB of HBM.

People whining about Fury's 4 GB seem to forget the time when we moved from GDDR3 to GDDR5.


----------



## mirakul (Jun 14, 2015)

Let's do some math.

You need to shoot 240 bullets and have two guns: a TitanX with a massive capacity of 120 and a 4 s loading time, and a FuryX with a capacity of 40 and just a 1 s loading time. Assuming the two guns have the same firing rate, which gun will finish the job first?

Maybe we need a 5th grader here.
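For fun, the analogy can actually be worked through. A quick sketch (all numbers are the hypothetical ones from the analogy above, and I'm assuming both guns start empty, reload a full magazine each time, and share an arbitrary firing rate):

```python
# Gun analogy for VRAM capacity vs. refill speed: total time to shoot
# all bullets = firing time (same for both guns) + total reload time.

def total_time(bullets, capacity, reload_s, rate_bps):
    """Seconds to fire `bullets` rounds, reloading a full magazine each time."""
    magazines = -(-bullets // capacity)  # ceiling division
    return magazines * reload_s + bullets / rate_bps

rate = 10  # bullets per second, assumed equal for both guns
titan_x = total_time(240, capacity=120, reload_s=4, rate_bps=rate)
fury_x = total_time(240, capacity=40, reload_s=1, rate_bps=rate)

print(titan_x)  # 2 reloads * 4 s + 24 s firing = 32.0
print(fury_x)   # 6 reloads * 1 s + 24 s firing = 30.0
```

So the small but fast-reloading magazine finishes first here; whether VRAM actually behaves like that in practice is exactly what's being debated in this thread.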


----------



## the54thvoid (Jun 14, 2015)

@mirakul, I clearly stated the bandwidth will likely compensate. Read my post properly before putting on your red cape and mentioning schooling.
RejZoR made the reference regarding low VRAM affecting texture popping, thus placing it in general terms, not solely linked to HBM, unless you both accept texture popping as a trade-off for the Fury X.
FTR, I've never whined about the Fury's 4GB of RAM. I've actually only commented on my own setup's limited 3GB.


----------



## BiggieShady (Jun 14, 2015)

mirakul said:


> Not to mention with HBM's monstrous bandwidth, the streaming delay will be none to see.
> This is not 4 GB of GDDR5, it's 4 GB of HBM.
> People whining about Fury's 4 GB seem to forget the time when we moved from GDDR3 for GDDR5


That monstrous bandwidth is between GPU and HBM memory. Texture streaming happens over PCI-E.


mirakul said:


> You need to shoot 240 bullets, and have two guns: TitanX with a massive capacity of 120 and 4s loading time, and FuryX with 40 capacity and just 1s loading time. Assuming two guns have the same firing rate, which gun will finish the job first?
> 
> Maybe we need a 5th grader here.


Yes, Fury X has more memory bandwidth at lower memory clocks thanks to HBM. Thanks to that, the GPU will fetch texture samples faster and do MSAA faster, and those are two parts of the rendering pipeline. If shaders are complex enough (or sub-optimal), the memory bus may sit unused some of the time. It's all about the balance of shading power and memory bandwidth.
Capacity is a completely different story; your example tests how fast each GPU can read its complete amount of VRAM.


----------



## RejZoR (Jun 14, 2015)

And history has shown again and again that VRAM capacity plays a WAY smaller role in performance than the bandwidth between GPU and VRAM. One would expect people to have learned this by now. Some will say "but we didn't have 4K back then". Back then, 1024x768 was 4K for graphics cards that only had 32MB of VRAM...


----------



## Assimilator (Jun 14, 2015)

newtekie1 said:


> Why? There are plenty of good 1440p monitors with DVI only. Why replace the monitor that you likely paid $400+ for a couple years ago just to use displayport?



If by "good" you mean "cheap South Korean imports" then yes. You'd be hard pressed to find ANY 1080p+ monitor released in the last half decade that doesn't have HDMI or DisplayPort or both... except the aforementioned South Korean monitors. What's ironic is that those same South Korean models do come in versions that support HDMI and DP for just a few bucks more, so the only ones to blame are the consumers who cheaped out when purchasing.

After all, why pay $450 for a monitor that's future-proof when you can pay $400 for one that isn't, then whine about how it's not your fault on the internet?


----------



## Aquinus (Jun 14, 2015)

RejZoR said:


> People are so gladly forgetting about texture streaming. You don't need 300 gazillion gigaterabytes of VRAM to play things and max possible settings even with cards that have less than absolutely ideal capacity of VRAM.
> 
> When game requires 6GB of VRAM and you only have 4GB it'll stream half of the textures.
> When game requires 6GB of VRAM and you have 8GB available, it'll just store everything in VRAM.
> ...


That's not entirely true. Once I've gone over about 300MB of shared memory on my 6870s, they can't maintain a steady frame rate. If it goes north of 500MB, it's choppiness interspersed with smooth gameplay. I would say streamed textures are okay if they're going to be less than 20% of your total VRAM, given that your system memory is fast. I bet the width of the PCI-E bus makes a difference too, but when push comes to shove, the latency for streamed textures is pretty bad. As a result, the GPU (like a CPU core) spends more time waiting for the data and less time actually doing stuff.


RejZoR said:


> And history has shown on and on that VRAM capacity plays WAY smaller factor in performance than bandwidth between GPU and VRAM. One would expect that people would learn this by now. Saying "but we didn't have 4K back then". Back then, 1024x768 was 4K for graphic cards that only had 32MB of VRAM...


Back then the things getting rendered were far simpler than they are now. You're comparing apples and oranges there, I think, because how stuff gets rendered (and what gets rendered) has changed a lot since then. Too little VRAM is like having a squirrel stuck in the airbox of your car. Yeah, the car runs, but it won't run well because the engine can't breathe.

Remember when hardware T&L support was revolutionary? 

In summary: streaming textures is good, but it has a point of diminishing returns that comes around very quickly.


----------



## RejZoR (Jun 14, 2015)

If you're relying on streaming textures from a pathetic WD Green shit drive, then the shitty results are to be expected...


----------



## Brusfantomet (Jun 14, 2015)

RejZoR said:


> If you're relying to stream textures from a pathetic WD Green shit drive, then the shitty results are to be expected...



The limiting factor will be the PCI-e bus, as the textures will be stored in RAM (if the game engine is worth a damn). The PCI-e bus is just as fast for an R9 290X as for a 980 Ti, so the memory bandwidth of the card will NOT help there (it will, however, help with MSAA performance and other effects requiring massive GPU-VRAM bandwidth).

That being said, my two 290X cards have no trouble with modern games at 2560 x 1600, and by the looks of things two 290X cards (with 4 GB VRAM) have no trouble with 4K either  source 

Add in that the Fury will get the texture compression from GCN 1.2, making both the memory bandwidth AND the stored memory for textures effectively 40% bigger than the same memory on a GCN 1.1 card (since the compression gives 40% more memory bandwidth), if AMD's metrics hold for their compression algorithm.

All in all I think the memory will be enough in modern games until HBM2 becomes available.


----------



## RejZoR (Jun 14, 2015)

Really? PCIe 2.0 has a max bandwidth of 16GB/s. That means it can fill 8GB of VRAM in 0.5 sec. You people are looking at the wrong potential bottleneck, and it isn't the PCIe bus. It's the HDD feeding textures to the graphics card...
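The fill-time arithmetic here is a theoretical peak. One caveat worth flagging: 16 GB/s for PCIe 2.0 x16 is the bidirectional aggregate; each direction nominally carries 8 GB/s (500 MB/s per lane × 16 lanes). A trivial sketch, ignoring protocol overhead and real-world throughput:

```python
# Theoretical time to fill VRAM over PCIe 2.0 x16 at nominal peak bandwidth.
# 16 GB/s is the bidirectional aggregate; host-to-card is nominally 8 GB/s.
pcie2_x16_aggregate = 16.0  # GB/s, the figure used in the post
pcie2_x16_each_way = 8.0    # GB/s, one direction only

vram_gb = 8.0
print(vram_gb / pcie2_x16_aggregate)  # 0.5 s, as stated in the post
print(vram_gb / pcie2_x16_each_way)   # 1.0 s one-way, the tighter bound
```

Either way, a one-time fill is fast; it's the per-frame traffic (or a slow HDD upstream) that hurts.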


----------



## Lagittaja (Jun 14, 2015)

RejZoR said:


> It's a premium product, it comes with premium cooling. If even AiO will be on the edge, then it'll be too hot indeed.* But if it will be able to run at reasonably low fan speeds, then it's a big plus, because it'll be quieter than any NVIDIA card while performing with excellence.* I mean, just look at the specs of that monster...
> 
> As for the "drivers conundrum", has anyone had any serious problems with a single card? I haven't. But I admit CrossfireX isn't the best. Then again no multicard setup ever was, so there's that...



Dude. That thing has a high-speed D1225C (Gentle Typhoon) from Nidec on it. (!!)
The most tame model is the 3000rpm one (which I guess is the one they're using); there are also 4250rpm and 5400rpm models.
Too bad the images are a bit shaky, so I can't tell whether it's DC or PWM. On DC, the 3000rpm model starts at 4V, but that's not really a usable speed (<400rpm) as it's not going to move pretty much any air at all, since the radiator they have must be high-FPI.
At 5V it's already going to spin at ~1250rpm. By 6V it's at 1600rpm.

If it needs that kind of a fan...

We previously saw it with a low-speed D1225C. Now, this close to the paper launch, a high-speed version? Oh dear.


----------



## RejZoR (Jun 14, 2015)

Well, the fans they stick on coolers are absolutely ridiculous most of the time. The ones on my Antec H2O 920 were like jet turbines. The Noiseblockers, however, I run at a really low fixed RPM and they are super silent, while the CPU stays cool enough to operate at 4.2GHz and never goes past 75°C, which is an excellent noise/heat ratio. But that's a push/pull configuration; here there's just 1 fan, so I don't know... We'll see...


----------



## Lagittaja (Jun 14, 2015)

I found another picture of it. The fan cable is PWM, so at least it will be reasonable at idle 

Yeah, sure, the H2O 920 fans are crap. I'm not saying the Gentle Typhoon is a shit fan because it's a high-speed fan. On the contrary, the Gentle Typhoons are great fans on a radiator. Pretty much some of the best.
But since they've swapped to a high-speed version of it... that's what's concerning.


----------



## ensabrenoir (Jun 14, 2015)

Must admit...thought there would be more leaks of the Fury's performance by now. ...truth will be known in a few more days.....


----------



## bug (Jun 14, 2015)

ensabrenoir said:


> Must admit...thought there would be more leaks by now of the fury performance. ...truth will be known in a few more days.....



Historically, when AMD has kept mum about performance, they have always underwhelmed. I really hope this will be an exception.


----------



## AsRock (Jun 15, 2015)

Luka KLLP said:


> That is a really good looking card



Looks even better naked! However, not long to wait for that.


----------



## Athlonite (Jun 15, 2015)

newtekie1 said:


> Two 8-Pins?  So we can expect the power draw to surpass the 300w mark.



How else do you think that AIO is being powered? If it were just air cooled it'd probably only be 1x 8-pin and 1x 6-pin.


----------



## techtard (Jun 15, 2015)

That fan looks like the high-rpm Gentle Typhoons I've got lying around. Hopefully they aren't the 3k rpm version like mine; those are not quiet, but they push a ton of air.


----------



## Prima.Vera (Jun 15, 2015)

RejZoR said:


> Really? PCIe 2.0 has a max bandwidth of 16GB/s. This means it fills 8GB of VRAM in 0,5 sec. You ppl are looking at the wrong potential bottleneck and it isn't a PCIe bus. It's the HDD drive feeding textures to the graphic card...


The textures are loaded into the system's RAM, then into VRAM as required. The HDD/SSD is only used for the initial loading..


----------



## chinmi (Jun 15, 2015)

interesting.... too bad the 980ti will still be faster...


----------



## Vayra86 (Jun 15, 2015)

Imagine this kind of GPU in a Mini-ITX build. Yummy.

As far as placing the card in the current market, the form factor is definitely OK by today's standards. AMD will easily have the most powerful *short* card on the market, perhaps not the most powerful across the board, but that's something at least. I can definitely see why they went for the AIO solution.

At this point the thing I am most interested in is how much noise it generates at full load. A single-fan solution is still a single-fan solution, and with a high-speed fan it may very well top the 290X stock blower, which could destroy this card's desirability...


----------



## ensabrenoir (Jun 15, 2015)

Vayra86 said:


> Imagine this kind of GPU in a Mini-ITX build. Yummy.
> 
> As far as placing the card in the current market, the form factor is definitely OK by today's standards. AMD will easily have the most powerful *short* card on the market, perhaps not the most powerful across the board, but that's something at least. I can definitely see why they went for the AIO solution.
> 
> At this point the thing I am most interested in, is how much noise it generates at full load. A single fan is still a single fan solution and as a high speed fan, it may very well top the 290X stock blower which could destroy this card's desirability...



Mini-ITX application is why I'm interested in this card. Power draw will be a factor for me in a mini-ITX format.


----------



## xfia (Jun 15, 2015)

Prima.Vera said:


> The textures are loaded to the system's RAM then into VRAM as required. The HDD/SDD is only used on the initial loading..


Not true for every game and program.. if you monitor the pagefile you'd notice that it gets used a lot, and of course if RAM fills up the pagefile will get used very rapidly.
It's not a secret that an HDD can be a bottleneck and leave you waiting, which is the reason the SSD was developed.


----------



## ironwolf (Jun 15, 2015)

chinmi said:


> interesting.... too bad the 980ti will still be faster...


Your crystal ball back that up?


----------



## fullinfusion (Jun 15, 2015)

ironwolf said:


> Your crystal ball back that up?


I was thinking the same damn thing. Where's the troll spray?

I was looking forward to the 390X, but it offered me not a damn thing..

Hello Fury, I'll be sure to give you a good home.. Now hurry the f up and bring these out already.

Edit: oh, it's going to be so fun re-pasting the GPU and its 4 little buddies


----------



## Brusfantomet (Jun 17, 2015)

RejZoR said:


> Really? PCIe 2.0 has a max bandwidth of 16GB/s. This means it fills 8GB of VRAM in 0,5 sec. You ppl are looking at the wrong potential bottleneck and it isn't a PCIe bus. It's the HDD drive feeding textures to the graphic card...



It's not the one-time loading of the textures that is the problem, but constant swapping to/from system RAM; 500 milliseconds is ages in computer latencies.

Let's say a card has the performance to draw a scene at 50 fps; that is a frame time of 20 ms.

If it has to fetch 1 GB of data from RAM, a PCI-e x16 bus will take 62 ms (1 GB / 16 GB/s = 0.062 s = 62 ms).

If the drivers are made well and the data allows it, the transfer can be done in parallel with the render job, giving a frame time of 62 ms, and that gives an FPS of 16.1 (1000 ms / 62 ms/frame = 16.1 fps).

If the GPU is not able to do the work in parallel, it will take 82 ms for one frame (computing and waiting for data): 1000 ms / 82 ms/frame = 12.2 FPS.
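The frame-time arithmetic above can be sketched directly (using the post's rounded 62 ms transfer figure and its 16 GB/s bus assumption):

```python
# Frame time when 1 GB must cross PCIe every frame, parallel vs. serial.
render_ms = 20.0    # the card alone renders the scene at 50 fps
transfer_ms = 62.0  # 1 GB / 16 GB/s, rounded as in the post

parallel_ms = max(render_ms, transfer_ms)  # transfer overlaps rendering
serial_ms = render_ms + transfer_ms        # GPU stalls waiting for data

print(round(1000 / parallel_ms, 1))  # 16.1 fps
print(round(1000 / serial_ms, 1))    # 12.2 fps
```

Either way, per-frame streaming over the bus caps you well below the card's native 50 fps, which is the whole point.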

That being said, some games will load as many textures and models into VRAM as they can, and not clean them out when they're not needed, netting a higher VRAM usage than necessary.



xfia said:


> not true for every game and program.. if you monitor the pagefile you would notice that it gets used alot and of course if ram fills up the pagefile will get used very rapidly.
> not a secret that a hdd can be a bottleneck and leave you waiting being the reason that the ssd was developed.



Remember that a game is more than its graphics; it's entirely possible for the engine or other parts of the game to use the page file without the textures ever getting near it.


----------

