Friday, June 12th 2015
AMD Radeon R9 Fury X Confirmed SKU Name for "Fiji XT"
The bets are off: AMD's latest flagship graphics card will indeed get a fancy name, Radeon R9 Fury X. Korean tech-site HardwareBattle leaked a product flyer with the SKU name and its catchphrase, "revolutionary, inside out." Based on the 28 nm "Fiji" silicon, the R9 Fury X is expected to feature 4,096 stream processors, 256 TMUs, 128 ROPs, and a 4096-bit wide HBM memory interface holding 4 GB of memory.
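The flyer doesn't mention memory clocks, but first-generation HBM is specified to run at 500 MHz with a double data rate, i.e. 1 Gbps per pin, so a 4096-bit interface implies roughly 512 GB/s of bandwidth. A sketch of the arithmetic (the 1 Gbps per-pin rate is the HBM1 spec figure, an assumption not stated in the leak):

```python
# Back-of-envelope memory bandwidth for a 4096-bit HBM interface.
# Assumption: first-gen HBM at 500 MHz DDR = 1 Gbps effective per pin.
bus_width_bits = 4096
rate_gbps_per_pin = 1.0  # gigabits per second, per pin (assumed HBM1 rate)

total_gbps = bus_width_bits * rate_gbps_per_pin  # total gigabits per second
bandwidth_gb_s = total_gbps / 8                  # convert bits to bytes

print(f"{bandwidth_gb_s:.0f} GB/s")  # 512 GB/s
```

For comparison, a 512-bit GDDR5 interface at 5 Gbps per pin works out to 320 GB/s by the same arithmetic, which is what makes the HBM figure notable.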
The reference-design Fury X will come with an AIO liquid cooling solution, likely designed by Asetek, featuring a Cooler Master-made fan ventilating its 120 x 120 mm radiator. Just as the Radeon R9 290X did away with D-Sub (VGA) support (even through dongles), Fiji does away with the DVI connector. You still get three DisplayPort 1.2a ports and a single HDMI 2.0 connector. The card has been pictured on the web featuring two 8-pin PCIe power connectors.
Sources:
HardwareBattle, VideoCardz
105 Comments on AMD Radeon R9 Fury X Confirmed SKU Name for "Fiji XT"
well ... it's next gen... but indeed the CFX connectors have been gone for 2 years now :roll:
about the rebrand ... well, the 290 is still a pretty capable card, so why not? it's just like what happened with the 7XX line, so who really cares in the end (and i owned a 770 ... so, not me). i am more interested in a reference card with a custom waterblock and a single-slot bracket ... or a custom air-cooled non-OC model paired with a custom waterblock
(because factory OC sucks most of the time ... and you can easily do yourself what they do to make you pay a premium over reference stock)
also ... 4gb is actually enough and standard (look at the 970, it also packs 4... wait ... forget it) :D
before we had 250<260<270<280<290 and X variants
now we have 250<360<370<380<390<fury and X variants
same hierarchy in terms of performance, just two cards added to the high end.
just like NV did in the 7XX era with the Titan + Titan Black, though with a shorter timeframe (aka: none) between the two top SKUs (well, not really ... i forgot the 750/Ti, 760 and 780/Ti but ... oh well)
The real point here is that as soon as you have to start dipping into shared system memory, you lose a substantial amount of performance, because the GPU starts spending more time waiting and less time working. Simply put, if I had 2GB on my 6870s, I wouldn't be considering an upgrade. Running out of VRAM is like driving a car with a huge engine but a tiny intake manifold (or a squirrel in your airbox, if you will). So if performance between any two GPUs were similar, I would take the one with more VRAM, because it will last longer. A lot of times it's not the rendering itself in a game that gets more complex, it's the size of the textures, though that's not true for all games.
With that all said, CFX works most of the time for me. If it doesn't, a driver update usually takes care of it.
Don't confuse this with justifying Titan's price. It's not. I'm just saying VRAM is important if you're planning long term (several years), which I am, since I'm in the market and don't intend to just buy another of the same GPU three years down the road like last time. Just saying.
When I bought my second 6870, it was significantly cheaper than the first was on release day, that's for sure. What did it get me? 7970 performance (sans VRAM) for half the cost.
You're overly optimistic about an industry that often can't even be bothered to support resolutions beyond 1280x720 and 1920x1080, forgets FOV sliders and customizable keys, and sometimes even forgets not to put "Press Start to Begin" on the first screen of the game.
You think these are the companies that will, by and large, invest lots of time into making sure games use the truly advanced DX12 feature set you think will make it superior to DX11 for SLI/CF?
I doubt it. I sincerely doubt it. They'll treat DX12 the way they treated DX11 over DX9. It'll be a great way to add a few extra features, but the ports will largely be treated as DX9 with DX11 gloss. In this case, it'll be programmers doing things exactly as they remember from DX11, with a slim few improvements (probably done in a month, or pulled from something like GameWorks by an external hardware partner) added so they can call it "DirectX 12."
Without proper effort put into rethinking the game as a fully DX12 title, it won't matter at all. If they start from an Xbox One base, it seems unlikely they'll ever move beyond the peculiar ESRAM buffer and the large pool of shared system RAM to truly customize it for gaming cards, given the roughly 5 GB of system RAM the current consoles usually dedicate to GPU-related tasks.
No DVI ports = bad
Fury = confusing name
16nm = happening someday
16nm = only 'chance' AMD has to get serious(according to Nvidia loyalists)
DX12 = false prophet?
Nice typo.
But maybe I'm wrong.
well ... at least owners of a 290/290X will have no worries waiting a bit for the 8gb version, if there ever is one ahah ... (i suspect it's the HBM that limits the vRAM to 4gb, but no biggie ... 8gb is still not really common and 4K is also not common :D although most 4K gaming can be done with 4gb, and for those satisfied with a 1080p monitor, well, no need to explain :D )
lately i've been telling myself: "hey, i already have a 390 ... no need to upgrade, i can wait for the next one ... or maybe a 390X."