# Radeon Fury to be slower than GTX 980 Ti



## Luka KLLP (Jun 2, 2015)

So apparently the AMD Radeon Fury will be just a bit slower than the GTX 980 Ti:
http://nl.hardware.info/nieuws/43996/computex-amd-toont-fiji-videokaart-achter-gesloten-deuren (sorry for the Dutch site, couldn't find the news anywhere else)
Power consumption will probably be a bit higher than the 980 Ti's, and it will be cooled with a 120mm radiator with two fans.
Honestly, I'm a bit disappointed, but we'll have to wait for the actual launch and price before concluding anything, I guess.


----------



## RCoon (Jun 2, 2015)

They don't even have a source, merely "We caught on..." during some ominous activity at Computex

_"A Titan X-killer we do not expect: we have picked up that the performance level will sit slightly below that of the Nvidia GeForce GTX 980 Ti. To that end, the clock frequency will be increased to more than 1 GHz. The power draw was described as 'under 300 watts', from which we can probably deduce that consumption will be above that of the 980 Ti and Titan X. Both of those cards have a TDP of 250 watts, which is often not reached in practice."_


----------



## FordGT90Concept (Jun 2, 2015)

Don't count your chickens before the eggs hatch.  The card should launch this month.


----------



## Xzibit (Jun 2, 2015)

RCoon said:


> They don't even have a source, merely "We caught on..." during some ominous activity at Computex



Probably the Nvidia booth

There is also this...

*PCGameshardware.de*


----------



## Cartel (Jun 2, 2015)

I can still use my CRT on the 980ti though...


----------



## FordGT90Concept (Jun 2, 2015)

Indeed, AMD axed the RAMDAC on the high end cards.  I'm planning on buying one of these to keep VGA support alive going forward:
StarTech DisplayPort™ 1.2 to VGA Adapter Converter – DP to VGA – 1920x1200


----------



## the54thvoid (Jun 2, 2015)

I'm happy to be open-minded on performance. Not buying till I see its reviews. I'll say this: unless they're neck and neck, much hatred there will be.


----------



## Luka KLLP (Jun 2, 2015)

the54thvoid said:


> I'm happy to be open-minded on performance. Not buying till I see its reviews. I'll say this: unless they're neck and neck, much hatred there will be.


To be sure. It'll probably also be better for us consumers if the performance is really close and they start some sort of price war


----------



## Ferrum Master (Jun 2, 2015)

FordGT90Concept said:


> Indeed, AMD axed the RAMDAC on the high end cards.  I'm planning on buying one of these to keep VGA support alive going forward:
> StarTech DisplayPort™ 1.2 to VGA Adapter Converter – DP to VGA – 1920x1200




Anyone actually tried these?


----------



## NC37 (Jun 2, 2015)

Luka KLLP said:


> To be sure. It'll probably also be better for us consumers if the performance is really close and they start some sort of price war



Exactly. And because AMD will almost always price lower, it'll be a benefit to consumers.

What will really matter is whether AMD did in fact rehash the rest of the line or brought out new toys. Reports have favored both possibilities, but I think it's safe to say that HBM will only be on the 390X for this gen until yields get higher.

However, any of this is moot until DX12 ships. That may be more of a deciding factor, as AMD hardware will score some big gains from it.


----------



## FordGT90Concept (Jun 2, 2015)

Ferrum Master said:


> Anyone actually tried these?


Yuck!  Suitable only for primitive 2D desktop displaying.


----------



## SASBehrooz (Jun 2, 2015)

The R9 390X may be faster than the GTX 980 Ti by 5-10% (let's say a draw with the Titan X), which means an overclocked GTX 980 Ti will beat the R9 390X in FPS and will be better in temperature, noise, power, drivers, technology, and finally price. Why price? Because AMD hasn't released any GPU since the R9 295X2 in April 2014, they lost consumers and market share (financial issues), and they must price that card at $700+. So the winning card will be the GTX 980 Ti for sure.
If the 390X is better than the Titan X (+10%), then an overclocked GTX 980 Ti will be close to the 390X and again would be the better choice in performance per dollar.


----------



## erocker (Jun 2, 2015)

Wow, this FUD story sure is making its rounds. Slower, faster, whatever this is all based on nothing.


----------



## 64K (Jun 2, 2015)

SASBehrooz said:


> The R9 390X may be faster than the GTX 980 Ti by 5-10% (let's say a draw with the Titan X), which means an overclocked GTX 980 Ti will beat the R9 390X in FPS and will be better in temperature, noise, power, drivers, technology, and finally price. Why price? Because AMD hasn't released any GPU since the R9 295X2 in April 2014, they lost consumers and market share (financial issues), and they must price that card at $700+. So the winning card will be the GTX 980 Ti for sure.
> If the 390X is better than the Titan X (+10%), then an overclocked GTX 980 Ti will be close to the 390X and again would be the better choice in performance per dollar.



We will have to see the performance of the 390x overclocked and the price before knowing whether the 390x or the 980 Ti is the better deal. You can't compare an overclocked 980 Ti to a non overclocked 390x.


----------



## GhostRyder (Jun 2, 2015)

64K said:


> We will have to see the performance of the 390x overclocked and the price before knowing whether the 390x or the 980 Ti is the better deal. You can't compare an overclocked 980 Ti to a non overclocked 390x.


Especially seeing as one of those isn't public yet. The possibility still stands that the Radeon Fury is slower, but we'll have to see it in person. The only things we can guess at are based on the rough specs of other cards out there, and even then that doesn't translate to the real world.


----------



## Steevo (Jun 2, 2015)

Based merely on the 4096 shaders, their listed speeds so far, and scaling not being perfect, if all else were the same it should push the Fury "XT" variant to about 5-20% faster than the Titan X before any overclocks. 45% more shaders, etc.


----------



## Tatty_One (Jun 2, 2015)

erocker said:


> Wow, this FUD story sure is making its rounds. Slower, faster, whatever this is all based on nothing.


And so the cycle continues: FUD >>> speculation >>> arguments >>> more FUD >>> more arguments >>> more speculation >>> blah blah


----------



## Fluffmeister (Jun 2, 2015)

If it's faster then by rights it should also cost more than $650; equally, the talk of lower yields and it being generally more expensive to manufacture may also force their hand when it comes to pricing.

Even if it's faster, an apparently expensive, hard-to-find part isn't really what they need right now.

Exciting times ahead. Glad it will all become clear soon though; the waiting is getting boring now.


----------



## SASBehrooz (Jun 3, 2015)

64K said:


> We will have to see the performance of the 390x overclocked and the price before knowing whether the 390x or the 980 Ti is the better deal. You can't compare an overclocked 980 Ti to a non overclocked 390x.



Yes, of course we can't say anything right now, and we'd better wait and see. But according to the GTX 980 Ti overclocking review (http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_980_Ti/34.html)
and some benchmarks around the web, the 390X will not be faster than the Titan X by 5-10%. No matter how much performance an overclocked 390X has, the 980 Ti will (or might) be the better choice on price and performance.

Let's just wait and stop talking about and comparing unreleased GPUs.


----------



## jboydgolfer (Jun 3, 2015)

Ferrum Master said:


> Anyone actually tried these?
> 
> View attachment 65327







OH!!! Sweet!!! I see you found a leaked image of the new AMD GPU...
Seriously though, they'd better come through this year. I've stuck with them through thick and thin, and on and on, and now I'm running out of reasons NOT to switch. I DO love my AMD GPUs though.


----------



## HumanSmoke (Jun 3, 2015)

RCoon said:


> They don't even have a source, merely "We caught on..." during some ominous activity at Computex


Oddly enough, Sweclockers (who usually have an OK strike rate for breaking stories), also reported much the same. Possible $600 price tag, and some not insignificant yield issues.


Xzibit said:


> Probably the Nvidia booth
> 
> 
> 
> ...


Well, one source definitely isn't from Nvidia. I posted this a day or so ago in a different thread. While it is ostensibly about noted Nvidia hater Charlie's revelations about the GTX 980 Ti (every single one of which is totally wrong except the base clock, quelle surprise), there is an observation about Fiji: "_Fiji seems to be a bit downgraded from what we heard_"





So, it could be that the Fiji observation is there just to give Charlie a 100% record in getting it wrong, and AMD are sandbagging, or it could be that people are just connecting dots - in which case the delay could be either the yield issue or the clock retune rather than both. I for one certainly hope the card eventuates and kicks some butt. The market share numbers are edging towards the unhealthy side for sustainable competition with Nvidia piling it on to the tune of 80% or so - the pendulum needs to swing back.


----------



## RCoon (Jun 3, 2015)

HumanSmoke said:


> Oddly enough, Sweclockers (who usually have an OK strike rate for breaking stories), also reported much the same. Possible $600 price tag, and some not insignificant yield issues.
> 
> Well, one source definitely isn't from Nvidia. I posted this a day or so ago in a different thread. While it is ostensibly about noted Nvidia hater Charlie's revelations about the GTX 980 Ti (every single one of which is totally wrong except the base clock, quelle surprise), there is an observation about Fiji: "_Fiji seems to be a bit downgraded from what we heard_"
> 
> ...



Wow, his article is so blatantly wrong. The 980 Ti isn't just a higher-binned, higher-clocked 980. How is that guy still press? Or do they keep him on for clickbait purposes?


----------



## FordGT90Concept (Jun 3, 2015)

Wait, what?  R9 3## got delayed?  No longer launching in June?


----------



## HumanSmoke (Jun 3, 2015)

RCoon said:


> Wow, his article is so blatantly wrong. The 980ti isn't just a higher binned higher clocked 980.


The funny/sad thing is that he quoted 1GHz for the 980 Ti as being a higher bin than the 980, which has a base clock of... 1127 MHz! His idea of a "higher bin" is actually 127MHz less than the supposedly slower part.


RCoon said:


> How is that guy still press?


He feeds the conspiracy line and the susceptible lap it up like the fat kid at an all-you-can-eat smorgasbord. Keep playing the "only we can see what's REALLY going on" card long enough, and they'll revel in their unique "insight".


RCoon said:


> Or do they keep him on for clickbait purposes?


I think he owns the site, so there's a direct correlation between his contributions and the clickbait ratio. Having said that, he caters to the disenfranchised AMD evangelists, so I doubt any of them actually noticed any discrepancy between fact and fiction with an Nvidia product...assuming any of them actually read it in the first place since it didn't have some inflammatory Weekly World News style headline.


----------



## GreiverBlade (Jun 3, 2015)

Just in case... even if Fiji is a bit slower than a 980 Ti (I have serious doubts about that... wait and see, eh?), it's a 2015 GPU versus a 2015 GPU, whereas for now AMD is still keeping up with Nvidia using a 2013 GPU... the only GPU that pulled ahead by a significant enough margin is the 980 Ti (NO, the Titanic... Titan X, sorry... doesn't count; paying 5.5 times the price of a 290 to get that one is not what I call a good deal).

Real-life situations over benchmarks and speculation, thanks.
After all, it's what I see from day to day with my rig and my friends' rigs, a 290 vs a 970 and even a 980: for all the games we play in common, there are no big differences in playability (all maxed, of course, most of the time). Again, you can bring up power consumption, but for me it doesn't matter; personal, case-by-case rules apply.


----------



## lilhasselhoffer (Jun 3, 2015)

It's Seinfeld.  We've got a thread that's about nothing.  No performance specifications, no blurry pictures, no leaked documents, not even an internal slide.

Please, continue with this. It helps to label the Green team and Red team members, because anybody not spouting praise or heaping vitriol is worth listening to in the future.


----------



## wiak (Jun 3, 2015)

Yet more FUD. WCCFtech has two articles, and they contradict each other; pretty sure it's the new "news bait",
much like "cute girl" = old hag when you click on the link, or like this.

Given the history of graphics cards, we know it's faster than the Titan X.
Why? How can it not be? It has 512GB/s of bandwidth, for crying out loud, and 4096 shaders (it's a known fact that new generations almost always have 50% more or even twice the shaders).

There hasn't been any *new* AMD core in ~2 years... and AMD knew for 7 years that it needed to improve the memory subsystem for HBM.

I think this projected graph from WCCFtech will enlighten anyone thinking it's slower..


----------



## yotano211 (Jun 3, 2015)

My question is, will it make my Prius faster?


----------



## BarbaricSoul (Jun 3, 2015)

meanwhile @W1zzard is sitting back looking at his 390X review sample and laughing at us as we play this guessing game.


----------



## Luka KLLP (Jun 3, 2015)

lilhasselhoffer said:


> It's Seinfeld.  We've got a thread that's about nothing.  No performance specifications, no blurry pictures, no leaked documents, not even an internal slide.
> 
> Please, continue with this. It helps to label the Green team and Red team members, because anybody not spouting praise or heaping vitriol is worth listening to in the future.


Just hyped up for all the video card releases; one of the most fun things about them is speculating and watching rumours before the cards are released. I didn't mean for this to be an Nvidia / AMD fanboy thread or anything like that.


----------



## Tatty_One (Jun 3, 2015)

Don't feed the FUD!


Luka KLLP said:


> Just hyped up for all the video card releases; one of the most fun things about them is speculating and watching rumours before the cards are released. I didn't mean for this to be an Nvidia / AMD fanboy thread or anything like that.


Sadly, that is the natural evolution of things here for these kind of topics


----------



## Fluffmeister (Jun 3, 2015)

wiak said:


> yet more FUD, wcctech got two articles, and hey contradict eachother, pretty sure its the new "news bait"
> much like "cute girl" = old hag when you click on the link, or like this
> 
> given the history of graphics cards, we know its faster than Titan X
> ...



Just 10% faster than a stock 980 Ti?

I hope it isn't too juicy!


----------



## Ferrum Master (Jun 3, 2015)

Radeon Führer edition... 

Soon it will leak somewhere for sure, the situation is getting really tense.


----------



## purecain (Jun 5, 2015)

The reason Nvidia launched first was to pinch all the people waiting to upgrade... everyone will have seen how close the performance is to the Titan X, so naturally, thinking they have a bargain, they've gone all in.

Meanwhile AMD have a card that beats the GTX 980 Ti by a stretch; if it's released and I'm wrong, I'll be devastated. Although I'm told there's a buzz around this card, so... I'm getting excited.

It had better be good, as all the 980 Tis are going out of stock fast...


----------



## c12038 (Jun 5, 2015)

Both the Fury and 980 Ti are vastly overpriced and beyond many gamers' budgets.

So does this mean the 980 and R9 295 will get cheaper?


----------



## GreiverBlade (Jun 5, 2015)

c12038 said:


> Both the Fury and 980 Ti are vastly overpriced and beyond many gamers' budgets.
> 
> So does this mean the 980 and R9 295 will get cheaper?


Wait, you know the price of the Fury now? HOW MUCH?  

As for the bottom line, well, maybe prices will go down (though with Nvidia I doubt it...),
but remember AMD doesn't only have the Fury in the lineup; the whole 3XX series is also coming, and no, the rebrand of the 290 into the 390 (with improvements) is not a shame... it just means it's still competitive if they upgrade it.
The 295X2 has already come down in price... I mean, I can actually get one for 750 CHF instead of 1500 CHF (though I'd rather get a second 290 at $189 than replace my current 290 with a 295X2).


----------



## natr0n (Jun 5, 2015)

Some GPUs are way overpriced, and yet you guys buy them up in quantity, thus accepting/enabling the higher prices.


----------



## Tatty_One (Jun 5, 2015)

purecain said:


> The reason Nvidia launched first was to pinch all the people waiting to upgrade... everyone will have seen how close the performance is to the Titan X, so naturally, thinking they have a bargain, they've gone all in.
> 
> Meanwhile AMD have a card that beats the GTX 980 Ti by a stretch; if it's released and I'm wrong, I'll be devastated. Although I'm told there's a buzz around this card, so... I'm getting excited.
> 
> It had better be good, as all the 980 Tis are going out of stock fast...


They were placed to launch first as the technology was already there as the basis of the card; my instincts tell me, though, that you may just be devastated, but we wait and hope.


----------



## Ferrum Master (Jun 5, 2015)

Tatty_One said:


> my instincts



Your W1zzard instincts?


----------



## Tatty_One (Jun 5, 2015)

Ferrum Master said:


> Your W1zzard instincts?


I wish, if that were the case I would be playing with the card already!


----------



## Zero3606 (Jun 5, 2015)

Setting the speculation aside, the only thing we know is that the AMD "Fury" offers new technology in HBM memory. HBM alone will not greatly improve performance; performance mostly comes from the power of the GPU doing the calculations. Also, with this GPU die being an evolution in technology and yields being difficult to get a handle on, there might be a limitation early on in clocking the GPU while keeping thermal limits reasonable.


----------



## Caring1 (Jun 5, 2015)

Ferrum Master said:


> Soon it will leak somewhere for sure, the situation is getting really tense.


It had better not leak; the warranty won't cover all the components


----------



## broken pixel (Jun 5, 2015)

Ferrum Master said:


> Anyone actually tried these?
> 
> View attachment 65327



I used an active adapter with my two 7970s a while back. It would max out at 85Hz at 2560x1440. So if you have a panel that you overclock, you'd better research which adapters support high refresh rates, if any do.


----------



## Frick (Jun 5, 2015)

Tatty_One said:


> They were placed to launch first as the technology was already there for the basis of the card, my instincts tell me though that you may just be devastated but we wait and hope.



I too have that feeling, for some reason. It doesn't matter though, because what I'm interested in is how HBM will affect their future APUs.


----------



## trodas (Jun 7, 2015)

Well, AMD has no excuse not to blow the Titan X out of the water with the Radeon Fury X card, and they will have seriously screwed things up if they don't. After all, the performance tests of the slower Fury seem more than impressive.

That said, another source shows these speed results, which are not so impressive:

http://wccftech.com/amd-r9-390x-nvidia-gtx-980ti-titanx-benchmarks/

So let's hope the real performance will be higher.

Also, this test shows that the speed-up over the Titan X is consistent through 21 games (posted already, added the link):

http://wccftech.com/amd-hbm-fury-x-fastest-world/

...so I cannot wait till the 16th of this month to see the reviews and performance, and what AMD sets as the new standard.


----------



## the54thvoid (Jun 7, 2015)

trodas said:


> Also this test shows, that the speed-up over Titan X is consistent thru 21 games (posted already, added the link):
> 
> 
> 
> ...



Unless I read that article wrong, that 'test' is just WCCFtech extrapolating game results from the shader and clock speed info. It's not actually results from 21 games.

EDIT:

Yes, it's a big old guess based on known chip stats:



> #1 We divide the number of stream processors of Fiji by the number of stream processors of Hawaii, trusting that the performance will scale linearly with each additional SP. Based on the scaling we observed with all other GCN GPUs, we found that the performance scales with perfect linearity at 4K.
> You can go back to the R9 290 vs 7870 data as a reference point.
> 
> For example, the 2560-GCN-unit R9 290 was precisely 2.0x faster than the 1280-GCN-unit HD 7870, even though the 290 is clocked slightly below the 7870, so this would account for the architectural improvements that AMD introduced with GCN 1.1 vs GCN 1.0. In this particular exercise around Fiji we do not account for any possible architectural improvements because they're difficult to quantify.
> ...
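For what it's worth, the extrapolation method quoted above reduces to a one-line ratio. A minimal sketch, where the SP counts are the ones discussed in the thread but the FPS inputs are made-up placeholders, not measured data:

```python
# Naive linear-scaling projection, as described in the quoted WCCFtech method:
#   projected_fps = known_fps * (sp_new / sp_known) * (clk_new / clk_known)
# It assumes perfect scaling with stream processors at 4K and ignores
# architectural changes, which is exactly why it is only a guess.

def project_fps(known_fps, sp_known, sp_new, clk_known=1.0, clk_new=1.0):
    return known_fps * (sp_new / sp_known) * (clk_new / clk_known)

# Sanity check against the article's own reference point: the 2560-SP R9 290
# was "precisely 2.0x faster" than the 1280-SP HD 7870.
print(project_fps(30.0, 1280, 2560))  # -> 60.0

# Hypothetical Fiji (4096 SP) projection from a Hawaii (R9 290X, 2816 SP) result:
print(round(project_fps(40.0, 2816, 4096), 1))  # -> 58.2
```

As the follow-up posts note, the weak spot is the linearity assumption itself; real games rarely scale perfectly with shader count.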


----------



## rtwjunkie (Jun 7, 2015)

I think the doomsayers are going to be surprised when the Fury comes in stronger than expected.


----------



## Luka KLLP (Jun 7, 2015)

rtwjunkie said:


> I think the doomsayers are going to be surprised when the Fury comes in stronger than expected.


I hope it will. And I also hope that AMD gives it a nice sharp price so Nvidia's lineup comes down in price as well


----------



## the54thvoid (Jun 7, 2015)

rtwjunkie said:


> I think the doomsayers are going to be surprised when the Fury comes in stronger than expected.



Only a fool would expect it to perform poorly relative to expectations. Oh wait, quite a few on TPU do.



Luka KLLP said:


> I hope it will. And I also hope that AMD gives it a nice sharp price so Nvidia's lineup comes down in price as well



They can't afford to price it low. My take is that if it's comparable to or better than the 980 Ti, it'll be priced in the same ballpark. Then AMD can price the Hawaii respins lower (at current 290X prices?) and sort of 'fool' people into believing they're the same thing. Both Nvidia and AMD have used that PR misrepresentation in the past*. Fiji stays expensive but earns brand prestige, and the layperson buys the not-quite-so-expensive card below it (the respun Hawaii). But that's just my take.

*The 770 being a 680, and so on.


----------



## alwayssts (Jun 7, 2015)

rtwjunkie said:


> I think the doomsayers are going to be surprised when the Fury comes in stronger than expected.




But, with respect... math?

980 Ti:

22 × (128 + 32) SP = 3520
somewhere around 56-57% extra bandwidth
3520 × (1 + 0.56 × 0.16) ≈ 3840 SP-equivalent

1128 (avg clock according to TPU's review) × 3840 / 4096 = 1057MHz

This is not taking into account that AMD's arch usually scales to around 96-97% of Nvidia's. Also, while HBM may provide a little extra bandwidth performance (around 10-15% if at 1GHz), that's still only going to add a percent or two of extra real performance.

While there are certainly unknowns about the core (is Fiji built on old-school 28HP so it clocks better than the more recent HDL-library parts, i.e. can the water-cooling edition and its 375W TDP allow a clock closer to 1200-1300MHz, or is it HDL and does it clock like crap, closer to an 1100MHz-or-so max like more recent parts?) and about the memory (is it stock-clocked at 1GHz? Can it scale similarly to most DDR3 at 1.3V, which could add a little more performance?), there are not going to be any miracles here. They will be close... if AMD performs a good chunk better through overclocking, it will likely largely be owed to a higher TDP/better cooling allowance.
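That back-of-envelope math, written out as code. Note the 3840 "effective SP" figure and the 0.16 bandwidth-to-performance coefficient are the poster's own assumptions, not specifications:

```python
# Clock-parity estimate: assuming performance scales as shaders * clock,
# at what clock would a 4096-SP Fiji merely match a GTX 980 Ti?

SP_980TI = 22 * (128 + 32)        # 22 SMMs x (ALUs + SFUs), as counted above -> 3520
BW_UPLIFT = 0.56 * 0.16           # ~56% extra bandwidth, ~16% of it realized as perf
EFFECTIVE_SP_980TI = SP_980TI * (1 + BW_UPLIFT)   # ~3835, "similar to 3840 SP"

AVG_CLOCK_980TI_MHZ = 1128        # average clock per TPU's review, per the post
FIJI_SP = 4096

parity_clock = AVG_CLOCK_980TI_MHZ * 3840 / FIJI_SP
print(round(parity_clock, 1))  # -> 1057.5
```

In other words, under these assumptions Fiji would need to sustain roughly 1057MHz just to tie an average-boosting 980 Ti, before applying the 96-97% architectural-scaling haircut mentioned above.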


----------



## Frag_Maniac (Jun 8, 2015)

yotano211 said:


> My question is, will it make my Prius faster?



No, but it might make RCoon's Viking Raccoon frisky. 

Anyway, regarding that chart, it's a bit boastful without even mentioning which games were tested. The way a lot of these tit-for-tat GPU power struggles have gone, the more closely matched models often have games they excel at but are beaten in others.

Let's judge the graphics parts by what players see, versus what hit-hungry sites claim pre-launch.


----------



## yotano211 (Jun 8, 2015)

Frag Maniac said:


> No, but it might make RCoon's Viking Raccoon frisky.
> 
> Anyways, regarding that chart, it's a bit boasty without even mentioning what games were tested. The way a lot of these tit for tat GPU power struggle scenarios have gone is the more closely matched models often have games they excel at, but are beaten in others.
> 
> Let's judge the graphics parts by what players see, vs what hit hungry sites claim pre launch.


Oh man, I was hoping it could give it more than 110hp.


----------



## purecain (Jun 8, 2015)

Even if the Fury is the faster card, I may opt for Nvidia's 980 Ti... the reason I usually support AMD is that I don't want them to go under... the core on the 980 Ti is amazing though. Kidney, anyone?????


----------



## RCoon (Jun 8, 2015)

trodas said:


> Well, AMD did not have any excuses to not blow Titan X out of the water with Radeon Fury X card, and they have seriously screw things up, if they did not do it. After all, the performance tests of the slower Fury seems to be more that impressive:
> 
> Well, another source show these speed results, that are not much impressive:
> 
> ...



I stopped trusting WCCFTech after discovering their writers copy-paste articles from other news sites without citing sources. Most of what they write is utter nonsense anyway. I don't think they've been an original credible source for any leak, it's all just reposts of leak news from elsewhere.


----------



## broken pixel (Jun 9, 2015)

WTFTech


----------



## Frag_Maniac (Jun 9, 2015)

purecain said:


> even if fury is a faster card, I may opt for nvidias 980ti...  the reason I support amd usually is because I don't want them to go under... the core on the 980ti is amazing though, kidney anyone?????



Keep in mind too that even though the 900 series doesn't have NVLink, Maxwell cards WILL get a slight boost once CUDA 6 releases, because they're designed to make use of a virtual software version of NVLink to speed up data between the GPU, the CPU, and other GPUs. So we won't really know how the AMD 300s compare to Maxwell cards until then. Plus Nvidia has ShadowPlay, which is miles better than Raptr's Game DVR.


----------



## Xzibit (Jun 9, 2015)

Frag Maniac said:


> Keep in mind too that even though the 900 series doesn't have NVLink, Maxwell cards WILL get a slight boost once CUDA 6 releases, because they're designed to make use of a virtual software version of NVLink to speed up data between it and the CPU and other GPUs. So we won't really know how the AMD 300s compare to Maxwell cards until then. Plus Nvidia has ShadowPlay, which is miles better than Raptr Game DVR.



NVLink is an interconnect for HPC systems. PCIe is replaced in favor of a mezzanine form factor to achieve it.



			
NvidiaNews said:

> NVIDIA has designed a module to house GPUs based on the Pascal architecture with NVLink. This new GPU module is one-third the size of the standard PCIe boards used for GPUs today. Connectors at the bottom of the Pascal module enable it to be plugged into the motherboard, improving system design and signal integrity


----------



## Frag_Maniac (Jun 9, 2015)

Xzibit said:


> NVLink is an interconnect for HPC systems. PCIe is replaced in favor of a mezzanine form factor to achieve it.



I've seen people say that, but it's NOT limited to HPC. The only reason people assume so is that Nvidia first sold (marketed) it to businesses seeking powerful compute GPUs. Recently they've added that it also helps for SLI, because it speeds up data flow between GPUs as well.

Keep in mind, Nvidia has been building HPC power into its consumer-grade GPUs since Fermi. NVLink is not workstation exclusive. It would be a real travesty to create a feature that effectively has the same end result as unified memory and not use it for gaming, especially when DX12's higher draw-call capability will complement it so well.

You have to realize, Nvidia has big investors that it pitches new tech to first. Their investments are a big part of affording to bring the tech to the consumer market as well, so naturally the initial presentations will be geared toward them.


----------



## Ja.KooLit (Jun 9, 2015)

I'll be very disappointed if the new AMD cards are slower than the 980 Ti...

but I will just have to wait for the reviews rather than just rumors


----------



## Xzibit (Jun 9, 2015)

Frag Maniac said:


> I've seen people say that, but it's NOT limited to HPC. The only reason people are assuming that is because Nvidia first sold (marketed) it to businesses seeking powerful compute GPUs. Recently they've added that it also helps for SLI, because it also speeds up data flow between GPUs as well.
> 
> Keep in mind, Nvidia has been building HPC power into their consumer grade GPUs since Fermi. NVLink is not workstation exclusive. It would be a real travesty to create a feature that effectively has the same end result as unified memory, and not use it for gaming, especially when DX12's higher draw call capability will compliment it so well.
> 
> You have to realize, Nivida corp has big investors that they first pitch new tech to. Their investments are a big part of affording to bring the tech to the consumer market as well. So naturally they're initial presentations will be geared toward them.



What ?

NVLink is HPC exclusive because the system is made to spec for the mezzanine form factor of 2/3/4. Unless Nvidia is planning a major motherboard ecosystem change with AMD and Intel, it's not happening. The best you can expect is something similar to what AMD has done with XDMA.


----------



## GreiverBlade (Jun 9, 2015)

night.fox said:


> I'll be very disappointed if the new AMD cards are slower than the 980 Ti...
> 
> but I will just have to wait for the reviews rather than just rumors


If it's by a small margin I would not be disappointed... 

and yep! REVIEW REVIEW REVIEW!


----------



## Frag_Maniac (Jun 9, 2015)

Xzibit said:


> NVLink is HPC exclusive because the system is made to specs for the mezzanine form factor of 2/3/4.



Source to prove that it's not for consumer cards?

Why would they be calling it unified memory, planning to implement a software version of it on Maxwell cards via CUDA 6, and showing charts of NVLink's benefits in SLI?

I've seen people claim Pascal will not be a consumer card too, which is also false. At Nvidia's stage demo debuting Pascal, they compared its estimated power to Maxwell, not to workstation-specific cards.

And here is their article verifying Pascal will have NVLink.

http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/

Here is AnandTech's article showing an Nvidia roadmap chart indicating the software version of unified memory will be implemented on Maxwell cards.

http://www.anandtech.com/show/7515/nvidia-announces-cuda-6-unified-memory-for-cuda


----------



## erocker (Jun 9, 2015)

I really don't find this rumor to be plausible. I could see the 390x being slower, but not the top Fury card... Actually I don't think the lower-end Fury card will be slower either. I'm kind of expecting (based on what I've read) that there will be three Fury cards. W/C version, air version (slower), and a slightly cut-down pro version.


----------



## BorisDG (Jun 9, 2015)

Just this:

"Hunting Titans" LOL!


----------



## horik (Jun 9, 2015)

I wonder when AMD will "leak" some numbers.


----------



## BiggieShady (Jun 9, 2015)

broken pixel said:


> WTFTech


or even ...


Spoiler: wait for it



WaterClosetCrapFestTech


----------



## the54thvoid (Jun 9, 2015)

erocker said:


> I really don't find this rumor to be plausible. I could see the 390x being slower, but not the top Fury card... Actually I don't think the lower-end Fury card will be slower either. I'm kind of expecting (based on what I've read) that there will be three Fury cards. W/C version, air version (slower), and a slightly cut-down pro version.



The 390X is a rebranded 290X, which is far behind the 980 Ti. It's not that it could be slower; it will be.
Fury, however, yes, unknown, but I don't see it being slower. And it might come bundled with BF Star Wars (Hexus article). Ooooh.


----------



## erocker (Jun 9, 2015)

I'm not sure it's an apples to apples rebrand. I hope not anyways, hopefully they tweaked the silicon a little bit for some sort of improvement.


----------



## uuuaaaaaa (Jun 9, 2015)

I will just leave this here...

http://wccftech.com/benchmarks-amds-fiji-gpu-surface-open-cl-compubench-performance-beats-titanx/


----------



## v12dock (Jun 9, 2015)

All these “leaks” with numbers all over the place are just a bunch of FUD.


----------



## the54thvoid (Jun 9, 2015)

erocker said:


> I'm not sure it's an apples to apples rebrand. I hope not anyways, hopefully they tweaked the silicon a little bit for some sort of improvement.



I'm sure it's been tweaked but its 290x origin is a long way behind 980ti perf. I'm ambivalent about the whole 3xx range tbh. Only the Fiji parts interest me on a tech and prospective purchase point.

A lot of AMD peeps have banged on about supporting the underdog, well, I hope they display their loyalty when Fiji arrives and help AMD out by purchasing it. (Not aimed at you erocker).

I might.


----------



## arbiter (Jun 9, 2015)

trodas said:


> Well, AMD did not have any excuses to not blow Titan X out of the water with Radeon Fury X card, and they have seriously screw things up, if they did not do it. After all, the performance tests of the slower Fury seems to be more that impressive:
> 
> Well, another source show these speed results, that are not much impressive:
> 
> ...


The thing about your "source" is:
*"[May 24th 2015] UPDATE : Benchmarks below were confirmed to be fake."*
Next time, READ the source you post a link for so you don't look like a complete tool.



trodas said:


> So let's hope the real performance will be higher
> 
> Also this test shows, that the speed-up over Titan X is consistent thru 21 games (posted already, added the link):
> 
> ...



Might want to read how they came up with those results: they're based on how they claim GCN scales, which they say is 1:1 for every GCN core added. So the numbers are based on fixed linear scaling, which isn't how it works. Just because you take a CPU from 1 core to 10 doesn't mean it becomes 10x faster.



> #1 We Divide the number of stream processors of Fiji by the number of stream processors of Hawaii, trusting that the performance will scale linearly with each additional SP. Based on the scaling we observed with all other GCN GPUs we found that the performance scales with perfect linearity at 4K.
> You can go back back to the R9 290 vs 7870 data as a reference point.
> 
> For example the 2560 GCN unit R9 290 was precisely 2.0x faster than the 1280 GCN unit HD 7870. Even though the 290 is clocked slightly below the 7870, so this would account for the architectural improvements that AMD introduced with GCN 1.1 vs GCN 1.0. In this particular exercise around Fiji we do not account for any possible architectural improvements because they’re difficult to quantify.
> ...



That is HOW they got the numbers: not from any game testing, but from assuming it will scale 1-for-1.
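For what it's worth, the extrapolation method they describe boils down to a single ratio. Here is a minimal Python sketch of it; the 30 fps baseline is a hypothetical number, the 2816/4096 figures are the Hawaii and Fiji stream-processor counts quoted in the thread, and the perfect 1:1 scaling assumption is exactly what's being criticized:

```python
# Naive linear-scaling estimate, as the article describes it:
# assume performance grows 1:1 with stream-processor count.
def estimate_fps(known_fps, known_sps, target_sps):
    """Extrapolate fps by scaling linearly with stream processors."""
    return known_fps * (target_sps / known_sps)

# Hawaii (R9 290X) has 2816 SPs; Fiji (Fury X) reportedly has 4096.
fiji_fps = estimate_fps(known_fps=30.0, known_sps=2816, target_sps=4096)
print(round(fiji_fps, 1))  # → 43.6, under perfect (unrealistic) scaling
```

Real GPUs rarely scale this cleanly, since clocks, bandwidth, and front-end limits all intervene, which is the point being made above.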


----------



## HumanSmoke (Jun 9, 2015)

uuuaaaaaa said:


> I will just leave this here...
> 
> http://wccftech.com/benchmarks-amds-fiji-gpu-surface-open-cl-compubench-performance-beats-titanx/


Pretty funny. WTFtech print a sensationalist headline, but conveniently overlook that the Fury scores lower than the mobile 980M and the 290X in other tests. I guess they are setting it up for tomorrow's scoop headline.


----------



## Xzibit (Jun 9, 2015)

Frag Maniac said:


> Source to prove that it's not for consumer cards?
> 
> Why would they be calling it unified memory, planning to implement a software version of it on the Maxwell cards via CUDA 6, and showing charts of NVLink benefits in SLI?
> 
> ...



Neither article mentions any sort of benefit to SLI or what you're implying.

The first article explains NVLink.

The interconnect had to change from PCIe to "NVLink" for the additional connections needed.

Pascal is coming to the consumer in PCIE.

We just had Computex 2015.  *Did you see any of the new motherboards with "NVLink" connectors ?*


----------



## Frag_Maniac (Jun 9, 2015)

One of the links I just showed you talks about Pascal having NVLink AND being compared to Maxwell. How would it make sense that they'd compare Pascal to Maxwell, since they explain it will take NVLink to reach that claimed 10x power?

PC needs uni mem, consoles have had it for a while. It would be stupid to bring this only to workstation cards. And the charts clearly show multi GPU use as per SLI.

Then there's the fact that they've said Maxwell will have the software version of NVLink via Cuda 6. You're implying they'll go from that to leaving consumer Pascal cards without ANY uni mem. Otherwise they'd have said Maxwell AND Pascal will have the Cuda 6 software version of uni mem.

And here's an article that shows NVLink version 1.0 having the GPU connected to CPU via Pci Ex, and using NVLink between GPUs. To carry it further in consumer MBs to use it between GPU and CPU is just a matter of Intel and AMD jumping on board with it. Which seems the obvious next step since Pci Ex bottlenecks exist even in gaming.

http://www.enterprisetech.com/2014/11/18/early-nvlink-tests-show-gpu-speed-burst/

It may take a bit longer to reach its full potential on the consumer side, but Nvidia ARE making it possible on their end.

I can see Intel jumping on this fairly soon. If they do, AMD will have to follow or sink even deeper financially than they have already. I don't give a crap what they had at Computex 2015, it will be some time before Pascal is released, that's NEXT year bud.

There are MANY levels of NVLink being mentioned from 30Gb/s all the way to 200GB/s. It's conceivable we'll soon see consumer gaming MBs with one of those lower levels of it, just enough to balance data flow.

Nvidia's hardware has been bridging the gap between gaming and HPC for some time, from Fermi, to the Titans, to NVLink. This shouldn't surprise you. It's time to move forward where we need it most, on the platform itself.


----------



## Xzibit (Jun 9, 2015)

Frag Maniac said:


> One of the links I just showed you talks about Pascal having NVLink AND being compared to Maxwell. How would it make sense that they'd compare Pascal to Maxwell, since they explain it will take NVLink to reach that claimed 10x power?



Custom HPCs
Maxwell = PCIE
Pascal = NVLink



Frag Maniac said:


> PC needs uni mem, consoles have had it for a while. It would be stupid to bring this only to workstation cards. And the charts clearly show multi GPU use as per SLI.



None of the charts show SLI.  They show the NVLink communication.  Do you see SLI connectors on any of the Pascal screenshots?

Frag Maniac said:


> Then there's the fact that they've said Maxwell will have the software version of NVLink via Cuda 6. You're implying they'll go from that to leaving consumer Pascal cards without ANY uni mem. Otherwise they'd have said Maxwell AND Pascal will have the Cuda 6 software version of uni mem.



CUDA 7 has been available since the start of the year. Nvidia officially launched CUDA 7 at GTC. I haven't heard of this CUDA 6 NVLink you're talking about.

NVLink = New connector in Mezzanine form for custom HPCs
Pascal for consumers will be in PCIE = No NVLink


----------



## GreiverBlade (Jun 10, 2015)

the54thvoid said:


> 390X is rebranded 290X which is far behind 980ti. Its not that it could be slower, it will be.
> Fury however, yes, unknown but I don't see it being slower. And it might come bundled with BF Star Wars (Hexus article). Ooooh.


A 20-ish % behind the Ti, but that's the 290X; we still don't know how the refined core in the 390X will behave.


----------



## Frag_Maniac (Jun 10, 2015)

Xzibit said:


> None of the charts show SLI.  They show the NVLink communication.  Do you see SLI connectors on any of the Pascal screenshots ?


You're the one implying NVLink can't even work with PCIe switches, yet it says this right below one of the charts in the last link I posted, with multi-GPU being used.

_"The scenarios that Nvidia is suggesting in the chart above for how CPU-GPU complexes will be linked with the NVLink 1.0 interconnect shows the CPUs being linked to the GPUs using PCI-Express switches."_




> CUDA 7 has been available since the start of the year. Nvidia officially launched CUDA 7 at GTC.  Havent heard of this CUDA 6 NVLink your talking about.
> 
> 
> 
> ...




Here it shows the very first feature listed for CUDA 6 is uni mem, which has also been described as the software version of NVLink.

http://devblogs.nvidia.com/parallelforall/powerful-new-features-cuda-6/

I think you're in denial and a bit out of the loop dude.


----------



## Xzibit (Jun 10, 2015)

Frag Maniac said:


> You're the one implying NVLink can't even work with Pci Ex switches, yet it says this right below one of the charts in the last link I posted, and multi GPU being used.
> 
> _"The scenarios that Nvidia is suggesting in the chart above for how CPU-GPU complexes will be linked with the NVLink 1.0 interconnect shows the CPUs being linked to the GPUs using PCI-Express switches."_
> 
> ...



From the first link you posted



> Now to be clear here, CUDA 6’s unified memory system doesn’t resolve the technical limitations that require memory copies – specifically, the limited bandwidth and latency of PCIe – rather it’s a change in who’s doing the memory management. Data still needs to be copied to the GPU to be operated upon, but whereas CUDA 5 required explicit memory operations (higher level toolkits built on top of CUDA withstanding) CUDA 6 offers the ability to have CUDA do it instead, freeing the programmer from the task.



I think you're confused as to what NVLink is.

*NVIDIA NVLINK HIGH-SPEED INTERCONNECT*

Nvidia said:

> NVIDIA® NVLink™ is a high-bandwidth, energy-efficient interconnect that enables ultra-fast communication between the CPU and GPU, and between GPUs. The technology allows data sharing at rates 5 to 12 times faster than the traditional PCIe Gen3 interconnect,



I can't make it any simpler than that video.



			
Nvidia said:

> NVLink replaces PCI Express


----------



## the54thvoid (Jun 10, 2015)

GreiverBlade said:


> a 20ish % behind the Ti but it's the 290X we still don't know how the refined core in 390X will behave



TPU review: it's a 50% gap in an undoubted AMD title, DA: Inquisition, at 4K.

Could pick 'n' choose, but it looks like a 40% fps gap on most things averaged (this is a quick check before work at 6.45am!)


----------



## irani (Jun 10, 2015)

the54thvoid said:


> 390X is rebranded 290X .



it's not true


----------



## Frag_Maniac (Jun 10, 2015)

Xzibit said:


> I think your confused as to what NVLink is...


I'm well aware of what it is, but unlike you, I don't ignore that it can be used between GPUs even on a PCIe motherboard, which I just showed in my last post. The only thing currently keeping it from being a reality between CPU and GPU on consumer boards is someone like Intel jumping on board.

You have no spec chart, no system info sig, so I have to wonder: are you trying to hide being an AMD fanboy or something? Because this reeks of denial.

After all, you wouldn't even respond to the info I showed on Nvidia's own blog about the FIRST feature of CUDA 6 being listed as unified memory, after you said you'd heard of no such thing. And that's for already existing Maxwell cards.


----------



## GreiverBlade (Jun 10, 2015)

the54thvoid said:


> TPU review, its 50% gap in an undoubted AMD title, DA: Inquisition at 4k.
> 
> Could pick n choose but it looks like 40% fps gap on most things averaged (this is a quick check before work at 6.45am!)


ooohhh... one title... (though a 290/290X still achieves a playable framerate in any title, so: no biggies)

OK, sorry... I was talking 1080p and averaged... my bad. Too bad all my friends have a 970 or 980 and not a Ti, otherwise I would gladly look at that gap in a real-life situation, since I, well..., don't see it.
Yes, the fps counter shows their card is faster, but does it warrant the price difference they paid (even for new cards... not specially talking about the price I got for my card)? If Nvidia had a more "affordable" price I wouldn't mind supporting them a bit more (than just having a Shield Tablet and a GT730... as a backup for BIOS flashes), because right now, going from a 290X to a 980 Ti, the price difference is around 116.7% more.
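A quick sketch of where a figure like that ~116.7% comes from; the $300 and $650 street prices below are just assumed round numbers that reproduce it, not quoted prices:

```python
# Percentage premium of one card over another:
# (expensive - cheap) / cheap * 100.
def price_gap_percent(cheap: float, expensive: float) -> float:
    return (expensive - cheap) / cheap * 100.0

# Assumed street prices: ~$300 for a 290X, ~$650 for a 980 Ti.
print(f"{price_gap_percent(300, 650):.1f}%")  # → 116.7%
```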

Oops, sorry: it was 29% at 1080p averaged, 33% at 1440p, 27% at 2160p and 29% at 900p, so yep, around 40% averaged (don't misunderstand me, I find it quite impressive given the power consumption difference).

but i see what you meant no worries




irani said:


> it's not true


Well, not technically... but yep, it's the same silicon with some refinement. (They don't need to renew the whole lineup, as it holds its ground quite fine...)


----------



## HumanSmoke (Jun 10, 2015)

irani said:


> it's not true


You have some proof? Because there is some pretty substantial evidence that points to the 390X being a barely warmed-over 290X.
The first example would be PowerColor's 390X using the same PCB as its 290X (LF R*29*FA), while XFX's and Asus's leaks confirm basically the same thing. If you consider a 50 MHz core bump and a 250 MHz (1 GHz effective) memory clock increase to not be a rebrand, then that's your prerogative, but cards such as the GTX 770 are considered by many people to be rebrands - and that particular card brought a larger clockspeed increase as well as a dynamic boost algorithm.


----------



## irani (Jun 10, 2015)

I was wrong about Fury.

Yes, the 390X is a rebrand of the 290X.

Sorry guys


----------



## the54thvoid (Jun 10, 2015)

I want to state here that if the Fury card overclocked doesn't beat a standard overclocked 980 Ti, it's an Nvidia win (and vice versa). I say this because Maxwell cards could all run a good 100-200 MHz higher on the core, but Nvidia insist on being power efficient (by limiting their hardware).
I'll concede that Fury gets the popular win if it's faster at stock. And if it wins the overclock contest, well, happy days for AMD!


----------



## BiggieShady (Jun 10, 2015)

the54thvoid said:


> I want to state here that if the Fury card over clocked doesn't beat a standard over clocked 980ti, its an Nvidia win (and vice versa). I say this because Maxwell cards could all be a good 100-200mhz higher in core but Nvidia insist on being power efficient (by limiting their hardware).
> I'll concede that Fury gets the popular win if at stock its faster. And if it wins the over clock condition, well, happy days for AMD!


I must agree, but the 980 Ti needs to be OC'd on the same liquid cooler as the Fury card has (will have)
... otherwise we could already hand the title to Fury, because with its massive shader count and memory bandwidth it would have to throttle insanely to be slower ... and it won't, because of the liquid cooling.


----------



## RejZoR (Jun 10, 2015)

the54thvoid said:


> 390X is rebranded 290X which is far behind 980ti. Its not that it could be slower, it will be.
> Fury however, yes, unknown but I don't see it being slower. And it might come bundled with BF Star Wars (Hexus article). Ooooh.



Not true. The R9-390X is based on the R9-290X GPU core arrangement with R9-285 optimizations and possibly additional DX12 instructions. A rebrand is when you just stick a new BIOS with a different name on it and maybe raise the clocks a bit...


----------



## the54thvoid (Jun 10, 2015)

RejZoR said:


> Not true. R9-390X is based on R9-290X GPU core arrangement with R9-285 optimizations and possibly additional DX12 instructions. Rebrand is when you just stick a new BIOS with different name on it and maybe raise clocks a bit...



Not in my books.  GPU rebrands are done using the vast majority of the chip architecture and tweaking here or there.  The 390X is a spiritual 'rebrand'. Feel free to take that up with the many other tech sites that call it such.

I suppose it would be normalised if it were called the 295X, but that name kind of got taken.  I know NV do it too; the 770 was a 680 on juice. Either way, Fury is AMD's only real 'new' creation.


----------



## TheGuruStud (Jun 10, 2015)

the54thvoid said:


> Not in my books.  GPU rebrands are done using the vast majority of the chip architecture and tweaking here or there.  The 390X is a spiritual 'rebrand'. Feel free to take that up with many other tech sites that call it such.
> 
> I suppose it would be normalised if it were called the 295X but that kind of got taken.  I know NV do it too, 770 was a 680 on juice but either way - Fury is AMD's only real 'new' creation.



Intel is the biggest rebrander of all time. From SB to skylake lol


----------



## Rivage (Jun 10, 2015)

*AMD Radeon Fury X 3DMark performance*


----------



## TheGuruStud (Jun 10, 2015)

Rivage said:


> *AMD Radeon Fury X 3DMark performance*



Let the fancy faking of numbers begin!


----------



## Rivage (Jun 10, 2015)

*AMD Radeon Fury X pictured some more - Liquid Cooling, Backplate And Red LED Lit Radeon Logo*


----------



## the54thvoid (Jun 10, 2015)

What's wrong with the infill on the 'R', 'A', 'D' and 'O'?  It's not shiny like the rest of the shroud.

And do I see a dual-BIOS switch for tampering goodness?  Though I've never flashed a Radeon; I'll need help if I get one.

I wonder what the air-cooled one looks like?  A different PCB but a longer cooler shroud?

Not long now......


----------



## Rivage (Jun 10, 2015)

That soft-touch backplate and the plate on top look sexy. But are they in the right places? I mean, is it good for heat dissipation? I doubt it...



the54thvoid said:


> And do i see a double BIOS switch for tampering goodness?


The one in the first pic. I think so...


the54thvoid said:


> Not long now......


Yeah! Just 5 days to go!


----------



## RejZoR (Jun 10, 2015)

the54thvoid said:


> Not in my books.  GPU rebrands are done using the vast majority of the chip architecture and tweaking here or there.  The 390X is a spiritual 'rebrand'. Feel free to take that up with many other tech sites that call it such.
> 
> I suppose it would be normalised if it were called the 295X but that kind of got taken.  I know NV do it too, 770 was a 680 on juice but either way - Fury is AMD's only real 'new' creation.



Then I don't know what books you are reading. Refresh is NOT rebranding. It even goes against the dictionary definition of it.

New architecture = entirely new chip that has very little in common with predecessor
Refresh = better process node, optimizations
Rebrand = Same everything, maybe different clocks and different BIOS with (obviously) different name.


----------



## the54thvoid (Jun 10, 2015)

RejZoR said:


> Then I don't know what books you are reading. Refresh is NOT rebranding. It even goes against the dictionary definition of it.
> 
> New architecture = entirely new chip that has very little in common with predecessor
> Refresh = *better process node*, optimizations
> Rebrand = Same everything, maybe different clocks and different BIOS with (obviously) different name.



Same process.  So is it a half-baked refresh?

In fact, what tweaks has it got?  What GCN version compared with the 290X? Every site I see refers to it as a rebrand.  It's got more memory and it's got higher clocks.  It's a rebrand.

Final Edit from TPU article today:



> The R9 390X, is expected to be a re-brand of the previous generation R9 290X, with its standard memory amount raised to 8 GB. It's based on the 28 nm "Grenada" silicon. We've seen *no evidence* pointing at "Grenada" being some sort of an upgrade of "Hawaii" with newer GCN 1.2 stream processors. Perhaps AMD polished its electricals to the extent it could, without changing the silicon. We'll know for sure only next week.


----------



## RejZoR (Jun 10, 2015)

The R9-290X has NO framebuffer compression; the R9-390X does. The R9-290 uses GCN 1.1; the R9-390X uses (modified) GCN 1.2 cores. Ergo, NOT a rebrand, no matter what you say. And considering the changes within, you can also expect extended DX12 support. There is no way they'll leave the high-end range at only the base DX12 feature level...


----------



## Fluffmeister (Jun 10, 2015)

Source: https://www.reddit.com/r/Amd/comments/399775/new_picture_of_fury_nano_w_backplate_tubing_and/


----------



## R-T-B (Jun 10, 2015)

RejZoR said:


> R9-290X has NO framebuffer compression. R9-390X does. R9-290 uses GCN 1.1, R9-390X uses (modified) GCN 1.2 cores. Ergo, NOT a rebrand no matter what you say. And considering the changes within, you can also expect extended DX12 support. There is no way they'll leave high end range at only base DX12 feature level...



TPU seems to disagree, as noted above.


----------



## erocker (Jun 10, 2015)

R-T-B said:


> TPU seems to disagree, as noted above.


Going off of what? A graphic on a box?


----------



## irani (Jun 10, 2015)

TheGuruStud said:


> Intel is the biggest rebrander of all time. From SB to skylake lol



big like


----------



## RejZoR (Jun 10, 2015)

You can't say it's a rebrand if it's using a different core, jesus christ. So the GTX 970 is just a rebrand of the GTX 750 Ti, you'll say? It isn't. It's a different core. Stop spreading nonsense and learn what a rebrand is and what a refresh is. The R9-390X is a refresh!


----------



## the54thvoid (Jun 10, 2015)

R-T-B said:


> TPU seems to disagree, as noted above.



He knows best.  Everyone else is wrong.  They may be.  Only RejZoR knows.


----------



## erocker (Jun 10, 2015)

the54thvoid said:


> He knows best.  Everyone else is wrong.  They may be.  Only RejZoR knows.


Welcome to the internet, where everyone sees a box and thinks the sky is falling.


----------



## RejZoR (Jun 10, 2015)

the54thvoid said:


> He knows best.  Everyone else is wrong.  They may be.  Only RejZoR knows.



Because I AM correct. It can't possibly be a rebrand by any definition if the chip is nothing like the old one. The HD 7970's transition to the R9-280X is a rebrand: it's exactly the same core; they tweaked the clocks a bit and slammed a new BIOS on it. That is what we consider a rebrand. But when you take a foundation, replace all the shaders with new ones, tweak the clocks, and potentially add instructions for a new version of DX, by what fucked-up logic can you call that a "rebrand"?

It's also not a rebrand of the Tonga core, despite sharing the same GCN version. The R9-285 has fewer shaders, fewer compute units, fewer ROPs and a narrower memory bus. Ergo, not the same again.

By your logic, Fiji is just a rebrand of the ATi Rage from the freaking 90's. Um oh, they both have a GPU on them, so they are the same.


----------



## arbiter (Jun 10, 2015)

RejZoR said:


> You can't say it's a rebrand if it's using a different core, jesus christ. So GTX 970 is just a rebrand of GTX 750Ti you'll say? It isn't. It's a different core. Stop spreading nonsense and learn what is a rebrand and what is a refresh. R9-390X is a refresh!


Kinda hard to call something a rebrand when one has 640 CUDA cores and the other has 1664, so your logic is completely broken. One requirement for a rebrand is that it would need to be THE SAME.


----------



## RejZoR (Jun 10, 2015)

You just proved my fucking point. The 2816 cores found in the R9-290X are not the same as the 2816 cores found in the R9-390X, assuming they have the same number; there are no official numbers yet, mind you. But one thing is known for sure: the 390X will NOT use GCN 1.1. It will have GCN 1.2. That is, without a shadow of a doubt, a solid fact, which by itself makes my point correct.

By your logic, a 400 HP Mitsubishi Lancer Evo is just a rebrand of a 60 HP Mitsubishi Colt. They both have four-cylinder engines, meaning they are EXACTLY the same, aka a rebrand. My sides, my sides are gone...


----------



## R-T-B (Jun 10, 2015)

RejZoR said:


> Because I AM correct. It can't possibly be a rebrand by any definition if the chip is nothing like the old one.



From TPU today: they say they have NO evidence the silicon is modified in the least, at least for the R9 390X. So at least that one is possibly a rebrand.


No, I don't like that idea any more than you, but it's what's being indicated...


> You just proved my fucking point. 2816 cores found in R9-290X are not the same as 2816 cores found in R9-390X. Assuming they will have the same number, there are no official numbers yet mind you. But one thing is known for sure, the 390X will NOT use GCN 1.1. It will have GCN 1.2 That is without a shadow of a doubt a solid fact. Which by itself makes my point correct



Again, this is NOT certain.

@erocker
I was supposed to be quoting RejZoR; my tablet thought otherwise, apparently. It would have made more sense in context.


----------



## RejZoR (Jun 10, 2015)

Sorry, but that would be a galactic-scale fail. Anyone who thinks that way isn't thinking with any logic.

Why would AMD develop the Tonga core, between Tahiti and Hawaii, around GCN 1.2 and then rebrand an OLDER GCN 1.1 core into a newer-generation card? That's like NVIDIA doing the work on Maxwell 1 and then basing the GTX 970 on Kepler instead of Maxwell 2, which is what they actually used. It would be that level of stupid.


----------



## HumanSmoke (Jun 10, 2015)

RejZoR said:


> Sorry, but that would be galactic scale fail. If anyone thinks that way, he's not thinking with any logic.
> 
> Why would AMD develop a Tonga core between Tahiti and Hawaii around GCN 1.2 and then rebrand an OLDER GCN 1.1 core into a newer generation card. That's like NVIDIA doing work on Maxwell 1 and then making GTX 970 based on Kepler instead of Maxwell 2 which is what they did. It would be that kind of level of stupid.


Bookmarked this baby!

Knowing something as fact (GCN 1.2 isn't confirmed for Grenada) and assuming it is so because you can't possibly entertain that it could be otherwise aren't really the same thing at all. Case in point is the first verified leak of AMD's (non-OEM) 300 series: the R7 370 gets a fancy new GPU name (Trinidad), yet still retains a Crossfire finger - so no XDMA - and is likely at best a metal-layer respin of Pitcairn/Curacao... which means that AMD is releasing a high-volume part that doesn't support a number of features AMD itself is pushing to sell the card series - notably TrueAudio, FreeSync, and VSR.


----------



## R-T-B (Jun 10, 2015)

RejZoR said:


> Sorry, but that would be galactic scale fail. If anyone thinks that way, he's not thinking with any logic.
> 
> Why would AMD develop a Tonga core between Tahiti and Hawaii around GCN 1.2 and then rebrand an OLDER GCN 1.1 core into a newer generation card. That's like NVIDIA doing work on Maxwell 1 and then making GTX 970 based on Kepler instead of Maxwell 2 which is what they did. It would be that kind of level of stupid.



I know and there's only one very sad explanation for it in my mind:

AMD is very, very strapped for cash.


----------



## HumanSmoke (Jun 10, 2015)

Rivage said:


> *AMD Radeon Fury X 3DMark performance*


Well, I certainly hope that isn't indicative of the Fury X's actual performance. 2% faster than the 980 Ti with 45% more cores, 52% more bandwidth, 33% more raster ops, 45% more texture address units, and nominal 20% higher board power limit doesn't make for great reading. Still, if it's slugging it out with a $650 card, maybe the pricing will be better than originally mooted.
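The percentages above fall straight out of the spec-sheet ratios. A small Python sketch using the commonly quoted numbers (4096 vs 2816 stream processors, 512 vs 336 GB/s of bandwidth - assumptions for illustration here, not confirmed specs):

```python
# "X% more" = (Fury X spec / 980 Ti spec - 1) * 100.
# Spec values are the commonly quoted ones, assumed for illustration.
SPECS = {
    "stream processors": (4096, 2816),
    "bandwidth (GB/s)": (512, 336),
}

def percent_more(fury_x: float, gtx_980_ti: float) -> float:
    return (fury_x / gtx_980_ti - 1.0) * 100.0

for name, (fury_x, ti) in SPECS.items():
    print(f"{name}: {percent_more(fury_x, ti):.0f}% more")
# → stream processors: 45% more
# → bandwidth (GB/s): 52% more
```

Which is the point of the post: raw unit-count advantages of this size don't automatically translate into a matching fps lead.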


----------



## GreiverBlade (Jun 10, 2015)

aaaaaaaahhh, we don't care (well, at least I don't...) about rebrands... if there is a rebrand, even with some optimization or refinement, it only means they need just one new SKU: the rest still has enough potential to hold the line.

If the price is right and the performance is there... then everything's OK.
I will still wait till a Fiji XT or Pro hits the second-hand market, depending on the price, before deciding on an upgrade... or I'll keep my 290.  BUT first! I'll wait for REVIEWS; enough rumor BS and other joyful things!
Nvidia, AMD: BOTH HAVE INTERESTING OFFERS! (performance and power consumption for the former, bang for buck for the latter)


----------



## MrGenius (Jun 10, 2015)

RejZoR said:


> Refresh is NOT rebranding. It even goes against the dictionary definition of it.
> 
> New architecture = entirely new chip that has very little in common with predecessor
> Refresh = better process node, optimizations
> Rebrand = Same everything, maybe different clocks and different BIOS with (obviously) different name.





RejZoR said:


> HD7970 transition to R9-280X is a rebrand.


Because GCN 1.0 is the same as GCN 1.1...Open GL 4.2 is the same as Open GL 4.3...and DX 11.1 is the same as DX 11.2...right?


----------



## Xzibit (Jun 10, 2015)

Fluffmeister said:


> Source: https://www.reddit.com/r/Amd/comments/399775/new_picture_of_fury_nano_w_backplate_tubing_and/




That doesn't look like a 25/27mm radiator.


----------



## HumanSmoke (Jun 10, 2015)

GreiverBlade said:


> if the price is right and the performance are here ... then : everything's OK


No, everything isn't OK.
AMD have had price and performance on their side for years, and look where it has got the company.
The whole reason AMD are only launching a single new GPU is that they are R&D-constrained, because their account books look like a dumpster fire being doused with raw sewage. Those great prices, and their potential customer base queuing up to buy second-hand cards, pretty much mean that the company can't invest in R&D today, which translates to a reduced amount of product on a slower timetable 2, 3, 4, 5 years in the future.


GreiverBlade said:


> will still wait till a Fiji XT or Pro hit the 2nd hand market, depending the price, before deciding on a upgrade ... or keep my 290


I'm sure AMD's bean counter is VERY happy to hear that.


----------



## the54thvoid (Jun 10, 2015)

BTW @HumanSmoke - I dig your CD tagline.  Was that not from when Intel was working on Larrabee/Knights Corner and he was spouting stuff about it finishing Nvidia etc., or was it on-die gfx talk?  Either way, I recall reading the ramble he was on that day.  I also recall the time he said a viable GF100 core was impossible (with all cores enabled), and then about a month later Nvidia dropped the GTX 580.  T'was funny.
I liked S|A at the start, but it got ugly and the mods were dicks.  If people think TPU has a bias, hell, you need to look around...  Anyway, enough off-topic stuff.




Xzibit said:


> That doesn't look like a 25/27mm radiator.



I'd prefer if they release AIO's with optional components.  I'd rather switch out most of these parts, in other words do it custom.


----------



## Xzibit (Jun 10, 2015)

the54thvoid said:


> I'd prefer if they release AIO's with optional components.  I'd rather switch out most of these parts, in other words do it custom.



Just curious... whether they were going to use something smaller than the 295X2's 38mm radiator, since it's a single GPU.

CoolIT is rumored to be making it, so it might end up being something similar to the *Corsair H80i* underneath the shroud.


----------



## HumanSmoke (Jun 10, 2015)

the54thvoid said:


> BTW @HumanSmoke - I dig your CD tagline.  Was that not from when Intel was working on Larabee/Knights Corner and he was spouting stuff about it finishing Nvidia etc or was it on die gfx talk?  Either way I do recall reading the ramble he was on that day.  I also recall the time he said a viable GF100 core was impossible (with all cores enabled) and then about a month later Nvidia dropped the GTX 580.  T'was funny.
> I liked S|A at the start but it got ugly and the mods were dicks.  If people think TPU has a bias, hell - need to look around...  Anyway, enough off topic stuff.


Back when Fermi was undergoing its troubled gestation - the thread title is in the top right-hand corner. That's Charlie and noted AMD shill Spigzone.


Spoiler
I and five other friends who learned computer tech at school back in the '70s shared a user account at S|A when it first started up. It used to drive some people nuts, as the posting viewpoints came from all over the show. One of the guys was in Taiwan doing chip layout, one in Hong Kong working in the OEM supply chain, and the others were spread far and wide - and all rather critical of the industry as a whole.


The mods there don't suffer anyone who steps over the party line. The major "no-nos" are questioning Charlie's facts, criticizing Intel content written by Charlie (who is *extremely* Intel friendly), and pointing out deficiencies in AMD's management/strategy. Anyhow, back on topic...


the54thvoid said:


> I'd prefer if they release AIO's with optional components.  I'd rather switch out most of these parts, in other words do it custom.


Seconded. Even adding some quick disconnects and an options list would give the products wider appeal, although I guess the whole reason behind AIO's is that they cater to people who don't want to be bothered with mix-and-match - or are too intimidated by bespoke watercooling.
On a related note, that fan blade profile looks somewhat like a Scythe Gentle Typhoon.


----------



## GreiverBlade (Jun 10, 2015)

HumanSmoke said:


> No, everything isn't OK.
> AMD have had price and performance on their side for years, and look where it has got the company.
> The whole reason AMD are only launching a single new GPU is that they are R&D constrained, because their account books look like a dumpster fire being doused with raw sewage. Those great prices, and their potential customer base queuing up to buy second-hand cards, pretty much mean that the company can't invest in R&D today, which translates to a reduced amount of product on a slower timetable 2, 3, 4, 5 years in the future.
> 
> I'm sure AMD's bean counter is VERY happy to hear that.


well, you got me ... i will buy a day-one Fury X  

altho i can't think about anything other than: AMD is doing the job of Intel and Nvidia together, so no wonder they are more limited and only "middle of the road" on performance compared to the two others, who each focus on only one sector ... but... ah whatever ... 

as long as the reviews are not up, this thread has no more meaning for me

cheers


----------



## R-T-B (Jun 10, 2015)

GreiverBlade said:


> well you got me ... i will buy a day one Fury X
> 
> altho i can't think about anything else than : AMD is doing the job of intel and nvidia together so, no wonder they are more limited and only "middle of the road perf" than the 2 other who focus on only 1 sector ... but... ah whatever ...
> 
> ...



Yeah, and AMD does it with like a small slice of the capital of either of those groups.  Amazing they are relevant at all, really...


----------



## TheGuruStud (Jun 11, 2015)

R-T-B said:


> Yeah, and AMD does it with like a small slice of the capital of either of those groups.  Amazing they are relevant at all, really...



It's amazing what can happen when you care about competing and aren't committing crimes just b/c you're afraid to compete *looks at inhell*.


----------



## HumanSmoke (Jun 11, 2015)

TheGuruStud said:


> It's amazing what can happen when you care about competing and aren't committing crimes just b/c you're afraid to compete *looks at inhell*.


Just before you petition to have AMD's BoD canonized, you might want to check AMD's record. AMD aren't averse to bending (false claims) and breaking the law (IP theft, price fixing); it's just that their execution is less than stellar and generally isn't worth the effort.
Sure, they aren't a patch on Intel, but neither are they spotless.


----------



## arbiter (Jun 11, 2015)

HumanSmoke said:


> Bookmarked this baby!
> 
> Knowing something as fact (GCN 1.2 isn't confirmed for Grenada), and assuming that it so because you cant possibly entertain the fact that it could be otherwise aren't really the same thing at all. Case in point is the first verified leak of AMD's (non-OEM) 300 series. The R7 370 gets a fancy new GPU name (Trinidad) , yet still retains a Crossfire finger - so no XDMA, and likely is at best a metal layer respin of Pitcairn/Curacao.....which means that AMD is releasing a high-volume part that doesn't have support for a number of features AMD itself are pushing to sell the card series - notably TrueAudio, FreeSync, and VSR.



Easy to say their GPU is faster than a GPU that is, what, almost a year and a half old already? 1.5 years, and it only manages 20% more. Figure most 750 Tis can overclock a good 40+%.



HumanSmoke said:


> Well, I certainly hope that isn't indicative of the Fury X's actual performance. 2% faster than the 980 Ti with 45% more cores, 52% more bandwidth, 33% more raster ops, 45% more texture address units, and nominal 20% higher board power limit doesn't make for great reading. Still, if it's slugging it out with a $650 card, maybe the pricing will be better than originally mooted.


Well, if you look at the 980 vs the 290X: the 290X had 40% more shaders and 50% more memory bandwidth, and look which of the two cards was faster in most testing. So really, when you think about it, it's quite possible that's all it is here too.

Fiji has almost 2x the memory bandwidth of the 980 Ti. You kinda wonder where the limit is, beyond which more bandwidth doesn't help the GPU. For an Intel CPU, for example, going from 1333 to 1600 MHz RAM gives an OK jump in performance, but from 1600 to 2400 there is a very slim jump in most uses, except in applications that actually need it.

Nvidia and even Intel both prove it's not how many cores you have, it's how you use them.
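For what it's worth, the unit-count gaps being quoted in this thread are easy to sanity-check. A minimal Python sketch, assuming the leaked Fiji figures (4096 shaders, 512 GB/s, 128 ROPs, 256 TMUs - rumors at this point, not confirmed specs) against the 980 Ti's published numbers:

```python
# Sanity check of the "X% more" figures quoted in this thread.
# Fiji numbers are the pre-launch leaks (assumptions, not confirmed specs);
# GTX 980 Ti numbers are Nvidia's published reference specs.
specs = {
    # metric: (leaked Fiji, GTX 980 Ti)
    "shaders":        (4096, 2816),
    "bandwidth GB/s": (512.0, 336.5),
    "ROPs":           (128, 96),     # leaked Fiji ROP count
    "texture units":  (256, 176),
}

for name, (fiji, ti) in specs.items():
    extra = (fiji / ti - 1) * 100  # percent advantage for Fiji
    print(f"{name}: Fiji has {extra:.0f}% more")
```

Raw unit counts only, of course - which is exactly the point being argued: they say nothing about how well those units are fed.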


----------



## RejZoR (Jun 11, 2015)

MrGenius said:


> Because GCN 1.0 is the same as GCN 1.1...Open GL 4.2 is the same as Open GL 4.3...and DX 11.1 is the same as DX 11.2...right?



Please do us all a favor and stop posting. GCN 1.1 is NOT the same as GCN 1.0. Chips using GCN 1.1 gain TrueAudio, a new iteration of the PowerTune logic, and enhanced stream processor instructions. I'm too lazy at the moment to cite the full differences from the ISA spec sheet.

All this is the reason why the HD7790 performs almost as fast as the HD7850 despite these MASSIVE differences...

*HD7850*
Shaders: 1024 / TMU: 64 / ROP: 32 / Bus: 256-bit / MEM: 153 GB/s

*HD7790*
Shaders: 896 / TMU: 56 / ROP: 16 / Bus: 128-bit / MEM: 96 GB/s

Granted, the Bonaire XT used on the HD7790 has higher GPU and memory clocks, but there is no way you could make up such a difference on such a "crippled" core just by increasing clocks. Especially when we throw in the fact that the HD7790 has a TDP of only 85W while the HD7850 has a TDP of 150W...

Bonaire was to Hawaii what Tonga is now to Grenada. Can't describe it any better, really.

They based Hawaii (R9-290 series) upon Bonaire's enhancements, that's why it performs so much better than HD7900/R9-280 series.

Same will happen now. Grenada (R9-390 series) will be based upon Tonga's enhancements, that's why it will perform so much better than R9-290 series.

It's not rocket science...


----------



## arbiter (Jun 11, 2015)

RejZoR said:


> Same will happen now. Grenada (R9-390 series) will be based upon Tonga's enhancements, that's why it will perform so much better than R9-290 series.


It should only be around 10-15%, that's about it. Even that, looking at benchmarks, seems like it could be a stretch, as the R9 285 still seems to be slower than the 280X was. Compared to the R9 280 it replaced, about the only boost the 285 got was around 5% or so; going by that, it likely won't be much. Being new, the 390 might not have the crippled memory, so 15% seems like it could be around there.


----------



## RejZoR (Jun 11, 2015)

You're again comparing apples with sandwiches.

Yes, it just barely catches the R9-280. But you're also forgetting that the R9-285 had a 256-bit bus with 176 GB/s of bandwidth, while the R9-280 had a 384-bit bus with 288 GB/s (raw bandwidth, BUS*FREQ). I wouldn't call that a tiny difference.
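The BUS*FREQ arithmetic is easy to reproduce. A minimal sketch, assuming GDDR5 effective memory clocks of 5.5 GT/s and 6 GT/s - the values that yield the quoted 176 and 288 GB/s figures; actual boards vary:

```python
# Raw memory bandwidth = (bus width in bytes) * effective memory clock,
# where the effective clock is in GT/s (giga-transfers per second).
def raw_bandwidth_gbps(bus_width_bits: int, effective_clock_gtps: float) -> float:
    return bus_width_bits / 8 * effective_clock_gtps

print(raw_bandwidth_gbps(256, 5.5))  # 256-bit at 5.5 GT/s: 176.0 GB/s
print(raw_bandwidth_gbps(384, 6.0))  # 384-bit at 6.0 GT/s: 288.0 GB/s
```

So the quoted figures fall straight out of bus width times effective clock.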

I don't get why you're questioning this business practice when we've already seen the same process play out. The R9-290 exists because of the HD7790 released after the HD7900. The R9-390 will exist because of the R9-285 released after the R9-290.

I just don't get which part of this you don't understand.


----------



## R-T-B (Jun 11, 2015)

RejZor, I want to believe you're right.  But even you have to admit you can't know until the product comes out.


----------



## RejZoR (Jun 11, 2015)

Sure, but why would AMD bother investing tech into Tonga if they'll just release another R9-290X renamed to R9-390X? It makes zero sense. Sure, they may actually rebrand the R9-290X into the R9-380X - I can see that happening and it would actually kinda make sense - but doing that for the high-end range (the R9-390X is high end, not enthusiast; that's reserved for Fiji) would be absolutely moronic, which is why I don't expect AMD to even do that. Remember, Fiji is Titan-grade stuff; you can't expect AMD to stick GCN 1.2 only in that. That would be just dumb. And I'm assuming they aren't THAT dumb. If they are, then god bless them; I'll buy a GTX 980 then, or not buy anything at all. It's not like I'm in any hurry with my overclocked HD7950...


----------



## GreiverBlade (Jun 11, 2015)

Well, the new hierarchy is: the 280 line is now the 390 line, and the 290 line is the Fury line ... they just added a step after the X90 line.


----------



## HumanSmoke (Jun 11, 2015)

P4-630 said:


> Radeon Fury 3d Mark


You're a bit late. This was posted two pages ago (post #91)


----------



## P4-630 (Jun 11, 2015)

HumanSmoke said:


> You're a bit late. This was posted two pages ago (post #91)



Ok thanks, I haven't read all that....


----------



## MrGenius (Jun 11, 2015)

RejZoR said:


> Please do us all a favor and stop posting.


Give me a proper counter-point...and I'll think about it.

It was a rhetorical question anyway. But you went and answered it...like _I'm_ the idiot who doesn't know the difference.

Let's try it again then...

You gave us your definitions of what's a what. Then contradicted yourself by saying a 280X is a _rebrand_ of a HD 7970. When, *by your own definition*, it would actually be a _refresh_.

So which is it? Got an answer to explain yourself? Or are you just going to tell me to get lost again?


----------



## RejZoR (Jun 11, 2015)

I can do this all day... HD7970 is in fact R9-280X. Check your facts.

Same number of shaders (even the exact same version, GCN 1.0), texture units and ROPs. They also have identical bus width, memory capacity and memory bandwidth. The HD7970 has a Tahiti XT core and the R9-280X has a Tahiti XTL core (notice the immense similarity in name), which are in a nutshell identical. There are differences in clocks, but changed clocks are not a refresh by any definition.

And if the transition from R9-290X to R9-390X is a "rebrand" to you, then what is the transition from HD7970 to R9-280X? It can't be a rebrand again, because then you'd just be being dishonest.

HD7970 -> NEW CHIP
HD7970 to R9-280X -> REBRAND
R9-290X to R9-390X -> REFRESH

We could debate "NEW CHIP", because new chips are often just refreshes of something made in the past, but when something is so drastically different it can be called a "NEW CHIP". After all, that was the point where AMD switched from VLIW to RISC (GCN), so I think it deserves it...


----------



## eidairaman1 (Jun 11, 2015)

Cartel said:


> I can still use my CRT on the 980ti though...



I'm using a VGA LCD from 2001 on my R9 290.


----------



## the54thvoid (Jun 11, 2015)

RejZoR said:


> I can do this all day...



No.. please spare us...

But really - can you clarify why the 390X is a refresh of the 290X?  Do you know what GCN the 390X has?  And FTR, if that is implemented in firmware, not hardware, it's a rebrand.  The hardware MUST change to qualify as a refresh.  What have AMD changed in the 290X Hawaii chip?  Or, if it is a Tonga refresh - how is Tonga any different from Hawaii?  I'm simply going pretty much on all the tech sites saying it is a rebrand.  I believe them more than you - no offence.


----------



## erocker (Jun 11, 2015)

Guys... RELAX! AMD has their TOP MEN working on it right now! TOP MEN!


----------



## Steevo (Jun 11, 2015)

Hopefully not boiling their Top RAMEN with the new card.


----------



## Tatty_One (Jun 11, 2015)

erocker said:


> Guys... RELAX! AMD has their TOP MEN working on it right now! TOP MEN!



I'll relax providing it's not the same "Top Men" who brought out the FX 9590


----------



## the54thvoid (Jun 11, 2015)

Tatty_One said:


> I'll relax providing it's not the same "Top Men" who brought out the FX 9590



I'm just hoping it's not Peter Griffin.


----------



## RejZoR (Jun 11, 2015)

the54thvoid said:


> No.. please spare us...
> 
> But really - can you clarify why the 390X is a refresh of the 290X?  Do you know what GCN the 390X has?  And FTR, if that is implemented in firmware, not hardware, it's a rebrand.  The hardware MUST change to qualify as a refresh.  What have AMD changed in the 290X Hawaii chip?  Or, if it is a Tonga refresh - how is Tonga any different from Hawaii?  I'm simply going pretty much on all the tech sites saying it is a rebrand.  I believe them more than you - no offence.



Have I been speaking Chinese for the last few posts, by any chance? Hell, even the dictionary says a "rebrand" is just a change of the company or product image. Not content - image, aka appearance. The HD7970 to R9-280X transition was exactly that: same core, slightly updated software (BIOS) and new stickers. A rebrand. The R9-290X to R9-390X keeps a similar layout to the old GPU, but features completely new shaders which are way more efficient than the old ones, a way more efficient tessellation unit, and framebuffer compression. A refresh. How freaking much clearer can I be?


----------



## Luka KLLP (Jun 11, 2015)

You can't blame RejZoR for repeating his story if people keep arguing, even though he's actually consistent in what he says, and by his definitions the 390X IS in fact a refresh of the 290X.
I would also want to have the last word if I knew I was right.

Edit: provided, of course, that what he says about the hardware of the 390X is really true, which we can't know for sure.


----------



## eidairaman1 (Jun 11, 2015)

The designs have been refined, unlike the transition from the HD 7900 series to the R9 280.


----------



## RejZoR (Jun 11, 2015)

Someone has been paying attention to my posts...


----------



## eidairaman1 (Jun 11, 2015)

RejZoR said:


> Someone has been paying attention to my posts...



All the naysayers always are. Honestly, it doesn't matter how well the 390X and Fury run, as I have a 290 in my rig on an old, old monitor anyway. (The thing is bulky as hell though, with the HSF making the card take up three slots.)


----------



## GhostRyder (Jun 11, 2015)

the54thvoid said:


> No.. please spare us...
> 
> But really - can you clarify why the 390X is a refresh of the 290X?  Do you know what GCN the 390X has?  And FTR, if that is implemented in firmware, not hardware, it's a rebrand.  The hardware MUST change to qualify as a refresh.  What have AMD changed in the 290X Hawaii chip?  Or, if it is a Tonga refresh - how is Tonga any different from Hawaii?  I'm simply going pretty much on all the tech sites saying it is a rebrand.  I believe them more than you - no offence.


Well, it's all going to come down to the release, as it would seem there is very little proof one way or the other.  Some of the reasons it's considered a refresh and not a rebrand: there was a picture (I am currently looking for it but lost it somehow) from AMD claiming the entire series (R7 370 and up) would support all the new GCN features, including those supported by the R9 285, which is currently the latest chip on the market; the fact the 285 exists would make it seem like wasted development effort if they just ignored its new changes for the older chips (especially considering their push for things like FreeSync); and the fact that the chip designations are different (I'll double-check the last one, but I know Hawaii is now called Grenada) at least hints at something being done.  I am still putting this up as speculation, but it would be a big mistake to release this series' top-dog chips without support for all the new features they are trying to push (even simple things like color compression, the improvements in the architecture - like tessellation, which is supposed to be improved - and FreeSync), as those are free improvements that can help push them over the edge in some areas.

With their R&D stretched so thin, any improvement they can bring is a win and can help keep them relevant in the GPU market.  That is my outlook, based on what I have seen and heard.  I could always be wrong, but I feel it would be a big slap in the face to miss the opportunity, and a waste of time to have just Tonga and Fiji support the new features (top end and lower-middle support new features only?).


----------



## the54thvoid (Jun 11, 2015)

RejZoR said:


> The R9-290X to R9-390X keeps a similar layout to the old GPU, but features completely new shaders which are way more efficient than the old ones, a way more efficient tessellation unit, and framebuffer compression. A refresh. How freaking much clearer can I be?



Where do you have this info from?  That is all I am asking.  How do you know the architecture of the 390X?  Regardless, it is still re-using the Hawaii architecture.  A tweak here or there may be classed as a refresh, but the refinements simply mask a re-used chip.  I'm not saying it's bad.  Far from it.  It's an effective use of already-proven hardware.

What I think we would all have preferred is a cascade downwards of Fiji parts to cover the 290X position.

And stop thinking I'm ignoring your posts and their content - I simply ask: how do you know the architecture of the 390X when there is no publicly available info, other than the leaked specs on core, memory etc.?

That's all.  Let's hug and be Euro friends.  You're Slovenian, right?  Well, you know of the band Laibach?  I love 'em.  I have a Spectre belt - I saw them earlier this year.  They're from Yugoslavia originally, but they are a Slovenian band as such now.  See, we're closer than you think.


----------



## RejZoR (Jun 11, 2015)

Luka KLLP said:


> You can't blame RejZoR for repeating his story if people keep arguing even though he's actually consistent in what he says and by his definitions the 390x IS in fact a refresh of the 290x
> I would also want to have the last word if I knew I was right.
> 
> Edit: provided of course that what he says about the hardware of the 390x is really true, which we can't know for sure



Well, all you need to do is use logic. You can't reuse an old card as a rebrand of a new card and keep it within the same range. That would be business suicide. But dropping it one level down, I can see as an option. The HD7970's transition to the R9-280X did just that. One year the HD7970 was top end; the next year the R9-280X dropped one notch down and the high end got replaced by the R9-290X. It's a usual business practice done by both AMD and NVIDIA. The same applies to refreshes. They've been doing them for years and years in exactly the same way. So, if you've been following the graphics card industry for 15+ years like I have, you kinda learn their ways...


----------



## Luka KLLP (Jun 11, 2015)

RejZoR said:


> Well, all you need to do is use logic. You can't reuse old card for the new card rebrand and have it within same range. That would be a business suicide. But dropping it one level down, I can see it as an option. HD7970 transition to R9-280X did just that. Last year, HD7970 was top end, next year, R9-280X dropped one notch down and high end got replaced by R9-290X. It's a usual business practice done by both AMD and NVIDIA. Same applies to refreshes. They are doing them for years and years in exactly the same way. So, if you've been following graphic cards industry for 15+ years like I have, you kinda learn their ways...


Yeah, that makes a lot of sense. We'll see soon if you're right.


----------



## GhostRyder (Jun 11, 2015)

the54thvoid said:


> Where do you have this info from?  That is all I am asking.  How do you know the architecture of the 390X.  Regardless of this, it is still re-using the Hawaii architecture.  A tweak here or there may be classed as a refresh but the refinements simply mask a re-used chip.  I'm not saying it's bad.  Far from it.  It's an effective use of already proven hardware.
> 
> What I think we would all have preferred is a cascade downwards of Fiji parts to cover the 290X position.
> 
> ...


 


RejZoR said:


> Well, all you need to do is use logic. You can't reuse old card for the new card rebrand and have it within same range. That would be a business suicide. But dropping it one level down, I can see it as an option. HD7970 transition to R9-280X did just that. Last year, HD7970 was top end, next year, R9-280X dropped one notch down and high end got replaced by R9-290X. It's a usual business practice done by both AMD and NVIDIA. Same applies to refreshes. They are doing them for years and years in exactly the same way. So, if you've been following graphic cards industry for 15+ years like I have, you kinda learn their ways...








Here is what I was referring to regarding the updates in architecture.

http://videocardz.com/56182/amd-off...t-for-radeon-r9-300-and-r7-300-graphics-cards

This is the site it's from as well. While it is not definitive, it does push a little towards the side of "refresh" over "rebrand".


----------



## the54thvoid (Jun 11, 2015)

RejZoR said:


> Well, all you need to do is use logic. You can't reuse old card for the new card rebrand and have it within same range. That would be a business suicide. But dropping it one level down, I can see it as an option. HD7970 transition to R9-280X did just that. Last year, HD7970 was top end, next year, R9-280X dropped one notch down and high end got replaced by R9-290X. It's a usual business practice done by both AMD and NVIDIA. Same applies to refreshes. They are doing them for years and years in exactly the same way. So, if you've been following graphic cards industry for 15+ years like I have, you kinda learn their ways...



But what about Laibach?












GhostRyder said:


> Here is what I was referring to for saying the updates in architecture.
> 
> http://videocardz.com/56182/amd-off...t-for-radeon-r9-300-and-r7-300-graphics-cards
> 
> This is the site its from as well, however this is not definitive but it does push a little towards the side of "refresh" over "rebrand".




Yeah, but that says the 290 has VSR as well.  If the 290 has it, that nullifies it as a 390 improvement.


----------



## RejZoR (Jun 11, 2015)

I never said VSR is an R9-390 exclusive feature. I'm aware that R9-290 users already got it - as part of, I think, the Omega drivers? Or was it before that already? But since it's found on the R9-290 already, R9-380 users will also get it for sure, because that one will most certainly be a rebrand. Unless they surprise us.


----------



## GhostRyder (Jun 11, 2015)

the54thvoid said:


> But what about Laibach?
> 
> 
> 
> ...


Well, my argument behind that is: why improve the lower segment, make a new top end, and then ignore the second-highest tier?  But yes, I agree it offers no definitive proof; it just (to me) pushes it a bit more towards the other side.


----------



## RejZoR (Jun 12, 2015)

Also, VSR could technically be done on ANY graphics card, even on the Radeon 8500 from over a decade ago. But no one thought of it back then - besides, why would you do that and spit in your own bowl? It's in every business's nature to give something exclusive only to the owners of higher-end products, to encourage others to buy those upmarket products. Here and there you throw in a treat for mid-range users just to keep people quiet, but in general you always focus on the higher end. It's marketing 101.

Same reason why R9 users got VSR but HD7900 users didn't, and I'm quite certain it could be done just the same without any issues, since it's mostly a software thing. But that's just how it is...


----------



## R-T-B (Jun 12, 2015)

erocker said:


> Guys... RELAX! AMD has their TOP MEN working on it right now! TOP MEN!



I somehow read this as "TEN MEN" instead of "TOP MEN."

Sadly, that may be more true than I want to believe...




RejZoR said:


> Also, VSR could technically be done on ANY graphic card, even on Radeon 8500 from 15 years ago.



True, but AMD's official reason for limiting it is that the VSR driver implementation uses an "off-die scaler chip" found only on the R9/R7 series.


----------



## evanwins (Jun 12, 2015)

RejZoR said:


> Well, all you need to do is use logic. You can't reuse old card for the new card rebrand and have it within same range. That would be a business suicide. But dropping it one level down, I can see it as an option. HD7970 transition to R9-280X did just that. Last year, HD7970 was top end, next year, R9-280X dropped one notch down and high end got replaced by R9-290X. It's a usual business practice done by both AMD and NVIDIA. Same applies to refreshes. They are doing them for years and years in exactly the same way. So, if you've been following graphic cards industry for 15+ years like I have, you kinda learn their ways...



Except Nvidia is currently shipping entirely better chips, basing every card in the 9xx series on Maxwell in the transition.

Not just using the new stuff in the top two or three cards and rebranding Kepler in the lower-end 9xx series cards.

I like both companies, but it's clear AMD hasn't been bringing the same level of innovation to the table across a wide range. Most people don't buy top-tier GPUs, so Nvidia has gained a significant edge by bringing a lot more innovation to the low and mid-range lineup, whereas AMD has been moving at a much slower pace, only upgrading top-tier cards and sending last year's big dog down the line one generation at a time.

All of this is speculation, but AMD has proven to be inferior for years; they push prices down because they have to, because reviews are bad, and they push the cards to unreasonable TDPs to try and keep up. Reviews speak for themselves - there sure are a lot of artifact and failure complaints about the AMD lineup over the last couple of years. Would it be cool if they actually brought new stuff to the table this year instead of two new GPUs and a bunch of 'refresh' or 'overclocked' garbage? Yeah, it would be, but until they do I'm planning on getting a 980 Ti - one of the several cards (not two) that have actually changed in the transition from Kepler to Maxwell.

Because I'm a fanboy? No, quite the opposite: because hardware stability is important for the small business we're running. But also because, in general, I want to invest a little more money in something that will retain some level of worth and not drop drastically in price because it was originally sold with nothing but a bunch of advertising propaganda and fanbase fluff.


----------



## R-T-B (Jun 12, 2015)

> Except Nvidia is currently shipping entirely better chips, basing every card in the 9xx series on Maxwell in the transition.



They also are much larger than AMD at the moment and can afford to do so.  This hasn't always been the case.



> AMD has proven to be inferior for years; they push prices down because they have to, because reviews are bad, and they push the cards to unreasonable TDPs to try and keep up



Fermi?



> Reviews speak for themselves - there sure are a lot of artifact and failure complaints about the AMD lineup over the last couple of years.



If anything, this is due to these being the go-to cards for mining cryptocoins (which they were better at, FWIW) and a ton of them being dumped on the used market in DAMAGED states when that died off.


----------



## evanwins (Jun 12, 2015)

R-T-B said:


> They also are much larger than AMD at the moment and can afford to do so.  This hasn't always been the case.
> 
> 
> 
> ...



Yeah, and Nvidia is larger for the same reason Intel is larger and more successful: AMD is either spread thin across two markets, or there have been some poor business-model decisions along the way. It hasn't always been the case, but in the last several years it has been a growing issue. I think AMD would benefit from a similar model: find a way to release new chips and cut them down (as much as people complain about this) to maintain a set of tiers like Nvidia does. It's more exciting and would better compete with what Nvidia is doing right now.

What about Fermi? Fermi is five years old at this point, and that's a long time in technology terms; it's not like AMD had any ground-breaking change Nvidia didn't during that time. Fermi does display the changes Nvidia has brought to the table pretty well in terms of TDP, though.

And when talking about reviews, I'm not talking about used cards, which is another issue entirely - the only reason those cards were ever going up in price was something they weren't originally intended for, which became a silly fad for a period of time thanks to media influence.

I'm talking about new reviews of off-the-shelf, unopened cards. The failure rate is pretty high; let's not kid ourselves, it's not only second-hand cards people have had issues with. Look at reviews of the 280X - it's sad.


----------



## OneMoar (Jun 12, 2015)

In other news rumor rumored to be a rumor ...


----------



## the54thvoid (Jun 12, 2015)

OneMoar said:


> In other news rumor rumored to be a rumor ...



Source please.


----------



## OneMoar (Jun 12, 2015)

the54thvoid said:


> Source please.


cite: http://www.rumor.com/


----------



## Luka KLLP (Jun 12, 2015)

OneMoar said:


> cite: http://www.rumor.com/


But... is it under Rumors or Myths?


----------



## OneMoar (Jun 12, 2015)

Luka KLLP said:


> But... is it under Rumors or Myths?


nobody cares 
can we lock this failthread now


----------



## 64K (Jun 12, 2015)

FailThread has failed.


----------



## HumanSmoke (Jun 12, 2015)

RejZoR said:


> Have I been speaking Chinese for the last few posts, by any chance? Hell, even the dictionary says a "rebrand" is just a change of the company or product image. Not content - image, aka appearance. The HD7970 to R9-280X transition was exactly that: same core, slightly updated software (BIOS) and new stickers. A rebrand. The R9-290X to R9-390X keeps a similar layout to the old GPU, but features completely new shaders which are way more efficient than the old ones, a way more efficient tessellation unit, and framebuffer compression. A refresh. How freaking much clearer can I be?


Well. so far it isn't looking good.


----------



## RejZoR (Jun 12, 2015)

So, my predictions were indeed correct. Same shader, TMU and ROP count; they just upgraded GCN to 1.2, gave the card that sweet framebuffer compression and a better tessellation unit, and slammed an extra 4GB of VRAM on it. Considering the old R9-290X was still jumping up in the GTX 980/970's face even as it was, this seems a logical move (from AMD's perspective, that is).


----------



## arbiter (Jun 12, 2015)

HumanSmoke said:


> Well. so far it isn't looking good.


MSI claims 208 watts for the card; I highly doubt it will be that low. Yeah, GCN 1.2 is more efficient, but not that much.


----------



## HumanSmoke (Jun 12, 2015)

arbiter said:


> MSI claims 208 watts on the card, i highly doubt it will be that low.


That's MSI's estimate/marketing. There's probably some fine print somewhere about the wattage being measured on a DX9 title. MSI's 290X lists the exact same 208W power usage figure.


----------



## R-T-B (Jun 13, 2015)

evanwins said:


> What about Fermi? Fermi is five years old at this point, and that's a long time in technology terms; it's not like AMD had any ground-breaking change Nvidia didn't during that time. Fermi does display the changes Nvidia has brought to the table pretty well in terms of TDP, though.



AMD had a much lower TDP during the Fermi era.  Since you said "for years" it IS relevant.



> And when taking about reviews I'm not talking about used cards which is another issue entirely in which the only reason the cards were ever going up in price was based on something they weren't originally intended for that became a silly fad for a period of time thanks to media influence.
> 
> I'm talking about new reviews of off the shelf un-opened cards, the failure rate is pretty high let's not kid ourselves it's not only second hand people have had issues with, look at reviews of the 280x it's sad.



I've yet to see any unbiased review group mention this "flickering" issue on either card.  Also, failure rate is by nature not going to be noticed in a review of a new product.  Don't kid yourself, that's the used cards at play.



RejZoR said:


> So, my predictions were indeed correct. Same shader, TMU and ROP count, they just upgraded GCN to 1.2, gave the card that sweet framebuffer compression and better tessellation unit and slammed extra 4GB of VRAM on it. Considering old R9-290X was still jumping up in the GTX 980/970 face even as it was, this seems a logical move (from AMD's perspective that is).



Source for this?

If so, good for everyone.


----------



## Super XP (Jun 13, 2015)

There's no proof that the Radeon Fury will be slower than the GTX 980 Ti.
Have you guys/gals seen the specs of the Fury? They are insane. Couple that with awesome drivers and it should be the fastest card out.

This time round, and going forward, we should never see re-brands again. AMD clearly stated this in a previous press release. Just don't remember which one. New tech all the way.


----------



## HumanSmoke (Jun 13, 2015)

Super XP said:


> *There's no proof that the Radeon Fury will be slower than the GTX 980 Ti.*
> Have you guys/gals seen the specs of the Fury? They are insane. Couple that with awesome drivers and it should be the fastest card out.


There is also no proof that the reverse is true...so you're refuting an argument because of no proof with an argument that also doesn't contain proof. Did someone ask you to show a written example of irony?


Super XP said:


> This time round, and going forward we should never see Re-Brands again.


Yet the R7 370 is almost certainly a rebrand of the R9 270 (which is in turn a rebranded HD 8860/8870 OEM, which is in turn a rebranded HD 7870 GHz Edition) - note the Crossfire finger in the slide.


Super XP said:


> AMD clearly stated this in a previous press release. Just don't remember which one.


In all honesty, I doubt AMD do either.


Super XP said:


> New tech all the way.


----------



## arbiter (Jun 13, 2015)

Super XP said:


> There's no proof that the Radeon Fury will be slower than the GTX 980 Ti.
> Have you guys/gals seen the specs of the Fury? They are insane. Couple that with awesome drivers and it should be the fastest card out.
> 
> This time round, and going forward, we should never see re-brands again. AMD clearly stated this in a previous press release. Just don't remember which one. New tech all the way.





HumanSmoke said:


> There is also no proof that the reverse is true...so you're refuting an argument because of no proof with an argument that also doesn't contain proof. Did someone ask you to show a written example of irony?


I was about to drop that same bombshell on him but you got there first.

Thing is, we can take the history of card performance and show what the expected performance of a card is based on AMD's track record. History says the card is not really gonna be faster, that it will be around the same as the 980 Ti or Titan X. Whether that is a good or bad thing will come down to what price tag AMD has to slap on the card. If it costs more for only a few % then well ......

As for AMD saying no future re-brands, I hope you seriously don't believe that? Given AMD's financial state there will likely be more in the future even if they claim otherwise.



arbiter said:


> MSI claims 208 watts on the card; I highly doubt it will be that low. Yeah, GCN 1.2 is more efficient, but not that much.



If you use the 285 as a baseline for Fiji, since it's GCN 1.2 as well with its 1792 cores, you can come up with a possible watt draw, which is in the near-400-watt range since Fiji has over double the cores.
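That extrapolation can be sketched in a few lines. This is a minimal sketch, naively assuming power scales linearly with stream-processor count, which it doesn't in practice (voltage, clocks, and the move to HBM all change the picture); it only reproduces the rough "near 400 W" figure from the post.

```python
# Naive linear power scaling from Tonga (R9 285) to Fiji.
# 190 W is the commonly quoted R9 285 board power; core counts
# are from the post above. This is illustrative arithmetic only.
TONGA_CORES = 1792
TONGA_TDP_W = 190
FIJI_CORES = 4096

estimated_tdp = TONGA_TDP_W * (FIJI_CORES / TONGA_CORES)
print(f"Naive Fiji estimate: {estimated_tdp:.0f} W")  # ~434 W
```

Linear scaling is the worst case; a more efficient process binning or lower voltage at the same clock would pull the real figure well below this.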


----------



## MrGenius (Jun 13, 2015)

RejZoR said:


> I can do this all day... HD7970 is in fact R9-280X. Check your facts.
> 
> Same number of shaders (even the exact same version of it, GCN 1.0), texture units and ROP's.


It depends on where you look for said "facts", as I've come to find out. There are just about as many places stating that a 280X is GCN 1.1 as there are stating that it's GCN 1.0. Which _probably_ has something to do with Tahiti XT2 vs. Tahiti XTL vs. Tahiti B0 XTL (which are all found on the 280X...supposedly). What's the difference there? Again, rhetorical question. Don't waste your breath. You're not any more likely to know the real truth than I am (or anybody else is, apparently). Thanks for suggesting the can of worms dude! Had no idea what I was opening there...

Moral of the story: I don't know...you don't know...nobody knows...THE END.


----------



## OneMoar (Jun 13, 2015)

@Tatty_One 
I'll give you an eCookie if you lock this thread


----------



## GreiverBlade (Jun 13, 2015)

OneMoar said:


> @Tatty_One
> I'll give you an eCookie if you lock this thread


make it two!


----------



## R-T-B (Jun 13, 2015)

MrGenius said:


> Moral of the story: I don't know...you don't know...nobody knows...THE END.



Actually, R9 280X is GCN 1.0.  Even AMD admits this, and you could write a program to test for it if you really were paranoid and wanted to confirm it.


----------



## trodas (Jun 13, 2015)

Now there is some new information about the upcoming Radeons (D-day for them, June 16, is coming  ) that kinda supports the idea that the fastest Fury will beat the Titan X by 10% "at least."

The important information is that it will feature 128 ROP units...! :thumb:

http://wccftech.com/amd-radeon-fury-x-specs-fiji/







...now that the number of ROP units is known, together with the bus speed, the 4096 stream processors and the clocks, the water-cooled Fiji chip compares to the known R9 290X in Uber mode this way:

arithmetic compute power: +53%
texturing power: +53%
rasterizing power: +110% (!)
memory bandwidth: +60%
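The figures in that list can be reproduced from the raw specs. A quick sketch, assuming the rumoured numbers (4096 SP / 256 TMU / 128 ROP at 1050 MHz with 512 GB/s for Fiji, versus the 290X Uber's 2816 / 176 / 64 at 1000 MHz with 320 GB/s); the 1050 MHz clock in particular is an assumption from the article, not a confirmed spec:

```python
# Per-unit throughput scales with unit count times clock, so each
# percentage is (fiji_units * fiji_clock) / (290x_units * 290x_clock) - 1.
r9_290x = {"sp": 2816, "tmu": 176, "rop": 64, "mhz": 1000, "gbps": 320}
fiji    = {"sp": 4096, "tmu": 256, "rop": 128, "mhz": 1050, "gbps": 512}

def gain(unit):
    return (fiji[unit] * fiji["mhz"]) / (r9_290x[unit] * r9_290x["mhz"]) - 1

print(f"compute:   +{gain('sp'):.0%}")    # +53%
print(f"texturing: +{gain('tmu'):.0%}")   # +53%
print(f"raster:    +{gain('rop'):.0%}")   # +110%
print(f"bandwidth: +{fiji['gbps'] / r9_290x['gbps'] - 1:.0%}")  # +60%
```

These are theoretical peak ratios; real game performance rarely tracks any single one of them.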

...

And since the R9 290X in DX12 could rival the Titan X (yes, in draw calls it beat (closely, but beat) the Titan X, which does not mean it will beat it in games, but there is the possibility...), then a card that is at least +53% faster... will blow the Titan X out of any comparison.

Unless AMD screw something up badly   (this possibility cannot be ruled out)
...

But fear not! Just apply some GameWorks AMD-killing code and suddenly the Titan X will outperform water-cooled Fiji  Cheats are cheats... and nVidia *loves* cheats


----------



## RejZoR (Jun 13, 2015)

I'm just wondering how far they will expand DX12 support. I have no doubt it'll perform well based on specs. GCN 1.2 is still DX12-era stuff after all and they must have fiddled with it in some way. Which also goes for the R9-390X... No one is talking about this, just hardware features...


----------



## Steevo (Jun 13, 2015)

Meh, I am waiting for W1zz to come out of shell shock and present the review. 'Cause it's either going to be really good, or really bad.


----------



## R-T-B (Jun 13, 2015)

What I don't get is this isn't any worse than any other speculation post.  Why the calls for locking it?  At least some of this is plausible speculation...  and it's even been somewhat civilized.


----------



## HumanSmoke (Jun 13, 2015)

Steevo said:


> Meh, I am waiting for W1zz to come out of shell shock and present the review. 'Cause it's either going to be really good, or really bad.


Or it might be nothing at all.
Judging by the conversations taking place, there is a possibility that like desktop Trinity launch, AMD might be limiting what reviewers can and cannot test and disclose. Hopefully this isn't the case.


----------



## the54thvoid (Jun 13, 2015)

HumanSmoke said:


> Or it might be nothing at all.
> Judging by the conversations taking place, there is a possibility that like desktop Trinity launch, AMD might be limiting what reviewers can and cannot test and disclose. Hopefully this isn't the case.



In W1zz we trust.


----------



## HumanSmoke (Jun 13, 2015)

RejZoR said:


> Same will happen now. Grenada (R9-390 series) will be based upon Tonga's enhancements, that's why it will perform so much better than R9-290 series.


Well, your vociferous defence of the Grenada being GCN 1.2 and "enhanced", comes to nought. A guy who bought a 390 from Best Buy basically confirmed with GPU-Z and Futuremark that the card is an upclocked 290.






RejZoR said:


> It's not a rocket science...


That's what we've been trying to tell you.


----------



## arbiter (Jun 14, 2015)

HumanSmoke said:


> Well, your vociferous defence of the Grenada being GCN 1.2 and "enhanced", comes to nought. A guy who bought a 390 from Best Buy basically confirmed with GPU-Z and Futuremark that the card is an upclocked 290.


As much as I want to think that is real, the problem is the release date showing Nov 2013; it yells to me that it's another fake.


----------



## RejZoR (Jun 14, 2015)

HumanSmoke said:


> Well, your vociferous defence of the Grenada being GCN 1.2 and "enhanced", comes to nought. A guy who bought a 390 from Best Buy basically confirmed with GPU-Z and Futuremark that the card is an upclocked 290.
> 
> 
> 
> ...



Sorry, I've changed my contacts for the glasses and I still can't see any of your "evidence"...

What you're showing me here is:
a) a GPU-Z that is not yet Radeon Rx-300 series aware
b) not a single piece of evidence of what type of shaders are used or any other underlying technology
c) it only proves that it uses the same SHADER/TMU/ROP/bus count as the R9-290X
d) most of the info is manually added to the GPU-Z database and is clearly detected wrong here
e) it wrongly detects it as Hawaii because of a), c) and d)

It basically confirms everything I've been saying to this point really. I've been saying it will have the same structure of units as R9-290X, but will use enhanced Tonga (GCN 1.2) shaders and will support DX12.

Why do I believe that?
Because it's officially called Grenada and not Hawaii XTL (or Hawaii anything), the way they rebranded Tahiti XT (HD7970) into Tahiti XTL (R9-280X).
Just rebranding a high-end card (meaning it would still be DX11.2 only) would be business suicide at this point.


----------



## HumanSmoke (Jun 14, 2015)

arbiter said:


> As much as I want to think that is real, the problem is the release date showing Nov 2013; it yells to me that it's another fake.


If the screenshot is legit, then that is the date you would expect to see. AFAIK, GPU-Z reads the device ID, which for the 390 is supposedly 1002-67B1, exactly the same as the 290.





My guess is that all will be revealed as soon as this guy gets through his backlog of testing.
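This is why a rebrand shows up under the old name: identification tools match the PCI vendor:device ID pair against a lookup table, and a 390 reporting 1002-67B1 resolves to the 290's entry. A minimal sketch of that kind of check; the table below is illustrative, not GPU-Z's actual database (1002 is AMD's PCI vendor ID):

```python
# Tiny illustrative vendor/device-ID lookup, the same idea GPU-Z
# uses. Keys are (vendor_id, device_id) hex strings.
KNOWN_IDS = {
    ("1002", "67B1"): "Hawaii PRO (R9 290)",   # same ID the "390" reports
    ("1002", "67B0"): "Hawaii XT (R9 290X)",
}

def identify(vendor: str, device: str) -> str:
    """Return the GPU name for a PCI ID pair, or a fallback string."""
    return KNOWN_IDS.get((vendor.upper(), device.upper()), "Unknown GPU")

# A "390" reporting 1002-67B1 therefore shows up as a 290:
print(identify("1002", "67b1"))  # Hawaii PRO (R9 290)
```

If the silicon really were new, it would carry a new device ID and the tool would simply report "unknown" until its database was updated.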


----------



## RejZoR (Jun 14, 2015)

You'll see I was right again... If not, I'll eat my own shoe. And AMD will go bust the same day they'll release these cards...


----------



## R-T-B (Jun 14, 2015)

RejZoR said:


> You'll see I was right again... If not, I'll eat my own shoe. And AMD will go bust the same day they'll release these cards...



Confidence in something with no actual evidence is kinda...  not smart man.  That's all I got to say.  Hope you don't mind 50/50 odds on eating shoes.


----------



## FordGT90Concept (Jun 14, 2015)

Um...wouldn't this be something Best Buy would put on their website?  Note how the pictures never show 390X either...just R7 370 and R7 360.  As the rest already pointed out, none of this comes across as remotely credible.


----------



## HumanSmoke (Jun 14, 2015)

FordGT90Concept said:


> Um...wouldn't this be something Best Buy would put on their website?  Note how the pictures never show 390X either...just R7 370 and R7 360.  As the rest already pointed out, none of this comes across as remotely credible.


Legit Reviews' own people confirmed that the cards are being sold, and at least two people bought their cards from the store. Are you saying that Legit Reviews is posting false information? And that the people buying them from BB are part of some elaborate countrywide hoax? Seems strange that a respected site would burn itself to the ground to post a bogus story about second-tier (and lower) cards when the launch is just (?) a few days away.
Anyhow, if it's bogus, the guy with the "supposed" 390X sure made a good copy of XFX's product ID sticker







R-T-B said:


> Confidence in something with no actual evidence is kinda...  not smart man.  That's all I got to say.  Hope you don't mind 50/50 odds on eating shoes.


The odds seem to be swinging towards the footwear feast judging by the Fire Strike results. Hopefully he's not wearing plastic Crocs.


----------



## Xzibit (Jun 14, 2015)

If they turn out to be true.

It's basically a 290X OC'd to an estimated 1200 core
390X @ 1000 = 290X @ 1200

Probably could overclock it to an estimated 1400 if he put shoes on.


----------



## HumanSmoke (Jun 14, 2015)

Xzibit said:


> Probably could overclock it to est 1400 if he put shoes on.


I heard he posted them out to Slovenia.


----------



## FordGT90Concept (Jun 14, 2015)

The cards may be real but GPU-Z certainly isn't reporting the correct information on them yet.  Just look at them.  Most of the data looks copy-pasta and we know, for a fact, that Grenada is DX12, not 11.2.







If it is correct then Grenada is a rebrand with double the memory, and that is very disappointing.


----------



## wiak (Jun 14, 2015)

HumanSmoke said:


> Or it might be nothing at all.
> Judging by the conversations taking place, there is a possibility that like desktop Trinity launch, AMD might be limiting what reviewers can and cannot test and disclose. Hopefully this isn't the case.


FUD before the storm eh?


----------



## Xzibit (Jun 14, 2015)

FordGT90Concept said:


> The cards may be real but GPU-Z certainly isn't reporting the correct information on them yet.  Just look at them.  Most of the data looks copy-pasta and we know, for a fact, that Grenada is DX12, not 11.2.
> 
> 
> 
> ...



If he's running it in Win 7/8.1 it won't read DX12, whether it supports it or not.  The driver shows Win8 64


----------



## GhostRyder (Jun 14, 2015)

HumanSmoke said:


> Legit Reviews' own people confirmed that the cards are being sold, and at least two people bought their cards from the store. Are you saying that Legit Reviews is posting false information? And that the people buying them from BB are part of some elaborate countrywide hoax? Seems strange that a respected site would burn itself to the ground to post a bogus story about second-tier (and lower) cards when the launch is just (?) a few days away.
> Anyhow, if it's bogus, the guy with the "supposed" 390X sure made a good copy of XFX's product ID sticker
> 
> 
> ...


If this is all true then XFX needs an English lesson...





"If Mantles is the language then Graphics Core Next (GCN) is the translator connects software to hardware and fully take advantage of performance potential in all and only AMD GPUs"



FordGT90Concept said:


> The cards may be real but GPU-Z certainly isn't reporting the correct information on them yet.  Just look at them.  Most of the data looks copy-pasta and we know, for a fact, that Grenada is DX12, not 11.2.
> 
> 
> 
> ...


The only thing that bugs me is the core clocks; that would indicate there has been only a memory frequency change.

Either way, something about this is very fishy to me...


----------



## HumanSmoke (Jun 14, 2015)

GhostRyder said:


> If this is all true then XFX needs an English lesson..."If Mantles is the language then Graphics Core Next (GCN) is the translator connects software to hardware and fully take advantage of performance potential in all and only AMD GPUs"


Screw-ups on box art aren't entirely unknown when the product comes out of China, although that mangled line, while laughable, isn't a patch on the absolute bullshit right above it: "Powered by the language standard used on today's hottest games and consoles including the XBOX One, Playstation 4 and Nintendo Wii"
Mantle isn't a language, it's an API
Mantle isn't used by XBOX One - that would be DirectX 11.x
Mantle isn't used by Play*S*tation 4 (bonus misspelling by XFX) - that would be GNM and GNMX
Mantle isn't used by Nintendo Wii - that would be the OpenGL based GX2.

I'm pretty sure that by tomorrow, some people will hold to the belief that Best Buy, Legit Reviews, the card buyers, and a rogue element inside XFX are secretly working against AMD. I would have added Chinese counterfeiters, but since XFX haven't disavowed the existence of these cards originating from them, I think I can rule that out - which is a shame really as it would be a great call back to the infamous era of CPU remarking and bogus mobo chipsets by unscrupulous third parties and garbage vendors like PC Chips.


----------



## Xzibit (Jun 14, 2015)

Obviously they would all be dead by tomorrow due to the lead poisoning.  duh!!! 



HumanSmoke said:


> Or it might be nothing at all.
> Judging by the conversations taking place, there is a possibility that like desktop Trinity launch, AMD might be limiting what reviewers can and cannot test and disclose. Hopefully this isn't the case.



I think Ryan is just sour.  He knows PC Gamer is sponsoring the event.  He made fun of PC Gamer sponsoring it in his podcast.  Just like any other sponsor they get exclusivity when hosting an event.

He probably wants the clicks which is why he didn't hesitate to post those Fury shots when someone sent them to him.


----------



## RejZoR (Jun 14, 2015)

R-T-B said:


> Confidence in something with no actual evidence is kinda...  not smart man.  That's all I got to say.  Hope you don't mind 50/50 odds on eating shoes.



It's just funny how I got everything right so far "without evidence". I mean, when evidence did come up, it turned out to be what I was saying all along...

And I also hope you don't mind a 50/50 chance of AMD closing their doors on their 300 series release day then... Because releasing a rebrand (relabeling the R9-290X into the R9-390X, since many here are having problems understanding what "rebrand" really means) of DX11.2 hardware more than half a year behind NVIDIA's release of actual DX12 hardware would be, you know, business suicide, and no amount of Fury awesomeness would save them. There is NO WAY in hell or this world that AMD is that stupid. Because if they are, I'll be glad for AMD to go bust. Because they'd deserve it 100% in that case.
But you know, I honestly don't believe they are that stupid so there's that...


----------



## R-T-B (Jun 14, 2015)

RejZoR said:


> It's just funny how I got everything right so far "without evidence". I mean, when evidence did come up, it turned out to be what I was saying all along...



I'm willing to admit you're right just as much as I'm willing to admit you're wrong (not what I'm seeing above, so not sure what you're reading, but that's beside the point).  What's kinda annoying to me to be honest is your level of certainty on a product when the last generation were largely rebrands.

I hope you're right for the record...  but I don't share your confidence.



> And I also hope you don't mind 50/50 chance of AMD closing their door on their 300 series release day then...



Of course I do.  Who in their right mind would want that?  But I also highly doubt it would ruin them.  We have to remember, they're squeezing every last drop into R&D right now and maybe it made sense to them to do this for a gen or two to develop a new tech.  *shrugs*


----------



## RejZoR (Jun 14, 2015)

The Rx-200 series was largely a rebrand because they could afford to do that and because we were still within the DX11 era. It made no difference either way. Kepler was, as we all know, less than stellar. They don't have that luxury this time around...


----------



## Tatty_One (Jun 14, 2015)

OneMoar said:


> @Tatty_One
> I'll give you an eCookie if you lock this thread


Sadly I am not a fan of cookies, you will have to do a little better than that


----------



## Super XP (Jun 14, 2015)

HumanSmoke said:


> There is also no proof that the reverse is true...so you're refuting an argument because of no proof with an argument that also doesn't contain proof. Did someone ask you to show a written example of irony?
> 
> Yet the R7 370 is almost certainly a rebrand of the R9 270 (which is in turn a rebranded HD 8860/8870 OEM, which is turn a rebranded HD 7870 GHz Edition.) - note the Crossfire finger in the slide.
> 
> In all honesty, I doubt AMD do either.


Lol,
Some sites claim no re-brands and others claim all the 370's are re-branded.

At this point it's all speculation. And yes, I'm speculating on a rumoured speculation. Why not.

"""The AMD Radeon R9 300 series will reportedly contain no re-branded graphics cards from the R9 200 series. An entirely new lineup is coming at Computex. The report in question alleges that AMD's original plan was centered around introducing a new flagship Radeon product this month and rebranding a few other R9 200 series products to R9 300 series products."""

This is where I read about it way back. Obviously they were wrong. "Re-branding" should be illegal.  

*AMD Radeon 300 Series Won’t Be A Rebrand, New GPUs Coming in June*
*http://wccftech.com/amd-r9-300-series-not-rebrands/*


----------



## Solaris17 (Jun 14, 2015)

FordGT90Concept said:


> Um...wouldn't this be something Best Buy would put on their website?  Note how the pictures never show 390X either...just R7 370 and R7 360.  As the rest already pointed out, none of this comes across as remotely credible.



No. I don't know if you were around when I was the first consumer to own a pair of 9600GTs, but I bought them at a local Best Buy because, while we did get deliveries of the cards weeks in advance, we weren't allowed to put them out. However, they were in the system and allowed to be rung out. That said, someone put them on the shelf too early. I walked in without my uniform on, bought them, and immediately told w1zz.

No, they weren't on the site. That's controlled by the company itself; those systems are usually all automated.


----------



## 64K (Jun 14, 2015)

Super XP said:


> This is where I read about it way back. Obviously they were wrong. "Re-Branding" should be illegal.



True but wouldn't we just be bitching about the lack of shiny new cards otherwise? The bottom line is that neither AMD nor Nvidia can put much on the table at this point. Next year on the 16nm process we will have what we want.


----------



## RejZoR (Jun 14, 2015)

If you want to make all 300 series cards fully DX12 capable, then we can't talk about rebranding anyway. It's impossible to begin with then...


----------



## Super XP (Jun 14, 2015)

64K said:


> True but wouldn't we just be bitching about the lack of shiny new cards otherwise? The bottom line is that neither AMD nor Nvidia can put much on the table at this point. Next year on the 16nm process we will have what we want.


If the price is right then I'll upgrade my DualX. But the issue is my current card is also a rebrand lol, and I was not aware of it at the time of purchase.
Just do NOT like deception, that's all.

I checked CPUID and my stomach turned LOL. Felt like a sucker actually.


----------



## the54thvoid (Jun 14, 2015)

RejZoR said:


> fully DX12 capable



What does that even mean?  AMD have already said GCN is not fully compliant with some aspect of DX12 and likewise, while Maxwell 2 (GM200) is compliant in that specific area, it is not Tier 3.  So really, saying fully DX12 capable is very confusing.

Go back here for more confusion http://www.techpowerup.com/213191/n...t-supporting-direct3d-12-1.html?cp=2#comments


----------



## RejZoR (Jun 14, 2015)

Erm, isn't that what I've said? Clearly existing GPUs aren't DX12-capable "enough" to deserve the name DX12 GPUs. Clearly they have to do something about it for the upcoming series. How far they'll go with DX12 features, only they know, but they certainly can't support only what current DX11.2 cards support...


----------



## Super XP (Jun 14, 2015)

RejZoR said:


> Erm, isn't that what I've said? Clearly existing GPUs aren't DX12-capable "enough" to deserve the name DX12 GPUs. Clearly they have to do something about it for the upcoming series. How far they'll go with DX12 features, only they know, but they certainly can't support only what current DX11.2 cards support...


Interesting you bring this point up. But it depends on when they plan on implementing DX12 fully. ATM it seems it's going to take a while, just like its counterparts.


----------



## lilhasselhoffer (Jun 15, 2015)

12 days of this thread.  Just, for one moment, let that sink in.  For nearly two weeks people have argued that Nvidia is better, AMD is better, and all of this argument is based off of conflicting information and complete fabrications of "fact" that are demonstrably idiotic.

This is how religions start.


I will make this plea one last time.  Can we please wait until some benchmarks appear?

DX 12 is a pipe dream right now.  Despite the fact that MS is selling Windows 10 with it, it isn't ready.  There's no hardware that supports all its features, there's no reasonably sized test sample showing its benefits, and even when it's introduced the consoles are unlikely to support its features 100% (so developer adoption is likely to stagnate again, similar to DX11).  Becoming frothing mad, on either side of this debate, is insane.  Keep calm, and let's see some actual test data before we jump to any conclusions.


----------



## arbiter (Jun 15, 2015)

lilhasselhoffer said:


> 12 days of this thread. Just, for one moment, let that sink in. For nearly two weeks people have argued that Nvidia is better, AMD is better, and all of this argument is based off of conflicting information and complete fabrications of "fact" that are demonstrably idiotic.


Yeah, it's gotten pretty sad. We know for a fact what Nvidia has. Everything from the AMD side has been super hype that it will be a super GPU that nothing can match, even though history says otherwise. Yeah, it does have HBM memory, but just because it has it doesn't mean performance scales 1:1 with it. There is a point where bandwidth benefits level off. You can look at CPU performance with different speeds of RAM as proof of that. As for where that point of diminishing returns is, that's yet to be known.
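The "bandwidth benefits level off" point has a standard formalization: the roofline model, where attainable throughput is capped by the lesser of peak compute and bandwidth times arithmetic intensity. A toy sketch with purely illustrative numbers (the 8600 GFLOP/s peak and the intensity of 16 FLOPs/byte are assumptions, not any real card's figures):

```python
# Toy roofline model: performance is capped by whichever of compute
# or memory traffic is the bottleneck for a given workload.
def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Min of the compute roof and the bandwidth-limited slope."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

PEAK = 8600        # hypothetical peak GFLOP/s
INTENSITY = 16     # hypothetical FLOPs per byte for some shader

for bw in (320, 512, 1024):
    print(f"{bw} GB/s -> {attainable_gflops(PEAK, bw, INTENSITY):.0f} GFLOP/s")
```

Going from 320 to 512 GB/s helps here (5120 to 8192 GFLOP/s), but doubling again to 1024 GB/s gains almost nothing because the compute roof takes over, which is exactly the diminishing-returns argument above.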


----------



## Steevo (Jun 15, 2015)

lilhasselhoffer said:


> 12 days of this thread.  Just, for one moment, let that sink in.  For nearly two weeks people have argued that Nvidia is better, AMD is better, and all of this argument is based off of conflicting information and complete fabrications of "fact" that are demonstrably idiotic.
> 
> This is how religions start.
> 
> ...




Years ago our mouths watered for DX10 and it took a long time to get a worthwhile title out that meant anything. The same thing will happen here: we get DX12 and all of a sudden, what do we do with it? We need games to support the standard to take advantage of it, and that will require a year for something that doesn't run like it came flying out of a monkey's ass and wasn't done for the sake of saying durr hurr, it's DX12, you buy....

So just like when tessellation became the new huge toy, which did jack shit for a long time, except allowing one dickhead company to pay off a developer for shit like Crysis 2's underground tessellation on non-rendered water, overabundant tessellation on concrete barriers, shitty-looking rock roads in demos, and specifically porting an OpenCL piece of software back to in-house proprietary code so that on AMD systems the call would have to go back through the CPU to make it seem like AMD drivers have issues...... well, by the time this card is mainstream they will have found and fixed the bugs in DX12 and be on to a standard that no current cards support, making all this moot.


----------



## R-T-B (Jun 15, 2015)

RejZoR said:


> Erm, isn't that what I've said? Clearly existing GPUs aren't DX12-capable "enough" to deserve the name DX12 GPUs.




Yes, they are.  They only need to support feature level 12_0 to be DX12 "compliant" and benefit from it.

The rest is optional standards crap, much like DX9 capability bits were.

EDIT:  And quit trying to lock a thread just because there's speculation, you guys.  If there wasn't speculation, 90% of tech sites would vaporize.


----------



## Steevo (Jun 15, 2015)

FYI

http://physxinfo.com/news/12197/introducing-nvidia-hairworks-fur-and-hair-simulation-solution/

"
*PhysXInfo.com: It was really a big surprise to see NVIDIA HairWorks utilizing DirectCompute API, while other NVIDIA hardware physics effects is typically exclusive to CUDA capable GPUs. What was the reasoning behind this decision?*

*Tae-Yong Kim:* One of the goals of NVIDIA GameWorks is to solve hard visual computing problems in a way that balances implementation efficiency and time-to-market, runtime performance, and ease of integration. This requires choosing the right technologies, and sometimes that will lead to CUDA solutions, other times to DirectCompute, and other times to solutions using completely different approaches.

With NVIDIA HairWorks, the balance landed in favor of DirectCompute, partly because the simulation portion of the algorithm is a small part of overall runtime cost, which is dominated by rendering"

Nvidia was using DirectCompute with Hairworks until they discovered they could screw AMD users and owners of their own older series, and hope no one noticed it runs like shit unless you have a new 9XX series GPU. 

http://www.tomshardware.com/reviews/call-of-duty-ghosts-pc-performance,3683-8.html


----------



## MxPhenom 216 (Jun 15, 2015)

purecain said:


> even if fury is a faster card, I may opt for nvidias 980ti...  the reason I support amd usually is because I don't want them to go under... the core on the 980ti is amazing though, kidney anyone?????


The 980 Ti won't cure your intolerance to hackers though.


----------



## GhostRyder (Jun 15, 2015)

arbiter said:


> Yeah, it's gotten pretty sad. We know for a fact what Nvidia has. Everything from the AMD side has been super hype that it will be a super GPU that nothing can match, even though history says otherwise. Yeah, it does have HBM memory, but just because it has it doesn't mean performance scales 1:1 with it. There is a point where bandwidth benefits level off. You can look at CPU performance with different speeds of RAM as proof of that. As for where that point of diminishing returns is, that's yet to be known.


You have zero right to talk, especially on the whole "History says otherwise" argument.  Have you looked at actual benchmarks on this site recently? The GTX 770/680 versus R9 280X/HD 7970, and the R9 290X versus GTX 780 Ti, show things in a different light.  So give it up on the "History says otherwise" argument.  While there is no proof of how good it is, which everyone acknowledges (mostly), you have no facts either...



RejZoR said:


> Erm, isn't that what I've said? Clearly existing GPUs aren't DX12-capable "enough" to deserve the name DX12 GPUs. Clearly they have to do something about it for the upcoming series. How far they'll go with DX12 features, only they know, but they certainly can't support only what current DX11.2 cards support...


Yeah, nothing is, and nothing will be for a while.  It honestly makes very little difference in my book because it's going to be so long before DX12 is actually used.

Either way, what we need is some actual facts, which will hopefully come very soon (the 16th will have something).


----------



## Schmuckley (Jun 15, 2015)

After a quick skim: this thread is typical TPU. I will make a prediction: there will be many butthurt Nvidia fanboys when the AMD card releases.
PS: the EVGA Titan X is... I haven't checked lately, but... around...
The Nvidia fanboyness of TPU is well known.
Myself, whatever I can get the most performance out of for less than $300 is what I'm getting.
I am an EOL buyer...
780 Ti, you say?


----------



## HumanSmoke (Jun 15, 2015)

Schmuckley said:


> After a quick skim;This thread is typical TPU


Well if it wasn't before, I'm sure your post put it over the top.


Schmuckley said:


> Myself,Whatever I can get the most performance out of for less than $300 is what I'm getting.
> I am an EOL buyer.. 780 Ti you say?


Get the best of both worlds. Buy a 290X and BIOS flash it to 390X. Sorted.


----------



## arbiter (Jun 15, 2015)

Steevo said:


> Nvidia was using DirectCompute with Hairworks until they discovered they could screw AMD users, and owners of their older series and hope no one noticed it runs like shit unless you have a new 9XX series GPU.


HairWorks uses a DX11 standard (tessellation); it's not Nvidia's fault that AMD's cards are bad at it.



GhostRyder said:


> Have you ever looked at actual benchmarks on this site recently because the GTX 770/680 versus R9 280X/HD 7970


When the GTX 680 was released it beat the 7970 by a bit. AMD had to re-release the 7970 with clocks upped to 1 GHz to make it compete.


----------



## Tatty_One (Jun 15, 2015)

arbiter said:


> Hairworks uses a standard in DX11, not nvidia's fault AMD's cards suck in doing it.
> 
> 
> When gtx680 was released it beat the 7970 by a bit. AMD had to re-release the 7970 with clocks upp'ed to 1ghz to make it compete.


To be accurate, according to this site's initial review, by 6% across all resolutions. I acknowledge that "quite a bit" is subjective, but 6% hardly falls within it in my subjectivity; to be fair, it was slightly more in favour of the 680 at lower resolutions, and less than that (4%) at higher ones. I do seem to recall, though, that when the 290/290X came out, despite them being acknowledged as running power-hungry and hot, an absolute legion of people, many from here, went out and bought one.


----------



## RejZoR (Jun 15, 2015)

I hate percentage values in discussions of graphics cards. They are absolutely pointless and misleading. 6% sounds like a lot, but at 60 fps it means a 3.6 fps difference. And we all know it doesn't universally extrapolate into 3.6 fps less as a whole; it just means that much of a difference in that exact scenario (usage of HairWorks). And then you know it's all BS.

Besides, how is it with TressFX, which was the first feature to utilize this, anyway?
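The arithmetic in the post above is easy to sanity-check. A minimal sketch (illustrative numbers only, not benchmark results):

```python
# Sanity check on the percentage-vs-fps point: the same relative gap
# means a different absolute fps difference at every baseline framerate.
# All numbers are illustrative, not benchmark data.

def fps_gap(baseline_fps: float, percent_faster: float) -> float:
    """Absolute fps difference implied by a relative performance gap."""
    return baseline_fps * percent_faster / 100.0

for fps in (30, 60, 120):
    print(f"6% of {fps} fps is {fps_gap(fps, 6):.1f} fps")
```

At 60 fps a 6% gap is the 3.6 fps mentioned above; at 120 fps the same 6% becomes 7.2 fps, which is exactly why a bare percentage can mislead.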


----------



## SonicZap (Jun 15, 2015)

arbiter said:


> When gtx680 was released it beat the 7970 by a bit. AMD had to re-release the 7970 with clocks upp'ed to 1ghz to make it compete.


Actually, while AMD did up the clocks on the 7970, most of the performance difference has come from AMD's so-called horrible drivers improving the performance of all GCN cards more than Nvidia improved Kepler's. Back in the day the GTX 680 beat the HD 7970; now the R9 280X beats the GTX 770 (which is an overclocked 680) by a fair margin while having 1 GB more VRAM. With current drivers even the original 7970 beats the GTX 770, as you can see in GhostRyder's link. So, in this case AMD didn't really need to raise clocks to compete. They wanted the most-powerful-GPU crown and a higher price (= fatter margins) for Tahiti, and it was successful. The 7970 GHz Edition was the most powerful single-GPU card until the Titan was released.

About tessellation performance, the R9 285 (Tonga) is decent at it, so Fury likely won't exactly suck at it either.


----------



## RejZoR (Jun 15, 2015)

The problem is, it's ONLY Fury. The rest will be the same shit as almost two years ago. Yay...


----------



## R-T-B (Jun 15, 2015)

SonicZap said:


> Actually, while AMD did up the clocks on the 7970, most of the performance difference has come from AMD's so-called horrible drivers improving the performance of all GCN cards more than Nvidia improved Kepler's performance. Back in the day GTX 680 beat HD 7970, now the R9 280X beats the GTX 770 (which is an overclocked 680) by a fair margin while having 1 GB more VRAM. With current drivers even the original 7970 beats the GTX 770 (= overclocked 680), as you can see in GhostRyder's link. So, in this case AMD didn't really need to raise clocks to compete. They wanted to get the most powerful GPU crown and a higher price (= fatter margins) for Tahiti, and it was succesful. The 7970 GHz Edition was the most powerful single-GPU card until the Titan was released.
> 
> About tesselation performance, the R9 285 (Tonga) is decent at it, so Fury likely won't exactly suck in it either.



You do realize the R9 280X is the original 7970 at Ghz edition clocks, right?


----------



## purecain (Jun 15, 2015)

MxPhenom 216 said:


> The 980ti wont cure your intolerance to hackers though.


It's weird; I only ever noticed them on that one version of BF... I still play every other multiplayer game just fine... so I'm still happy PC gaming... obviously if I ever started playing BF again it would be an issue on the servers available to me with the lowest ping (but I won't). Can't pull myself away from The Witcher, and Destiny has been a lot of fun on the Xbox One... I'm sure PC multiplayer will be back in my favour when the right title comes out. It gave me the push to want to learn coding, so something positive came out of it in the end..

btw, can't wait for tomorrow.... either a Titan X or AMD's Fury will be on its way...


----------



## SonicZap (Jun 15, 2015)

R-T-B said:


> You do realize the R9 280X is the original 7970 at Ghz edition clocks, right?


Yes, I know that. What about it?


----------



## R-T-B (Jun 15, 2015)

SonicZap said:


> Yes, I know that. What about it?



As in, they did need an overclock. You're comparing the R9 280X benchmarks (overclocked) to a 770... though I suppose that is an overclocked 680 as well...

I guess it's early. Not sure what I was getting at, lol.


----------



## GhostRyder (Jun 15, 2015)

arbiter said:


> Hairworks uses a standard in DX11, not nvidia's fault AMD's cards suck in doing it.
> 
> 
> When gtx680 was released it beat the 7970 by a bit. AMD had to re-release the 7970 with clocks upp'ed to 1ghz to make it compete.


They bumped it to the same frequency area the GTX 680 started at, which was 1 GHz. Doing that put it over the top, while there were already aftermarket variants at that level and beyond. I could argue the reverse and say the GTX 770 later raised its core clocks to the limit to compete with the R9 280X, which was clocked the same as the HD 7970 GHz Edition (yet in that case the R9 280X still outperforms it overall).



RejZoR said:


> The problem is, ONLY Fury. The rest will be the same shit as almost 2 years ago. Yay...


We will have to see, I guess. I am hoping it runs on GCN 1.2, as that would at least offer some nice improvements, enough to satisfy for now. However, if it really is just a core-clock-bumped, 4-more-GB, slightly-more-mature-silicon graphics card, then I, like many others, will be disappointed...


----------



## purecain (Jun 15, 2015)

I'd not seen any retail packaging, so I thought I'd post it up... it says 8 GB GDDR5, so I don't know what to make of it... in fact this has gotta be Photoshopped... someone just posted it to AMD's Facebook page... looks fake now I'm actually reading everything...
Okay, it ain't fake...


----------



## RejZoR (Jun 15, 2015)

GhostRyder said:


> They bumped it to the same frequency area as the GTX 680 started at which was 1ghz.  Doing that put it over the top while there were already after market variants at that area and beyond.  I could argue the reverse and say the GTX 770 raised its core clocks to the limit later to compete with the R9 280X which was clocked the same as the HD 7970ghz edition (Yet in that case the R9 280X still out performs it overall).
> 
> 
> We will have to see I guess, I am hoping it runs on GCN 1.2 as it would at least offer some nice improvements at least enough to satisfy for now.  However if it really is just a core clock bumped/4 more gb/slightly more mature silicon graphics card then I like many others will be disappointed...



But if performance is identical to the R9 290X, then I find it very unlikely to use GCN 1.2. I want to believe it will have it, but nothing is pointing in that direction. I might eat a whole closet of shoes if it turns out to actually be GCN 1.2...


----------



## purecain (Jun 16, 2015)

Nope, the R9 390X and 390 are definitely using the same Hawaii core. No alterations, exactly the same PCB. AMD have improved bandwidth by using faster memory. People with the 8 GB version of the 290X should be able to just flash their cards for a firmware upgrade if their GDDR5 is fast enough. Hopefully a 4 GB version becomes available so I can flash my 290X.

No news of the Fury yet, only that it has 4096 cores... that would be enough cores to give the Titan X (meant to say 980 Ti) a run for its money... I am so tempted to buy a Titan X (980 Ti)... it's a quality card...
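A quick back-of-the-envelope check on the bandwidth bump mentioned above, assuming the widely reported specs (a 512-bit bus on both cards, with GDDR5 at 5 Gbps effective on the 290X versus 6 Gbps on the 390X; treat those clocks as assumptions, not confirmed figures):

```python
# Peak GDDR5 bandwidth, assuming 512-bit bus and the commonly reported
# effective memory clocks for the 290X (5 Gbps) and 390X (6 Gbps).

def gddr5_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bits times the
    effective per-pin data rate, divided by 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

r9_290x = gddr5_bandwidth_gbs(512, 5.0)  # 320.0 GB/s
r9_390x = gddr5_bandwidth_gbs(512, 6.0)  # 384.0 GB/s
print(r9_290x, r9_390x, r9_390x - r9_290x)
```

Under those assumed clocks the delta works out to 64 GB/s, i.e. faster memory alone gives roughly a 20% bandwidth bump on the same bus.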


----------



## RejZoR (Jun 16, 2015)

Funny thing is, I skipped the R9 290X because the difference was too small to be worth it over an overclocked HD 7950. Two years later and the same thing still stands, because they've just rebranded the goddamn shit.


----------



## the54thvoid (Jun 16, 2015)

purecain said:


> nope the R9 390x and 390 are definatly using the same Hawaii core. no alterations. exactly the same pcb. AMD have improved bandwidth by 50mb sec using faster memory.  people with the 8gb version of the 290x should be able to just flash their cards for a firmware upgrade if their Gddr5 mem is fast enough. hopefully a 4gb version becomes available so I can flash my 290x.
> 
> no news of the Fury yet, only that it has 4096 cores... that would be enough cores to give the titan x a run for its money... I am so tempted to buy a titan x.... its a quality card.....



Why would you buy a Titan X when the custom 980 Tis are coming out cooler, quieter and 10% faster?
The extra 6 GB is all it has; the Titan X has been properly usurped by the 980 Ti.


----------



## 64K (Jun 16, 2015)

the54thvoid said:


> Why would you buy a Titan X when the custom 980ti's are coming out cooler, quieter and 10% faster?
> The extra 6gb is all it has, the Titan X has been properly usurped by the 980ti.



and $350 cheaper (USA).


----------



## mroofie (Jun 16, 2015)

purecain said:


> nope the R9 390x and 390 are definatly using the same Hawaii core. no alterations. exactly the same pcb. AMD have improved bandwidth by 50mb sec using faster memory.  people with the 8gb version of the 290x should be able to just flash their cards for a firmware upgrade if their Gddr5 mem is fast enough. hopefully a 4gb version becomes available so I can flash my 290x.
> 
> no news of the Fury yet, only that it has 4096 cores... that would be enough cores to give the titan x a _*run for its money*_... I am so tempted to buy a titan x.... its a quality card.....


Did you forget about the gtx 980 ti ?




R-T-B said:


> You do realize the R9 280X is the original 7970 at Ghz edition clocks, right?


Tell Em


----------



## Zero3606 (Jun 16, 2015)

the54thvoid said:


> Why would you buy a Titan X when the custom 980ti's are coming out cooler, quieter and 10% faster?
> The extra 6gb is all it has, the Titan X has been properly usurped by the 980ti.



The custom 980 Tis are overclocked; that's the reason why they are faster. I have a Titan X SC, and then I overclocked it again; I haven't gotten above 67 degrees and it's faster than any custom 980 Ti. It's air-cooled, too.


----------



## GhostRyder (Jun 16, 2015)

Zero3606 said:


> The custom 980 Ti's are overclocked that's the reason why they are faster. I have a Titan X SC and then I overclocked it again and I haven't gotten above 67 degrees and its faster than any custom 980 Ti. I also have it air cooled.


The difference, though, and what makes the GTX 980 Ti a better value, is the custom-PCB variants, which allow for further overclocking. From what I have seen, the standard one already overclocks slightly better as is, and once you include those custom PCBs you can push it to match or exceed the Titan X.



RejZoR said:


> Funny thing is, I skipped R9-290X because difference was too small to be worth it for overclocked HD7950. 2 years later and the same thing still stands because they've just rebranded the god damn shit.


Wait for the Fury Pro; we don't have official pricing on the Fury yet, and if we look at the pricing of the 390X, it's not unlikely we'll see it around the $500-600 mark. That being said, I know you are wanting something a little cheaper, but bear in mind we have no official confirmation on every detail yet. Something might surprise us.


----------



## RejZoR (Jun 16, 2015)

The HD 7950 was around 350€ when I bought it, and it was a good deal faster than everything else. The only thing faster was the HD 7970 at the time.

The R9 390X will be ~400€ and it'll be only marginally faster than my 3-year-old graphics card, all because of this two-time rebranding. It just doesn't compute.


----------



## purecain (Jun 16, 2015)

I meant the 980 Ti... doh! I'm getting my cards mixed up... they are the same core and, like you say, virtually the same thing.

I'm hoping AMD's Fury card beats the 980 Ti and there's a price war, because if any of the top cards drop below 500 I can pick up two...

I saved £1000 towards my GPU upgrade... two 980 Tis or two AMD Fury cards would be sweet... and that means I'll have a 290X 4 GB (390X) going cheap, fully water-cooled... it runs at 40-45°C at a solid 1050 MHz. Without water cooling these cards were always dropping to 800 MHz; both my samples exhibited this behaviour...


----------



## RejZoR (Jun 16, 2015)

I could buy two Fury Xs as well, but that would be throwing money out the window, and I find that stupid. A vanilla Fury for 400€, fuck yeah, yes please. But they are trying to sell us two-year-old tech for that kind of money. I got the brand-new-tech HD 7950 for 50€ less, at 350€. It was unmatched at the time: brand new GCN architecture, 3 GB of VRAM, kick-ass stuff for a very reasonable price.

The R9 390X is a ripoff by itself because it's just rehashed junk from years ago, and Fury is a ripoff because it's overpriced if we extrapolate prices from the R9 390X...


----------



## the54thvoid (Jun 16, 2015)

Hell, if the Fury card retails on release at or around £500 I'll eat RejZoR's other shoe.


----------



## Ferrum Master (Jun 16, 2015)

Just two hours guys, the streaming link from E3 is already up.


----------



## RejZoR (Jun 16, 2015)

They are going to present it at E3? Do you have the link? I just have to prepare a pile of shoes and set the alarm clock...


----------



## Ferrum Master (Jun 16, 2015)

Hmmm lets see

http://www.twitch.tv/amd/popout


----------



## RejZoR (Jun 16, 2015)

The webpage is as slow as if I were using a 56k modem to stream 8K video. It barely even loads without any video being streamed...


----------



## Ferrum Master (Jun 16, 2015)

RejZoR said:


> The webpage is slow like I'm using a 56k modem and I want to stream 8K video through it. It barely even loads even without any video being streamed...



It is located in the United States; what do you expect?


----------



## RejZoR (Jun 16, 2015)

Twitch otherwise works fine, even when people/companies stream from the US. Just not this time... Hell, I watched the GTX 900 series press conference through it...


----------



## Ferrum Master (Jun 16, 2015)

RejZoR said:


> Twitch works fine otherwise even when ppl/companies from US stream through it. Just not this time... Hell, I watched GTX 900 series press conference through it...



Well, as soon as the crowd starts watching, the stream will be mirrored from buffered sources all over the world by each leading ISP in their respective country... that's just how the internet works...


----------



## LightningJR (Jun 16, 2015)

30 min! \o/


----------



## horik (Jun 16, 2015)

Just got home from work, 20 min more


----------



## the54thvoid (Jun 16, 2015)

Don't want to be a sourpuss, but I'll not trust a thing they say. I only ever get reliable info from reviews from multiple sources. PR announcements are cloak-and-dagger stuff - be it Nvidia screw-gate or Hawaii throttle-gate.


----------



## Ferrum Master (Jun 16, 2015)

For the lulz, let's take a shot before


----------



## horik (Jun 16, 2015)

the54thvoid said:


> Don't want to be a sour puss but I'll not trust a thing they say.  I only ever get reliable info from reviews from multiple sources.  PR announcements are cloak and daggers - be it Nvidia screw gate or Hawaii throttle gate.


 
Seems like some benchmarks have shown up on YouTube.


----------



## RejZoR (Jun 16, 2015)

The stream is as dead as a roadkill. Like wtf...


----------



## Ferrum Master (Jun 16, 2015)

RejZoR said:


> The stream is as dead as a roadkill. Like wtf...



Wut? I'm using it fine right now. At least it's British English... my ears don't hurt.


----------



## RejZoR (Jun 16, 2015)

The stream is not even fucking loading. It's just blank grey page with Twitch text on it. They can't even do streaming right ffs...


----------



## LightningJR (Jun 16, 2015)

working perfectly for me. Must be your location


----------



## Ferrum Master (Jun 16, 2015)

It even works in Spartan


----------



## Luka KLLP (Jun 16, 2015)

RejZoR said:


> The stream is not even fucking loading. It's just blank grey page with Twitch text on it. They can't even do streaming right ffs...


Yeah this seems to happen with almost every big launch stream I try to catch...

Sadly I'm not home from uni yet so I'll have to see afterwards..


----------



## Ferrum Master (Jun 16, 2015)

R9 390X @ $439, did I hear that right?


----------



## LightningJR (Jun 16, 2015)

$329 for 390, $429 for the 390x..


----------



## purecain (Jun 16, 2015)

They missed out the enthusiast segment... wtf... O-o

Scrap that, those Fury cards all look amazing...

549 dollars... two, please...


----------



## LightningJR (Jun 16, 2015)

purecain said:


> they missed out the enthusiast segment...wtf...O-o


 Expect it  closer to the end. They always leave the best for last....


----------



## Ferrum Master (Jun 16, 2015)

purecain said:


> they missed out the enthusiast segment...wtf...O-o



That's called the foreplay...


----------



## LightningJR (Jun 16, 2015)

Ferrum Master said:


> That's called the foreplay...


lol


Just for reference: the cheapest 290 on Newegg is $259.99 and the 390 is going to be $329.99; the cheapest 290X is $299.99 and the 390X is going to be $429.99.



His voice clearing is getting pretty annoying


----------



## Ferrum Master (Jun 16, 2015)

LightningJR said:


> His voice clearing is getting pretty annoying



It seems Oculus has its own APU... gosh... I bet he's stressing too much...


----------



## xfia (Jun 16, 2015)

horik said:


> Seems like some benchmarks have showed up on YouTube.



I think it will take a week or so for the pricing to balance out.

I'm not the biggest fan of these boxing videos; they always seem to say something that really doesn't fit the big picture.

What about if you put three 970s against three 390s? In two-way I have seen the 290X 8 GB putting up around 15 percent better performance.

As for the heat... maybe some of them just don't have the best coolers. I have a 290 Tri-X that only gets into the 70°C area after hours of gaming or the most demanding titles. From what I have seen that is normal, and even the 290X Tri-X doesn't break ~73°C... which is quite a bit under throttle.


----------



## AsRock (Jun 16, 2015)

LightningJR said:


> working perfectly for me. Must be your location



Yeah no problems at all here either.


----------



## horik (Jun 16, 2015)

Fury time it seems, show us Lisa.


----------



## the54thvoid (Jun 16, 2015)

R9 nano..... nice


----------



## SonicZap (Jun 16, 2015)

Hmm, AMD Radeon R9 Nano. "Half the power of the R9 290X". Sounds interesting.

Supposedly Fiji has 1.5x better performance-per-watt over Hawaii.



LightningJR said:


> 399 for 390, 429 for the 390x.. crazy


Didn't they say $329 for the 390 and $429 for the 390X? Still high, though.


----------



## FordGT90Concept (Jun 16, 2015)

So screw 390X and everyone buy 290X! sadpanda.jpg


----------



## xfia (Jun 16, 2015)

FordGT90Concept said:


> So screw 390X and everyone buy 290X! sadpanda.jpg


Yeah, it seems the few extra features really won't be a big deal for most people, and if you know you only want a single card then there isn't much point to 8 GB.


----------



## LightningJR (Jun 16, 2015)

SonicZap said:


> Hmm, AMD Radeon R9 Nano. "Half the power of the R9 290X". Sounds interesting.
> 
> Supposedly Fiji has 1.5x better performance-per-watt over Hawaii.
> 
> ...



I could have heard wrong, but I didn't think I did. Idk maybe I did.

Edit: You are right $329.99 and $429.99


----------



## Ferrum Master (Jun 16, 2015)

649$ TAKE MY MONEY FFSSSS


----------



## LightningJR (Jun 16, 2015)

649 for fury x, very nice


----------



## Loosenut (Jun 16, 2015)

Fury X $649??


----------



## horik (Jun 16, 2015)

$549 for the Fury, may get that one.


----------



## the54thvoid (Jun 16, 2015)

@RejZoR - can I have your other shoe?

I'll have a salad with it.


----------



## RejZoR (Jun 16, 2015)

It was an awesome event. Couldn't see shit because the stream of the live event didn't work at all... How can I be confident in their hardware if they can't even arrange their reveal event properly... AMD, I'm disappoint...


----------



## the54thvoid (Jun 16, 2015)

A dampener is the "new era" lie built on the 390X rebrands.

Fiji looks to be very, very good. Roll on the NDA lifting.


----------



## purecain (Jun 16, 2015)

Amazing price, $549. I wonder if that will translate into anything near £350-399 (the price usually gets gouged by the retailers as well, so we might be looking at anything up to £550)... epic result...

I think my dream of owning two water-cooled Fury cards might become a reality after all...

Then I'm gonna spank some 980 Tis... lol, jks. A brilliant result for the PC community...


----------



## Steevo (Jun 16, 2015)

Worked fine for me on my phone.


----------



## Frag_Maniac (Jun 17, 2015)

I think none of this means much until we see some reliable benches pitting Fury/Fury X against the 980 Ti.

What say W1zzard, any chance you'll be benching one of these cards soon?


----------

