# Radeon R9 380X Based on "Grenada," a Refined "Hawaii"



## btarunr (Feb 9, 2015)

AMD's upcoming Radeon R9 380X and R9 380 graphics cards, with which it wants to directly take on the GTX 980 and GTX 970, will be based on "new" silicon codenamed "Grenada." Built on the 28 nm silicon fab process, Grenada will be a refined variant of "Hawaii," much in the same way that "Curacao" was of "Pitcairn" in the previous generation. 

The Grenada silicon will have the same specs as Hawaii: 2,816 GCN stream processors, 176 TMUs, 64 ROPs, and a 512-bit wide GDDR5 memory interface holding 4 GB of memory. Refinements in the silicon over Hawaii could allow AMD to increase clock speeds to outperform the GTX 980 and GTX 970. We don't expect the chip to be any more energy efficient at its final clocks than Hawaii; AMD's design focus appears to be performance. AMD could save itself the embarrassment of a loud reference-design cooler by throwing the chip up for quiet custom-design cooling solutions from AIB (add-in board) partners from day one. 






In other news, the "Tonga" silicon, which made its debut with the performance-segment Radeon R9 285, could form the foundation of the Radeon R9 370 series, consisting of the R9 370X and the R9 370. Tonga physically features 2,048 stream processors based on the more advanced GCN 1.3 architecture, 128 TMUs, 32 ROPs, and a 384-bit wide GDDR5 memory interface. Both the R9 370 and R9 370X could feature 3 GB as the standard memory amount. 

The only truly new silicon with the R9 300 series is "Fiji." This chip will be designed to drive AMD's high-end single- and dual-GPU graphics cards, and will be built to compete with the GM200 silicon from NVIDIA and the GeForce GTX TITAN-X it will debut with. It features 4,096 stream processors based on the GCN 1.3 architecture (double that of "Tonga"), 256 TMUs, 128 ROPs, and a 1024-bit wide HBM memory interface offering 640 GB/s of memory bandwidth. 4 GB could be the standard memory amount. The three cards AMD will carve out of this silicon are the R9 390, the R9 390X, and the R9 390X2.
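As a sanity check on figures like that 640 GB/s, peak memory bandwidth is simply bus width times the per-pin data rate. A minimal sketch (the 5 Gb/s effective HBM rate here is inferred from the rumored numbers, not a confirmed spec):

```python
# Peak memory bandwidth = bus width (bits) x data rate (Gb/s per pin) / 8 bits per byte.
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

# Hawaii/Grenada: 512-bit GDDR5 at 5 Gb/s effective -> 320 GB/s
print(peak_bandwidth_gbs(512, 5.0))   # 320.0
# Rumored Fiji: 1024-bit HBM; 640 GB/s implies 5 Gb/s effective per pin
print(peak_bandwidth_gbs(1024, 5.0))  # 640.0
```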

*View at TechPowerUp Main Site*


----------



## THE_EGG (Feb 9, 2015)

640GB/s?!?!!!!!?! wowsers. Looking forward to seeing how they go about pricing these.


----------



## RichF (Feb 9, 2015)

28nm?

"The only truly new silicon with the R9 300 series, is 'Fiji.'"

Ok.. Nevermind. At least there will be something that's actually new.

"4 GB could be the standard memory amount."

Awful. 6 GB should be the standard for high-end cards going forward.


----------



## TheGuruStud (Feb 9, 2015)

THE_EGG said:


> 640GB/s?!?!!!!!?! wowsers. Looking forward to see how they go about pricing these.



550 for the 390X, I'm sure, unless it is really fast. Nvidia should cut prices immediately and force AMD to come off 600.


----------



## Sony Xperia S (Feb 9, 2015)

THE_EGG said:


> 640GB/s?!?!!!!!?! wowsers. Looking forward to see how they go about pricing these.





TheGuruStud said:


> 550 for the 390x, I'm sure



Sure, at least 500 euros.  But it will be worth it... somewhat. 

I wonder what the 16nm and 14nm GPUs will bring.


----------



## megamanxtreme (Feb 9, 2015)

380 and 380X will be R9 290/290X?
Rebrand for the lose?
For my FX-6300 I wasn't looking for anything more than performance of GTX 970, but everything is a mixed bag.


----------



## jateruy (Feb 9, 2015)

Hell, that die size... Seriously though, when will AMD start to consider some efficiency improvements? Guess we'll have an OOB LN2 GPU cooler by the time we reach R9 5 or 6?


----------



## TheGuruStud (Feb 9, 2015)

RichF said:


> 28nm?
> 
> "The only truly new silicon with the R9 300 series, is 'Fiji.'"
> 
> ...



1st gen stacked ram is only 4GB.


----------



## Sony Xperia S (Feb 9, 2015)

jateruy said:


> Seriously though, when will AMD start to consider some efficiency improvements? Guess we'll have an OOB LN2 GPU cooler by the time we reach R9 5 or 6?



Ahaha    Very funny...................


----------



## xfia (Feb 9, 2015)

4GB seems pretty good to me for the most part for high-res and Eyefinity gaming. Having more VRAM can certainly help, but how much, and at what premium? Check out what even the gimped 3.5GB 970 does: http://www.tomshardware.com/reviews/sapphire-vapor-x-r9-290x-8gb,3977.html

Yeah, it would have been nice to see a 20nm chip. It probably could have lowered the power draw anyway, but maybe HBM is more efficient and will help out. Have not seen anything about the efficiency... really just that it's up to 9x faster than GDDR5 and should be kick-ass for high-res and Eyefinity.

Edit: I have expressed before how high clocks may not be the most reliable, but if they can make refinements and deliver reliable products, then I'm all in for the race to the first 2000MHz gaming GPU.


----------



## Rahmat Sofyan (Feb 9, 2015)

Ah yes... when rumours like these hit the TPU.com newsroom, I believe they'll come true soon...


----------



## Assimilator (Feb 9, 2015)

640GB/sec, why? Unless AMD is planning to address the "4K problem" by swapping out textures all the time, I don't see any benefit to this, and lots of drawbacks (price being one of them). Considering nVIDIA's GPUs have always been able to match AMD's for performance while using much narrower bus widths (Hawaii: 512-bit, Maxwell: 256-bit), I'm not seeing any good reason, unless of course AMD's architecture is far more bandwidth-dependent than nVIDIA's.


----------



## xfia (Feb 9, 2015)

what do you mean by 4k problem? 
nvidia uses compression techniques to have a narrow bus.


----------



## HumanSmoke (Feb 9, 2015)

The really odd thing about this lineup is what AMD expects to field in the discrete mobile arena. Presently, the top part is Pitcairn-based (M290X), in its third generation of cards. The M295X's (Tonga) heat production in the iMac probably precludes its use in laptops, and Hawaii is clearly unsuitable.


Assimilator said:


> 640GB/sec, why?


That's where HBM starts. Better to have too much bandwidth than too little.


Assimilator said:


> I don't see any benefit to this, and lots of drawbacks (price being one of them).


Hey, someone has to lead the charge. Just imagine the marketing mileage from 640GB/sec. It's like those nutty theoretical fill rates, pumped up to eleven!


Assimilator said:


> Considering nVIDIA's GPUs have always been able to match AMD's for performance, while using much narrower bus widths (Hawaii: 512bit, Maxwell: 256bit), I'm not seeing any good reason, unless of course AMD's architecture is far more bandwidth-dependant than nVIDIA's.


Fiji will do double duty as a compute chip, where on-card bandwidth plays a much greater role in GPGPU. FWIW, even Nvidia are unlikely to go below 384-bit for their compute chip. The one thing that will hold Fiji back as a FirePro is the 4GB (not the bandwidth). AMD already has the W9100 with 16GB of onboard GDDR5 for a reason.


----------



## xfia (Feb 9, 2015)

Yeah, they have said nothing about the high-end mobile market, but Carrizo should hit home, getting put into reasonably priced laptops for a lot of people.


----------



## HumanSmoke (Feb 9, 2015)

xfia said:


> yeah they have said nothing about the high end mobile market but carrizo should hit home to be put into reasonably priced laptops for a lot of people.


APUs on their own mean a basic feature set for the most part, because AMD are fighting a losing battle against Intel, OEMs, and public perception. While Intel owns the "Top of the Mind" awareness in the market segment, AMD offerings aren't going to be as well specced, which then impacts the performance of the parts available. Just as the halo desktop parts help sell the cheaper models, so do the mobile desktop replacement/gaming/workstation offerings.


----------



## Breit (Feb 9, 2015)

Assimilator said:


> 640GB/sec, why? Unless AMD is planning to address the "4K problem" by swapping out textures all the time, I don't see any benefit to this, and lots of drawbacks (price being one of them). Considering nVIDIA's GPUs have always been able to match AMD's for performance, while using much narrower bus widths (Hawaii: 512bit, Maxwell: 256bit), I'm not seeing any good reason, unless of course AMD's architecture is far more bandwidth-dependant than nVIDIA's.



Enough bandwidth to the VRAM should help performance scale better to higher resolutions, and AMD always did scale better with higher resolution. The question is whether the 4GB of VRAM will hold them back. I would've liked to see 8GB standard, at least on the Fiji part.


----------



## john_ (Feb 9, 2015)

The most interesting part of the rumors is about Trinidad (if it is not a Pitcairn rebrand, and I think it will not be), and it is not in the above article. Trinidad is supposed to be used in the 360 and 360X, be more advanced than Tonga like Fiji (no, Fiji is supposed to be more advanced than Tonga, not just a doubled Tonga), and we could probably see it as soon as next month, sooner than all the others.
But anyway, rumors...

AMD Radeon 300 series speculation: 395X2 "Bermuda", 390X "Fiji" and 380X "Grenada" | VideoCardz.com


----------



## xfia (Feb 9, 2015)

HumanSmoke said:


> APUs on their own means a basic feature set for the most part because AMD are fighting a losing battle against Intel, OEMs, and public perception. While Intel owns the "Top of the Mind" awareness in the market segment, AMD offerings aren't going to be as well specced, which then impacts the performance of the parts available. Just as the halo desktop parts help sell the cheaper models, so does the mobile desktop replacement/gaming/workstation offerings.



Where are those AMD stickers on the consoles?


----------



## Sony Xperia S (Feb 9, 2015)

Assimilator said:


> 640GB/sec, why?



Because you will need it when playing at ultra-high resolutions and levels of detail.
AMD's solutions have always shown better behaviour with resolution scaling, i.e. losing less performance than nvidia's...



Assimilator said:


> Considering nVIDIA's GPUs have always been able to match AMD's for performance, while using much narrower bus widths (Hawaii: 512bit, Maxwell: 256bit)



Historically, the truth is actually the other way around. AMD offered narrow buses with state-of-the-art memory, like being first with GDDR4, GDDR5, and now HBM.

Hawaii hit the upper performance limit of GDDR5, and that's why it needed a 512-bit memory interface.


----------



## crishan (Feb 9, 2015)

Just an FYI: the owner of 3DCenter is getting heavily criticised for this news posting on their own forums.

With this news, he is mainly regurgitating stuff from the worst sites out there, WhatTheFCK-Tech and VideocardsIMakeStuffUp.

Please don't take this "news" to mean anything.


----------



## the54thvoid (Feb 9, 2015)

xfia said:


> Where are those AMD stickers on the consoles?



I would think Sony and Microsoft said a pleasant "bugger off" to any 3rd-party branding on their consoles. It actually backs up what HumanSmoke said. Think about it this way: do you think Apple want a Samsung sticker on their phones with chips produced by Samsung? Big business is very much dog eat dog, as long as dog gets paid.


----------



## xfia (Feb 9, 2015)

Yeah, very true. And if every company that had something in a product had a right to sticker space, a lot of stuff would look like it should be on a race track.


----------



## Ferrum Master (Feb 9, 2015)

Grenada should have memory compression just as Tonga has: the GCN upgrade.

I am really curious about the FirePro side, just as Boney said: what will they actually do when they need more VRAM?

We totally lack information, really.


----------



## RejZoR (Feb 9, 2015)

640GB/s is just raw bandwidth, without framebuffer compression, meaning effective bandwidth will be vastly higher.

And the R9-390X will exist after all. Though it kinda sucks that they rebrand last-gen high end into current-gen mid end. While it is the most economic solution, it's bad for customers. Unless they plan to price them really well; in that case even a rebranded R9-290X might be interesting. Especially if they'll be fully DX12 compatible (they should be, AFAIK).


----------



## Breit (Feb 9, 2015)

RejZoR said:


> And R9-390X will exist after all. Though it kinda sucks that they rebrand last gen high end into current gen mid end.



It is common practice; NVIDIA does this as well...


----------



## RejZoR (Feb 9, 2015)

Breit said:


> Enough bandwidth to the VRAM should help in scaling performance better to higher resolutions and AMD did always scale better with higher resolution. Question is if the 4GB VRAM will hold them back. I would've liked to see 8GB standard at least on the Fiji part.



I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) stacked HBM memory, and everyone is questioning "would 4GB be enough". Confused much...


----------



## mroofie (Feb 9, 2015)

RejZoR said:


> I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) stacked HBM memory, and everyone is questioning "would 4GB be enough". Confused much...


Well, double standards exist within the AMD clan as well, so I'm not sure what you're trying to imply here? :0

"Remember 2 wrongs don't make a right" GTA IV WKTT 



Breit said:


> Those 4GB NVIDIA cards came out a year ago. At that time 4GB might have been enough. But now we are approaching the 4K era and as such we need more VRAM. This is logical as the cards will be faster too, enabling gaming at 4K.


obviously


----------



## Breit (Feb 9, 2015)

RejZoR said:


> I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) stacked HBM memory, and everyone is questioning "would 4GB be enough". Confused much...



Those 4GB NVIDIA cards came out a year ago. At that time 4GB might have been enough. But now we are approaching the 4K era and as such we need more VRAM. This is logical as the cards will be faster too, enabling gaming at 4K.


----------



## Sony Xperia S (Feb 9, 2015)

Breit said:


> Those 4GB NVIDIA cards came out a year ago. At that time 4GB might have been enough. But now we are approaching the 4K era and as such we need more VRAM. This is logical as the cards will be faster too, enabling gaming at 4K.



I think that either they will find a way to use those 4 GB, or this will simply be an artificial marketing bottleneck in order to sell next-generation cards better (perhaps a similar "solution" to what nvidia did with the crippled 970  )

Don't forget that these cards will have a short lifespan, and we are expecting the die-shrunk GPUs soon.


----------



## RejZoR (Feb 9, 2015)

The GTX 970 and 980 did not come out a year ago...


----------



## xfia (Feb 9, 2015)

There is nothing wrong with gaming at 4K with the GPUs that are out now. It's just some derpy, unoptimized, CPU-bound games that make people all WTF.

Shrinking also means less space to work with, and no guaranteed performance gains. Any GPU that is fully DX12 will be some good stuff for a pretty long time.


----------



## the54thvoid (Feb 9, 2015)

RejZoR said:


> I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) stacked HBM memory, and everyone is questioning "would 4GB be enough". Confused much...



Don't try comparing the arguments over the 970's memory shenanigans to AMD's next uber chip (Fiji). I'm not clued up on it, but many say HBM only caters for a 4GB memory allowance (for now?...). The 970 is the cheaper Maxwell performance part, whereas the 390X will be the single-GPU top tier.

And yes, those that bought cards with 4GB (or not, as the 970 is) would have figured that into their choice. If the 390X is to be AMD's next-gen top-tier card, you would hope it would have more, as AIBs have already seen fit to release an 8GB 290X, albeit with small rewards at 4K.

IMO, I don't know if we need >4GB for gaming purposes except on poorly coded things (look at the recent CoD for bad memory hogging, or Titanfall IIRC). But if we do need >4GB in the next year or so, I'm pretty sure there will be developments to allow higher memory usage on the AMD chips.

So, to be plain: 4GB won't be an immediate worry, and I'm sure it will be addressed when needed.


----------



## RejZoR (Feb 9, 2015)

Seeing how I can play stuff with everything maxed out on a 3GB HD7950, I feel like 4GB is plenty for the next 2-3 years (I know not all play at "only" 1080p, but it's kinda the norm these days). Unless someone makes a drastic jump in quality (and compute demand). Unreal Engine 4 or Frostbite 4, maybe...


----------



## GreiverBlade (Feb 9, 2015)

Well, it's not so bad if the 380/380X are 290/290X rebrands with refinements... maybe my Kryografics Hawaii will be compatible with it  

I'll wait for the 390, though.


----------



## RejZoR (Feb 9, 2015)

Yeah, the R9-280X was indeed slightly faster than HD7970, despite being seemingly the same GPU.


----------



## xfia (Feb 9, 2015)

I like how you said "seemingly" there... there is a bit more going on than rebranding.


----------



## Breit (Feb 9, 2015)

RejZoR said:


> Seeing how I can play stuff with everything maxed out on a 3GB HD7950, I feel like 4GB is plenty for next 2-3 years (I know not all play at "only" 1080p but it's kinda norm these days). Unless if someone will make a drastic jump in quality (and compute demand). Unreal Engine 4 or Frostbite 4 maybe...



Maybe game developers know that the majority of cards only have <=4GB VRAM and tune their games around that? With more VRAM to work with, they may implement features that require more memory, and hence can't be played with "everything enabled ultra uber whatever" settings...


----------



## xfia (Feb 9, 2015)

I could see how it would be possible for shrinking GPUs to show many of the same problems. They are loving smaller lithography for mobile devices, but perhaps there are bigger hurdles on the high-end GPU side of things.


----------



## Ferrum Master (Feb 9, 2015)

RejZoR said:


> Yeah, the R9-280X was indeed slightly faster than HD7970, despite being seemingly the same GPU.



Flash an R9 280X BIOS and enjoy the same... I did it with my card... I put a GB 280X BIOS on it, as the PCB and VRM are completely the same; just different RAM timings, clocking table, and power limit.


----------



## Serpent of Darkness (Feb 9, 2015)

megamanxtreme said:


> 380 and 380X will be R9 290/290X?
> Rebrand for the lose?
> For my FX-6300 I wasn't looking for anything more than performance of GTX 970, but everything is a mixed bag.



Both brands do it.  The AMD 7970 was a part of the AMD 7000 generation; the AMD 7990 was the king of that generation.  The R9-280 is the rebrand of a 7970 with better frame-time variance performance and tweaks, and the R9-380 and 380X are the rebrand of the R9-290 and 290X.  The R9-390 and 390X are possibly getting a nice TDP drop: almost twice the stream processors at the TDP of a 290X is what the rumor mills are hinting at for the R9-390, and the R9-380 and 380X will have a TDP below the current generation.  The R9-390, at idle, is hinted to have a TDP around 300 watts.  So if the R9-290X has a TDP around 300 watts at idle, the R9-380 and 380X will have a TDP less than the R9-290.  Basically, the new 380s will consume less power and have the minor improvements that have been missing in the 290.  The R9-390 has the new GCN and all the premium stuff from the AMD camp.

For Nvidia, the GTX 760 is a GTX 680 rebrand, and the GTX 960 is a GTX 780 rebrand.  Since the economy is the way it is, effectively selling off leftover stock is not what it used to be for both sides.  So NVidia and AMD play the game of rebranding old chips into the next generation of graphics cards, to get people to buy them instead of a high-end card due to budget constraints.  A chip not sold, or a graphics card not sold, is money lost in the long run.




RejZoR said:


> I don't get the double standards imposed here. For NVIDIA, everyone is saying you don't even need 4GB, 4GB is plenty, who cares if it only has 3.5GB of high-speed memory, etc. And here we have AMD cards with exactly the same amount of (uncrippled) stacked HBM memory, and everyone is questioning "would 4GB be enough". Confused much...



I think it's been stigmatized into the mind of every high-end gaming enthusiast who knows a thing or two about AMD and NVidia graphics cards that NVidia is king at 1080p, and anything above that, AMD dominates.  Dominates barely...  This could explain the expectations set by the consumers, and their response: "The 390 only has a 4GB framebuffer, isn't that a little lacking?"  In addition, if you look at the past few generations of graphics cards, NVidia comes out with 2GB, AMD comes out with 3GB in the same generation.  This has always been the trend: AMD will always have 1 GB more memory than the same-tier, same-generation NVidia card.  On the Nvidia side, they try to make up for it by having a higher memory bandwidth.  This is their way of making up for the loss in performance.  Personally, I don't think 5GB or more is needed, and if consumers really need the extra VRAM, they could have just gotten a Sapphire R9-290X with the 8GB framebuffer, or a Titan Black.  Now, speaking of the future, there's no doubt in my mind that some non-reference variant of the R9-390X from one of the vendors will have 8 GB or more in the not too distant future.  In addition, there's really no need to go that high unless you are going surround or 4K.  I bet the number of consumers at that level, or in a position to own a 4K TV or surround setup, is anywhere from 1 to 10%.  The remaining consumers will probably still hover around the 1080p resolution for the next 2 to 5 years.  The only games that, in theory, can go above 4GB of VRAM would be Star Citizen, and Skyrim with a heavy set of mods, on a 1080p setup.  Crysis 3, at full-blast settings, could peak around 3.6 GB of VRAM.  Maybe that Lord of the Rings Mordor game on high textures can go past it at 1080p resolution.

edit: it's late, I edited my stuff, I don't give a shet if you are a grammar Nazi...


----------



## Ferrum Master (Feb 9, 2015)

Serpent of Darkness said:


> R9-280 is the rebrand of a 7970



7950



xfia said:


> bigger hurdles on the high-end GPU side of things.



Apples and oranges... The GPU die also cools off through the PCB copper layers, as it is soldered directly to them; it isn't the first very hot card in history, and a GPU's die package is HUGE in itself... And we can't really compare the problems Intel has with TSMC's problems...


----------



## W1zzard (Feb 9, 2015)

Ferrum Master said:


> Grenada should have memory compression just as Tonga has: the GCN upgrade.


no, it's just a rebrand. no new memory compression, not gcn 1.3.


----------



## Breit (Feb 9, 2015)

Serpent of Darkness said:


> GTX 960 is a GTX 780 rebrand


This is just wrong. The GTX780 was based on a cut-down GK110 core (Kepler architecture) and the GTX960 is based on the fully-enabled GM206 core (Maxwell architecture). Don't confuse these two...


----------



## Ferrum Master (Feb 9, 2015)

W1zzard said:


> no, it's just a rebrand. no new memory compression, not gcn 1.3.



Ah crap... then I hope it won't happen like with the 7970, where the newer XTL die revision actually clocked worse; maybe a coincidence. And the stock cooler... what will it be again... the same hairdryer...


----------



## Aquinus (Feb 9, 2015)

btarunr said:


> AMD could save itself the embarrassment of a loud reference-design cooler by throwing the chip up for quiet custom-design cooling solutions from AIB (add-in board) partners from day one.


WTF is this? Hardly the attitude appropriate for a news article... This quote isn't news, it's speculation. I've found reference coolers to be some of the best coolers, sound aside. AMD needs to save itself some embarrassment by not screwing up and making sure that these GPUs don't under-perform. If the 390X doesn't compete, AMD has nothing with this release from what I gather. I seriously doubt the type of cooler on release will determine if the GPU is a contender or not.


----------



## the54thvoid (Feb 9, 2015)

Serpent of Darkness said:


> For Nvidia, GTX 760 is a GTX 680 rebrand



Yeah, that's wrong too.  760 was the 670 and the 770 was the 680.  Both Kepler.

As Breit rightly says, GM is not GK.


----------



## Ferrum Master (Feb 9, 2015)

Aquinus said:


> WTF is this? Hardly the attitude appropriate for a news article... This quote isn't news, it's speculation. I've found reference coolers to be some of the best coolers, sound aside. AMD needs to save itself some embarrassment by not screwing up and making sure that these GPUs don't under-perform. If the 390X doesn't compete, AMD has nothing with this release from what I gather. I seriously doubt the type of cooler on release will determine if the GPU is a contender or not.



Actually, you both are exaggerating... the 290X stock blower is a disaster and that's a fact; it's poorly made to begin with. But yes, you are right, a news article should not contain such bashing; that's our task to do .

The cooler actually can do wonders... clocks really matter (they have since the turbo-button days of PC cases), and the competition is so tight that a few hundred MHz can carve out the lead for sure, especially on minimum FPS.


----------



## refillable (Feb 9, 2015)

Wow, I was disappointed... I thought the 380X was Fiji and Bermuda was the 390X. Well, I guess to put this in perspective, the 380X will shoot straight at the 970, and the 390X will compete with the future Titan II/980 Ti.

What could be disappointing is the efficiency. With these chips you will only be getting 285-like efficiency, which is nowhere near Maxwell. Heat should be maintained pretty well, IMO; AIB coolers with double fans are going to keep the temperatures down.


----------



## RejZoR (Feb 9, 2015)

Who cares about efficiency, really? If they can still manage noise levels, I don't really care. Efficiency mattered for miners, but you don't play games 24/7, or do you?


----------



## Aquinus (Feb 9, 2015)

RejZoR said:


> Who cares about efficiency really. If they can still manage noise levels, I don't really care. Efficiency mattered for miners, but you don't play games 24/7 or do you?


I care that the 970 has a multi-monitor idle consumption of <5 watts while the 290 is closer to 55 watts. So, yes. People like me, who aren't gaming most of the time (but do still game regularly) but are using multiple monitors for productivity reasons, do care a little bit, as power usage adds up over time. Is it a huge factor? No. Is it one worth considering? Sure.

Also, higher efficiency would mean lower temps or more overhead for higher clocks which is never a bad thing.


----------



## RejZoR (Feb 9, 2015)

Your GTX 970 is NEVER using just 5W when idling on a desktop...


----------



## Aquinus (Feb 9, 2015)

RejZoR said:


> Your GTX 970 is NEVER using just 5W when idling on a desktop...


My bad, it's 10 watts. Still 40 lower... Also, I find it funny when you say "your 970", as I don't own one; I'm pretty sure my specs still say I'm rocking two 6870s, which also suck at multi-monitor power consumption. 

All I'm saying is that it's a selling point and nothing bad comes out of improving efficiency.


----------



## RejZoR (Feb 9, 2015)

Because a multi-monitor setup is somehow what the majority of users have? And if you can afford a GTX 970 and 2 monitors, surely those few watts of difference make zero real-world difference.

Now show me a single-monitor difference? Something the large majority of users have...


----------



## Aquinus (Feb 9, 2015)

RejZoR said:


> Because multi-monitor setup is somehow the majority of what users have? And if you can afford GTX 970 and 2 monitors, surely that few watts of difference makes zero real world difference.
> 
> Now show me a single monitor difference? Something large majority of users have...


Why would I care what other users want when I'm looking at a GPU for myself? I think you misread my post...


Aquinus said:


> *I care* that the 970 has a multi-monitor idle consumption of <5-watts and the 290 is closer to 55-watts.



I never said most people have multiple monitors... but *I do*. You already know the answer to your own question as well; I don't appreciate the rhetorical questions. It's not a lot of money, but it's easily a 12-pack of beer every month or two that I wouldn't otherwise have. While the GPU alone might not make a difference, inefficiencies add up, which contributes to the amount of heat in your machine. If you want a quiet machine, you're not going to get it by eating power for breakfast. The simple fact is that it doesn't simply come down to the cost of electricity. An HTPC with a 200-watt idle like my tower wouldn't be a very quiet HTPC, now would it?
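For the curious, the electricity math behind that beer budget is easy to sketch. The roughly 40 W delta comes from the posts above; the daily hours and the $0.12/kWh rate are purely illustrative assumptions:

```python
# Annual cost of an idle-power difference between two cards.
# The 8 h/day of idle time and $0.12/kWh rate are illustrative assumptions.
delta_watts = 40           # multi-monitor idle gap discussed above
hours_per_day = 8          # assumed desktop time at multi-monitor clocks
rate_per_kwh = 0.12        # assumed electricity price in $/kWh

kwh_per_year = delta_watts * hours_per_day * 365 / 1000
cost_per_year = kwh_per_year * rate_per_kwh
print(round(kwh_per_year, 1), round(cost_per_year, 2))
```

Small per month, but nonzero, which is the point being made about it adding up.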


----------



## THE_EGG (Feb 9, 2015)

RejZoR said:


> Who cares about efficiency really. If they can still manage noise levels, I don't really care. Efficiency mattered for miners, but you don't play games 24/7 or do you?


I care about efficiency as long as it doesn't come at an ungodly increase in price for the end product. The reason I care is that the fans on an efficient graphics card can run slower (and therefore quieter), because it produces less heat than a hungrier card that has to spin its fan(s) at a higher (noisy) RPM to achieve the same or similar temperatures. At the high-end spectrum of graphics cards, it seems liquid cooling could become the norm for managing temperatures at a reasonable noise level (e.g. the 295X2). That being said, if I were into custom liquid cooling, then I probably would not care all that much about efficiency, as it would be pretty easy to add more rads.


----------



## 64K (Feb 9, 2015)

Serpent of Darkness said:


> For Nvidia, GTX 760 is a GTX 680 rebrand.  GTX 960 is a GTX 780 rebrand.



The GTX 770 was a refresh of the GTX 680, and the GTX 960 is a totally different architecture from the GTX 780. When the GM210 drops, we will see that Nvidia intended the 960/970/980 to be mid-range GPUs, even though the 980 is the fastest GPU so far in the Maxwell lineup.

I've been reading some rumors that say the R9 380X will be more power efficient; with the same number of cores as the R9 290X and more room to push the clocks higher due to improved efficiency, it should take the title from Nvidia as the fastest GPU around.


----------



## jabbadap (Feb 9, 2015)

Hmh, 640GB/s? I believe it should be 512GB/s. Hynix does not have anything other than 1GB-density HBM chips with a speed of 128GB/s available. I doubt AMD will overclock them; heck, there's not even a need for that.
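That correction lines up with first-generation HBM as Hynix described it (a 1024-bit interface per stack at 1 Gb/s per pin), assuming a four-stack design:

```python
# First-gen HBM: each stack exposes a 1024-bit interface at 1 Gb/s per pin.
BITS_PER_STACK = 1024
GBPS_PER_PIN = 1.0

per_stack_gbs = BITS_PER_STACK * GBPS_PER_PIN / 8   # GB/s per stack
total_gbs = 4 * per_stack_gbs                       # assumed four 1 GB stacks
print(per_stack_gbs, total_gbs)                     # 128.0 512.0
```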


----------



## alwayssts (Feb 9, 2015)

xfia said:


> It would have been nice to see a 20nm chip. probably could have lowered the power draw anyway but maybe hbm is more efficient and will help out. have not seen anything about the efficiency...



Yeah, but understandable given cost/yields/etc. 20nm would have likely saved 20% on the GPU side. HBM should save about 15W on the memory side (vs. the old product; granted, this is faster).



Assimilator said:


> 640GB/sec, why? Unless AMD is planning to address the "4K problem" by swapping out textures all the time, I don't see any benefit to this, and lots of drawbacks (price being one of them). Considering nVIDIA's GPUs have always been able to match AMD's for performance, while using much narrower bus widths (Hawaii: 512bit, Maxwell: 256bit), I'm not seeing any good reason, unless of course AMD's architecture is far more bandwidth-dependant than nVIDIA's.



Okay, that's a loaded question. The short-ish answer is yes, AMD's arch is more memory dependent because of (typically) greater available floating point/shader resources (AMD's units do double duty for special functions as well, whereas Nvidia uses fewer FPUs and smaller special function units at a ratio that is often close to fully utilized in many scenarios), plus the fact that around 1/3 of what would be Nvidia's similar required bandwidth is derived from a mixture of greater on-die cache and whatever benefits their newer compression allows. If I had to guess, the split on that is something like 75% cache improvements, 25% compression improvements. IOW, the compression improvements help around 8% or slightly more, just like Tonga for AMD.



HumanSmoke said:


> The really odd thing about this lineup, is what AMD expects to field in the discrete mobile arena. Presently, the top part is Pitcairn based (M290X) in its third generation of cards. The M295X's (Tonga) heat production in the iMac probably preclude its use in laptops, and Hawaii is clearly unsuitable.
> 
> That's where HBM starts. Better too have too much bandwidth than too little.
> 
> ...



First, never say never...IIRC nvidia sold big-ass Fermi (granted cut wayyy down and clocked in the basement) in laptops.

Right about HBM... plus, if they shrink/reconfigure the core design on 14/16 nm for 50% more resources, the memory controller could probably be reasonably recycled... it's also possible they could cut cache or what-not because of that off-die bandwidth. Not saying they did/will... but who knows? It's possible it's there for more than being ahead of its time (but a necessary evil) on bandwidth and behind its time on density configuration. Even if they didn't change anything, it should be good for an extra chunk of performance (double the required bandwidth typically gives around a 16% boost... this in essence could give something like 8% over what one might expect given the other specs and typical core usage).

Either way you look at it, this thing *has* to compete with a GM200 21-SMM part. Say that can do 1400 MHz best-case; that essentially means this has to do 1200 to compete. The bandwidth required for quite literally 10 TFLOPS is... well... a lot. You'd be talking about needing an 8 GHz/512-bit controller, which wouldn't exactly be small or power efficient (if even possible within die size limits). As odd as it sounds, twice the controllers at (what apparently amounts to) 5 GHz is likely both fewer transistors and more efficient within the GPU logic.
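As a rough sketch of the bus math in that last paragraph (the per-pin data rates are the poster's hypotheticals, not announced specs):

```python
# Peak bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8 bits-per-byte.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

gddr5_option = bandwidth_gbs(512, 8)   # hypothetical 8 Gbps GDDR5 on a 512-bit bus
hbm_option = bandwidth_gbs(1024, 5)    # "twice the controllers" HBM at ~5 Gbps effective
print(gddr5_option, hbm_option)        # 512.0 vs 640.0 GB/s
```

Either route lands in the same bandwidth ballpark; the argument is about which costs fewer transistors and watts, not about raw GB/s.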



the54thvoid said:


> Don't try comparing the arguments over a 970's memory shenanigans to AMD's next uber chip (Fiji).  I'm not clued up on it but many say HBM only caters for 4GB memory allowance (for now?...).  The 970 is the cheaper Maxwell performance part whereas 390X will be the single gpu top tier.
> 
> And yes, those that bought cards with 4GB (or not as the 970 is) would have figured that into their choice.  If 390X is to be AMD's next gen top tier card, you would hope it would have more as they have already seen fit (AIB's) to release a 8GB 290X with albeit small rewards at 4k.
> 
> ...



Correct. HBM is currently 1 GB per stack. The implementation, unlike a GDDR5 setup, is limited to four chips. That means 4 GB. 2nd gen is due at the end of the year. Does that mean a refresh before 14/16 nm? Conceivably... but who knows how fast AMD is transitioning to the smaller process. I continue to disagree about 4 GB being enough. If one were to argue things should be properly coded for 4K/8 GB (or possibly 6 GB in many instances), we could have a conversation. That said, it's not going to stop HBM memory density from increasing, or badly optimized console ports targeted toward that shared pool of memory at a low resolution from being a scaling factor in some regards. I still stand by GM200/R9 390X for the most part being 1440p-targeted chips (think in terms of a 900p 30 fps PS4 game scaled to 1440p/60)... just like GM204 is mostly a 1080p-targeted chip. In those respects, it can be argued 4 GB (/6 GB in some cases) is sufficient.


----------



## GhostRyder (Feb 9, 2015)

Well now we seem to be getting a bigger picture of what's going on, so it seems the R9 380X is going to be an upgraded R9 290X, similar to how the R9 285 was an upgraded R9 280 (though probably without reducing the memory this time). If that's the case, then the power savings from the improved design are going to be put towards improved overclocks to knock the performance up a bit. The R9 390X sounds cool, but I do agree the 4 GB, even with that insane bandwidth, is going to be the part I worry about on the gaming front, especially since that is probably going to be the first chip (depending on when Nvidia releases their Titan II) that will handle 4K on a single card (not perfectly of course, but decently enough). Hopefully by that point we might see some way to do 8 GB versions of the cards for the hardcore users out there who want the extra VRAM, though that is skeptical based on the HBM limits currently.

Cannot wait to see more as this develops!


----------



## jabbadap (Feb 9, 2015)

GhostRyder said:


> Well now we seem to be getting a bigger picture of whats going on, so it seems the R9 380X is going to be an upgraded R9 290X similar to how the R9 285 was an upgraded R9 280 (though probably without reducing the memory this time).  If that's the case then the power differences in the improved design is going to be put towards improved overclocks to knock the performance up a bit.



Not really comparable. The R9 280 is GCN 1.0 and the R9 285 is GCN 1.2, two generations more advanced. The R9 290X is GCN 1.1 and the R9 380X seems to still be GCN 1.1, a refined, optimized Hawaii that is. More like the HD 7970 GHz's Tahiti to the R9 280X's Tahiti XTL.



GhostRyder said:


> The R9 390X sounds cool but I do agree the 4gb even with that insane bandwidth is going to be the part I worry about on the gaming front especially since that is probably going to be the first (Depending on when Nvidia release their Titan II) chip that will handle 4K on a single card (Not perfectly of course but decently enough).  Hopefully by that point I we might see some way to do 8gb versions of the cards for the hardcore users out there who want the extra VRAM though that is skeptical based on the HBM limits currently.
> 
> Cannot wait to see more of the develop!


There are no HBM packages bigger than 1 GB out yet. You would need higher-density 2 GB HBM packages from Hynix before getting an 8 GB card. With the memory available now, it would need a 2048-bit memory interface on the GPU (I don't think you can split memory bandwidth between two HBM packages, I could be wrong though).

In short, the R9 380/X, R9 370/X and R7 260/X (I really hope this isn't rebranded Pitcairn, as 3DCenter rumored it to be) are kind of meh, been there done that. The really interesting parts will be the R9 390 series.


----------



## GhostRyder (Feb 9, 2015)

jabbadap said:


> Not really comparable. r9-280 is gcn1.0 and r9-285 is two generations advanced gcn1.2. R9-290x is gcn1.1 and r9-380x seems to be still gcn1.1, redefined optimized hawaii that its. More like a hd7970GHz tahiti to r9-280x tahiti xtl.


Yes, but my reference was to the fact that it's supposed to get the upgraded features currently shown only in Tonga (for instance the compression method), not so much the jump in generations.



jabbadap said:


> There's no bigger than 1GB hbm memories out yet. 8GB memory would need higher density hbm 2GB memory packages from hynix, before getting 8GB card. Memory available now it would need 2048bit memory interface from the gpu(I don't think you can split memory bandwidth between 2 hbm memory package, I could be wrong though).
> 
> In short r9-380/x, r9-370/x and r7-260/x(I really hope this isn't rebranded pitcairn as 3dcenter rumored it to be) is kind of meh been there done that. Really interesting parts will be r9-390 -series.


That's why I said it's "skeptical based on the HBM limits currently". It's only going to be revealed, once we get the full rundown on the stacked memory, whether doubling up is going to be possible at all, or whether we will be waiting for bigger chips.


----------



## MxPhenom 216 (Feb 9, 2015)

I wonder what the pricing will be like on the 390/390x.


----------



## 64K (Feb 9, 2015)

MxPhenom 216 said:


> I wonder what the pricing will be like on the 390/390x.



Just speculation right now, but the R9 280X launched for $300, the R9 290 launched for $400 and the R9 290X launched for $550. I would guess that the 300 series will launch within $50 of the 200 series. I expect the R9 380X will outperform the GTX 980 if the rumors are true. That would be something, if AMD launched that GPU for $300-$350 in a few months: $200-$250 cheaper than the 980. Ouch.


----------



## Sony Xperia S (Feb 9, 2015)

Yes, AMD cards' severe weakness is their tremendous idle power consumption. It's ridiculously high. They are amateurs in this regard compared to Nvidia's engineers.

Also, yes, stacked memory needs some time to ramp up but I guess they needed to start at some point and it is now.


----------



## RejZoR (Feb 9, 2015)

40 W is "ridiculously high" for GPUs that came out 2.5 years ago, and you're all comparing them to a brand new GPU from NVIDIA released 4 months ago. Ooooook...


----------



## 64K (Feb 9, 2015)

Sony Xperia S said:


> Yes, AMD cards' severe weakness is their tremendous idling power consumption> It's so high to the level of being ridiculously high. They are amateurs in this regard compared to nvidia engineers.
> 
> Also, yes, stacked memory needs some time to ramp up but I guess they needed to start at some point and it is now.



Idle power draw for the cards

R9 290X Reference Card 17 watts
R9 290X Lightning 22 watts

GTX 980 Reference 8 watts
GTX 980 Gaming 14 watts

If you leave your computer on 24 hours a day, every day, idling, and you pay the national average per kWh (12 cents), then the reference R9 290X will add 78 cents a month to your power bill over the reference GTX 980. For the factory OC cards, the R9 290X Lightning will add 70 cents a month over the GTX 980 Gaming. It's just not much at all, and if that amount matters to you, or you pay a lot more for electricity, then I would say turn your rig off when not in use.
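For anyone who wants to check the arithmetic, a minimal sketch (assuming 24/7 idle and the 12 cents/kWh national average used above):

```python
# Monthly cost of a constant power draw at a given electricity rate.
def monthly_cost_usd(watts, rate_per_kwh=0.12, hours=24 * 30):
    return watts * hours / 1000 * rate_per_kwh

# Extra idle draw of the 290X cards relative to their GTX 980 counterparts.
delta_reference = monthly_cost_usd(17 - 8)    # reference 290X vs reference 980
delta_factory_oc = monthly_cost_usd(22 - 14)  # Lightning vs 980 Gaming
print(round(delta_reference, 2), round(delta_factory_oc, 2))  # 0.78 0.69
```

So the gap works out to well under a dollar a month in either pairing.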


----------



## Rahmat Sofyan (Feb 9, 2015)

RejZoR said:


> 40W is ridiculously high for GPU's that came out 2,5 years ago and you're all comparing it to a brand new GPU from NVIDIA released 4 months ago. Ooooook...



Ouchhhh, bullseye 

Good point, bro. Just wait for the R 300 series and then we can talk and compare it to the GTX 900 series, but if AMD is too late, it'll be another story..


----------



## ShurikN (Feb 9, 2015)

64K said:


> Idle power draw for the cards
> 
> R9 290X Reference Card 17 watts
> R9 290X Lightning 22 watts
> ...


I can probably find more money than that on the pavement... daily 
Idling is not an issue. People are grasping at straws...


----------



## Ferrum Master (Feb 9, 2015)

IMHO AMD screwed up on one thing... Bermuda should have been a triangle, a triple head


----------



## D1RTYD1Z619 (Feb 9, 2015)

They need to release them already. IM VIDEO CARDLESS.


----------



## xorbe (Feb 9, 2015)

the54thvoid said:


> Yeah, that's wrong too.  760 was the 670



That's wrong too ... 670 had 1344 cores, 760 had 1152, and so on.


----------



## ZoneDymo (Feb 9, 2015)

Kinda disappointed with this, just some higher clocks? That's not going to put much of a dent in the landscape. I want actual new cards with new tech that are actually more power efficient, faster, and fully 4K capable

#4Kapable


----------



## Casecutter (Feb 9, 2015)

64K said:


> If leave your computer on 24 hours a day every day idling...


Wouldn't that be during such time as you'd be "sleeping", and wouldn't AMD ZeroCore be in play?
Just saying.


----------



## arbiter (Feb 9, 2015)

the54thvoid said:


> Yeah, that's wrong too.  760 was the 670 and the 770 was the 680.  Both Kepler.
> 
> As Breit rightly says, GM is not GK.



At least with the 680 to 770, Nvidia did clock bumps and made the 770 faster, unlike AMD, where the 7970 to 280X had its clocks lowered.



Ferrum Master said:


> Actually you both are exaggerating... 290x stock blower is a disaster and that's a fact, it's poorly made to begin with. But yes you are right, news article should not contain such bashing already, that's our task to do .



News articles can bash all they want, pointing at the past where a company completely screwed up on a product, and yes, that stock blower from a 6000 series card was a complete screw-up. Pretty bad that you lose 20% performance after 5 minutes of gaming.



refillable said:


> Wow, I was disappointed... I thought 380X is Fiji and Bermuda is 390X. Well I guess to put this in perspective is 380X will shoot straight at 970. 390X will compete with the future Titan II/980 Ti.
> 
> What could be disappointing is the efficiency. With these chips you will only be getting 285 like efficiency, which is no where near Maxwell. Heat should be maintained pretty well IMO. AIB coolers with double fans are going to keep the temperatures down.



I am laughing up a storm atm. Remember all the AMD fans jumping on rumors and thinking the 380X was gonna be a 4096-core GCN monster, yet it's not even close to what they were expecting.


----------



## the54thvoid (Feb 9, 2015)

xorbe said:


> That's wrong too ... 670 had 1344 cores, 760 had 1152, and so on.



Pedant.

You know fine well what I meant.


----------



## HumanSmoke (Feb 9, 2015)

RejZoR said:


> Who cares about efficiency really.


You should if you value the viability of the company.
Efficiency at the high end really isn't that pressing a concern - Nvidia built markets share fielding the GT200 and GF100 - although might be a consideration in non-gaming scenarios.
Where you should care about efficiency is that the architecture scales efficiently with the smaller GPUs. As I said earlier, AMD aren't competitive in discrete mobile, and that also reflects on low/mid range OEM builds, where the vast majority of discrete sales happen.
You can say that efficiency doesn't matter (to you), but it just means one more strike against sales and revenue, which means less R&D funding, which means that AMD may not be able to field a top-to-bottom GPU refresh(!). So many things don't seem to matter with AMD - GPU efficiency, enthusiast desktop, the x86 server market....sooner or later the sum total of these "doesn't matter" must indeed matter.


----------



## 64K (Feb 9, 2015)

Casecutter said:


> Wouldn't such time you'd be "Sleeping" and wouldn't AMD ZeroCore be in play.
> Just saying.



Yes, that would further cut back on the watts being used while idling. For simplicity's sake I used the idle wattage measurements from the cards reviewed on this site.


----------



## TheHunter (Feb 9, 2015)

As long as they release Fiji XT, aka the 390/390X, soon. I don't mind if they rebrand the Cayman chip for that matter; Nvidia has been doing it a lot too, so it's not really important.

So you all argue about power consumption with 800 W+ PSUs? Really, chill... power consumption is overrated.


Also, there is no TITAN-X, or whatever you all like about Titan. GM200 won't have DP, so no Titan variant, just GeForce (google the Nvidia Japan conference, 30-12-2014)..
Talk about Wccf spreading this false TITAN-X hype to the max..


----------



## Lionheart (Feb 9, 2015)

arbiter said:


> Least with 680 to 770, Nvidia did clock bumps and made 770 faster, unlike AMD where 7970 to 280x had its clocks lowered.
> 
> news articles can bash all they want pointing at past where a company complete screwed up on a product and yes that stock blocker from a 6000 series card was a complete screw up, Pretty bad that you lose 20% performance after 5min of gaming
> 
> I am *laughing* up a storm atm, remember all the AMD fans jumping on rumors and thinking 380x was gonna be a 4096 gcn monster yet its not even close to what they were expecting.




HD 7970: 925 MHz GPU / 1375 MHz memory || R9 280X: 1000 MHz GPU / 1500 MHz memory. You were saying? But I'm pretty sure you were referring to the HD 7970 GHz Edition. Still, the 280X traded blows well with the GTX 770 while being slightly cheaper, depending on the country you lived in.

Agreed, those reference coolers were awful, just like the GTX 480's cooler.

I don't see the big deal. If the 380X was going to be the 4096-core monster, I'm pretty sure the 390X would have been the dual GPU config. Just different labels.


----------



## Slizzo (Feb 9, 2015)

xfia said:


> View attachment 62540
> 
> I could see how it would be possible for shrinking gpu's to show many of the same problems.  they are loving smaller lith for for mobile devices but perhaps there is bigger hurtles on the high end gpu side of things.



I find it funny people simply looked over this. For good reason too.

You're comparing apples to oranges here, for two different reasons.

1. You're comparing CPU architecture to GPU, which are very different in design.
2. You're comparing a chip produced by Intel's fabs to that of one designed by AMD, but produced at either TSMC or Global Foundries.


----------



## HumanSmoke (Feb 9, 2015)

TheHunter said:


> Also there is no TITAN-X or what ever you all like about Titan, GM200 won't have DP so no Titan variant, just Geforce (google Nvidia Japan conference 30-12-2014)..


Why would the name Titan have to be linked to double precision? Why couldn't Nvidia differentiate a Titan from a 980 Ti by higher clocks, higher board power limit, larger vRAM allocation. Just because something happened in a previous series, it doesn't automatically follow that the convention is set in stone. There are plenty of examples where just a difference in clock has divided two completely separate models - two that spring immediately to mind are the HD 3870/3850 and HD 4870/4850.


TheHunter said:


> Talk about Wccf spreading this false Titan-X hype to the max..


WTFtech is all about the hype, whatever flavour. Take it seriously at your peril. I seem to remember that this forum went batshit crazy when WCCF announced AMD's GPUs were going to be 20nm even as people (including myself) attempted to inject some reason by showing that 20nm isn't particularly feasible for large GPUs. If the rumour fits peoples wish list, good luck with trying to dissuade them.


----------



## arbiter (Feb 9, 2015)

Lionheart said:


> HD7970 - 925mhz GPU / 1375mhz Memory  ||   R9 280x 1000mhz GPU / 1500mhz   You were saying?   But I'm pretty sure you were referring to the HD7970Ghz edition. Still the 280x traded blows well with the GTX 770 while being slightly cheaper depending on the country you lived in.



I was referring to the GHz Edition, since it's pretty much the only 7970 people remember. It originally released in January; two months later, in March, Nvidia dropped the 680 bombshell on them. In May AMD pushed a BIOS for GHz clocks and new cards.


----------



## Eric_Cartman (Feb 9, 2015)

It seems AMD is doubling down on their Fermi 2.0!


----------



## Casecutter (Feb 9, 2015)

64K said:


> Yes, that would further cut back on watts being used while idling. For simplicities sake I used the idle watts measurements from the cards reviewed on this site.


Well, there's one site that has at times shown "monitor-off" power numbers, although I can't recall anymore which one. That site has shown how something like an R9 280 will drop to like 2 amps, while a 760 is still pulling 7 amps, all while doing nothing... It's incredible that this isn't factored into the conversation and equation on efficiency.


----------



## TheHunter (Feb 9, 2015)

HumanSmoke said:


> Why would the name Titan have to be linked to double precision? Why couldn't Nvidia differentiate a Titan from a 980 Ti by higher clocks, higher board power limit, larger vRAM allocation. Just because something happened in a previous series, it doesn't automatically follow that the convention is set in stone. There are plenty of examples where just a difference in clock has divided two completely separate models - two that spring immediately to mind are the HD 3870/3850 and HD 4870/4850.
> 
> WTFtech is all about the hype, whatever flavour. Take it seriously at your peril. I seem to remember that this forum went batshit crazy when WCCF announced AMD's GPUs were going to be 20nm even as people (including myself) attempted to inject some reason by showing that 20nm isn't particularly feasible for large GPUs. If the rumour fits peoples wish list, good luck with trying to dissuade them.


Because that's what a Titan is: a crippled Tesla card for consumers, with double precision. Without it, it's not worthy of the Titan name, simple as that.

What's more, Nvidia said at that Japan tech conference there won't be any DP GPU with Maxwell, only with Pascal, and then we will see new Teslas/Titans again.
http://www.kitguru.net/components/g...lopment-of-graphics-processing-architectures/

Besides, these GM200 GeForces (1080 GTX?) will feature 6 GB of VRAM anyway, so the Titan name is irrelevant now, if that extra 3 GB VRAM buffer on GK110 made them a little more special.



Anyway, OT: can't wait for this Fiji XT, really interested in that 3D memory. It should make a big change or two at higher resolutions.


----------



## xfia (Feb 9, 2015)

Slizzo said:


> I find it funny people simply looked over this. For good reason too.
> 
> You're comparing apples to oranges here, for two different reasons.
> 
> ...



I suppose you're right to a certain point, but it's all silicon, and it takes a lot of money and engineering to shrink..


----------



## HumanSmoke (Feb 9, 2015)

TheHunter said:


> Because that's what a Titan is, a crippled Tesla card for consumers with Double precision, without it its not worthy of a Titan name, simple as.


The company might choose to use the name in any way it sees fit. Just because the previous Titan had a certain feature set, it doesn't mean that the next is bound by the same criteria. The name is Titan, not Titan Double Precision. You're welcome to your opinion, but please don't represent it as absolute fact.
Personally, I'd like to see the name changed to Zeus (son of Titans) if only to troll AMD rumour lovers, btarunr, and RCoon 


TheHunter said:


> What's more, nvidia said @ that Japan tech conference  there won't be any DP gpu with Maxwell


That is incorrect. GM200 likely has double precision at the same rate as GM204 (1:32). Insufficient for Tesla duties, but that is why the rep said that Kepler will continue to be the Tesla option - simply because GK210's development has been in tandem with GM200.


----------



## TheHunter (Feb 10, 2015)

Titan is a class of its own, not really a high-end reference chip name. But if they do call it Titan, it won't have a Titan-like price.

Current Titans cost so much because of FP64 DP, not because of the extra 3 GB of VRAM.


HumanSmoke said:


> That is incorrect. GM 200 likely has double precision at the same rate of GM 204 ( 1:32 ). Insufficient for Tesla duties, but that is why the rep said that Kepler will continue to be the Tesla option - simply because GK 210's development has been in tandem with GM 200.



Exactly, it has improved FP32, but it's not FP64. So it can't be used like the GK110 Titan in FP64 mode.

Btw, that GK210 is a 2x improved and more energy efficient GK110, each with 2496 cores... so it's not really Maxwell either



What I'm also trying to say is, all this means no "absurd" prices for us end-users, from both camps, AMD Fiji XT and NV GM200: the usual 550-650 $/€.


----------



## Razorfang (Feb 10, 2015)

HumanSmoke said:


> The company might choose to use the name in any way it sees fit. Just because the previous Titan had a certain feature set, it doesn't mean that the next is bound by the same criteria. The name is Titan, not Titan Double Precision. You're welcome to your opinion, but please don't represent it as absolute fact.
> Personally, I'd like to see the name changed to Zeus (son of Titans) *if only to troll AMD rumour lovers, btarunr, and RCoon *
> 
> That is incorrect. GM 200 likely has double precision at the same rate of GM 204 ( 1:32 ). Insufficient for Tesla duties, but that is why the rep said that Kepler will continue to be the Tesla option - simply because GK 210's development has been in tandem with GM 200.



http://i.imgur.com/68pxm0d.gif


----------



## HumanSmoke (Feb 10, 2015)

TheHunter said:


> Btw that GK210 is 2x improved and energy efficient GK110 each with 2496cores.. So its not really Maxwell either


Didn't say it was. G*K*210 has to be *K*epler (as should be apparent in the link I supplied). The revised silicon was obviously designed because Maxwell wasn't going to offer full FP64. Just for the record, GK210 has twice the cache of GK110, a significant boost algorithm, and improved double precision performance in addition to the energy efficiency you mentioned - so quite a bit of design reworking, which would account for the length of its gestation (and why the company invested in the R&D if Maxwell wasn't targeting the co-processor market).


TheHunter said:


> What Im also trying to say is, all this means no "absurd" prices for us end-users,  from both camps AMD FijiXT and NV GM200, the usual 550-650$/€.


IF there aren't significant differences between the top GM 200 and the salvage parts, I'd agree. Titan's productivity popularity probably had less to do with FP64 than it did the 6GB of vRAM it carries for content creation duties. I wouldn't be at all surprised if Nvidia leveraged a top price against a 12GB top part. There are already 8GB 290X's, and I'd assume that would spill over to any rejigged SKUs. Assuming Nvidia answers that with 8GB GTX 980's, a 6GB GM 200 looks a little "underspecced" from a marketing viewpoint.


----------



## Convexrook (Feb 10, 2015)

When will we have a universal physics support system from both companies? Games do feel better with Nvidia physics applied to them. I cannot be the only one that wants this from game developers. Something like what Havok physics did, but GPU based.


----------



## RejZoR (Feb 10, 2015)

They just "feel" better, but they aren't better. Especially if you knew all the shit NVIDIA has been doing to push their PhysX crap (removing entire physics effects from games that used to be done through the CPU in other games, basic stuff like smashed glass falling and staying on the ground).

Oh, and for general public information, AMD's TressFX works on ALL graphics cards, not just AMD's, because unlike NVIDIA's proprietary crap, TressFX works through DirectCompute, which means support on all modern graphics cards.


----------



## Sony Xperia S (Feb 10, 2015)

64K said:


> Idle power draw for the cards
> 
> R9 290X Reference Card 17 watts
> R9 290X Lightning 22 watts
> ...



Mistake. 

GTX 980 8 W;
GTX 970 9 W;

R9 290 16 W;
R9 290X 17 W.

http://www.techpowerup.com/reviews/Gigabyte/GTX_960_G1_Gaming/26.html

That is exactly double the power consumption, and the question is a matter of principle...
In multi-display it is even worse.

GTX 980 9 W;
GTX 970 10 W;

R9 290 51 W;
R9 290X 54 W. 






ShurikN said:


> I can probably find more money than that on the pavement... daily
> Idling is not an issue. People are grasping at straws...



You are mistaken. It actually shows that something is not working properly.


----------



## HumanSmoke (Feb 10, 2015)

AMD sort-of-news devolves into flame war. Colour me shocked!


RejZoR said:


> They just "feel" better, but the aren't better. Especially if you'd know all the shit NVIDIA has been doing to push their PhysX crap


FUN FACT #1: ATI looked at buying AGEIA but wouldn't meet the asking price.
FUN FACT #2: Nvidia offered AMD a PhysX licence (after paying the $150 million asking fee to buy AGEIA), but AMD decided to go with HavokFX, because OpenCL gaming was the next big thing. This is the same Havok that required a licensing fee and was supported by exactly zero games.
FUN FACT #3: When the PhysX hack for AMD cards arrived, *it was AMD who threw up the roadblock*.

So, ATI/AMD couldn't be bothered buying PhysX, couldn't be bothered licensing it once Nvidia purchased it, and actively blocked the development of a workaround that would allow the AMD community to use it. If you have an Nvidia card you can use it. If you have an AMD card, why should you care? AMD certainly doesn't.


----------



## xfia (Feb 10, 2015)

PhysX.. what's that donkey shit people assign an extra GPU to handle? Haha, it was never needed, but it sure has a nice name.. maybe most of the people at AMD didn't want to feel dirty about putting a logo on something that PCs can just do. Are we talking about FreeSync? Got confused there for a sec.


----------



## arbiter (Feb 10, 2015)

RejZoR said:


> They just "feel" better, but the aren't better. Especially if you'd know all the shit NVIDIA has been doing to push their PhysX crap (removing entire physics effects from games that used to be done through CPU in other games, basic stuff like smashed glass falling and staying on the ground)
> 
> Oh and for general public information, AMD's TressFX works on ALL graphic cards, not just AMD, because unlike NVIDIA's proprietary crap, TressFX works through DirectCompute, which means support on all modern graphic cards.



You do know TressFX is limited to HAIR. PhysX does a lot more than that, as in how a body falls down stairs, or when a bullet hits a wall, how pieces of the wall hit the floor. Next time read up on tech before spouting off like you know anything.



HumanSmoke said:


> AMD sort-of-news devolves into flame war. Colour me shocked!
> 
> FUN FACT #1: ATI looked at buying AGEIA but wouldn't meet the asking price.
> FUN FACT #2: Nvidia offered AMD a PhysX licence (after paying the $150 million asking fee to buy AGEIA), but AMD decided to go with HavokFX, because OpenCL gaming was the next big thing. This is the same Havok that required a licensing fee and was supported by exactly zero games.
> ...



Yea, sad how so many people forget the fact AMD had their chance to license it a long time ago yet refused, and now they create PR that they are/were locked out of it. AMD wants everything for free cause, well, they don't have the money to do it themselves. Nvidia is a business; they're not UNICEF.




xfia said:


> physx.. whats that donkey shit people assign a extra gpu to handle? haha it was never needed but it sure has a nice name.. maybe most of the people at AMD didnt want to feel dirty about putting a logo on something that pc's can just do. are we talking about freesync? got confused there for a sec.



Oh, I thought we were talking about Mantle for a second there. (Cue the AMD fan to claim FreeSync is the industry standard or Mantle is open source.)


----------



## TheGuruStud (Feb 10, 2015)

HumanSmoke said:


> AMD sort-of-news devolves into flame war. Colour me shocked!
> 
> FUN FACT #1: ATI looked at buying AGEIA but wouldn't meet the asking price.
> FUN FACT #2: Nvidia offered AMD a PhysX licence (after paying the $150 million asking fee to buy AGEIA), but AMD decided to go with HavokFX, because OpenCL gaming was the next big thing. This is the same Havok that required a licensing fee and was supported by exactly zero games.
> ...



Why would AMD give money to those crooks at Nvidia (their arch nemesis, no less)? Nvidia locks PhysX to the GPU even though it can all be done on the CPU (even for their poor vid card owners)... it's pointless, 100% pointless.
Nvidia are such crybaby bitches that they actively block their cards from using PhysX when made secondary to an AMD card. You made that point... and it goes against your propaganda!

Obviously, you have no rebuttal against TressFX, LOL. Nvidia won't have ANY of this open standard stuff. They'll bankrupt the company before they let it happen. That's how arrogant and greedy they are.

And I know you fanboys are INCREDIBLY butt hurt about  Mantle and Freesync. Let me see those tears, baby!


----------



## xfia (Feb 10, 2015)

nice one


----------



## arbiter (Feb 10, 2015)

TheGuruStud said:


> Why would AMD give money to those crooks at Nvidia (their arch nemesis no less)? Nvidia locks physx to gpu even though it can all be done on CPU (even for their poor vid card owners)...it's pointless, 100% pointless.
> Nvidia are such crybaby bitches that they actively block their cards from using physx when made secondary to an AMD card.
> 
> Obviously, you have no rebuttal against TressFX LOL. Nvidia won't have ANY of this open standard stuff. They'll bankrupt the company before they let it happen. That's how arrogant and greedy they are.
> ...



Because the CPU is too slow to do the work; the GPU is much faster at the kind of calculations needed for it.

Last I checked, FreeSync and Mantle are proprietary, CLOSED software for AMD. So tell us another blind AMD fanboy lie.


----------



## RejZoR (Feb 10, 2015)

HumanSmoke said:


> AMD sort-of-news devolves into flame war. Colour me shocked!
> 
> FUN FACT #1: ATI looked at buying AGEIA but wouldn't meet the asking price.
> FUN FACT #2: Nvidia offered AMD a PhysX licence (after paying the $150 million asking fee to buy AGEIA), but AMD decided to go with HavokFX, because OpenCL gaming was the next big thing. This is the same Havok that required a licensing fee and was supported by exactly zero games.
> ...



FUN FACT. LEGAL REASONS.

And so what if TressFX is limited only to hair? It works on ANY graphics card with DirectCompute support. You can't even have PhysX hardware-accelerated hair if you just happen to have a Radeon...


----------



## xfia (Feb 10, 2015)

most cpus can handle any game fine but not when it only runs on 1 core. physx is shady at best.. it was somewhat relevant when it got going but they should have just shown developers how to make a game scale across more cores and reduce cpu dependency.


----------



## ZoneDymo (Feb 10, 2015)

A new standard in physics would be nice.
Was Havok 2.0 not coming with massive improvements?

PhysX as it is is just a joke, mainly because no one should bother with it on any serious level if only Nvidia users can make use of it.
All it is is gimmicky effects here and there, some smoke moving in Batman, some flying orbs in Borderlands or Warframe, whoopdishit yo.


----------



## arbiter (Feb 10, 2015)

RejZoR said:


> FUN FACT. LEGAL REASONS.
> 
> And so what if TressFX uis limited only to hair. It does work on ANY graphic card with DirectCompute support. You can't even have PhysX hardware accelerated hair if you just happent o have Radeon...



Fun Fact: AMD wasn't breaking any laws if they let it go. As long as they didn't provide any support for the hack or promote it, they would have been free and clear of any liability.



ZoneDymo said:


> All it is is gimmicky effects here and there, some smoke moving in batman, some flying orbs in borderlands or warframe, whoopdishit yo.



"sarcasm" yea those effects don't make the game look more real as smoke would in real live "/sarcasm"

I am done with this thread, turning in an AMD fanboy thread trying to twist history to make AMD look like the super hero with a can do no wrong persona and Nvidia the super villain.


----------



## the54thvoid (Feb 10, 2015)

Thread has died....


----------



## xfia (Feb 10, 2015)

just way off topic.. its whatev anyway.. us pc gaming nerds gotta argue about something..


----------



## Frick (Feb 10, 2015)

Aquinus said:


> I care that the 970 has a multi-monitor idle consumption of <5-watts and the 290 is closer to 55-watts. So, yes. People like me, who aren't gaming most of the time (but do still game regularly) but are using multiple monitor for productivity reasons, do care a little bit as power usage as it adds up over time. Is it a huge factor? No. Is it one worth considering? Sure.
> 
> Also, higher efficiency would mean lower temps or more overhead for higher clocks which is never a bad thing.



This is a tangent, but it's partly the reason I didn't buy that GTX 570 for €35. 80W in multi monitor mode is awful, and since I'm running old monitors just the GPU and monitors would be about 180W. 

Anyway, bring on the low/mid level I say! That's where the REAL action is!


----------



## Ryrynz (Feb 10, 2015)

RichF said:


> 28nm?
> 
> "The only truly new silicon with the R9 300 series, is 'Fiji.'"
> 
> ...



I wholeheartedly agree. No doubt they'll know of the demand for them, and some will become available at a later stage. Quite a few games are managing to break 3 GB now, so those new cards only just cut it.


----------



## HumanSmoke (Feb 10, 2015)

the54thvoid said:


> Thread has died....


Aye, was there any doubt?

Once the words "rebrand" and "PhysX" got posted it was light the blue touch paper and cue the Mission Impossible theme.


----------



## bencrutz (Feb 10, 2015)

HumanSmoke said:


> FUN FACT #3: When the PhysX hack for AMD cards arrived, *it was AMD who threw up the roadblock*.





RejZoR said:


> FUN FACT. LEGAL REASONS.





arbiter said:


> Fun Fact: AMD wasn't breaking any laws if they let it go. *As long as they didn't provide any support* for the hack or promote it, they would been free and clear of any liability.



ok, this is amusing


----------



## HumanSmoke (Feb 10, 2015)

bencrutz said:


> ok, this is amusing


Well, that's what happens when everybody is making a different point, I guess. The original point I was making was that ATI (and later AMD) had ample opportunity to acquire PhysX. They simply didn't want it in any way, shape or form...and by all accounts (especially judging by the reaction here), people here don't either (FWIW, it's not a must-have feature for me either). Yet the frothing-at-the-bung Pavlovian response over what is supposed to be a worthless feature nobody wants (least of all AMD) is evident every time that four-letter word is posted.


----------



## TRWOV (Feb 10, 2015)

Late to the party. Not much to add except some off-topic in regard to PhysX: I can't understand why nVidia would remove support for running PhysX in a hybrid setup. They cited "support reasons" (read: they won't test nVidia + AMD configs, so they can't officially support them), but why not put out a beta driver or something for hybrid configurations with a "no support" disclaimer? Or at the very least, not block mods?

Not that it matters much nowadays (can't recall any recent PhysX game except for the Batmans), but I resented nVidia a lot back in the day... more so considering that I was one of the suckers who bought the AGEIA card back then, and nVidia threw us under a bus as soon as they bought them.

/rant


----------



## HumanSmoke (Feb 10, 2015)

TRWOV said:


> Not that it matters much nowadays (can't recall any recent Physx game except for the Batmans) but I resented nVidia a lot back in the day... more so considering that I was one of the suckers that bought the Ageia card back then and nVidia threw us under a bus as soon as they bought them.


That was on the cards as soon as the ink was dry. AGEIA went belly up because the PPU was too expensive for the feature set; Nvidia wouldn't make the same mistake (as both the original release and the comments alluded to in this news item). FWIW, even if AMD had pulled the trigger on buying AGEIA, the exact same outcome would have eventuated. Remember that ATI/AMD was all about GPU-accelerated physics back when it was fashionable (their "Boundless Gaming" initiative). As you say, it matters little now. CPU physics is widely available (Havok, Bullet, etc.), and more game engines with their own physics arrive on occasion.


----------



## RejZoR (Feb 10, 2015)

I just wish physics would get standardized under DirectX. It's the only way to move gaming technology and realism further. Without unified physics support, physics cannot be used as a core game element, only as useless eye candy, because otherwise one or the other GPU camp wouldn't even be able to play the game. If they could standardize everything else, why not physics as well? A dedicated physics API would be great. Something DirectCompute could have become, but just didn't...


----------



## xfia (Feb 10, 2015)

microsoft doesn't need to implement a useless nv technology.. dx12 will do away with the nonsense around how the cpu should be used.. dx11 is already good at it but it will be easier anyway.


----------



## Krekeris (Feb 10, 2015)

the54thvoid said:


> Thread has died....



It was nice to read, well, until arbiter's comments. Sad to see such hardcore fanboys.


----------



## FordGT90Concept (Feb 10, 2015)

I am disappoint.  I was hoping 380X was the card with 4096 stream processors, not 390/390X.  The only thing I'm not disappointed about is 380/380x/390/390X are all coming really soon. 

I'm guessing 390 will go for $400 and 390X will go for $500 or more.  If those prices are $100 cheaper than that guesstimate, it'll be a tough choice for me to pick between the two.


----------



## Sony Xperia S (Feb 10, 2015)

FordGT90Concept said:


> I am disappoint.  I was hoping 380X was the card with 4096 stream processors, not 390/390X.  The only thing I'm not disappointed about is 380/380x/390/390X are all coming really soon.
> 
> I'm guessing 390 will go for $400 and 390X will go for $500 or more.  If those prices are $100 cheaper than that guesstimate, it'll be a tough choice for me to pick between the two.



Don't worry, guys, the most important was already said.

Many will be simply skipping anything on this pesky 28 nm process. 

R.I.P thread!


----------



## Akrian (Feb 10, 2015)

Those specs seem impressive. But 4 gigs of VRAM? I mean, at 4K I can already hit the wall with current AMD GPUs (running quad R9 290X). Do they plan to use that memory bandwidth to swap out textures so fast that it solves the issue of hitting the memory limit and stuttering? Isn't that a gamble, since it will require a lot of driver optimization to do efficiently? And AMD's drivers have been lacking in the quality department for the past few iterations at least.


----------



## GhostRyder (Feb 10, 2015)

Well, it's funny this thread became very foolish again. It seems we cannot have a thread regarding GPUs without resorting to name calling from each of the hardcore fans, or the usual "well my company has (insert feature I will pointlessly rant about being the best thing since sliced bread) and yours doesn't, praise (insert company)".

If you have a problem with certain people making fanboy comments, ignore them and move on; otherwise you just make them feel important while they cook up excuses and retorts, and show them that you care, which in turn ruins threads.

Back to the topic at hand: my only disappointment with this announcement is that the R9 380X is not going to be the next big part/a new part. Though I guess depending on how well they improve and refine Hawaii into Grenada, we might see something truly impressive. Still, the real chip everyone has their eyes on is the R9 390X and what it brings to the table.


----------



## jabbadap (Feb 10, 2015)

Well, Hawaii has slow memory bundled with its 512-bit bus, so there's room for improvement (tuning the memory controller and supporting faster VRAM).

Then of course, depending on which TSMC manufacturing node they are using, moving to a more efficient 28 nm node might improve energy consumption (I think Nvidia uses 28nm HPC for GM204/GM206 and AMD uses 28nm HP? Not the same node, anyway). So can a Grenada-based R9 380 series be faster than the GTX 980/970? Sure. But better perf/W? Very unlikely.
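The headroom being described here can be sketched with simple arithmetic. This is illustrative only: Hawaii's reference cards shipped with 5 Gbps GDDR5 on a 512-bit bus, and the 6 Gbps figure is an assumed upgrade, not a confirmed Grenada spec.

```python
# Peak memory bandwidth (GB/s) = bus width in bits / 8 * effective data rate in Gbps.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a GDDR-style interface."""
    return bus_width_bits / 8 * data_rate_gbps

# Hawaii as shipped: 512-bit bus, 5 Gbps GDDR5.
print(peak_bandwidth_gbs(512, 5.0))  # 320.0
# Same bus with hypothetical 6 Gbps chips:
print(peak_bandwidth_gbs(512, 6.0))  # 384.0
```

So a faster memory controller alone would buy roughly 20% more bandwidth without touching the bus width.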


----------



## THU31 (Feb 10, 2015)

xfia said:


> I could see how it would be possible for shrinking GPUs to show many of the same problems. They are loving smaller lithography for mobile devices, but perhaps there are bigger hurdles on the high-end GPU side of things.



The problem exists because the die size gets smaller and smaller, as they are not increasing the number of cores. Lynnfield was 290 mm2, Sandy Bridge was 216 mm2, Ivy Bridge was 160 mm2. With Broadwell this will probably get below 120 mm2.
By the way, we are still paying pretty much the same price for quad-core CPUs, and that is absolutely pathetic. The manufacturing costs must be insanely low.

2009 Lynnfield, 45 nm, 290 mm2 - $196
2014 Haswell, 22 nm, 177 mm2 - $182 ($242 for a model that allows overclocking, sick)

We should have had six-core CPUs for $200 by now.
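A quick back-of-the-envelope sketch using only the prices and die sizes quoted above (wafer cost per mm2 also rises on newer nodes, so this is not a full cost picture):

```python
# Launch price per mm² of die, from the figures quoted in the post.
chips = {
    "Lynnfield (2009, 45 nm)": (196, 290),  # (launch price USD, die size mm²)
    "Haswell (2014, 22 nm)": (182, 177),
}
for name, (price_usd, area_mm2) in chips.items():
    print(f"{name}: ${price_usd / area_mm2:.2f} per mm²")
```

By this crude measure, the price per mm² of silicon roughly 1.5x'd between the two generations, which is the complaint in a nutshell.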



Die size is not a problem for GPUs. High-end GPUs are usually between 400 and 600 mm2, so heat dissipation is not an issue.
Whenever they change the node, they pack a lot more transistors into the chips, making them much faster while keeping a similar die size. Intel do not do that anymore; they are reducing die size without increasing performance or clock speeds.


----------



## xfia (Feb 10, 2015)

thanks.. was hoping someone could bridge the difference if I threw it in there.

is it possible they can't fit any more transistors since it's smaller?

where would I look for some more inside info on chip engineering?


----------



## Sony Xperia S (Feb 10, 2015)

Harry Lloyd said:


> The problem exists because the die size gets smaller and smaller, as they are not increasing the number of cores. Lynnfield was 290 mm2, Sandy Bridge was 216 mm2, Ivy Bridge was 160 mm2. With Broadwell this will probably get below 120 mm2.
> By the way, we are still paying pretty much the same price for quad core CPUs, and that is absolutely pathetic. The manufacturing costs must be insanely low.
> 
> 2009 Lynnfield 45 nm 290 mm2 - 196 $
> ...



Yeah, we discovered the hot water.... 

By the way.... you can counter this trend simply by ignoring the existence of Intel.

Just be smarter and buy all AMD.


----------



## the54thvoid (Feb 10, 2015)

Sony Xperia S said:


> Yeah, we discovered the hot water....
> 
> By the way.... you can counter this trend simply by ignoring the existence of Intel.
> 
> Just be smarter and buy all AMD.



No thanks, moar power required. Zen is too far away. Might mix Intel and AMD when 390X comes out though.


----------



## THU31 (Feb 10, 2015)

If AMD finally caught up, maybe Intel would have to do something. Right now they are focusing on mobile, while desktops just sit there doing nothing, because Intel has had the most powerful CPUs since 2006.
I miss the days when AMD CPUs were better for gaming (Duron, Athlon XP, Athlon 64) while being cheaper as well.

At least the GPUs are OK, though power hungry. But they are not really AMD's; they just bought what was good.


----------



## xfia (Feb 11, 2015)

GCN ftw  I would be excited to see what a desktop carrizo could do.


----------



## ZoneDymo (Feb 12, 2015)

arbiter said:


> Fun Fact: AMD wasn't breaking any laws if they let it go. As long as they didn't provide any support for the hack or promote it, they would been free and clear of any liability.
> 
> 
> 
> ...



Ermm, very odd to throw out that AMD fanboy statement after your reaction to my comment.
That is not even close to appropriate.

With the physics, I was speaking from experience as an Nvidia user (some of us just switch between brands and are not glued down).
And no, the way smoke reacts to you in PhysX-enabled Batman: AA is not realistic at all; it becomes this palpable substance that almost rolls off Batman. That is just overdoing it massively.
Glowing flying orbs with every special attack in Warframe make no sense and are not realistic either; they're just shiny colored orbs (ooh, so pretty, right?).
Maaaaybe Mirror's Edge can be considered a visual improvement with it on, breaking glass and tearing cloth etc.

Although on that note, I am playing Splinter Cell 1 on the GameCube again, and I'm mighty impressed with the way cloth reacts in that old, old game, and that without PhysX.

I like what PhysX COULD do for us, but the way it is, exclusive to Nvidia, it's going nowhere, unless Nvidia were to borderline buy the development of an entire game so they could build it around PhysX from the start.
But that will not happen, so it will remain just a gimmicky addition and never what it should be.
Oh well, Havok 2.0 is still coming; maybe that will move some mountains.


----------



## ZoneDymo (Feb 12, 2015)

Harry Lloyd said:


> If AMD finally caught up, maybe Intel would have to do something. Right now they are focusing on mobile, while desktops just sit there doing nothing, because they have had the most powerful CPUs since 2006.
> I miss the days where AMD CPUs were better for gaming (Duron, Athlon XP, Athlon 64), while being cheaper as well.
> 
> At least the GPUs are ok, though power hungry, but they are not really AMD, they just bought what was good.



Kinda hard when everybody and their mother buys Intel and tells everybody and their mother to do the same.
AMD is a muuuuuch smaller company and does not have nearly the research resources Intel has.
It might be the only competition Intel has, but calling it competition is pushing it.

Luckily the pricing makes up for that, making them all viable options.


----------



## THU31 (Feb 12, 2015)

ZoneDymo said:


> I like what PhysX COULD do for us, but the way it is, exclusive to Nvidia, its going nowhere, unless Nvidia to borderline buy the development of an entire game so they can make it about physX from the start.



PhysX in Borderlands 2 and the Pre-Sequel is insane. It completely changes the game. It can be done, if only developers want to do it.

The XBO and PS4 officially support PhysX, so developers can implement it in any game. Unfortunately, those consoles have no power, which will make that rather difficult.


----------



## TheHunter (Feb 12, 2015)

And what does this nvidia stuff have to do with this AMD thread? Yeah nothing...


----------



## dyonoctis (Feb 12, 2015)

Funny to see that Nvidia is willing to work to make some of their PhysX/VisualFX effects enjoyable on consoles running AMD hardware, but doesn't give much option to optimize those effects on AMD desktops... This is the part of the competition that I hate the most; it basically means that AMD would have to work on a solution of their own, and a game running both Nvidia and AMD "extras" will never happen.


----------



## Sony Xperia S (Feb 12, 2015)

Unfortunately, this shit always comes from Nvidia only. I don't remember AMD doing anything closed-standard......


----------



## HumanSmoke (Feb 12, 2015)

Sony Xperia S said:


> Unfortunately this shit always comes from nvidia only. I don't remember AMD doing anything closed standard......


You don't? Google AMD's XGP. To protect what they thought was going to be a growth market, they made the connections proprietary. Needless to say, it died a horribly protracted death. If you're sailing the proprietary course, you need to determine the market's viability and back the technology fully. ATI (and later AMD) did neither.


----------



## ZoneDymo (Feb 12, 2015)

Harry Lloyd said:


> PhysX in Borderlands 2 and Pre-sequel is insane. It completely changes the game. It can be done, if only developers want to do it.
> 
> XBO and PS4 officially support PhysX, so developers can implement it in any game. Unfortunately those consoles have no power, which will make that rather difficult.



Insane? 
Totally changes the game?









http://www.geforce.com/whats-new/articles/borderlands-2-physx

Ermmm, you mean some extra particles that bounce away when you shoot something, or some particle-made water flowing somewhere?
Because that does not change the game in any way, shape or form.
It's exactly the same gimmicky nonsense that PhysX does in Warframe.
Hell, in that article they don't refer to the PhysX as "effects" for nothing; that's all it adds, some effects.

It adds nothing but some orbs flying around, while it could be the entire basis for how things are built up and react (ya know... physics), like those tech demos they show of it.

The fact that you can turn it off is pretty much the dead giveaway that it in fact does not "totally change the game", because a game that is built around that PhysX would not work without it.
You cannot turn Havok off in, for example, HL2, because the game would no longer function if that were the case.


----------



## arbiter (Feb 12, 2015)

dyonoctis said:


> Funny to see that nvidia is willing to work to make some of their physx/visualfx enjoyable on console running amd hardware, but do not give much option to optimize these effects on amd desktop...this is the part of the competition that i hate the most, that basically mean that amd would have to work on a solution of their own, and a game running both nvidia and amd "extra" will never happen.....



MS and Sony both licensed the use of PhysX for the Xbox One and PS4, so yeah, Nvidia will work with them.



Sony Xperia S said:


> Unfortunately this shit always comes from nvidia only. I don't remember AMD doing anything closed standard......



Hrm, let's see: Mantle comes to mind. FreeSync as well, since that is a proprietary implementation of the standard. Wonder what else I am forgetting.


----------



## THU31 (Feb 12, 2015)

ZoneDymo said:


> Ermmm you mean some extra particles that bounce away when you shoot something...



Not SOME.

I played the game for the first time without PhysX. When I played it the second time, I was blown away. There are hundreds of those particles, if not thousands in huge firefights. Also, a lot of debris actually stays on the ground and interacts with your shots and grenades.
It's the best PhysX implementation I have ever seen.

It does not change the gameplay; it changes the visuals. If you cannot appreciate that, then you must be really spoiled.


----------



## wiak (Feb 22, 2015)

HumanSmoke said:


> The really odd thing about this lineup, is what AMD expects to field in the discrete mobile arena. Presently, the top part is Pitcairn based (M290X) in its third generation of cards. The M295X's (Tonga) heat production in the iMac probably preclude its use in laptops, and Hawaii is clearly unsuitable.
> 
> That's where HBM starts. Better too have too much bandwidth than too little.
> 
> ...


Well, the solution is to double it to 8 GB? But for consumers, pretty sure they will start with 4 GB on a fairly new memory type; GDDR5 was pretty new when AMD introduced it, too.


----------



## HumanSmoke (Feb 22, 2015)

wiak said:


> well the solution is to double to 8GB?


No mean feat, since HBM1 is limited to 4 stacks of 1 GB each (4 layers @ 2 Gbit per die), and the Fiji chip is specced for 4 stacks (1024-bit x 4 for a 4096-bit bus width).
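A minimal sketch of the arithmetic behind those limits, assuming the first-generation HBM figure of 1 Gbps per pin (the 640 GB/s figure floated for Fiji in the article would imply a faster effective rate, so treat the pin rate here as an assumption):

```python
# HBM1 limits as described above: 4 stacks, each with 4 dies of 2 Gbit
# and a 1024-bit interface per stack.
STACKS = 4
DIES_PER_STACK = 4
GBIT_PER_DIE = 2
BITS_PER_STACK = 1024
PIN_RATE_GBPS = 1.0  # assumed first-gen HBM per-pin data rate

capacity_gb = STACKS * DIES_PER_STACK * GBIT_PER_DIE / 8  # 4.0 GB total
bus_width = STACKS * BITS_PER_STACK                       # 4096 bits
bandwidth_gbs = bus_width * PIN_RATE_GBPS / 8             # 512.0 GB/s
print(capacity_gb, bus_width, bandwidth_gbs)
```

Which is why 4 GB is a hard ceiling for a 4-stack HBM1 design: doubling capacity would need either more stacks or denser dies, neither of which the first-generation spec allows.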


----------



## DinaAngel (Feb 22, 2015)

I bought Nvidia GPUs because of PhysX, and I totally regretted it over time. It's just a performance drop, and the PhysX effects are used by maybe 30 games.

If the 380x or the 390x is very good then I might jump over to AMD


----------



## petteyg359 (Mar 1, 2015)

arbiter said:


> Hrm, lets see mantle comes to mind. Freesync as well since that is a proprietary solution of the standard. Wonder what else i am forgetting.




Freesync: No license fee, no additional hardware needed to add to your monitor, uses a built-in feature of DisplayPort 1.2a, hmm... It's an implementation of a standard. None of the connotations of "proprietary" apply. Do you also complain that AMI and Phoenix have their own BIOS implementations, and that Intel and AMD have their own x86 implementations, and that BouncyCastle and OpenSSL each have their own crypto implementations?

Mantle: Duh. It's an interface for their specific hardware. Do you expect Atheros to write drivers that work with Broadcom chips, too?


----------



## HumanSmoke (Mar 2, 2015)

petteyg359 said:


> Freesync: No license fee, no additional hardware needed to add to your monitor, uses a built-in feature of DisplayPort 1.2a, hmm... It's an implementation of a standard. None of the connotations of "proprietary" apply.


Seems you don't understand some of the terms you're using. FreeSync is hardware-limited to the extent that even a large number of AMD's own graphics cards aren't supported (notably the HD 7000 and 6000 series, and a whole bunch of *R* (-5/-7/-9) rebranded cards launched, in some cases, less than a year ago).








petteyg359 said:


> Do you also complain that AMI and Phoenix have their own BIOS implementations,


arbiter was showing examples of AMD proprietary tech, and you quote other examples of proprietary tech for what reason exactly?
American Megatrends' and Phoenix's BIOSes are proprietary tech, and have been since Phoenix devised their BIOS and started *charging $290,000 per vendor licence* and $25 per BIOS ROM chip in May 1984.


petteyg359 said:


> Mantle: Duh. It's an interface for their specific hardware


"an interface for their specific hardware" is pretty much a definition of proprietary.


petteyg359 said:


> Do you expect Atheros to write drivers that work with Broadcom chips, too?


How is this any different from expecting Nvidia to write engine code for AMD's driver and hardware? Because one thing is certain, AMD has no interest in using, nor supporting PhysX.


----------



## petteyg359 (Mar 2, 2015)

HumanSmoke said:


> "an interface for their specific hardware" is pretty much a definition of proprietary.



Hence the "duh" when somebody complains that it is proprietary. Of course it is proprietary. The whole point is that you don't expect nVidia to code for it, no more than you expect Atheros to write Broadcom drivers.



HumanSmoke said:


> How is this any different from expecting Nvidia to write engine code for AMD's driver and hardware? Because one thing is certain, AMD has no interest in using, nor supporting PhysX.



1. Because nobody ever asked nVidia to write Mantle code.
2. I doubt nVidia ever offered the implementation details without requesting a bunch of money in exchange.

You think it's okay to complain that nVidia would have to license Mantle, but whine about AMD not having licensed PhysX in the same breath? 



HumanSmoke said:


> Seems you don't understand some of the terms you're using. FreeSync is hardware limited



Seems you don't understand your own argument. It's hardware limited just like 4k over HDMI, 9k jumbo packets, and AVX. It requires DisplayPort 1.2a because that's where the specification exists to be implemented. A claim of it being "proprietary" is more like "My hardware is too old for this new stuff! Why can't I run this x64 AVX code on my Pentium 4?! I'm gonna whine about it!" nVidia supports DisplayPort; there's absolutely nothing stopping them from creating their own Adaptive-Sync driver-side implementation (and they could even call it Expensive$ync if they want to confuse people who don't understand that it is just an implementation of a damn standard just like FreeSync).


----------



## HumanSmoke (Mar 2, 2015)

petteyg359 said:


> Hence the "duh" when somebody complains that it is proprietary. *Of course it is proprietary*. The whole point is that you don't expect nVidia to code for it, no more than you expect Atheros to write Broadcom drivers.


So now it's proprietary. You've just said it wasn't.


petteyg359 said:


> None of the connotations of "proprietary" apply.


You can understand how people might not be following your logic, right?


petteyg359 said:


> 1. Because nobody ever asked nVidia to write Mantle code.


Don't think they care TBH. Also, they'd be shit out of luck even if they had. AMD's Mantle is closed Beta. Even Intel has been denied access.


> "I know that Intel have approached us for access to the Mantle interfaces, et cetera," Huddy said. " *And right now, we've said, give us a month or two, this is a closed beta*, and we'll go into the 1.0 [public release] phase sometime this year, which is less than five months if you count forward from June. - Richard Huddy, *June 2014*





petteyg359 said:


> 2. I doubt nVidia ever offered the implementation details without requesting a bunch of money in exchange.


And? 1. Nobody is disputing that PhysX is proprietary, and 2. Even if the point being made concerned PhysX (which it doesn't - it's about whether AMD produce proprietary IP), Nvidia paid $150 million for AGEIA - why would they give anyone free access to something they paid dearly for?


petteyg359 said:


> *You think it's okay to complain that nVidia would have to license Mantle*, but whine about AMD not having licensed PhysX in the same breath?


I've said no such thing. What you fail to understand is that stating the facts - that Mantle is not open source - automatically means that it is proprietary in nature. Now, if you can find ANY post where I said that Mantle should be made available to Nvidia gratis, then quote it. If you cannot (and you will not, since I am already on record stating that Nvidia would never use code ultimately controlled by AMD), then I kindly suggest you STFU with regard to misquoting me or anyone else. Creating straw-man arguments and misquoting is not the way to advance a position.


----------



## petteyg359 (Mar 2, 2015)

HumanSmoke said:


> So now its proprietary. You've just said it wasn't



Never said any such thing, unless you choose to interpret it very strangely and believe that I said Atheros/Broadcom hardware/software is non-proprietary.


----------



## HumanSmoke (Mar 2, 2015)

petteyg359 said:


> Never said any such thing, unless you choose to interpret it very strangely and believe that I said Atheros/Broadcom hardware/software is non-proprietary.


I smell yet another straw man...


petteyg359 said:


> Freesync: No license fee, no additional hardware needed to add to your monitor, uses a built-in feature of DisplayPort 1.2a, hmm... It's an implementation of a standard. *None of the connotations of "proprietary" apply*.


Plenty of proprietary tech is free. Nvidia's CUDA is a prime example - proprietary and freeware. With that in mind, FreeSync certification is free as well, but AMD decide who gets the certification:


> *Certification of FreeSync monitors will be handled by AMD directly*. The company says it wants to ensure its brand is synonymous with a "good experience."


FreeSync _as a brand_ requires an AMD graphics card, a FreeSync approved monitor, and AMD's Catalyst Control Center. That makes it proprietary by definition.


----------



## petteyg359 (Mar 2, 2015)

And now you're countering arguments about Mantle with arguments about FreeSync. I smell "moving the goal posts", if things must be smelled.

WRT FreeSync "certification": that's no different from any other certification. If you want to stick somebody else's brand name on your product, you've got to get permission from them. That's how trademarks work, silly.


----------



## HumanSmoke (Mar 2, 2015)

petteyg359 said:


> And now you're countering arguments about Mantle with arguments about FreeSync. I smell "moving the goal posts", if things must be smelled.


WTF are you talking about? This whole exchange is centred around whether AMD produce proprietary IP, since you initially took issue with arbiter's post. Mantle and FreeSync are both AMD tech; they are both proprietary, either in practice or in branding.


petteyg359 said:


> WRT FreeSync "certification", that's no different than any other certification. If you want to stick somebody else's brand name on your product, you've got to get permission from them. That's how trademarks work, silly.


Of course, you're welcome to point out that in your view FreeSync is just VESA's Adaptive-Sync by another name - and I wouldn't argue any differently - but the FreeSync marketing is AMD trademarked, and of course, if they are exactly the same, then AMD's stance is at odds with reality.


----------



## arbiter (Mar 2, 2015)

petteyg359 said:


> Freesync: No license fee, no additional hardware needed to add to your monitor, uses a built-in feature of DisplayPort 1.2a, hmm... It's an implementation of a standard. None of the connotations of "proprietary" apply. Do you also complain that AMI and Phoenix have their own BIOS implementations, and that Intel and AMD have their own x86 implementations, and that BouncyCastle and OpenSSL each have their own crypto implementations?
> 
> Mantle: Duh. It's an interface for their specific hardware. Do you expect Atheros to write drivers that work with Broadcom chips, too?





HumanSmoke said:


> Seems you don't understand some of the terms you're using. FreeSync is hardware limited to the extent that even a large number of AMD's own graphics cards aren't supported (notably the HD 7000, 6000 series, and a whole bunch of *R* (-5/-7/-9) rebranded cards launched - in some cases - less than a year ago)



I will add another bit to this: you said no additional hardware is required, besides that limited list of GPUs that support it. Just because a monitor has DP 1.2a doesn't mean it supports Adaptive-Sync; the adaptive part of the spec is optional, not required, for 1.2a. New hardware was required in the monitors in the form of a new scaler chip, which G-Sync had on its module from the start. So monitors needed this new scaler to support Adaptive-Sync. When AMD claimed no new hardware was needed, well, a new monitor with the new scaler is in fact needed.

When AMD announced FreeSync, they claimed some current monitors supported it with no new hardware. Well, that's AMD PR marketing for ya.



HumanSmoke said:


> WTF are you talking about? This whole exchange is centred around whether AMD produces proprietary IP, since you initially took issue with arbiter's post. Mantle and FreeSync are both AMD tech; they are both proprietary either in practice or in branding.
> 
> Of course, you're welcome to point out that in your view FreeSync is just VESA's Adaptive-Sync by another name - and I wouldn't argue any differently - but FreeSync marketing is AMD trademarked, and of course if they are exactly the same then AMD's stance is at odds with reality.



In the end, FreeSync is a proprietary implementation of the standard. No matter how you cut it, it's still proprietary.

"AMD FreeSync tech is a unique AMD hardware/software"
                             ^ Another way to say proprietary


----------



## petteyg359 (Mar 2, 2015)

arbiter said:


> In the end, FreeSync is a proprietary implementation of the standard. No matter how you cut it, it's still proprietary.



All vendor implementations of all standards are proprietary by that definition. Way to completely invalidate your own argument!


----------



## AsRock (Mar 2, 2015)

ZoneDymo said:


> Insane?
> Totally changes the game?
> 
> 
> ...




Basically they dumbed down what could be done without PhysX hardware and made the extras exclusive, which is not really NV PhysX. Let's face it, there's nothing in that vid that a CPU could not handle, never mind the game being cartoon-like; I've seen much better in other games, and even Arma 3 has better shit than that.

Shit, GTA 4 has better physics than that game, and that ran well on a good system back then and runs really well today.


----------



## arbiter (Mar 2, 2015)

petteyg359 said:


> All vendor implementations of all standards are proprietary by that definition. Way to completely invalidate your own argument!



No, you are trying to twist things to fit your own logic. AMD took what was a standard and used it in their own way that is LOCKED to their hardware and software. It can't be used by Nvidia; that is what makes it proprietary. What you are trying to claim is that HDMI 2.0 on GTX 900 cards is proprietary and AMD can't use HDMI 2.0; that is your ass-backwards logic.



AsRock said:


> Basically they dumbed down what could be done without PhysX hardware and made the extras exclusive, which is not really NV PhysX. Let's face it, there's nothing in that vid that a CPU could not handle, never mind the game being cartoon-like; I've seen much better in other games, and even Arma 3 has better shit than that.



Um, a CPU could handle it? Try using the same settings with PhysX set to CPU and see how well the game runs then.


----------



## AsRock (Mar 2, 2015)

arbiter said:


> No, you are trying to twist things to fit your own logic. AMD took what was a standard and used it in their own way that is LOCKED to their hardware and software. It can't be used by Nvidia; that is what makes it proprietary. What you are trying to claim is that HDMI 2.0 on GTX 900 cards is proprietary and AMD can't use HDMI 2.0; that is your ass-backwards logic.
> 
> 
> 
> Um, a CPU could handle it? Try using the same settings with PhysX set to CPU and see how well the game runs then.



It would need to be optimized for the CPU, which I bet not much was done. Other companies can do it, so they could if they really wanted to; same ol' BS over again to try to make it look better than it actually is.

BL2 is BS anyways, a half-made frigging game from lazy asses. They just got lucky: they were out of time with the first one and people liked it, so they pushed out more half-done BS instead of making what they intended to make in the first place.


----------



## petteyg359 (Mar 3, 2015)

arbiter said:


> No, you are trying to twist things to fit your own logic. AMD took what was a standard and used it in their own way that is LOCKED to their hardware and software. It can't be used by Nvidia; that is what makes it proprietary. What you are trying to claim is that HDMI 2.0 on GTX 900 cards is proprietary and AMD can't use HDMI 2.0; that is your ass-backwards logic.



No, they took a standard, and branded their implementation of it. No duh "FreeSync" is locked to their hardware. It's a brand name. You're complaining that Asus can't go around selling screens with "Dell UltraSharp" labels. If nVidia chooses to support Adaptive-Sync for G-Sync instead of their _ACTUALLY-PROPRIETARY-WITH-ALL-THE-NEGATIVE-CONNOTATIONS-BECAUSE-IT-ISN'T-STANDARDIZED_ custom hardware, there's no reason a monitor couldn't be both "FreeSync certified" and "G-Sync certified" at the same time.

There's "ass-backwards logic" in here, but it ain't in my posts.


----------



## Aquinus (Mar 3, 2015)

petteyg359 said:


> No, they took a standard, and branded their implementation of it.


Actually, it's because they really don't have HDMI 2.0, but rather 1.4a with some modifications to support 2.0-like features. Get your terms right.


petteyg359 said:


> No duh the "FreeSync" is locked to their hardware. It's a brand name. You're complaining that Asus can't go around selling screens with "Dell UltraSharp" labels.


One is a technology, the other is branding. There is a big difference.

Let's clarify one thing here, because I think people don't know the definitions:


petteyg359 said:


> All vendor implementations of all standards are proprietary by that definition. Way to completely invalidate your own argument!


Let's take a quote:


> _Proprietary software_ is software that is owned by an individual or a company (usually the one that developed it). There are almost always major restrictions on its use, and its _source code_ is almost always kept secret.


The fact that a company developed it does not make it proprietary software. It's proprietary because it's not open, which imposes restrictions. Something is usually proprietary based on how much of it you're willing to share. There is no such thing as open-source proprietary software, which is what you would get if a company wrote open software by your definition.

For someone with only 7 posts, you've dug a nice little hole for yourself rather quickly on a seemingly stupid topic (the definition of "proprietary").


petteyg359 said:


> there's no reason a monitor couldn't be both "FreeSync certified" and "G-Sync certified" at the same time.


I will agree with this statement unless there are specific rules for either that forbid having both at once.

With all that said, it's entirely possible to have both proprietary and open-source versions of an implemented specification, but doing so doesn't make it proprietary de facto.


----------



## petteyg359 (Mar 3, 2015)

Aquinus said:


> Lets take a quote:
> 
> The fact that a company developed it does not make it proprietary software. It's proprietary because it's not open, which imposes restrictions. Something is usually proprietary based on how much of it you're willing to share. There is no such thing as open-source proprietary software, which is what you would get if a company wrote open software by your definition.



I like how you stripped the context there so you could claim I said the opposite of what I said.


----------



## bencrutz (Mar 4, 2015)

on every AMD thread @ TPU? good god


----------

