
Radeon R9 380X Based on "Grenada," a Refined "Hawaii"

AMD sort-of-news devolves into flame war. Colour me shocked!

FUN FACT #1: ATI looked at buying AGEIA but wouldn't meet the asking price.
FUN FACT #2: Nvidia offered AMD a PhysX licence (after paying the $150 million asking fee to buy AGEIA), but AMD decided to go with HavokFX, because OpenCL gaming was the next big thing. This is the same Havok that required a licensing fee and was supported by exactly zero games.
FUN FACT #3: When the PhysX hack for AMD cards arrived, it was AMD who threw up the roadblock.

So, ATI/AMD couldn't be bothered buying PhysX, couldn't be bothered licensing it once Nvidia purchased it, and actively blocked the development of a workaround that would have let the AMD community use it. If you have an Nvidia card you can use it. If you have an AMD card, why should you care? AMD certainly don't.

FUN FACT. LEGAL REASONS.

And so what if TressFX is limited only to hair? It does work on ANY graphics card with DirectCompute support. You can't even have PhysX hardware-accelerated hair if you just happen to have a Radeon...
 
Most CPUs can handle any game fine, but not when it only runs on one core. PhysX is shady at best.. it was somewhat relevant when it got going, but they should have just shown developers how to make a game scale across more cores and reduce CPU dependency.
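Purely as an illustration of that "scale across more cores" idea, here's a toy sketch of my own (not from any real engine; the Body type, timestep and worker count are made up): split per-object updates into chunks and farm them out to a pool instead of running everything on one core.

```python
# Toy sketch: parallelising a per-object physics update across CPU cores.
# Everything here (Body, the Euler step, chunk sizes) is illustrative only.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass

@dataclass
class Body:
    x: float  # position
    v: float  # velocity

def integrate(chunk, dt=1.0 / 60.0):
    # Trivial Euler step standing in for a real per-object physics update.
    return [Body(b.x + b.v * dt, b.v) for b in chunk]

def step_world(bodies, workers=4):
    # Split the world into one chunk per worker and update the chunks in parallel.
    size = max(1, len(bodies) // workers)
    chunks = [bodies[i:i + size] for i in range(0, len(bodies), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(integrate, chunks)
    return [b for chunk in results for b in chunk]

if __name__ == "__main__":
    world = [Body(float(i), 1.0) for i in range(10_000)]
    world = step_world(world)
    print(world[0])  # Body(x=0.0166..., v=1.0)
```

Real engines have cross-object dependencies (collisions, constraints) that make this harder than a flat map over bodies, which is part of why so much physics middleware stayed single-threaded for so long.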
 
A new standard in physics would be nice.
Was Havok 2.0 not coming with massive improvements?

PhysX as it stands is just a joke, mainly because no one should bother with it on any serious level if only Nvidia users can make use of it.
All it is is gimmicky effects here and there: some smoke moving in Batman, some flying orbs in Borderlands or Warframe, whoopdishit yo.
 
FUN FACT. LEGAL REASONS.


Fun Fact: AMD wasn't breaking any laws if they let it go. As long as they didn't provide any support for the hack or promote it, they would have been free and clear of any liability.

All it is is gimmicky effects here and there: some smoke moving in Batman, some flying orbs in Borderlands or Warframe, whoopdishit yo.

"sarcasm" yea those effects don't make the game look more real as smoke would in real live "/sarcasm"

I am done with this thread; it's turning into an AMD fanboy thread trying to twist history to make AMD look like the superhero with a can-do-no-wrong persona and Nvidia the supervillain.
 
Thread has died....
 
just way off topic.. it's whatever anyway.. us pc gaming nerds gotta argue about something..
 
I care that the 970 has a multi-monitor idle consumption of <5 watts and the 290 is closer to 55 watts. So, yes. People like me, who aren't gaming most of the time (but do still game regularly) but are using multiple monitors for productivity reasons, do care a little bit about power usage, as it adds up over time. Is it a huge factor? No. Is it one worth considering? Sure.
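For what it's worth, a quick back-of-the-envelope on that ~50 W gap (the 8 hours/day of desk time and $0.15/kWh rate below are just my own assumptions):

```python
# Back-of-the-envelope for the ~50 W multi-monitor idle gap quoted above.
# Hours per day and electricity price are illustrative assumptions.
watts_diff = 55 - 5      # W difference in multi-monitor idle draw
hours_per_day = 8        # assumed time at the desk per day
price_per_kwh = 0.15     # assumed $/kWh

kwh_per_year = watts_diff * hours_per_day * 365 / 1000
print(f"{kwh_per_year:.0f} kWh/year, roughly ${kwh_per_year * price_per_kwh:.0f}/year")
# -> 146 kWh/year, roughly $22/year
```

Not a fortune, but it adds up over the life of the card, and that's before the extra heat dumped into the room.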

Also, higher efficiency would mean lower temps or more overhead for higher clocks which is never a bad thing.

This is a tangent, but it's partly the reason I didn't buy that GTX 570 for €35. 80W in multi monitor mode is awful, and since I'm running old monitors just the GPU and monitors would be about 180W. :(

Anyway, bring on the low/mid level I say! That's where the REAL action is!
 
28nm?

"The only truly new silicon with the R9 300 series, is 'Fiji.'"

Ok.. Nevermind. At least there will be something that's actually new.

"4 GB could be the standard memory amount."

Awful. 6 GB should be the standard for high-end cards going forward.

I wholeheartedly agree. No doubt they'll know of the demand for them, and some will become available at a later stage. Quite a few games are managing to break 3 GB now, so those new cards only just cut it.
 
Thread has died....
Aye, was there any doubt?

Once the words "rebrand" and "PhysX" got posted it was light the blue touch paper and cue the Mission Impossible theme.
 
ok, this is amusing :roll:
Well, that's what happens when everybody is making a different point, I guess. The original point I was making was that ATI (and later AMD) had ample opportunity to acquire PhysX. They simply didn't want it in any way, shape or form... and by all accounts (especially judging by the reaction here), people here don't either (FWIW, it's not a must-have feature for me either), yet the frothing-at-the-bung Pavlovian response over what is supposed to be a worthless feature nobody wants (least of all AMD) is evident every time that four-letter word is posted.
 
Late to the party. Not much to add except some off-topic in regard to PhysX: I can't understand why nVidia would remove support for running PhysX on a hybrid setup :banghead: They cited "support reasons" (read: they won't test nVidia + AMD configs, so they can't officially support them), but why not put out a beta driver or something for hybrid configurations with a "no support" disclaimer? Or, at the very least, not block mods?

Not that it matters much nowadays (can't recall any recent Physx game except for the Batmans) but I resented nVidia a lot back in the day... more so considering that I was one of the suckers that bought the Ageia card back then and nVidia threw us under a bus as soon as they bought them.

/rant
 
Not that it matters much nowadays (can't recall any recent Physx game except for the Batmans) but I resented nVidia a lot back in the day... more so considering that I was one of the suckers that bought the Ageia card back then and nVidia threw us under a bus as soon as they bought them.
That was on the cards as soon as the ink was dry. AGEIA went belly up because the PPU was too expensive for the feature set. Nvidia wouldn't make the same mistake (as both the original release and the comments alluded to in this news item). FWIW, even if AMD had pulled the trigger on buying AGEIA, the exact same outcome would have eventuated. Remember that ATI/AMD was all about GPU-accelerated physics back when it was fashionable (their "Boundless Gaming" initiative). As you say, it matters little now. CPU physics is widely available (Havok, Bullet, etc.), and more game engines arrive with their own built-in physics on occasion.
 
I just wish physics would get standardized under DirectX. This is the only way to move gaming technology and realism further. Because without unified physics support, physics CANNOT be used as a core game element; it can only be used for useless eye candy. Otherwise one or the other GPU camp wouldn't even be allowed to play the game. If they could standardize everything else, why not physics as well? A dedicated physics API would be great. Something DirectCompute could have become, but just didn't...
 
o_O Microsoft doesn't need to implement a useless NV technology.. DX12 will do away with the nonsense about how the CPU should be used.. DX11 is already good at it, but it will be easier anyway.
 
I am disappoint. I was hoping 380X was the card with 4096 stream processors, not 390/390X. The only thing I'm not disappointed about is 380/380x/390/390X are all coming really soon. :D

I'm guessing 390 will go for $400 and 390X will go for $500 or more. If those prices are $100 cheaper than that guesstimate, it'll be a tough choice for me to pick between the two.
 

Don't worry, guys, the most important thing was already said.

Many will be simply skipping anything on this pesky 28 nm process. :(

R.I.P thread! :D
 
Those specs seem impressive. But 4 gigs of VRAM? I mean, at 4K I can already hit the wall with current AMD GPUs (running quad R9 290X). Do they plan to use that memory bandwidth to swap out textures so fast that it will solve the issue of hitting the memory limits and stuttering? Isn't that a gamble, since it will require a lot of driver optimizations to do it efficiently? And AMD's drivers have been lacking in the quality department for the past few iterations at least.
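To put rough numbers on why streaming textures in from system RAM tends to stutter no matter how fast the VRAM is (the bus and bandwidth figures below are round assumptions of mine, not anything from the article):

```python
# Rough numbers on why swapping textures in from system RAM stutters:
# the transfer goes over PCIe, not over the card's own memory bus.
# All bandwidth figures are round assumptions, not measured values.
pcie_gbs = 16.0              # ~PCIe 3.0 x16 in GB/s (assumed)
vram_gbs = 320.0             # ~512-bit bus at 5 Gbps in GB/s (assumed)
frame_budget_ms = 1000 / 60  # 60 fps target

for size_gb in (0.5, 1.0):
    over_pcie_ms = size_gb / pcie_gbs * 1000
    in_vram_ms = size_gb / vram_gbs * 1000
    print(f"{size_gb} GB: {over_pcie_ms:.0f} ms over PCIe vs "
          f"{in_vram_ms:.1f} ms in VRAM (frame budget {frame_budget_ms:.1f} ms)")
# 0.5 GB: ~31 ms over PCIe, i.e. about two whole 60 fps frames -- hence the hitching.
```

So on-card bandwidth alone can't paper over running out of VRAM; the data still has to cross the PCIe bus first, and that's where the stutter comes from.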
 
Well, it's funny this thread became very foolish again. It seems we cannot have a thread regarding GPUs without resorting to name-calling from the hardcore fans on each side, or the usual "well, my company has (insert feature I will pointlessly rant about being the best thing since sliced bread) and yours doesn't, praise (insert company) :respect:".

If you guys have a problem with certain people making fanboy comments, ignore the person and move on already; otherwise you just make them feel important and show them that you care while they cook up excuses/retorts, which in turn ruins threads.

Back to the topic at hand: my only disappointment with this announcement is that the R9 380X is not going to be the next big part/a new part. Though I guess, depending on how well they improve/refine Hawaii into Grenada, we might see something truly impressive. Still, the real chip everyone has their eyes on is the R9 390X and what it brings to the table.
 
Well, Hawaii has slow memory bundled with a 512-bit bus, so there's room for improvement (tuning the memory controller and supporting faster VRAM).

Then of course, depending on which TSMC manufacturing node they are using, moving to a more efficient 28 nm node might improve energy consumption (I think Nvidia uses 28nm HPC for GM204/GM206 and AMD uses 28nm HP? Not the same node anyway). So can a Grenada-based R9 380 series be faster than the GTX 980/970? Sure. But better perf/W? Very unlikely.
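The arithmetic behind the "slow memory on a 512-bit bus" point, for anyone curious (the 5 Gbps row is stock Hawaii, the 6 Gbps row is a purely hypothetical tuned Grenada, and the GTX 980 row is just for comparison):

```python
# Memory bandwidth = bus width (bits) / 8 * effective data rate (Gbps).
def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gbs(512, 5))  # stock Hawaii (R9 290X): 320.0 GB/s
print(bandwidth_gbs(512, 6))  # hypothetical tuned Grenada: 384.0 GB/s
print(bandwidth_gbs(256, 7))  # GTX 980, for comparison: 224.0 GB/s
```

Even a modest bump in memory speed on that wide bus opens up a decent bandwidth lead, which is presumably where the "refined Hawaii" headroom would come from.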
 
View attachment 62540

I could see how it would be possible for shrinking GPUs to show many of the same problems. They are loving smaller lithography for mobile devices, but perhaps there are bigger hurdles on the high-end GPU side of things.

The problem exists because the die size gets smaller and smaller, as they are not increasing the number of cores. Lynnfield was 290 mm2, Sandy Bridge was 216 mm2, Ivy Bridge was 160 mm2. With Broadwell this will probably get below 120 mm2.
By the way, we are still paying pretty much the same price for quad core CPUs, and that is absolutely pathetic. The manufacturing costs must be insanely low.

2009 Lynnfield 45 nm 290 mm2 - 196 $
2014 Haswell 22 nm 177 mm2 - 182 $ (242 $ for a model that allows overclocking, sick)

We should have had six-core CPUs for 200 $ by now.



Die size is not a problem for GPUs. High-end GPUs are usually between 400 and 600 mm2, so heat dissipation is not a problem.
Whenever they change the node, they pack a lot more transistors into the chips, making them much faster while keeping a similar die size. Intel do not do that anymore, they are reducing the die size without increasing performance or clock speeds.
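Running the poster's own launch-price figures through a quick cost-per-area check (this deliberately ignores that wafers get more expensive per mm² on newer nodes, so treat it as a rough gauge):

```python
# Price per square millimetre from the launch figures quoted above.
chips = {
    "Lynnfield (45 nm)": (290, 196),  # (die area in mm^2, launch price in $)
    "Haswell (22 nm)":   (177, 182),
}
for name, (area_mm2, price_usd) in chips.items():
    print(f"{name}: ${price_usd / area_mm2:.2f} per mm^2")
# Lynnfield (45 nm): $0.68 per mm^2
# Haswell (22 nm):   $1.03 per mm^2
```

The die shrank by almost 40% while the price barely moved, so the price per unit of silicon actually went up, which is the "absolutely pathetic" part of the complaint.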
 
Thanks.. was hoping someone could bridge the difference if I threw it in there.

Is it possible they can't fit any more transistors since it's smaller?

Where would I look for some more inside info on chip engineering?
 

Yeah, we discovered the hot water.... :)

By the way.... you can counter this trend simply by ignoring the existence of Intel.

Just be smarter and buy all AMD. ;)

 

No thanks, moar power required. Zen is too far away. Might mix Intel and AMD when 390X comes out though.
 