Monday, February 9th 2015

Radeon R9 380X Based on "Grenada," a Refined "Hawaii"

AMD's upcoming Radeon R9 380X and R9 380 graphics cards, with which it intends to take on the GTX 980 and GTX 970 directly, will be based on a "new" silicon codenamed "Grenada." Built on the 28 nm fab process, Grenada will be a refined variant of "Hawaii," much in the same way "Curacao" was a refined "Pitcairn" in the previous generation.

The Grenada silicon will have the same specs as Hawaii - 2,816 GCN stream processors, 176 TMUs, 64 ROPs, and a 512-bit wide GDDR5 memory interface holding 4 GB of memory. Refinements in the silicon over Hawaii could allow AMD to increase clock speeds enough to outperform the GTX 980 and GTX 970. We don't expect the chip to be any more energy efficient at its final clocks than Hawaii; AMD's design focus appears to be performance. AMD could spare itself the embarrassment of a loud reference-design cooler by leaving the chip to quiet custom-design cooling solutions from AIB (add-in board) partners from day one.
In other news, the "Tonga" silicon, which made its debut with the performance-segment Radeon R9 285, could form the foundation of the Radeon R9 370 series, consisting of the R9 370X and the R9 370. Tonga physically features 2,048 stream processors based on the more advanced GCN 1.3 architecture, 128 TMUs, 32 ROPs, and a 384-bit wide GDDR5 memory interface. Both the R9 370 and R9 370X could feature 3 GB as the standard memory amount.

The only truly new silicon in the R9 300 series is "Fiji." This chip will be designed to drive AMD's high-end single- and dual-GPU graphics cards, and will be built to compete with the GM200 silicon from NVIDIA and the GeForce GTX TITAN-X it will debut with. Fiji features 4,096 stream processors based on the GCN 1.3 architecture - double that of "Tonga" - along with 256 TMUs, 128 ROPs, and a 1024-bit wide HBM memory interface offering 640 GB/s of memory bandwidth. 4 GB could be the standard memory amount. The three cards AMD will carve out of this silicon are the R9 390, the R9 390X, and the R9 390X2.
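The quoted bandwidth follows directly from the interface width and effective per-pin data rate. As a quick sketch of that arithmetic (the 5 Gbps effective rate is inferred from the rumored figures, not an announced spec):

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps).
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(1024, 5.0))  # rumored Fiji HBM interface -> 640.0 GB/s
print(bandwidth_gbs(512, 5.0))   # Hawaii's 512-bit GDDR5 at 5 Gbps -> 320.0 GB/s
```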
Source: 3DCenter.org

156 Comments on Radeon R9 380X Based on "Grenada," a Refined "Hawaii"

#51
RejZoR
Your GTX 970 is NEVER using just 5W when idling on a desktop...
#52
Aquinus
Resident Wat-man
RejZoR: Your GTX 970 is NEVER using just 5W when idling on a desktop...
My bad, it's 10 watts. Still 40 lower... Also, I find it funny that you say "your 970," as I don't own one - I'm pretty sure my specs still say I'm rocking two 6870s, which also suck at multi-monitor power consumption. :)

All I'm saying is that it's a selling point and nothing bad comes out of improving efficiency.
#53
RejZoR
Because a multi-monitor setup is somehow what the majority of users have? And if you can afford a GTX 970 and 2 monitors, surely those few watts of difference make zero real-world difference.

Now show me the single-monitor difference - something the large majority of users have...
#54
Aquinus
Resident Wat-man
RejZoR: Because a multi-monitor setup is somehow what the majority of users have? And if you can afford a GTX 970 and 2 monitors, surely those few watts of difference make zero real-world difference.

Now show me the single-monitor difference - something the large majority of users have...
Why would I care what other users want when I'm looking at a GPU for myself? I think you misread my post...
Aquinus: I care that the 970 has a multi-monitor idle consumption of <5 watts and the 290 is closer to 55 watts.
I never said most people have multiple monitors... but I do. You already know the answer to your own question as well; I don't appreciate the rhetorical questions. It's not a lot of money, but it's easily a 12-pack of beer every month or two that I wouldn't otherwise have. While the GPU alone might not make a difference, inefficiencies add up, and they contribute to the amount of heat in your machine. If you want a quiet machine, you're not going to get it by eating power for breakfast. The simple fact is that it doesn't just come down to the cost of electricity. An HTPC with a 200-watt idle like my tower wouldn't be a very quiet HTPC, now would it?
#55
THE_EGG
RejZoR: Who cares about efficiency, really? If they can still manage noise levels, I don't really care. Efficiency mattered for miners, but you don't play games 24/7 - or do you?
I care about efficiency as long as it doesn't come at an ungodly increase in price for the end product. The reason is that an efficient graphics card produces less heat, so its cooler's fans can run slower (and therefore quieter), while a hungrier card that draws more power produces more heat and has to spin its fan(s) at a higher rpm (noisy) to achieve the same or similar temperatures. At the high end of the spectrum it seems that liquid cooling could become the norm for managing temperatures at a reasonable noise level (e.g. the 295X2). That being said, if I were into custom liquid cooling, I probably would not care all that much about efficiency, as it would be pretty easy to add more rads.
#56
64K
Serpent of Darkness: For Nvidia, GTX 760 is a GTX 680 rebrand. GTX 960 is a GTX 780 rebrand.
The GTX 770 was a refresh of the GTX 680, and the GTX 960 is a totally different architecture from the GTX 780. When the GM200 drops, we will see that Nvidia intended the 960/970/980 to be mid-range GPUs, even though the 980 is the fastest GPU so far in the Maxwell lineup.

I've been reading some rumors that say the R9 380X will be more power efficient, with the same number of cores as the R9 290X. If there's more room to push the clocks higher due to improved efficiency, then it should take the title of fastest GPU around from Nvidia.
#58
alwayssts
xfia: It would have been nice to see a 20nm chip. It probably could have lowered the power draw anyway, but maybe HBM is more efficient and will help out. I have not seen anything about the efficiency...
Yeah, but it's understandable given cost/yields/etc. 20nm would likely have saved around 20% on the GPU side. HBM should save about 15W on the memory side (vs. the old product - granted, this is faster).
Assimilator: 640GB/sec, why? Unless AMD is planning to address the "4K problem" by swapping out textures all the time, I don't see any benefit to this, and lots of drawbacks (price being one of them). Considering nVIDIA's GPUs have always been able to match AMD's for performance while using much narrower bus widths (Hawaii: 512-bit, Maxwell: 256-bit), I'm not seeing any good reason - unless of course AMD's architecture is far more bandwidth-dependent than nVIDIA's.
Okay, that's a loaded question. The short-ish answer is yes, AMD's arch is more memory-dependent because of (typically) greater available floating-point/shader resources (AMD's units do double duty for special functions as well, whereas nvidia uses fewer FPUs plus smaller special-function units, at a ratio that is often close to fully utilized in many scenarios). On top of that, around 1/3 of what would be nvidia's similar-ish required bandwidth is derived from a mixture of greater on-die cache and whatever benefits their newer compression allows. If I had to guess, the split on that is something like 75% cache improvements, 25% compression improvements. IOW, the compression improvements help around 8% or slightly more, just like Tonga for AMD.
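To make that split concrete, here's a toy calculation using the rough numbers above - purely illustrative, since both the 1/3 savings and the 75/25 split are guesses to begin with:

```python
# Toy model of the bandwidth-savings split guessed above (illustrative, not measured).
total_savings = 1 / 3        # fraction of required bandwidth covered on-die (guess)
cache_share = 0.75           # guessed share from larger on-die cache
compression_share = 0.25     # guessed share from color compression

print(f"cache:       {total_savings * cache_share:.1%}")        # ~25.0%
print(f"compression: {total_savings * compression_share:.1%}")  # ~8.3% -> "around 8%"
```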
HumanSmoke: The really odd thing about this lineup is what AMD expects to field in the discrete mobile arena. Presently, the top part is Pitcairn-based (M290X), in its third generation of cards. The M295X's (Tonga) heat production in the iMac probably precludes its use in laptops, and Hawaii is clearly unsuitable.

That's where HBM starts. Better to have too much bandwidth than too little.

Hey, someone has to lead the charge. Just imagine the marketing mileage from 640GB/sec. It's like those nutty theoretical fillrates pumped up to eleven!

Fiji will do double duty as a compute chip, where on-card bandwidth will play a much greater role in GPGPU. FWIW, even Nvidia are unlikely to go below 384-bit for their compute chip. The one thing that will hold Fiji back is the 4GB as a FirePro (not the bandwidth). AMD already has the W9100 with 16GB of onboard GDDR5 for a reason.
First, never say never... IIRC nvidia sold big-ass Fermi (granted, cut wayyy down and clocked in the basement) in laptops.

Right about HBM... plus, if they shrink/reconfigure the core design on 14/16nm for 50% more resources, the mem controller could probably be reasonably recycled... it's also possible they could cut cache or what-not because of that off-die bandwidth. Not saying they did/will... but who knows? It's possible it's there for more than being ahead-of-its-time (but a necessary evil) on bandwidth and behind-its-time on density configuration. Even if they didn't change anything, it should be good for an extra chunk of performance (double the required bandwidth typically gives around a 16% boost... in essence this could give something like 8% over what one might expect given the other specs and typical core usage).

Either way you look at it, this thing *has* to compete with a GM200 21-SMM part. Say that can do 1400MHz best-case; that essentially means this has to do 1200 to compete. The bandwidth required for quite literally 10TF is... well... a lot. You'd be talking about needing an 8GHz/512-bit controller, which wouldn't exactly be small or power efficient (if even possible within die-size limits). As odd as it sounds, twice the controllers at (what apparently amounts to) 5GHz is likely both fewer transistors and more efficient within the GPU logic.
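A quick back-of-the-envelope check on those numbers (a sketch; 2 FLOPs per shader per clock is the standard FMA assumption for GCN and Maxwell):

```python
# Single-precision throughput: shaders * 2 FLOPs/clock * clock (GHz) / 1000 = TFLOPS.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(tflops(4096, 1.2))  # rumored Fiji at 1200 MHz -> ~9.8 TFLOPS, "quite literally 10TF"

# Two ways to feed a chip like that (GB/s = bus bits / 8 * Gbps):
print(512 / 8 * 8.0)   # hypothetical 8 Gbps GDDR5 on 512-bit -> 512 GB/s, hot and huge
print(1024 / 8 * 5.0)  # doubled-width interface at ~5 Gbps effective -> 640 GB/s
```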
the54thvoid: Don't try comparing the arguments over the 970's memory shenanigans to AMD's next uber chip (Fiji). I'm not clued up on it, but many say HBM only caters for a 4GB memory allowance (for now?...). The 970 is the cheaper Maxwell performance part, whereas the 390X will be the single-GPU top tier.

And yes, those that bought cards with 4GB (or not quite, as the 970 is) would have figured that into their choice. If the 390X is to be AMD's next-gen top-tier card, you would hope it would have more, as AIBs have already seen fit to release an 8GB 290X, albeit with small rewards at 4K.

IMO, I don't know if we need >4GB for gaming purposes except on poorly coded things (look at the recent CoD for bad memory hogging, or Titanfall IIRC). But if we do need >4GB in the next year or so, I'm pretty sure there will be developments to allow higher memory usage on the AMD chips.

So, to be plain - 4GB won't be an immediate worry and I'm sure it will be addressed when needed.
Correct. HBM is currently 1GB per package, and the implementation, unlike the setup of GDDR5, is limited to four packages. That means 4GB. 2nd gen is due at the end of the year. Does that mean a refresh before 14/16nm? Conceivably... but who knows how fast AMD is transitioning to the smaller process. I continue to disagree about 4GB being enough... If one were to argue things should be properly coded for 4K/8GB (or possibly 6GB in many instances), we could have a conversation. That said, it's not going to stop HBM memory density from increasing, nor badly optimized console ports targeted toward that shared pool of memory at a low resolution from being a scaling factor in some regards. I still stand by GM200/R9 390X for the most part being 1440p-targeted chips (think in terms of a 900p 30fps PS4 game scaled to 1440p/60)... just like GM204 is mostly a 1080p-targeted chip. In those respects, it can be argued 4GB (or 6GB in some cases) is sufficient.
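The capacity math in that first bit, spelled out (a sketch based on the figures above):

```python
# First-gen HBM as described above: at most four packages per GPU.
stacks = 4
print(stacks * 1)  # gen-1 packages (1 GB each) -> 4 GB, the rumored Fiji ceiling
print(stacks * 2)  # gen-2 density (2 GB each, due later) -> 8 GB
```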
#59
GhostRyder
Well, now we seem to be getting a bigger picture of what's going on. It seems the R9 380X is going to be an upgraded R9 290X, similar to how the R9 285 was an upgraded R9 280 (though probably without reducing the memory this time). If that's the case, then the power savings from the improved design are going to be put toward improved clocks to knock the performance up a bit. The R9 390X sounds cool, but I do agree the 4GB, even with that insane bandwidth, is going to be the part I worry about on the gaming front, especially since it's probably going to be the first chip (depending on when Nvidia releases their Titan II) to handle 4K on a single card (not perfectly, of course, but decently enough). Hopefully by that point we might see 8GB versions of the cards for the hardcore users out there who want the extra VRAM, though that is doubtful given the current HBM limits.

Cannot wait to see more as this develops!
#60
jabbadap
GhostRyder: Well, now we seem to be getting a bigger picture of what's going on. It seems the R9 380X is going to be an upgraded R9 290X, similar to how the R9 285 was an upgraded R9 280 (though probably without reducing the memory this time). If that's the case, then the power savings from the improved design are going to be put toward improved clocks to knock the performance up a bit.
Not really comparable. The R9 280 is GCN 1.0, and the R9 285 is the two-generations-newer GCN 1.2. The R9 290X is GCN 1.1, and the R9 380X seems to still be GCN 1.1 - a refined, optimized Hawaii, that is. It's more like the HD 7970 GHz Edition (Tahiti) to R9 280X (Tahiti XTL) transition.
GhostRyder: The R9 390X sounds cool, but I do agree the 4GB, even with that insane bandwidth, is going to be the part I worry about on the gaming front, especially since it's probably going to be the first chip (depending on when Nvidia releases their Titan II) to handle 4K on a single card (not perfectly, of course, but decently enough). Hopefully by that point we might see 8GB versions of the cards for the hardcore users out there who want the extra VRAM, though that is doubtful given the current HBM limits.

Cannot wait to see more as this develops!
There are no HBM packages bigger than 1GB out yet. Getting an 8GB card would need higher-density 2GB HBM packages from Hynix first. With the memory available now, it would need a 2048-bit memory interface from the GPU (I don't think you can split memory bandwidth between two HBM packages - I could be wrong, though).

In short, the R9 380/X, R9 370/X, and R7 260/X (I really hope this isn't rebranded Pitcairn, as 3DCenter rumored it to be) are kind of meh - been there, done that. The really interesting parts will be the R9 390 series.
#61
GhostRyder
jabbadap: Not really comparable. The R9 280 is GCN 1.0, and the R9 285 is the two-generations-newer GCN 1.2. The R9 290X is GCN 1.1, and the R9 380X seems to still be GCN 1.1 - a refined, optimized Hawaii, that is. It's more like the HD 7970 GHz Edition (Tahiti) to R9 280X (Tahiti XTL) transition.
Yes, but my point was that it's supposed to get the upgraded features currently seen only in Tonga (for instance, the compression method), not so much the generational jump.
jabbadap: There are no HBM packages bigger than 1GB out yet. Getting an 8GB card would need higher-density 2GB HBM packages from Hynix first. With the memory available now, it would need a 2048-bit memory interface from the GPU (I don't think you can split memory bandwidth between two HBM packages - I could be wrong, though).

In short, the R9 380/X, R9 370/X, and R7 260/X (I really hope this isn't rebranded Pitcairn, as 3DCenter rumored it to be) are kind of meh - been there, done that. The really interesting parts will be the R9 390 series.
That's why I said it's "doubtful given the current HBM limits"; whether doubling up is possible at all, or whether we'll be waiting for bigger chips, will only be revealed once we get the full rundown on the stacked memory.
#62
MxPhenom 216
ASIC Engineer
I wonder what the pricing will be like on the 390/390x.
#63
64K
MxPhenom 216: I wonder what the pricing will be like on the 390/390x.
It's just speculation right now, but the R9 280X launched at $300, the R9 290 at $400, and the R9 290X at $550. I would guess that the 300 series will launch within $50 of the 200 series. I expect the R9 380X will outperform the GTX 980 if the rumors are true. That would be something, if AMD launched that GPU for $300-$350 in a few months - $200-$250 cheaper than the 980. Ouch.
#64
Sony Xperia S
Yes, AMD cards' severe weakness is their tremendous idle power consumption. It's high to the point of being ridiculous. They are amateurs in this regard compared to nvidia's engineers. :(

Also, yes, stacked memory needs some time to ramp up, but I guess they needed to start at some point, and it is now.
#65
RejZoR
40W is ridiculously high for GPUs that came out 2.5 years ago, and you're all comparing it to a brand-new GPU from NVIDIA released 4 months ago. Ooooook...
#66
64K
Sony Xperia S: Yes, AMD cards' severe weakness is their tremendous idle power consumption. It's high to the point of being ridiculous. They are amateurs in this regard compared to nvidia's engineers. :(

Also, yes, stacked memory needs some time to ramp up, but I guess they needed to start at some point, and it is now.
Idle power draw for the cards

R9 290X Reference Card 17 watts
R9 290X Lightning 22 watts

GTX 980 Reference 8 watts
GTX 980 Gaming 14 watts

If you leave your computer on 24 hours a day, every day, idling, and you pay the national average per kWh (12 cents), then the reference R9 290X will add 78 cents a month to your power bill over the reference GTX 980. For the factory-OC cards, the R9 290X Lightning will add 70 cents a month over the GTX 980 Gaming. It's just not much at all, and if that amount matters to you, or you pay a lot more for electricity, then I would say turn your rig off when not in use.
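For anyone who wants to check those figures, the arithmetic looks like this (a sketch assuming a 30-day month):

```python
# Monthly cost of extra idle draw, 24/7, at the quoted 12 cents/kWh.
def monthly_cost_usd(extra_watts: float, cents_per_kwh: float = 12.0) -> float:
    kwh = extra_watts * 24 * 30 / 1000   # watt-hours over a 30-day month -> kWh
    return kwh * cents_per_kwh / 100

print(monthly_cost_usd(17 - 8))   # reference 290X vs reference 980 -> ~$0.78
print(monthly_cost_usd(22 - 14))  # 290X Lightning vs 980 Gaming   -> ~$0.69
```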
#67
Rahmat Sofyan
RejZoR: 40W is ridiculously high for GPUs that came out 2.5 years ago, and you're all comparing it to a brand-new GPU from NVIDIA released 4 months ago. Ooooook...
Ouchhhh, bullseye :)

Good point, bro. Just wait for the R300 series and then we can talk and compare it to the GTX 900 series - but if AMD is too late, it'll be another story...
#68
ShurikN
64K: Idle power draw for the cards

R9 290X Reference Card 17 watts
R9 290X Lightning 22 watts

GTX 980 Reference 8 watts
GTX 980 Gaming 14 watts

If you leave your computer on 24 hours a day, every day, idling, and you pay the national average per kWh (12 cents), then the reference R9 290X will add 78 cents a month to your power bill over the reference GTX 980. For the factory-OC cards, the R9 290X Lightning will add 70 cents a month over the GTX 980 Gaming. It's just not much at all, and if that amount matters to you, or you pay a lot more for electricity, then I would say turn your rig off when not in use.
I can probably find more money than that on the pavement... daily :D
Idling is not an issue. People are grasping at straws...
#69
Ferrum Master
IMHO AMD screwed up on one thing... Bermuda should have been a triangle :D a triple head :laugh:
#70
D1RTYD1Z619
They need to release them already. IM VIDEO CARDLESS.
#71
xorbe
the54thvoid: Yeah, that's wrong too. 760 was the 670
That's wrong too ... 670 had 1344 cores, 760 had 1152, and so on.
#72
ZoneDymo
Kinda disappointed with this - just some higher clocks? That's not going to put much of a dent in the landscape :(. I want actual new cards with new tech that are actually more power efficient, faster, and fully 4K capable :(

#4Kapable
#73
Casecutter
64K: If you leave your computer on 24 hours a day, every day, idling...
Wouldn't that be time you'd be "sleeping," and wouldn't AMD ZeroCore be in play?
Just saying.
#74
arbiter
the54thvoid: Yeah, that's wrong too. 760 was the 670 and the 770 was the 680. Both Kepler.

As Breit rightly says, GM is not GK.
At least with the 680 to 770, Nvidia did clock bumps and made the 770 faster, unlike AMD, where the 7970 to 280X had its clocks lowered.
Ferrum Master: Actually you're both exaggerating... The 290X stock blower is a disaster and that's a fact; it's poorly made to begin with. But yes, you are right, a news article should not contain such bashing already - that's our task to do :D.
News articles can bash all they want when pointing at the past, where a company completely screwed up on a product - and yes, that stock blower from a 6000-series card was a complete screw-up. Pretty bad that you lose 20% performance after 5 minutes of gaming.
refillable: Wow, I was disappointed... I thought the 380X was Fiji and Bermuda was the 390X. Well, I guess to put this in perspective: the 380X will shoot straight at the 970, and the 390X will compete with the future Titan II/980 Ti.

What could be disappointing is the efficiency. With these chips you will only be getting 285-like efficiency, which is nowhere near Maxwell. Heat should be maintained pretty well IMO; AIB coolers with double fans are going to keep the temperatures down.
I am laughing up a storm atm. Remember all the AMD fans jumping on rumors thinking the 380X was gonna be a 4096-shader GCN monster, yet it's not even close to what they were expecting.
#75
the54thvoid
Super Intoxicated Moderator
xorbe: That's wrong too ... 670 had 1344 cores, 760 had 1152, and so on.
Pedant.

You know fine well what I meant.