Monday, May 19th 2014

AMD Readies 28 nm "Tonga" to Take on GM107

NVIDIA's energy-efficiency leap achieved on the existing 28 nanometer process, using the "Maxwell" based GM107, appears to have rattled AMD. The company is reportedly attempting a super-efficient, 28 nm, mid-range chip of its own, codenamed "Tonga." The chip could power graphics cards that compete with the GeForce GTX 750 Ti and GTX 750. The chip is likely to be based on the Graphics CoreNext 2.0 micro-architecture, the same one that drives "Hawaii," which means AMD isn't counting on the micro-architecture for efficiency gains. It could feature an evolution of PowerTune, which works closer to the metal than its existing implementation on "Hawaii." Other features could include Mantle, TrueAudio, and perhaps even XDMA CrossFire (no cables needed). The chip could be wired to up to 2 GB of memory.

Another, equally plausible theory doing the rounds is that "Tonga" could be a replacement for "Tahiti Pro," designed to compete with the GK104 at a much lower power footprint (than "Tahiti"), so AMD could more effectively compete with the GeForce GTX 760. The chip could be similar in feature-set to "Tahiti," with a narrower memory bus (256-bit wide), but higher clock speeds to make up for it. If this theory holds true, then "Tonga" could disrupt both "Tahiti Pro" and "Curacao XT." Curacao XT (R9 270X) is designed to offer a value-conscious alternative to the $250 GTX 760. The R9 280 is competitive in performance, but takes a beating on the energy-efficiency front, and is also costlier to manufacture, due to its higher transistor count and four additional memory chips. We could hear more at Computex 2014.
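As a rough illustration of that trade-off (the memory clocks below are assumptions chosen for the arithmetic, not leaked specifications), peak GDDR5 bandwidth is simply bus width times effective data rate:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak GDDR5 bandwidth in GB/s: bus width in bytes times effective data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Tahiti Pro (R9 280): 384-bit bus at 5.0 Gbps effective -> 240 GB/s
tahiti_pro = bandwidth_gb_s(384, 5.0)

# Hypothetical 256-bit "Tonga" at two assumed memory speeds
tonga_low = bandwidth_gb_s(256, 5.5)   # 176 GB/s
tonga_high = bandwidth_gb_s(256, 7.0)  # 224 GB/s, close to Tahiti Pro's figure

print(tahiti_pro, tonga_low, tonga_high)
```

In other words, a 256-bit card would need memory in the region of 7 Gbps to get within striking distance of Tahiti Pro's 240 GB/s, which is why higher clock speeds would have to do the compensating.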
Source: VideoCardz

27 Comments on AMD Readies 28 nm "Tonga" to Take on GM107

#1
seronx


Tonga is replacing Tahiti Pro as you can tell;
FirePro W8100(Tonga XT) from FirePro W8000(Tahiti Pro)
R9 M295X(Amethyst XT) is above Pitcairn by a large margin.

Iceland is a rebranded Oland.

Maui has yet to be revealed.
Posted on Reply
#2
The Von Matrices
It would be great if AMD could achieve it, but I find it hard to believe that AMD could equal or exceed the efficiency of GM107 while using the exact same architecture as Hawaii.
Posted on Reply
#3
GhostRyder
Interesting, but I will be curious to see what this could mean for low-power options. Honestly, I think this may be a bit late to the game if they are still using GCN Hawaii; it seems like it should be time to focus on the next generation.
Posted on Reply
#4
Steevo
The Von MatricesIt would be great if AMD could achieve it, but I find it hard to believe that AMD could equal or exceed the efficiency of GM107 while using the exact same architecture as Hawaii.
I agree, and have to wonder why they are focusing on this at all, when an APU system with a slightly higher TDP has much better market appeal than a power-miser system with an anemic CPU choking a meh GPU in the name of saving a few watts. Our design setups for systems such as these are out of sync with what we want: good CPU performance for interwebs and Excel/Word stuff, and OK GPU performance for watching cat videos on YouTube and Netflix. Why buy a $50 CPU and a $60 GPU when I would be just as happy with a 65/95W APU doing the same work?
Posted on Reply
#5
LeonVolcove
The more competition the better. Good job AMD.
Posted on Reply
#6
v12dock
The Von MatricesIt would be great if AMD could achieve it, but I find it hard to believe that AMD could equal or exceed the efficiency of GM107 while using the exact same architecture as Hawaii.
Did you forget about the 5870/5850 and the 480/470 being direct competitors?
Posted on Reply
#7
The Von Matrices
SteevoI agree, and have to wonder why they are focusing on this at all, when an APU system with a slightly higher TDP has much better market appeal than a power-miser system with an anemic CPU choking a meh GPU in the name of saving a few watts. Our design setups for systems such as these are out of sync with what we want: good CPU performance for interwebs and Excel/Word stuff, and OK GPU performance for watching cat videos on YouTube and Netflix. Why buy a $50 CPU and a $60 GPU when I would be just as happy with a 65/95W APU doing the same work?
There's always the upgrade market; it's easier to add a GPU than to replace your CPU and motherboard.

It would make sense for AMD to use a mainstream 28nm GPU to introduce some new architectural changes, since

1.) The 20nm process is not ready yet
2.) Mainstream GPUs are going to be on 28nm for the foreseeable future (since it's less than half the price of 20nm), which means that this 28nm GPU will not be obsolete once 20nm GPUs come out
v12dockDid you forget about the 5870/5850 and the 480/470 being direct competitors?
That's two different architectures; the OP states that this new GPU will use the same architecture as the old one. A better comparison is the 3870 to 4870, which were both on the 55nm process and both used VLIW5, although the 4870 did have architectural tweaks compared to the 3870. The OP indicates that the new GPU doesn't even have that opportunity to improve performance.
Posted on Reply
#8
Steevo
But that is the crux: if you need a GPU update, you probably need a CPU update. Systems close to their 5-year EOL were, for me, always better replaced than upgraded with a single new component. If you are looking to spend $200 to upgrade, a new APU/board and RAM would be a better fit.
Posted on Reply
#9
FrustratedGarrett
So how much of the OP's post is confirmed or valid? Is this new chip meant to compete against Maxwell or GK104? Is it really based on the same two-and-a-half-year-old GCN microarchitecture? I mean, it's not like Maxwell is a revolutionary microarchitecture compared to Kepler; it's more a rearrangement of the logic blocks with slight alterations here and there.

Considering all the upcoming games are the same old **** recycled, visuals- and content-wise, I'm not sure there's much to get excited about here...
Posted on Reply
#10
alwayssts
seronx

Tonga is replacing Tahiti Pro as you can tell;
FirePro W8100(Tonga XT) from FirePro W8000(Tahiti Pro)
R9 M295X(Amethyst XT) is above Pitcairn by a large margin.

Iceland is a rebranded Oland.

Maui has yet to be revealed.
Interesting.

I've been saying since the launch of Tahiti that this part should exist (as a refresh part... launching more than a year ago). I totally understand why Tahiti is what it is: everything from the 'extra' units for yields on the early process (it really only needs 28-30 CUs on the over-under, similar to GK110 only needing 12-13 SMX, whereas GK104 'needs' all 8), to having extra compute as a differentiator, to having sufficient bandwidth for those units up to the process's max clock at a 300w TDP, which really only made sense with a 384-bit bus (or to help Tahiti Pro's performance per clock be similar to a part with the ideal count of ~1880sp shaders for 32 ROPs), to having the extra RAM for higher resolutions (at the time, vs. 2GB). It made sense from a certain point of view, for that certain point in time, but what it was never going to be was under 225w (in a non-wasteful config) or a shorter-length card because of that... and currently Tahiti is rather weirdly shoe-horned in as a 'max 250w' part (because of Hawaii).

Nvidia went the opposite route with GK104. It hurt them in the beginning for yields, because of the tight design and faster memory controller, but people paid the absurd price when they branded it a high-end card because of that 'efficiency' (in power vs. cost). As time went on and yields got better, that design made more and more sense for its now-current market, especially when you look at where the 225w max TDP puts their clocks versus the average low-voltage (power-efficient) clock on the process... it's fairly genius and near perfect. To this day, a shorter-length 680 (equivalent to 1792sp but requiring less bandwidth) would be an ideal card for a lot of people, compared to the over-reach they did with the 770. I imagine we'll see something similar with Maxwell (10-12 SMM)... getting as close to ~1880sp as possible (an SMM is similar to 160sp but needs bandwidth for 128, because of the cache/unit structure of sp/sfu) with the best power usage through more efficient RAM, more cache, and a more ideal clock/voltage, while staying somewhere around 225w.

I wouldn't begrudge AMD getting GPU SKUs out the door that were (just over or) under 225w, with the best mix of units/clock/bandwidth for that TDP/die size/cost on a 256-bit bus, as this showdown was inevitable (be it earlier versus GK104 or later versus an ~11-12 unit Maxwell)... but damn, they missed that boat by a long, long way... even if only now does newer RAM (i.e. 1.5-1.55v 7gbps, perhaps 1.6v 8gbps, certainly 4Gb if not 2Gb because of HBM) make total efficiency and a more perfect balance of units/clocks/bandwidth fit within that TDP.
Posted on Reply
#11
Solid State Brain
I really hope these cards will address the unacceptable idle power consumption with multimonitor setups on AMD video cards. That would not be an insignificant improvement at all! It's actually the main reason why I'm still not sure whether to upgrade my video card to a new NVidia or AMD one.

Have a look at this chart to understand what I'm talking about:

Posted on Reply
#12
Sony Xperia S
Solid State BrainI really hope these cards will address the unacceptable idle power consumption with multimonitor setups on AMD video cards.
That ridiculous power consumption comes from the memory chips running at full speed. In Blu-ray playback it is again a similar story.

They have, for now, some issues figuring out how exactly to implement improvements that address this problem. :)

They had a similar problem with the GDDR5 memory controller on the Radeon HD 4890, which was fixed in the Radeon HD 6870.
Posted on Reply
#13
Solid State Brain
Sony Xperia SThat ridiculous power consumption comes from the memory chips running at full speed. In Blu-ray playback it is again a similar story.

They have, for now, some issues figuring out how exactly to implement improvements that address this problem. :)

They had a similar problem with the GDDR5 memory controller on the Radeon HD 4890, which was fixed in the Radeon HD 6870.
Long ago I noticed on my HD 7770 that trying to work around the problem manually, by lowering memory speed during multimonitor usage, causes both screens to occasionally flicker when the GPU load changes. The more the memory speed is reduced, the more often this happens. Lowering it too much causes the video card to hang badly, forcing the user to hard-reset the system.
Posted on Reply
#14
techy1
That is good news, that AMD is back on mainstream GPUs... I mean, they made some $$$ from 290X miners and left the GTX 750/750 Ti without any reasonable competition, but the mining scam-pyramid is gone and left a dip in AMD's GPU market share. I hope it was worth it (I really do, cuz I wish all the best for AMD, which regularly posts quarterly losses).
Posted on Reply
#15
Sony Xperia S
techy1AMD, which regularly posts quarterly losses
AMD is in negative figures because they have a big debt and are transferring money to cover it.
Posted on Reply
#16
Casecutter
btarunrappears to have rattled AMD... reportedly attempting a super-efficient... AMD isn't counting on the micro-architecture for efficiency gains. equally plausible "compete" with the GeForce GTX 760, a value-conscious alternative to the $250 GTX 760. The R9 280 takes a beating, is also costlier...
Is it just me, or whenever this joker covers AMD, is there always an undertone of biased banter creeping into what aspires to be "credible journalism"?
Posted on Reply
#17
Eagleye
CasecutterIs it just me, or whenever this joker covers AMD, is there always an undertone of biased banter creeping into what aspires to be "credible journalism"?
I find btarunr to be quite neutral compared to other sites like wccftech and S/A etc. But I can't say the same for Mr. Wizard on this site, as I have yet to see a negative Nvidia or positive AMD review from him, or maybe I missed it.

Either way, I think it's a fact that Nvidia has been better in performance per watt in games for the past two generations. I think Nvidia is further along in fine-tuning than AMD is with its newly designed architecture; also, AMD has no excuse next gen.

Edit: I agree with the OP that AMD has been rattled by the GTX 750 Ti and is scrambling for a response. AMD is always chasing the leader, always a step behind Intel and Nvidia, it seems.
Posted on Reply
#18
The Von Matrices
EagleyeI can't say the same for Mr. Wizard on this site, as I have yet to see a negative Nvidia or positive AMD review from him, or maybe I missed it.
I think the problem you have with W1zzard is that he emphasizes engineering over price when making his reviews. Therefore, at a given performance level, NVidia usually wins because the company almost always has the better engineered card (even though it costs more than the AMD equivalent). It's just a different approach to reviewing, for better or for worse.
Posted on Reply
#19
Casecutter
Don't get me wrong, the 750 Ti is a nice entry-mainstream gaming card for those looking to start gaming at 1080p, though really nothing different from what the 5670 did back when entry-mainstream was predominantly 1600x900. The hard thing is that even with inflation you now pay some 50-60% more to enter. Nvidia also has Blu-ray playback power consumption down, something AMD lacks. However, while the GTX 750 Ti has great gaming efficiency, how much would the entry-mainstream crowd actually play? Is it even 2 hours a day?

Looking at the two, they'll offer very similar "visual experiences" in the fairly mundane i3 OEM boxes they mostly find their way into, sparring fairly consistently depending on the game. Power draw for the GTX 750 Ti at "peak gaming" is 57W versus 93W for the 260X, so yes, something like a 50% difference. The question is how much gaming, say 2 hours a day, would change your monthly bill once you factor in that 80% of the month your computer is in sleep. At that point AMD has ZeroCore, meaning the 260X almost shuts down (~1-2W), while the 750 Ti continues drawing its idle power of 7W. I'd like to see the total power used by two such identical OEM i3 systems running the identical average person's work/gaming/sleep pattern, and the difference in total energy on a monthly time-frame (a rough sketch of that math follows).
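A minimal sketch of that back-of-the-envelope comparison, assuming only the figures quoted above (57 W vs. 93 W while gaming, 7 W vs. ~2 W the rest of the time, 2 hours of gaming a day, a 30-day month) and counting the graphics card alone:

```python
# Card-only monthly energy estimate; wattages and hours are the assumptions
# from the post above (not measurements), so treat the output as illustrative.
HOURS_PER_MONTH = 30 * 24

def monthly_kwh(gaming_w, rest_w, gaming_hours_per_day=2.0):
    """kWh per month for the card alone, splitting time into gaming and idle/sleep."""
    gaming_hours = gaming_hours_per_day * 30
    rest_hours = HOURS_PER_MONTH - gaming_hours
    return (gaming_w * gaming_hours + rest_w * rest_hours) / 1000.0

gtx_750_ti = monthly_kwh(57, 7)  # ~57 W gaming, ~7 W idle, no ZeroCore-style shutoff
r7_260x = monthly_kwh(93, 2)     # ~93 W gaming, ~2 W when ZeroCore nearly shuts the card down

diff = gtx_750_ti - r7_260x
print(f"750 Ti: {gtx_750_ti:.1f} kWh  260X: {r7_260x:.1f} kWh  "
      f"difference: {diff:+.1f} kWh (~${abs(diff) * 0.12:.2f}/month at $0.12/kWh)")
```

Under those assumptions the gap works out to roughly one kWh a month (and actually favors the 260X), which is the point: the peak-gaming difference largely washes out once idle and sleep time are counted.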
Posted on Reply
#20
FrustratedGarrett
The Von MatricesI think the problem you have with W1zzard is that he emphasizes engineering over price when making his reviews. Therefore, at a given performance level, NVidia usually wins because the company almost always has the better engineered card (even though it costs more than the AMD equivalent). It's just a different approach to reviewing, for better or for worse.
What's funny about what you're saying is the fact that it's impossible for a reviewer who has no engineering degree in computer or electrical engineering, or at least a firm understanding in sequential logic design and CMOS circuits fabrications to perform an "engineering" evaluation of those cards. This is assuming you're right about the person in question trying to give us "engineering" evaluations of the products.

Kepler and GCN were designed with different goals in mind. I"m not gonna delve into that now, but a Kepler processing cluster is simpler decode and schedule wise than a GCN one. Maxwell is more of a move in the direction of GCN and Fermi than in the direction of Kepler.
The GM107 chip is bigger than 0.5 the size of GK104 yet it contains only 0.4x the number of ALUs of GK104.

Lastly, from the reviews I've seen, Maxwell's efficiency, which is more like 20% more efficient than GCN, could
be mostly due to optimization on the fabrication level, something AMD did with their new lower power Beema chips.
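For reference, plugging in the commonly cited public figures (assumed here: roughly 148 mm² and 640 ALUs for GM107 versus roughly 294 mm² and 1536 ALUs for GK104) reproduces those ratios:

```python
# Die area in mm^2 and ALU (CUDA core) counts; figures are the commonly
# cited public numbers, used here only to check the ratios in the post.
gm107 = {"area_mm2": 148, "alus": 640}
gk104 = {"area_mm2": 294, "alus": 1536}

area_ratio = gm107["area_mm2"] / gk104["area_mm2"]  # ~0.50
alu_ratio = gm107["alus"] / gk104["alus"]            # ~0.42

print(f"area ratio: {area_ratio:.2f}, ALU ratio: {alu_ratio:.2f}")
```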
Posted on Reply
#21
The Von Matrices
FrustratedGarrettWhat's funny about what you're saying is that it's impossible for a reviewer who has no degree in computer or electrical engineering, or at least a firm understanding of sequential logic design and CMOS circuit fabrication, to perform an "engineering" evaluation of those cards. This is assuming you're right about the person in question trying to give us "engineering" evaluations of the products.

Kepler and GCN were designed with different goals in mind. I'm not gonna delve into that now, but a Kepler processing cluster is simpler, decode- and schedule-wise, than a GCN one. Maxwell is more of a move in the direction of GCN and Fermi than in the direction of Kepler.
The GM107 chip is more than half the size of GK104, yet it contains only about 0.4x the number of ALUs of GK104.

Lastly, from the reviews I've seen, Maxwell's efficiency advantage, which is more like 20% over GCN, could be mostly due to optimization at the fabrication level, something AMD also did with their new low-power Beema chips.
I'm using the term "engineering" quite liberally, more as a reference to end product of the card as opposed to the actual logic design of the chips. W1zzard never goes into very low level details of the chips, so I doubt he rates them based upon that.

In his reviews, he talks about card design (through disassembling it), performance in games, power consumption, overclocking, noise, and temperatures. Nothing is mentioned about software bundles or GPGPU, and accessories and price are only a minor part. To sum it up, he favors game performance, power consumption, and overclocking, and NVidia usually beats AMD in those categories. If he put a focus on software bundles, accessories, and price, AMD would have much better review scores. You could call it bias that he favors categories in which NVidia usually does well, but he does applaud AMD when the company does excel in those categories (e.g. HD 5970), and he hits NVidia hard when the company does not produce a card that does well in those categories (e.g. GTX 590). In my opinion it's just a different approach to reviewing, and it's the reason why there are many review sites all with different opinions.
Posted on Reply
#22
buildzoid
The Von MatricesI'm using the term "engineering" quite liberally, more as a reference to end product of the card as opposed to the actual logic design of the chips. W1zzard never goes into very low level details of the chips, so I doubt he rates them based upon that.

In his reviews, he talks about card design (through disassembling it), performance in games, power consumption, overclocking, noise, and temperatures. Nothing is mentioned about software bundles or GPGPU, and accessories and price are only a minor part. To sum it up, he favors performance, power consumption, and overclocking, and NVidia usually beats AMD in those categories. If he put a focus on software bundles, accessories, and price, AMD would have much better review scores. You could call it bias that he favors categories in which NVidia usually does well, but he does applaud AMD when the company does excel in those categories (e.g. HD 5970), and he hits NVidia hard when the company does not produce a card that does well in those categories (e.g. GTX 590). In my opinion it's just a different approach to reviewing, and it's the reason why there are many review sites all with different opinions.
W1zzards reviews look more at end normal user experience. The reviews focus heavily on noise and power draw and FPS. They do not cover heavy OCing, the VRM section or other things the card can do. His reviews are great for the average gamer that builds his own PC because he cares about power draw noise and FPS more than the other things. AMD on the other hand has cards that have VRMs that are 40+% over powered has unlocked voltage doesn't have an annoying boost algorithm and gets almost same FPS as much more expensive Nvidia cards.
Posted on Reply
#23
Casecutter
Interesting this write-up doesn't mention the GloFo rumors... Really hope AMD can start getting parts from them.

Now, I don't see super great things from Tonga, as both AMD and Nvidia want to prop up their pricing structures and not dilute them too much before 20nm does come along. They have to work with what they've got, while not showing so much of their architectures' prowess that there's nothing substantial left to offer once 20nm mainstream parts make real financial sense.

If AMD gets R9 280+ performance with a perf/W percentage similar to a GM107's, that would be fine. The bigger upside is if AMD gets better wafer pricing from GloFo, which should mean even better card pricing, and, best of all, the geldings for $200 or less?

I don't see TSMC giving Nvidia any break on wafer pricing, at least at this point. If Nvidia ends up with a similar GK104-sized part, given that their pricing at this point suggests they can't move GK104 chips/cards for much under $230, AMD will have the BfB on their side. Heck, the Egg has had a Sapphire 280 (Tahiti) for $200... now down to $190 for a week or better. Consider why AMD would move to GloFo if they can sell Tahitis for that; they'll have to see a significant difference in price to help justify the risk.
Posted on Reply
#24
Sony Xperia S
CasecutterInteresting this write-up doesn't mention the GloFo rumors... Really hope AMD can start getting parts from them.

Consider why AMD would move to GloFo if they can sell Tahitis for that; they'll have to see a significant difference in price to help justify the risk.
I think it should be considered confirmed that AMD's GPUs will no longer be manufactured by TSMC.

In our April 28th AAPL Update, we noted that 20nm production levels at Samsung Austin were in the 3000-4000 wpm range. These volumes were sufficient to debug/improve their yields as they vied for second source position for the AAPL designs. But our latest checks indicate a surprising twist to the 20nm development story. We are getting indications that Samsung Austin is planning to ramp their 20nm technology designs to 12,000 wpm by July, but the upside is for QCOM, not AAPL. It is our understanding that QCOM is not happy with the 20nm development/yield progress at TSM and thus have been qualifying their latest technology node designs at Samsung. Obviously, the potential loss of business from AAPL and QCOM would be bad news for TSM after recently losing the AMD (AMD) GPU business. And while 20nm demand will continue to be strong for TSM, we expect Samsung to be a viable threat to TSM for the advanced process nodes going forward.

blogs.barrons.com/techtraderdaily/2014/05/08/taiwan-semi-increased-risk-of-losses-to-samsung-says-bluefin/

Posted on Reply
#25
Steevo
Solid State BrainI really hope these cards will address the unacceptable idle power consumption with multimonitor setups on AMD video cards. That would not be an insignificant improvement at all! It's actually the main reason why I'm still not sure whether to upgrade my video card to a new NVidia or AMD one.

Have a look at this chart to understand what I'm talking about:

forums.geforce.com/default/topic/648176/geforce-drivers/monitor-display-blank-screen-issue-after-driver-update-updated-1-23-14-/22/

Nvidia has their own issues with their low-power cards and new "performance" drivers, and other drivers, it seems. I would rather burn off a whole extra 10 watts than have my screen go black or flicker.
Posted on Reply