Tuesday, September 20th 2016

AMD Vega 10, Vega 20, and Vega 11 GPUs Detailed

AMD's CTO, speaking at an investor event organized by Deutsche Bank, recently announced that the company's next-generation "Vega" GPUs, its first high-end parts in close to two years, will launch in the first half of 2017. AMD is said to have made significant performance-per-Watt refinements with "Vega" over its current "Polaris" architecture. VideoCardz posted probable specifications of three parts based on the architecture.

AMD will begin the "Vega" lineup with "Vega 10," an upper-performance-segment part designed to disrupt NVIDIA's high-end lineup, with a performance positioning somewhere between the GP104 and GP102. This chip is expected to be endowed with 4,096 stream processors, and with up to 24 TFLOP/s of 16-bit (half-precision) floating-point performance. It will feature 8-16 GB of HBM2 memory with up to 512 GB/s of memory bandwidth. AMD is looking at typical board power (TBP) ratings around 225W.
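Those figures are self-consistent if you assume double-rate packed FP16 math and an FMA counting as two FLOPs; a quick sanity check in Python (the ~1.46 GHz clock below is our assumption for illustration, not part of the leak):

```python
# Theoretical shader throughput: shaders x FLOPs-per-clock x clock.
# Assumptions: FMA = 2 FLOPs, FP16 runs at 2x the FP32 rate (packed math).
def tflops(shaders, clock_ghz, flops_per_clock=2):
    """Theoretical single-precision throughput in TFLOP/s."""
    return shaders * flops_per_clock * clock_ghz / 1000.0

fp32 = tflops(4096, 1.465)  # hypothetical ~1.47 GHz clock
fp16 = 2 * fp32             # two packed FP16 ops per FP32 lane per clock
print(f"FP32: {fp32:.1f} TFLOP/s, FP16: {fp16:.1f} TFLOP/s")
# -> FP32: 12.0 TFLOP/s, FP16: 24.0 TFLOP/s, matching the quoted figure
```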
Next up is "Vega 20." This is a part we hadn't heard of until today, and it's likely scheduled for much later. "Vega 20" is a die-shrink of "Vega 10" to the 7 nm GF9 process being developed by GlobalFoundries. It, too, will feature 4,096 stream processors, but likely at higher clocks, with up to 32 GB of HBM2 memory running at a full 1 TB/s, PCI-Express gen 4.0 bus support, and a typical board power of 150W.
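The 1 TB/s figure lines up with four HBM2 stacks at the full JEDEC pin speed, and "Vega 10's" 512 GB/s with two; a rough sketch (the stack counts are our inference, not part of the leak):

```python
# HBM2 bandwidth: stacks x 1024-bit bus per stack x pin speed, bits -> bytes.
def hbm2_bandwidth_gbs(stacks, gbps_per_pin=2.0):  # 2.0 Gbps is the HBM2 ceiling
    return stacks * 1024 * gbps_per_pin / 8

print(hbm2_bandwidth_gbs(4))  # 1024.0 GB/s -- the quoted 1 TB/s for "Vega 20"
print(hbm2_bandwidth_gbs(2))  # 512.0 GB/s  -- matches "Vega 10"
```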

The "Vega 11" part is a mid-range chip designed to replace "Polaris 10" from the product-stack, and offer slightly higher performance at vastly better performance/Watt. AMD is expecting to roll out the "Navi" architecture some time in 2019, and so AMD will hold out for the next two years with "Vega." There's even talk of a dual-GPU "Vega" product featuring a pair of Vega 10 ASICs.
Source: VideoCardz

194 Comments on AMD Vega 10, Vega 20, and Vega 11 GPUs Detailed

#52
Fx
hardcore_gamer: Looking at the specs, Vega 10 will have similar performance to a GTX 1080.

NVIDIA can easily charge $800 for their next mid-range GV104.
Shiet... I'd NEVER pay $800 even for a high-end graphics card, and that isn't because I couldn't afford it, but because I refuse to be played that hard.
#53
qubit
Overclocked quantum bit
cdawall: 8800 GTS 512, 9800 GTX, 9800 GTX+, GTS 250. Know what those four different graphics cards have in common? They are all the same damn card. Don't forget NVIDIA has rebranded and carried cards over for generations just as often as AMD.
I'd love to add all of those cards to my NVIDIA collection and bench them lol. Oh look! They all perform to within 5% of each other! :eek::laugh:
#54
Captain_Tom
the54thvoid: And on the flip side, NVIDIA haven't had to innovate much because of a lack of serious competition. Anything AMD released, NVIDIA matched or beat within weeks; they're always holding back. Until Vega, NVIDIA have the high ground and will (ab)use that position. If Vega doesn't dislodge the Titan X as the undisputed champion (especially in Vulkan Doom), all hope is lost.
I want to be clear that I do think the rebranding is stupid. Just because they can doesn't mean they should. Sure, the 280X humiliated the 760, but they could have had the full Tonga GPU ready instead (the 380X). That would have nearly matched the 960's efficiency and launched before it with an even more commanding performance lead.


As for "all hope is lost". I think people need to realize that (at least for now) it seems like AMD'S current strategy is working. Marketshare is far more important to AMD than a halo product right now. Also anyone remember the days when RADEON had 40-52% marketshare and was profiting like crazy? Well that was back in the 4000/5000/6000 Era when they werent trying to win a absolute performance. I loved the 7970/290X, but apparently it didn't make AMD much money... :/
#55
bug
Captain_Tom: Good ol' NVIDIA fanboy posting old benches. Man, are you guys scared of the future.

www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/24.html

They are roughly equal, and that isn't even accounting for new games like Deus Ex.
How did you get from "higher binned 7970 beats 780" to posting 1070 benchmarks? Maybe your logic is too nimble for me to keep up, help me out here.
#56
KainXS
What AMD really needs is a better driver team; they have good hardware but don't utilize it properly. All of their recent high-end offerings have been beefier than NVIDIA's, and they simply can't take the crown, and that's why they're pushing Vulkan. With Vulkan you can get closer to that theoretical performance mark, but even then you're still relying on the developer to program correctly for that title, as they are in control, and there are only, what, one or two games out that can use Vulkan? It sounds promising, but that's about all it is: promising. Maybe in the future there will be real support behind it, but right now I don't see it, as everyone is flocking to DX12. I'm no fortune teller, and neither are you guys.

On a different note, NVIDIA did rebrand years ago because they were selling cards better than AMD (and still do, but they refresh now), whereas AMD has been doing it more than NVIDIA recently while doing worse in graphics sales.
#57
Captain_Tom
qubit: I'd love to add all of those cards to my NVIDIA collection and bench them lol. Oh look! They all perform to within 5% of each other! :eek::laugh:
I think everyone here can agree that we all hate rebrands. However, they aren't going anywhere for either company.
#58
Captain_Tom
bug: How did you get from "higher binned 7970 beats 780" to posting 1070 benchmarks? Maybe your logic is too nimble for me to keep up, help me out here.
Erhhh, can you not read a chart?

The 780 and 280X are on it, buddy...
#59
thesmokingman
Captain_Tom: I loved the 7970/290X, but apparently it didn't make AMD much money... :/
I don't know about that; in context, it kept selling when they were stretched thin, with no new product stack for years until Fiji. It's a testament to the design of Tahiti that it was pushed well beyond typical product cycles.
#60
Captain_Tom
thesmokingman: I don't know about that; in context, it kept selling when they were stretched thin, with no new product stack for years until Fiji. It's a testament to the design of Tahiti that it was pushed well beyond typical product cycles.
I mean, it is a fact that sales went down after the 290X (although the 290 series itself sold really well). The problem is AMD was stretched thin (they are a smaller company), and so their mid-range, while objectively better than NVIDIA's, was still full of rebrands. People only like buying new things, even if they are worse lol
#61
bug
Captain_Tom: Erhhh, can you not read a chart?

The 780 and 280X are on it, buddy...
They are, but they're absent from the individual game benchmarks, so I don't know where the overall performance value comes from, or whether any of the tested titles were actually playable.
#62
Patriot
Caring1: What's with reducing the FP first from double to single, and now half?
"with up to 24 TFLOP/s of 16-bit (half-precision) floating-point performance"
Obviously reducing the compute side increases gaming usability, as proven by NVIDIA cards doing the same.
Because the P100 lists it that way.
Deep learning/neural networks can use half precision...
Gaming doesn't have a 1:1 correlation with TFLOPs anyway, so just go with the bigger number ;)
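For the curious, here's a minimal numpy sketch of why half precision is good enough for neural networks' bulk math but has to be used carefully: FP16 keeps only ~10 bits of mantissa, so small contributions simply round away (the values below are arbitrary, chosen to show the effect):

```python
import numpy as np

# FP16 has a spacing of ~0.001 around 1.0, so a small update is lost entirely,
# while FP32 retains it.
update = 1e-4
print(np.float32(1.0) + np.float32(update))  # 1.0001 -- FP32 keeps the update
print(np.float16(1.0) + np.float16(update))  # 1.0    -- FP16 rounds it away
```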
#63
64K
Captain_Tom: I mean, it is a fact that sales went down after the 290X (although the 290 series itself sold really well). The problem is AMD was stretched thin (they are a smaller company), and so their mid-range, while objectively better than NVIDIA's, was still full of rebrands. People only like buying new things, even if they are worse lol
The R9 290 was the "best bang for the buck" there for a little while. I've never seen a breakdown of how much profit AMD makes by category, but given the number of people I see on the Steam Hardware Survey running entry-level and mid-range GPUs, I suspect that AMD makes more profit in those categories.
#64
Recon-UK
FordGT90Concept: As more and more people move from cable and satellite TV to IPTV/internet streaming, those products are in great demand, and that demand is growing. The thing is, the profit margins on them are so small that Intel would rather use their fabs to build Core i# and Xeon processors they can sell at a hefty markup. NUCs basically get the leftovers (older fabs) because Intel really doesn't care. The fact of the matter is that market will never have huge profit margins; it's always destined to be a volume seller. This is the same reason why Intel doesn't care about smartphones.

Consoles (they are custom SoCs, so addressing both here...) are in the same boat as NUCs, tablets, and smartphones: volume products with tight profit margins. AMD may dominate the console market, but NVIDIA dominates the desktop market. Games are developed on Windows machines, which overwhelmingly run NVIDIA hardware. Developers may be quite familiar with the ins and outs of GCN because of optimization for consoles, but they optimize on NVIDIA first because that's what they're coding on.

AMD is open-sourcing virtually all of its APIs. That's great for Linux, but it isn't going to translate to profits for AMD. Well, it could, because AMD is more appealing now on Linux, but realize we're talking about a minority of a minority of systems here.

It's a DirectX 12 thing. Some things, especially textures, don't require 32 bits of precision. At 16-bit, it should be able to process two calculations for the price of one.

Developers aren't using 16-bit because hardware support is iffy. Five years from now, using 16 bits to handle textures will likely be commonplace.
Where I bolded: any competition is a great thing, and I don't exactly like seeing NVIDIA or AMD plastered all over my games either, but if that is how it is going to be, then I want solid competition from AMD too. I'm a warm-blooded soul, so red runs through my veins whilst I cut the green grass in my garden. Without the blood I can't live, and a person who is not alive can't cut that grass.
#65
thesmokingman
Captain_Tom: I think everyone here can agree that we all hate rebrands. However, they aren't going anywhere for either company.
I remember when the 280X dropped. People were gushing over it; some thought they were just plain cooler because they bought that, and they looked down on 7970s, roflmao. In reality, 7970s were much better: they clocked higher, had no stupid boost, and did not get neutered by AIB partners. So much for the new bins lol. But once the masses make up their minds, there's no discouraging them.
#66
Recon-UK
thesmokingman: I remember when the 280X dropped. People were gushing over it; some thought they were just plain cooler because they bought that, and they looked down on 7970s, roflmao. In reality, 7970s were much better: they clocked higher, had no stupid boost, and did not get neutered by AIB partners. So much for the new bins lol. But once the masses make up their minds, there's no discouraging them.
The 280X was a fine card; it was a middle-of-the-road option to fill the gap between the 270 and the 290X.
#67
m1dg3t
bug: Yes, because it's NVIDIA that sells rebranded products from 3 years ago.
Neither NVIDIA nor AMD will innovate unless pushed to. Baseless extrapolations do not help.
Both NVIDIA and AMD are currently selling re-branded older products. From what I can recall, though, ATi/AMD have consistently been first to adopt and implement, and even directly assist in advancing, graphics memory.

I see a bunch of others share my sentiments, good thing I held off on hitting 'post reply' LoLoLoLoL
#68
Recon-UK
From evidence and experience, AMD are the only company to start using new memory tech first, and they will push for it.

NVIDIA always threw a larger bus at inferior memory:

HD 4870 vs. GTX 280, for instance: GDDR5 on a 256-bit bus vs. GDDR3 on a 512-bit bus.

And the ATi 3000 series with GDDR4.
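The bus-versus-memory trade-off is easy to put in numbers; a quick sketch using the launch specs as best we recall them (treat the data rates as approximate):

```python
# Memory bandwidth = bus width (bits) x effective data rate (Gbps) / 8.
def bandwidth_gbs(bus_bits, effective_gbps):
    return bus_bits * effective_gbps / 8

print(bandwidth_gbs(256, 3.6))    # ~115 GB/s -- HD 4870: fast GDDR5, narrow bus
print(bandwidth_gbs(512, 2.214))  # ~142 GB/s -- GTX 280: slow GDDR3, wide bus
```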
#69
KainXS
Recon-UK: From evidence and experience, AMD are the only company to start using new memory tech first, and they will push for it.

NVIDIA always threw a larger bus at inferior memory:

HD 4870 vs. GTX 280, for instance: GDDR5 on a 256-bit bus vs. GDDR3 on a 512-bit bus.

And the ATi 3000 series with GDDR4.
True, but GDDR4 is a bad example; that was a complete flop due to its crazy latency. They did push GDDR5, though, when NVIDIA was not planning to use it. NVIDIA tries not to take risks, really. That slows down progression in graphics a little, but with the way AMD was performing in graphics until Polaris, it didn't really matter.
#70
Chaitanya
KainXS: What AMD really needs is a better driver team; they have good hardware but don't utilize it properly. All of their recent high-end offerings have been beefier than NVIDIA's, and they simply can't take the crown, and that's why they're pushing Vulkan. With Vulkan you can get closer to that theoretical performance mark, but even then you're still relying on the developer to program correctly for that title, as they are in control, and there are only, what, one or two games out that can use Vulkan? It sounds promising, but that's about all it is: promising. Maybe in the future there will be real support behind it, but right now I don't see it, as everyone is flocking to DX12. I'm no fortune teller, and neither are you guys.

On a different note, NVIDIA did rebrand years ago because they were selling cards better than AMD (and still do, but they refresh now), whereas AMD has been doing it more than NVIDIA recently while doing worse in graphics sales.
Yep, that's exactly where AMD is lagging, and why they are betting on low-level APIs.
#71
Recon-UK
KainXS: True, but GDDR4 is a bad example; that was a complete flop due to its crazy latency. NVIDIA tries not to take risks, really. That slows down progression in graphics a little, but with the way AMD was performing in graphics until Polaris, it didn't really matter.
I never spoke of the performance of their GPUs, just that AMD would push for better tech rather than undercutting new tech and only focusing on who benches higher.

AMD showed us that with the Bulldozer arch... crap CPU, but the idea was cool.
#72
ZoneDymo
Let's hope they make good on these claims :)

An RX 480 with consistent R9 390X levels of performance at lower power usage sounds like music to my ears.
And anything that brings down prices in the higher segments is great :)
#73
sith'ari
Vayra86: ... Look at how Fury X excels in certain games on newer APIs, and you can see how HBM2 on an even wider GPU will absolutely be king of the hill ...
Here goes the same story all over again, just like the period prior to the Fury X's release!!
I still remember all the glorious comments from AMD about the use of HBM memory, and after months and months of hype and brainwashing, this supreme card struggled to compete with a reference 980 Ti (and stayed far behind the aftermarket Tis).
I said it back then and I'll say it again: HBM technology was already known to the companies years ago. There was no chance a colossus of a company like NVIDIA hadn't done their own research with HBM, so their not using it (back then, at least) made me suspect that there were disadvantages to HBM's usage.
Indeed, HBM memory was limited to a 4 GB size, which led to the downfall of the Fury X's run at the top.
(Just like last time, there is no way NVIDIA haven't done their own research on HBM2.)
#74
the54thvoid
Intoxicated Moderator
Captain_Tom: I want to be clear that I do think the rebranding is stupid. Just because they can doesn't mean they should. Sure, the 280X humiliated the 760, but they could have had the full Tonga GPU ready instead (the 380X). That would have nearly matched the 960's efficiency and launched before it with an even more commanding performance lead.

As for "all hope is lost": I think people need to realize that, at least for now, AMD's current strategy seems to be working. Market share is far more important to AMD than a halo product right now. Also, anyone remember the days when Radeon had 40-52% market share and was profiting like crazy? That was back in the 4000/5000/6000 era, when they weren't trying to win at absolute performance. I loved the 7970/290X, but apparently it didn't make AMD much money... :/
I'm surprised the 7970 lost them money, as it launched at a surprisingly high price point (I know, I bought two). But I think that was also part of their problem: they went from being the better-value option to being the same price as the contemporary NVIDIA offering, the GTX 680. From there on, it allowed NVIDIA to 'justifiably' hike prices whenever their next card beat AMD's.

I do believe, though, that from a shareholder point of view, Vega HAS to perform well. I look forward to seeing it in action.
#75
geon2k2
What AMD needs to do is implement tile-based rendering at once, and only after that is done think about ridiculous buses and memory architectures.
Maxwell and Pascal have it, and that's why they are so efficient: they use far narrower buses and yet perform on par with, or better than, AMD parts with double the bus width.

Tile based rendering was introduced by PowerVR back in 1996.
Read more:
www.anandtech.com/show/735/3

en.wikipedia.org/wiki/Tiled_rendering
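For readers unfamiliar with the technique: the idea is to sort geometry into small screen tiles up front, so each tile can be rasterized and shaded out of on-chip memory instead of hammering the external memory bus. A toy sketch of the binning step (the data structures here are hypothetical, purely for illustration):

```python
TILE = 32  # tile edge in pixels; real GPUs use similarly small tiles

def bin_triangles(triangles, width, height):
    """Map each triangle to every screen tile its bounding box overlaps."""
    cols, rows = (width + TILE - 1) // TILE, (height + TILE - 1) // TILE
    bins = {(tx, ty): [] for tx in range(cols) for ty in range(rows)}
    for tri in triangles:  # tri: three (x, y) vertices
        xs, ys = [v[0] for v in tri], [v[1] for v in tri]
        for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
            for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
                if (tx, ty) in bins:
                    bins[(tx, ty)].append(tri)
    # Each bin is then rasterized entirely on-chip, saving memory bandwidth.
    return bins

tris = [[(5, 5), (60, 8), (20, 70)]]
touched = sum(1 for b in bin_triangles(tris, 128, 128).values() if b)
print(touched)  # 6 -- the triangle's bounding box spans a 2x3 block of tiles
```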