Tuesday, September 20th 2016

AMD Vega 10, Vega 20, and Vega 11 GPUs Detailed

AMD's CTO, speaking at an investor event organized by Deutsche Bank, recently announced that the company's next-generation "Vega" GPUs, its first high-end parts in close to two years, will launch in the first half of 2017. AMD is said to have made significant performance/Watt refinements with "Vega" over its current "Polaris" architecture. VideoCardz posted probable specifications of three parts based on the architecture.

AMD will begin the "Vega" architecture lineup with "Vega 10," an upper-performance-segment part designed to disrupt NVIDIA's high-end lineup, with performance positioned somewhere between the GP104 and GP102. This chip is expected to be endowed with 4,096 stream processors and up to 24 TFLOP/s of 16-bit (half-precision) floating-point performance. It will feature 8-16 GB of HBM2 memory with up to 512 GB/s of memory bandwidth. AMD is looking at typical board power (TBP) ratings around 225W.
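For context, the 24 TFLOP/s half-precision figure is consistent with the standard peak-throughput formula if "Vega" executes packed FP16 at twice the FP32 rate. A minimal sketch of that arithmetic; the core clock here is back-calculated from the rumored figure, not an announced spec:

```python
# Peak throughput = shaders * clock * 2 (an FMA counts as 2 ops),
# doubled again for packed FP16. The clock is a back-calculated assumption.
shaders = 4096
fp16_tflops = 24.0
clock_ghz = fp16_tflops * 1e12 / (shaders * 2 * 2) / 1e9
print(f"Implied core clock: {clock_ghz:.2f} GHz")                           # ~1.46 GHz
print(f"Implied FP32 rate: {shaders * 2 * clock_ghz / 1000:.1f} TFLOP/s")   # ~12.0
```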
Next up is "Vega 20." This is a part we hadn't heard of until today, and it's likely scheduled for much later. "Vega 20" is a die-shrink of "Vega 10" to the 7 nm GF9 process being developed by GlobalFoundries. It, too, will feature 4,096 stream processors, but likely at higher clocks, with up to 32 GB of HBM2 memory running at a full 1 TB/s, PCI-Express gen 4.0 bus support, and a typical board power of 150W.
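Both memory-bandwidth figures line up with HBM2 stack arithmetic, assuming the JEDEC-specified 1024-bit interface per stack at up to 2 Gbps per pin; the stack counts below are inferences from the rumored numbers, not confirmed configurations:

```python
# HBM2: 1024-bit bus per stack at up to 2 Gbps per pin = 256 GB/s per stack.
# Stack counts are assumptions inferred from the rumored bandwidth figures.
def hbm2_bandwidth_gbs(stacks, pin_gbps=2.0):
    return stacks * 1024 * pin_gbps / 8  # bits per second -> bytes per second

print(hbm2_bandwidth_gbs(2))  # 512.0 GB/s -- matches the "Vega 10" figure
print(hbm2_bandwidth_gbs(4))  # 1024.0 GB/s (~1 TB/s) -- matches "Vega 20"
```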

The "Vega 11" part is a mid-range chip designed to replace "Polaris 10" from the product-stack, and offer slightly higher performance at vastly better performance/Watt. AMD is expecting to roll out the "Navi" architecture some time in 2019, and so AMD will hold out for the next two years with "Vega." There's even talk of a dual-GPU "Vega" product featuring a pair of Vega 10 ASICs.
Source: VideoCardz

194 Comments on AMD Vega 10, Vega 20, and Vega 11 GPUs Detailed

#76
thesmokingman
sith'ariHere we go all over again with the same story, just like the period prior to the Fury X's release!!
I still remember all the glorious comments from AMD about the use of HBM memory, and after months & months of hype and brainwashing, this supreme card struggled to compete with a reference 980 Ti (*and stayed far behind the aftermarket Tis).
I said it back then and I'll say it again: HBM technology was already known to these companies years ago. There was no chance that a colossus of a company like NVIDIA hadn't done its own research on HBM. So their choice not to use it (*back then, at least) made me suspect that there were disadvantages to using HBM.
Indeed, HBM memory was limited to 4GB, which led to the downfall of the Fury X's bid for the top.
(Just like last time, there is no way NVIDIA hasn't done its own research on HBM2.)
What is your point? Did you state anything in particular? NVIDIA didn't back HBM; they put their efforts into HMC with Micron. They lost out with HMC, and HBM was adopted as the standard. HBM was limited to 4GB in its first generation, thus ultimately limiting Fiji. You want to knock the Fury for 4GB, but that's a limitation of bleeding-edge tech. Now that it's on Gen 2, both NVIDIA and AMD are racing to capitalize on HBM. Fury not scaling higher or not competing with the 980 Ti in certain areas isn't indicative of the true potential of HBM. That said, in some instances the Fury X runs toe to toe with 1080s today in DX12, omg?
Posted on Reply
#77
Jism
Looks on par with the release of Microsoft's new console.

Edit: Don't assume HBM2 is better than HBM in general. It still has the same clocks and bandwidth as HBM; the only difference is the stacking of chips, which makes up to 16GB of HBM2 memory possible. Perhaps even more.
Posted on Reply
#78
sith'ari
thesmokingmanWhat is your point? ......................
My point is that, just as I didn't believe AMD's ultra-hype for HBM in the past, I'm also not going to believe anything about HBM2 until I see reviews first. It's AMD's standard policy to create great hype with mediocre results.
(P.S. You might want to check the Fury X's & RX 480's pathetic performance in VR benchmarks: www.hardocp.com/reviews/vr/
So much again for Raja's hype about "premium VR performance"!!)
Posted on Reply
#79
RejZoR
It's not hype. HBM actually works. Just look at the Fury X. In most cases it goes up against 8GB graphics cards with ease because it has basically twice the bandwidth of any other graphics card on the market, or at least a third more. Being stuck with 4GB was simply a technological limit, and it isn't causing much trouble. Now they have the full potential of 8-16GB, which is plenty even for professional usage. I mean, I have a GTX 980, which used to be the king, but I kind of regret not having gone for a Fury X or a vanilla Fury. Dunno why; they just have some sort of charm with HBM because it's so exotic.
Posted on Reply
#81
Recon-UK
I understand fetishes and all that but having it off over a GPU brand is really something else...
Posted on Reply
#82
thesmokingman
sith'ariMy point is that, just as I didn't believe AMD's ultra-hype for HBM in the past, I'm also not going to believe anything about HBM2 until I see reviews first. It's AMD's standard policy to create great hype with mediocre results.
(P.S. You might want to check the Fury X's & RX 480's pathetic performance in VR benchmarks: www.hardocp.com/reviews/vr/
So much again for Raja's hype about "premium VR performance"!!)
It's clear you bleed green, but one doesn't have anything to do with the other.
Posted on Reply
#83
sith'ari
thesmokingmanIt's clear you bleed green, but one doesn't have anything to do with the other.
Yeah, I also have to mention that I wrote the text about the pathetic VR performance of AMD's cards in HardOCP's VR benchmarks with my green blood!! :rolleyes:
(You probably didn't even bother to check the link I posted, because your thoughts are pure red! ;) )
Posted on Reply
#84
thesmokingman
sith'ariYeah, I also have to mention that I wrote the text about the pathetic VR performance of AMD's cards in HardOCP's VR benchmarks with my green blood!! :rolleyes:
(You probably didn't even bother to check the link I posted, because your thoughts are pure red! ;) )
Now you continue to troll? I don't need to see your link because we all know already. You seem to think you are spouting news?
Posted on Reply
#85
sith'ari
Yeah, you know it, but you just "neglected" to mention it then. You only mentioned what suited you best.
Next time I'll ask for your approval before I post a link. You seem to be such an objective person, after all.
Posted on Reply
#86
thesmokingman
sith'ariYeah, you know it, but you just "neglected" to mention it then. You only mentioned what suited you best.
Next time I'll ask for your approval before I post a link. You seem to be such an objective person, after all.
What drugs are you on? HBM being good or bad doesn't have a lot to do with the Fiji chip; it was limited because it was first-gen tech. It's now on its second generation, and both NV and AMD will be pushing it. Wtf does that have to do with your link, troll?
Posted on Reply
#87
sith'ari
I was talking about AMD's ultra-hype... troll!! And I posted a link about AMD's pathetic results in VR benchmarks, contrary to Raja's and AMD's claims about "premium VR performance"!! Is that clear to you now?
Posted on Reply
#88
looncraz
Captain_TomP.S. Where is this 512 GB/s coming from? HBM2 comes in 720 GB/s and 1 TB/s flavors so I am calling BS on that spec.
2048-bit bus and two HBM2 stacks. I've been suggesting this for at least a month now - it's the most prudent option for AMD to take - they simply don't need more bandwidth... or capacity... than you can get from two HBM2 stacks in the consumer market.

Smaller interposer, reduced cost, and many knock-on benefits from this.

Some models MAY have four stacks (16GB, IIRC, would actually require that), but I don't think we need more than 8GB for consumer cards for the next couple of years - even at the high end. We have yet to see 8GB be stressed... it's sometimes hard enough just to max out 4GB.
Posted on Reply
#89
looncraz
bugNo, we cannot. If Titan X was HBM2 powered it would be a 225W part with more power than Vega 10.
Don't be so certain that Vega 10 will be that weak.

It is updated from GCN4 (which is already 15% faster than the GCN in Fury X), has all of the updated geometry engines, schedulers, etc... of Polaris - all updated yet another generation. It is using the second generation HBM memory, with double the bandwidth of RX 480 (but only 77% more shaders to feed).

So, at WORST, Vega 10 is Fury X + 15% + 15% + 15% = ~50% faster than Fury X.

That is to say:

~15% higher IPC
~15% boost from clock speed
~15% boost from bandwidth

And that's just assuming a scaled-up Polaris GPU... which Vega 10 is not.

Still, maybe only add 10% for architectural improvements after Polaris - Vega only has another six months of development on it compared to Polaris... but that's 67% faster than Fury X...
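For what it's worth, those ~15% gains compound multiplicatively rather than add, which is where the ~50% (and, with the extra 10%, ~67%) figures come from. A quick check of the poster's own arithmetic; the percentages themselves are assumptions, not benchmarks:

```python
# Compounding the estimated gains (IPC, clock, bandwidth, plus optional arch).
# All percentages are the poster's assumptions, not measured results.
base = 1.15 * 1.15 * 1.15
print(f"{base:.3f}x (~{(base - 1) * 100:.0f}% faster than Fury X)")   # ~1.521x, ~52%
with_arch = base * 1.10
print(f"{with_arch:.3f}x (~{(with_arch - 1) * 100:.0f}% faster)")     # ~1.673x, ~67%
```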
Posted on Reply
#90
NDown
They really need something like what I'd call the "290X/290 release moment": Titan performance for ~$450-600 less.

Everything after that has been pretty underwhelming for me.

Regardless of what the naysayers say about the noise and heat, it was the best GPU release I've ever witnessed.

Can't speak much about the probability of it happening with the Vega release, but one can hope at least :^(

I don't want to see x70 cards in the $500-700 range the next time NVIDIA releases something new.
Posted on Reply
#91
qubit
Overclocked quantum bit
Recon-UKI understand fetishes and all that but having it off over a GPU brand is really something else...
Oh no, what has been imagined cannot be unimagined... :twitch:
Posted on Reply
#92
m1dg3t
qubitOh no, what has been imagined cannot be unimagined... :twitch:
What do you think DVI stands for? D1nk33V4#g33n41n73rf4c3

:roll: :roll: :roll:
Posted on Reply
#93
yotano211
Recon-UKI understand fetishes and all that but having it off over a GPU brand is really something else...
Especially the feet kind.
Posted on Reply
#94
BiggieShady
LightningJRYou can't compare TFLOPS across GPU manufacturers; if you could, then the RX 480 with 5.8 TFLOPS would wreck a GTX 980 with only 5 TFLOPS, and it doesn't. The card falls somewhere in between the 980 and a 970 (with 4 TFLOPS).

The TFLOPS are there, but they just don't translate into equivalent performance. 12 TFLOPS looks sexy; it just doesn't translate 1:1 with NVIDIA, unfortunately.
That's because those values represent peak theoretical compute performance. With all the different shader/compute code in the wild running on both GPU architectures, NVIDIA on average operates closer to its peak maximum (better GPU cache hierarchy).
There are cases where code can actually pull an effective 5.8 TFLOPS out of the RX 480, but that happens rarely even for GPGPU compute devs, let alone gamers :laugh:
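Those headline numbers are easy to verify as theoretical peaks. A quick sanity check using the usual shaders × clock × 2 (FMA) formula with each card's published reference boost clock:

```python
# Theoretical peak FP32 = shader count * boost clock * 2 (one FMA = 2 FLOPs).
# Clocks are the reference boost values; real delivered throughput is lower.
cards = {"RX 480": (2304, 1.266), "GTX 980": (2048, 1.216), "GTX 970": (1664, 1.178)}
for name, (shaders, clock_ghz) in cards.items():
    print(f"{name}: {shaders * clock_ghz * 2 / 1000:.1f} TFLOP/s")  # 5.8 / 5.0 / 3.9
```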
geon2k2What AMD needs to do is implement tile-based rendering at once, and only after that is done think about ridiculous buses and memory architectures.
Maxwell and Pascal have this, and that's why they are so efficient and perform on par or better while AMD needs double the bus width.

Tile based rendering was introduced by PowerVR back in 1996.
Read more:
www.anandtech.com/show/735/3

en.wikipedia.org/wiki/Tiled_rendering
Bingo. We had a little discussion about it here on TPU: www.techpowerup.com/forums/threads/maxwell-and-pascal-secretly-use-tile-based-rasterization.224773/
Their tile-based rendering isn't completely "tile based"; only the rasterization part is... they keep the tile size small enough so it can fit completely into the first-level cache... all for a magical efficiency increase.
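As a rough illustration of the binning idea behind tile-based rasterization (a toy sketch, not how Maxwell/Pascal implement it in hardware): triangles are first sorted into the screen-space tiles they touch, so each tile can then be rasterized with its working set resident in cache. Tile and screen sizes below are arbitrary illustrative values:

```python
# Toy tile binning: assign each triangle's screen-space bounding box to every
# tile it overlaps, so rasterization can proceed one cache-sized tile at a time.
TILE = 32  # pixels; real hardware picks a size whose buffers fit in L1/L2

def bin_triangles(triangles, width, height):
    """triangles: list of three (x, y) vertices each. Returns {tile: [tri_idx]}."""
    bins = {}
    for idx, tri in enumerate(triangles):
        xs, ys = [v[0] for v in tri], [v[1] for v in tri]
        x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(idx)
    return bins

tris = [[(5, 5), (60, 10), (20, 50)], [(100, 100), (120, 110), (110, 125)]]
print(bin_triangles(tris, 256, 256))  # first triangle spans 4 tiles, second 1
```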
Posted on Reply
#95
renz496
bugEven worse, it's a 225W part. GP102 is 250W, but that's without HBM and Vega 10 is supposed to be "between GP104 and GP102". Meaning weaker than GP102.
AMD is skating to where the puck is, not where it will be :(

Also, first half of 2017 can easily turn into a July or back-to-school launch. Oh well, I wasn't planning on buying any of these anyway.
If you look at the rumor on VCZ, this info comes from AMD's server roadmap, hence the specs describe the FirePro variant and not the regular consumer variant (Radeon). I still remember when there was a leak about a FirePro (Tonga-based) rated at 150W, and I saw people in another forum touting that the Maxwell killer had finally arrived. But in the end the 285's power was rated at way more than 150W.
Posted on Reply
#96
renz496
Captain_TomReally rare? Mantle was "rare", but this is different.

Tomb Raider, Hitman, DOOM, DEUS EX, and now BF1.

It's pretty clear that this will be the new standard of 2017.
Yeah, a new standard for "broken" games?
Posted on Reply
#97
Vayra86
sith'ariMy point is that, just as I didn't believe AMD's ultra-hype for HBM in the past, I'm also not going to believe anything about HBM2 until I see reviews first. It's AMD's standard policy to create great hype with mediocre results.
(P.S. You might want to check the Fury X's & RX 480's pathetic performance in VR benchmarks: www.hardocp.com/reviews/vr/
So much again for Raja's hype about "premium VR performance"!!)
You don't really seem to get it.

VR performance? Hands in the air if you really care about VR performance right now. Worried that a tech demo won't run smoothly?? VR is a niche, that is all. Let's move on.

The Fury X was showing lackluster performance and meh overclockability when it was first released.

TODAY the Fury X is showing its true power, and there is actually a lot more still in the can (the stock-cooled Fury X throttles). It is still competing against top-end GPUs like the 980 Ti (comfortably beating it at 4K, equal or better in almost any DX12/Vulkan game, and 2-3% behind in DX11 at lower resolutions), the 1070, and, in an edge case, the 1080. So let's turn this around: AMD's ONLY HBM offering from last year's (!) lineup is now competing with NVIDIA's shiniest new arch. The entire rest of AMD's lineup, including the RX 480, is getting swamped by competition with lower shader counts.

And there you are, saying HBM is junk and of no use for GPUs. Mkay :)

Also, a general note and to unburden our kind mods... STOP DOUBLE POSTING AND USE AN EDIT BUTTON
Posted on Reply
#98
Recon-UK
I will be the first to admit just how great GCN is; it's not up there with NVIDIA's newest, but it's gained so much performance over time.
Posted on Reply
#99
BiggieShady
Recon-UKI will be the first to admit just how great GCN is; it's not up there with NVIDIA's newest, but it's gained so much performance over time.
True, and when it comes to async compute it's vice versa... that's where NVIDIA has some catching up to do.
Posted on Reply
#100
bug
looncrazDon't be so certain that Vega 10 will be that weak.

It is updated from GCN4 (which is already 15% faster than the GCN in Fury X), has all of the updated geometry engines, schedulers, etc... of Polaris - all updated yet another generation. It is using the second generation HBM memory, with double the bandwidth of RX 480 (but only 77% more shaders to feed).

So, at WORST, Vega 10 is Fury X + 15% + 15% + 15% = ~50% faster than Fury X.

That is to say:

~15% higher IPC
~15% boost from clock speed
~15% boost from bandwidth

And that's just assuming a scaled-up Polaris GPU... which Vega 10 is not.

Still, maybe only add 10% for architectural improvements after Polaris - Vega only has another six months of development on it compared to Polaris... but that's 67% faster than Fury X...
This very leak (or whatever it is) says Vega 10 sits between GP104 and GP102, placing a hard cap on its performance ceiling.
Posted on Reply