Thursday, January 12th 2017

AMD's Vega-based Cards to Reportedly Launch in May 2017 - Leak

According to WCCFTech, AMD's next-generation "Vega" graphics architecture will debut on consumer graphics cards by May 2017. The website claims AMD will have Vega GPUs available in several SKUs, based on at least two different chips: Vega 10, the high-end part with apparently stupendous performance, and a lower-performance part, Vega 11, which is expected to succeed Polaris 10 in AMD's product stack, offering slightly higher performance at vastly better performance/Watt. WCCFTech also points out that AMD may show a dual-chip Vega 10×2 based card at the event, though they say it may only become available at a later date.
AMD is expected to begin the "Vega" architecture lineup with Vega 10, an upper-performance-segment part designed to disrupt NVIDIA's high-end lineup, with a performance positioning somewhere between the GP104 and GP102. This chip should carry 4,096 stream processors, with up to 24 TFLOP/s of 16-bit (half-precision) floating-point performance. It will feature 8-16 GB of HBM2 memory with up to 512 GB/s of memory bandwidth. AMD is looking at typical board power (TBP) ratings around 225 W.
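As a quick sanity check on those figures, here is a back-of-the-envelope sketch in Python. It assumes 2 FLOPs per stream processor per clock (FMA) and packed FP16 running at twice the FP32 rate, neither of which the leak confirms; under those assumptions the quoted numbers imply a core clock of roughly 1.5 GHz:

# Back-of-the-envelope check of the leaked Vega 10 figures.
# Assumptions (not confirmed by the leak): 2 FLOPs per SP per clock (FMA),
# and packed FP16 at twice the FP32 rate.
stream_processors = 4096
fp16_tflops = 24.0

implied_clock_ghz = fp16_tflops * 1e12 / (stream_processors * 2 * 2) / 1e9
fp32_tflops = stream_processors * 2 * implied_clock_ghz / 1e3

print(f"Implied clock: {implied_clock_ghz:.2f} GHz")      # ~1.46 GHz
print(f"FP32 at that clock: {fp32_tflops:.1f} TFLOP/s")   # ~12 TFLOP/s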

Next up is "Vega 20", which is expected to be a die-shrink of Vega 10 to the 7 nm GF9 process being developed by GlobalFoundries. It will supposedly feature the same 4,096 stream processors, but likely at higher clocks, up to 32 GB of HBM2 memory at 1 TB/s, PCI-Express gen 4.0 bus support, and a typical board power of 150 W.

AMD plans to roll out the "Navi" architecture some time in 2019, which means the company will use Vega as their graphics architecture for two years. There's even talk of a dual-GPU "Vega" product featuring a pair of Vega 10 ASICs.
Source: WCCFTech

187 Comments on AMD's Vega-based Cards to Reportedly Launch in May 2017 - Leak

#76
Captain_Tom
the54thvoidOh, that's quite far away. :(

One year after Pascal. FWIW, a Titan X (not full GP102 core) is 37% faster than a 1080 at 4k Vulkan Doom. It's just when people call Vega's performance stupendous and then repeat such things, it's a bit baity. Once Vega is out it's on a smaller node, has more cores and more compute with more 'superness' than Pascal. So if it doesn't beat Titan X, it's not superb enough frankly.



The architecture is a year old.
I fully expect Vega 10 to beat the Titan XP. In fact I would say the real question is if it beats the Titan XP Black.
Posted on Reply
#77
efikkan
RejZoRYeah, well, they apparently found a way. Because the traditional "prefetching" just wouldn't make any kind of sense, we already do that in existing games. All Unreal 3.x and 4.x games use texture streaming that is location-based around the player, so you don't store textures in memory for the entire level, just for the segment the player is in and has a view of. The rest is streamed into VRAM on an as-needed basis as you move around, handled by the engine itself.
Manual prefetching in a game is possible, because the developer may be able to reason about possible movement several frames ahead; I've done this myself in simulators.
But it's impossible for the GPU to do this by itself, since all it sees is memory accesses and instructions, and whatever clear patterns exist in those.
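For illustration, the developer-side prefetching described here could look something like the following minimal Python sketch. It is purely hypothetical engine code: the world query, the streamer object, and their methods are made-up names, and the only point is that the prediction uses game state (position and velocity) that a driver or GPU never sees.

# Hypothetical engine-side prefetch: predict where the camera will be a
# fraction of a second from now and stream those textures in early.
def prefetch_textures(player_pos, player_vel, lookahead_s, world, streamer):
    # Extrapolate the player's position using game state the GPU never sees.
    predicted = tuple(p + v * lookahead_s for p, v in zip(player_pos, player_vel))
    # Ask the world for texture assets near the predicted position
    # (world.textures_near and the streamer API are made up for this sketch).
    for tex in world.textures_near(predicted, radius=50.0):
        if not streamer.is_resident(tex):
            streamer.request_async(tex)  # queue an asynchronous upload to VRAM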
Posted on Reply
#78
RejZoR
efikkanManual prefetching in a game is possible, because the developer may be able to reason about possible movement several frames ahead; I've done this myself in simulators.
But it's impossible for the GPU to do this by itself, since all it sees is memory accesses and instructions, and whatever clear patterns exist in those.
You do know the GPU isn't just hardware these days; there's also a software part, and drivers sure can know a whole lot about what a game is shuffling through VRAM and how.
Posted on Reply
#79
efikkan
RejZoRYou do know the GPU isn't just hardware these days; there's also a software part, and drivers sure can know a whole lot about what a game is shuffling through VRAM and how.
Even the driver is not able to reason about how the camera might move and which resources might be needed in the future. The only thing the driver and the GPU see is simple buffers, textures, meshes, etc. The GPU has no information about the internal logic of the game, so it has no ability to reason about what might happen. The only way to know this is if the programmer intentionally designs the game engine to tell the GPU somehow.
Posted on Reply
#80
RejZoR
It does. It sees which resources are being used on what basis and arranges the usage accordingly on a broader scale, not down to the individual texture level.
Posted on Reply
#81
Steevo
efikkanEven the driver is not able to reason about how the camera might move and which resources might be needed in the future. The only thing the driver and the GPU see is simple buffers, textures, meshes, etc. The GPU has no information about the internal logic of the game, so it has no ability to reason about what might happen. The only way to know this is if the programmer intentionally designs the game engine to tell the GPU somehow.
If you played through a game and identified all the data points where FPS dropped below 60, you could go find the reasons and preemptively correct the issue, by either prefetching data, precooking some physics (just like NVIDIA still does with PhysX on their GPUs), or whatever else addresses what caused the slowdown, and implement that in the drivers to fetch or start processing said data.
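The capture-and-flag half of that idea is trivial to sketch in Python (the frame-time numbers below are made up, and nothing here is driver-specific); the hard part, as the next reply argues, is knowing why a given frame blew its budget:

# Toy example: flag frames that fell below 60 FPS in a recorded run.
# frame_times_ms would come from any frametime capture tool; these are made up.
frame_times_ms = [14.2, 15.1, 35.7, 16.0, 41.3, 15.8]
budget_ms = 1000.0 / 60.0  # ~16.7 ms per frame at 60 FPS

for i, t in enumerate(frame_times_ms):
    if t > budget_ms:
        print(f"frame {i}: {t:.1f} ms (~{1000.0 / t:.0f} FPS), worth investigating")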
Posted on Reply
#82
theeldest
TotallyGood catch, I didn't see that till you pointed it out. I had assumed that it was relative performance (%) between the best-performing card and all the others, but seeing the top-performing card at 98.4 indicates that is not the case.
I mentioned it in my first post with charts/tables.

Performance is from the "Performance Summary" table in the Titan X Pascal review for 1440p. And I made the disclaimer that this methodology is super "hand-wavey".

Yeah, I get it. It's imperfect for many reasons and wrong for others. But it at least provides some sort of methodology for trying to make predictions.
Posted on Reply
#83
theeldest
TotallyYour conclusion based on the data is wrong. You need to break the data into its proper components: look at the 9XX, 10XX, 3XX, and 4XX separately, since they are all different architectures. When you lump them together as AMD vs. NVIDIA like that, you are hiding the fact that the scaling of the 10XX (1060 -> 1080) is pretty bad and is being propped up by the 9XX.
It's not 'wrong'. Those data points hold meaningful information.

If you think that a different methodology would be better, then put the info into a spreadsheet and show us what you come up with.

EDIT:
Once you plot everything out, it's pretty easy to see that those changes don't actually mean much for either company. The linear model actually fits pretty well over the past couple generations.



Personally, I think the biggest issue with this model is that the Fury X drastically skews the projection down for high-FLOPS cards. If we set the intercept to 0 (as makes logical sense), the picture changes a bit:



If you want to point out how this methodology is flawed/wrong/terrible, it would help to show us what you think is better. With pictures and stuff.
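For anyone who wants to try the same exercise, here is a minimal Python sketch of a least-squares line forced through the origin (i.e. intercept fixed at 0). The (GFLOPS, relative performance) pairs below are placeholder values, not the data from the charts above:

import numpy as np

# Placeholder (GFLOPS, relative performance) pairs, not the actual chart data.
gflops = np.array([4400.0, 5800.0, 8200.0, 8900.0, 10200.0])
perf   = np.array([  48.0,   60.0,   82.0,   86.0,    98.0])

# Least-squares slope with the intercept fixed at 0: perf ~= slope * gflops.
slope = np.sum(gflops * perf) / np.sum(gflops ** 2)

vega_gflops = 12000.0  # a guessed figure for Vega 10, purely illustrative
print(f"Predicted relative performance: {slope * vega_gflops:.1f}")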
Posted on Reply
#84
jabbadap
The problem with your GFLOPS chart is that GFLOPS depends on GPU clock, and you have taken the vendors' quoted numbers. The rule of thumb is that on the AMD side the quoted GFLOPS are probably a little too high, because the card throttles its clocks back before reaching the quoted "up to" clock (the Fury X being an exception thanks to its water cooling; also, the GFLOPS for the RX 480 are not from the "up to" clock, which would give 5,834 GFLOPS). On the NVIDIA side the quoted GFLOPS figure is too small, because the real GPU clock is higher than the rated boost clock.

I.e. the Titan X's GFLOPS are given as 2 × 1.417 GHz × 3584 CC = 10,157 GFLOPS, while the GPU clock in gaming is higher than that; in TPU's review the boost clock for the Titan X Pascal is ~1.62 GHz, which means 2 × 1.62 GHz × 3584 CC = 11,612 GFLOPS. Why this matters is that the real boost clock for NVIDIA's cards varies more per card, and thus the real GFLOPS differs more from the quoted value.
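That arithmetic (peak GFLOPS = 2 FLOPs per core per clock × core count × clock) is easy to parameterize; a small Python sketch using the clocks quoted above:

# Peak GFLOPS = 2 FLOPs per core per clock (FMA) * core count * clock in GHz.
def peak_gflops(cores, clock_ghz):
    return 2 * cores * clock_ghz

print(peak_gflops(3584, 1.417))  # ~10,157 GFLOPS (Titan X Pascal, rated boost)
print(peak_gflops(3584, 1.62))   # ~11,612 GFLOPS (boost observed in TPU's review)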
Posted on Reply
#85
efikkan
RejZoRIt does. It sees which resources are being used on what basis and arranges the usage accordingly on a broader scale, not down to the individual texture level.
Neither the driver nor the GPU knows anything about the internal state of the game engine.
SteevoIf you played through a game and identified all the data points where FPS dropped below 60, you could go find the reasons and preemptively correct the issue, by either prefetching data
The GPU has no ability to reason about why data is missing; it's simply a device processing the data it's ordered to.
Posted on Reply
#86
rruff
CammAMD just didn't play in the high end this generation for various reasons.
Reasons being they are a very small company building both CPUs and GPUs. They have to pick their battles, and the bleeding edge highest performing product isn't it.

I'm more interested in Vega 11. It's a more mainstream product, and if it really does deliver good performance/watt (better than Pascal), that would be something. It would also give AMD a true laptop card.
Posted on Reply
#87
Divide Overflow
I'm amused how some people are highly skeptical of some info from WCCFTech but other info is considered gospel. :rolleyes:
Posted on Reply
#88
Kanan
Tech Enthusiast & Gamer
TotallyThe reason they crippled it was that the Titan was cutting into their own workstation graphics business. People aren't going to give up their hard-earned money when they don't have to, and the original Titan presented a very viable alternative to their Quadro line, so the Titan in that form had to go. I say it was more a self-preservation move than cost-cutting.
Nah, nothing got crippled after the original Titan. It's just that the Titan X (Maxwell and Pascal) are built on gaming architectures; they never had DP in the first place.
Posted on Reply
#89
theeldest
jabbadapThe problem with your GFLOPS chart is that GFLOPS depends on GPU clock, and you have taken the vendors' quoted numbers. The rule of thumb is that on the AMD side the quoted GFLOPS are probably a little too high, because the card throttles its clocks back before reaching the quoted "up to" clock (the Fury X being an exception thanks to its water cooling; also, the GFLOPS for the RX 480 are not from the "up to" clock, which would give 5,834 GFLOPS). On the NVIDIA side the quoted GFLOPS figure is too small, because the real GPU clock is higher than the rated boost clock.

I.e. the Titan X's GFLOPS are given as 2 × 1.417 GHz × 3584 CC = 10,157 GFLOPS, while the GPU clock in gaming is higher than that; in TPU's review the boost clock for the Titan X Pascal is ~1.62 GHz, which means 2 × 1.62 GHz × 3584 CC = 11,612 GFLOPS. Why this matters is that the real boost clock for NVIDIA's cards varies more per card, and thus the real GFLOPS differs more from the quoted value.
That doesn't actually change any of the predictions, because nothing in the model depends on NVIDIA's "performance/FLOP" when 'predicting' Vega's performance. All it does is drop the dot for the Titan X a little further below the linear fit line.

The graphs & charts I made aren't 'wrong' or 'flawed'. They follow standard practices.

What you're doing--making specific modifications due to information you have--is more in line with Bayesian statistics.

It's not better, it's not worse. It's different.
Posted on Reply
#90
ensabrenoir
......doesn't matter when they release it, 'cause NVIDIA has a counter punch ready. Honestly, in spite of how good the "10" series cards are, I feel as though they were just a fundraiser for what's coming next
......but in the meantime, even after AMD's new power-up, NVIDIA's like

Posted on Reply
#91
efikkan
ensabrenoir......doesn't matter when they release it, 'cause NVIDIA has a counter punch ready.
I'm just curious, what is the counterpart from Nvidia? (Except for 1080 Ti which will arrive before Vega)
Posted on Reply
#92
ensabrenoir
efikkanI'm just curious, what is the counterpart from Nvidia? (Except for 1080 Ti which will arrive before Vega)
....counter "punch" .......Nvdia could just drop the price of the 1080s, or start talking about the 20 series or do nothing and let the 1080ti ride. They seem confident that it'll retain the crown even after vega......
Posted on Reply
#93
the54thvoid
Super Intoxicated Moderator
Captain_TomI fully expect Vega 10 to beat the Titan XP. In fact I would say the real question is if it beats the Titan XP Black.
Your enthusiasm is... Optimistic. Would be lovely if true.
Posted on Reply
#94
efikkan
ensabrenoir....counter "punch" .......NVIDIA could just drop the price of the 1080s, or start talking about the 20 series, or do nothing and let the 1080 Ti ride. They seem confident that it'll retain the crown even after Vega......
They may lower the price, but they have no new chips until Volta next year.
Posted on Reply
#95
jahramika
cdawallNot really, these are targeted at the 1080 Ti, not the 1080. This would be about average for the AMD vs. NVIDIA release schedule. I do, however, find the hype train a bit annoying, as per usual. I was expecting a Jan-Feb release.
Posted on Reply
#96
jahramika
Hype = Pascal DX12 hardware support, HBM2. When will Pascal justify the 1080/Titan pricing?
Posted on Reply
#97
Captain_Tom
owen10578Vega 11 replacing Polaris 10? So is this going to be a complete next-gen product stack, aka RX 5xx? I thought it's supposed to add to the current product stack at the high end. The information seems off.
No, the information is dead-on with literally everything we have been told by AMD.


Look at their product roadmap:

2015 = Fury HBM, R9 300 28nm

2016 = Polaris 10, 11

2017 = Vega 10, 11


Just read their bloody map!!! 2016 had the 480 as a stopgap while they got their nearly all-HBM2 lineup ready.
Posted on Reply
#98
dalekdukesboy
TheGuruStudFilling the SPs with the much enhanced scheduler should do it easily.
I'm no expert here, but just from looking at the specs and it being on 7 nm tech, especially if it clocks like hell (plus whatever they set standard clocks at), I would honestly suspect this card is going to be in 1080 Ti territory, and depending on the application/game, drivers, etc., it may beat it.
Posted on Reply
#99
efikkan
Captain_TomJust read their bloody map!!! 2016 had the 480 as a stopgap while they got their nearly all-HBM2 lineup ready.
I've not seen the details of Vega 11 yet, but I assume it will be GDDR 5(X).

HBM has so far been a bad move for AMD, and it's not going to help Vega 10 for gaming either. GP102 doesn't need it, and it's still going to beat Vega 10. HBM or better will be needed eventually, but let's see if even Volta needs it for gaming. AMD should have spent their resources on the GPU rather than memory bandwidth they don't need.
Posted on Reply
#100
Blueberries
If AMD can release a dual-Vega card with HBM and a closed-loop cooler at around $600 or so, it would blow NVIDIA's gaming market out of the water until 2018.

It's not likely though

(I'm assuming Vega 10 will beat Pascal in Perf/watt but not in throughput)
Posted on Reply