Thursday, January 12th 2017

AMD's Vega-based Cards to Reportedly Launch in May 2017 - Leak

According to WCCFTech, AMD's next-generation "Vega" graphics architecture will launch in consumer graphics cards by May 2017. The website claims AMD will have Vega GPUs available in several SKUs, based on at least two different chips: Vega 10, the high-end part with apparently stupendous performance, and a lower-performance part, Vega 11, which is expected to succeed Polaris 10 in AMD's product stack, offering slightly higher performance at vastly better performance/Watt. WCCFTech also points out that AMD may show a dual-chip Vega 10×2 based card at the event, though they say it may only be available at a later date.
AMD is expected to begin the "Vega" architecture lineup with the Vega 10, an upper-performance segment part designed to disrupt NVIDIA's high-end lineup, with a performance positioning somewhere between the GP104 and GP102. This chip should carry 4,096 stream processors, with up to 24 TFLOP/s of 16-bit (half-precision) floating point performance. It will feature 8-16 GB of HBM2 memory with up to 512 GB/s memory bandwidth. AMD is looking at typical board power (TBP) ratings around 225W.
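As a rough sanity check on those numbers (a minimal sketch, not official math: the ~1.47 GHz clock below is an assumption, since the leak doesn't state one), 24 TFLOP/s of packed FP16 works out from 4,096 stream processors like this:

```python
# Back-of-the-envelope check of the rumored Vega 10 FP16 throughput.
# The clock speed is an assumption; only the shader count and the
# 24 TFLOP/s figure come from the leak.
stream_processors = 4096
flops_per_sp_per_clock = 2   # one fused multiply-add counts as 2 FLOPs
fp16_packing = 2             # packed half-precision doubles the FP32 rate
assumed_clock_ghz = 1.47

tflops_fp16 = stream_processors * flops_per_sp_per_clock * fp16_packing * assumed_clock_ghz / 1000
print(f"Theoretical FP16 throughput: {tflops_fp16:.1f} TFLOP/s")  # ~24.1 TFLOP/s
```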

Next up is "Vega 20", which is expected to be a die-shrink of Vega 10 to the 7 nm GF9 process being developed by GlobalFoundries. It will supposedly feature the same 4,096 stream processors, but likely at higher clocks, up to 32 GB of HBM2 memory at 1 TB/s, PCI-Express gen 4.0 bus support, and a typical board power of 150W.
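For context, the 1 TB/s figure is consistent with four stacks of HBM2 running at 2.0 Gb/s per pin; a small sketch, with the stack count and pin speed being assumptions rather than anything from the leak:

```python
# Rough check of the rumored 1 TB/s Vega 20 memory bandwidth.
# Stack count and per-pin data rate are assumptions, not leaked specs.
hbm2_stacks = 4
bus_width_per_stack_bits = 1024
pin_speed_gbps = 2.0

bandwidth_gbs = hbm2_stacks * bus_width_per_stack_bits * pin_speed_gbps / 8  # bits -> bytes
print(f"Theoretical bandwidth: {bandwidth_gbs:.0f} GB/s")  # 1024 GB/s, i.e. ~1 TB/s
```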

AMD plans to roll out the "Navi" architecture some time in 2019, which means the company will use Vega as their graphics architecture for two years. There's even talk of a dual-GPU "Vega" product featuring a pair of Vega 10 ASICs.
Source: WCCFTech

187 Comments on AMD's Vega-based Cards to Reportedly Launch in May 2017 - Leak

#151
BiggieShady
theeldestperformance per TFLOP decreases
That's completely nonsensical: performance is measured in TFLOPS; there is no such thing as performance per TFLOP.
You probably meant that the ratio of average performance in TFLOPS to peak theoretical performance in TFLOPS decreases.
Posted on Reply
#152
Kanan
Tech Enthusiast & Gamer
krukJust a moment here. Did you even read the reviews? The reference 780 Ti consumed 15 Watts less when gaming (muh better efficiency!!111), it was 8% faster and 28% more expensive vs the reference 290x. And even if the 290x was 8% slower, it still managed to push 60+ in most games tested here on TPU at 1080p. People only bought the 780 Ti because it was nVidia, not because it was that much better as you say. The only two problems with the 290x were its multimonitor power consumption and poor reference cooling. Otherwise it was a great card! Stop making things up ...
Did you even read or understand what I've written? I talked about CUSTOM 780 Ti mainly, and those were a great deal faster back then, not only 8%. And afaik the 290X, even being a reference card, ran pretty well, because it was tweaked by W1zzard and afaik was a cherry-picked GPU from AMD too. But you can't say that about the ref 780 Ti, which runs at pretty low clocks compared to custom ones. Over 200 MHz lower than decent customs.
Captain_TomNo I have nothing inherently against Nvidia, or any company that makes a product. I don't call people "Nvidiots" because they buy Nvidia cards, I do so when I truly believe they are a fanboy. And yeah I assume most people who defend Kepler are in fact fanboys, because kepler was a big joke if you actually know what you are talking about.
It's not and I still know what I'm talking about. And you're still behaving like a fanboy or someone who actually has no clue himself. Just an FYI: the 1536-shader GTX 680, which consumed way less power than the HD 7970, was faster than AMD's 2048-shader GPU. AMD had only one way to react to it: push the HD 7970 to retarded clocks, calling it "GHz Edition" with 1050 MHz, but also increasing its power consumption a lot, by the way. The whole HD 7970 and R9 290X GPUs were about brute power with wide buses (384 / 512 bit) and high shader counts. Basically, the mistakes Nvidia made before with the GTX 280/480/580 were copied by AMD onto their HD 7000 and later lineups, and Nvidia basically tried to do what AMD pulled off with the HD 5000 / 6000 series, which were pretty efficient compared to the GTX 200/400/500 series. Only when the 290X was released did it put enough stress on Nvidia to counter it with their own full-force GPU, the GK110 with all of its shaders enabled (2880). It was more expensive, but also better in every single aspect.
I have owned plenty of Nvidia cards (Haven't owned any for a few years now though). However I honestly don't believe you when you say you own AMD cards considering the continued noob arguments I keep hearing.
Said the angry fanboy. Who cares. I owned an HD 5850 from release until 2013, when I replaced it with an HD 5970 and used it until it started to become defective at the end of 2015. I own an HD 2600 XT now. Basically I owned 2 of the best AMD/ATI GPUs ever made. The reason why I chose the HD 5000 over the GTX 200 series back then was simple: because it was a lot better. Because I don't care about brands.
The 390X is noisy huh? Um no they were all AIB whisper quiet cards.
I said they are 'noisier', not noisy. Also I was mostly talking about R9 200 series, not 300, which are pretty irrelevant to this discussion. But if you want to talk about the R9 300 series: yes compared to GTX 900 series they were noisy. You can't compare R9 300 series to GTX 700 series because they are a different generation. Go and read some reviews.
Of course you probably don't know that because you are clearly uninformed on all of these cards from top-to-bottom.
Childish behaviour all the way.
I mean the 290X was hot, but not loud if using its default fan settings; and again - that's for the cheap launch cards. If you bought the plentifully available AIB cards you would know they were very quiet. Quite funny you bring up noise when the Titan series (and now the 1070/80 FE) have been heavily criticized for their under-performing and noisy fan systems.
We are not talking about the Titan series. And the 1070/1080 are doing relatively well for what they are (exhaust-style coolers). AMD's exhaust coolers were just a mess after the HD 5000 series. Loud and hot. And later ineffective (R9 290X fiasco).

I can bring up noise every time, since I'm absolutely right about Nvidia cards being quieter and way less power hungry, which has to do with the noise as well.
Also the efficiency argument is my favorite myth. Only with the release of Maxwell did Nvidia start to have any efficiency advantage at all, and that was only against the older generation AMD cards. I will leave you with this:
[performance per watt comparison charts]
^WOW! A full 5-10% more efficient! (depending on the card). Anyone who thinks that is worth mentioning is simply looking for reasons to support "Their side."

Pascal was really the first generation where Nvidia won on efficiency in any meaningful way.
Then compare a custom R9 290X vs. a custom 780 Ti and you'll see I'm right. The latter are way more efficient and a lot faster. Also multi-monitor, idle and Blu-ray / web power consumption is still a mess on those AMD cards. Efficiency is not only about gaming.

I'm gonna drop this discussion now, cause you're a fanboy and just a waste of time. Safe to say, you didn't prove anything of what I've said wrong. The opposite is true and you totally confirmed my point with your cherry-picking. Try that with someone else.

You're on Ignore, byebye.
Posted on Reply
#153
kruk
KananDid you even read or understand what I've written? I talked about CUSTOM 780 Ti mainly, and those were a great deal faster back then, not only 8%. And afaik the 290X, even being a reference card, ran pretty well, because it was tweaked by W1zzard and afaik was a cherry-picked GPU from AMD too. But you can't say that about the ref 780 Ti, which runs at pretty low clocks compared to custom ones. Over 200 MHz lower than decent customs.
Here are two benchmarks made at the same time with custom 780 Ti and custom 290x:
www.techpowerup.com/reviews/ASUS/R9_290X_Direct_Cu_II_OC/
www.techpowerup.com/reviews/MSI/GTX_780_Ti_Gaming/

The custom 780 Ti was 15% faster than the custom 290x at 1080p and cost 18% more, which gives the custom 290x better performance/price. The custom 780 Ti also consumed 6% more power in an average gaming session, making it only about 10% more efficient at gaming than the custom 290x. Nowadays, in modern games, the reference 780 Ti and 290x are on par in performance at 1080p, which we can probably safely extrapolate to custom cards. You do the math.
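(A quick sketch of that arithmetic, using only the relative figures above:)

```python
# Relative gaming efficiency (performance per watt) of the custom 780 Ti
# versus the custom 290X, from the relative figures quoted above.
perf_ratio = 1.15    # custom 780 Ti ~15% faster at 1080p
power_ratio = 1.06   # custom 780 Ti ~6% higher gaming power draw

efficiency_advantage = perf_ratio / power_ratio - 1
print(f"780 Ti perf/W advantage: {efficiency_advantage:.1%}")  # ~8.5%, in the ballpark of the ~10% above
```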

As I said before, you either didn't read the reviews or you have a really bad memory. Pick one. I'm pulling out of this debate as it's not the topic of this thread, however I did enjoy proving you wrong ;).
Posted on Reply
#154
Kanan
Tech Enthusiast & Gamer
krukHere are two benchmarks made at the same time with custom 780 Ti and custom 290x:
www.techpowerup.com/reviews/ASUS/R9_290X_Direct_Cu_II_OC/
www.techpowerup.com/reviews/MSI/GTX_780_Ti_Gaming/

The custom 780 Ti was 15% faster than the custom 290x at 1080p and cost 18% more, which gives the custom 290x better performance/price. The custom 780 Ti also consumed 6% more power in an average gaming session, making it only about 10% more efficient at gaming than the custom 290x. Nowadays, in modern games, the reference 780 Ti and 290x are on par in performance at 1080p, which we can probably safely extrapolate to custom cards. You do the math.

As I said before, you either didn't read the reviews or you have a really bad memory. Pick one. I'm pulling out of this debate as it's not the topic of this thread, however I did enjoy proving you wrong ;).
You didn't prove me wrong, but thanks for proving me correct. And while you're at it don't forget the bad power consumption of R9 290X on idle/multi monitor and BD/Web. And looking into the future is still impossible. Maybe more people would've bought the 290X knowing it's better in 2-3 years, maybe not, because Nvidia had better drivers back then. I'm also pretty sure most people didn't care, they wanted the best performance NOW, and not in years to come, most enthusiast users that pay over 600 bucks for a GPU don't keep it for 2-3 years anyway, they replace GPUs way faster than normal users. AMD was always a good brand for people that kept their GPU a long time; their GPUs had more memory and future-oriented technology, like DX11 on the HD 5000 series and low-level API prowess on the R9 200/300 series. Doesn't change the fact that the last AMD generation that was a real "winner" was the HD 5000 series. Everything after that was just a follow-up to Nvidia, always 1 or 2 steps behind.
Posted on Reply
#155
kruk
KananYou didn't prove me wrong, but thanks for proving me correct. And while you're at it don't forget the bad power consumption of R9 290X on idle/multi monitor and BD/Web. And looking into the future is still impossible. Maybe more people would've bought the 290X knowing it's better in 2-3 years, maybe not, because Nvidia had better drivers back then. I'm also pretty sure most people didn't care, they wanted the best performance NOW, and not in years to come, most enthusiast users that pay over 600 bucks for a GPU don't keep it for 2-3 years anyway, they replace GPUs way faster than normal users.
Look, I know you have choice-supportive bias because you own a 780 Ti, but avoiding the numbers won't make the nVidia card better. In my first post, which you obviously didn't read properly, I said clearly that multi-monitor power consumption was one of the two problems with the 290x.

Let's analyze the power consumption in other areas (single monitor only of course, since the number of multimonitor users is way lower):
The difference in single-monitor idle is 10 Watts. This is also the difference between the cards at average gaming. If you leave the 290x idling, it can turn off into ZeroCore Power mode, which consumes virtually no power (I measured it myself on my 7750 and you can find measurements online), but I'm putting 2 W there so you won't say I'm cheating ...

If you leave the computer running 24/7 and it idles 12 hours, you game 6 hours, watch a Blu-ray movie for 2 hours and do other things for 4 hours, the 780 Ti will consume (12h*10W + 6h*230W + 2h*22W + 4h*10W)/24h ≈ 66 W on average and the 290X will consume (12h*2W + 6h*219W + 2h*74W + 4h*20W)/24h ≈ 65 W on average. Virtually the same! You can say whatever you want, but the numbers are on my side :D.
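(The same weighted average as a small sketch, using the per-state wattages above:)

```python
# Average power draw over a 24-hour day, weighted by the usage pattern
# and per-state wattages quoted above.
hours = {"idle": 12, "gaming": 6, "bluray": 2, "other": 4}
watts_780ti = {"idle": 10, "gaming": 230, "bluray": 22, "other": 10}
watts_290x  = {"idle": 2,  "gaming": 219, "bluray": 74, "other": 20}  # idle = ZeroCore

def average_power(watts):
    return sum(hours[state] * watts[state] for state in hours) / sum(hours.values())

print(f"780 Ti: {average_power(watts_780ti):.0f} W average")  # ~66 W
print(f"290X:   {average_power(watts_290x):.0f} W average")   # ~65 W
```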
Posted on Reply
#156
Kanan
Tech Enthusiast & Gamer
krukLook, I know you have choice-supportive bias because you own a 780 Ti, but avoiding the numbers won't make the nVidia card better. In my first post, which you obviously didn't read properly, I said clearly that multi-monitor power consumption was one of the two problems with the 290x.

Let's analyze the power consumption in other areas (single monitor only of course, since the number of multimonitor users is way lower):
The difference in single-monitor idle is 10 Watts. This is also the difference between the cards at average gaming. If you leave the 290x idling, it can turn off into ZeroCore Power mode, which consumes virtually no power (I measured it myself on my 7750 and you can find measurements online), but I'm putting 2 W there so you won't say I'm cheating ...

If you leave the computer running 24/7 and it idles 12 hours, you game 6 hours, watch a Blu-ray movie for 2 hours and do other things for 4 hours, the 780 Ti will consume (12h*10W + 6h*230W + 2h*22W + 4h*10W)/24h ≈ 66 W on average and the 290X will consume (12h*2W + 6h*219W + 2h*74W + 4h*20W)/24h ≈ 65 W on average. Virtually the same! You can say whatever you want, but the numbers are on my side :D.
Maybe so, because I'm kinda tired of this discussion. And yeah, I have to admit some bias, though I bought this 780 Ti used after the 390X was already released and I still don't regret it a single bit. Maybe I mixed up the power consumption numbers of the 290X and the 390X in my mind, because the 390X consumes more than the 290X (8 GB vs. 4 GB, higher clocks all around); I compared the 780 Ti with the 390X back then. So it's okay, I don't disagree with your numbers. However, I still don't like the multi-monitor power consumption on those GPUs; I didn't like it when I had the HD 5850/5970 either, those had the same problem, and web power consumption is way too high too (I don't care about BD, I just use the wording from TPU). For me it matters, for others maybe not. I didn't have the choice between a 390 and a 780 Ti anyway, I bought this one from a friend at a discount. Originally I had an R9 380 delivered to me, but it was defective from the start, so I asked him if he wanted to sell me his 780 Ti, because he had just bought a 980 Ti, and I returned the 380. That's it.
Posted on Reply
#157
medi01
That "people buy good products" crap is annoying in 2017. We have seen it with Prescott we have seen it with Fermi, it is clearly not the case.
Posted on Reply
#158
cdawall
where the hell are my stars
medi01That "people buy good products" crap is annoying in 2017. We have seen it with Prescott we have seen it with Fermi, it is clearly not the case.
Fermi was the superior performing card? It took two AMD cards in crossfire to best the GTX480.
Posted on Reply
#159
BiggieShady
cdawallFermi was the superior performing card? It took two AMD cards in crossfire to best the GTX480.
Fermi was hot but great, a big improvement over the 200 series ... however, it wasn't as good as the evergreen series until the 580 model; the timeline went something like this:

Fermi had a huge die size compared to evergreen and its successors ... AMD already had the 5870, which the 480 couldn't quite dethrone without high tessellation (evergreen's Achilles heel) ... thing is, the 580 was a proper fermi improvement (performance, power, temperature and noise) and the 6000 series wasn't an improvement at all over evergreen.
Two 5000 or 6000 series gpus in xfire had huge issues with frame pacing back then (and it was an issue all the way to the crimson driver suite), so they would actually beat any fermi in average frame rate, but measured frame time variations made it feel almost the same. Maybe that's what you are referring to?
Posted on Reply
#160
Kanan
Tech Enthusiast & Gamer
cdawallFermi was the superior performing card? It took two AMD cards in crossfire to best the GTX480.
The gtx 480 wasn't bested anyway because crossfire back then was a complete mess, frame variances all over the place, only difference being it wasn't public knowledge like it is now. But the gtx 480 bested itself by being loud, hot and power hungry, it wasn't a good gpu if you ask me. Maybe 20-30% faster than HD 5870 but not really worth the hassle. The best *card* was the HD 5970 if you ignore the crossfire problems. It was actually pretty nice with frame pacing, too bad that feature was introduced in 2013 not Nov. 2009 when the GPU was released.
Posted on Reply
#161
cdawall
where the hell are my stars
KananThe gtx 480 wasn't bested anyway because crossfire back then was a complete mess, frame variances all over the place, only difference being it wasn't public knowledge like it is now. But the gtx 480 bested itself by being loud, hot and power hungry, it wasn't a good gpu if you ask me. Maybe 20-30% faster than HD 5870 but not really worth the hassle. The best *card* was the HD 5970 if you ignore the crossfire problems. It was actually pretty nice with frame pacing, too bad that feature was introduced in 2013 not Nov. 2009 when the GPU was released.
I agree. My point was that, performance-wise, it was almost two full AMD GPUs ahead.
Posted on Reply
#162
BiggieShady
I remember HD5870 vs GTX480 at launch being neck to neck ... radeon was even better in Crysis which was the uber benchmark at the time ...
I believe performance lead for fermi came later with driver maturity ... and by that time it was GTX 580 vs HD 6970 win for nvidia and very soon after GTX 580 vs 7970 win for amd until kepler and so on.
Posted on Reply
#163
Kanan
Tech Enthusiast & Gamer
BiggieShadyI remember HD5870 vs GTX480 at launch being neck to neck ... radeon was even better in Crysis which was the uber benchmark at the time ...
I believe performance lead for fermi came later with driver maturity ... and by that time it was GTX 580 vs HD 6970 win for nvidia and very soon after GTX 580 vs 7970 win for amd until kepler and so on.
Off the top of my head, I only remember the GTX 480 being mostly faster than the HD 5870 (with a few games that favoured ATI tech), and the HD 5970 being always on top (except in a few games that didn't support CF).

btw.
www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_480_Fermi/32.html
Posted on Reply
#165
Kanan
Tech Enthusiast & Gamer
cdawallIt was faster in almost everything lol
Yeah, I screwed up the wording lol. Such a nice card, and the cooler was designed like a grill too, so it was ready for barbecue. I think they did a great job fixing the issues with the GTX 580 though.
Posted on Reply
#167
cdawall
where the hell are my stars
KananYeah, I screwed up the wording lol. Such a nice card, and the cooler was designed like a grill too, so it was ready for barbecue. I think they did a great job fixing the issues with the GTX 580 though.
I loved my 470s to the point where I still have three of them with DD blocks, just in case I want to feel nostalgic.
Posted on Reply
#168
medi01
cdawallI agree. My point was that, performance-wise, it was almost two full AMD GPUs ahead.
What on earth:

[performance summary graph]
Posted on Reply
#169
cdawall
where the hell are my stars
medi01What on earth:

Wait, let's look at this graph. 100% performance match to the 5970. Would you mind telling everyone how many GPUs the 5970 contained?
Posted on Reply
#170
theeldest
BiggieShadyThat's completely nonsensical: performance is measured in TFLOPS; there is no such thing as performance per TFLOP.
You probably meant that the ratio of average performance in TFLOPS to peak theoretical performance in TFLOPS decreases.
I'm referencing my previous posts with the graphs and tables. There I define performance as the Average Performance as listed in the summary of the Titan X Pascal review.

Post 1: www.techpowerup.com/forums/threads/amds-vega-based-cards-to-reportedly-launch-in-may-2017-leak.229550/#post-3584964

Post 2: www.techpowerup.com/forums/threads/amds-vega-based-cards-to-reportedly-launch-in-may-2017-leak.229550/page-4#post-3585723


The table for reference: [performance per TFLOPS table]
AMD's larger dies have lower performance per FLOP.

Before you reply that this is a stupid methodology, please put together something based on a methodology that you think is good and use that supporting data.
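For what it's worth, here is a minimal sketch of the kind of ratio that table computes; the card names and numbers below are purely illustrative placeholders, not the actual figures from the review summary:

```python
# Illustrative "performance per peak TFLOPS" ratio, as described above.
# The entries are hypothetical placeholders, not real review data.
cards = {
    # name: (average relative performance in %, peak FP32 TFLOPS)
    "Card A": (100, 11.0),
    "Card B": (80, 8.9),
    "Card C": (70, 8.6),
}

for name, (avg_perf, peak_tflops) in cards.items():
    print(f"{name}: {avg_perf / peak_tflops:.1f} perf-points per TFLOPS")
```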
Posted on Reply
#171
EarthDog
Can you tell us why the relationship of die size to FLOPS matters to people? It's not like that translates to anything tangible for the user, like FPS or compute power. It's just some math that divides TFLOPS by die size...

Am I missing something?
Posted on Reply
#172
cdawall
where the hell are my stars
EarthDogCan you tell us why the relationship of die size to FLOPS matters to people? It's not like that translates to anything tangible for the user, like FPS or compute power. It's just some math that divides TFLOPS by die size...

Am I missing something?
No, you are missing nothing. He posted graphs in one of the other threads claiming there is such a thing as performance per flop etc. Basically he used excel and is proud of it. Let him have his excel moment; remember, not everyone can use excel.
Posted on Reply
#173
EarthDog
It just seems so arbitrary.. like, the size of the rims on the car compared to how many windows it has. It really has no bearing on anything. I mean, cool metric, but, what is it actually telling us? How can a consumer use that data to gauge anything??

Consumers couldn't care less if there was something the size of a postage stamp or an 8.5"x11" die under there... really.
Posted on Reply
#174
cdawall
where the hell are my stars
EarthDogIt just seems so arbitrary.. like, the size of the rims on the car compared to how many windows it has. It really has no bearing on anything. I mean, cool metric, but, what is it actually telling us? How can a consumer use that data to gauge anything??

Consumers couldn't care less if there was something the size of a postage stamp or an 8.5"x11" die under there... really.
Consumers don't care about anything but RGB lights.
Posted on Reply
#175
BiggieShady
theeldestBefore you reply that this is a stupid methodology, please put together something based on a methodology that you think is good and use that supporting data.
Ah, that little table from your original post explains where you are coming from ... I wouldn't use the word stupid though, but slightly unscientific ... FLOPS is a unit and performance is measured using that unit (among other things such as fps, but that's resolution dependent, unlike FLOPS). Column 4 of that little table has the ratio "average frames per second / peak theoretical floating point operations per second" ... and it would be great to have a unitless ratio for that purpose, but that's very difficult because actual performance is measured in fps.

Since you are depicting a trend in your graph, it should work out if and only if everything is measured at the same resolution (either that or the all-resolutions summary) and GPU usage is the same in all games/samples. The second condition is a difficult one.
Why usage? Because, for example, an 8 TFLOPS GPU @ 75% usage skews your perf/FLOPS ratio compared to, say, an 8 TFLOPS GPU @ 100% usage.
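(A tiny sketch of that skew, using an arbitrary illustrative constant for frame rate per achieved FLOPS:)

```python
# How GPU utilization skews a "fps per peak TFLOPS" ratio: two hypothetical
# 8 TFLOPS GPUs with identical fps per *achieved* TFLOPS, at different usage.
peak_tflops = 8.0
fps_per_achieved_tflops = 15.0  # arbitrary illustrative constant

for usage in (1.00, 0.75):
    fps = fps_per_achieved_tflops * peak_tflops * usage
    ratio = fps / peak_tflops  # what a perf-per-peak-TFLOPS comparison would report
    print(f"{usage:.0%} usage: {fps:.0f} fps -> {ratio:.1f} fps per peak TFLOPS")
```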
As I said, good thing you extrapolated a linear trend out of the data points to combat the gpu usage skew.

I'm not even going into how in one game you need X floating point operations to calculate a single pixel, and in another game 2X floating point operations ... but again, the same games were used, so the relative trend analysis works out.

I may have come across as a dick, which wasn't my intention, but I admit that the fact that you are looking for a trend and a relative AMD vs Nvidia relation between trends (using averages and the same games) helps make your methodology work.
Posted on Reply