# NVIDIA's Next-Generation Ampere GPUs to be 50% Faster than Turing at Half the Power



## AleksandarK (Jan 3, 2020)

As we approach the release of NVIDIA's Ampere GPUs, rumored to launch in the second half of this year, more information about the upcoming graphics cards is surfacing. Today, according to the latest report from the Taipei Times, NVIDIA's next generation of graphics cards based on the "Ampere" architecture is rumored to deliver as much as a 50% performance uplift over the previous generation of Turing GPUs while consuming half the power.

Built on Samsung's 7 nm manufacturing node, Ampere is poised to be the new king among upcoming GPUs. The rumored 50% performance increase is not impossible, given the improvements the 7 nm node brings. From the density gains alone, NVIDIA could extract at least 50% extra performance out of the smaller node. Performance should increase even further, however, because Ampere also brings a new architecture. By combining a new manufacturing node with a new microarchitecture, Ampere is claimed to cut power consumption in half, making for a very efficient GPU. We still don't know whether the gains will come mostly in ray tracing applications or whether NVIDIA will focus on general graphics performance.
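As a back-of-the-envelope check, the combined claim can be expressed as a performance-per-watt ratio. This is a hypothetical sketch of the rumored numbers, not figures from the report:

```python
# Perf/W implied by the rumor: +50% performance at half the power.
perf_ratio = 1.50   # rumored performance vs. Turing
power_ratio = 0.50  # rumored power draw vs. Turing

perf_per_watt_gain = perf_ratio / power_ratio
print(f"Implied perf/W vs. Turing: {perf_per_watt_gain:.1f}x")
```

Taken literally, the rumor implies a 3x perf/W jump in a single generation, which is the figure several commenters push back on.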






----------



## HwGeek (Jan 3, 2020)

I bet that the "50% uplift" is in RTX ON performance while non RTX performance will remain the same in each segment.


----------



## s3thra (Jan 3, 2020)

Wow. That’s like the performance uplift between generations in the old days. Though yes, this could just be for ray tracing.


----------



## ZoneDymo (Jan 3, 2020)

HwGeek said:


> I bet that the "50% uplift" is in RTX ON performance while non RTX performance will remain the same in each segment.



^ This, I was just going to say the same thing.

Also, if any of this is true, I'm not expecting prices to return to normal any time soon... in fact, it will probably just get worse.


----------



## R0H1T (Jan 3, 2020)

Yes & pigs will also fly when that happens 

Reminds me of Intel's PR these days with multiple *****


----------



## Hyderz (Jan 3, 2020)

Can't wait to see the benchmarks


----------



## Wyverex (Jan 3, 2020)

Benchmarks or didn't happen 

Would be great if there's truth to this though


----------



## Calmmo (Jan 3, 2020)

This is talking about Ampere. Will NVIDIA even make gaming GPUs from it, or will they again hold off for a year and then release a cut-down variant under a different name? AMD still has nothing to show officially, so why would they, unless the new node allows for a significant reduction in costs? If this is an indication of anything, it's probably about their new $3,000 Titan card.


----------



## The Quim Reaper (Jan 3, 2020)

So that means I'll be able to play 'Control' at 4K, RTX High at 23fps instead of 15!!

..Amazeballs.

That will be so worth spending another $1000+ for.


----------



## FARGOUS (Jan 3, 2020)

Hi guys,
I've had my MSI GTX 970 Twin Frozr since 2014; it's a very good graphics card on my Samsung CF791 4K ultrawide 21:9 monitor.

I've waited a long time to buy this RTX 3080, because the RTX 2080 is too expensive for its 4K performance/price.

I hope RTX on at 4K is very good in this new next gen from NVIDIA.


----------



## Vayra86 (Jan 3, 2020)

HwGeek said:


> I bet that the "50% uplift" is in RTX ON performance while non RTX performance will remain the same in each segment.



That kinda means the better half of us will stick to the rather cheap discounted Turing Supers, which offered good perf/dollar, or Navi. If they don't push the upper mid-range to 2080 S levels, this thing is dead in the water and RT adoption will slow to a crawl.

Keep in mind the mainstream res is 1080p, by an immense margin. People have no urge to upgrade; Pascal's midrange can still push most games just fine. That's a 2016 release.

So no, I think we'll see a jump. 50%, not so sure, indeed. But 25-30% (as usual), definitely.



FARGOUS said:


> Hi guys,
> I've had my MSI GTX 970 Twin Frozr since 2014; it's a very good graphics card on my Samsung CF791 4K ultrawide 21:9 monitor.
> 
> I've waited a long time to buy this RTX 3080, because the RTX 2080 is too expensive for its 4K performance/price.
> ...



4K will always be a tough one, especially once RT gets bigger. I'd accept 1080p on that panel for the next half decade if I were you 

4K and ultrawide, though? It doesn't exist.


----------



## Space Lynx (Jan 3, 2020)

Good thing Navi 2 is coming, I no longer have to bow down to Nvidia.  I don't intend to do 4k Gaming, don't see the point, so Navi 2 will do me just fine wherever it lands performance wise.


----------



## TheGuruStud (Jan 3, 2020)

Remember that turding vs Pascal chart?  
Nvidia math is worse than Intel math.


----------



## HTC (Jan 3, 2020)

Something doesn't sound right.

When moving to a smaller node, you either get X% higher performance @ the same power *or* same performance using Y% less power, but not both.


----------



## FARGOUS (Jan 3, 2020)

Vayra86 said:


> That kinda means the better half of us will stick to the rather cheap discounted Turing Supers, that offered good perf/dollar, or Navi. If they don't push the upper mid range to 2080 S levels, this thing is dead in the water and RT adoption will slow to a crawl.
> 
> Keep in mind mainstream res. 1080p. By an immense margin. People have no urge, Pascal's midrange can push most of the games just fine, still. That's a 2016 release..
> 
> ...



It's 3440x1440, not 3840x2160


----------



## Vayra86 (Jan 3, 2020)

FARGOUS said:


> it's 3440x1440   not 3840 x 2160



A lot more doable, that.


----------



## Italia1 (Jan 3, 2020)

Mmmh... After years with Nvidia (FX 5950, 6800 Ultra, 7950 GX2), then years with AMD (X1950 XT up until now with a Vega 64 Liquid)... I'm waiting for the best video card this summer. Will it be Navi or Ampere? I need a serious upgrade


----------



## Recus (Jan 3, 2020)

TheGuruStud said:


> Remember that turding vs Pascal chart?
> Nvidia math is worse than Intel math.



What chart?



HTC said:


> Something doesn't sound right.
> 
> When moving to a smaller node, you either get X% higher performance @ the same power *or* same performance using Y% less power, but not both.



Maxwell 28nm to Pascal 16nm. Pascal got performance and efficiency.


----------



## ZoneDymo (Jan 3, 2020)

Italia1 said:


> Mmmh... After years with Nvidia(FX 5950, 6800ultra, 7950gx2), then years with Amd (1950xt - until now with Vega 64 liquid)... I'm waiting for best video card this summer. Will be Navi or Ampere ? I need a serious upgrade



depends on your definition of "best"


----------



## theGryphon (Jan 3, 2020)

50% more performance at same power? Possibly... depends on the case.

50% more performance at half the power? BS.


----------



## low (Jan 3, 2020)

AMD: releases Big Navi.
Nvidia: what shall we do? Let's spread news about the next gen to stop people buying Big Navi.


----------



## kings (Jan 3, 2020)

What big Navi? Released when?

Big Navi is a rumor at this point, just like Ampere or whatever name Nvidia may call it.


----------



## Vya Domus (Jan 3, 2020)

The Turing SM was also supposedly 50% faster, but ended up being no more than 30% in the best-case scenario.


----------



## 64K (Jan 3, 2020)

I believe it's possible to gain 50% in performance. Going from the 12nm process to the 7nm process should increase the efficiency and allow for more cores/faster clocks for the same wattage as Turing uses.

I doubt that Nvidia will lower prices though. They won't have a reason to unless Intel comes out with something really good this year for competition. Also, if there are shortages for any reason then we can expect retailer gouging, which will make prices higher than they should be for a while after release.


----------



## TheDeeGee (Jan 3, 2020)

50% Speed, Half the Power, double the cost.

That's Nvidia alright!


----------



## Valantar (Jan 3, 2020)

HwGeek said:


> I bet that the "50% uplift" is in RTX ON performance while non RTX performance will remain the same in each segment.


That is the only plausible explanation for something like this, or more precisely: a 50% uplift in perf/W for RT operations. A 3x general perf/W increase in a single generation, even with a full node jump, is completely unheard of. Not going to happen for ordinary rasterized graphics. Period.


----------



## PanicLake (Jan 3, 2020)

Half the power double the price?


----------



## notb (Jan 3, 2020)

HTC said:


> Something doesn't sound right.
> 
> When moving to a smaller node, you either get X% higher performance @ the same power *or* same performance using Y% less power, but not both.


And this is a result of which fundamental law of physics?

There's no reason why Nvidia wouldn't offer the traditional +30% performance with the new generation based on architecture improvements and optimizations alone.
Sometimes there's also some power efficiency bonus (despite the same node).
7 nm means more cores.

Going straight to 7 nm EUV means an even higher efficiency gain than AMD got with Polaris -> Navi.


----------



## HTC (Jan 3, 2020)

64K said:


> I believe it's possible to gain 50% in performance. *Going from the 12nm process to the 7nm process should increase the efficiency and allow for more cores/faster clocks for the same wattage as Turing uses.*
> 
> I doubt that Nvidia will lower prices though. They won't have a reason to unless Intel comes out with something really good this year for competition. Also, if there are shortages for any reason then we can expect retailer gouging which will make prices higher than they should be for awhile after release.



But they are claiming 50% performance increase WHILE using 50% less power. It's perfectly believable to be either one ... but both @ the same time? Seriously doubt it.

EDIT



notb said:


> And this is a result of which fundamental law of physics?
> 
> *There's no reason why Nvidia would offer the traditional +30% performance with new generation just based on architecture improvements and optimizations.
> Sometimes there's also some power efficiency bonus (despite the same node).
> ...



But they're not claiming 30%, are they? They could claim ... say ... 20% more performance @ 30% less power and it would be much more believable but 50% more performance @ 50% less power?

Seriously SERIOUSLY doubt this.


----------



## 64K (Jan 3, 2020)

True. That's why I only focused on the possibility of a 50% increase in performance for the same wattage as Turing. Having a 50% increase in performance and 50% fewer watts used at the same time isn't possible imo.


----------



## fynxer (Jan 3, 2020)

Why do you think nVidia wants to "surprise" all their customers at the absolute last second...

They don't want to cause panic too early before the next-gen release, because they know the 2080 Ti will drop to 1/3 of its value instantly when they make their announcement.

*Don't give them the satisfaction... Sell your 2080Ti NOW!!!*


----------



## renz496 (Jan 3, 2020)

low said:


> AMD: release big navi
> Nvidia: what shall we do? Lets spread news about the next gen to prevent ppl buying big navi.



I think we hear this every time a new GPU is about to come out: the competitor starts spreading rumors to stop people from buying the competing product. Sometimes I think companies like AMD and Nvidia don't really need to do this, because their fanboys will do the job for them for free. I still remember when some people said you should hold back from getting a GTX 1080 because Vega was coming out in October 2016. They said there was a shortage of the 1080 because of demand anyway, so you might as well wait until October 2016.


----------



## Xex360 (Jan 3, 2020)

Finally we'll get the promised 2080ti, not the garbage we have now.


----------



## ZeroFM (Jan 3, 2020)

nVidia is the new Intel, YOU WILL SEE. New GPUs won't be cheaper, and only a tiny bit faster, because there is no competition


----------



## Fluffmeister (Jan 3, 2020)

Talk of high-end Navi comes from rumours, not from AMD, so don't blame them!

Nvidia... It's all BS. Fu Nvidia!

Man, this forum.


----------



## Xaled (Jan 3, 2020)

Why is there no "up to" in the title here, while there was in the Intel news? 
I don't like Intel, but I hate double standards


----------



## ppn (Jan 3, 2020)

If Nvidia skips 10 nm and plain 7 nm and goes directly to 7+/6 nm, density increases from ~30 MTr/mm² on 12/14/16 nm to ~77 MTr/mm² on 7+. A 2080 Ti-class die shrinks from ~770 to ~300 mm², drops from a 384-bit to a 256-bit bus with a 16 GB buffer at ~286 mm² and 4096 CUDA cores, and compensates with GPU clock speed. A bigger chip with 6144 CUDA cores comes in around 429 mm², so +50% perf is perfectly doable.
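The die-shrink arithmetic above can be checked with the density figures quoted in the post (the ~30 and ~77 MTr/mm² numbers are the poster's; this is just a sketch of the scaling, ignoring parts of the die such as I/O that shrink poorly):

```python
# Area scaling from the quoted logic densities.
density_16nm = 30.0    # MTr/mm^2 on 12/14/16 nm (figure from the post)
density_7plus = 77.0   # MTr/mm^2 on 7+ nm (figure from the post)

area_scale = density_16nm / density_7plus   # area shrinks as density grows
big_turing_die = 770.0                      # mm^2, 2080 Ti-class die per the post
shrunk = big_turing_die * area_scale
print(f"Shrunk die: ~{shrunk:.0f} mm^2")
```

This reproduces the ~300 mm² the post arrives at, before accounting for the narrower memory bus.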


----------



## Valantar (Jan 3, 2020)

Xaled said:


> Why is there no "UP TO" in the title here while there was in the Intel news?
> I don't like intel but I hate double standards


Because the source (apparently) doesn't say "up to". They're just reporting on someone else's report, so conveying this accurately is a must, no matter if the information itself is accurate. If this was data TPU had sourced themselves, your complaint would be entirely valid, but in this case they're just the messenger and do not deserve to be shot.


----------



## Vya Domus (Jan 3, 2020)

64K said:


> I believe it's possible to gain 50% in performance. Going from the 12nm process to the 7nm process should increase the efficiency and allow for more cores/faster clocks for the same wattage as Turing uses.



You're not going to get 50% more clock speed on 7 nm, it's impossible. The only thing you'll get is more cores, but the power won't change much.


----------



## 64K (Jan 3, 2020)

Vya Domus said:


> You're not going to get 50% more clockspeed on 7nm, it's impossible. The only thing you'll get is more cores but the power wont change much.



I didn't say 50% faster clocks. I said a combination of faster clocks/more cores should allow for a 50% increase in performance for the same watts used as Turing.


----------



## notb (Jan 3, 2020)

HTC said:


> But they're not claiming 30%, are they? They could claim ... say ... 20% more performance @ 30% less power and it would be much more believable but 50% more performance @ 50% less power?


But based on what we've seen with Zen and Navi, would you say halving power draw is believable? Based on the new node alone.
Remember this is 14 nm -> 7 nm EUV.
So it's analogous to what AMD has already shown with Zen 2/Navi (DUV) plus what they promise for this year.

I think it's quite feasible.

And now we get to the architectural improvements.
We know Nvidia can do +30% with each gen (every ~2 years). There's no reason why this wouldn't be true for Ampere.

So we now have something like +30% performance and -50% power draw. Not quite the +50%/-50%, but still not bad.

Except, as someone said above, the +50% may be taking into account a large boost in RTRT.
Of course RTRT is still a source of arguments today, but a few years from now we'll just call it "gaming performance", just like we stopped splitting 3D and 2D in the late 90s.


----------



## Xaled (Jan 3, 2020)

Valantar said:


> Because the source (apparently) doesn't say "up to". They're just reporting on someone else's report, so conveying this accurately is a must, no matter if the information itself is accurate. If this was data TPU had sourced themselves, your complaint would be entirely valid, but in this case they're just the messenger and do not deserve to be shot.









Now, double standards or not?


----------



## Naito (Jan 3, 2020)

Regardless of where the 50% uplift applies, I'm just glad I didn't buy into the RTX 2000 series


----------



## P4-630 (Jan 3, 2020)

Ah, Cyberpunk 2077 Release date April 16 2020, recommended system requirements RTX30xx.....


----------



## Valantar (Jan 3, 2020)

notb said:


> And now we get to the architectural improvements.
> We know Nvidia can do +30% with each gen (every ~2 years). There's no reason why this wouldn't be true with Ampere.


Turing was nowhere near 30% perf/SM over Pascal. More like 10%. Any further gains came from more SMs and higher clocks.



notb said:


> But based on what we've seen with Zen and Navi, would you say halving power draw is believable? Based on new node alone.
> Remember this is 14nm -> 7nm EUV.
> So it's analogous to what AMD has already shown with Zen2/Navi (DUV) + what they promise for this year.


That sounds rather unlikely to me, though I'm no expert by any stretch of the imagination. The jump from 28nm to 16nm did not halve power for Nvidia, so going from 12nm to 7nm EUV doesn't sound likely to do so either. Beyond that they're moving between foundries (at least for some GPUs), so comparisons could be difficult. 

It's not analogous to AMD's move from Vega on GloFo 12nm to Navi on TSMC 7nm either, as that is a completely new architecture with very significant efficiency improvements. You'd be better off looking at the Radeon Vega 64 vs. the Radeon VII, as those are very similar in design but on a new node with slightly bumped clocks, and that improved perf/W by < 30%.



Xaled said:


> View attachment 141147View attachment 141149
> 
> Now, double standards or not?


Not really, no. The first is reporting on IPC, which is (at least supposed to be) an average number based on a number of tests. An "up to" number in this case could thus be 18%, 30% or 45% - it's impossible to know, as we only know the average. Look at SPEC testing, for example - gen-to-gen IPC (clock equalized) testing in that normally results in a wide array of performance differences. The other reports on one specific single data point with little context. Is this a high number, an average, or a low? We have no idea, but given that it's an officially released number from the company itself, it's logical to infer it to be above average to present the product in the best light.


----------



## Xaled (Jan 3, 2020)

Valantar said:


> Turing was nowhere near 30% perf/SM over Pascal. More like 10%. Any further gains came from more SMs and higher clocks.
> 
> 
> That sounds rather unlikely to me, though I'm no expert by any stretch of the imagination. The jump from 28nm to 16nm did not halve power for Nvidia, so going from 12nm to 7nm EUV doesn't sound likely to do so either. Beyond that they're moving between foundries (at least for some GPUs), so comparisons could be difficult.
> ...


Then the title should've included words such as "claims" or "says" (Nvidia says, or Nvidia claims) at worst, just like in the Intel news.
Saying that IT WILL BE 50% FASTER this way is just wrong and biased.


----------



## cucker tarlson (Jan 3, 2020)

It will be very good but too expensive for the first couple of quarters; 2020 may bring some competition and price cuts.


----------



## kings (Jan 3, 2020)

Personally, I don't think Nvidia will be very concerned with power consumption, because it has been seen that AMD, even with a new architecture and 7 nm, is not brilliant in that regard.

I think they are more likely to keep TDPs at or slightly below current levels and pull maximum performance within that envelope. About 250 W will probably remain normal for Nvidia's top-end card.


----------



## Tsukiyomi91 (Jan 3, 2020)

If Ampere really has a 50% gain over Turing in all benchmarks/real-world use while using less power, that's a good thing. The problem here is that many bought the "refreshed" RTX 20-series Super cards and GTX 16-series cards... so those folks might be at a loss-ish? That said, I wonder what the naming convention will be: RTX 22xx, or RTX 3xxx since it's an entirely new silicon? 2020 and 2021 will be interesting years.


----------



## wolf (Jan 3, 2020)

As always, take super-early info with an enormous pinch of salt. 50% is a silly large number, possibly pulled from thin air to keep money in wallets instead of being spent on a competing product today, or it might represent an outside shot at a best-case scenario like RTX, or a new gen-specific feature like VRS was for Turing over Pascal.

What is almost certain is that the cards as a product stack should reasonably outperform their Turing counterparts; they should also offer a reasonable performance-per-watt improvement and a reasonable hardware ray tracing improvement. Prices... well, who knows; if AMD can't keep up with their upper-tier products, expect much of the same.

I won't count on anything solid whatsoever until W1zzard (and other trusted sites) publish a review of an actual product. Hopefully 2020 brings a compelling upgrade for GTX 1080 (which I own) / Vega 56/64 owners that isn't the halo product and has a competitive price-to-perf ratio.


----------



## cucker tarlson (Jan 3, 2020)

I smell A106 dies at $700
Mark my words


----------



## notb (Jan 3, 2020)

Valantar said:


> Turing was nowhere near 30% perf/SM over Pascal. More like 10%. Any further gains came from more SMs and higher clocks.


So it was or wasn't? Because I'm not sure what you mean.

I don't care about SMs, clocks and all that internal stuff (much like I don't care about IPC in CPUs). It's not what I'm paying for as a gamer.

The 1660 Ti came out roughly 2.5 years after the 1060.
It's slightly more expensive, with the same power draw, similar form factor and features.
The 1660 Ti is 30-40% faster.


----------



## cucker tarlson (Jan 3, 2020)

notb said:


> So it was or wasn't? Because I'm not sure what you mean.
> 
> I don't care about SMs, clocks and all that internal stuff (much like I don't care about IPC in CPUs). It's not what I'm paying for as a gamer.
> 
> ...


Valantar can't do simple maths.
The 2070 Super matches the 1080 Ti with 2560 CUDA cores vs 3584.
Clocks go slightly in favor of the 2070S by 100 MHz (5-6%), bandwidth in favor of the 1080 Ti by 40 GB/s (8%).
That's around 1.4x performance per CUDA core on average, not 1.3x, not 1.1x.
In some cases the 2070S ends up 15% faster, in some a few percent slower. The right way to estimate it would be 1.25x-1.5x depending on the game, but certainly at least 1.3x on average.
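The per-core figure in that post follows from the core counts alone (a quick sketch; the "2070 Super matches the 1080 Ti" equal-performance premise is the poster's):

```python
# Per-CUDA-core throughput ratio implied by "2070 Super matches 1080 Ti".
cores_1080ti = 3584  # GTX 1080 Ti (Pascal) CUDA core count
cores_2070s = 2560   # RTX 2070 Super (Turing) CUDA core count

# At equal overall performance, per-core throughput scales inversely with
# core count (ignoring the small clock and bandwidth offsets noted above).
per_core_gain = cores_1080ti / cores_2070s
print(f"Turing per-core uplift: ~{per_core_gain:.2f}x")
```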


----------



## HTC (Jan 3, 2020)

notb said:


> 1660Ti came out roughly 2.5 years after 1060.
> It's slightly more expensive, *the same power draw*, similar form factor and features.
> *1660Ti is 30-40% faster*.



So 30 - 40% faster @ the same power draw for the 1660Ti VS 1060 but now they say you'll get even more performance uplift @ HALF the power draw for Ampere VS Turing?

See the problem?


----------



## cucker tarlson (Jan 3, 2020)

HTC said:


> So 30 - 40% faster @ the same power draw for the 1660Ti VS 1060 but now they say you'll get even more performance uplift @ HALF the power draw for Ampere VS Turing?
> 
> See the problem?


Except they're going down a node, to an already-tweaked version, with a new uarch on top.
We haven't had that recently.
Maxwell was just uarch, Pascal was node, Turing was uarch on the same node but tweaked, with die sizes so big the power went to 250 W on the 104 die.
Ampere will be a new uarch on a small, efficient node.

I'm guessing $650-700 for the A106 die, and I'm not kidding.
My gut tells me my 2070 Super will serve me for at least 6-10 months after this thing launches.


----------



## notb (Jan 3, 2020)

HTC said:


> So 30 - 40% faster @ the same power draw for the 1660Ti VS 1060 but now they say you'll get even more performance uplift @ HALF the power draw for Ampere VS Turing?
> 
> See the problem?


Power consumption: new node. 
Performance uplift: some from traditional improvements, (maybe) some from RTX polishing.

+30% performance and -40% power draw is the minimum we should expect.
But I won't be shocked if they managed +50%/-50%. They've been developing Ampere for a very long time.


----------



## HTC (Jan 3, 2020)

cucker tarlson said:


> except they're going down a node,and already to a tweaked version,with new uarch on top.
> we never had this recently.
> maxwell was just uarch,pascal was node,turing was uarch on same node but tweaked,with die sizes so big the power went to 250W on 104 die.
> ampere will be new uarch on a small,efficient node.
> ...





notb said:


> Power consumption: new node.
> Performance uplift: some from traditional improvements, (maybe) some from RTX polishing.
> 
> +30% performance and -40% power draw is the minimum we should expect.
> But I won't be shocked if they managed +50%/-50%. They've been developing Ampere for a very long time.



It's possible, but I still think it's very VERY unlikely.

I suppose I'll have to eat my words if I'm wrong: we shall see ...


----------



## cucker tarlson (Jan 3, 2020)

HTC said:


> It's possible but i still think it's very VERY unlikely.
> 
> I suppose i'll have to eat my words if i'm wrong: we shall see ...


It is a rumor after all.
But given it's a new uarch on a new, tweaked node, I don't think 50/50 is impossible. I think it's just the best-case scenario; on average it'll be slightly lower.

Sometimes I think people don't even read the article; they've got their comments pre-written, saved and ready to roll depending on what brand we're discussing in the news.



> If utilizing the density alone, NVIDIA can extract at least 50% extra performance that is due to the use of a smaller node. However, performance should increase even further because Ampere will bring new architecture as well. Combining a new manufacturing node and new microarchitecture, Ampere will reduce power consumption in half, making for a very efficient GPU solution.



I don't know if you get what the 50/50 numbers mean either.
It means the new 3070 (A106 die, if the trend continues) matches the 2080 Ti at ~150 W.


----------



## Vya Domus (Jan 3, 2020)

For one thing, I cannot see how the SM could be improved to get 50% more raster performance out of it, or made more power efficient in any significant way; they've already pulled pretty much every trick in the book with Turing: huge caches, more ILP, etc. It's just not within the realm of possibility.

With the newer node they'll either get a higher transistor count at the same speeds, or higher clocks while keeping more or less the same core counts to stay within the same power. Suffice to say they'll go with the former, so 50% less power from the node is out of the question. Nothing can possibly explain 50% more performance for 50% less power, or anywhere close to that; it simply isn't doable. There are two possibilities here: the rumors are wrong, or this comes with a million caveats involving RTX, DLSS, variable-rate shading and whatever else they come up with to mitigate the cost of all the extra silicon and justify increased prices.


----------



## 64K (Jan 3, 2020)

I'm a gamer. I buy GPUs to play games. I don't care about synthetics; I care about real-world performance in gaming. Going by reviews here, I find that the original RTX 2070 was on average 39% faster than its Pascal counterpart, the GTX 1070. The test bed of games is mostly non-ray-tracing titles, and even those with some element of ray tracing let you turn it off.

The original RTX 2080 was on average 37% faster than its Pascal counterpart, the GTX 1080.

The RTX 2080 Ti was on average 39% faster than its Pascal counterpart, the GTX 1080 Ti.

Granted, the Turings use around 50 more watts than the Pascals.

Now, if Nvidia could make those gains going from the 16 nm process to the 12 nm process, then why is a 50% performance increase at the same wattage as Turing not possible going from 12 nm to 7 nm?

Remember that I'm not comparing the watts used by Ampere to Pascal; I'm comparing the watts used by Ampere to Turing.

I think a 50% increase in performance by Ampere over Turing at the same wattage is very possible.



Spoiler: Relative Performance in Games


----------



## cucker tarlson (Jan 3, 2020)

Vya Domus said:


> For one thing I cannot see how the SM could be improved to get 50% more raster performance out of it or make it more power efficient in any significant way, they've already pulled pretty much every trick in the book with Turing, huge caches, more ILP, etc. It's just not within the realm of possibility.
> 
> With the newer node they'll either get a higher transistor count at the same speeds or get higher clocks but they'll have to maintain more or less the same core counts to stay within the same power. Suffice to say they'll go with the former, so 50% less power from the node is out of the question. Nothing can possibly explain 50% more performance for 50% less power or anywhere close to that, it simply isn't doable. There are two possibilities here, rumors are wrong, or this comes with a million caveats involving RTX, DLSS, variable shading and whatever else they come up with to mitigate the cost of all the extra silicon and justify increased prices.


For Ampere they've had good foundations to build on, a narrow focus, plenty of time and a huge R&D budget.


----------



## HwGeek (Jan 3, 2020)

64K said:


> I'm a gamer. I buy GPUs to play games. I don't care about Synthetics. I care about real-world performance in gaming and going from reviews here I find that the original RTX 2070 on average was 39% faster than it's Pascal counterpart GTX 1070. The test bed of games are mostly not raytracing games and even if they had some element of raytracing you could just turn it off.
> 
> The original RTX 2080 was on average 37% faster than it's Pascal counterpart GTX 1080.
> 
> ...


You got those uplifts because NV tricked us all and changed the names of the cards; in real life you should compare the 1070 vs 2060, 1080 vs 2070, 1080 Ti vs 2080.
The 2080 Ti isn't the 1080 Ti replacement, it's the Titan; that's why the prices shifted.


----------



## DeeJay1001 (Jan 3, 2020)

TheDeeGee said:


> 50% Speed, Half the Power, double the cost.
> 
> That's Nvidia alright!



More like:
50% more speed (in very specific use cases)
Half the power (in extremely limited lab test scenarios)
Double the cost, ALL THE TIME


----------



## 64K (Jan 3, 2020)

HwGeek said:


> You got those uplifts because NV tricked us all and changed the names of the cards; in real life you should compare the 1070 vs 2060, 1080 vs 2070, 1080 Ti vs 2080.
> The 2080 Ti isn't the 1080 Ti replacement, it's the Titan; that's why the prices shifted.



I'm not going to derail this thread with a lengthy discussion on Nvidia naming conventions, memory bus width, etc., but if the RTX 2080 Ti is the replacement for the Titan Xp, then what is the Titan RTX? Don't just look at pricing; that's not how to compare counterparts with Nvidia.

Also, my best guess is that we will see a 50% increase in performance with Ampere over Turing at the same watts across the board, but that's just speculation on my part until the cards actually wind up in W1zzard's hands to bench, and then we will know for sure.


----------



## lexluthermiester (Jan 3, 2020)

R0H1T said:


> Yes & pigs will also fly when that happens


As a general rule, when NVidia makes a claim, they come very close to it if not actually achieving it.


TheDeeGee said:


> double the cost.


That's not going to happen. The price is very likely to come down.


64K said:


> Don't just look at pricing. That's not how to compare a counterpart with Nvidia.


Exactly right. Prices fluctuate and are not a reliable indicator of generational steppings, but replacement model naming conventions generally are.


----------



## Valantar (Jan 3, 2020)

Xaled said:


> Then then the title should've included words as : Claim or say ( Nvidia says, or claims) at worst, just like did in the Intel news.
> Saying that IT WILL BE %50 FASTER this way is just wrong and biased.


I'd say the headline ought to be along the lines of "Nvidia's Next-Generation Ampere GPUs reportedly 50% faster than Turing at Half the Power". I'll agree that a headline with zero reservations or considerations taken as to this being second-hand information with dubious sourcing making bombastic claims at a very early point is poor journalism, but I don't see it as a double standard or bias - inaccuracies like this are quite par for the course across the board for TPU, sadly.


Tsukiyomi91 said:


> if Ampere really has a 50% gain over Turing in all benchmarks/real world use while using less power is a good thing. Problem here is many bought the "refreshed" RTX20 Series Super cards & GTX16 Series cards... so those folks might be at a loss-ish?


Logic like that is _always_ wrong. If you buy a card today and a better card comes out for the same price tomorrow, you have lost nothing whatsoever. Sure, you could have gotten a _better_ deal if you waited, but that is _always_ the case. There will always be better hardware in the future for a better price, so you just have to pull the trigger at some point, and your purchase will always be made to look less than ideal at some point. That doesn't mean it was a bad purchase, nor does it change the performance/dollar that you tacitly agreed to pay when you made the purchase decision.


notb said:


> So it was or wasn't? Because I'm not sure what you mean.
> 
> I don't care about SMs, clocks and all that internal stuff (much like I don't care about IPC in CPUs). It's not what I'm paying for as a gamer.
> 
> ...


The problem is that you weren't talking about absolute performance in the post I responded to, you were talking about architectural performance improvements specifically. While there are (very broadly) two ways for these to work (increased clock speeds not due to node changes, and "IPC" for lack of a better term), most clock speed improvements come from node changes, and most arch improvements are normally down to improving IPC. There are exceptions to this, such as Maxwell clocking significantly higher than previous architectures, but for the most part this holds true. If you're talking about perf/$ on an absolute level, you are right, but that's another matter entirely. So, if you don't care about _how_ one arrives at a given performance level, maybe don't get into discussions about it?



cucker tarlson said:


> Valantar can't do simple maths
> 2070 super matches 1080Ti with 2560 cuda vs 3584 cuda
> clocks go slightly in favor of 2070S by 100mhz (5-6%), bandwidth in favor of 1080Ti by 40GB/s (8%)
> that's around 1.4x performance per CUDA on average, not 1.3x, not 1.1x
> in some cases the 2070S ends up 15% faster, in some a few percent slower. The right way to estimate it would be 1.25x-1.5x depending on the game, but certainly at least 1.3x on average


2070S is a tuned and binned half-gen refresh SKU based on mature silicon, not really directly comparable to the 1st-gen 1080Ti. That doesn't mean it doesn't have good absolute performance, it just isn't a 1:1 comparison. The fact that the 2070S performs within a few percent of the 2080 makes this pretty clear. So, if you want a like-for-like comparison, the 2080 is the correct card. And then you suddenly have a rather different equation:
2080 beats the 1080Ti by about 8% with 2944 CUDA cores vs. 3584.
Clocks go slightly in favor of the 2080 by 100MHz (5-6%), and memory bandwidth is essentially a tie.
In other words the clock speed increase and performance increase pretty much cancel each other out, leaving us with ~22% more perf/CU. That of course ignores driver improvements and other uncontrollable variables, to which some improvements should reasonably be attributed. My 10% number might have been a bit low, but your 40% number is silly high.
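This kind of per-core normalization can be written out as a few lines of napkin math. This is only a sketch of the post's reasoning, using the figures quoted in the thread rather than benchmark data; treating the ~8% performance lead and ~5.5% clock lead as cancelling exactly reproduces the ~22% figure, while compounding them lands a couple of points higher:

```python
# Napkin math for the per-core comparison above. The inputs are the
# figures quoted in the post, not measured benchmark results.
cores_1080ti, cores_2080 = 3584, 2944
perf_ratio = 1.08    # RTX 2080 ~8% faster than GTX 1080 Ti overall
clock_ratio = 1.055  # RTX 2080 clocks ~5.5% higher

# Normalize the overall performance lead by clock speed and core count
# to estimate the per-core, per-clock ("perf/CU") uplift of Turing.
per_core_uplift = (perf_ratio / clock_ratio) * (cores_1080ti / cores_2944 if False else cores_2080) - 1
per_core_uplift = (perf_ratio / clock_ratio) * (cores_1080ti / cores_2080) - 1
print(f"per-core uplift: {per_core_uplift:.1%}")
```

Either way, the estimate lands in the low-to-mid twenties of percent, far from the 40% claimed upthread.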


----------



## cucker tarlson (Jan 3, 2020)

Valantar said:


> 2070S is a tuned and binned half-gen refresh SKU


how is the 2070S more tuned and binned than the 1080ti really? that's a made-up point.
pointless discussion.

bandwidth on the 1080ti is 8% higher than the 2080; like the 2070S, they share 14 Gbps 256-bit GDDR6.


----------



## Valantar (Jan 3, 2020)

lexluthermiester said:


> As a general rule, when NVidia makes a claim, they come very close to it if not actually achieving it.


...but this isn't Nvidia making a claim, it's some analyst from an investment consulting agency.



cucker tarlson said:


> how is 2070S more tuned and binned than 1080ti really ?


By being a mid-cycle release with a differently binned die? That's kind of how this works.


cucker tarlson said:


> if you don't want to compare 2070s,comparre 2080.you'll end up with the same numbers.


...that's what I did?


cucker tarlson said:


> bandwidth on 1080ti is 8% higher than 2080,like 2070S,they share 14bgps 256-bit ddr6.


Sorry, got that mixed up what with all the Turing SKUs out there. Doesn't make much of a difference though: 22% compounded with the 8% bandwidth difference is ~23.8%. Nonetheless nowhere near 40.


----------



## xkm1948 (Jan 3, 2020)

Nvidia is not Intel. This is nice


----------



## cucker tarlson (Jan 3, 2020)

Valantar said:


> ...but this isn't Nvidia making a claim, it's some analyst from an investment consulting agency.
> 
> 
> By being a mid-cycle release with a differently binned die? That's kind of how this works.
> ...


1080ti is not first pascal either,it is mid-cycle too.

no, you didn't get the numbers correctly. if you did you'd end up with 1.32x not 1.22x. that's 0.08 (8%) from the 2070S at 40%. Still, nowhere near your 10%, and more than 30%.


----------



## the54thvoid (Jan 3, 2020)

Well, without using any maths or forum pseudo-engineering, it's quite probable, at such a die shrink with a better arch, the chips will be a generational leap, like we used to see. 50%, perhaps in best of circumstances. But despite the naysayers, Nvidia will hit it out the park, again. And yeah, it'll cost a fortune.


----------



## phanbuey (Jan 3, 2020)

ZoneDymo said:


> ^ this, I was just going to say the same thing.
> 
> Also, if any of this is true, Im not expecting prices to return to normal any time soon....in fact it will probably just get worse.



I think the prices will depend more on AMD than anything -- it's actually looking like a GPU war is on the horizon (about time).


----------



## SIGSEGV (Jan 3, 2020)

NVIDIA's Next-Generation Ampere GPUs to be 50% Faster than Turing at Half the Power and Triple the Price


----------



## SaLaDiN666 (Jan 3, 2020)

God, I love the internet, a guy writes down complete FUD about 10%, compares 1080ti with 2080 instead of 2080Ti and then he even feels entitled to pester others how their more realistic estimations are silly. What a time to be alive.


----------



## tomc100 (Jan 3, 2020)

I'll believe it when I see it.


----------



## TheoneandonlyMrK (Jan 3, 2020)

50% better RTX they mean


----------



## lexluthermiester (Jan 3, 2020)

theoneandonlymrk said:


> 50% better RTX they mean


Very likely.


----------



## xkm1948 (Jan 3, 2020)

Fluffmeister said:


> Talk of high-end Navi are rumours not from AMD so don't blame them!
> 
> Nvidia.... It's all BS Fu Nvidia!
> 
> Man this forum.



Hey hey, logic and reasoning are offensive to my gut feelings and love for certain brands, take it back!  /s

On a serious note, tech progression doesn't give a flying fk about fans. Nvidia dominates GPU-accelerated computing and AI in industry and academia. Nvidia has good reason to keep pushing large gen-to-gen performance boosts. This is the best way to entrench themselves in the lucrative market.


----------



## lexluthermiester (Jan 3, 2020)

xkm1948 said:


> This is the best way to entrench themselves in the lucrative market.


They're already entrenched, they want to maintain their market lead. And they have no choice, AMD is hot on their heels and Intel will soon enter the discrete GPU market as well. They're the current GPU king of the hill and they have to fight to maintain that status. This level of competition is great for the market!


----------



## EarthDog (Jan 3, 2020)

HwGeek said:


> I bet that the "50% uplift" is in RTX ON performance while non RTX performance will remain the same in each segment.


I bet you will be eating crow with this statement later in the year... 

If you, or anyone who agreed with/thanked his post, didn't read: this is simply talking about the process and die shrink and what can theoretically come along with it. To make a statement like this (isolating where the improvement comes from and assuming it is solely marketing/RTX) is premature at best, ignorant at worst.


----------



## ZoneDymo (Jan 3, 2020)

EarthDog said:


> I bet you will be eating crow with this statement later in the year...
> 
> If you or anyone that agrees with this didnt read, this is simply talking about the process and die shrink and what can theoretically come along with it. To make a statement like this (isolating where the improvement comes from and thinking this is solely marketing/rtx) is premature at best, ignorant at worst.



that ORRRR we have been part of this business for long enough to expect some bs from these sorts of articles.


----------



## Turmania (Jan 3, 2020)

Say what you want but Nvidia is at least 2 generations ahead of AMD Radeon and the gap will increase most likely, not that I like it but this is the situation.


----------



## EarthDog (Jan 3, 2020)

ZoneDymo said:


> that ORRRR we have been part of this business for long enough to expect some bs from these sort of articles.


Nobody in their right mind is expecting a 50% uptick in actual performance. However, to pin it down to RTX performance only is incredibly premature (at best). With the information given, there is no way anyone can tell. And I assure you there will be an increase in both rtx and non rtx performance.

Just scrolled through the thread.. same lulz..


----------



## Amite (Jan 3, 2020)

Uhh great but at 2080 ti prices ? Would be  irrelevant to me.


----------



## lexluthermiester (Jan 3, 2020)

ZoneDymo said:


> that ORRRR we have been part of this business for long enough to expect some bs from these sort of articles.


Perhaps. However NVidia's track record says otherwise. They delivered RTX as claimed, more or less on time. This information may not have come from an NVidia statement, but it's from a source that is seemingly legit and NVidia hasn't debunked it, which they have a habit of doing.


----------



## RH92 (Jan 3, 2020)

Amite said:


> Uhh great but at 2080 ti prices ? Would be  irrelevant to me.



Last time I checked, Nvidia had an entire portfolio of segments going from 200 bucks up to 1200 bucks, and the same applies to Ampere (or whatever Nvidia calls the new gen); nobody will force you to pay 2080Ti money! If Ampere ends up being anywhere close to those numbers you will have 2080Ti performance for mid-tier money, and this is why innovation is great!


----------



## xkm1948 (Jan 3, 2020)

lexluthermiester said:


> They're already entrenched, they want to maintain their market lead. And they have no choice, AMD is hot on their heels and Intel will soon enter the discrete GPU market as well. They're the current GPU king of the hill and they have to fight to maintain that status. This level of competition is great for the market!



On the CPU front AMD is gaining some mindshare and marketshare. On the GPU computing front RTG is just horrible.


----------



## Tsukiyomi91 (Jan 3, 2020)

@Valantar regardless, I still look forward to seeing how it performs.


----------



## TheoneandonlyMrK (Jan 3, 2020)

EarthDog said:


> Nobody in their right mind is expecting a 50% uptick in actual performance. However, to pin it down to RTX performance only is incredibly premature (at best). With the information given, there is no way anyone can tell. And I assure you there will be an increase in both rtx and non rtx performance.
> 
> Just scrolled through the thread.. same lulz..


It's just a rumour bro, it pins nothing down, just like Navi's x2 rumour: hyperbolic nonsense that's likely to be true only in 2 out of 10 applications, not all, as ever.


----------



## EarthDog (Jan 3, 2020)

theoneandonlymrk said:


> It's just a rumour bro, it pins nothing down, just like Navi's x2 rumour: hyperbolic nonsense that's likely to be true only in 2 out of 10 applications, not all, as ever.


I know that. I was replying to, and quoted, someone (and the 'supporters') who thinks this is rtx only...








> 50% better RTX they mean




...stick with me...


----------



## RH92 (Jan 3, 2020)

theoneandonlymrk said:


> It's just a rumour bro, it pins nothing down, just like Navi's x2 rumour: hyperbolic nonsense that's likely to be true only in 2 out of 10 applications, not all, as ever.



True, it is only a rumor; this being said, if there is anyone in the GPU industry that can pull off those numbers it's precisely Nvidia, so the rumor becomes much more credible in this case!

No matter how you look at it, Nvidia on 12 nm (which is closer to 16 nm) is dominating the market on both performance and efficiency, so you can only expect mind-blowing efficiency now that they move to 7 nm EUV, which is a huge step from 12 nm. From there everything becomes possible; it's just a matter of how you adjust your efficiency/perf ratio.


----------



## efikkan (Jan 3, 2020)

I see that all the usual suspects who give AMD the benefit of the doubt are unwilling to extend Nvidia the same courtesy.

I assume this performance claim is nothing but a bold prediction, based on thin air.
Speculation is fine, but only when clearly labeled as such. I also expect Nvidia's next gen to be a "big step up": a good node shrink, a refined node and a new architecture certainly have potential, but remember they can still fail.



renz496 said:


> i think we heard this every time new GPU is about to come out. the competitor starts spreading rumor to stop them from buying competitor product. sometimes i think company like AMD and nvidia does not really need to do this. because their fanboys will do the job for them for free. still remember when some people said you should hold back from getting GTX1080 because Vega will coming out in october 2016. they said there is shortage anyway with 1080 because of demand so you might just as well wait until october 2016.


Companies certainly seed rumors from time to time, but most of the time it's just certain webpages and youtubers trying to drive traffic.

But unfortunately sometimes the fanboys take it too far and end up hurting their "team". Back when GTX 1080 Ti launched, some claimed it was to spoil Vega's triumph, and when Vega finally shipped, many refused to accept that it was the biggest Vega chip. The same thing happened with Navi; fueling the rumor of Navi12 being "big Navi" to crush RTX 2080 Ti…


----------



## trparky (Jan 3, 2020)

Will it be half the price?

Yeah, I know... good luck with that.


----------



## laszlo (Jan 3, 2020)

i believe it will be "up to" 50%, which is reasonable...


----------



## Valantar (Jan 3, 2020)

EarthDog said:


> I know that. I was replying to, and quoted, someone (and the 'supporters') who thinks this is rtx only...


I don't think anyone is saying "it might be 50% faster in RTX, but everything else will stay the same", which is what you're saying here. All that's being said is that a 50% generation-on-generation absolute performance increase (for all workloads or for conventional rasterization workloads) is quite optimistic, and combining that with a 50% absolute power draw drop is borderline crazy. After all, that would make for a tripling of performance per watt (150% perf at 50% power = 3x perf at 100% power, a 200% increase). When has that _ever_ happened between two generations? If they could pull that off Nvidia would own the GPU market entirely for the foreseeable future, but it sounds very unrealistic. I at least do not doubt that there will be significant improvements in absolute performance when Nvidia moves to 7nm - anything else would be a scandal - but 3x perf/W? No. I'll gladly admit how wrong I was if that comes to pass, but for now, I'm putting this in the "make-believe" column.
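The efficiency arithmetic above can be checked in a couple of lines. This is just a sketch: `perf_per_watt_ratio` is an illustrative helper, and the inputs are the rumor's claims, not measurements:

```python
def perf_per_watt_ratio(perf_gain: float, power_fraction: float) -> float:
    """Relative perf/W of a new part versus its predecessor."""
    return (1.0 + perf_gain) / power_fraction

# Rumor: +50% performance at half the power.
ratio = perf_per_watt_ratio(0.50, 0.50)
print(ratio)  # 3.0 -> a tripling of perf/W, i.e. a 200% increase
```

For comparison, even well-regarded node-plus-architecture jumps have historically delivered well under a 3x perf/W improvement in one generation.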


----------



## EarthDog (Jan 3, 2020)

Valantar said:


> I don't think anyone is saying "it might be 50% faster in RTX, but everything else will stay the same", which is what you're saying here.


lol, I quoted the person who said exactly that. So did you earlier, in fact. Perhaps they meant something else? No idea... but that is what was posted. I'll quote it again...



HwGeek said:


> I bet that the "50% uplift" is in RTX ON performance while non RTX performance will remain the same in each segment.




I don't know what that means outside of RTX getting the improvement while everything else stays the same...

Nobody should hold on to the 50% value mentioned. It's not based on anything outside of the die shrink, really. I'd gather we'll see notable (25%+) increases in both RTX and non-RTX gaming. Perf/W will also dramatically increase, if that is a metric one cares about.


----------



## mike dar (Jan 3, 2020)

Is it going to have 6,000... 7,000... 8,000 cores and all the other bells and whistles to match? A smaller die isn't everything.


----------



## Valantar (Jan 3, 2020)

EarthDog said:


> lol, I quoted the person who said exactly that. So did you earlier, in fact. Perhaps they meant something else? No idea... but that is what was posted. I'll quote it again...
> 
> 
> 
> ...


I obviously missed that one, definitely don't agree there (though I wouldn't put it past Nvidia to accompany the new gen with another per-tier price bump either). Completely agree that even increases in both (or ideally a bit more in RT to balance things out) ought to be their target.


----------



## BArms (Jan 3, 2020)

Vayra86 said:


> 4K will always be a tough one. Especially when RT gets bigger. I'd accept 1080p on that panel for the next half decade if I were you



3070 should be a decent entry level 4K card, think GTA V @ 60fps @ 4k at fairly high settings minus high end AA, which is less necessary at 4K anyway.


----------



## TheGuruStud (Jan 3, 2020)

Recus said:


> What chart?
> 
> 
> 
> Maxwell 28nm to Pascal 16nm. Pascal got performance and efficiency.








The game performance one was bullshit, too, but not this egregious.

And more BS


----------



## Razrback16 (Jan 3, 2020)

Naito said:


> Regardless of where the 50% uplift applies, I'm just glad I didn't buy into the RTX 2000 series



Absolutely, right there with ya. It was terribly priced for meager gains. Very glad I've just held steady to this point. And if the Ampere news turns out to be crap, well the Turing cards will still be getting cheaper and cheaper on the 2nd hand market.


----------



## TesterAnon (Jan 3, 2020)

As long its not 200% the cost.


----------



## Casecutter (Jan 3, 2020)

AleksandarK said:


> Built using TSMC's 7 nm manufacturing node,



But, but... btarunr just indicated... "NVIDIA, too, is expected to build its next-gen 7 nm EUV GPUs on Samsung instead of TSMC". So which is it? By this we are being told Ampere greatness is perhaps 6 months out, although there's no clue as to who's going to be spinning these wafers for Nvidia... but they'll be great.


----------



## trog100 (Jan 3, 2020)

i think i am about to enter what i would call my consolidation phase.. this means i will stop buying new stuff just because i can and simply use what i have..

my last consolidation phase lasted well over six years.. he he

trog


----------



## Xaled (Jan 3, 2020)

Valantar said:


> I'd say the headline ought to be along the lines of "Nvidia's Next-Generation Ampere GPUs reportedly 50% faster than Turing at Half the Power". I'll agree that a headline with zero reservations or considerations taken as to this being second-hand information with dubious sourcing making bombastic claims at a very early point is poor journalism, but I don't see it as a double standard or bias - inaccuracies like this are quite par for the course across the board for TPU, sadly.


I would've called it poor journalism if it were consistent across all news, unbiased. But this one was made intentionally. I strongly suspect they got paid by Nvidia just to use such a title; otherwise they would've used the same logic that they used in the AMD and Intel news.


----------



## cucker tarlson (Jan 3, 2020)

Xaled said:


> I would've called poor journalism if it was consistent with all news, unbiased. But this one has been made intentionally. I highly believe that they got paid from Nvidia just to use such title. Or they *would've used the same logic that they used in AMDs and Intel's news*.


they do



Casecutter said:


> As by this we are being told Ampere greatness is like perhaps 6 months from being, although there's no clue as to who's going to be spinning these wafer's for Nvidia... but they'll be great.



old architecture will be replaced by a new one called something terrific.

but seriously,it's samsung.


----------



## lemonadesoda (Jan 3, 2020)

I'm more interested in the half-power thing. I don't want a nuclear oven in my house. Neither the heat, nor the noise in keeping it cool. So, I'm in!


----------



## TheGuruStud (Jan 3, 2020)

LOL, a couple of you guys just got roasted by a youtuber.


----------



## HTC (Jan 3, 2020)

TheGuruStud said:


> LOL, a couple of you guys just got roasted by a youtuber.



Elaborate, please.


----------



## TheGuruStud (Jan 3, 2020)

HTC said:


> Elaborate, please.



Crying about AMD's prices, b/c they need to be cheaper than Nvidia. Aka the double standard, since no one says that about Nvidia.

Shit, we all know they do it even when cheaper.


----------



## HTC (Jan 3, 2020)

TheGuruStud said:


> Crying about AMD's prices, b/c they need to be cheaper than Nvidia. Aka the double standard, since no one says that about Nvidia.
> 
> Shit, we all know they do it even when cheaper.



I was hoping for the video link of said youtuber roasting "a couple of you guys" but that works too ...


----------



## TheGuruStud (Jan 3, 2020)

HTC said:


> I was hoping for the video link of said youtuber roasting "a couple of you guys" but that works too ...


----------



## HTC (Jan 3, 2020)

TheGuruStud said:


>


Funny: i'm around 6:20 into that video right now, lol

EDIT

Didn't see much roasting ... some, but not much.


----------



## Xzibit (Jan 3, 2020)

Who does he use as a double standard in pricing *@ 9:30 from TPU*


----------



## HTC (Jan 3, 2020)

Xzibit said:


> Who does he use as a double standard in pricing *@ 9:30 from TPU*


It's quite easy to find: just do an advanced search with some of the words and add date parameters.

I'd rather not show who it is directly because it may lead to other types of issues: i'll leave it @ that.


----------



## Nero1024 (Jan 4, 2020)

I will be in the minority, but I am more curious about the new design of the cards, because I am pretty sure the performance will deliver


----------



## jabbadap (Jan 4, 2020)

TheGuruStud said:


> View attachment 141185
> 
> The game performance one was bullshit, too, but not this egregious.
> 
> And more BS View attachment 141186



Well best case scenarios, but those are actually correct slides to address benefits from shader and memory subsystem evolution between the two generations... 

But yeah probably someone might have seen similar slide(s) from Ampere to Turing too and does not have a whole picture.


----------



## candle_86 (Jan 4, 2020)

R0H1T said:


> Yes & pigs will also fly when that happens
> 
> Reminds me of Intel's PR these days with multiple *****



Nvidia did it before, they only got lazy lately

Geforce 6800 Ultra - four times as fast as Geforce FX 5950 Ultra
Geforce 7800GTX - Twice as fast as 6800 Ultra
Geforce 8800GTX - Twice as fast as 7900GTX
GTX 280 - Twice as fast as 8800 Ultra
GTX 480 - Twice as fast as GTX 280
GTX 680 - Twice as fast as GTX 580
And then nvidia got lazy


----------



## my_name_is_earl (Jan 4, 2020)

Bs until proven.


----------



## nguyen (Jan 4, 2020)

It's soo easy to understand when looking at 1080 Ti vs 980 Ti






(Since Nvidia uses dual fans for their new GPUs, I picked a dual-fan 1080 Ti model with the same power consumption as the 1080 Ti FE)

basically if you lower the power limit of the 1080 Ti to 50%, it still retains 70% of its performance (I have tested this), which is a good 35% faster than the 980 Ti. Obviously they could be testing a prototype 3080 Ti with a lowered power limit, but when cranked to full power I suspect it would easily outperform the 2080 Ti by 100%, same as Maxwell --> Pascal.
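As a one-line sanity check on that undervolting claim (the poster's figures, not independent measurements), retaining 70% of performance at a 50% power limit works out to a 1.4x perf/W gain over stock:

```python
# Poster's claim: a 1080 Ti at a 50% power limit keeps ~70% of stock perf.
perf_retained = 0.70   # fraction of stock performance retained
power_fraction = 0.50  # fraction of stock power drawn

efficiency_gain = perf_retained / power_fraction
print(efficiency_gain)  # 1.4 -> 1.4x perf/W versus stock settings
```

This illustrates why power-limited comparisons flatter efficiency numbers: GPUs run well past their efficiency sweet spot at stock.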


----------



## Xzibit (Jan 4, 2020)

nguyen said:


> *It's soo easy to understand when looking at 1080 Ti vs 980 Ti*



The speculation came from an investment firm to its clients.


----------



## R0H1T (Jan 4, 2020)

candle_86 said:


> Nvidia did it before, they only got lazy lately


300% more efficient outside edge cases like *RT* ~ nope I can understand one or the other but they'll literally have to defy physics to get there.


----------



## Casecutter (Jan 4, 2020)

cucker tarlson said:


> they do
>
> old architecture will be replaced by a new one called something terrific.
>
> but seriously, it's samsung.


Well, what's in a name? But seriously, are you presenting fact?


----------



## dicktracy (Jan 4, 2020)

Finally some GPU news instead of boring CPU ones.


----------



## cucker tarlson (Jan 4, 2020)

Xzibit said:


> Who does he use as a double standard in pricing *@ 9:30 from TPU*


probably himself.
5500xt 8gb costs exactly the same as 1660 super in reality while it's a tier down in performance.






btw I bet that amd shirt is a loan


----------



## dicktracy (Jan 4, 2020)

candle_86 said:


> Nvidia did it before, they only got lazy lately
> 
> Geforce 6800 Ultra - four as fast as Geforce FX 5950 Ultra
> Geforce 7800GTX - Twice as fast as 6800 Ultra
> ...


They were obviously milking, but it's not hard to see that Nvidia will push their GPUs even harder than before since Intel is jumping in and the AI revolution is right around the corner. Everyone knows the GPU is simply the future of next-generation computing. Tomorrow's landscape will revolve heavily around the GPU, and the CPU will only become less and less relevant, especially when we're talking about AI-enhanced software. Intel will most likely put tons of resources into their GPU development and Nvidia would be foolish to hold back even a little. Jensen Huang is too good of a leader to let that happen though.


----------



## HTC (Jan 4, 2020)

As we all know, the 7 nm process introduced something new in the form of hot spots or heat concentration: this can be seen in big dies, such as the Vega VII GPU, as well as in smaller dies, such as Zen 2 chiplets.

How will nVidia tackle this problem?

- lower speeds?
- stronger coolers?
- other?


----------



## cucker tarlson (Jan 4, 2020)

HTC said:


> As we all know, 7 nm process introduced something new in the form of hot spots or heat concentration: this can be seen in big dies, such as Vega VII GPU, as well as in smaller dies, such as Zen 2 chiplet's.
> 
> How will nVidia tackle this problem?
> 
> ...


how about they just say "110 degrees is fine" and ignore it completely?


----------



## HTC (Jan 4, 2020)

cucker tarlson said:


> how about they just say "110 degrees is fine" and igore it completely ?



Maybe ... it's a possibility ...


----------



## cucker tarlson (Jan 4, 2020)

HTC said:


> Maybe ... it's a possibility ...


it's a possibility that results from completely different dies on samsung 7nm euv will vary from amd's tsmc 7nm too.


----------



## HTC (Jan 4, 2020)

cucker tarlson said:


> it's a possibility that results from completely different dies on samsung *7nm euv* will vary from amd's tsmc 7nm too.



nVidia is jumping directly to 7 nm EUV? I thought they were going for "normal" 7 nm.

EDIT

Also: OP states Ampere will use TSMC's 7 nm and not Samsung's.


----------



## cucker tarlson (Jan 4, 2020)

HTC said:


> nVidia is jumping directly to 7 nm EUV? I thought they were going for "normal" 7 nm.
> 
> EDIT
> 
> Also: OP states Ampere will use TSMC's 7 nm and not Samsung's.











Samsung will produce next-gen Nvidia GPUs on 7nm EUV process: "Substantial business agreement has been confirmed at a press conference in Seoul." (hexus.net)






> In the words of the Nvidia Korea chief, shared by The Korea Herald, _"It is meaningful that Samsung Electronics' 7-nanometer process would be used in manufacturing our next-generation GPU,"_


----------



## ypsylon (Jan 4, 2020)

If true then I'm readying my poor wallet already. 

For gaming it means nothing (all that RTX fad), but for rendering engines utilizing RTX, Turing leaves old generations in the dust. I'm struggling to contain my urges and this rumor has set me straight. Must not buy. I'll *not* look at 2080Tis anymore.  

Current Turing architecture on the Pro side has some merits (but at inflated monopoly prices, of course), but overall if somebody buys Turing solely for gaming, it's an utter waste of money.


----------



## R0H1T (Jan 4, 2020)

dicktracy said:


> *Tomorrow's landscape will revolve heavily on the GPU*, and the CPU will only become less and less relevant especially when we're talking about AI-enhanced software.


Except it's really the opposite 
The future is PoP, MCM, EMIB & 3D stacking ~ *








An Interconnected Interview with Intel's Ramune Nagisetty: A Future with Foveros (www.anandtech.com)
				



*




			https://146a55aca6f00848c565-a7635525d40ac1c70300198708936b4e.ssl.cf1.rackcdn.com/images/be20ea9409cc558936fa2623c5222792e8118c69.pdf
		


The future, as Intel has realized, is not with CPU or *GPU only* solutions. They're just following what you could say AMD showed with Zen. And if Nvidia doesn't get their act together they'll be caught napping; Nvidia's lead is certainly not infallible with the kind of hardware AMD and Intel have on the horizon. Ironically their biggest strength in this field is software, i.e. CUDA.


----------



## HTC (Jan 4, 2020)

cucker tarlson said:


> Samsung will produce next-gen Nvidia GPUs on 7nm EUV process
> 
> 
> Substantial business agreement has been confirmed at a press conference in Seoul.
> ...



Then it's not going up one node but two instead and this changes things dramatically on a couple of fronts:

- the density will be higher than "normal" 7 nm, meaning even more heat concentration and/or hot spots, unless Samsung's 7 nm EUV is less dense than TSMC's "normal" 7 nm
- possibly a much higher efficiency jump VS current Turing cards, which could actually enable 50% more performance at 50% less power


----------



## cucker tarlson (Jan 4, 2020)

HTC said:


> Then it's not going up one node but two instead and this changes things dramatically on a couple of fronts:
> 
> - the density will be higher than "normal" 7 nm, meaning even more heat concentration and/or hot spots, unless Samsung's 7 nm EUV is less dense than TSMC's 7 nm EUV
> - possibly a much higher efficiency jump VS current Turing cards, which could actually enable 50% more performance while @ 50% less power


from what I can infer from a quick look at various news pieces, including samsung's forums, 7nm euv will be used "substantially", while tsmc 7nm will probably be used too. Samsung has two 7nm euv iterations: low power, which is in mass production already, and high performance, which is not in mass production yet.

it is confusing, but it seems like a portion of ampere will be made on high-perf 7nm euv from samsung. we'll have to wait and see.



R0H1T said:


> Except it's really the opposite
> The future is PoP, MCM, EMIB & 3D stacking


why can't gpus do the same ?


----------



## Midland Dog (Jan 4, 2020)

rdoa2 is doa


----------



## R0H1T (Jan 4, 2020)

cucker tarlson said:


> why can't gpus do the same ?


Nvidia are trying that, whether they get there is another matter. We haven't seen anything on the scale of Zen 2 or Foveros from Nvidia ever.
Yes, there's always a first time, but as of now AMD & Intel are way ahead in this field. Also, Nvidia still lags massively in the CPU dept, and that isn't changing anytime soon.


----------



## cucker tarlson (Jan 4, 2020)

Midland Dog said:


> rdoa2 is doa


it's gonna be feliz navi-dead for amd when ampere launches  

nah, but seriously, I'm confused by this tsmc 7nm mention.
would not be surprised if the smaller a107/108 dies went to tsmc; they segmented production of big and small pascals between tsmc and glofo too IIRC. I bet rdna2 desktop cards would not like to see 3050 and 3060 production get in their way either.


----------



## Vayra86 (Jan 4, 2020)

cucker tarlson said:


> probably himself.
> 5500xt 8gb costs exactly the same as 1660 super in reality while it's a tier down in performance.
> 
> 
> ...



Didn't this sad case of an eternal student comment on me a few months back? He missed the ball then, as he does now.

Stop giving it airtime... it's really a new level of sad



cucker tarlson said:


> it's gonna be feliz navi-dead for amd when ampere launches
> 
> nah,but seriously,I'm confused by this tsmc 7nm mention.
> would not be surprised if smaller a107/108 dies went for tsmc,they segmented their production for big and small pascals between tsmc and glofo too IIRC.I bet rdna2 desktop cards would not like to see 3050 and 3060 production get in their way too.



Smaller dies may even just remain DUV as is Navi 1st gen.



BArms said:


> 3070 should be a decent entry level 4K card, think GTA V @ 60fps @ 4k at fairly high settings minus high end AA, which is less necessary at 4K anyway.



3070 even?! That will be a repeat of how the 1070 is now not really sufficient for 1440p, then. There are no cards for a specific res. Already we recommend the 2070S (1080 Ti) for smooth high/ultra gaming at that res...

4K will be a struggle for the next decade, make no mistake.


----------



## cucker tarlson (Jan 4, 2020)

Vayra86 said:


> Stop giving it airtime... its really a new level of sad


this is shameless promotion from "gurustud" (  )
a guy youtubing his commentary on TPU posts. how brave of him to avoid getting into a discussion. I bet that would go well for him.


----------



## efikkan (Jan 4, 2020)

Vayra86 said:


> 3070 even?! That will be a repeat of how the 1070 is now not really sufficient for 1440p, then. There are no cards for a specific res. Already we recommend the 2070S (1080 Ti) for smooth high/ultra gaming at that res...
> 
> 4K will be a struggle for the next decade, make no mistake.


Yes, it's important to remember that it's a moving target; new games also get more demanding over time.
Personally, I'll favor 1440p 144 Hz over 4K 60 Hz any day, and 4K 144 Hz is out of reach for a card in the upper mid-range for now.


----------



## 64K (Jan 4, 2020)

The problem with 4K is the expense. When a new high end GPU comes out it does pretty well at 4K but then a couple years later it's not adequate for some of the new games coming out  (sometimes simply due to poor optimization) and you have to upgrade to the new high end GPU. So the 4K proposition is very expensive. It's there for those that want it badly enough but certainly not for mainstream gamers. I don't think Ampere will change that even with a 50% increase in performance. A couple of years after the 3080 Ti comes out you will be looking at a 4080 Ti to keep up.

I'm quite happy with 1440p 60 FPS average but even my 980 Ti is inadequate for this resolution with newer games.


----------



## Valantar (Jan 4, 2020)

HTC said:


> Then it's not going up one node but two instead and this changes things dramatically on a couple of fronts:
> 
> - the density will be higher than "normal" 7 nm meaning even more heat concentration and / or hot spots, unless Samsung's 7 nm EUV is less denser than TSMC's "normal" 7 nm
> - possibly a much higher efficiency jump VS current Turing cards, which could actually enable 50% more performance while @ 50% less power


While I don't disagree with what you're saying entirely, claiming that 7nm EUV is a full node up from 7nm DUV is... dubious. It is definitely an improved node, but not by that much.


----------



## HTC (Jan 4, 2020)

Valantar said:


> While I don't disagree with what you're saying entirely, *claiming that 7nm EUV is a full node up from 7nm DUV is... dubious*. It is definitely an improved node, but not by that much.



You're right.

That said, it still brings substantial efficiency gains (*or* performance gains) on top of the full node.


----------



## mandelore (Jan 4, 2020)

Anyone want to buy two 2080 ti's? Fully water cooled, fully sexual. 

Just asking...  You can never stay ahead of that techno curve baby


----------



## Valantar (Jan 4, 2020)

mandelore said:


> Anyone want to buy two 2080 ti's? Fully water cooled, fully sexual.
> 
> Just asking...  You can never stay ahead of that techno curve baby


So you're planning to go 6-9 months without a GPU?


----------



## EarthDog (Jan 4, 2020)

YouTube will put out (just about) anything people... just remember that.

Anything. Even avrona has a yt channel...I'm not surprised to see some other neophyte at it... is anyone?

Double standard my ass... get your head out of the sand.


----------



## candle_86 (Jan 4, 2020)

64K said:


> The problem with 4K is the expense. When a new high end GPU comes out it does pretty well at 4K but then a couple years later it's not adequate for some of the new games coming out  (sometimes simply due to poor optimization) and you have to upgrade to the new high end GPU. So the 4K proposition is very expensive. It's there for those that want it badly enough but certainly not for mainstream gamers. I don't think Ampere will change that even with a 50% increase in performance. A couple of years after the 3080 Ti comes out you will be looking at a 4080 Ti to keep up.
> 
> I'm quite happy with 1440p 60 FPS average but even my 980 Ti is inadequate for this resolution with newer games.



This is why I stick to 1080p. I did have a 1070 but sold it when I was laid off; it was overkill at the time but would have aged more gracefully. A friend of mine did the same thing: he bought a GTX 980 Ti for 1080p and he's still happy.


----------



## mrthanhnguyen (Jan 4, 2020)

oh no, where my strix 2080ti gonna head to? sub $400 after new gen?


----------



## candle_86 (Jan 4, 2020)

mrthanhnguyen said:


> oh no, where my strix 2080ti gonna head to? sub $400 after new gen?



Yes, before going worthless; then it will slowly rise in value over 30 years


----------



## DeathtoGnomes (Jan 4, 2020)

R0H1T said:


> Yes & *pigs will also fly when* that happens
> 
> Reminds me of Intel's PR these days with multiple *****


is that RTX ON?


----------



## R0H1T (Jan 4, 2020)

Probably, will have to check with JHH once ~ ($)*2k* cards here we come


----------



## Fluffmeister (Jan 4, 2020)

xkm1948 said:


> Hey hey logic and reasoning is offensive to my gut feelings and love for certain brands, take it back!  /s
> 
> On a serious note, tech progression don’t give a flying fk about fans. Nvidia dominates GPU accelerated computing and AI in industry and academia. Nvidia has good reason to keep pushing large gen to gen performance boost. This is the best way to entrench themselves in the lucrative market.



I hear you fella, it's amusing reading this thread and the highly aggro and defensive comments (even the ones that have recently been deleted), because the shocking truth is Nvidia haven't even gone 7nm yet and they don't like that fact.

Hey, we all wish the fastest cards you can buy were cheaper, but those cards need competition to force the prices down.

All else is tears in the rain.


----------



## prnsforum (Jan 5, 2020)

The bad part is the price: 100% uplift


----------



## Midland Dog (Jan 5, 2020)

cucker tarlson said:


> it's gonna be feliz navi-dead for amd when ampere launches
> 
> nah,but seriously,I'm confused by this tsmc 7nm mention.
> would not be surprised if smaller a107/108 dies went for tsmc,they segmented their production for big and small pascals between tsmc and glofo too IIRC.I bet rdna2 desktop cards would not like to see 3050 and 3060 production get in their way too.


rdna2 parts might end up competing with the 3050 and 3060 the way RTG seems to be going


----------



## Aerpoweron (Jan 5, 2020)

I can imagine a 50% speed uplift in a best-case scenario, or 50% less power draw in such a scenario, but not both at the same time. We probably won't see this in the consumer market soon, since it will most likely be focused on the AI and datacenter markets.

As for the competition from AMD: keep in mind, AMD is focusing on CPUs, and maybe a little on the GPUs for the consoles. AMD just has to do good enough in the GPU segment for now. They just don't have enough resources to pull a Ryzen on the GPU front. Most of the money is made in the mid-range GPU segment, and I think AMD has some good offers there to compete with Nvidia.
If AMD does focus on GPUs in the near future, it will get very interesting.


----------



## Vayra86 (Jan 5, 2020)

cucker tarlson said:


> this is shameless promotion from "gurustud" (  )
> a guy youtubing his commentary for TPU posts,how brave of him to avoid getting into a discussion.I bet that would go well for him.



We're all haters man! Like, OMG, really... *add sweet boy voice*



EarthDog said:


> YouTube will put out (just about) anything people... just remember that.
> 
> Anything. Even avrona has a yt channel...I'm not surprised to see some other neophyte at it... is anyone?
> 
> Double standard my ass... get your head out of the sand.



It's almost like TV

No wait, it's worse, it's TV without filters



DeathtoGnomes said:


> is that RTX ON?



RTX ON means the pigs will blot out the sun!



Valantar said:


> While I don't disagree with what you're saying entirely, claiming that 7nm EUV is a full node up from 7nm DUV is... dubious. It is definitely an improved node, but not by that much.



Well, it is kind of the proposed ultimate form of 7nm since ASML started doing EUV. DUV was never really cost-effective or fantastic in the first place; it's just a placeholder to ramp up 7nm. That is also why you don't see that much gain in absolute performance per watt; the major gain from 7nm DUV is density.

It's more accurate to say 7nm EUV is the real 7nm node and DUV is a bandaid.


----------



## cucker tarlson (Jan 5, 2020)

Vayra86 said:


> We're all haters man! Like, OMG, really... *add sweet boy voice*
> 
> 
> 
> ...


this guy has done very poor research to begin with.










like he begins with "navi is better than polaris" since it competes with tu104, not like polaris competed with gtx1060.
this is plain wrong in the opening assumptions.
full tu106 is the 2070, and that's what the 5700xt goes head to head against. full tu104 with 3072 cuda is at least 25% faster.
so navi goes against the same 106 die, but with a node-down advantage this time.


----------



## ppn (Jan 5, 2020)

N5P in 2020/21 uses more EUV layers than N7+ and provides 33% better performance and 45% smaller die size than N7, so I might as well wait for that. It all seems like temporary nodes to me: N7, N7P, N6, N7+, N5, N5P and what not.


----------



## cucker tarlson (Jan 5, 2020)

ppn said:


> N5P in 2020/21 uses more EUV layers than N7+ and provides 33% better performance and 45% smaller die size than N7. So I might as well wait for that. it all seems like temporary nodes to me, N7,N7P,N6,N7+,N5,N5P and what not.


5nm+ cards are not coming in 2020 or 2021.


----------



## ppn (Jan 5, 2020)

N7 risk production Q2/2017, 5700XT Q3/2019, 3080 Q3/2020 +50% perf or -50% power
N5 risk production Q2/2019, 6700XT Q3/2021, 4080 Q3/2022 +25% perf or -50% power


----------



## efikkan (Jan 5, 2020)

cucker tarlson said:


> this guy has done very poor research to begin with.
> 
> 
> 
> ...


Just because someone is able to post a YouTube video about a technical topic doesn't make them an authority on the subject matter, any fool can sit in their basement and make a video where they just ramble about "technical" stuff.

"Why Nvidia Is Stuck with Tensor/RT till 2021"
Well, at least to me this title really says it all. It's really incredible how clueless and misguided some of these opinionators can be, even lacking the ability to do logical reasoning. There is just so much wrong here I don't even know where to begin. But let's just say that Nvidia have been investing heavily in RT for over a decade. What we see in Turing was not thrown together quickly, and when their next generation arrives, it will take things quite a bit further; it's not like they are "stuck" with some "poor" implementation.

I would be much more concerned about AMD's RT ventures. They have obviously been researching too, but were caught off guard when Turing brought it to market so quickly. It will be really interesting to see if they have been able to integrate it properly in such a short timeframe.


----------



## Vayra86 (Jan 5, 2020)

ppn said:


> N7 risk production Q2/2017, 5700XT Q3/2019, 3080 Q3/2020 +50% perf or -50% power
> N5 risk production Q2/2019, 6700XT Q3/2021, 4080 Q3/2022 +25% perf or -50% power



Holy crap so by 2022 we get 175% performance at 0% power!!!

I can't wait!


----------



## Midland Dog (Jan 5, 2020)

maybe
Ampere Compute Series
TSMC 7EUV
GA100 8GPC*8TPC*2SM 6144bit
GA101 4GPC*8TPC*2SM 3072bit
Ampere Gaming Series
SAMSUNG 8EUV
GA102 7GPC*6TPC*2SM 384bit NVLINK
GA103 6GPC*5TPC*2SM 320bit
GA104 6GPC*4TPC*2SM 256bit
GA106 3GPC*5TPC*2SM 192bit
GA107 2GPC*5TPC*2SM 128bit
ganked from Twitter, and according to my numbers that makes GA100 an 8192 CUDA core part
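Assuming a Volta/Turing-style 64 FP32 CUDA cores per SM (an assumption; the leak above doesn't say), the arithmetic behind that 8192-core figure can be sketched as:

```python
# Hypothetical core counts for the rumored Ampere configs listed above,
# assuming 64 FP32 CUDA cores per SM (Volta/Turing-style; not confirmed).
CORES_PER_SM = 64

# (GPCs, TPCs per GPC, SMs per TPC) taken from the leaked list
configs = {
    "GA100": (8, 8, 2),
    "GA102": (7, 6, 2),
    "GA104": (6, 4, 2),
}

def cuda_cores(gpc, tpc_per_gpc, sm_per_tpc, cores_per_sm=CORES_PER_SM):
    """Total CUDA cores = GPCs x TPCs/GPC x SMs/TPC x cores/SM."""
    return gpc * tpc_per_gpc * sm_per_tpc * cores_per_sm

for name, (g, t, s) in configs.items():
    print(f"{name}: {cuda_cores(g, t, s)} CUDA cores")
# GA100: 8 x 8 x 2 x 64 = 8192, matching the estimate above
```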


----------



## cucker tarlson (Jan 5, 2020)

efikkan said:


> Just because someone is able to post a YouTube video about a technical topic doesn't make them an authority on the subject matter, any fool can sit in their basement and make a video where they just ramble about "technical" stuff.
> 
> "Why Nvidia Is Stuck with Tensor/RT till 2021"
> Well, at least to me this title really says it all. It's really incredible how clueless and misguided some of these opinionators can be, even lacking the ability to do logical reasoning. There is just so much wrong here I don't even know where to begin. But let's just say that Nvidia have been heavily investing into RT for over a decade. What we see in Turing was not thrown quickly together, and when their next generation arrives, it will take it quite a bit further, it's not like they are "stuck" with some "poor" implementation.
> ...


yup, exactly. I don't get the "stuck" part either.



Midland Dog said:


> maybe
> Ampere Compute Series
> TSMC 7EUV
> GA100 8GPC*8TPC*2SM 6144bit
> ...


tbh I was expecting 320-bit parts.
what are gpc/tpc ?


----------



## Midland Dog (Jan 6, 2020)

cucker tarlson said:


> yup.exactly.I don't get the "stuck" part either.
> 
> 
> tbh I was expecting 320-bit parts.
> what are gpc/tpc ?


Graphics Processing Clusters and Texture Processing Clusters; my numbers assume a Volta-esque partitioning of SMs though



Midland Dog said:


> Graphics Processing Clusters and Texture Processing Clusters, my numbers assume a volta esque partioning of sms tho


if anything changes in the SM it's likely to be cache-wise; with a chip that wide, cache and ROPs will be the biggest concern. NV engineers will cook up some new compression method to get further on the same, if not slightly higher (14-16 Gbps), memory speeds



Midland Dog said:


> Graphics Processing Clusters and Texture Processing Clusters, my numbers assume a volta esque partioning of sms tho
> 
> 
> if any thing changes in the sm its likely to be cache wise, with a chip that wide cache and rops will be the biggest concern, nv engineers will cook up some new compression method to get further on the same if not slightly higher (14-16gbps) speeds


assuming a 200-300 MHz boost as AIBs have reported, GA100 could be a 36-teraflop part at 2.2 GHz, amdead
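That estimate is just cores x 2 FLOPs per clock (one fused multiply-add) x clock speed; a quick sketch, with both the 8192-core count and the 2.2 GHz boost being this thread's speculation, not confirmed specs:

```python
# Back-of-envelope FP32 throughput: cores x 2 ops/clock (FMA) x clock.
# Both inputs here (8192 cores, 2.2 GHz) are rumored figures, not specs.
def fp32_tflops(cuda_cores, boost_ghz, flops_per_core_per_clock=2):
    """Peak FP32 TFLOPS = cores * FLOPs/clock * GHz / 1000."""
    return cuda_cores * flops_per_core_per_clock * boost_ghz / 1000.0

print(round(fp32_tflops(8192, 2.2), 1))  # 36.0 -> the "36 teraflop" figure
```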


----------



## cucker tarlson (Jan 6, 2020)

Midland Dog said:


> Graphics Processing Clusters and Texture Processing Clusters, my numbers assume a volta esque partioning of sms tho


hard to predict the CUDA count based on that since they use various SM per GPC count.


----------



## Midland Dog (Jan 7, 2020)

cucker tarlson said:


> hard to predict the CUDA count based on that since they use various SM per GPC count.


i still expect 64; then again intel is going with smaller partitions in the dg1 design, so maybe there's merit to it


----------



## cucker tarlson (Jan 8, 2020)

Midland Dog said:


> i still expect 64, then again intel is going with smaller partitions with the dg1 design maybe theres merit to it


sm per gpc,not cuda per sm


----------



## Midland Dog (Jan 9, 2020)

cucker tarlson said:


> sm per gpc,not cuda per sm


cuda per sm fp32 and int32


----------



## Super XP (Jan 10, 2020)

50%? I don't see that.
The GTX 980ti was approx: 34% faster over the GTX 780ti in 12 games on average.
The GTX 1080ti was approx: 23% faster over the GTX 980ti in 12 games on average.
The RTX 2080ti was approx: 25% faster over the GTX 1080ti in 12 games on average.
The RTX 3080ti to be approx: 50% faster over the RTX 2080ti on average? I Don't Think So, lol



kings said:


> What big Navi? Released when?
> 
> Big Navi is a rumor at this point, just like Ampere or whatever name Nvidia may call it.


Big Navi is not a rumour, as it was confirmed by Dr. Lisa Su, and it's coming. The question that remains is when? Looking at AMD's past announcements, and the fact that next-generation gaming consoles from Micro$oft and Sony are coming Christmas 2020, I can see AMD launching RDNA2 and ZEN3 at Computex 2020 at the same time, with availability shortly thereafter.


----------



## 64K (Jan 11, 2020)

Let's look at W1zzard's benches for some perspective on reality.

Using 1440p as a base for comparison. 4K often offers even more of a performance boost over the previous generation in some cases but I'm trying to keep this simple. Max quality settings.

GTX 980 Ti using a 21 games suite for benching showed an average of 41% increase in performance over a GTX 780 TI

GTX 1080 Ti using a 22 game suite for benching showed an average of 75% increase in performance over a GTX 980 Ti

RTX 2080 Ti using a 23 game suite for benching showed an average 33% increase in performance over a GTX 1080 Ti

Now with Ampere going from the 12nm process node to the 7nm process node and the resultant increase in efficiency and a new architecture then why in the world can't there be a 50% increase in performance over the Turing GPUs?

I have put all of the Relative Performance percentages behind the spoiler below. Please don't subtract the percentages from each other; that's not how percentages work. If GPU #1 is rated at 100% and GPU #2 is rated at 50%, that doesn't mean GPU #1 is 50% faster than GPU #2. It means GPU #1 is 100% faster, i.e. twice as fast as GPU #2.

Simply divide GPU #1's rating by GPU #2's to get the ratio.

i.e. if GPU #1 is rated at 100% and GPU #2 at 75%, then 100/75 = 1.33, which means GPU #1 is 33% faster than GPU #2. If you doubt that, multiply GPU #2's 75% by 1.25 and you get 93.75, not 100.
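The division rule above can be sketched with a small hypothetical helper (`percent_faster` is illustrative, not from any benchmark tool):

```python
# "X% faster" comes from dividing the two relative-performance ratings,
# not from subtracting them (per the explanation above).
def percent_faster(perf_a, perf_b):
    """How much faster GPU A is than GPU B, in percent."""
    return (perf_a / perf_b - 1) * 100

# GPU #1 rated 100%, GPU #2 rated 75%: 100/75 = 1.33, so ~33% faster
print(round(percent_faster(100, 75)))  # 33
# GPU #2 rated 50%: GPU #1 is 100% faster (twice as fast), not 50% faster
print(round(percent_faster(100, 50)))  # 100
```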



Spoiler: Relative Performance One Generation Over The Last


----------



## Fluffmeister (Jan 11, 2020)

Super XP said:


> 50%? I don't see that.
> The GTX 980ti was approx: 34% faster over the GTX 780ti in 12 games on average.
> The GTX 1080ti was approx: 23% faster over the GTX 980ti in 12 games on average.
> The RTX 2080ti was approx: 25% faster over the GTX 1080ti in 12 games on average.
> The RTX 3080ti to be approx: 50% faster over the RTX 2080ti on average? I Don't Think So, lol



There is no way a GTX 1080 Ti was just an average of 23% faster than a GTX 980 Ti, which was the last big node change for Nvidia.

Looking at W1zz review (over 22 games) comparing reference cards:

At 1080P the 1080 Ti is 53% faster:





At 1440P the 1080 Ti is 75% faster:





And at 4K the 1080 Ti is 85% faster:


----------



## Super XP (Jan 11, 2020)

I was basing the results on UserBenchmark, as they compare thousands of gaming PCs.


----------



## cucker tarlson (Jan 11, 2020)

Super XP said:


> 50%? I don't see that.
> The GTX 980ti was approx: 34% faster over the GTX 780ti in 12 games on average.


41%








Super XP said:


> The GTX 1080ti was approx: 23% faster over the GTX 980ti in 12 games on average.


75%





first you claim rt cores do nothing for rt and nvidia just gimped 1080ti to make rtx line look better
then you lower performance numbers by over 50%

I know it takes a lot to believe amd is still competitive but what you are writing is quite a stretch even for those standards.


----------



## efikkan (Jan 12, 2020)

Super XP said:


> I was basing the results on UserBenchmark results. As they compare 1,000's of gaming PCs.


I don't care if it was every single PC in the whole world. Benchmarks like this are 100% useless. You need controlled test conditions, otherwise the test result is completely worthless.
I like Steve at Gamer's Nexus' comment about these pages; they are just search engine spam.


----------



## lexluthermiester (Jan 17, 2020)

efikkan said:


> You need controlled test conditions, otherwise the test result is completely worthless.


That depends on the test and the conditions thereof.


efikkan said:


> I like Steve at Gamer's Nexus' comment about these pages; they are just search engine spam.


Steve gets a lot of things right, but like every other human being, he doesn't know everything.


----------



## cucker tarlson (Jan 17, 2020)

lexluthermiester said:


> That depends on the test and the conditions thereof.


that's exactly what he's saying


----------



## EarthDog (Jan 17, 2020)

cucker tarlson said:


> 41%





cucker tarlson said:


> 75%


I just think he doesn't get the math. He's right if you change one word: where he says faster, it should read _slower_, as the graphs are based on the 980 Ti and 1080 Ti. The way you are doing it is based on how much faster the faster card is than the slower card. 

or... it's based on userbench??? Wth?? Oye.


----------



## Super XP (Jan 18, 2020)

cucker tarlson said:


> 41%
> 
> 
> 
> ...


I do admit the GTX 1080 Ti is one of the best GeForce graphics cards Nvidia has released in a very long time. Not even the 2080 Ti can compete with how well the 1080 Ti was priced for the performance, and the actual gain you got versus the previous 980 Ti. In other words, no, I am not impressed with the RTX line of GPUs, and I believe many people agree with me.

Thanks for the TPU review nevertheless,..



efikkan said:


> I don't care if it was every single PC in the whole world. Benchmarks like this are 100% useless. You need controlled test conditions, otherwise the test result is completely worthless.
> I like Steve at Gamer's Nexus' comment about these pages; they are just search engine spam.


I can agree with that, as controlled benchmarks would show real performance-improvement differences, whereas non-controlled ones can be somewhat manipulated to show altered results.


----------



## medi01 (Jan 22, 2020)

A note about 980Ti.
Fury X beat it at 4K at release (and at 1440p later on, depending on the tests) and was very close to it otherwise at 1080p (who cares, but oh well).

That was "countered" by a valid argument, that that is stock 980Ti and *actual AIBs are OCable good 20% beyond that*.

However, once 1080 popped out, people have forgotten about that nice quirk of 980Ti and now we in all seriousness discuss how 1080 was 33%-ish faster.


----------



## Prima.Vera (Jan 23, 2020)

50% faster on relative or actual performance??


----------



## InVasMani (Jan 25, 2020)

50% cheaper at 50% less power so seeing as it's Nvidia it'll cost what 200% more?


----------



## Fluffmeister (Jan 25, 2020)

Whatever their next gen cards bring, they will no doubt be the fastest most efficient cards available.

Sorry about that.


----------



## InVasMani (Jan 25, 2020)

Fluffmeister said:


> Whatever their next gen cards bring, they will no doubt be the fastest most efficient cards available.
> 
> Sorry about that.


 Be realistic...


----------



## lexluthermiester (Jan 25, 2020)

InVasMani said:


> 50% cheaper at 50% less power so seeing as it's Nvidia it'll cost what 200% more?


Your logic, along with your math, is flawed.


----------



## lewis007 (Jan 29, 2020)

Calmmo said:


> This is talking about ampere, will nvidia even make gaming GPUs from that or are they again going to hold off for a year then release a cut down variant under a different name? AMD have nothing to show still, officially, so why would they unless the new node allows for a significant reduction in costs. If this is an indication of anything.. this is probably about their new titan 3000$ card..


Agree: 50% perf uplift, 150% price uplift.


----------



## BiggieShady (Feb 2, 2020)

Prima.Vera said:


> 50% faster on relative or actual performance??


Not sure if joking, but if somehow serious, the answer is both, because that's how ratios work... 50% faster is a ratio of 1.5, meaning fasterSpeed = 1.5 * slowerSpeed, i.e. fasterSpeed is 50% faster than slowerSpeed. Relative or actual performance numbers, the ratio stays the same; it's all relative in the end.


----------



## efikkan (Feb 2, 2020)

I wouldn't get too fixated on precise performance estimates. Until Nvidia have the final batch of qualification samples, no one really knows, including Nvidia themselves.


----------



## Anymal (Feb 3, 2020)

100% uplift at the same W! 3070 with 170w will be 100% faster than 2060. Also price, 650eur vs. 325eur.


----------



## Fluffmeister (Mar 8, 2020)

50% better performance per watt would be disappointing over Turing, exciting times ahead either way.


----------



## TheoneandonlyMrK (Mar 8, 2020)

Anymal said:


> 100% uplift at the same W! 3070 with 170w will be 100% faster than 2060. Also price, 650eur vs. 325eur.


So, 100% performance uplift at 325 euro? I think both are a little optimistic, personally.


----------



## TheGuruStud (Mar 8, 2020)

theoneandonlymrk said:


> So 100% performance uplift at 325 Euro , I think both a little optimistic personally.



He meant 625 euro. Or hasn't taken his crazy pills this morning.


----------



## medi01 (Mar 8, 2020)

I think it's also worth noting that 1080Ti price went way beyond 980Ti's price and even 1080 cost more than 980Ti.

If perf bumps are true, it will be reflected in pricing accordingly.


----------



## Vayra86 (Mar 9, 2020)

medi01 said:


> I think it's also worth noting that 1080Ti price went way beyond 980Ti's price and even 1080 cost more than 980Ti.
> 
> If perf bumps are true, it will be reflected in pricing accordingly.



Yes, but then if you look beyond one generational increase to the next, the price per 'Frame / FPS' is still going down every time. A bigger perf jump is only paid in full when you early adopt it. At the same time however, when performance jumps per generation stall like they did the past few years, you see the trend of price per frame stall as well. After all, cards still gotta get sold but the market is saturated with a certain performance level already. There is not much to compete over, so there is less competition.


----------



## medi01 (Mar 9, 2020)

Vayra86 said:


> Yes, but then if you look beyond one generational increase to the next, the price per 'Frame / FPS' is still going down every time


Yes, but nowhere near as close as people comparing Ti vs Ti implied; that was my point.
The 1080 Ti is not a correct comparison to the much cheaper 980 Ti; it's even arguable whether the 1080 is.


----------



## EarthDog (Mar 9, 2020)

InVasMani said:


> Be realistic...


It isn't? Have you seen the current landscape? Nvidia is more efficient and faster in most cases. And that is at 12nm, not 7nm...


----------



## Vayra86 (Mar 9, 2020)

medi01 said:


> Yes, but nowhere as close as people comparing Ti vs Ti implied was my point.
> 1080Ti is not a correct comparison to much cheaper 980Ti, it's even arguable if 1080 is.



780TI MSRP: $699
980TI MSRP: $649
1080TI MSRP: $699

780TI > 980TI




980TI > 1080TI





Say what again now? 1080ti's price point was far too low given its performance. And let's face it, that price point is STILL relevant today...

2070S MSRP: $499





See how bigger perf jumps translate to bigger price gaps over time? Hell, the 2070S (=1080 Ti) even cannibalizes the 2080 and the 2080S for most people because of its price drop. It's the perf/dollar king now, and that is also what the 5700 XT is priced against today (of course undercutting it with slightly lower perf, as usual).

To drive it home




1070 MSRP: $379

Complaining about the Pascal price hike was never realistic, and here is your proof. We got a LOT more for our money there than we ever did during Kepler or Maxwell, especially in terms of VRAM. Everything got a lush 8 GB (or more) to work with. The ONLY reason the 980 Ti is still relevant is that it's the only Maxwell card with 6 GB. Try that with the budget 970...


----------



## medi01 (Mar 9, 2020)

Vayra86 said:


> 1080TI MSRP: $699


Nice try.
Now let's check reality, shall we:

1080 (non TI) FE: $699
1080 (non TI) "later on edition": $599

Did I mention it was "non-TI"? Thanks.
Now, TI version came later, after 1080 milked the market for about 1 year.

#HowToMilk101



Vayra86 said:


> 2070S MSRP: $499


The S, right? The one released after AMD spanked the 2070?
That's cool.

The 2070 non-S, however, released at $599 for the FE edition.
Are you telling me it was slower than the 1080 Ti? Oh, what a strange "improvement" of perf/$, chuckle.


----------



## EarthDog (Mar 9, 2020)

medi01 said:


> Nice try.
> Now let's check reality, shall we:
> 
> 1080 (non TI) FE: $699
> ...


The 1080 Ti's MSRP on release day was $699... look it up... so was the 1080's a few months prior. 

Last I checked, they are a for-profit business. Perhaps if AMD had had anything worthwhile at the time, prices wouldn't have inflated so much...


----------



## Vayra86 (Mar 9, 2020)

medi01 said:


> Nice try.
> Now let's check reality, shall we:
> 
> 1080 (non TI) FE: $699
> ...



As always your strange pair of glasses made you handily gloss over a key word, competition. You are exactly right about the 2070S. That is how movement happens: when the competitor can challenge similar performance. So, if the contrary happens and performance stalls, while being unchallenged, MSRP is unlikely to drop. That was the initial point. No performance movement = no or very slow price movement.

And even without competition the performance cost was reduced by 100 bucks MSRP, so, your point?


----------



## lexluthermiester (Mar 9, 2020)

Vayra86 said:


> After all, cards still gotta get sold but the market is saturated with a certain performance level already. There is not much to compete over, so there is less competition.


I have to disagree with this. Market saturation, as far as the performance bar goes, is as it always is: economically tiered. Those with the money and desire always have what they want in the performance range, regardless of price and frequency of release; those on a lesser but still generous budget plan for upgrades but generally get the performance parts; those on an even lesser budget will bargain-shop to get the best bang for buck; and finally, those with the desire for a good system but little to spend are the thrifty shoppers looking for used parts, clearance sales and closeout deals.

This has been true for more than 30 years and has changed very little in that time.



Vayra86 said:


> Complaining about the Pascal price hike was never realistic, and here is your proof. We got a LOT more for our money there than we ever did during Kepler or Maxwell. Especially in terms of VRAM. Everything's got a lush 8 GB (or more) to work with. The ONLY reason the 980ti is still relevant is because its the only Maxwell card with 6GB. Try that with the budget 970...


However, on this we agree.


----------



## Vayra86 (Mar 10, 2020)

lexluthermiester said:


> I have to disagree with this. Market saturation as far as performance bar goes is as it always is, economically tiered. Those with the money and desire always have what they want in the performance range regardless of price and frequency of release, those on a lesser but still generous budget plan for upgrades but generally get the performance parts, those on an even lesser budget will bargain shop to get the best bang for buck, and finally those with the desire for a good system but have little to spend are the thrifty shoppers looking for the used parts or clearance sales and closeout deals.
> 
> This has been true for more than 30 years and has changed very little in that time.



This is all true, but it doesn't discard the reality that Nvidia wants to keep selling cards, and a competitor will challenge those lower tiers more easily with every passing gen. If the performance of each respective tier doesn't go up noticeably, the higher-tier customers won't have anything left to buy. They will resort to buying into baby steps - look at Turing. The whole reason Nvidia released the S line is that for Pascal owners, Turing had little if anything to offer, and RTX on its own didn't trigger many into upgrading regardless. Sales were shit until the Supers came about, and even now it's nothing shocking. That is why Navi appears to sell, too, by competing aggressively at the 1080 ~ 1080ti performance tier.

Put differently: if the 2080ti wasn't priced out of this world, Navi would have sold for even less and the 2070S would also be a lot cheaper atm. Those economic tiers have comfort levels for pricing, too.

I mean yes, the budget-bin hunters and the mainstream bulk of sales exist. But they don't change the market radically; they just follow the path laid out by top-end performance. Trickle-down. That is where you find those cut-down monstrosities, varying VRAM capacities, small shader cutdowns (1060 3GB), the GDDR3/GDDR5 nonsense, and rebadged old-gen GPUs for mobile. Scraps and leftovers, because that is inevitably how silicon wafers work: big stuff gets scaled down to size. No big stuff, no progress. Nvidia even does this very visibly for us - remember the GP104 1060s, the 1070ti with cheap VRAM, etc. None of that happens without the bar being moved further up.


----------



## lexluthermiester (Mar 10, 2020)

Vayra86 said:


> They will resort to buying into baby steps - look at Turing.


As an owner of an RTX card upgraded from its GTX counterpart, a 30% to 50% (depending on the game) jump in performance is serious and hardly a "baby step".


Vayra86 said:


> The whole reason Nvidia released the S line is because for Pascal owners, Turing had little if anything to offer and RTX didn't trigger many into doing so regardless.


That is an inaccurate perspective. While it took time for the RTRT features to make it into games, the raw performance in non-RTRT games was an impressive jump and worth the upgrade by itself. Put another way, my 2080 kicks the crap out of my old 1080 in the exact same system. Anyone who thinks the RTX cards are not a worthy upgrade from the GTX 1xxx cards needs to take the blinders off...

I'm not going to debate the rest of your points, as they are mostly subjective and depend greatly on personal bias and opinion.


----------



## Vayra86 (Mar 10, 2020)

lexluthermiester said:


> As an owner of an RTX card upgraded from a GTX counterpart card, 30% to 50%(depending on the game) upgrade in performance is a serious jump and hardly a "baby-step".
> 
> That is an inaccurate perspective. While it took time for the RTRT features to make it into games, the raw performance in non-RTRT games was an impressive jump and worth the upgrade by itself. Put another way, my 2080 kicks the crap out of my old 1080 in the exact same system. Anyone who thinks the RTX cards are not a worthy upgrade from the GTX 1xxx cards needs to take the blinders off...
> 
> I'm not going to debate the rest of your points as they mostly subjective and depend greatly on personal bias and opinion.



Perf/dollar shifts between generations... that was the point of discussion. Not your personal idea of how good a 2080 is. It was one of the worst perf/dollar choices in Turing and it still is. You've also upgraded not from top-end last-gen performance (again: *that* was the topic: 1080ti, not 1080) but from sub-top. You'd have been far better off waiting for the 2070S or even simply buying the 1080ti from the get-go.

To each their own... just call it what it is. The numbers don't lie. Furthermore, Nvidia's own sales numbers pre-Turing S underline my 'inaccurate' perspective... There is very little subjective about it. There are indeed Turing cards _today_ that offer meaningful upgrade paths. But the selection is small and appeared late in the gen, not with the introduction of RTX on its own. It needed a price cut and got one. Heck, even today it appears AMD has been gaining market share since the Turing launch. Odd...

We've been here before. I'm talking about the market and you're talking about your personal upgrade considerations. The latter is irrelevant here... This is the big picture.


----------



## lexluthermiester (Mar 10, 2020)

Vayra86 said:


> Not your personal idea of how good a 2080 is.


It's not my idea, it's real world performance.


Vayra86 said:


> It was one of the worst perf/dollar choices in Turing and it still is.


Subjective opinion. Not everyone agrees. I didn't pay $1000 for my 2080. I got one for $700ish. Your perf/price value ratio is heavily dependent on the price being paid and the comparative upgrade.


Vayra86 said:


> You've also upgraded not from top end last gen performance (again: thát was the topic: 1080ti, not 1080) but from sub top.


I upgraded from one model tier to its counterpart in the RTX line. The Ti models have a similar performance difference from the non-Ti models, so when we talk about the price difference between the 1080ti and the 2080ti, then yes, you might have a point, but only for that model tier. And if I had jumped from a 1080 to a 1080ti, I would not have the RTRT features that were a big part of the motivation for the upgrade.


Vayra86 said:


> The numbers don't lie.


No, but they are greatly subjective and vary quite a bit from maker to maker and from region to region, something that always seems to get overlooked in discussions like this.

Once again here's an idea, how about we let people make up their own minds where value is concerned.


----------



## Vayra86 (Mar 10, 2020)

lexluthermiester said:


> It's not my idea, it's real world performance.
> 
> Subjective opinion. Not everyone agrees. I didn't pay $1000 for my 2080. I got one for $700ish. Your perf/price value ratio is heavily dependent on the price being paid and the comparative upgrade.
> 
> ...



Wooooosh... that is all


----------



## medi01 (Mar 10, 2020)

Vayra86 said:


> That is how movement happens: when the competitor can challenge similar performance.


I thought I was reading about galactic level breakthrough on the perf/$ front, but it seems I've misread it.
Oh, good to know.



Vayra86 said:


> And even without competition the performance cost was reduced by 100 bucks MSRP, so, your point?


My point is *you were getting barely enough perf* (not necessarily perf/$) bumps to somehow justify selling you (not personally you, of course) stuff.
With a hilarious $100 "on top" at the very beginning, which certain folks paid even though, oh well, how far was the 1080 from a well-OCed AIB 980Ti?

So to summarize:

1) The major perf/$ improvements touted in this post are misleading BS, as demonstrated here.
2) If AMD really is missing in action, for whatever reason, expect milking Pascal/Turing style, and I mean literally Turing style, with greed pushing it to the point that sales targets are missed by 25%.
3) Re-read #1, it's worth it.


----------



## Vayra86 (Mar 10, 2020)

medi01 said:


> I thought I was reading about galactic level breakthrough on the perf/$ front, but it seems I've misread it.
> Oh, good to know.
> 
> 
> ...



Wooooosh....There is no AMD beef here. Stop searching for it.

Man, it must be Corona stress or something...



medi01 said:


> With hilarious $100 "on top" at the very beginning, when certain folks pay it even though, oh well, how far was 1080 from well OCed AIB 980Ti?



You really gotta learn not to twist facts to your narrative. 1080 from well OC'd 980ti was still 25% faster. So a normal tier jump for all intents and purposes. And if you OC the 1080, 30% is easy to get. And it got only better as time and demands progressed, because delta compression and memory are notably better and faster and newer games love that.

You take an FE price point that nobody in their right mind ever paid (it was common knowledge that the Pascal FE blowers were utter shit and overpriced, straight from launch day, once the FE/non-FE MSRPs were known and reviews showed throttling) and take it for granted, while you discard the real MSRP of a 1080ti because it doesn't really suit you too well either.

I can't even... You can try as hard as you like, but it's clear you don't understand a thing about the marketplace, and rather seem to think it's a schoolyard with bickering kids. Here's news: that is how bickering kids get played by AMD and Nvidia. A little leak here, a rumor there, a Tuber with a scoop there... and boom. Free press. Meanwhile, in the real world, the only things that matter are price and the USPs that customers care about. The numbers. Simply. Don't. Lie.


----------



## medi01 (Mar 10, 2020)

Vayra86 said:


> Man, it must be Corona stress or something...


I write it off to the green reality distortion field.
Someone can state figures, be shown they are all way off, and still stick to the narrative.



Vayra86 said:


> 1080 from well OC'd 980ti was still 25% faster.


BULLSHIT.
Even stock vs stock it was about 30% ahead.



Vayra86 said:


> You take an FE price point that nobody in their right mind ever paid


That's why they were sold out.
But it's cool, we need to introduce a "but nobody bought this" aspect to figure out how graceful the pricing model was, chuckle.

And, for the record: I don't complain about it, in fact, I'd love NV to be even greedier, chuckle, as I'm rather enjoying it.



Vayra86 said:


> (is it OK if I ramble around, like I'm high or something? Just to make my post look even more "impressive"?)


No problem, dude.


----------



## Vayra86 (Mar 10, 2020)

medi01 said:


> I write it off to green reality distortion field.
> Someone can state figures, be shown they are all way off, still stick to teh narrative.
> 
> 
> ...








24% is 30% in medi01-land.

lmao. But anyway, thanks for confirming that for me, because the point was, initially, that bigger perf jumps cause more price movement on the market, and they did, your whole non-discussion notwithstanding. The 1080ti only confirms that once more by offering yet another 30% at the same price point only a year later.

By the way.





Them being sold out doesn't change the fact that they were shit. Once again, thanks for confirming your BS and that I was 100% correct. You're even giving me the right sources now. Brilliant 



medi01 said:


> I'd love NV to be even greedier, chuckle, as I'm rather enjoying it.



Signs of a madman. I'm not going all emotional over a pricing strategy.

Oh, and eh... you still haven't learned how to quote properly, it seems. Shame you gotta go so low, once again. I thought you had grown up a little...


----------



## InVasMani (Mar 10, 2020)

I can hardly wait to see Nvidia's new line up at 7nm priced even more out of sight than RTX. It'll be a joyous time for gaming for the 3 people that can afford them. I hope at the very least NV masks a money bag onto the PCB so people know they blew their wad on it. Nvidia the way it's meant to be paid.


----------



## lexluthermiester (Mar 10, 2020)

Vayra86 said:


> Wooooosh... that is all


This comment made me think I might have missed some parts of the conversation, and after review.. YUP. If I'm not much mistaken, I was making all your points for you while at the same time arguing against you. How's *that* for irony? You'll excuse me while I extract my foot from my mouth....


Vayra86 said:


> Them being sold out doesn't change the fact they were shit.


True.


----------



## medi01 (Mar 11, 2020)

Vayra86 said:


> 24% is 30% in medi01-land.


It's basic math (and it was in your point's favor).

A: 76
B: 100

B is roughly A + 30%.


----------



## the54thvoid (Mar 11, 2020)

To clear things up, percentages are relative to a baseline.

For example, consider 50 and 100.

Looking at it as a performance increase: 100 is 100% higher than 50. However, 50 is also 50% lower than 100. It all depends on which number you use as the baseline. This comes up all the time on TPU and percentages really should be stated with their baseline; otherwise, we get these arithmetic confusions.

So, 2 is double one, and 1 is half of two. They're both right.
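The asymmetry described above is easy to sanity-check. A minimal Python sketch, using the 50/100 example just given and the 76-vs-100 pair from earlier in the thread:

```python
def pct_change(baseline, new):
    """Percent change of `new` relative to a chosen `baseline`."""
    return (new - baseline) / baseline * 100

# Same pair of numbers, two different baselines:
print(pct_change(50, 100))               # +100.0 (100 is double 50)
print(pct_change(100, 50))               # -50.0  (50 is half of 100)

# The 76-vs-100 case from earlier in the thread:
print(round(pct_change(76, 100), 1))     # +31.6  (100 vs a 76 baseline)
print(round(pct_change(100, 76), 1))     # -24.0  (76 vs a 100 baseline)
```

Both posters are "right" about the same pair of numbers; they just picked different baselines.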


----------



## WeeRab (Apr 25, 2020)

There are also rumours that they are having fab difficulties - low yields, power spikes, etc. - with 5nm on such a large die.
We shall soon see if there's any truth to them.


----------



## Anymal (Apr 25, 2020)

Ampere is on 7nm, 5nm should be for smaller tegra chips first.


----------



## jabbadap (Apr 26, 2020)

Anymal said:


> Ampere is on 7nm, 5nm should be for smaller tegra chips first.



Well yeah, Drive AGX Orin sounds about right for that timeline (ca. 2022). Albeit being a relatively big chip by itself, the size will be more like a midrange GPU than a big heavy compute part.

And you are right, an unproven 5nm process for a large compute chip sounds very unlikely.


----------



## ARF (Apr 26, 2020)

WeeRab said:


> There are also rumours that they are having fab difficulties - low yields, power spikes etc  with 5nm on such a large die.
> We shall soon see if there's any truth to them.




You mean with 7nm?
It sounds quite plausible, though. It's been a year and 2 months since AMD released the Radeon VII, and 10 months since the RX 5700 XT.

AMD is about to launch a second generation N7P products soon.


----------



## Valantar (Apr 26, 2020)

ARF said:


> You mean with 7nm?
> It sounds quite plausible, though. It's been a year and 2 months since AMD released the Radeon VII, and 10 months since the RX 5700 XT.
> 
> AMD is about to launch a second generation N7P products soon.


I'm not saying what you quoted is true, but the size difference between the Radeon VII and any follow-up to the RTX 2080 Ti would be very significant - Vega 20 on 7nm was 331mm2, while Vega 10 on 14nm was 495mm2 - a ~35% shrinkage (with a couple of added memory controllers and PHYs etc., so the actual density increase is likely a bit higher). While TSMC 12nm isn't identical in density to GloFo 14nm, they should nonetheless be roughly comparable - which would leave a direct shrink of the 754mm2 TU102 at ~490mm2, barring any added CUDA cores or other hardware. That's quite a lot larger. I would be surprised if they couldn't get decent yields even at that size, but it would be by far the biggest chip in volume production on that node (though we'll see what RDNA 2 brings).
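The die-size estimate above is just ratio arithmetic. A quick Python sketch of it, using the public die areas quoted in the post (the ~490mm2 figure comes from rounding the observed shrink up to 35%; the raw ratio lands a little higher):

```python
# Back-of-the-envelope die-shrink estimate, die areas in mm^2 (approximate).
vega10_14nm = 495.0   # Vega 10, GloFo 14nm
vega20_7nm = 331.0    # Vega 20, TSMC 7nm
tu102_12nm = 754.0    # TU102, TSMC 12nm

# Observed area reduction going from 14nm Vega 10 to 7nm Vega 20.
shrink = 1 - vega20_7nm / vega10_14nm        # ~0.33, i.e. roughly a third

# Naive direct shrink of TU102, assuming no added CUDA cores or hardware.
tu102_on_7nm = tu102_12nm * (1 - shrink)     # ~504 mm^2
print(f"Vega shrink: {shrink:.0%}; hypothetical 7nm TU102: ~{tu102_on_7nm:.0f} mm^2")
```

Either way, a direct 7nm shrink of TU102 would sit around 490-505mm2, i.e. still a very large die for an unproven node.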


----------



## John Naylor (Jun 21, 2020)

I can't imagine that I live in a world where fanbois argue about pre-release specs that come out of the advertising department... and, on top of that, argue that their brand's fake specs are all real and the other guys' are all fake. Save your arguing for when the cards are tested. My bet is we're just going to see more of the same...

Mantle was gonna change everything ... it didn't
HBM2 was gonna change everything ... it didn't
7nm  was gonna change everything ... it didn't

What we do know is that the GPU market stopped being competitive with the 7xx versus 2xx series, where nVidia walked away with the top two tiers (all cards overclocked). AMD lost another tier against the 970 and another against the 1060. The next generation didn't go well for either side in some respects... AMD had to make huge price cuts; nVidia didn't, because they didn't have to. The bright shining light was the 5600 XT; pretty much nothing else out of AMD got me excited... if they can scale that up into the upper tiers, things may finally get interesting.


----------



## InVasMani (Jun 21, 2020)

To be fair, all three of those things changed things. As for them changing "everything", idk who made such absurd claims and remarks lol. But if you think HBM2 isn't great, I'd hate to disappoint you: what do you think both companies are using for their professional-tier graphics cards, exactly? Mantle isn't any worse than other GPU tech that requires developer support, be it DLSS/RTX/CF/SLI/PhysX or any other proprietary 3D voodoo FX GPU hardware + developer magic. As for 7nm, it's changed plenty. Just look at what it changed for Ryzen chips; go on, tell me it hasn't changed anything, or are you still clinging to a quad-core Intel chip!? I mean, let's not pretend 7nm hasn't made any difference. Obviously it has and will continue to do so; TSMC has plenty of time for 7nm++++++++++++++, I mean Intel has paved the way for it.

I'd say the 5600M and the new Radeon Pro VII are both intriguing parts, and Renoir as well. AMD just needs to shuffle together some of the things it's got or has worked on. That would include Radeon Pro/Vega HBCC, particularly the card where they utilized an M.2 slot. I think AMD is in a position to do a lot of intriguing things on the GPU side, similar to how Ryzen was able to shake things up on the CPU side. I'm not saying it'll happen immediately, but I have a feeling they are going to hit back hard on the GPU side one of these days. Chances are it'll be during a period when the CPU side of the business begins to wane, or is waning, again. It would stand to reason that that would be the transition period where they'd make a concerted effort to double or triple down on R&D for their GPU division portfolio, to leverage it while they come up with a new CPU architecture design win again.

I would like to think that on the gaming side, in contrast to the new Radeon Pro VII's double-precision FP64, they'll go more in the opposite direction with half-precision FP16, which actually seems like it would tie in with variable rate shading more appropriately. Double-precision FP64 seems more beneficial for less stringent, non-"real time" rendering requirements and flexibility, while half precision I'd think to be the opposite, enabling finer granularity. Though a mixture of FP32 and FP64 alongside it is likely in order at some stage for gaming, to leverage them all with variable rate shading to the best extent.

Probably something like 50% going to FP32, 37.5% to half-precision FP16, and 12.5% to double-precision FP64 for gaming cards is what I'd expect in the future, while more compute-oriented workloads would reverse half precision in favor of double precision. That ratio might be closer to 6.25%/43.75% for half/double precision, but I'd expect FP32 to remain rather neutral. Or we could see quarter precision and quad precision take more of a split of the resource allocation, while keeping FP32 the majority. In that scenario it would be more like 6.25% FP8, 6.25% FP16, 50% FP32, 18.75% FP64, 18.75% FP128, or you could swap the FP64/FP128 with FP8/FP16 between gaming and compute consumer-oriented graphics cards. I'm mostly speculating, but I think more granularity is certainly beneficial, especially with variable rate shading. In terms of floating-point precision, I'd say that applies to AMD and Nvidia, as well as Intel "if" they ultimately become competitive at discrete graphics.

On the APU side, I could see AMD teaming an APU with an x16 discrete APU that matches its specs for both the CPU and GPU cores/CUs, increasing the overall combined system resources for both tasks in the future. Maybe it's too late for that with its latest APU, or perhaps not, but I do see it as a real possibility down the road, and I really do think it would appeal to a great many people who just want a nice, affordable balance and a handy upgrade path. Sure, maybe GPUs wouldn't scale perfectly teamed together CF-style in all instances, but additional CPU cores would likely still be beneficial where that doesn't apply, so it could still be an overall net gain either way, which is a cool thing to think about. AMD is best positioned to offer it to consumers right now, because Intel hasn't proven itself in that area nearly as well at this point. Then again, perhaps they have more than we give them credit for, given how integrated GPUs have slowly been eroding discrete graphics over the years; that's true of every company that has made integrated graphics in one form or another, from Nvidia back on LGA775 to Intel and AMD today.


----------



## Valantar (Jun 21, 2020)

InVasMani said:


> To be fair all three of those things changed things as to them changing "everything" idk whom made such absurd claims and remarks lol, but if you think HBM2 isn't great I'd hate to disappoint you what do you think both companies are using for their professional tier graphics cards exactly. Mantle isn't any worse than other GPU tech that requires developer support be it DLSS/RTX/CF/SLI/PhysX or any other proprietary 3D voodoo FX GPU hardware + developer magic. As for 7nm it's changed plenty just look at what it changed for Ryzen chips go on tell it hasn't changed anything or are you still clinging to a quad core Intel chip!? I mean let's not pretend 7nm hasn't made any difference obviously it has and will continue to do so TSMC has plenty of time for 7nm++++++++++++++ I mean Intel has paved the way for it.
> 
> I'd say the 5600m and the new Radeon Pro VII are both intriguing parts and Renoir as well. AMD just need shuffle something of the things together that's got or worked on. That would include Radeon Pro/Vega HBCC particularly the card where they utilized a M.2 slot. I think AMD is in position to do a lot of intriguing things on the GPU side similar to how Ryzen was able to shake things up on the CPU side of things. I'm not saying it'll happen immediately, but I have a feeling they are going to hit back hard again one of these days on the GPU side. Chances are rather likely that it'll be during a period when the CPU side of the business begins to wane or is waning again. It would stand to reason that would be a transition period of time where they'd certainly make a concerted effort to double or triple down on the R&D of their GPU division portfolio to leverage them as they up with a new CPU architecture design win again.
> 
> ...


An add-on APU AIC over PCIe would be a terrible idea unless it included modifications to the Windows scheduler that strictly segregated the two chips with no related processes ever crossing between the two. Without that you would have absolutely _horrible _memory latency issues and other NUMA-related performance issues, just exacerbated by being connected over (for this use) slow PCIe. Remember how 1st and 2nd generation Threadripper struggled to scale due to NUMA issues? It would be that, just multiplied by several orders of magnitude due to the PCIe link latency. It could work as a compute coprocessor or something similar (running its own discrete workloads), but it would be useless for combining with the existing CPU/APU. Scaling would be horrendous.

As for FP32/16/8, most if not all modern GPU architectures (Vega and onwards from AMD) support Rapid Packed Math or similar techniques for "packing" multiple smaller instructions (INT8 or FP16) into FP32 execution units for 100% performance scaling (i.e. 2:1 FP16 to FP32 or 4:1 INT8 to FP32). No additional hardware is needed for this beyond the changes to shader cores that have already been in existence for several years. So any modern GPU with X TFLOPS FP32 should be able to compute 2X TFLOPS FP16 or 4X INT8. FP64 needs additional hardware, as it is (at least for now, in consumer GPUs) not possible to combine multiple FP32 units into one FP64 unit or anything like that (it might be possible if they built it that way), but FP64, as you say, has little utility in consumer applications, so that isn't happening. CDNA is likely to aim for everything between INT8 and FP64, as the full range is useful for HPC, ML and other datacenter uses.
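The packed-math scaling described above is simple ratio arithmetic. A hedged sketch with a hypothetical 10-TFLOPS-FP32 GPU, assuming the ideal 32/width packing the paragraph describes (which, as noted, does not apply to FP64):

```python
def packed_throughput(fp32_tflops, bits):
    """Peak rate for a packed type, assuming ideal packing of
    (32 / bits) values into each FP32 lane per cycle.
    Not valid for FP64, which needs dedicated hardware."""
    return fp32_tflops * 32 / bits

base = 10.0  # hypothetical GPU with 10 TFLOPS of FP32
print(packed_throughput(base, 16))  # FP16: 2:1 packing -> 20.0 TFLOPS
print(packed_throughput(base, 8))   # INT8: 4:1 packing -> 40.0 TOPS
```

Real-world gains depend on the workload actually tolerating the reduced precision, but the peak numbers scale exactly like this.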

It will be very interesting to see whether game engine developers start to utilize FP16 more in the coming years, now that GPUs generally support it well and frameworks for its utilization have been in place for a while. It could be very useful for speeding up rendering of less important parts of the screen, perhaps especially if combined with foveated rendering for HMDs with eye tracking.



John Naylor said:


> I can't imagine that I live in a world where fanbois argue about pre-release  specs that come out of the advertising department ... and, on top of that,  arguing that their brand's fake specs are all real and the other guys are all fake.   Save ya arguing for then the cards are tested.   My bet is we just going to see more of the same ...
> 
> Mantle was gonna change everything ... it didn't
> HBM2 was gonna change everything ... it didn't
> ...


Well ... 

Mantle paved the way for Vulkan and DX12, the current dominant graphics API and the clear runner-up. Without AMD's push for closer-to-the-hardware APIs we might not have seen these arrive as quickly. Has it revolutionized performance? No. But it leaves us a lot of room for growth that DX11 and OpenGL were running out of due to overhead issues. While there are typically negligible performance differences between the different APIs in games that support several (and the older often perform better), this is mainly down to a few factors: more familiarity with programming for the older API, needing to program for the lowest common denominator (i.e. no opportunity to specifically utilize the advantages of newer APIs), etc.

HBM(2) represents a true generational leap in power efficiency per bandwidth, and is still far superior to any GDDR or DDR technology. The issue is that adoption has been slow and the only major markets have been high-margin enterprise products, leading to prices stagnating at very high levels. Though to be fair, given the high price of GDDR6 this is less of an issue than two years ago. Still, the cost of entry is higher due to the need for an interposer (or something EMIB-like) and more exotic packaging technology, which means that GPUs using HBM have typically been expensive. Of course it's also gotten a worse reputation than deserved due to the otherwise unimpressive performance of the GPUs it's been paired with. Nonetheless, GPUs like the recently announced Radeon Pro 5600M show just how large an impact it can have on power efficiency while delivering excellent performance. I'm still hoping for HBM2(e?) on "big Navi".

7nm (and Zen 2, of course) took AMD from "good performance, great value for money, particularly with multithreaded applications" to "clear overall performance winner, _clear_ efficiency winner, minor ST disadvantage" in the CPU space. It, in combination with RDNA (which is not to be discounted in terms of efficiency when compared to 7nm GCN in the Radeon VII), brought AMD to overall perf/W parity with Nvidia even in frequency-pushed SKUs like the 5700 XT, which we hadn't seen since the Kepler/early-GCN era. We've also seen that lower-clocked versions of 7nm RDNA (the original-BIOS 5600 XT and the Radeon Pro 5600M) are able to notably surpass anything Nvidia has to offer in terms of efficiency. Now, of course there is a significant node advantage in play here, but 7nm has nonetheless helped AMD reach a point in the competitive landscape that it hasn't seen on either the CPU or GPU side for many, many years. With AMD promising 50% improved perf/W for RDNA2 (even if that is peak and the average number is, say, 30%), we're looking at some very interesting AMD GPUs coming up.
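As an aside on what such perf/W claims mean arithmetically: a +50% perf/W gain buys either 1.5x performance at the same power, or the same performance at ~67% of the power, not both at once. (The article's headline claim of "50% faster at half the power" would actually imply a 3x perf/W jump.) A sketch with hypothetical numbers:

```python
# Hypothetical baseline card: 100 "perf units" at 225 W.
old_perf, old_watts = 100.0, 225.0

# Claimed +50% perf/W:
new_ppw = (old_perf / old_watts) * 1.5

same_power_perf = new_ppw * old_watts   # ~150 perf at the same 225 W
same_perf_watts = old_perf / new_ppw    # ~150 W for the same 100 perf

# "50% faster at half the power" would mean 1.5x perf at 0.5x power,
# i.e. a 3x perf/W improvement, far beyond the +50% claim:
implied_gain = 1.5 / 0.5
print(round(same_power_perf), round(same_perf_watts), implied_gain)
```

The perf and wattage figures here are made up for illustration; only the ratios matter.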

It's absolutely true that AMD has a history of over-promising and under-delivering, particularly in the years leading up to the RDNA launch, but things are looking like that has changed. The upcoming year is going to be exciting from both GPU makers, consoles are looking exciting, and even the CPU space is showing some signs of actually being interesting again (though mostly in mobile).


----------



## Vya Domus (Jun 21, 2020)

John Naylor said:


> Mantle was gonna change everything ... it didn't
> HBM2 was gonna change everything ... it didn't
> 7nm  was gonna change everything ... it didn't



Mantle paved the way for Vulkan.
HBM is now used by both AMD and Nvidia for their highest end GPUs.
Everyone is moving to 7nm, which dramatically increases transistor counts; Nvidia has a 54-billion-transistor GPU. If that isn't a game changer, I don't know what is.

Bottom line is, you don't know what you're talking about.


----------



## CandymanGR (Jun 21, 2020)

Trust me, it will not.


----------



## InVasMani (Jun 22, 2020)

Valantar said:


> An add-on APU AIC over PCIe would be a terrible idea unless it included modifications to the Windows scheduler that strictly segregated the two chips with no related processes ever crossing between the two. Without that you would have absolutely _horrible _memory latency issues and other NUMA-related performance issues, just exacerbated by being connected over (for this use) slow PCIe. Remember how 1st and 2nd generation Threadripper struggled to scale due to NUMA issues? It would be that, just multiplied by several orders of magnitude due to the PCIe link latency. It could work as a compute coprocessor or something similar (running its own discrete workloads), but it would be useless for combining with the existing CPU/APU. Scaling would be horrendous.


Technically AMD could build LPDDR4 and M.2 right into an Infinity Fabric bridge. It could actually be quicker for gaming: something like quad-channel LPDDR4 and a pair of RAID-0 M.2 slots built into the bridge itself. The socketed CPU could just handle the background OS tasks, independent of the APU's CPU cores running the game. Honestly, in such a scenario a dual-core CPU built into each APU would suffice, letting it communicate with the bridge perfectly and execute a single core per CPU. Essentially quad-core-type CPU performance, coupled with quad-core performance that can cache-accelerate the M.2 slots and compress/decompress/encrypt/decrypt on the fly across the bridge. Sort of like they are all networked together with RAM disks on each side. The real benefit, of course, is that any OS stuff going on with the main socketed CPU won't adversely impact game performance; in theory you could pretty much run a CPU stress test off it while gaming on the discrete APUs, because the games themselves would run off the APUs. That would of course require some kind of DirectX process-aware scheduler segregation. I believe the other parts, the M.2 and the compression and encryption, could be handled through the driver itself and a Linux installation on the M.2 devices that the APUs operate; it could be headless, for that matter.


----------



## Super XP (Jun 22, 2020)

John Naylor said:


> I can't imagine that I live in a world where fanbois argue about pre-release  specs that come out of the advertising department ... and, on top of that,  arguing that their brand's fake specs are all real and the other guys are all fake.   Save ya arguing for then the cards are tested.   My bet is we just going to see more of the same ...
> 
> Mantle was gonna change everything ... it didn't
> *It did change everything; it forced Microsoft into launching DX12. Mantle also evolved into Vulkan via the Khronos Group.*
> ...



AMD is far from perfect, and despite its inability to compete at a higher level in the GPU industry, they still managed to compete at their level on cost to performance. 2020 and 2021 will be very crucial growth years for AMD before Nvidia, and especially Intel, wake up.

Nvidia's next-generation Ampere GPUs to be 50% faster than Turing at half the power?
*I highly doubt that; more like wishful thinking.* I believe RDNA2 forced Nvidia to rush Ampere, which is probably a slightly tweaked & overclocked Turing design on a new manufacturing process that requires more power.
It's going to be interesting to see the Ampere vs. RDNA2 battles coming later this year.


----------

