Then the title should've included words like "claims" or "says" (Nvidia says, or Nvidia claims) at the very least, just like it did in the Intel news.
Saying that IT WILL BE 50% FASTER this way is just wrong and biased.
I'd say the headline ought to be along the lines of "Nvidia's Next-Generation Ampere GPUs Reportedly 50% Faster than Turing at Half the Power". I'll agree that running a headline with zero reservations about second-hand information from a dubious source making bombastic claims this early is poor journalism, but I don't see it as a double standard or bias - inaccuracies like this are sadly par for the course across the board for TPU.
If Ampere really has a 50% gain over Turing in all benchmarks/real-world use while using less power, that's a good thing. The problem here is that many people bought the "refreshed" RTX 20 Series Super cards and GTX 16 Series cards... so those folks might be at a loss-ish?
Logic like that is always wrong. If you buy a card today and a better card comes out for the same price tomorrow, you have lost nothing whatsoever. Sure, you could have gotten a better deal if you waited, but that is always the case. There will always be better hardware in the future for a better price, so you just have to pull the trigger at some point, and your purchase will inevitably come to look less than ideal. That doesn't mean it was a bad purchase, nor does it change the performance/dollar that you tacitly agreed to pay when you made the purchase decision.
So it was or wasn't? Because I'm not sure what you mean.
I don't care about SMs, clocks and all that internal stuff (much like I don't care about IPC in CPUs). It's not what I'm paying for as a gamer.
The 1660 Ti came out roughly 2.5 years after the 1060.
It's slightly more expensive, with the same power draw and a similar form factor and feature set.
The 1660 Ti is 30-40% faster.
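For a rough sense of that generational value jump, here's a back-of-the-envelope perf/$ check. The prices are approximate launch MSRPs (my assumption, not from the post), and the performance delta is the midpoint of the 30-40% figure above:

```python
# Back-of-the-envelope perf/$ for GTX 1060 6GB vs. GTX 1660 Ti.
# Prices are approximate launch MSRPs; perf delta is the 30-40%
# figure above taken at its midpoint. Exact numbers vary by game.
price_1060, perf_1060 = 249, 1.00      # baseline
price_1660ti, perf_1660ti = 279, 1.35  # ~35% faster, ~12% pricier

value_ratio = (perf_1660ti / price_1660ti) / (perf_1060 / price_1060)
print(f"1660 Ti perf/$ vs. 1060: {value_ratio:.2f}x")  # ~1.20x
```

So even at the higher sticker price, the newer card works out to roughly 20% better value on these assumptions.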
The problem is that you weren't talking about absolute performance in the post I responded to; you were talking specifically about architectural performance improvements. While there are (very broadly) two ways for these to work (increased clock speeds not due to node changes, and "IPC" for lack of a better term), most clock speed improvements come from node changes, and most architectural improvements come down to improving IPC. There are exceptions, such as Maxwell clocking significantly higher than previous architectures, but for the most part this holds true. If you're talking about perf/$ on an absolute level, you are right, but that's another matter entirely. So, if you don't care about how one arrives at a given performance level, maybe don't get into discussions about it?
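To make the clocks-vs-IPC distinction concrete, throughput can be loosely factored as cores × clock × per-core-per-clock work. A minimal sketch (the factoring, function name, and numbers are illustrative, not taken from any specific GPU):

```python
# Loose factoring of GPU throughput; "efficiency" stands in for
# per-core, per-clock work (the GPU analogue of CPU "IPC").
def relative_perf(cores: int, clock_mhz: float, efficiency: float) -> float:
    return cores * clock_mhz * efficiency

base = relative_perf(cores=2560, clock_mhz=1600, efficiency=1.00)

# Route 1: a node shrink buys 15% higher clocks, no arch change.
clocked = relative_perf(cores=2560, clock_mhz=1840, efficiency=1.00)

# Route 2: an arch improvement buys 15% more per-clock work instead.
smarter = relative_perf(cores=2560, clock_mhz=1600, efficiency=1.15)

print(f"{clocked / base:.2f}x vs {smarter / base:.2f}x")  # both 1.15x
```

The same end performance can arrive via either route, which is exactly why "where it comes from" matters when the claim is about architecture rather than the final number.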
Valantar can't do simple maths.
The 2070 Super matches the 1080 Ti with 2560 CUDA cores vs. 3584.
Clocks go slightly in favor of the 2070S by 100 MHz (5-6%); bandwidth is in favor of the 1080 Ti by 40 GB/s (8%).
That's around 1.4x performance per CUDA core on average, not 1.3x, not 1.1x.
In some cases the 2070S ends up 15% faster, in some a few percent slower. The right way to estimate it would be 1.25x-1.5x depending on the game, but certainly at least 1.3x on average.
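The per-core arithmetic here is just the core-count ratio under the assumption of equal overall performance; a quick check using the figures from the post above:

```python
# 2070 Super vs. 1080 Ti: if overall performance is equal, the
# per-CUDA-core ratio is simply the inverse core-count ratio.
cores_2070s, cores_1080ti = 2560, 3584
perf_ratio = 1.00  # assumed equal overall performance

per_core = perf_ratio * cores_1080ti / cores_2070s
print(f"~{per_core:.2f}x perf per CUDA core")  # ~1.40x
```

For what it's worth, dividing out the 2070S's ~5% clock advantage would pull the per-core, per-clock figure down to roughly 1.33x.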
The 2070S is a tuned and binned half-gen refresh SKU based on mature silicon, so it's not really directly comparable to the first-generation 1080 Ti. That doesn't mean it doesn't have good absolute performance; it just isn't a 1:1 comparison. The fact that the 2070S performs within a few percent of the 2080 makes this pretty clear. So, if you want a like-for-like comparison, the 2080 is the correct card. And then you suddenly have a rather different equation:
The 2080 beats the 1080 Ti by about 8% with 2944 CUDA cores vs. 3584.
Clocks go slightly in favor of the 2080 by 100 MHz (5-6%), and memory bandwidth is essentially a tie.
In other words, the clock speed increase and the performance increase pretty much cancel each other out, leaving us with ~22% more perf per CUDA core. That of course ignores driver improvements and other uncontrollable variables, to which some of the improvement should reasonably be attributed. My 10% number might have been a bit low, but your 40% number is silly high.
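Running that arithmetic explicitly (the figures are the ones quoted just above; the 1.055 clock ratio is my midpoint assumption for the 5-6% delta):

```python
# 2080 vs. 1080 Ti per-core, per-clock estimate.
perf_ratio = 1.08            # 2080 ~8% faster overall
core_ratio = 3584 / 2944     # 1080 Ti has ~21.7% more CUDA cores
clock_ratio = 1.055          # 2080 clocks ~5-6% higher (midpoint)

per_core = perf_ratio * core_ratio              # ~1.31x per core
per_core_per_clock = per_core / clock_ratio     # ~1.25x per core per clock
print(f"per core: {per_core:.2f}x, per clock: {per_core_per_clock:.2f}x")
```

The exact result lands a few points above the rounded ~22%, but either way it sits in the low-to-mid 20s, nowhere near 40%.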