There are some reading comprehension issues on your end it seems.
Uno reverse card. My original comment stated that Nvidia had consistently moved performance down a tier, which they had: the 4070 was faster than the 3080, and the 3070 was faster than the 2080. For some reason, you then went with a strawman about the value of the cards, which doesn't change my point at all.
They offered a smaller performance increase at a higher cost gen over gen, which is LESS value. A 4070 offers the same performance as a 3080, yet 3070 to 4070 was a $100 markup in MSRP gen to gen. 3080 to 4080 was an insane $500 markup in MSRP (71% cost increase) for an 80-series to 80-series generational jump. You are literally paying more for less. It's not a hard concept to figure out…
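The arithmetic checks out if you plug in the launch MSRPs those deltas imply ($499/$599 for the 3070/4070, $699/$1199 for the 3080/4080) — a quick sanity check:

```python
# Launch MSRPs in USD implied by the markups quoted above
msrp = {"3070": 499, "4070": 599, "3080": 699, "4080": 1199}

markup_70 = msrp["4070"] - msrp["3070"]          # gen-over-gen 70-class markup
markup_80 = msrp["4080"] - msrp["3080"]          # gen-over-gen 80-class markup
pct_80 = markup_80 / msrp["3080"] * 100          # 80-class increase as a percent

print(markup_70, markup_80, round(pct_80, 1))    # 100 500 71.5
```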
In case you haven't noticed, there's been this thing called inflation. You may have heard of it. The price of EVERYTHING has gone up substantially. The reason I labeled the 3080 as having an "accidental" price was specifically because $700 for a 3080 was totally unsustainable, and right around the time it released, prices went through the roof. Even if supply chains had not fallen apart, that card was not staying at $700.
Realistically, almost nobody got a 3080 for anywhere near $700. $1000 was a lot more realistic.
And before you go "ah but GREED it's not FAAAAAAAIR!!!!!" check out their gross margins. You'll notice that, prior to the AI boom, Nvidia's margin went up by a whopping 3% from Ampere to Ada, and then proceeded to FALL to levels last seen in 2019, just before the AI boom hit.
www.macrotrends.net
The higher cost of TSMC wafers, inflation, energy, shipping, etc. all played a role. If you don't factor in inflation, then yeah, I guess they are not a great value. But in the real world, that's not how things work.
Short of another economic crash, you're not seeing $500 high-end cards again. The 4070 was a good value for what it offered, compared to the inflated 3080s with their pitiful 10GB VRAM buffers.
I'm curious: do you have some like-for-like numbers? I recall 40 series optimized BVH traversal for faster ray tracing, but the overall benefit was modest.
Random TPU ray tracing benchmark says the 40 series takes a 27-31% perf hit with RT on, while the 30 series takes a 31-32% hit. The only other compute improvements I'm aware of are Optical Flow Acceleration, Opacity Micro-Meshes, and a big increase in L2 cache (like 10x).
Going from 3080 to 4070 TiS:
- cores: 8704 → 8448 (-3%)
- mem bandwidth: 760 → 672 GB/s (-12%)
- core clock in gaming: 1870 → 2686 MHz (+44%)
- overall perf: +27%/+29%/+25% at 1080p/1440p/4K
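Quick back-of-envelope using the numbers above, under the crude assumption that raw throughput scales with cores × clock (ignoring bandwidth and the L2 change):

```python
# 3080 -> 4070 Ti Super, specs from the list above
cores = (8704, 8448)
clock_mhz = (1870, 2686)   # observed gaming clocks
perf_1440p = 1.29          # +29% overall perf at 1440p

# Naive throughput scaling: cores * clock
throughput_ratio = (cores[1] * clock_mhz[1]) / (cores[0] * clock_mhz[0])
print(round(throughput_ratio, 3))               # ~1.394 raw throughput gain
print(round(perf_1440p / throughput_ratio, 3))  # ~0.925, i.e. no per-clock IPC gain
```

Delivered perf landing slightly *below* the naive cores × clock scaling is consistent with the conclusion that the gains came from clocks, not IPC.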
Based on that I don't think there was much IPC gain.
I'm either misremembering them, or the numbers I saw were based on ray tracing. Either way, it was 2 years ago, so I don't have the numbers I saw back then.