Monday, October 28th 2024
Next-Gen GPUs: Pricing and Raster 3D Performance Matter Most to TPU Readers
Our latest front-page poll sheds light on what people want from the next generation of gaming GPUs. We asked our readers what mattered most to them, with answer options covering raster performance, ray tracing performance, energy efficiency, upscaling and frame-gen technologies, video memory size, and lastly, pricing. The poll has been running since September 19, and has gathered close to 24,000 votes as of this writing. Pricing remains king, with the option gathering 36.1% of the vote, or 8,620 votes. Our readers expect pricing of next-generation GPUs to remain flat, variant-for-variant, rather than continue the absurd upward trend of the past few generations, which has pushed the high end beyond the $1,000 mark and left $500 barely buying a 1440p-class GPU, all while 4K-capable game consoles exist.
Both AMD and NVIDIA know that Moore's Law is cooked, and that generational leaps in performance and transistor counts are only possible with increases in pricing for the latest foundry nodes. AMD even experimented with disaggregated (chiplet-based) GPUs with its latest RDNA 3 generation, before calling it quits on the enthusiast segment so it could focus on the sub-$1,000 performance segment.

The second most popular response was raster 3D performance (classic 3D rendering performance), which scored 27%, or 6,453 votes. Generational gains in raster 3D rendering performance at native resolution remain eminently desirable for anyone who has followed the PC hardware industry for decades. While Moore's Law held, we grew used to near-50% generational increases in performance, which enabled new gaming APIs and upped the eye candy in games with each generation. Interestingly, ray tracing performance takes a backseat, polling not third but fourth, at 10.4%, or 2,475 votes. Third place goes to energy efficiency.
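To put that near-50% figure in perspective, here's a quick back-of-the-envelope sketch (the 50% rate is taken as a rough historical figure, not a measured constant) of how such gains compound across generations:

```python
# Back-of-the-envelope: how a ~50% per-generation uplift compounds.
# The 50% rate is a rough historical figure, not a measurement.
per_gen_gain = 0.50

perf = 1.0  # relative performance of the baseline generation
for gen in range(1, 6):
    perf *= 1 + per_gen_gain
    print(f"After {gen} generation(s): {perf:.2f}x the baseline")
# Five such generations yield roughly 7.6x the baseline, which is why
# sustained 50% uplifts kept enabling new APIs and more eye candy.
```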
The introduction of 600 W-capable power connectors was an ominous sign of where power draw is headed with future GPU generations. As the semiconductor fabrication industry struggles to bring cutting-edge sub-2 nm nodes to volume, GPUs haven't been built on the very latest foundry node for the past three or four generations. For example, by the time 8 nm and 7 nm GPUs came out, 5 nm EUV was already the cutting edge, and Apple was building its iPhone SoCs on it. Both AMD and NVIDIA would go on to build their next generations on 5 nm, while the cutting edge moved on to 4 nm and 3 nm. The upcoming RDNA 4 and GeForce Blackwell generations are expected to be built on nodes no more advanced than 3 nm, but these arrive in 2025, by which time the cutting edge will have moved on to 20A-class nodes. All of this impacts power: when a performance target is set beyond what the available foundry node can efficiently deliver, the shortfall is made up with raw power draw.
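A rough sketch of that last point, using assumed numbers purely for illustration: if the performance target outpaces the node's efficiency gain, the difference shows up as board power.

```python
# Rough illustration (assumed numbers, not measured data): if a generation
# targets a 50% performance uplift but the available node only improves
# performance-per-watt by ~20%, the rest comes from raw board power.
perf_target = 1.50        # desired generational performance uplift
efficiency_gain = 1.20    # assumed perf/W improvement from the node

power_scaling = perf_target / efficiency_gain
print(f"Board power must rise ~{(power_scaling - 1) * 100:.0f}%")  # ~25%
# Repeat that over a few generations and 600 W connectors stop looking absurd.
```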
Our readers gave upscaling and frame-gen technologies like DLSS, FSR, and XeSS the fewest votes, with the option scoring just 2.8%, or 661 votes. They do not believe that upscaling technology is a valid excuse for missing generational performance targets at native resolution, and take claims such as "this looks better than native resolution" with a pinch of salt.
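For context on what these technologies actually do, here's a small sketch of how upscaling cuts the shaded pixel count; the per-axis scale factors are the commonly cited DLSS/FSR preset values, used here as approximations rather than authoritative figures:

```python
# Why upscaling boosts frame rates: it cuts the number of shaded pixels.
# Per-axis scale factors are the commonly cited DLSS/FSR presets;
# treat them as illustrative approximations.
presets = {"Quality": 0.67, "Balanced": 0.58, "Performance": 0.50}

target_w, target_h = 3840, 2160  # 4K output
for name, scale in presets.items():
    w, h = int(target_w * scale), int(target_h * scale)
    pixel_fraction = (w * h) / (target_w * target_h)
    print(f"{name}: renders {w}x{h}, ~{pixel_fraction:.0%} of native pixels")
# "Quality" at 4K shades under half the native pixel count, which is where
# the "free" performance comes from -- and why native purists object.
```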
All said and done, the GPU buyer of today has the same expectations from the next generation as they did a decade ago. This is important, as it forces NVIDIA and AMD to innovate, build their GPUs on the most advanced foundry nodes, and try not to be too greedy with pricing. NVIDIA's real competition isn't AMD or Intel; rather, PC gaming as a platform competes with the consoles, which offer 4K gaming experiences for half a grand with technology that "just works." The onus, then, is on PC hardware manufacturers to keep up.
73 Comments on Next-Gen GPUs: Pricing and Raster 3D Performance Matter Most to TPU Readers
The only real reason to upgrade is because you've moved to a higher resolution, want more frames per second on the same content, or because the general performance requirement of games has surpassed what your current GPU can do. It's that simple. Artificial nonsense around it will always end up being artificial, and thus, fake progress. When I see that numerous older DX11 games show an OK (or even extremely good) picture at north of 100 FPS on midrange cards of seven years ago, there's just no explanation for slightly better-looking DX12 games struggling on today's midrange cards. There's just none. And upscaling is clearly not helping that perspective either; it's possibly even making it worse, because why do you even need 'extra performance' if the original presentation was fine to begin with? You're basically implicitly telling us your base experience sucks donkey balls now. Which it does, too, if you remove the blur filters.
Some odd statements in the article, though. 'Used to 50% gen to gen uplifts'...?! Those are the exceptions, definitely not the rule. I believe Pascal was the only straight-up 50% gen-to-gen uplift in the last ten years, and maybe Maxwell came somewhat close too. The others all came with a heavy redesign of the stack/product tiering/pricing structure. And even Pascal came with a $50 premium per tier. It's not a 50% gen-to-gen uplift if that only happens from one x90 to the next, while the rest barely moves forward at the same price point. That's just introducing new price points with new performance levels, realistically.
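The commenter's value argument can be made concrete with hypothetical numbers; the prices and uplift below are invented for illustration, not taken from any real product stack:

```python
# The commenter's point, with hypothetical numbers: a card that is 50%
# faster but costs 50% more is not a generational uplift in value.
old_perf, old_price = 1.00, 500   # baseline tier (illustrative)
new_perf, new_price = 1.50, 750   # "50% faster" successor (illustrative)

value_old = old_perf / old_price
value_new = new_perf / new_price
print(f"Perf per dollar change: {value_new / value_old - 1:+.0%}")  # +0%
# Same perf/$ means a new price point with a new performance level,
# not progress at the price point buyers already occupy.
```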
Could RT work be split out to something like a daughter board, or a dedicated card? Have the RT calculations offloaded to that.
I've always wondered if we could get more out of RT, AI, and traditional GPUs if each were split out into its own individual card. Combined, they'd have a ton more die space. Imagine a dedicated RT card the size of big Ada, running fully path-traced games on it.
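One hedged back-of-the-envelope on why such a split is hard: per-frame scene data would have to cross PCIe in both directions. All figures below are loose assumptions for illustration only:

```python
# Rough feasibility check for the "dedicated RT card" idea: the G-buffer
# and ray results would have to cross PCIe every frame. All numbers here
# are loose assumptions for illustration only.
pixels = 3840 * 2160              # 4K frame
bytes_per_pixel = 48              # assumed: several G-buffer targets + ray output
fps = 60
pcie4_x16 = 32e9                  # ~32 GB/s, PCIe 4.0 x16 peak

traffic = pixels * bytes_per_pixel * fps
print(f"Round-trip traffic: {traffic / 1e9:.1f} GB/s "
      f"({traffic / pcie4_x16:.0%} of PCIe 4.0 x16)")
# Even with generous assumptions the link is mostly saturated, and latency
# comes on top -- one reason RT hardware lives on-die with the shaders.
```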
Btw, a missed opportunity here - the headline could have been "TPU readers do not care about VRAM". That would definitely bring more clicks.
In some games I even use it when I don't need the extra performance. For example, I'm using it in Wuthering Waves, a game I could easily run natively at my resolution without making my GPU sweat.
The reason is that the built-in TAA paired with native res looks like crap, and there's a lot of flickering in the background (which I'm allergic to); DLSS fixes that for the most part. This has happened in a number of games I've played over the past years.
I'm already paying a lot of money for even a mid-range card in my country, regardless of brand, so at that point I prefer the one with the better feature set, and for my use case that means DLSS/DLAA. I usually upgrade every ~3 years between mid-range cards, unless something unexpected happens before that.
Fake is fake. A woman's breasts with silicone implants are fake. And a man's, too.
Frame generation and DLSS-like stuff not only makes game devs lazier (they don't have to optimize as much, just turn on fake frames, baby, for an instant FPS boost), it is also used to obscure poor GPU optimization and lack of progress. I remind you that you'd pay $1,600 for the best GPU on the market (RTX 4090) and still not be able to play the newest titles at 60 FPS at 4K on ultra at native resolution. What are you paying for?
Even SONY pressured AMD to get its sh!t together, improve RT performance, and stop fooling around like they did with RDNA 3.
Personally, I am going to insist on what I said the day the RX 7900 XTX/XT reviews came out: RT performance must be a priority, because that's where all the marketing is. Also, upscaling and frame generation are seen today as a godsend, not as cheating; we are not in the 200x era, when cheating was exposed as something negative. Today it's a feature. This means raster performance is more than enough when combined with upscaling and frame generation, so what AMD needs to do is focus on RT performance. Only then can they level the field with Nvidia in performance and force Nvidia to search for another gimmick to differentiate their cards, while of course sabotaging the competition.
As more elements are ray traced, performance will drop to zero FPS on today's cards, which effectively 'zeroes out' any chance of future-proofing.
Ray tracing is a scam that tries to justify high GPU prices. All manufacturers are in on it, but none worse than Nvidia. I look forward to AMD and Intel bringing some sense back to the GPU market. Hopefully PC enthusiasts will reward those GPU makers with their hard-earned cash, because hoping that competition brings down Nvidia prices doesn't make sense if the vast majority only buys Nvidia and refuses to consider other GPUs due to brand loyalty or internet myths about quality. That didn't work out so well for Intel fans over the past two generations of CPUs.
Also, I don't care about RTX. I just want a card that will do a solid 165-200 FPS at 1440p for $520 or even less.
I remember when I bought the 7900 GTX back in 2006 and then cranked all my games to max. Where does this 80% number come from? If you're equating Nvidia's market share with buyers who care about RT and DLSS, that's a false assumption. Not every Nvidia buyer considers these their main reason for buying Nvidia. Exactly. Advocates for RT speak like it's some sort of great thing, but Tim's video posted above proves that most games do not implement it in a meaningful way, and all RT games today are hybrids of raster and RT; thus raster perf still matters and will matter for a long time. A false assumption. RT is in fact double the work for developers: they have to make both raster and RT lighting versions of the game, and then ensure turning on RT does not cause additional problems with raster. Only games that are fully RT or PT, like Quake II RTX, Portal RTX, etc., have the benefit of RT-only lighting.
By the way, I was the one who suggested this poll. I'm glad TPU went through with it. 25k votes is a pretty sizeable sample; most polls consider 1,000 a meaningful sample size for accurate results. It also pretty much confirmed my expectations: people have not suddenly started to place RT or upscaling/frame-gen above traditional price/raster/efficiency. Though I was surprised that efficiency was this high up the list, as I consider most current-gen cards to be pretty efficient.
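For what it's worth, the standard margin-of-error formula backs up the sample-size point, with the big caveat that a self-selected web poll is not a random sample, so this is best read as an upper bound on precision:

```python
# Margin of error for a poll proportion at 95% confidence.
# Assumes a simple random sample, which a self-selected web poll is not,
# so the result is an upper bound on real-world precision.
from math import sqrt

def margin_of_error(p, n, z=1.96):
    return z * sqrt(p * (1 - p) / n)

p = 0.361  # pricing's share of the vote
for n in (1_000, 24_000):
    print(f"n={n}: +/-{margin_of_error(p, n):.1%}")
# n=1,000 gives roughly +/-3.0%; n=24,000 tightens that to about +/-0.6%.
```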
All that matters to me is performance per watt (efficiency), as I like my GPU in the 220-250 W TDP range.
Three years matches my experience since 2008; my budget-to-mid-range GPUs pretty much last around three years, since I'm a variety gamer and also play brand-new, more demanding games (for example, the new UE5 games are starting to really push the limits of my 3060 Ti without me having to murder my settings too much).
Btw, I only buy second-hand GPUs, so I ain't paying full price to any of the brands, and this leaves me with more options in my budget range whenever it's upgrade time. :)
I'm personally more interested in future Unreal Engine 5 implementations, such as global illumination.