Monday, October 28th 2024

Next-Gen GPUs: Pricing and Raster 3D Performance Matter Most to TPU Readers

Our latest front-page poll sheds light on what people want from the next generation of gaming GPUs. We asked our readers what mattered most to them, with answer options covering raster performance, ray tracing performance, energy efficiency, upscaling or frame-gen technologies, video memory size, and lastly, pricing. The poll ran from September 19 and gathered close to 24,000 votes as of this writing. Pricing remains king, with the option gathering 36.1% of the vote, or 8,620 votes. Our readers expect pricing of next-generation GPUs to remain flat, variant-for-variant, rather than continue the absurd upward trend of the past few generations, which has pushed the high-end beyond the $1,000 mark and left $500 barely buying a 1440p-class GPU, even as 4K-capable game consoles exist.

Both AMD and NVIDIA know that Moore's Law is cooked, and that generational leaps in performance and transistor counts are only possible with increased pricing for the latest foundry nodes. AMD even experimented with disaggregated (chiplet-based) GPUs in its latest RDNA 3 generation, before calling it quits on the enthusiast segment so it could focus on the sub-$1,000 performance segment. The second most popular response was raster 3D performance (classic 3D rendering performance), which scored 27%, or 6,453 votes.
Generational gains in raster 3D rendering performance at native resolution remain eminently desirable for anyone who has followed the PC hardware industry over the decades. While Moore's Law held, we were used to near-50% generational increases in performance, which enabled new gaming APIs and upped the eye candy in games with each generation. Interestingly, ray tracing performance takes a backseat, polling not 3rd but 4th, at 10.4%, or 2,475 votes. Third place goes to energy efficiency.

The introduction of 600 W-capable power connectors was an ominous sign of where power draw is headed with future GPU generations, as the semiconductor fabrication industry struggles to make cutting-edge sub-2 nm nodes available. For the past three or four generations, GPUs haven't been built on the very latest foundry node. For example, by the time 8 nm and 7 nm GPUs came out, 5 nm EUV was already the cutting edge, and Apple was making its iPhone SoCs on it. Both AMD and NVIDIA would go on to make their next generations on 5 nm, while the cutting edge had moved on to 4 nm and 3 nm. The upcoming RDNA 4 and GeForce Blackwell generations are expected to be built on nodes no more advanced than 3 nm, but these come out in 2025, by which time the cutting edge will have moved on to 20A. All of this impacts power draw, as performance targets end up wildly misaligned with the foundry nodes available to GPU designers.

Our readers gave upscaling and frame-gen technologies like DLSS, FSR, and XeSS the fewest votes, with the option scoring just 2.8%, or 661 votes. They do not believe that upscaling technology is a valid excuse for missing generational performance improvement targets at native resolution, and take any claim that "this looks better than native resolution" with a pinch of salt.

All said and done, the GPU buyer of today has the same expectations of the next generation as they did a decade ago. This is important, as it forces NVIDIA and AMD to innovate, build their GPUs on the most advanced foundry nodes, and try not to be too greedy with pricing. NVIDIA's real competition isn't AMD or Intel; rather, PC gaming as a platform competes with the consoles, which offer 4K gaming experiences for half a grand with technology that "just works." The onus, then, is on PC hardware manufacturers to keep up.

73 Comments on Next-Gen GPUs: Pricing and Raster 3D Performance Matter Most to TPU Readers

#26
SOAREVERSOR
Daven: The worst part is that only some elements (water, lighting, etc.) are ray traced. The performance drop for this partial quality improvement is enormous, and the improvement eventually becomes unnoticeable after playing a fast-paced game for a while.

As more elements are ray traced, the performance will drop to zero fps on today’s cards which effectively ‘zeros’ out any chance of future proofing.

Ray tracing is a scam that tries to justify high GPU prices. All manufacturers are in on it, but none worse than Nvidia. I look forward to AMD and Intel bringing some sense back to the GPU market. Hopefully PC enthusiasts will reward these GPU makers with their hard-earned cash, as hoping that better competition brings down Nvidia prices doesn't make sense if the vast majority only buys Nvidia and refuses to consider other GPUs due to brand loyalty or internet myths about quality. That didn't work out so well for Intel fans over the past two gens of CPUs.
Ray tracing is not a scam. The hardware isn't there yet, and it's going to take several generations for it to actually get there. We aren't going to see good ray tracing and good performance at 4K for any sort of remotely reasonable price (a little over 1,000 USD) until consoles can pull off RT at 4K 120 FPS on their SoCs.

RT is also not just for gaming. Get gaming out of your head for a moment; it's not the end-all, be-all. RT is used in professional editing and has been for longer than it's been on Nvidia cards. However, having it on the card makes it much faster than doing it on workstations or clusters. As these GPUs cover consumer (gaming), creative, professional, and AI purposes, you're not getting RT or AI off them. It's just going to take a while until you see a benefit in gaming.

The frustration with all this and Nvidia is that you keep looking at a GPU as something solely for gaming, but it has never truly been that; once the 8800 GTX hit with CUDA, gaming was no longer even close to the biggest focus of a GPU.

AI upscaling is take it or leave it, but most people need it to actually use a 4K monitor, and people have been screaming for 4K playability. It just so happens that the same stuff that produces massive gains for actual productivity can also help hit 4K. It's better to have it than to not use something that has to be in any GPU now.
Posted on Reply
#27
Am*
Glad to see the results of this poll -- lines up perfectly with what I think.

Also, if Ngreedia doesn't want to add more VRAM to its GPUs for fear of cannibalising its AI GPU sales, it can cut away most of the tensor cores and replace them with good old-fashioned CUDA cores, TMUs and ROPs. Problem solved.
Posted on Reply
#28
20mmrain
While this poll says TPU users care most about rasterization and pricing, it's a contradiction to see that, according to Steam, most of the cards in use are Nvidia cards (and I would assume that stat is reflected on TPU as well). It just shows how good Nvidia's marketing department has been over the last few generations; otherwise, AMD would have a much larger share of the market.
And before the fanboys get all up in arms: I've given both AMD and Nvidia my money several times. I go with whoever offers the best performance vs. value! But I refuse to buy another NVIDIA card until they stop gouging their customers and stop selling chips that should have been classified as a lower model for a ridiculous price. I get that they have a business to run, but their tactics are just shiesty right now. Selling RTX xx50-class cards for 400+ dollars as xx60 or xx60 Ti cards is ridiculous, when they should cost $250 at most, even with inflation.
AMD isn't a perfect little angel either, but nowhere near as bad at the moment.
Posted on Reply
#29
Vayra86
john_: While TPU voters might care about raster performance, 80% of buyers care about RT performance and DLSS. That's Nvidia's market share.
Even SONY pressured AMD to get its sh!t together and improve RT performance and stop fooling around like they did with RDNA 3.

Personally, I am going to insist on what I was saying the day the RX 7900 XTX/XT reviews came out. RT performance must be a priority, because that's where all the marketing is. Also, upscaling and frame generation are seen today as a godsend, not as cheating; we are not in 200x, when cheating was exposed as something negative. Today it's a feature. This means raster performance is more than enough when combined with upscaling and frame generation, so what AMD needs to do is focus on RT performance. Only then can they level the field with Nvidia in performance and force Nvidia to search for another gimmick to differentiate their cards, while of course sabotaging the competition.
So 80% of buyers are idiots that can't see what's happening in front of them, then.

I think that's a good match with the realistic market conditions of the mainstream vs. the niche. I bet the same-ish 80% listens only to the top music, whatever gets aired. I bet the same happens with console ownership vs. the gaming PC; 80/20 seems about right.

But 20% of the market is still a multi-billion-dollar market, even if it's a niche within a niche, go figure.
There's a place for all of it, and funneling all markets into a situation where they're overpaying for shitty graphics isn't The Way.

I don't think Nvidia sells cards better because of RT and DLSS. They position their products better, they market them better, their time to market is shorter, and they're first rather than last with new features. Features being much more than RT and DLSS; those are just the examples that are live today. It's really quite amazing AMD held on to something on the order of a 40% share for so long, given its performance over the last few decades.

They simply need to do better and be actually consistent for a change. There are almost no two back-to-back generations where AMD has made a simple move forward, doing what they did last time and executing their successful product strategy not once, but twice. It hasn't happened a single time since Nvidia's Kepler at least; well, MAYBE with the HD 7000 series, but then they just rebranded it to the R-series for god knows what reason, and here we are: no consistency. Suddenly a 7970 was a 280X... They've been all over the place, and the customer loses trust. It's only logical, and that's where that extra 20% of market share loss was created. AMD has definitely bled some of its fanbase over the last few years, and they can blame only themselves. Also, bad product positioning/strategy overall: the Fury X 4 GB was a complete misfire, got eclipsed by the 980 Ti 6 GB (go figure... Nvidia pulled the VRAM card on AMD, but it also destroyed it at 1080p and overclocked much better), and a year post-release it had nearly lost all game support/optimization. Again: this kills trust.

Heck, even I am not so sure I'll dive into another AMD GPU right now. Look at the per-game performance in some new titles. It's abysmal. Forget RT; AMD needs full focus on the basics first. Every time, AMD needs another kick in the nuts to keep doing things right. RDNA 2 was great; the consoles forced them to build a very solid driver and support cadence. Apparently they've reached that milestone now, and the focus is off again. It's like... WTF, dudes?
Posted on Reply
#30
londiste
ThomasK: In some cases RT might make games look better, but it isn't worth the performance penalty, nor is it a revolutionary technology.
I'm personally more interested in future Unreal Engine 5 implementations, such as Global Illumination.
You do understand that the next step in UE5 GI (that already does raytracing) will move more and more into hardware-accelerated raytracing, right?
Posted on Reply
#31
Vayra86
TheDeeGee: Fact is, ray tracing is easier for developers to work with compared to baked lighting.
Right, so they're pushing part of the cost of development into our lap. Thanks, I guess?

The amount of bullshit they need to stack on top of one another to get there kills the performance, but ironically also kills the image quality.
Posted on Reply
#32
Broken Processor
I'm honestly surprised energy efficiency was 16%. I must be a luddite, because I DGAF about it.
Posted on Reply
#33
Draconis
Broken Processor: I'm honestly surprised energy efficiency was 16%. I must be a luddite, because I DGAF about it.
Each to their own, I suppose. I have an inverter, battery, and solar setup, so even though I game on a desktop, when I play at night I'm essentially on battery.
Posted on Reply
#34
Bet0n
Ray tracing is good. It's nice.
Microsoft's ray tracing API is bad. It's a black box, so when you implement it in your game you have only a vague idea of what it's going to do. That's all the HUB video proves.
That's why Unreal does their own version of RT.

Another thing: the speed of ray tracing in games would be acceptable if all of the CUDA cores could do RT. Instead, what we got is that only a very small part of the whole GPU can do RT (1/128 to be exact on Ada cards). This also means we are very, very, very far away from games looking awesomely ray traced and running fast at the same time.

In terms of Nvidia's market share, it's not about the average Joe buying a video card: OEMs and system integrators sell their PCs with Nvidia cards 90% of the time (not to mention notebooks). Why? Because "AMD driver bad"; at least that's what the management at these companies thinks/knows about AMD, and they don't want to deal with that. After someone buys their first PC/notebook, if it works as intended, they most likely won't switch to AMD.
So it's not about marketing. NV doesn't do jackshiet marketing because there is no need.

Btw, this poll was conducted in a very small enthusiast bubble on the internet. These enthusiast bubbles tend to be more knowledgeable than average and tend to have a higher-than-average share of AMD users.
So view the results accordingly.
Posted on Reply
#35
londiste
Bet0n: Microsoft's ray tracing API is bad. It's a black box, so when you implement it in your game you have only a vague idea of what it's going to do. That's all the HUB video proves.
That's why Unreal does their own version of RT.
UE does hardware-accelerated ray tracing using DX12 DXR.
Posted on Reply
#36
tommo1982
mrnagant: How much die space do AI and RT make up on Ada and RDNA 3? I wonder what the cost would be if these were cut out, or how much more raster hardware you could fit in the same die space. RT is take it or leave it, and FG/upscaling can still be decent/good and could be made even better using regular old shaders. It seems like in the near future every chip is going to have AI on it; I'd rather buy a dedicated AI card. Your CPU has an NPU, the integrated GPU has NPUs, your dedicated video card has NPUs. Let's just make the NPU its own dedicated chip.

Could RT work be split out to something like a daughterboard, or a dedicated card, and have the RT calculations offloaded onto that?

I've always wondered if we could get more out of RT, AI, and traditional GPUs if they were split out into their own individual cards. They would have a ton more die space combined. Imagine a dedicated RT card the size of big Ada, running fully path-traced games.
That'd actually be a great idea. There's nothing stopping mainboard manufacturers from adding a simple socket for an additional chip. It'd add some latency and all, but it's not like that couldn't be mitigated with software.
Posted on Reply
#37
Bet0n
londiste: UE does hardware-accelerated ray tracing using DX12 DXR.
Yeah, but they "simplified" it so it can run on hardware without RT capabilities and doesn't rely solely on the MS API. MegaLights wants to circumvent the API.
Btw, Ubisoft did their own thing in Avatar with the Snowdrop engine (HUB somehow forgot to look at it); they circumvented the "black box", and look at that game.

The MS API is good for the hardware sales.
Posted on Reply
#38
londiste
Bet0n: Yeah, but they "simplified" it so it can run on hardware without RT capabilities and doesn't rely solely on the MS API. MegaLights wants to circumvent the API.
Did you happen to wonder what Lumen HWRT means?

Edit:
For all the topics brought up, it is pretty surprising how many strange understandings there are.

Why the hate on DXR? It is just an API, part of DX12. There is also Vulkan and Vulkan Ray Tracing, but that seems to have less support and clout, partly because it came noticeably later and partly because Vulkan underneath it also needed a push for adoption that kind of never came in the AAA space. Unreal Engine 5 is basically built to run on DX12, which is also a Microsoft API. As a side note, DX12 adoption was also very slow until Nvidia came along with the RTX push, which required a proper DX12 engine underneath to even start using DXR.

Unreal Engine, or when talking about lighting solutions, Lumen, is not a separate thing. Lumen is a marketing term for Unreal Engine 5's lighting engine. While the classical rendering lighting pipeline is still there, everything beyond that is concentrated under Lumen. Practically, Lumen is a global illumination system that aims to replace a number of traditional components. In terms of the technologies it utilizes and the hardware it is able to benefit from, it covers a pretty wide scale. It offers a bunch of configuration targets for a game developer, starting from a distance-field-based software ray tracing solution, then a hardware-accelerated hybrid ray tracing mode, and eventually full path tracing. The quality of the resulting image and the hardware or performance requirements go up along that same scale.

Why the question about Lumen HWRT above? Because what is being demonstrated there is a hardware-accelerated hybrid ray tracing solution; really, the difference between a path-traced result and a less performance-intensive configuration.
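As a minimal illustration of "DXR is just an API, part of DX12": the C++ sketch below (assuming a Windows machine with the D3D12 headers available and d3d12.lib linked) simply asks the D3D12 device which ray tracing tier, if any, the GPU and driver expose. Everything else DXR offers, such as acceleration structures, DispatchRays, and inline ray queries, sits behind that same API.

```cpp
// Minimal sketch: query DirectX Raytracing (DXR) support through plain D3D12.
// Assumes Windows with the D3D12 headers; link against d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter; real code would enumerate adapters via DXGI.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available.");
        return 1;
    }

    // DXR capability is reported through the regular feature-support query, not a separate API.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5)))) {
        std::puts("OPTIONS5 query failed (runtime too old?).");
        return 1;
    }

    switch (opts5.RaytracingTier) {
    case D3D12_RAYTRACING_TIER_NOT_SUPPORTED:
        std::puts("No hardware DXR; an engine must skip RT or use a software path.");
        break;
    case D3D12_RAYTRACING_TIER_1_0:
        std::puts("DXR Tier 1.0: DispatchRays-style ray tracing pipelines are available.");
        break;
    default:
        std::puts("DXR Tier 1.1 or newer: adds inline ray queries usable from any shader stage.");
        break;
    }
    return 0;
}
```

If the tier comes back as not supported, DXR calls are simply unavailable; what an engine does in that case, whether it skips RT or falls back to a software solution like Lumen's distance fields, is entirely up to the engine, which is the distinction being discussed above.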
Posted on Reply
#39
Bet0n
londiste: Did you happen to wonder what Lumen HWRT means?
Lumen also provides hardware RT but it's a hybrid tracing pipeline that uses software RT.
Posted on Reply
#40
londiste
Can someone explain to me what it is with this strange obsession with software when it comes to ray tracing these days? :)
Posted on Reply
#41
LittleBro
ThomasK: In some cases RT might make games look better, but it isn't worth the performance penalty, nor is it a revolutionary technology.
I'm personally more interested in future Unreal Engine 5 implementations, such as Global Illumination.
Unlike DLSS-like stuff and fake frame generation, RT actually improves images. It is a step forward towards achieving the most realistic images possible. RT is extremely taxing on hardware resources. It reminds me of the times when tessellation was new, or even earlier, when 8x MSAA was a performance killer. RT performance will get better over time. Today it's nice, but expensive.

Then there is the other approach: let's render the image at a lower resolution, then upscale it to native resolution while guessing the missing image data by interpolation or similar algorithms. This is a step backwards. It deviates from image realism, and bundling such stuff on top of each other just makes it deviate even more. Sometimes I think: what the heck is the goal of game devs nowadays? They add RT to games, but in order to run the game at reasonable FPS, you need to turn on DLSS/FSR/XeSS and frame generation. What's the point of adding that RT then? I mean, you're increasing realism, but straight after you're f*cking it up.

When Crysis or Metro came out back in the day, they were considered something of a gold standard for game graphics. Performance was bad, but at least it was in the service of image quality. The same will happen with RT over time.
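A toy sketch of the "guess the missing image data by interpolation" idea, assuming a grayscale image stored as floats in [0, 1]: this is plain bilinear filtering, not what DLSS/FSR/XeSS actually do (those also use motion vectors, temporal history, and in DLSS's and XeSS's case a trained network), but it shows where the invented pixels come from.

```cpp
// Toy bilinear upscaler: every output pixel that has no exact source pixel is
// "guessed" by blending its four nearest rendered neighbours. Requires C++17.
#include <algorithm>
#include <cmath>
#include <vector>

// src holds srcW * srcH grayscale values; the result holds dstW * dstH values.
std::vector<float> upscale_bilinear(const std::vector<float>& src,
                                    int srcW, int srcH, int dstW, int dstH)
{
    std::vector<float> dst(static_cast<size_t>(dstW) * dstH);
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            // Map the output pixel centre back into source-image coordinates.
            float sx = (x + 0.5f) * srcW / dstW - 0.5f;
            float sy = (y + 0.5f) * srcH / dstH - 0.5f;
            int x0 = std::clamp(static_cast<int>(std::floor(sx)), 0, srcW - 1);
            int y0 = std::clamp(static_cast<int>(std::floor(sy)), 0, srcH - 1);
            int x1 = std::min(x0 + 1, srcW - 1);
            int y1 = std::min(y0 + 1, srcH - 1);
            float fx = std::clamp(sx - x0, 0.0f, 1.0f);
            float fy = std::clamp(sy - y0, 0.0f, 1.0f);
            // Blend horizontally across the top and bottom neighbours, then vertically.
            float top    = src[y0 * srcW + x0] * (1 - fx) + src[y0 * srcW + x1] * fx;
            float bottom = src[y1 * srcW + x0] * (1 - fx) + src[y1 * srcW + x1] * fx;
            dst[static_cast<size_t>(y) * dstW + x] = top * (1 - fy) + bottom * fy;
        }
    }
    return dst;
}
```

Going from 1080p to 4K this way quadruples the pixel count, so three out of every four output pixels are interpolated rather than rendered; that information gap is exactly what the vendors' temporal and ML-based upscalers try to paper over.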
Posted on Reply
#42
RedelZaVedno
It's still all about raster performance for me. Sure, DLSS is nice to have, but then again, higher raw raster frame rates also translate into higher DLSS frame rates. My rule of thumb is to upgrade the GPU when the new thing is at least 50% faster than the old one, and 30% for a CPU. I used to upgrade every 2nd gen on average when it came to GPUs and every 3rd gen for CPUs; now it looks like I'm going to be upgrading every 3rd gen or even less frequently when it comes to GPUs, and maybe every 4th or 5th gen when it comes to CPUs. It looks like Ngreedia/AMD/Intel don't want our money anymore, as it's all in AI and servers atm. Well, things might change in the future if/when the AI bubble bursts.
Posted on Reply
#43
tommo1982
LittleBro: Unlike DLSS-like stuff and fake frame generation, RT actually improves images. It is a step forward towards achieving the most realistic images possible. RT is extremely taxing on hardware resources. It reminds me of the times when tessellation was new, or even earlier, when 8x MSAA was a performance killer. RT performance will get better over time. Today it's nice, but expensive.

Then there is the other approach: let's render the image at a lower resolution, then upscale it to native resolution while guessing the missing image data by interpolation or similar algorithms. This is a step backwards. It deviates from image realism, and bundling such stuff on top of each other just makes it deviate even more. Sometimes I think: what the heck is the goal of game devs nowadays? They add RT to games, but in order to run the game at reasonable FPS, you need to turn on DLSS/FSR/XeSS and frame generation. What's the point of adding that RT then? I mean, you're increasing realism, but straight after you're f*cking it up.

When Crysis or Metro came out back in the day, they were considered something of a gold standard for game graphics. Performance was bad, but at least it was in the service of image quality. The same will happen with RT over time.
I've seen plenty of games with great visuals, and those were before RT. Every game with RT I've seen is basically just shadows and lighting, and that's not very impressive. There's a greater change in visual quality from simply going from low to high settings, and the devs could be using those game engines and APIs to the fullest.
Posted on Reply
#44
HD64G
100+ FPS at 1080p and high quality settings are more than enough for me. Value for money is the most critical parameter in my GPU purchasing decisions. Next are stability, cooling capacity, efficiency, and features, in that specific order.
Posted on Reply
#45
Legacy-ZA
Bet0n: Yeah, but they "simplified" it so it can run on hardware without RT capabilities and doesn't rely solely on the MS API. MegaLights wants to circumvent the API.
Btw, Ubisoft did their own thing in Avatar with the Snowdrop engine (HUB somehow forgot to look at it); they circumvented the "black box", and look at that game.

The MS API is good for the hardware sales.
The graphics sure are impressive; however, I like the "fantasy" feel games have and don't want it too real. Dunno about others, but that's how I feel about it. :)
Posted on Reply
#46
NoneRain
I would love to trade these for a better GPU or a cheaper one:
  1. RT
  2. Upscaling & frame gen
  3. AI
RT is cool, but tanks FPS, and is not a big deal.
Frame generation is a sin, and you guys will all go to hell for using it.
AI is just a promise for gaming. I would not pay for it rn. Imagine the AAA-trash released recently but with "AI features"... yeah...
Posted on Reply
#47
Gameslove
Yea, UE5 Lumen does a great job! Senua's Saga: Hellblade II has the best PC graphics at the moment.
Posted on Reply
#48
phints
A lot of armchair engineering going on here. Rasterization is a dead end; caring about more performance there instead of RT is pointless. There are too many things rasterization simply can't push any further, and there's little reason to keep developing for them now that we have acceptable RT performance with RTX or competing cores (including all the related tech: DLSS, RR, FG, etc.). By the next generation of consoles, everything will have some element of RT for much more realistic lighting, shadows, reflections, global illumination, and so on, and plain rasterization will start to look like a set of "hacks" to be avoided (SSR and SSAO are good examples that never looked very good).
Posted on Reply
#49
loracle706
TheDeeGee: Fact is, ray tracing is easier for developers to work with compared to baked lighting.


As if AMD, Intel and NVIDIA are going to listen, lol
Up to them; the next GPU generation (RTX 6xxx / RX 8xxx) will surely handle 4K 60+ FPS easily, and there will be no need for that bullshit, especially if graphics don't make a huge evolutionary leap!!
Posted on Reply
#50
Mr. Perfect
When the poll first dropped, it swung wildly back and forth between the niche options that ended up losing. I think at one point it suddenly shot up to something like 90% RT. Were people spamming the poll?
Posted on Reply