Monday, October 28th 2024
Next-Gen GPUs: Pricing and Raster 3D Performance Matter Most to TPU Readers
Our latest front-page poll sheds light on what people want from the next generation of gaming GPUs. We asked our readers what mattered most to them, with answers including raster performance, ray tracing performance, energy efficiency, upscaling or frame-gen technologies, the amount of video memory, and lastly, pricing. The poll ran from September 19 and gathered close to 24,000 votes as of this writing. Pricing remains king, with the option gathering 36.1% of the vote, or 8,620 votes. Our readers expect pricing of next-generation GPUs to stay flat, variant-for-variant, rather than continue the absurd upward trend of the past few generations, which has pushed the high end beyond the $1,000 mark and left $500 barely buying a 1440p-class GPU, all while 4K-capable game consoles exist.
Both AMD and NVIDIA know that Moore's Law is cooked, and that generational leaps in performance and transistor counts are only possible with rising prices for the latest foundry nodes. AMD even experimented with disaggregated (chiplet-based) GPUs in its latest RDNA 3 generation, before calling it quits on the enthusiast segment so it could focus on the sub-$1,000 performance segment. The second most popular response was raster 3D performance (classic 3D rendering performance), which scored 27%, or 6,453 votes. Generational gains in raster 3D rendering performance at native resolution remain eminently desirable to anyone who has followed the PC hardware industry for decades. With Moore's Law intact, we grew used to near-50% generational performance increases, which enabled new gaming APIs and upped the eye-candy in games with each generation. Interestingly, ray tracing performance takes a backseat, polling not 3rd but 4th, at 10.4%, or 2,475 votes. Third place goes to energy efficiency.
The introduction of 600 W-capable power connectors was an ominous sign of where power draw is headed in future GPU generations, as the semiconductor fabrication industry struggles to make cutting-edge sub-2 nm nodes available. For the past three or four generations, GPUs haven't been built on the very latest foundry node. For example, by the time 8 nm and 7 nm GPUs came out, 5 nm EUV was already the cutting edge, with Apple making its iPhone SoCs on it. Both AMD and NVIDIA would go on to build their next generations on 5 nm, while the cutting edge moved on to 4 nm and 3 nm. The upcoming RDNA 4 and GeForce Blackwell generations are expected to be built on nodes no more advanced than 3 nm, but they arrive in 2025, by which time the cutting edge will have moved on to 20A-class nodes. All of this impacts power draw, as performance targets end up badly misaligned with the foundry nodes actually available to GPU designers.
Our readers gave upscaling and frame-gen technologies like DLSS, FSR, and XeSS the fewest votes, with the option scoring just 2.8%, or 661 votes. They do not believe that upscaling technology is a valid excuse for missing generational performance-improvement targets at native resolution, and take claims such as "this looks better than native resolution" with a pinch of salt.
All said and done, the GPU buyer of today has the same expectations of the next generation as they did a decade ago. This is important, as it forces NVIDIA and AMD to innovate, build their GPUs on the most advanced foundry nodes, and try not to be too greedy with pricing. NVIDIA's real competitor isn't AMD or Intel; rather, PC gaming as a platform competes with the consoles, which offer 4K gaming experiences for half a grand with technology that "just works." The onus is on PC hardware manufacturers to keep up.
73 Comments on Next-Gen GPUs: Pricing and Raster 3D Performance Matter Most to TPU Readers
RT is also not just for gaming. Get gaming out of your head for a moment; it's not the be-all and end-all. RT is used in professional editing and has been for longer than it's been on NVIDIA cards. However, having it on the card makes it much faster for professionals than doing it on workstations or clusters. As these GPUs cover consumer (gaming), creative, professional, and AI purposes, you're not getting RT or AI off them. It's just going to take a while until you see a benefit in gaming.
The frustration with all this and NVIDIA is that you keep looking at a GPU as something solely for gaming, but it has never truly been that; when the 8800 GTX hit with CUDA, gaming was no longer even close to the biggest focus of a GPU.
AI upscaling is take-it-or-leave-it, but most people need it to actually use a 4K monitor. People have been screaming for 4K playability, and it just so happens that the same hardware that produces massive gains for actual productivity can also help hit 4K. It's better to have it than to not use something that has to be in any GPU now.
Also if Ngreedia don't want to add more VRAM to their GPUs in fear of cannibalising their AI GPU sales, they can cut away most of the tensor cores and replace them with good old fashioned CUDA cores, TMUs and ROPs. Problem solved.
And before the fanboys get all up in arms, I've given both AMD and Nvidia my money several times. I go with whoever offers the best performance vs. value! But I refuse to buy another Nvidia card until they stop gouging their customers and stop selling chips that should have been classified as a lower model for a ridiculous price. I get they have a business to run, but their tactics are just shady right now. Selling RTX xx50-class chips for $400+ as xx60 or xx60 Ti cards is ridiculous, when they should cost $250 at most, even with inflation.
AMD isn't a perfect little angel either, but nowhere near as bad at the moment.
I think that's a good match for the realistic market conditions of the mainstream vs. the niche. I bet the same-ish 80% listens only to the top music, whatever gets aired. I bet the same holds for console ownership vs. the gaming PC: 80/20 seems about right.
But 20% of the market is still a multi-billion-dollar market, even if it's a niche within a niche, go figure.
There's a place for all of it, and funneling all markets into a situation where they're overpaying for shitty graphics isn't The Way.
I don't think Nvidia sells cards better because of RT and DLSS. They position their products better, they market them better, their time to market is shorter, and they're first rather than last with new features. Features are much more than RT and DLSS; those are just the examples that are live today. It's really quite amazing AMD held on to something on the order of 40% share for so long, given its performance over the last few decades.
They simply need to do better and be consistent for a change. There are almost no two back-to-back generations where AMD has made a simple move forward, doing what they did last time and executing their successful product strategy not once but twice. It hasn't happened a single time since Nvidia's Kepler at least; well, MAYBE with the HD 7000 series, but then they rebranded it to the R-series for god knows what reason, and here we are: no consistency. Suddenly a 7970 was a 280X... They've been all over the place, and the customer loses trust. It's only logical, and that's where that extra 20% of market share loss was created. AMD has definitely bled some of its fanbase over the last few years, and they can blame only themselves. Also, bad product positioning/strategy overall: the Fury X 4 GB was a complete misfire, got eclipsed by the 980 Ti 6 GB (go figure... Nvidia pulled the VRAM card on AMD, even beat it at 1080p, and overclocked much better), and a year post-release it had nearly lost all game support/optimization. Again: this kills trust.
Heck, even I am not so sure I'll dive into another AMD GPU right now. Look at the per-game performance in some new titles. It's abysmal. Forget RT; AMD needs full focus on the basics first. Every time, AMD needs another kick in the nuts to keep doing things right. RDNA 2 was great; the consoles forced them into a very solid driver and support cadence. Apparently they've reached that milestone now and the focus is off again. It's like... WTF, dudes?
The amount of bullshit they need to stack on top of one another to get there kills the performance, but ironically also kills the image quality.
Microsoft's ray tracing API is bad. It's a black box, so when you implement it in your game you only have a vague idea of what it's going to do. That's all the HUB video proves.
That's why Unreal does their own version of RT.
Another thing: the speed of ray tracing in games would be acceptable if all of the CUDA cores could do RT. Instead, what we got is that only a very small part of the whole GPU can do RT (one RT core per 128 CUDA cores on Ada). This also means we are very, very, very far away from games looking awesomely ray traced and running fast at the same time.
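For what it's worth, that 1:128 figure appears to be a simple unit count rather than a throughput comparison; an RT core and a CUDA core do very different kinds of work. A rough sketch of the arithmetic, assuming the commonly published Ada SM layout (128 FP32 CUDA cores and 1 RT core per SM, with 128 enabled SMs on an RTX 4090):

```cpp
#include <cstdio>

// Back-of-the-envelope unit count behind the "1/128" figure quoted above.
// Assumes the commonly published Ada SM layout: 128 FP32 CUDA cores and
// 1 RT core per SM, with 128 enabled SMs on an RTX 4090.
// Note: this counts units, not throughput; the two core types are not comparable.
int main() {
    const int sms            = 128;   // enabled SMs (RTX 4090)
    const int cudaCoresPerSm = 128;
    const int rtCoresPerSm   = 1;

    const int cudaCores = sms * cudaCoresPerSm;   // 16,384
    const int rtCores   = sms * rtCoresPerSm;     // 128

    std::printf("CUDA cores: %d, RT cores: %d, ratio 1:%d\n",
                cudaCores, rtCores, cudaCores / rtCores);
    return 0;
}
```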
In terms of Nvidia's market share, it's not about the average Joe buying a video card: OEMs and system integrators sell their PCs with Nvidia cards 90% of the time (not to mention notebooks). Why? Because "AMD driver bad", at least that's what the management at these companies thinks about AMD, and they don't want to deal with it. After someone has bought their first PC/notebook, if it works as intended, they most likely won't switch to AMD.
So it's not about marketing. NV barely does any marketing because there is no need.
Btw, this poll was conducted in a very small enthusiast bubble on the internet. These enthusiast bubbles tend to be more knowledgeable than average and tend to contain more AMD users than average.
So view the results accordingly.
The MS API is good for hardware sales.
Edit:
For all the topics brought up, it is pretty surprising how many strange understandings there are.
Why the hate on DXR? It is just an API, part of DX12. There is also Vulkan and Vulkan Ray Tracing, but that seems to have less support and clout, partly because it came noticeably later and partly because Vulkan underneath it also needed a push for adoption that kind of never came in the AAA space. Unreal Engine 5 is basically built to run on DX12, which is also a Microsoft API. As a side note, DX12 adoption was also very slow until Nvidia came along with the RTX push, which required a proper DX12 engine underneath to even start using DXR.
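To illustrate how thin that API layer looks from the application's side, here is a minimal sketch of the standard D3D12 capability query for DXR (the device creation and error handling are simplified, and you'd normally enumerate adapters rather than take the default one):

```cpp
// Build on Windows with the Windows SDK; link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

// Minimal sketch: ask the D3D12 runtime which DXR tier the GPU/driver exposes.
int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5)))) {
        if (options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
            std::printf("DXR supported, raytracing tier enum value: %d\n",
                        static_cast<int>(options5.RaytracingTier));
        } else {
            std::printf("DXR not supported on this device.\n");
        }
    }
    return 0;
}
```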
Unreal Engine, or Lumen when talking about lighting solutions, is not a separate thing. Lumen is a marketing term for Unreal Engine 5's lighting engine. While the classical rendering lighting pipeline is still there, everything beyond that is concentrated under Lumen. Practically, Lumen is a global illumination system that aims to replace a number of traditional components. In terms of the technologies it utilizes and the hardware it can benefit from, it covers a pretty wide scale. It offers a range of configuration targets for a game developer, starting from a distance-field-based software ray tracing solution, then hardware-accelerated hybrid ray tracing, and eventually full path tracing. The quality of the resulting image and the hardware/performance requirements go up along that same scale.
Why the question about Lumen HWRT above? Because what is being demonstrated is a hardware-accelerated hybrid ray-tracing solution, really the differences between a path-traced result and a less performance-intensive configuration.
Then there is the other approach: render the image at a lower resolution and upscale it to native resolution while guessing the missing image data through interpolation or similar algorithms. This is a step backwards. It deviates from image realism, and bundling such stuff on top of each other just makes it deviate even more. Sometimes I think: what the heck is the goal of game devs nowadays? They add RT to games, but in order to run the game at reasonable FPS, you need to turn on DLSS/FSR/XeSS and frame generation. What's the point of adding RT then? You're increasing realism, and then straight after you're f*cking it up.
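For a sense of how much work the GPU skips when rendering below native resolution, here is a small sketch tallying pixel counts at the per-axis scale factors the major upscalers typically publish for their presets (the exact factors vary by vendor, mode, and dynamic-resolution settings, so treat the numbers as illustrative):

```cpp
#include <cstdio>

// Illustrative pixel-count math for upscaling to 4K (3840x2160).
// The per-axis scale factors are the commonly cited preset values
// (e.g. "Quality" ~0.67, "Performance" 0.50); actual factors vary.
int main() {
    const int nativeW = 3840, nativeH = 2160;
    const double nativePixels = static_cast<double>(nativeW) * nativeH;

    struct Preset { const char* name; double scale; };
    const Preset presets[] = {
        {"Native",             1.000},
        {"Quality (~0.67x)",   0.667},
        {"Balanced (~0.58x)",  0.580},
        {"Performance (0.5x)", 0.500},
    };

    for (const Preset& p : presets) {
        const int w = static_cast<int>(nativeW * p.scale);
        const int h = static_cast<int>(nativeH * p.scale);
        const double share = (static_cast<double>(w) * h) / nativePixels;
        std::printf("%-20s renders %4dx%-4d (%.0f%% of native pixels)\n",
                    p.name, w, h, share * 100.0);
    }
    return 0;
}
```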
When Crysis or Metro came out, they were considered benchmarks of game graphics. Performance was terrible, but at least it was in the service of better image quality. The same will happen with RT over time.
- RT
- Upscaling & frame gen
- AI
RT is cool, but it tanks FPS, and it's not a big deal. Frame generation is a sin, and you guys will all go to hell for using it.
AI is just a promise for gaming. I would not pay for it right now. Imagine the AAA trash released recently, but with "AI features"... yeah...