
Next-Gen GPUs: What Matters Most to You?

  • Raster Performance

    Votes: 6,487 27.0%
  • RT Performance

    Votes: 2,490 10.4%
  • Energy Efficiency

    Votes: 3,971 16.5%
  • Upscaling & Frame Gen

    Votes: 662 2.8%
  • VRAM

    Votes: 1,742 7.3%
  • Pricing

    Votes: 8,667 36.1%

  • Total voters
    24,019
  • Poll closed.
I clearly stated FE; maybe you replied before I added that.

When it should've been priced not a penny over $650 at launch in 2022.
You can complain all you want, but Nvidia doesn't need to sell consumer cards anymore. I seriously don't think they're listening.
 
The 4080 shouldn't have been over $800; the 3080 launched at $700 with a much larger die and higher memory bandwidth.
If the poll were multiple choice, I'd pick pricing, raster performance, and VRAM. Power efficiency is nice, but I'd be fine with higher power consumption as long as the card sticks to 8-pin power connectors.

Power consumption is a non-issue as long as power limits can be adjusted and the GPU brought down to clock speeds where it runs more efficiently. I routinely use my 3090 limited to 250 W (for 3D gaming, LLM inference or finetuning) with minimal performance differences in practice.
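
For anyone curious how that kind of cap is applied programmatically, here's a minimal sketch using the nvidia-ml-py (pynvml) bindings. The 250 W target and device index 0 are just illustrative assumptions, and setting the limit needs admin/root rights; `nvidia-smi -pl 250` does the same thing from the command line.

```python
# Minimal sketch: cap a GPU's power limit via NVML (assumes the
# nvidia-ml-py package and admin/root privileges; the 250 W target
# and device index 0 are illustrative, not prescriptive).
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# NVML reports limits in milliwatts; query the allowed range first.
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
target_mw = min(max(250_000, min_mw), max_mw)  # clamp 250 W into the valid range

pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target_mw)
print(f"Power limit set to {target_mw / 1000:.0f} W "
      f"(allowed range {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")

pynvml.nvmlShutdown()
```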
 
Power consumption is a non-issue as long as power limits can be adjusted and the GPU brought down to clock speeds where it runs more efficiently. I routinely use my 3090 limited to 250 W (for 3D gaming, LLM inference or finetuning) with minimal performance differences in practice.
Power draw can go even lower: playing Dune: Spice Wars, for example, which isn't heavy on the GPU, I can save 60 W or so just by capping the clock speed to the base 3D clocks.
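
Capping clocks goes through the same library; below is a rough sketch, assuming a driver and pynvml version recent enough to expose nvmlDeviceSetGpuLockedClocks, plus elevated rights. The 210-1395 MHz range is only a placeholder for your card's idle floor and base 3D clock; nvidia-smi's --lock-gpu-clocks flag is the command-line route.

```python
# Rough sketch: lock GPU core clocks to a fixed ceiling via NVML.
# Assumes nvidia-ml-py (pynvml), a reasonably recent driver, and
# admin/root privileges; the 210-1395 MHz range is a placeholder.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# Keep the core clock between an idle floor and the chosen ceiling.
pynvml.nvmlDeviceSetGpuLockedClocks(gpu, 210, 1395)
print("Core clocks locked to 210-1395 MHz")

# To undo the cap later:
# pynvml.nvmlDeviceResetGpuLockedClocks(gpu)

pynvml.nvmlShutdown()
```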
 
The good choices are there already, so I'll just add:

A 2-slot 5070 with 4070 Super performance or better at the top end, and good efficiency under lighter 3D loads, for $499. Or similar in a 5060 Ti for $399; I'm not stuck on a name, just performance. The cherry on top would be 8-pin power, but that ain't gonna happen.
 
4070 Super performance
That should come as a 5060 at $350 to be good. Not gonna happen, however. Don't expect thrilling TDP numbers, either.
 
No it's not. Prices go up, like it or not. There are cheaper cards to buy; give it a rest.
Odd that others seem to be able to use the same manufacturing process for their products while their prices stay the same or even decline on occasion.
Only Nvidia's prices seem to be rising every generation. Almost as if others have lower, more sane margins.
Even the cheapest cards have increased in price. The whole stack moves when the top end moves.
 
4070: $600

3070: $500

2070: $600

1070: $450
Why are you lying so blatantly?
1070 - $380
2070 - $500
3070 - $500
4070 - $600
 
What matters to me is that Ngreedia doesn't scam potential buyers of the next-gen cards with cut-down eunuchs instead of normal GPUs. I bought an RTX 4070 Ti for the equivalent of $800 in my country, which is twice the median monthly salary around here, and got myself a card that will probably die in most new games a couple of years down the line because they decided to put 12 gigs on a 192-bit bus. And then, less than a year later, they launch a normal card with a sufficient 16 gigs and a 256-bit bus and call it a Super. But I don't have another 800-1,000 bucks, so shame on me.
But what the hell am I talking about? Of course they will scam us every gen, because they will never forgive themselves for the Pascal architecture and how they failed to line their pockets, since for most gamers the 10-series was sufficient to play any game right up until UE5, maybe.
 
This is a tough one. I'd say energy efficiency (at least during idle and low load), and price. I only voted energy efficiency because I'm planning for next gen to be my last single-generation upgrade (if briefly owning a 7800 XT before returning to my 6750 XT counts), so I'll consider it a long-term buy.

Raster performance is already fine, RT is and will be too demanding for at least 3-4 generations for me to care about, and upscaling is a gimmick unless you're on a low-end card.
 
No option for Raster, RT, Price & VRAM? (I want it all)
 
got myself a card that will probably die in most new games a couple of years down the line
Imagine having this mentality 20 years ago, when low-end next-gen GPUs were on par with or even faster than the last-gen premium segment and you literally couldn't even launch new games on old GPUs. You would've upgraded every half a year lmao.

The 4070 Ti's only problem is how much it cost. For 450ish USD, it would've been a marvellous GPU.
 
Looks like the poll was spammed. RT performance jumped up to number one, and barely anyone knows what RT is, much less shops for the best RT card.
Nvidia says hi :laugh:
 
Imagine having this mentality 20 years ago, when low-end next-gen GPUs were on par with or even faster than the last-gen premium segment and you literally couldn't even launch new games on old GPUs. You would've upgraded every half a year lmao.

The 4070 Ti's only problem is how much it cost. For 450ish USD, it would've been a marvellous GPU.
You are right. But why does AMD put 16+ gigs in that same price range while Nvidia only gives 12? I can't see it as anything other than a scam. GDDR6/X is practically at the end of the line and new cards will feature GDDR7, so the current chips are probably being bought for cents. So it's not a production cost; it's pure marketing to make customers think they should pay more, because every position in the series from the 4060 up to the 4080 (before the Supers were released) had disadvantages. People on a strict budget had to stop at some point, like I did with the 4070 Ti: I didn't have enough cash for a 4080. But for those with money to burn, the only choice was the 4090 because the others weren't "good enough" deals. And that's exactly what Ngreedia wants: for you to pay more.
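
For what it's worth, the bus-width side of that complaint is easy to put numbers on. A quick back-of-envelope below, using the 21 Gbps GDDR6X both cards ship with; the formula is just bus width times per-pin data rate.

```python
# Back-of-envelope memory bandwidth: bus width (bits) x data rate (Gbps) / 8 bits per byte.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

# 4070 Ti: 12 GB on a 192-bit bus vs 4070 Ti Super: 16 GB on a 256-bit bus.
print(bandwidth_gbs(192, 21.0))  # 504.0 GB/s
print(bandwidth_gbs(256, 21.0))  # 672.0 GB/s, i.e. about a third more bandwidth
```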
 
I was going to have to figure out a three-way shootout between Raster Performance, Energy Efficiency and VRAM before seeing the last option: pricing. So yeah, pricing.
 
For me, it's pricing. Price-to-performance is all that matters; next would be VRAM, as video editing in After Effects eats VRAM too.
 
why does AMD put 16+ gigs
It's the only metric they can win at.

Most games aren't that VRAM taxing, so "only" having 12 GB on the 4070 Ti is fine considering its compute power. Not ideal, but fine. Does a 16 GB version win by a lot? It does not, and even in the games where it does, it's like a one-third difference at best. At 4K. At ridiculously high settings. At <60 FPS for either GPU. The VRAM issue is greatly mitigated by DLSS. Mitigated, not nullified, mind you.

That said, 12 GB is okay for a GPU like the 4070 Ti from an engineer's standpoint, but it's definitely a scam at $800. An eight-Franklin SKU should get at least 16 GB and at least a dozen percent more raw performance.
 
Nvidia says hi :laugh:
To be fair, I think raster performance might have gotten a little spam as well. The most obvious ranking would be price by a huge margin, then maybe energy efficiency a distant second. Whatever falls after that would be too low a percentage to care about. The fact that RT and raster performance have such high percentages and beat price tells me that we are seeing some brand-loyalty battles going on here.
 
It's the only metric they can win at.

Most games aren't that VRAM taxing, so "only" having 12 GB on the 4070 Ti is fine considering its compute power. Not ideal, but fine. Does a 16 GB version win by a lot? It does not, and even in the games where it does, it's like a one-third difference at best. At 4K. At ridiculously high settings. At <60 FPS for either GPU. The VRAM issue is greatly mitigated by DLSS. Mitigated, not nullified, mind you.

That said, 12 GB is okay for a GPU like the 4070 Ti from an engineer's standpoint, but it's definitely a scam at $800. An eight-Franklin SKU should get at least 16 GB and at least a dozen percent more raw performance.

DLSS only shaves off 0.5 GB to 1.0 GB; it saves VRAM by lowering the render resolution, but resolution just doesn't account for nearly as much VRAM usage as it used to.
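
A rough sanity check of that figure: only the buffers that actually scale with render resolution (G-buffer targets, depth, post chains and the like) shrink when DLSS drops 4K to its 1440p "Quality" internal resolution, while textures, geometry and BVH data stay the same size. The buffer count and bytes-per-pixel below are assumed ballpark values, not any particular engine's layout.

```python
# Rough estimate of the VRAM slice that actually scales with render resolution.
# The 12 buffers and 8 bytes/pixel are assumed ballpark figures, not a real
# engine's G-buffer layout; textures and geometry are excluded on purpose
# because they don't shrink with render resolution.
def res_dependent_vram_gb(width: int, height: int,
                          buffers: int = 12, bytes_per_pixel: int = 8) -> float:
    return width * height * buffers * bytes_per_pixel / 1024**3

native_4k    = res_dependent_vram_gb(3840, 2160)  # ~0.74 GB
dlss_quality = res_dependent_vram_gb(2560, 1440)  # ~0.33 GB (4K "Quality" renders at 1440p)

print(f"native 4K:    {native_4k:.2f} GB")
print(f"DLSS Quality: {dlss_quality:.2f} GB")
print(f"saved:        {native_4k - dlss_quality:.2f} GB")  # ~0.4 GB, in the same ballpark as above
```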

I'd argue what's "saving" cards with lower VRAM is not DLSS but the fact that Nvidia accounts for 88% of the market, so game devs have no choice but to optimize for 8 GB and 12 GB cards because that's what most people can afford / have. That's less saving and more Nvidia bending the market to its will. Nvidia can keep 8 GB at the "entry" level (hard to call it that given the price) for next gen, and game devs will have no choice but to continue optimizing for it. Not that I'd want people to be forced to upgrade given current pricing.

That's for the gaming market. For people using their systems to run LLMs, 16 GB is vastly superior to 12 GB. It works out great for Nvidia: it makes it extremely difficult to do anything semi-serious with your graphics card unless you spend big money.
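
On the LLM point, the 12 GB vs 16 GB gap is easy to see with a weights-only estimate; the parameter counts and quantisation levels below are just common examples, and KV cache plus framework overhead come on top of these numbers.

```python
# Weights-only VRAM estimate for running an LLM locally. KV cache, activations
# and framework overhead come on top, so treat these figures as a floor.
def weights_gib(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

for params in (7, 13, 34):
    for bits in (16, 8, 4):
        print(f"{params:>2}B @ {bits:>2}-bit: {weights_gib(params, bits):5.1f} GiB")

# A 7B model at 16-bit (~13 GiB) already overflows a 12 GB card but fits in 16 GB;
# 13B needs 8-bit or lower quantisation before a 16 GB card can hold it.
```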
 
and game devs will have no choice but to continue optimizing for it
Given the latest trends, I don't see how that's a problem. If a game ships in a state finished enough that VRAM optimisation becomes your biggest concern, you can call that game a major success. Those "poor" companies really should put in more effort to deserve bigger VRAM buffers. A four-year-old Cyberpunk 2077 still being one of the greatest games proves how grim the current state of affairs is.
DLSS only shaves off 0.5 GB to 1.0 GB
It's not that little anyway.
For people using their systems to run LLMs, 16 GB is vastly superior to 12 GB. It works out great for Nvidia: it makes it extremely difficult to do anything semi-serious with your graphics card unless you spend big money.
Commercial use has always been taxed; why are we supposed to look at this as something new? And yeah, 24 GB 3090s are about six to seven hundred dollars used now. It's a lot, but not that much all things considered.
 
It works out great for Nvidia: it makes it extremely difficult to do anything semi-serious with your graphics card unless you spend big money.
Microsoft: WHY are you producing these absolute junk cards that struggle with our AI technology? This is unacceptable. We already raised our minimum requirements, but we will start blacklisting this impotent trash and any more SKUs you produce that are loaded with buzzwords and half-assed performance.

It could happen, and the way this has been going is already hilarious. We already have a hardware war going on in the middle of an AI war. Just waiting to watch the devs cut and run. :rolleyes:
 
Looks like the poll was spammed. RT performance jumped up to number one, and barely anyone knows what RT is, much less shops for the best RT card.

Barely anyone knows what RT is? You should have a higher opinion of your fellow tech enthusiasts here. Most members here know what RT is, and they are expressing what they would like to see in the next gen. If your accusation of vote spamming is that a large number of alt accounts are being used here to pad the votes, then I highly doubt that.

Besides that, if RT is a trivial concern then why are AMD and Intel committed to improving RT performance considerably in their next generation GPUs?
 
Commercial use has always been taxed; why are we supposed to look at this as something new? And yeah, 24 GB 3090s are about six to seven hundred dollars used now. It's a lot, but not that much all things considered.

Stable Diffusion and GPT4All aren't commercial use; there are a ton of hobbyists using these. Commercial use would be the people training models, for which you typically want 24 GB at the bare minimum but preferably 40 GB+.

It's not that little anyway.

It is when you consider that most games typically take 12 GB+ on ultra. An 8 GB card will still run these just fine, but a portion of that data will spill over into main system memory. DLSS makes no difference in these instances. There might be a scenario where it helps a game where just a bit too much is spilling over into main system memory, but that is going to be an exceedingly rare example that I have personally yet to see. Certainly nothing to imply that it's somehow helping these cards' lack of VRAM. Mind you, it would be bad if video cards did start requiring you to run upscaling in order to fit a game into their VRAM budget. I don't see the plus side there; that's a lose-lose scenario to be in because Nvidia didn't want to add $30 more RAM to a $400 video card with its 78% margins. It gets even worse when you look at cards like the 4080 and how little it would have cost, as a percentage of the price, to give it 24 GB.

Given the latest trends, I don't see how that's a problem. If a game ships in a state finished enough that VRAM optimisation becomes your biggest concern, you can call that game a major success.

Something working now isn't any indication of it being easy to do. Of course you don't find it problematic; you are speaking from a customer perspective without any empathy.

Those "poor" companies really should put more effort to deserve bigger VRAM buffers. A 4 year old Cyberpunk 2077 being one of the greatest games proves that current state of affairs is vastly grim.

Nothing new is created without first being done sub-optimally. Games release in a variety of states; all VRAM limitations do is put a cap on creativity. Sure, it might allow more unoptimized games to skate by, but it also allows more creative devs to do something clever with the additional memory.

VRAM is not so expensive that developers should have to earn it. That's just a silly notion to begin with. I don't understand the persistence in blaming devs when it would have been dirt cheap for Nvidia to simply include more VRAM.

Barely anyone knows what RT is? You should have a higher opinion of your fellow tech enthusiasts here. Most members here know what RT is, and they are expressing what they would like to see in the next gen. If your accusation of vote spamming is that a large number of alt accounts are being used here to pad the votes, then I highly doubt that.

Besides that, if RT is a trivial concern then why are AMD and Intel committed to improving RT performance considerably in their next generation GPUs?

Microsoft might just optimize for NPUs instead (like those built into AMD's latest laptop chips), as they are vastly more efficient than GPUs. I would definitely consider getting a dedicated NPU or an NPU-equipped CPU myself if they turn out to be much better than GPUs. I use AI to automatically upscale my 480p and 720p videos to 1080p, and I also use it for Stable Diffusion; it would be great if, instead of consuming 120-250 W, it consumed 15 W.
 
Well, after 60 FPS, ray tracing. But I have a 7900 XTX, so I'm not going to upgrade, most likely ever.
Had Nvidia priced the 4080 at 1,300 CAD, I would have tried it, seen how it was better for CP2077 RT, and kept it.
There was no way I was even considering it, seeing how it was 1,800 CAD and beaten by the 7900 XTX in TPU's 4K benchmark. Of course, it is now 1,300 CAD, but too late.

Oddly, that does not mean that I will not get a PS5 Pro, or a PS6… just that I envision my 7900 XTX being the last GPU I buy…

Does raster performance mean AMD, and RT performance mean Nvidia?… If so, that does not mean I will buy Nvidia…
 