
Will GPUs continue to have crazy TDPs?

Feels like my 4070 is the most efficient GPU I've ever had. Tons and tons of processing power, and usually in the 180W range. The CPU can pull more power depending on the scenario!
 
The old high-end is the modern mid-range. The old dual- or triple-GPU setup is the modern high-end. I think this is going to stay.

With that said, there are quite a few reasonably sized and still quite capable models in the mid-range, so I'm not complaining.

If there's one thing that bothers me, it's the price.
 
I seriously considered downgrading all the way from an EVGA 3080 Ti FTW3 Ultra (424W gaming per TPU) to a 4060 Ti 16GB (165W gaming per TPU). However, I caught the 4080S FE in stock, which TPU has at 302W gaming. That's still 122W less than my current card, so it's a move in the right direction. Maybe with a 5000-series refresh in a couple of years I can move down into the 250W range again. 424W is a lot; I can feel the heat just rolling out of the PC. Heh, TPU has the 4090 at "just" 346W gaming.
 
I don't see halo cards dropping out of that 450W range anytime soon (maybe even going up around 600W, though I doubt they'd set a card's TDP right at the limit of the power connector). But as multiple members have pointed out, and as I try to remind people every time the cringe "nuclear power station required" memes (and all the ones like them) start surfacing before a new gen drops:

  • If you have a wattage target, you will be able to adhere to it and get anywhere from a little to significantly more performance than the older gen at the same wattage, depending on what you're coming from.
  • The price across the whole lineup is by far the bigger concern.
  • As cards are pushed to their limits, effectively being sold nearly maxed out (hence the minimal OC potential compared to yesteryear), undervolting is the new overclocking.
Pretty sure that with any contemporary RDNA3 or Ada card (and a few gens before them), you can shave 5-30% off power use with little to no impact on framerates, or go even more drastic if the card outperforms your target. A mate with a 4090 runs his in the 300-350W range with typically zero performance loss across most titles. That said, Ada might be the first gen in a while where NV cards aren't all constantly bouncing off the board power limit at virtually all times.
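If you'd rather script a wattage target than fiddle with sliders, NVIDIA exposes the board power limit through NVML. Here's a minimal sketch using the nvidia-ml-py (pynvml) bindings; the 300W cap is just an example value, the voltage/frequency-curve undervolting discussed here isn't exposed through NVML, and the limit change needs admin rights:

```python
# Minimal power-cap sketch via nvidia-ml-py (pynvml).
# Assumes an NVIDIA GPU; setting the limit requires root/admin.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# NVML reports power in milliwatts
draw = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000
lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
print(f"drawing {draw:.0f} W, allowed limit range {lo // 1000}-{hi // 1000} W")

# Cap board power at 300 W (argument is in milliwatts);
# the driver rejects values outside the allowed range.
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, 300_000)

pynvml.nvmlShutdown()
```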
 
Undervolt or even underclock the 4090 a little and you have an incredibly efficient GPU. Even at 10% less performance than stock, the power used drops dramatically.

At the most efficient settings I have found, 2000MHz at 900mV, my 6750 XT is ~60% more efficient than stock and ~13% slower. If you want efficiency, undervolt and underclock.
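Those two ratios pin down the implied power draw. With efficiency defined as performance per watt, a quick check using only the numbers quoted above:

```python
# Back out the implied power saving from the quoted ratios.
# efficiency = performance / power, all relative to stock (1.0).
perf = 0.87         # ~13% slower than stock
eff = 1.60          # ~60% more efficient than stock

power = perf / eff  # implied relative board power
print(f"implied power: {power:.2f}x stock (~{(1 - power) * 100:.0f}% less)")
# -> implied power: 0.54x stock (~46% less)
```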
 
I could see 4080/4090-class GPUs staying up there in power draw, but I would expect the 4070 to be powerful while keeping a reasonable draw.
 
Undervolt or even underclock the 4090 a little and you have an incredibly efficient GPU. Even at 10% less performance than stock, the power used drops dramatically.

At the most efficient settings I have found, 2000MHz at 900mV, my 6750 XT is ~60% more efficient than stock and ~13% slower. If you want efficiency, undervolt and underclock.
Yep, I can confirm. I've got a 4090, and running it at 2565MHz @ 890mV with a memory overclock made the card not even 10% slower while using 35% less power. With a beefy cooler like all 4090s have, it's near silent even fully utilized... undervolting is fun.
Though I should say the 4090/Ada is already quite efficient when not running at full throttle; it's only at full load that undervolting really shines.
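Taking "not even 10% slower" as 0.90x performance and 35% less power as 0.65x, the perf-per-watt gain works out to:

```latex
\frac{\text{performance}}{\text{power}} = \frac{0.90}{0.65} \approx 1.38
```

i.e. roughly a 38% efficiency improvement over stock.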
 
Yes, since performance comes from power.

Ampere had to sacrifice the efficiency of the 250W top cards because RDNA 2 was too strong for them; that's why the 80-class card got the top-dog chip too, and the Samsung node was not that good. That's also the reason the 4090 is so fast: NVIDIA pretty much always does the same thing. If AMD was close in one generation, they go all out in the next. That's why Blackwell will be an extremely boring Turing 2.0 gen. The 5090 will be a 2080 Ti 2.0: barely faster, a bit more efficient, maybe RT better by a good margin, but who cares about that for the two games a year that really need it lol. (imo)

The latest gen is the most efficient.

Power draw ≠ efficiency.
That's just logical; the next gen is always more efficient, it's the nature of the beast. Otherwise it would mean the cards get slower and thirstier, which would be a death sentence for any GPU manufacturer.

Still, we had 250-260W at the top from NVIDIA for a long time, but now we are facing the limits of monolithic chip design. Just look at what the 4090 brings to the street in comparison to the 4080: going by specs it should be much faster than it is, but it isn't, because of physical limits.

MCM will fix that to a degree, but still, performance comes from power. They will need a completely new way to make GPUs if we want to keep the performance gains. And ray tracing hasn't even started yet; real RT will need 100 times more GPU power. Rasterization will be here for a very, very long time, decades still.

I could see 4080/4090-class GPUs staying up there in power draw, but I would expect the 4070 to be powerful while keeping a reasonable draw.
The 70-class cards all suck down 200-285W now; Pascal was at 150W, Turing at 175W. It's ridiculous, and the regular 4070 is a 4060 Ti at best, still at 200W.

Sure, they're efficient, but they also need a lot more power than before. Pascal was insane, and we will never see anything like it again: efficient, fast, a lot of VRAM, and very low power in absolute terms.
 
Look at the VSYNC 60 Hz results from reviews to find out what kind of power draw you're using for the same performance. Smaller dies naturally draw less power; the 4080 is actually the most efficient die here under heavier loads.
There is a caveat with these graphs. Cyberpunk 2077 at 1080p is still a very light load for something like an RTX 4090 or 7900 XTX. At full blast, both run the same thing at 3x the FPS. I bet they are CPU bound at that rate as well.

Smaller dies draw less power when they are in a very low power state. In this case, the problem is probably not even the die itself but the memory subsystem: the RTX 4090 has more memory, and more memory dies, than the RTX 4080, and the RX 7900 XTX/XT have the chiplet design to deal with on top of the extra memory dies.
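One way to read those capped-framerate numbers: at a fixed FPS, power divided by frame rate gives energy per frame, which makes cards directly comparable. A toy example (the wattages are placeholders, not TPU's measurements):

```python
# Energy per frame at a fixed 60 FPS cap: joules = watts / fps.
# The wattages below are illustrative placeholders, not measured data.
FPS_CAP = 60
cards = {"card A": 90.0, "card B": 130.0}  # board power in watts at the cap

for name, watts in cards.items():
    print(f"{name}: {watts / FPS_CAP:.2f} J per frame")
```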

Still, we had 250-260W at the top from NVIDIA for a long time, but now we are facing the limits of monolithic chip design. Just look at what the 4090 brings to the street in comparison to the 4080: going by specs it should be much faster than it is, but it isn't, because of physical limits.

MCM will fix that to a degree, but still, performance comes from power. They will need a completely new way to make GPUs if we want to keep the performance gains. And ray tracing hasn't even started yet; real RT will need 100 times more GPU power. Rasterization will be here for a very, very long time, decades still.
No, MCM will not fix efficiency or power usage problems; it will exacerbate them. MCM inherently brings in additional overhead that a monolithic chip does not have, and this is especially true for power.
Ray tracing is a whole different topic, mainly coming down to a chicken-and-egg problem. "Real RT" will need a standardized-enough API that can be hardware accelerated in more aspects, or with more units, than it is today.

Undervolt or even underclock the 4090 a little and you have an incredibly efficient GPU. Even at 10% less performance than stock, the power used drops dramatically.

At the most efficient settings I have found, 2000MHz at 900mV, my 6750 XT is ~60% more efficient than stock and ~13% slower. If you want efficiency, undervolt and underclock.
This.
Given that the GPU actually gets decent use, bigger is almost always better: undervolt it, limit the power; efficiency is on a curve.
The problem is the part where you might not want to pay $1,600 for a 250W RTX 4090 or $1,000 for a 250W RX 7900 XTX when you get more performance from the same GPUs at higher power limits.
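"Efficiency is on a curve" drops straight out of first-order CMOS behavior: dynamic power scales roughly with f·V² while performance scales roughly with f, so a modest underclock plus undervolt saves disproportionately more power. A sketch of that idealized model (it ignores leakage, memory power, and real V/f limits):

```python
# First-order model: dynamic power ~ f * V^2, performance ~ f.
# Idealized; real cards add leakage, memory power, and V/f floor limits.
def relative(f_ratio: float, v_ratio: float) -> tuple[float, float]:
    perf = f_ratio
    power = f_ratio * v_ratio ** 2
    return perf, power

perf, power = relative(f_ratio=0.90, v_ratio=0.90)  # -10% clock, -10% voltage
print(f"perf {perf:.2f}x, power {power:.2f}x of stock")
# -> perf 0.90x, power 0.73x: ~10% slower for ~27% less power
```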
 
There is a caveat with these graphs. Cyberpunk 2077 at 1080p is still a very light load for something like an RTX 4090 or 7900 XTX. At full blast, both run the same thing at 3x the FPS. I bet they are CPU bound at that rate as well.

More likely a lack of game optimisation.
Look, CPU load at 1080p stays at a very low 51%.

[screenshots: CPU and GPU usage readings at 1080p]
 
More likely a lack of game optimisation.
Look, CPU load at 1080p stays at a very low 51%.
Overall utilization is inherently a bad measure of what the CPU is doing. Even well-optimized games bottleneck on some CPU threads, especially at high FPS. Cyberpunk usually has 4-6 threads running at very high usage, say over 85%, and that will be the bottleneck. Optimization targets are usually not at high frame rates; it is much more important to get the 60-120 range right.
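You can see that distinction on your own machine: overall utilization can look modest while individual cores are pegged. A quick sketch using the psutil package (run it while the game is active):

```python
# Overall CPU% vs per-core CPU%: a saturated thread hides in the average.
# Requires the psutil package; sample while a game is running.
import psutil

total = psutil.cpu_percent(interval=1.0)
per_core = psutil.cpu_percent(interval=1.0, percpu=True)

print(f"overall: {total:.0f}%")
print("per core:", [f"{c:.0f}%" for c in per_core])
print(f"busiest core: {max(per_core):.0f}% <- where the bottleneck shows")
```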
 
If you want a big performance advance without a big node advance, this is what you get. There really isn't another way.
 
Take the 980 Ti and 4090 for example: a big node advance from 28nm to 5nm, 8B to 76.3B transistors, nearly tenfold in the same ~600mm² die size, and a power increase from 250 to 450W. For the 10-angstrom / 1nm node in 2030, we are looking at 650 to 850W if they manage to keep the leakage under control.
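Putting those same figures side by side shows where the efficiency went and where the power crept in (numbers taken straight from the comparison above):

```python
# Figures from the 980 Ti vs 4090 comparison above (same ~600 mm^2 die).
transistor_ratio = 76.3e9 / 8.0e9  # ~9.5x more transistors
power_ratio = 450 / 250            # 1.8x more board power

print(f"{transistor_ratio:.1f}x transistors at {power_ratio:.1f}x power")
print(f"-> {transistor_ratio / power_ratio:.1f}x transistors per watt")
# ~9.5x the transistors at 1.8x the power -> ~5.3x transistors per watt
```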
 
I don't get the complaints about power consumption for GPUs. Within a family of GPUs, all of them have about the same efficiency; yeah, some a bit more than others. You want more performance? It will cost you proportionately more power. You want less power? It will perform proportionately slower.

There's no problem with excessive power draw nowadays unless you're looking for a GPU using under 100W. Sticking to Nvidia, the 4090 delivers a freakton of performance and uses a similar amount of power as the 4060 per unit of performance delivered. Don't like the wattage? Don't buy it! Buy a 4060. Or a 4070 or 4080. You have choices. Use them.

Want even better efficiency? Buy a bit above the performance you need and undervolt and even underclock to find your GPU's max efficiency.

Does this logic not make sense to some people?
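If it helps, the "buy to your wattage budget" idea is trivial to make concrete. A sketch with made-up performance and power figures (placeholders, not TPU's data), picking the fastest card under a power cap:

```python
# Pick the fastest card under a wattage budget.
# The numbers are placeholders for illustration, not measurements.
cards = {
    "4060": (100, 130),  # (relative performance, gaming watts)
    "4070": (160, 200),
    "4080": (230, 300),
    "4090": (300, 420),
}

budget_w = 250
best = max((c for c, (_, w) in cards.items() if w <= budget_w),
           key=lambda c: cards[c][0])
print(f"best pick under {budget_w} W: {best}")  # -> 4070
```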
Weird how CPUs seem to give more and more performance, don't grow to mammoth sizes and weights that are on the verge of snapping your PCI slot out of its place, and do quite well with power efficiency, except for some Intel higher-end chips. So no, it makes no sense. I'm 1000% certain they could do better with efficiency, but getting to market and selling cards at the craziest prices ever seen is more important. I only blame the buyers for that. Influencers are pieces of shet for pushing it on people.
If there's one thing that bothers me, it's the price.
Yes. Prices have jumped to crazy, unreasonable levels, and I simply don't support them; I buy used. Most times I'm finding someone selling an item that still has some warranty.
 
I've undervolted my 3080 to 950mV @ 2000MHz and it uses about 190W when gaming, but I still feel bad. Maybe it's a good ruse to get the missus to buy me a 4070, eh? :)
 
Look at the VSYNC 60 Hz results from reviews to find out what kind of power draw you're using for the same performance. Smaller dies naturally draw less power; the 4080 is actually the most efficient die here under heavier loads.

[VSYNC 60 Hz power consumption chart]

I really wish w1z did vsync tests for 1440p as well, maybe even 4K for the higher end cards.
 
The 4070 is the best GPU you can get at 200W, though the 4070 Super is a better buy at only 230W; then you have the 7700 XT / 7800 XT at 230/250W.

Those are all max gaming loads, btw.

True. My ASUS GeForce RTX 4070 Dual OC peaks at 185W with +1200MHz on the memory and runs stable with no issues.

I also run my Ryzen 7 7700 non-X locked at 65W, so I believe it doesn't get more awesome than this for performance per watt in 1440p gaming ;)

I've undervolted my 3080 to 950mV @ 2000MHz and it uses about 190W when gaming, but I still feel bad. Maybe it's a good ruse to get the missus to buy me a 4070, eh? :)

I do like the bigger cards, even the RX 7900 XT I had, but it drew double the power of my RTX 4070, which is only missing 30% in relative performance, so I am really happy :)
 
Overall utilization is inherently a bad measure of what the CPU is doing.

Wrong. It is the best measure.

Cyberpunk usually has 4-6 threads running at very high usage, say over 85%, and that will be the bottleneck.

That's not "CPU bound". The CPU has 16 threads; if you want to load only four or six of them, that's your fault, and the game lacks optimisation.
You are CPU bound when the CPU is loaded at a full 100% and there is a significant performance drop.
 
True my Asus GeForce RTX 4070 Dual OC peaks at 185W with 1200MHz+ on mem runs stable no issues.
Did you check your memory temp? Mine goes over 80°C with an undervolt and no OC on the VRAM. Last summer, when I had over 30°C in my room, it got to 88-90°C. The ASUS Dual does not seem to have as good a solution for cooling the VRAM as other coolers do.
 
Weird how CPUs seem to give more and more performance, don't grow to mammoth sizes and weights that are on the verge of snapping your PCI slot out of its place, and do quite well with power efficiency, except for some Intel higher-end chips. So no, it makes no sense. I'm 1000% certain they could do better with efficiency, but getting to market and selling cards at the craziest prices ever seen is more important. I only blame the buyers for that. Influencers are pieces of shet for pushing it on people.

You conveniently ignore or dismiss the part where high power consumer CPUs also need very large air coolers or 360mm AIOs just to keep them from reaching 100°C. Wow, just like a GPU. Crazy. And these large GPU coolers keep their GPUs in the 65°C (edge temp) to 90°C (hotspot) range, much cooler than CPUs.

Show me in the TPU efficiency charts where the 4090, 4080 or even 7900s are not efficient. You can't:

[TPU energy efficiency chart]


The most energy-efficient GPUs are available now, as efficiency continues to improve with each generation. Stop feeding at the pig-slop trough of misinformation and do some analysis. Buy a lower-powered card like a 4060 Ti or 4070 Super if you don't need a 4090. That's how you get lower power and efficiency. These are not difficult concepts.

And if you need more efficiency, undervolt your card and find its efficiency sweet spot. You'll often get 25% less power for less than a 5% performance reduction.
 
You guys forgot about Fermi... running a couple of those is probably like running a stock 4090 lol.
 
You conveniently ignore or dismiss the part where high power consumer CPUs also need very large air coolers or 360mm AIOs just to keep them from reaching 100°C.
Nope. I've never once bought an AIO, I overclock every CPU I have, and I run heavy things with no heat issues, so I don't know what the hell you're talking about. Air coolers can only take up X space in a predetermined area, while GPUs are so ugly, huge, and heavy, and get wider and thicker over and over. You can't even have a cage for HDDs because the GPU just moves into that area of the case. Three slots, what's next, five slots? Let's start making motherboards just for a six-slot GPU so they can make them as ugly and as fast as possible. GPUs far exceeded air coolers in size over time.
 
You conveniently ignore or dismiss the part where high power consumer CPUs also need very large air coolers or 360mm AIOs just to keep them from reaching 100°C. Wow, just like a GPU. Crazy. And these large GPU coolers keep their GPUs in the 65°C (edge temp) to 90°C (hotspot) range, much cooler than CPUs.
This is not a fair comparison. Using a large cooler, either air or water, has much faster diminishing returns on a CPU than on a GPU.
  1. Modern consumer CPUs, especially AMD's, are designed to run up to 90°C unless you manually limit them. AMD's boost clock algorithm and PBO will run the CPU as fast as the cooling allows, so better cooling can increase your performance with Ryzen CPUs.
  2. Modern consumer CPUs are becoming so compact that it is challenging to transfer heat through the IHS to whatever cooler you have available. Run the CPU with direct-die cooling and temperature is a non-issue.
  3. GPUs are all direct-die cooled. A 4090 with a good cooler may run cooler than a 7800X3D, for example, but the 4090 is producing more than 3x as much heat. It is simply easier to manage heat with a GPU (some rough heat-density numbers below).
Otherwise, I agree with you that despite the high power usage, the 4090 has rather impressive performance per watt even without undervolting or power limiting it.
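Point 3 is really a heat-density argument, and rough numbers make it concrete. The die areas and power figures below are approximate ballpark values I'm assuming, not measurements:

```python
# Rough heat-density comparison: watts spread over die area.
# Die sizes and power figures are approximate, for illustration only.
parts = {
    "RTX 4090 (AD102, ~609 mm^2)": (450, 609),  # (watts, die area mm^2)
    "7800X3D (Zen 4 CCD, ~66 mm^2)": (85, 66),
}

for name, (watts, area) in parts.items():
    print(f"{name}: ~{watts / area:.2f} W/mm^2")

# The GPU makes several times the total heat, but at a lower W/mm^2
# (and not all board power even lands on the die), which is why a big
# direct-die cooler handles it so comfortably.
```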
 
More likely a lack of game optimisation.
Look, CPU load at 1080p stays at a very low 51%.

[screenshots: CPU and GPU usage readings at 1080p]

That's exactly what a CPU limit looks like: GPU usage at 59%. You don't need 100% CPU usage for a CPU limit, as no game ever uses 100% of your CPU.

Yes. Prices have jumped to crazy, unreasonable levels, and I simply don't support them; I buy used. Most times I'm finding someone selling an item that still has some warranty.
That's one way to do it. I still want the full warranty, so I just don't buy high-end anymore. I don't want my PC drawing a kilowatt just to play games anyway.
 
I'm looking for a GPU to learn editing, and the wattage is ridiculous. Having an 850W PSU is not the solution; those electricity bills are unnecessarily high. From the 6650 onward in the hierarchy, the TDP is crazy high, so your "more efficient" is not true. I'd consider 200W efficient for a GPU; anything over that, I'd say they just didn't give a rat's ass. And you can see it: the jumps in TDP aren't linear with past models.
Measuring power consumption alone really says nothing about efficiency.
You must factor in performance to make a judgment.

Let me walk you through my GPU history over the last 8-9 years.

2015
Radeon R9 390X, an AIB (MSI) card that was ~350W
2017
Radeon RX 580, 180W (-48%) and ~same performance
2020
Radeon RX 5700XT 225W (+25%) for +80% performance
2024
Radeon RX 7900XTX 315~460W (set to 366W, +63%) for +300% performance

Now comparing the R9 390X to the RX 7900 XTX over those 9 years: the power is about the same, but performance has increased by roughly a factor of six to seven (see the sketch below).
So that indicates a ~6-7x increase in efficiency by AMD, and NVIDIA is on a similar path, maybe a bit less (4-5x), because back then NVIDIA's GPUs were considerably more efficient than AMD's.
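Compounding the per-upgrade gains listed above is where that factor comes from; taking "+80%" as 1.8x and "+300%" as 4x at roughly flat power, the round numbers actually compound to a bit over 7x:

```python
# Compound the per-upgrade performance gains quoted above.
# 390X -> 580 was ~flat; "+80%" -> 1.8x; "+300%" -> 4.0x.
steps = [1.0, 1.8, 4.0]

total = 1.0
for s in steps:
    total *= s

print(f"~{total:.1f}x the R9 390X's performance at similar board power")
# -> ~7.2x, i.e. roughly a 7x gain in performance per watt
```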
 