
Will GPUs continue to have crazy TDPs?

Well, and then there were times when GPUs were passively cooled.

It's simple physics why we got to where we are.
I still have a passive 1050 Ti on my shelf that was a proper entry-to-mid-range gaming card 10 years ago. It's not simple physics that graphics card power consumption has grown out of proportion since then.
 
I was talking about far, far earlier than Pascal.

Yes it is, lol.
 
Decades earlier. I guess you don't get it. Also, that class is taken by APUs today.

Why? Because more FPS will require more power at some point, it's that simple. We are not on 28 nm or higher anymore; we are already hitting a wall with 5 nm and the 4090. Why? Because physics.
 
Decades earlier. I guess you don't get it. Also, that class is taken by APUs today.

Why? Because more FPS will require more power at some point, it's that simple. We are not on 28 nm or higher anymore; we are already hitting a wall with 5 nm and the 4090. Why? Because physics.
Pascal is not decades earlier! That's the point!

I'm not so sure if we're hitting a wall. Look at advancements in CPUs... how much performance we get for just under 100 Watts (either with a 7800X3D, or a finely tuned Intel chip).
 
Pascal is Maxwell on steroids. Ada is Ampere on steroids. Look at the core clocks and the speed differences when set to the same clock speed ;)

My point here is that Pascal was exclusively a gaming architecture, DX11-optimized to the fullest extent. An arch that is made purely for gaming will always be much more efficient. RDNA 2 was AMD's Pascal.

That is out of the window right now thanks to the AI garbage with both manufacturers, okay, let's say all three.

Also, GPUs now are much, much more complex than they used to be. You want more FPS? At some point that only works with more power. We have a saying here about engine displacement: there is no substitute for displacement except even more displacement. (I hope that's the right word for what I mean, since I'm not a native English speaker.)



I can prove it: look at the hardware spec difference between the 4080 and the 4090. Yet the 4090 is only 20-35% faster (in some RT titles up to 50%), but overall it's 30-35%. Now look at the huge difference in the specs; if we were not hitting a wall, the 4090 should be at least 70% faster than a 4080. Monolithic design is hitting a wall. Intel has been proving this with their CPUs for years now: they need to pump in power just to get a few percent more out of them.
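As a rough back-of-the-envelope check on that, using the published SM counts (the ~30-35% performance figure is my estimate from above, not a measurement):

```python
# How much of the 4090's extra hardware shows up as extra performance.
sm_4080 = 76      # RTX 4080 (AD103), streaming multiprocessors
sm_4090 = 128     # RTX 4090 (cut-down AD102), streaming multiprocessors

spec_gap = sm_4090 / sm_4080 - 1    # ~0.68 -> ~68% more SMs
perf_gap = 0.33                     # ~30-35% faster overall, per the estimate above

scaling = perf_gap / spec_gap       # fraction of extra hardware visible as FPS
print(f"Extra SMs: {spec_gap:.0%}, extra performance: {perf_gap:.0%}, scaling: {scaling:.0%}")
```

Roughly half of the extra hardware shows up as extra frames, so the scaling is clearly not linear.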

Ryzen is not monolithic, it's chiplet-based like RDNA 3. And NVIDIA's future GPUs will also be chiplets, for a good reason ;)

Ryzen is running circles around Intel right now, it's actually really embarrassing.
 
The stock 4070 Super I have was running 1.1 V and 220 W, which clocked in at around 2740 MHz. Undervolted to 0.95 V at 2700 MHz, it plays Watch Dogs: Legion with medium RT on ultra settings at 150-175 W.
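Those savings line up roughly with the classic V^2 * f rule of thumb. A minimal sketch (this ignores leakage, VRAM and board power, so it's only a ballpark):

```python
# Very rough dynamic-power scaling: P ~ C * V^2 * f (ignores leakage and board power).
def scaled_power(p_base, v_base, f_base, v_new, f_new):
    """Scale a baseline power figure by the V^2 * f relationship."""
    return p_base * (v_new / v_base) ** 2 * (f_new / f_base)

# Figures from the post above: 220 W at 1.1 V / 2740 MHz, undervolted to 0.95 V / 2700 MHz.
estimate = scaled_power(220, 1.10, 2740, 0.95, 2700)
print(f"Estimated power after undervolt: {estimate:.0f} W")  # ~162 W, close to the observed 150-175 W
```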
 
My point here is that Pascal was exclusively a gaming architecture, DX11-optimized to the fullest extent. An arch that is made purely for gaming will always be much more efficient. RDNA 2 was AMD's Pascal.

That is out of the window right now thanks to the AI garbage with both manufacturers, okay, let's say all three.
So it's not "just physics", then? ;)

I agree with this notion by the way - the AI plague should leave gaming alone, imo.
 
It still is physics; in the end it all comes down to this. So yes, it really is. You won't change natural laws, and no one ever will.

Without AI we would have no NVIDIA GPUs for gamers right now, lol. Ada GPUs are 100% AI. Gaming GPUs are just the crap chips that couldn't make the cut as AI chips selling for 10k plus.

But yes, dedicated gaming GPUs would be the best, but that's wishful thinking.

So yes, unless they find a completely new way of making GPUs, we will never come down with the watts again. Unless you would like to have the same performance as now, then they can bring that down, but they will never release a next gen with the same power as before (which they actually already do: 3060 Ti to 4060 Ti and RX 6800 XT to 7800 XT). That's how close we are to the wall.

But expect nice stuff from MCM-design GPUs once they figure it out on GPUs, just not on the TDP front. FPS will explode when they put two 140-SM chips on one, but damn, that 1000-watt consumption with the tech of today, lol. That thing would run Cyberpunk 2077 path tracing in native 4K with no issues.

The issue remains the x86 platform; it's extremely inefficient. Look at the M chips from Apple, they are years and years ahead. But since Windows dominates the world with x86, I see no change there in the near future.

ARM/Apple: if they made dedicated gaming GPUs with their tech, we would have a 4090 at 250 watts, but still, everything for devs is on x86, so you see the issue. Very, very simplified.
 
It seems out of hand. Let's not focus on the monstrous sizes they have become, and the sag, but why aren't they doing anything about their TDP? 300-450 W for a GPU is out of hand. CPUs have stayed pretty stable with theirs; maybe the higher-end Intel chips have gotten high, but they are just a few. If CPUs can stay pretty low, why are GPUs getting higher and higher? The jumps are quite large for the higher-end ones. Will this trend continue more and more? At the pace we're at now, we'll be well over 600 W for a card in the next few years. And with electricity prices jumping, that's paying for the card twice over its lifetime, or a good chunk of it. Seems like they put no effort into their efficiency. Hell, I remember a time when I thought 180 W was a lot for a GPU.
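Rough numbers on the electricity point, with assumed usage and prices (every input below is an assumption for illustration, not anyone's actual bill):

```python
# Back-of-the-envelope running cost of a high-TDP card.
# All inputs are assumptions for illustration.
gpu_power_w   = 400      # average draw while gaming
hours_per_day = 4
price_per_kwh = 0.40     # high-ish European tariff, in EUR
years         = 5
card_price    = 900      # hypothetical card price, in EUR

energy_kwh = gpu_power_w / 1000 * hours_per_day * 365 * years
cost = energy_kwh * price_per_kwh
print(f"{energy_kwh:.0f} kWh over {years} years -> ~{cost:.0f} EUR "
      f"({cost / card_price:.1f}x the card price)")
```

With these numbers it comes to roughly 1.3x the card price; with higher tariffs or more hours you do approach "paying for the card twice".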
Eventually a breakthrough will be made and a gen will come out that's much more efficient. When that happens, no idea.

The problem is that performance sells more than efficiency, so for the TDP to come down, they need to be able to get the extra generational performance whilst having enough of an efficiency boost to also reduce power consumption at the same time.
 
To address the thread title: Yes, and no.

We're currently hitting a "thermal density" barrier.
Exemplified in the R&D and patenting of integral nano-peltier devices, to (inefficiently) pump heat from active/dense parts to inactive/low-logic-density portions of the die.
This is also why we've seen Industry Alliances in developing MCM designs.

Absolutely, we will see higher TDPs. However, things will get 'strange' rather than 'linear'.
(In terms of bigger coolers and more capable power-carrying interfaces)

Performance/W efficiency is continuing to improve, but that gained efficiency is 'eaten' in pushing Net Performance higher. The Thermal Density Issue is pushing us towards active thermal management at the lithographic level, with MCM spreading that heat over more surface/mass.

As far as what I mean by 'strange':
Looking at CPUs, Dynamic Clocking and Active Power Management for subcomponents of the die(s) have allowed overall performance and peak clocks that were previously impossible.
Yet, when pushed to their 'fullest', those devices need extraordinary power and cooling.

Another (personal favorite) is something like Vega 10.
Full-Fat Vega 10 (Vega 64/FE/WX9100/MI25) was known as a 'hot and hungry' GPU.
But... That's only when clocked towards the limits of the architecture. Yet, Vega in Zen APUs 'sips power', with the last revisions managing to perform faster overall, with less actual hardware.
Speaking from experience, the 'biggest' 'fattest' 'hottest' GPUs will "Sip Juice, and Chill Out" when drastically underclocked.

Considering the ever-growing (and highly profitable) computing power needs of Big Data, I can only assume that we will continue to see TDPs increase, w/in the 'scope' of what's practicable and profitable.
(Don't forget, Submersion Cooling isn't marketed to Enthusiasts, because of extremely broad IPs, held by those that near-exclusively serve Big Data)
 
Well, and then there were times when GPUs were passively cooled.

It's simple physics why we got to where we are.
Because back then games were simple too... a bunch of squares and cubes fooling around.

You can't have this with a passively cooled GPU today. You need at least 200-250 W.

(Far Cry 6 screenshots)

And the next-gen games (every gen) will always "demand" more computing power.
 
triangles...
I meant the final shape of objects, but yeah, all of them are triangles and vertices, just going from thousands to many, many millions.

Let alone the many applications applied afterwards on top of the geometry.
 
Pretty sure with any contemporary RDNA3 or Ada card (and a few gens before them) you can likely shave 5-30% power use off with little to no impact on framerates.
To provide some numbers for this statement: with the stock 200W power limit, my RTX 3060 Ti LHR averaged 26.2 FPS in Unigine Superposition (1440p ultra). After raising the power limit to 220W via a VBIOS flash, it averaged 27.2 FPS. Sample size is roughly 18 test passes at each power target. In both cases, the card was throttling against its power limit.

If you bought a fancier 3060 Ti, it would have come out of the box with the higher 220W power limit. If you have one of those cards, based on my data, you could shave 10% off power consumption with only a 3% performance loss, at most - and that's without touching VFC.
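Running the same numbers as performance per watt (just arithmetic on the figures above):

```python
# Perf-per-watt from the Superposition runs above (1440p Ultra, RTX 3060 Ti LHR).
runs = {200: 26.2, 220: 27.2}   # power limit (W) -> average FPS

base_w, base_fps = 200, runs[200]
for watts, fps in runs.items():
    print(f"{watts} W: {fps:.1f} FPS, {fps / watts:.3f} FPS/W, "
          f"{watts / base_w - 1:+.0%} power for {fps / base_fps - 1:+.1%} FPS")
```

So the last 10% of power buys under 4% of performance in this workload, which is exactly the diminishing-returns region being discussed.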
 
It's true that almost every CPU/GPU today is pushed to the edge (by default), beyond its sweet spot on the power/efficiency curve, for the sake of competition.
 
Sure, that's a fair point and that's why aftermarket coolers exist. My point was that in the past, single slot coolers were sufficient, not required, for high end cards, and these days we're having to use dual slot coolers even on midrange parts.
Aftermarket coolers were always really overpriced. I got a very cheap 6750 XT with MSI's lowest-end cooler. The cooling is sufficient but not overkill. I looked at the prices for modern aftermarket coolers and they were ~$150. If I wanted to spend that kind of money, I could have bought a 6900 XT with a much better cooler.

Way back in the day with my 8800 GT, I preferred bigger coolers back then too. The single-slot cooler was sufficient, sure, but it was not quiet.
Remember that more and more people want higher frame rates too. In some games I am using less power than my old 390X, which was at 1080p and not 4K.

(image attachment)
Is that a backpack or a 1 bedroom apartment on your back? :roll:
 
Because back then games were simple too... a bunch of squares and cubes fooling around.

You can't have this with a passively cooled GPU today. You need at least 200-250 W.

(Far Cry 6 screenshots)

And the next-gen games (every gen) will always "demand" more computing power.
What?! No. You can run this fine on half the TDP. Much like GPU performance/power, high and ultra settings are heavily in diminishing-returns territory. If you just want baseline performance, you can shave half or even more off your power usage with ease. After 15 minutes you'll probably forget what settings you are actually using.

And then there is FPS - another area of diminishing returns. We just default into 'needing' this because we can, but it's complete bullshit. You don't need any of this and it frankly doesn't make a lot of sense. We just want it.

Take note of those Steam Deck numbers. Less than 10 W running a game.
 
What?! No. You can run this fine on half the TDP. Much like GPU performance/power, high and ultra settings are heavily in diminishing-returns territory. If you just want baseline performance, you can shave half or even more off your power usage with ease. After 15 minutes you'll probably forget what settings you are actually using.

And then there is FPS - another area of diminishing returns. We just default into 'needing' this because we can, but it's complete bullshit. You don't need any of this and it frankly doesn't make a lot of sense. We just want it.
Yeah, I do not disagree at all with what you're saying.
I was specifically talking about max settings and 60-65 FPS.

At 3440x1440 rendered at x1.5 that image was
 
It is also about yields. You're always going to have imperfections in silicon that render dies either DOA or defective.

Imagine what the yield would be if you had to make monolithic versions of the current EPYC dies that are 100% working. It also means it's far easier to bin individual parts, as dies that don't reach, say, 7950X turbo speeds may be fine for EPYC due to the lower intended clocks.

That flexibility is a great boon when you are a foundry customer, as the number of dies you can effectively harvest per wafer is so much higher vs. monolithic dies. Has anyone noticed how they haven't had to pair up two quad-core CCDs to make up 8-core parts for Ryzen, etc.? I suspect this is because they have been able to get decent enough yields with this approach not to require it, as well as having enough demand in the EPYC lineup to use those dies there.


You can also see this movement with Intel with their tile-based approach and Foveros technology. I look forward to the possibility of Intel being able to put the I/O die below the higher-heat-output cores, hopefully meaning we don't see socket sizes following the same size increases we have seen with GPUs over the years. (Look at Threadripper Pro boards and the sizes we are looking at already.)
The dies are so small for Ryzen that even 6-core SKUs are only there to satisfy demand. With the reported TSMC defect rates, a Ryzen CCD should have worst-case yields of around 94%.
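For anyone curious where a figure like that comes from, a minimal Poisson yield sketch (the defect density and die areas are my own ballpark assumptions, roughly in line with reported TSMC 5nm-class figures and Zen CCD sizes, not anything official):

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
defect_density = 0.09      # defects per cm^2, ballpark for a mature 5nm-class node (assumption)
ccd_area_cm2   = 0.71      # ~71 mm^2, roughly a Zen 4 CCD (assumption)
big_die_cm2    = 6.0       # ~600 mm^2, a large monolithic GPU die for comparison (assumption)

ccd_yield = math.exp(-defect_density * ccd_area_cm2)
big_yield = math.exp(-defect_density * big_die_cm2)

print(f"Small chiplet yield: {ccd_yield:.1%}")        # ~94%
print(f"600 mm^2 monolithic yield: {big_yield:.1%}")  # ~58%
```

That gap is exactly why harvesting small chiplets is so attractive compared to one big die.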
 
Yeah, I do not disagree at all with what you're saying.
I was specifically talking about max settings and 60-65 FPS.
People also speak of 8K. It won't stop and it's never enough, but it only gets more retarded going forward. The current idiot on the block is RT. Burning watts, halving FPS, and inflating prices for extremely minor changes. The vast majority can't even tell the difference and might as well be told a raster image has RT lighting.
 
People also speak of 8K. It won't stop and it's never enough, but it only gets more retarded going forward. The current idiot on the block is RT. Burning watts, halving FPS, and inflating prices for extremely minor changes.
A fact (IMO).
Changes will be minor going forward, along with great computational demand, for anyone who wants the highest visuals.

Personally, I like having them, and I don't really have a problem with DLSS/FSR upscaling tech either, as long as it's not messing too much with the visuals.
 
No, I think there are physical limitations. Around 250 W is the upper limit for sane high-end users, while extreme overclockers can go far beyond that, at their own responsibility regarding PC case choice, cooling solutions, noise, electricity bills, etc.

Look at the history. The Rage 128 Pro was an 8 W card.
Today the Radeon RX 7900 XTX is a whopping, crazy, abnormal 355 W.

Radeon RX 7900 XTX 355W TBP 2022
Radeon RX 6900 XT 300W
Radeon RX 5700 XT 225W
Radeon VII 300W
Radeon RX Vega 64 295W
Radeon RX 580 185W
Radeon RX 480 150W
Radeon Pro Duo 350W
Radeon R9 Fury X 275W
Radeon R9 390X 275W
Radeon R9 295X2 500W
Radeon R9 290X 250W
Radeon HD 7990 375W
Radeon HD 7970 250W
Radeon HD 6990 375W
Radeon HD 6970 250W
Radeon HD 5970 294W
Radeon HD 5870 188W 2010
Radeon HD 4870 X2 286W
Radeon HD 4870 150W
Radeon HD 3870 X2 165W
Radeon HD 3870 106W
Radeon HD 2900 XT 215W
Radeon X1950 XTX 125W
Radeon X850 XT 69W
Radeon X800 XT 54W
Radeon 9800 XT 60W
Radeon 7500 23W
Rage Fury MAXX 13W
Rage 128 Ultra 8W
Rage 128 Pro 8W 1999

This is a good reference, but I can't help but wonder how many tens of thousands of times the performance of the hardware has improved since the Rage 128. Even if you disregard the functionality, that's quite fascinating, really.
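Just on the power side of that list, the growth is easy to put a number on (performance per watt that far back is much harder to pin down):

```python
# Flagship board power growth from the list above: Rage 128 Pro (1999) -> RX 7900 XTX (2022).
start_w, end_w, years = 8, 355, 2022 - 1999

growth = end_w / start_w             # ~44x more board power
cagr = growth ** (1 / years) - 1     # compound annual growth rate
print(f"{growth:.0f}x over {years} years, ~{cagr:.1%} per year")
```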
 
A fact (IMO).
Changes will be minor going forward, along with great computational demand, for anyone who wants the highest visuals.

Personally, I like having them, and I don't really have a problem with DLSS/FSR upscaling tech either, as long as it's not messing too much with the visuals.
But they do mess with visuals; every game today is a blurry mess, slow-as-molasses gameplay included. Every single game today that pushes hard on graphics is a high-latency, 30-FPS-optimized POS. You mentioned FC6... it's a good example. The game is slow AF. 'Cinematic experience' they call it. Lol... all I see is a race to the bottom of good gameplay.

We have FG... it's not palatable without another tech to heavily reduce the latency hit. Do you need more writing on the wall?

I play a lot of games, new and old, and every time I compare, I'm struck by the infinite amount of nonsense in new games and often dumbfounded by the abysmal performance for what's on screen. This hits even harder if you take a long look at older games and how little they differ, yet still run 3-4x faster.

Imho, we aren't progressing much anymore.
 
Find the difference.
It's small, but the performance hit is ~equally small.
It's an addition nevertheless.

(comparison screenshots)

But they do mess with visuals; every game today is a blurry mess, slow-as-molasses gameplay included. Every single game today that pushes hard on graphics is a high-latency, 30-FPS-optimized POS. You mentioned FC6... it's a good example. The game is slow AF. 'Cinematic experience' they call it. Lol... all I see is a race to the bottom of good gameplay.

We have FG... it's not palatable without another tech to heavily reduce the latency hit. Do you need more writing on the wall?

I play a lot of games, new and old, and every time I compare, I'm struck by the infinite amount of nonsense in new games and often dumbfounded by the abysmal performance for what's on screen. This hits even harder if you take a long look at older games and how little they differ, yet still run 3-4x faster.

Imho, we aren't progressing much anymore.
FC6 is not very demanding anyway...
Cinematic? You can't have that when your in-game AI behaves like people from a mental institution... lol (especially when driving cars).

I've run the CP2077 benchmark (3440x1440) a few times trying different settings, and I find it much more than OK with non-Ultra settings, RT medium (non-path) and FSR 2.1 Quality.
 