
4080 vs 7900XTX power consumption - Optimum Tech

So back then I was really into COD, and at that time it was World at War... Everything had a weird white outline around it, and the game looked very cartoony. I tried multiple Windows installs, drivers... nothing fixed it. It didn't help that it was pretty much the only game I played at the time. It sucked compared to my 295 that was in RMA with XFX :D

And then the overclocking. It did 1 GHz core no problem, but if you so much as breathed on the memory clock, even by 1 MHz, the screen would start blinking like mad.

I actually gave that card away to a kid who lived in a van at OC Forums :kookoo:

I only played COD on consoles back then... actually, in the 2000s I really only played ARPGs and RTS games on PC, and thankfully I don't remember having many issues. I did play Crysis and Crysis 2 and remember some weird stuff with vsync etc., but that's about it in general.
 
It's disappointing that somehow there are notions of whataboutism, open attacks on a staff member, and even crapping on Nvidia buyers??? Reading between the lines there is hilariously disturbing, and so off topic, but some people can't help themselves... :shadedshu:

Probably time to close this one off, Mods, unless anyone can think of something on-topic and constructive to be gained from keeping it open. All the rational takes have already been stated.
 
If you go further back in history, AMD was more efficient. That title goes back and forth.

The last time I remember AMD having an uncontested lead in performance per watt was with the HD 5000 series, over 14 years ago. They also beat Nvidia to the punch on DX11: GeForce wouldn't support it at all for the first six months of Windows 7's lifetime, and DX10.1 support was pretty much an AMD exclusive through all of its (ir)relevancy, as few developers ever bothered with it since Nvidia hardware couldn't do it at all, and by the time it was widely available, the cards supported the much better DX11 codepath anyway. That early implementation has implications that remain true even today, such as their driver still not supporting command lists/deferred contexts, which Microsoft considered optional and AMD opted not to implement.

Even then, Cypress had a limitation in its very poor tessellation performance, caused by low triangle throughput compared to Fermi, and the drivers were a mess through most of its lifecycle. Owners of the HD 5970 such as myself had to deal with pure asinine garbage like negative scaling and bugs, because some genius at AMD decided that forcing CrossFire on in the drivers with no way to switch it off was a great idea. By the time they allowed this without a direct registry edit, it was too late. Rookie as I was at the time, registry tweaks were simply not something I dared touch, because I just didn't understand them.

Lazy game development and high-end games such as Crysis 2 targeting Fermi hardware primarily (the GTX 580 was much better than the rest of the cards of its time) while not bothering to handle occlusion well (particularly as Fermi could manage the geometry) drew more than a few accusations of sabotage by Nvidia at the time. Even the original GF100 could handle twice as much geometry as Cypress back then. That isn't to say AMD is lazy or a bunch of incompetent buffoons, because they aren't - they're pioneers, even. Their problem is of a managerial nature: for example, the RV770 used in the HD 4870 already had a programmable tessellator, but as it wasn't conformant with the DX11 shader model 5.0 spec, it went largely unused at the time. I honestly and sincerely believe that the problem with Radeon is corporate in nature - it's the company culture and the boomers at the helm who just can't keep up with the industry any longer.

This would repeat again with the infamous "Gimpworks" from Nvidia, which could largely be attributed to the AMD driver's low instancing performance: it was restricted to the immediate context, and the CPUs of the time simply didn't have the IPC and frequency to muscle through 200,000+ individual hair strands on top of the game's usual requirements. When Nvidia ran into the instancing wall with Fermi hardware, they simply disabled it on Fermi and removed the hardware command scheduler altogether beginning with Kepler, which meant the driver handled scheduling instead - and in fact it still does, with deferred contexts, and it can even defer commands from an originally immediate context, sacrificing what is now ample CPU power to maximize GPU performance. This is the true reason why AMD cards have their long-standing (and founded) reputation of performing better with low-end processors that have few threads available.
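For anyone curious, the command list/deferred context support mentioned above is something an application can actually query at runtime. Here's a minimal sketch of my own (not from this thread) using ID3D11Device::CheckFeatureSupport with D3D11_FEATURE_THREADING; on drivers that report no native support, the D3D11 runtime emulates command lists on the immediate context instead. Link against d3d11.lib.

// Minimal sketch: ask the D3D11 driver whether it supports native
// command lists / concurrent resource creation.
#include <windows.h>
#include <d3d11.h>
#include <cstdio>

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, &context)))
        return 1;

    D3D11_FEATURE_DATA_THREADING caps = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                              &caps, sizeof(caps)))) {
        std::printf("Driver command lists:      %s\n",
                    caps.DriverCommandLists ? "yes" : "no (runtime-emulated)");
        std::printf("Driver concurrent creates: %s\n",
                    caps.DriverConcurrentCreates ? "yes" : "no");
    }
    context->Release();
    device->Release();
    return 0;
}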

With the longevity of the current generations, software innovations will make or break the user experience, particularly as hardware has grown quite powerful and games have not seen an extreme increase in fidelity since the PS4/Xbox One days; most of the power of these new consoles is spent sugar-coating the graphics with fancy ray-traced illumination and high-resolution textures. The latter has never been a problem for a powerful desktop GPU, and you can disable the former in most cases. Nvidia understands this, and that is why they zealously and jealously guard and segment their star features such as DLSS 3 as an incentive for people to upgrade. AMD, on the other hand, is still struggling with high idle power, TDRs while playing videos, and the occasional completely botched driver release... and FSR 3 is delayed/still a no-show.
 
If you go further back in history, AMD was more efficient. That title goes back and forth.
Yeah, but I was talking about efficiency with frame caps, v-sync etc. It seems Nvidia is better at running cards efficiently at partial load than AMD, and has been for several generations now.
 
Yeah, but I was talking about efficiency with frame caps, v-sync etc. It seems Nvidia is better at running cards efficiently at partial load than AMD, and has been for several generations now.
Yeah this is what I'm interested in. I spoke with W1z about adding more Vsync/framecap testing to future GPU reviews. It's possible we could add 120 FPS and maybe even 240 FPS in the Vsync testing.

This is very relevant to me, as I cap frames at 237 FPS for input latency, efficiency, and lower stress on hardware; the diminishing returns in overall frame latency from rendering more frames, some of which aren't even displayed, aren't worth it. I know a lot of other people run at a set FPS as well, so knowing how well architectures scale when neither idle nor at full load is relevant data.

Maybe competitor X is 90% as efficient at full load as competitor Y, but if that drops to 60%, for example, at less than full load, say 70% load, then that's a major difference. Especially as many game engine/resolution combinations will not take full advantage of a GPU's horsepower, or the user may be CPU limited.
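To make the math concrete, here's a tiny illustration of that scenario (all numbers below are invented for the example, not measured): card Y only looks ~10% better at full load, but if it sheds power much more aggressively under a frame cap, the gap in frames per watt becomes huge.

// Hypothetical numbers only - illustrating why partial-load efficiency can
// matter more than the full-load figure quoted in reviews.
#include <cstdio>

int main() {
    // Uncapped / full load
    double fps_x = 100.0, watts_x = 300.0;   // competitor X
    double fps_y = 110.0, watts_y = 300.0;   // competitor Y
    std::printf("full load : X = %.2f f/W, Y = %.2f f/W\n",
                fps_x / watts_x, fps_y / watts_y);

    // Both capped to 90 FPS (partial load); assume X sheds far less power
    double cap = 90.0;
    double watts_x_cap = 250.0, watts_y_cap = 170.0;
    std::printf("90 FPS cap: X = %.2f f/W, Y = %.2f f/W\n",
                cap / watts_x_cap, cap / watts_y_cap);
    return 0;
}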

 
I always cap my fps 5 FPS below the monitor's refresh rate. There is literally no reason to run it past the refresh rate... smoothness can only equate to the frames the monitor can actually deliver. I have never understood people who leave frames uncapped - maybe for e-sports, I guess, but that's not my thing.
 
I always cap my fps 5 FPS below the monitor's refresh rate. There is literally no reason to run it past the refresh rate... smoothness can only equate to the frames the monitor can actually deliver. I have never understood people who leave frames uncapped - maybe for e-sports, I guess, but that's not my thing.
There are some minor frame-latency improvements from running uncapped, but you run the risk of input latency issues, especially as you'll further stress the CPU, and at esports/competitive-grade framerates you'll want to be running a 4 kHz or 8 kHz polling mouse too, which adds even more CPU load.
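For reference, the kind of frame cap being discussed here boils down to something like the sketch below (a simplified illustration, not how RTSS or the driver-level limiters actually work internally); the 237 FPS figure is just borrowed from the earlier post.

// Simplified frame limiter: pace frames to a fixed budget so the GPU never
// renders faster than the cap. Real limiters use tighter busy-wait pacing
// for lower jitter.
#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const double cap_fps = 237.0;  // example cap from the post above
    const auto frame_budget = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / cap_fps));

    auto next_frame = clock::now() + frame_budget;
    for (int frame = 0; frame < 1000; ++frame) {
        // renderFrame();  // game update + render work would go here
        std::this_thread::sleep_until(next_frame);  // wait out the remaining budget
        next_frame += frame_budget;
    }
    return 0;
}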
 
The last time I remember AMD having an uncontested lead in performance per watt was with the HD 5000 series, over 14 years ago. [...] AMD, on the other hand, is still struggling with high idle power, TDRs while playing videos, and the occasional completely botched driver release... and FSR 3 is delayed/still a no-show.
You had me until the high idle power comment. I have not seen a botched driver in about 4 years, and I have been using AMD since the original 6800 1 GB card. I do feel that FSR 3 will come sooner than we think.
 
So back then I was really into COD, and at that time it was World at War... [...] I actually gave that card away to a kid who lived in a van at OC Forums :kookoo:
Out of curiosity, did you try to RMA that 4890?
 
You had me until the high idle power comment. I have not seen a botched driver in about 4 years, and I have been using AMD since the original 6800 1 GB card. I do feel that FSR 3 will come sooner than we think.

He is not wrong though, but what he likely means about the drivers being "botched" is mostly down to new hardware (and major software changes) at their respective timeframes. The 5700 XT in 2019 was not a fun experience (@INSTG8R knows this from my bug reports lol), but that's because they had just switched to Adrenalin and the 5700 XT's RDNA1 was a new architecture at the time. I felt it was 100% good around April 2020 (because they fixed the high memory clocks issue at idle and the control panel wasn't freezing randomly anymore). The 6000 series release was not that bad, but the initial driver (December 2020) had the idle clocks issue again; it was worked on in the next driver and addressed in the one after (I remember it being good around March 2021). The 7000 series, as you can see in reviews, also had the idle clocks issue, which was only resolved recently (along with the 6-month VR issue).

But the RX 480 in 2016 (along with the RX 580/590, though I have not used those GPUs myself) and the Radeon VII in 2019 I consider their best-balanced cards - not only because the VII was a proper prosumer card with working compute, but because the RX 480/580 were $200 monsters that could compete with the GTX 1060 and provided good 1080p value compared to the GTX 1070. The drivers were not bad (as in, there were no noticeable issues) during their time either.
 
He is not wrong though, but what he likely means about the drivers being "botched" is mostly down to new hardware (and major software changes) at their respective timeframes. [...] The drivers were not bad (as in, there were no noticeable issues) during their time either.
Thanks for some real context. There is no such thing as a 100% foolproof GPU. I totally enjoyed the 470/580 cards, as CrossFire was at the driver level so it just worked flawlessly, and you are right about the pricing. For me, where AMD lost was on price. When Vega launched, mining pushed the prices into high-end territory, and even though they were good cards, they were not worth $1000. Then the 5000 series was meh for me, as the only thing better than Vega was power draw, so I guess by the time I bought one of those (5600 XT) the problems were resolved. The thing with the 6000 series was that (for me) the 6800 XT was as fast as two Vega 64s in CrossFire. I certainly wanted a Radeon VII, but I held out, as you can see. I never experienced monthly drivers from AMD like I did with the 6000 series. Now the 7000 series is here and I can ray trace as fast as a 3090 and blow it away in regular gaming.

I was watching a Level One video this morning and it proved something to me. It was about running programs designed to run CUDA exclusively, and how AMD Instinct cards are able to do that without any CUDA cores, to the tune of 85%. That is not the point though. Just like with CPUs, AMD does not ignore the negative narrative but actively works to defeat it. The thread is about power consumption, though, so I will use the 6800 XT as an example. When it launched it could pull up to 330 W, but by the time the driver stack had matured, the 6800 XT did not consume more than 255 W under normal use. If I were really concerned about power draw and just wanted to game, a 4080 or 7900 XTX would make absolutely no sense to buy. You could buy a 4060 or a 6700 XT and not have to buy a 1000 W PSU. Guess what? I bought a 1000 W PSU. It is even the one that has no pigtails for the PCIe plugs, so every 8-pin is connected directly to the power supply. To be honest I bought that (HX1200i, RIP) for the 7000 series, as I have no issue with my GPU pulling 342 W from the wall for high-refresh 4K gaming on a 144 Hz panel. It can only be butter, and you can call me a fanboy all you want, but AMD is all I am going to use.
 
I know Sapphire is a good brand, but their current Pulse and Nitro+ variants are too big for my use case, albeit the Pulse can fit but it has that extra PCI-E power connector which is disappointing to me since I'm trying to minimize cable jank. I would've stayed with the PowerColor Hellhound, but the coil whine was actually noticeable, and this was from two cards (One newly bought and another new replacement, also due to the coil whine.)
I think that you just got unlucky, because I hadn't previously heard of any coil whine issues specific to the Hellhound (or any other card for that matter), and PowerColor sells thousands of those cards. To be perfectly honest, coil whine is completely unpredictable. It can happen to any card regardless of team or brand, without any pattern that I've ever been able to follow. To make things even more frustrating, sometimes a card won't have it but will develop it over time, and sometimes a card will have it but then one day it won't anymore. Then the one that had it go away might get it back, and the one where it appeared might have it go away...

It's literally a random roll of the dice as to whether or not you get coil whine and/or whether or not it stays or goes away or comes back again.

Yeah, it sounds insane but that's how it is. The only thing that you can be sure of is that you can limit it by using Radeon Chill or VSync.
 
Yeah, but I was talking about efficiency with frame caps, v-sync etc. It seems Nvidia is better at running cards efficiently at partial load than AMD, and has been for several generations now.
At this moment in time, that is true. I don't think anyone disputes that for the current and most recent generations. I was simply pointing out that it has not always been, and will not always be, true.
 
The last time I remember AMD having an uncontested lead in performance per watt was with the HD 5000 series over 14 years ago.
Umm, I don't know how to say this without making you look a bit ridiculous but....
Power.png

It's not huge, but it's definitely an uncontested lead and it's also pretty recent. No biggie though, I had the same attitude towards it then that I do today.... "Who the hell cares?"

If people truly cared about efficiency and weren't just paying it empty lip-service, then nobody would've bought any of those insane Raptor Lake CPUs, but they did. There's nothing more ludicrous than someone complaining about the power use of a Radeon card while their rig is sporting an i9-13900K/S or i7-13700K with a 360 mm AIO. :roll:
 
Umm, I don't know how to say this without making you look a bit ridiculous but....
Power.png

It's not huge, but it's definitely an uncontested lead and it's also pretty recent. [...] There's nothing more ludicrous than someone complaining about the power use of a Radeon card while their rig is sporting an i9-13900K/S or i7-13700K with a 360 mm AIO. :roll:

I wasn't complaining, and that's a single game (a sample size of one) on a GPU that simply isn't as powerful as the ones you're comparing it to. That's a situational, small lead due to a more stringent power limit, in a game known to make very good use of the architecture. Break out the 6950 XT and let's see that not-so-grand accomplishment evaporate.

IMHO you measure efficiency from a normalized performance standpoint across a large sample size. A decisive lead? The Radeon HD 5970 (dual fully enabled Cypress XT - two cores, each the equivalent of an HD 5870) used less power than a single GTX 480 on average. That's what I mean by "decisive lead".
 
...that's a single game (a sample size of one) on a GPU that simply isn't as powerful as the ones you're comparing it to...

...you measure efficiency from a normalized performance standpoint across a large sample size....
Quoted for truth. I never liked HUB's power testing for this reason - a sample size of one game. For RDNA2 vs Ampere it's so close I'd call it a tie, yet that makes it plainly obvious that Ampere was an efficient architecture on an inefficient node.
 
I think that you just got unlucky, because I hadn't previously heard of any coil whine issues specific to the Hellhound (or any other card for that matter)... [...] The only thing that you can be sure of is that you can limit it by using Radeon Chill or VSync.

There is a thread about fixing inductor resonance (coil whine) on this page

Umm, I don't know how to say this without making you look a bit ridiculous but.... [...] If people truly cared about efficiency and weren't just paying it empty lip-service, then nobody would've bought any of those insane Raptor Lake CPUs, but they did. There's nothing more ludicrous than someone complaining about the power use of a Radeon card while their rig is sporting an i9-13900K/S or i7-13700K with a 360 mm AIO. :roll:
Hypocrites they are
 
I always love seeing people who pay over a thousand dollars for a GPU pretending to care about power consumption.

I understand what you mean to say, but IMO power consumption is an indicator of the quality of a product.
You want more performance per watt than the previous generation; it has to be better.

Personally, I would want GPUs to be limited to consuming at most 300 watts and then have developers work within those constraints to make the most of it (like Formula 1 racing, for example);
the cards being 1000 bucks at the high end or not is not relevant there.
 
Umm, I don't know how to say this without making you look a bit ridiculous but....
Power.png

It's not huge, but it's definitely an uncontested lead and it's also pretty recent. [...]

So your counter to a performance per watt argument is just a power consumption chart? Shouldn't you include the performance chart too?

Cause here it is
4K-Doom.png


The 3090 is 20% faster than the 6800 XT, yet its total power consumption is only 11% higher - doesn't that mean the 3090 is better in performance per watt?

3090: 0.39 Frame per Watt
6800XT: 0.36 Frame per Watt
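For what it's worth, the ratios quoted above line up: a quick sanity check using only the 20%-faster / 11%-more-power figures (not the absolute chart values, which aren't reproduced here) gives roughly the same ~8% perf-per-watt advantage.

// Sanity check of the claim above using only the stated ratios.
#include <cstdio>

int main() {
    double perf_ratio  = 1.20;  // 3090 vs 6800 XT performance (per the Doom 4K chart)
    double power_ratio = 1.11;  // 3090 vs 6800 XT power draw (per the power chart)
    std::printf("relative perf/W: %.3f\n", perf_ratio / power_ratio);  // ~1.08, i.e. ~8% better
    return 0;
}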
 
I think that you just got unlucky, because I hadn't previously heard of any coil whine issues specific to the Hellhound (or any other card for that matter)... [...] The only thing that you can be sure of is that you can limit it by using Radeon Chill or VSync.
I completely understand. It's unfortunate that two of PowerColor's 7900 XTX Hellhounds had coil whine, but to me it reads less like an isolated case and more like a wider issue. And I know it's not my PSU, as my RM850x has run the Pulse XTX, my current ASRock XTX MBA, a previous PowerColor Red Devil 6950 XT, and an RTX 3090 FE without issue. I also tried both cards on three other PSUs (HX850, SF1000 and V850 SFX) during my ITX fittings, with the same coil whine on both. I'll just tack it onto PowerColor's years of varying QA. This sucked because it had a proper white PCB.

I prefer using RTSS over Adrenalin's Radeon Chill for framerate limiting, as I don't want it based on screen movement. And I only use Anti-Lag, since I mostly play competitive games. All it needs is a proper "boost" feature (like Reflex Boost) to tell the GPU to push higher clocks when I'm CPU limited, but this is low priority as the XTX handles those games quite fine.
 
So your counter to a Performance per watt argument is just a power consumption chart? shouldn't you include the Performance chart too?
Yet sadly it gets reacted to unironically too. Ain't that the beauty of it, I suppose - cherry-picking arguments to suit your preference when it suits you and rejecting the counter-arguments when it doesn't.

It's clear that AMD has the inferior architecture, but if they're priced out of greed rather than compellingly, who cares? If I'm playing Jedi: Survivor at 4K with the highest textures and flawless gameplay, do you think I give a rat's posterior about whether or not my video card has mountains of unused VRAM? The answer is no, I honestly couldn't care less, and neither would most people. Also, don't forget: a card could have the most VRAM in the world, but if someone paid extra for it and the card is hamstrung because it doesn't have the GPU horsepower to make use of it, the buyer is a fool, plain and simple.

It's been clear for well over a decade that lots of fools buy AMD.
 
You want more performance per watt than the previous generation; it has to be better.
If you check you'll see that actually all of these GPUs have better efficiency than the previous generation.
the cards being 1000 bucks at the high end or not is not relevant there.
Of course it is. If you're concerned about the price of electricity, you can clearly afford an extra 10 bucks a month, and if you're concerned about heat, you can use the AC.
 
Yet sadly it gets reacted to unironically too. [...] It's been clear for well over a decade that lots of fools buy AMD.

The more tech outlets that appear over the years (tech tubers and such), the worse Radeon's market share becomes, so I guess people are getting smarter?

Radeon is reaching an all-time-low market share now.
[attached chart: GPU market share]
 