
4080 vs 7900XTX power consumption - Optimum Tech

Actually, a 400 mm² GCD with 4 MCDs and 16 GB would be perfectly balanced. Give it more raw power and drop the unnecessary 24 GB. Stop the crazy shenanigans with the VRAM. Yay, we have more VRAM. For what? Future-proofing? I'm not going to use a 400 W card two years from now when I can get the same performance from a 200 W, ~200 mm² GCD.
 

I'm not a hardware engineer, so I have no idea if that could work or not, but just looking at their design makes me think they need to balance the GCD and MCDs based on how many CUs they want to incorporate, probably due to memory bandwidth. Dropping down to 4 MCDs would cut bandwidth quite a bit. Maybe they would do a stacked-cache variant, but that probably causes more problems, like we see on lower-end Ada at higher resolutions with smaller buses but massively more cache.
 
TBH, AMD shouldn't have made the 7900 XTX a 24 GB card. But I can see where the limitation would be, as aiming for 16 GB would bring the memory bus back down to 256-bit. A possible alternative would've been to go GDDR6X (not sure if NVIDIA still has an exclusive arrangement with Micron for that variant) at 256-bit to get similar bandwidth, price permitting. That might have gotten the MSRP down to around $800 to $900.
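A quick back-of-the-envelope check on that bandwidth idea (a sketch with illustrative per-pin data rates, not official spec-sheet figures): peak bandwidth is just bus width times data rate, and matching a 384-bit/20 Gbps setup on a 256-bit bus would need roughly 30 Gbps per pin, so "same bandwidth" would realistically land closer to "similar bandwidth".

```python
# Peak memory bandwidth sketch: GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
# Data rates below are illustrative, not official spec-sheet values.
def peak_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(peak_bandwidth_gbs(384, 20.0))   # 384-bit GDDR6 @ 20 Gbps    -> 960 GB/s (7900 XTX-style)
print(peak_bandwidth_gbs(256, 22.4))   # 256-bit GDDR6X @ 22.4 Gbps -> ~717 GB/s (RTX 4080-style)
print(peak_bandwidth_gbs(256, 30.0))   # per-pin rate a 256-bit bus needs to reach 960 GB/s
```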

The 7900 XT would've been in a good spot at $699 if there hadn't still been a big stock of 6950 XTs at its launch last year; it technically is the namesake's successor. But now that that stock is dwindling, it's looking like a good deal (discounted) if one was planning to upgrade to a 6950 XT in the first place.
 

I believe the reason this GPU is 24 GB is one of practical capacity, as 12 GB would prove limiting and a regression from its predecessor.

The real reason for 24 GB is bus width: the card uses a 384-bit design for increased bandwidth. It's entirely possible that an eventual 4th-gen RDNA regresses to 256-bit/16 GB again but instead has a two or even three times larger last-level cache attached to each MCD. I don't see them doing a 512-bit design like Hawaii/Grenada again.
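For anyone wondering why capacity is welded to bus width: each GDDR6 package hangs off a 32-bit channel, and the common densities are 1 GB or 2 GB per chip. A minimal sketch (ignoring clamshell/double-sided layouts, which double the total):

```python
# Why VRAM capacity tracks bus width: one GDDR6 package per 32-bit channel,
# with 1 GB or 2 GB per package as the common densities (clamshell ignored).
def vram_options_gb(bus_bits: int, densities=(1, 2)) -> dict:
    chips = bus_bits // 32                     # number of 32-bit memory channels
    return {d: chips * d for d in densities}

print(vram_options_gb(256))   # {1: 8, 2: 16}  -> 8 or 16 GB
print(vram_options_gb(320))   # {1: 10, 2: 20} -> 10 or 20 GB (7900 XT)
print(vram_options_gb(384))   # {1: 12, 2: 24} -> 12 or 24 GB (7900 XTX)
```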

The large 128 MB cache in RDNA 2 worked well for them, even considering the apparent deficiency in memory bandwidth displayed by 256-bit GPUs when driving extreme resolutions.
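A very rough way to picture why a big last-level cache can substitute for bus width, and why the trick weakens at 4K: treat effective bandwidth as a hit-rate-weighted mix of cache and DRAM bandwidth, with the hit rate falling as the frame's working set outgrows the cache. The hit rates and bandwidth figures below are illustrative placeholders, not measured numbers.

```python
# Toy model: effective bandwidth = hit_rate * cache_bw + (1 - hit_rate) * dram_bw.
# All numbers are illustrative placeholders, not measurements.
def effective_bw(hit_rate: float, cache_bw: float, dram_bw: float) -> float:
    return hit_rate * cache_bw + (1.0 - hit_rate) * dram_bw

dram_bw = 512.0     # e.g. a 256-bit GDDR6 setup, in GB/s
cache_bw = 2000.0   # on-die last-level cache, order-of-magnitude guess

for res, hit in [("1080p", 0.80), ("1440p", 0.70), ("4K", 0.55)]:
    print(res, round(effective_bw(hit, cache_bw, dram_bw)), "GB/s effective")
# The hit rate drops as resolution pushes the working set past 128 MB,
# which is why 256-bit-plus-cache designs fade at extreme resolutions.
```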

The RTX 4080 is another example of a balanced 256-bit design bolstered by a large cache (in Ada's case a large L2); Navi 21 and AD103 are quite alike in this regard, and software and drivers can make excellent use of their resources. The RTX 4090 is faster, but it's not proportionally faster than the 4080, despite having roughly 68% more CUDA cores (16,384 vs. 9,728) and a 50% wider bus.

You're looking at almost 70% more resources for 20-25% more performance, and while it's true they can still build a 4090 Ti with all of AD102's resources, it would be at best another 20-25% over the 4090, making it 40-50% faster than the 4080 for nearly double the resources. This indicates that at the very highest end, current GPU designs aren't scaling well despite the vast amount of compute resources available.
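Putting the scaling argument into numbers (core counts are the public specs; the performance deltas simply reuse the rough 20-25% estimates above, and the fully enabled "4090 Ti" is hypothetical):

```python
# Scaling sanity check: extra shader resources vs. extra performance.
cores_4080       = 9728
cores_4090       = 16384
cores_ad102_full = 18432      # hypothetical fully enabled AD102 ("4090 Ti")

def pct_more(new: int, base: int) -> float:
    return (new / base - 1.0) * 100

print(f"4090 vs 4080 cores:       +{pct_more(cores_4090, cores_4080):.0f}%")        # ~ +68%
print(f"full AD102 vs 4080 cores: +{pct_more(cores_ad102_full, cores_4080):.0f}%")  # ~ +89%

uplift = 1.225   # midpoint of the 20-25% per-step estimate quoted above
print(f"hypothetical Ti vs 4080 perf: +{(uplift * uplift - 1) * 100:.0f}%")          # ~ +50%
```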

This may have contributed to AMD choosing not to release a GPU larger than 96 CUs to begin with.
 
I don't see them doing a 512-bit design like Hawaii/Grenada again.
I don't think we'll ever see a memory bus wider than 384 bits on a consumer GPU again. 512-bit for Hawaii and HBM for Vega were both pure, expensive desperation by AMD to try to make poorly-balanced GPUs at least somewhat competitive. It took them a few generations to figure out that putting large caches on-die was a cheaper and more effective way to accomplish the same thing.
 

Yeah, I wouldn't have agreed with the 7900 XT or XTX having less than 16 GB, as that would be a serious regression compared to the 6900 XT and 6950 XT. It's good they went 320-bit and 384-bit this time around, but 24 GB is still a bit too much, especially for a card aimed at gaming. At least the RTX 3090 (and Ti) and 4090 can be used in their respective non-gaming workloads, but the extra VRAM is more or less useless on the XTX (outside of game texture mods, some overkill 4K graphics settings, and 8K).

At least I can keep using a RAM drive with it, though. It's actually a viable scratch disk if you have less than 64 GB of RAM and don't want to wear out your SSDs.
 
AMD always seems to have had an issue with excessive power consumption when frame capping, etc. I remember this from my 5700 XT, RX 580, RX 480, and so on. Consumption drops, but not by as much as it would on an NVIDIA card.
 
I think it would be wise to have W1zzard (re-)test this stuff to validate it; until then, this topic doesn't serve much purpose other than bickering and unprofessional behavior.
 
If AMD wants the whole issue of power consumption to not be a stone around their neck after launch day, then they should damn well fix it so it's not a problem on launch day. Expecting W1zz - or indeed any tech reviewer - to retest everything every time AMD drops a new driver with supposed fixes is making AMD's problems those reviewers' problems. And that's not acceptable.
 
Ermm, I never said anything about a new driver or not. The GPUs were all tested at the start and none showed what this YouTuber is now showing... that is the problem.
 
This is why I prefer the reference XTX MBA model over the others. I think the Nitro+ has the sensors (I could be wrong since it's the same PCB), but my Pulse does not, and it kind of annoys me (especially when I compare it to the sensors shown by my RTX 3090 FE).
I can't remember the last time that there has been a problem with Sapphire cards of any kind. I wouldn't worry about the sensor because Sapphire cards had to earn their stellar reputation.
Anyways, I'll most likely return my Pulse and get a "fixed" (working vapor chamber) reference XTX next week. The 7000 series seems to have gotten better after the latest July Adrenalin update (idle power consumption is now like the 6000 series), although power consumption at not-so-heavy loads is clearly one of its pain points.
If that's what you want, go for it. It's not like you can go wrong when it comes to card brands. Incidentally, the Radeon reference cards are produced by PC Partner. It's kind of ironic because they own Zotac and Inno3D, two GeForce brands.
The XT should have launched at $800, the XTX at $900. They would be compelling at those prices.
With GeForce prices where they are, complaining about Radeon prices seems pretty dumb.
Now that they're both on TSMC 5nm, we see who actually has the better architecture.
It's clear that nVidia has the better architecture but if they're priced out of competition, who cares? If I'm playing Jedi: Survivor at 1440p Ultra with flawless gameplay, do you think I give a rat's posterior about whether or not my video card has "the best architecture" in it? The answer is no, I honestly couldn't care less and neither would most people. Also, don't forget, a card could have the best architecture in the world but if someone paid extra for it and the card is hamstrung because it doesn't have enough VRAM for the high-resolution ray-tracing that nVidia promotes, the buyer is a fool, plain and simple.

It's been clear for well over a decade that lots of fools buy nVidia.
 
I always love seeing people who pay over a thousand dollars for a GPU pretending to care about power consumption.
 
I don't think we'll ever see a memory bus wider than 384 bits on a consumer GPU again. 512-bit for Hawaii and HBM for Vega were both pure, expensive desperation by AMD to try to make poorly-balanced GPUs at least somewhat competitive. It took them a few generations to figure out that putting large caches on-die was a cheaper and more effective way to accomplish the same thing.
Remember that Hawaii used TSMC's 28 nm process. The tradeoff of smaller bus size versus larger cache wouldn't have made sense at the time, because the cache would have been too small. It would have also limited compute workloads that would have working sets larger than the cache: recall that Hawaii was the fastest GPU for double precision operations until Pascal.
 
I always love seeing people who pay over a thousand dollars for a GPU pretending to care about power consumption.

a thousand! that was a nice sale even for the 4080
 
I always hate spending a thousand bucks on a GPU.

Crazy thing is, my 3070 Ti was $400 more than my 4070 Ti, which I got for under $1,000 :D

And I don't need to swap out my PSU or anything :cool:
 
I can't remember the last time that there has been a problem with Sapphire cards of any kind. I wouldn't worry about the sensor because Sapphire cards had to earn their stellar reputation.

I know Sapphire is a good brand, but their current Pulse and Nitro+ variants are too big for my use case. The Pulse can fit, but it has that extra PCIe power connector, which is disappointing since I'm trying to minimize cable jank. I would've stayed with the PowerColor Hellhound, but the coil whine was actually noticeable, and that was across two cards (one newly bought and one replacement, also due to coil whine).
 
AMD always seems to have had an issue with excessive power consumption when frame capping, etc. I remember this from my 5700 XT, RX 580, RX 480, and so on. Consumption drops, but not by as much as it would on an NVIDIA card.
If you go further back in history, AMD was more efficient. That title goes back and forth.
 
ATi used to be fun. All-in-Wonder goodness. The last AMD card I had scared me off buying another.. so much disappointment was had.

I feel bad for saying that. The only way to change my mind is to see and use one in person. But since I had kids I don't have friends anymore so..
 
Which ATI/AMD GPU soured you on the whole brand?
 

For me it was them completely ditching the high-end market for over half a decade... unless you consider only competing with NVIDIA's 2nd or 3rd best options "high end". I really liked my HD 7970 and 290X even though they were a bit less refined than the NVIDIA alternatives. Vega being so late and so underwhelming was the start of it for me, honestly.


Man that was way back lol....
 
I like to say I don't hold a grudge.. but..

When you guys convinced me to try AM4, I thought of my bad experience with 939 lol..

I trusted you!
And it paid off :toast:

But 1K+ GPUs.. ehh.. I will go with what I know for now :laugh:
 

I'm not sure what issues you had but I remember the HD 4000 series being decent.

I don't blame you, though. Spending that much cheddar on a GPU, it's hard to take any risk, regardless of how minimal it may seem. I just hope developers come up with good ways to do memory management and that the 4070 Ti doesn't start choking on VRAM at 4K.
 
So back then I was really into COD, and at that time it was World at War... Everything had a weird white outline around it, and the game looked very cartoony. I tried multiple Windows installs, drivers.. nothing fixed it. It did not help that it was pretty much the only game I played at the time. It sucked compared to my 295 that was in RMA with XFX :D

And then the overclocking. It did 1 GHz core no problem, but if you pushed the memory by even 1 MHz, the screen would just start blinking like mad.

I actually gave that card away to a kid who lived in a van at OC Forums :kookoo:
 