Friday, January 3rd 2025
NVIDIA GeForce RTX 5090 Features 575 W TDP, RTX 5080 Carries 360 W TDP
According to two of the most accurate leakers, kopite7kimi and hongxing2020, NVIDIA's GeForce RTX 5090 and RTX 5080 will feature 575 W and 360 W TDPs, respectively. Earlier rumors pointed to 600 W and 400 W TGP (total graphics power) figures, a measure that covers the entire board, meaning the GPU plus its memory and everything else it draws power for. TDP (thermal design power), by contrast, is a more specific value attributed to the GPU die of the SKU in question. According to the latest leaks, 575 W is dedicated to the GB202-300-A1 GPU die in the GeForce RTX 5090, while 25 W goes to the GDDR7 memory and other components on the PCB.
For the RTX 5080, the GB203-400-A1 chip supposedly draws 360 W on its own, while 40 W is set aside for the GDDR7 memory and other components on the PCB. The lower-end RTX 5080 reserves more power for its memory than the RTX 5090 because its GDDR7 modules reportedly run at 30 Gbps, while the RTX 5090's modules run at 28 Gbps. The RTX 5090 does carry more (or higher-capacity) modules, but first-generation GDDR7 could require extra power to reach the 30 Gbps mark, hence the larger allocation. Future GDDR7 iterations could reach higher speeds without drawing much more power.
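To make the arithmetic behind the leak explicit, here is a quick illustrative sketch (Python; the numbers are the rumored figures above, not confirmed specifications) showing how the die-level TDPs add up to the earlier TGP rumors:

```python
# Rumored power budgets from the leak: TGP = GPU die power + memory/board power.
boards = {
    "RTX 5090 (GB202-300-A1)": (575, 25),  # (die W, memory/board W)
    "RTX 5080 (GB203-400-A1)": (360, 40),
}

for name, (die_w, rest_w) in boards.items():
    # The sums land on the earlier 600 W and 400 W TGP rumors.
    print(f"{name}: {die_w} W die + {rest_w} W memory/board = {die_w + rest_w} W TGP")
```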
Sources:
hongxing2020 and kopite7kimi, via VideoCardz
207 Comments on NVIDIA GeForce RTX 5090 Features 575 W TDP, RTX 5080 Carries 360 W TDP
I'm still thinking that if a 5090 performs at 100%, a 5080 at 320 W performs at 50%, and you can get 60% by running your 5090 at 320 W, then the other 40% is wasted money.
Edit: Then you basically paid double the price for 20% more performance. I completely agree, although this wasn't my question.
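Spelled out, the scenario in these two comments works out as follows (a quick sketch with the hypothetical percentages quoted above, not measured data):

```python
# Hypothetical figures from the comments above, as percent of a stock 5090.
perf_5090_stock = 100
perf_5080_at_320w = 50
perf_5090_at_320w = 60

# Relative gain of a power-limited 5090 over a 5080 at the same 320 W.
gain = perf_5090_at_320w / perf_5080_at_320w - 1
print(f"5090 @ 320 W vs 5080 @ 320 W: {gain:.0%} faster")  # -> 20% faster
# If the 5090 costs roughly double, that is ~2x the price for ~20% more performance.
```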
Besides, if you are buying the 5090, it's because the 5080 isn't enough for what you want. For most who want high-end hardware, drawing 525 W isn't a concern. The high end has always had huge power draw (hello, SLI era). No, it wasn't bad. It was great. My point was that overall, most GPU generations are defined by MOAR COARS and more power, with the power offset by smaller nodes. IPC is far less important to GPUs than it is to CPUs; parallelism and clock speeds make a much larger difference. Been true for a long time.
A 5090 at a lower power budget than a 5080 is still going to have almost double the memory bandwidth and way more cores, even if those are clocked lower. Your assumptions are also off: a 5090 at 320 W is likely to be only 10-20% slower than at stock settings.
The 5080 math is also not that simple because things (sadly) often do not scale linearly like that.
Maybe it's just me missing the pricing structure of Pascal: there was only a $200 difference between the x80 and the x80 Ti, and the Titan XP wasn't something gamers with money to waste were buying.
I'm currently running mine at 320 W with overclocked memory and it's around 2-3% faster than stock at 450 W, so I don't think the 5090 will be any different.
Titan, ehm... x90 card, despite its price. It begs the question, though: why does the 4090 have to be a 450 W card by default if the extra power doesn't bring any extra performance to the table? What is Nvidia aiming at with such high power consumption?
OCing the VRAM gives me ~8-9% more performance; overclocking the core to 3000 MHz gives me ~2%.
Same goes for the 600 W limit some models have, really pushing the power envelope for minor clock gains. Reminder that past a certain point, the power needed for each extra bit of performance grows exponentially.
Both my 3090s have a default power limit of 370 W, whereas at 275 W I lose less than 10% perf.
Here's a simple example of power scaling for some AI workloads on a 3090; you can see that past a certain point you barely get any extra performance when increasing power:
benchmarks.andromeda.computer/videos/3090-power-limit?suite=language
That has been the case since... always. Here's another example with a 2080 Ti:
timdettmers.com/2023/01/30/which-gpu-for-deep-learning/#Power_Limiting_An_Elegant_Solution_to_Solve_the_Power_Problem
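For anyone who wants to reproduce this kind of curve themselves, below is a minimal sketch of a power-limit sweep. It assumes an NVIDIA card, the standard nvidia-smi -pl flag (which needs admin rights), and a placeholder run_benchmark() you would swap for your own repeatable workload:

```python
#!/usr/bin/env python3
"""Sketch of a GPU power-limit sweep; the benchmark hook is a placeholder."""
import subprocess

def set_power_limit(watts: int, gpu: int = 0) -> None:
    # `nvidia-smi -pl` caps board power in watts; requires root/admin rights.
    subprocess.run(["nvidia-smi", "-i", str(gpu), "-pl", str(watts)], check=True)

def run_benchmark() -> float:
    # Placeholder: substitute any repeatable workload that returns a score
    # (tokens/s, fps, iterations/s, ...).
    raise NotImplementedError("plug in your own benchmark here")

if __name__ == "__main__":
    for watts in (450, 400, 370, 320, 275):  # e.g. a 3090-class sweep
        set_power_limit(watts)
        print(f"{watts} W -> {run_benchmark():.1f}")
```

Plot score against watts and you should see the same flat tail past the knee that both links above show.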
Games often don't really push a GPU that hard, so consumption while playing is usually well below the actual limit.
Consumers have already shown they don't care about sane power consumption; they want that extra performance out of the box. Just look at what happened with AMD's 9000 series, where they had to push a BIOS with a higher default TDP to appease their customers. Or Intel, where most people didn't give a damn about the great efficiency vs. the previous gen.