Friday, January 3rd 2025
NVIDIA GeForce RTX 5090 Features 575 W TDP, RTX 5080 Carries 360 W TDP
According to two of the most reliable leakers, kopite7kimi and hongxing2020, NVIDIA's GeForce RTX 5090 and RTX 5080 will feature 575 W and 360 W TDPs, respectively. Earlier rumors pointed to 600 W and 400 W TGPs for these SKUs. TGP (total graphics power) covers the entire graphics card, including its memory and everything else on the board, whereas TDP (thermal design power) is a narrower figure attributed to the GPU die itself. According to the latest leaks, 575 W is dedicated to the GB202-300-A1 GPU die in the GeForce RTX 5090, while 25 W is left for the GDDR7 memory and other components on the PCB.
For the RTX 5080, the GB203-400-A1 chip alone supposedly draws 360 W, while 40 W is set aside for GDDR7 memory and other components on the PCB. The lower-end RTX 5080 reserves more power for its memory subsystem than the RTX 5090 because its GDDR7 modules reportedly run at 30 Gbps, while the RTX 5090's run at 28 Gbps. The RTX 5090 does use more (or higher-capacity) modules, but first-generation GDDR7 memory could require extra power to reach the 30 Gbps mark, hence the larger allocation. Future GDDR7 iterations could reach higher speeds without drawing much more power.
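If the leaked splits are accurate, the TDP and TGP figures reconcile with simple addition: TGP = GPU die power (TDP) + memory/board power. A minimal sketch of that arithmetic, using only the leaked values above (none of these are confirmed specifications):

```python
# Reconciling the leaked TDP (die only) and TGP (whole board) figures.
# All wattages are the leaked values from the article, not official specs.
leaked = {
    "RTX 5090 (GB202-300-A1)": {"die_w": 575, "mem_board_w": 25},
    "RTX 5080 (GB203-400-A1)": {"die_w": 360, "mem_board_w": 40},
}

for card, p in leaked.items():
    tgp = p["die_w"] + p["mem_board_w"]  # TGP = die TDP + memory/board power
    print(f"{card}: {p['die_w']} W + {p['mem_board_w']} W = {tgp} W TGP")
```

Running this yields 600 W and 400 W, matching the earlier TGP rumors.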
Sources:
hongxing2020 and kopite7kimi, via VideoCardz
207 Comments on NVIDIA GeForce RTX 5090 Features 575 W TDP, RTX 5080 Carries 360 W TDP
Games these days often do push high loads.
idk what games will use over 16 GB of VRAM maxed out, or what insane textures could fill 32 GB of VRAM. Witcher 4 and GTA 6 might be among them. No word on AC Shadows and Hexe.
And no, an underpowered 5080 is not the same as a 5090, because under the hood it's a totally different card. That's why overclocking won't make a huge difference outside of benchmarks.
And seriously, anyone buying a 5090 should water-cool it for the summers, which keep getting hotter, way past the 2-degree climate target.
They don't; just look at their clock speeds, they are all within 0.01% of each other. Higher-end models just have better PCB power delivery etc. FE was nice when it was used on smaller models (got a 3060 Ti FE, it's really nice), but look at the 4090 FE, it's ugly as hell. There are a few designs that still look decent; usually it's the high end that's bling-bling gamerz OC RGB, the base models are nice. Check the 4090 Windforce, for example.
Most gamers won't be buying those products. Reminder that this forum is a niche with some enthusiasts who can afford it, but your average buyer won't even consider that product as an option. And that's why you buy multiples of those :p
It's honestly for the best this way. Overclocking was fun (especially when you had to draw your own traces), but most buyers are fundamentally getting ripped off by not getting all of the performance out of the card they paid for.
Now you get all the performance, and it's on you to find the power/heat level that you're OK with.
But that's the beauty of undervolting... you don't have to get that last 5% at all costs, you can reduce power consumption and your own carbon footprint. The choice has simply shifted from gaining performance to conserving power and heat; other side of the same coin.
I can't speak to newer Nvidia cards, but AMD definitely has some one-click-and-done settings in their software that will bias the card one way or another, so it doesn't even require a whole lot of tweaking and noodling to get decent results.
Does Nvidia have an equivalent function in their new software? I haven't tried it, yet (my HTPCs are running old drivers at the moment).
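In the meantime, nvidia-smi can at least read and cap board power from the command line on any recent driver, no GUI needed. A minimal sketch, assuming GPU index 0 and a purely illustrative 300 W cap (setting the limit needs admin/root, and each card clamps it to its own supported range):

```python
# Hedged sketch: read and cap board power through nvidia-smi.
# GPU index 0 and the 300 W cap are illustrative assumptions; the driver
# rejects values outside the card's supported power-limit range.
import subprocess

# Query current draw and the configured limit.
query = subprocess.run(
    ["nvidia-smi", "--query-gpu=power.draw,power.limit", "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(query.stdout)

# Lower the board power limit (in watts); requires admin/root privileges.
subprocess.run(["nvidia-smi", "-i", "0", "-pl", "300"], check=True)
```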
Seriously, just fuck off with this bullshit attitude. I thought this was an enthusiast website.
Just as an FYI I upgraded from a GTX 960. The reason I can afford a 4090 is because I saved the money by not upgrading constantly.
They are very bad for gaming, while the x090 cards are the highest-performing gaming cards. They are gaming cards.
Also, how do you define an "enthusiast"? There's no more Quadro, just RTX Axxx. And who said they're not good at gaming? That's pure marketing, not the full picture.
buildapc/comments/167yyqu
I still don't think a x90 card is meant for gaming purely because of the insane specs and price.
The way you phrase things does sound aggressive and judgy nonetheless, and I believe that's what leads to the many rude answers you often get as well. They often have lower clocks and power limits (for better power efficiency), but will work just fine for games.
Most of that argument you got off Reddit is based on price-to-performance, which is valid, but doesn't mean the product won't be able to run games (and even do it well!). Well, it's meant to be used however the customer wants. If they want to use it just as a paperweight, they're free to.