Monday, July 15th 2024
NVIDIA GeForce RTX 50 Series "Blackwell" TDPs Leaked, All Powered by 16-Pin Connector
In the run-up to NVIDIA's upcoming GeForce RTX 50 Series of GPUs, codenamed "Blackwell," a power supply manufacturer has apparently leaked the power configurations of all SKUs. Seasonic operates an online power supply wattage calculator that lets users configure their systems and receive PSU recommendations, which means its database regularly gets filled with CPU and GPU SKUs to accommodate the massive variety of components. This time it lists the upcoming GeForce RTX 50 series, from the RTX 5050 all the way up to the top RTX 5090. Starting with the GeForce RTX 5050, this SKU is expected to carry a 100 W TDP. Its bigger brother, the RTX 5060, bumps the TDP to 170 W, 55 W higher than the previous-generation "Ada Lovelace" RTX 4060.
The GeForce RTX 5070 sits in the middle of the stack with a 220 W TDP, a 20 W increase over its Ada counterpart. For the higher-end SKUs, NVIDIA has prepared the GeForce RTX 5080 and RTX 5090 with 350 W and 500 W TDPs, respectively, a jump of 30 W for the RTX 5080 and 50 W for the RTX 5090 over the Ada generation. Interestingly, NVIDIA this time wants to unify the power connection system across the entire family with the 16-pin 12V-2x6 connector, paired with an updated PCIe 6.0 CEM specification. The increase in power requirements across the "Blackwell" SKUs is interesting, and we are eager to see whether the performance gains are enough to keep efficiency in check.
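For a quick sanity check on the numbers above, the short Python sketch below tabulates the leaked "Blackwell" TDPs against the Ada Lovelace figures implied by the article's stated increases (the Ada values are back-calculated from those deltas, not taken from an independent source) and recomputes the generational jumps:

```python
# A minimal sketch, not an official spec sheet: it lists the leaked
# "Blackwell" TDPs from the article and recomputes the stated deltas
# against the Ada Lovelace TDPs those deltas imply.

LEAKED_BLACKWELL_TDP_W = {
    "RTX 5050": 100,
    "RTX 5060": 170,
    "RTX 5070": 220,
    "RTX 5080": 350,
    "RTX 5090": 500,
}

# Ada TDPs implied by the article's increases (e.g. 170 W - 55 W = 115 W).
ADA_PREDECESSOR_TDP_W = {
    "RTX 5060": ("RTX 4060", 115),
    "RTX 5070": ("RTX 4070", 200),
    "RTX 5080": ("RTX 4080", 320),
    "RTX 5090": ("RTX 4090", 450),
}

for sku, tdp in LEAKED_BLACKWELL_TDP_W.items():
    if sku in ADA_PREDECESSOR_TDP_W:
        ada_sku, ada_tdp = ADA_PREDECESSOR_TDP_W[sku]
        print(f"{sku}: {tdp} W ({tdp - ada_tdp:+d} W vs {ada_sku} at {ada_tdp} W)")
    else:
        # The article cites no Ada delta for this SKU.
        print(f"{sku}: {tdp} W (no Ada counterpart cited)")
```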
Sources:
@Orlak29_ on X, via VideoCardz
168 Comments on NVIDIA GeForce RTX 50 Series "Blackwell" TDPs Leaked, All Powered by 16-Pin Connector
www.nvidia.com/en-us/geforce/news/nvidia-rtx-games-engines-apps/
For me it's
Witcher 3 NG
Cyberpunk 2077
Alan Wake 2
Ratchet and Clank
Control
Metro Exodus EE
Spiderman
Quake 2 RTX
Portal RTX
Minecraft (haven't played this but it does look way better with RT)
Most games do it pretty terribly, though. RE 4 Remake is absolute trash when it comes to RT, as are those F1 games and countless others. I'd say probably 1 in 10 games that has RT does it well, and that might be generous...
I’m sure he’ll be coming back to explain to you what “proper use” is. I’ll bet you a lunch it’s anything AMD performance doesn’t tank in. I would add:
Portal Prelude
A Plague Tale: Requiem
Avatar: Frontiers of Pandora
Chernobylite
Deathloop
Diablo IV
Fortnite
Ghostwire: Tokyo
Hellblade: Senua's Sacrifice
LEGO Builder's Journey
Miles Morales
Pinball FX
Hitman 3
Senua's Saga: Hellblade II
Which product is slow?
There must be a global professional investigation into NVIDIA for cheating by using DLSS as the default setting, making slow GPUs appear good on the charts.
Let's compare the slow RX 7600 with the even slower RTX 4060, which is junk, a leftover byproduct, a rebadge of something GT **30-class or *50 LE-class.
RTX 4060 vs RX 7600:
You weren't even talking about the 7600 or 4060 before, you were talking about the 7900 GRE, so why change the goalposts? Maybe you're the troll?
And nice work changing the goalposts yet again; it's impressive how quickly you can do it and dodge what I've said.
I've done and continue to do my own testing on my vast range of hardware and systems, and don't need, uh, cherry-picked comments and poorly recorded videos to attempt to "prove" anything to me.
Good luck with the global professional investigation.
Still, it's not like there are a ton of games that are transformative with it on, and it's been, what, 6 years since the first RT-capable GPU... I do expect more and more games to get amazing RT over time; anyone who thinks otherwise is delusional.
Also, people pushing RT performance as a must-have on anything below the 4070 makes me lol pretty hard. Even the 4070 struggles with it, though it can manage okay with heavy DLSS, which somewhat defeats the point of improving visuals.
I love RT, don't get me wrong, but there are still a ton of games where the performance hit doesn't justify the visual upgrade. It only uses software Lumen on PC/console, I believe; there's no way the water reflections would look as bad as they do if it were hardware Lumen. Otherwise the game looks fantastic, and other than resolution, it even looks great on console.
Honestly it looks much better than a lot of games with RT......
I'm only trying to find solace in the fact that game graphics aren't evolving as quickly as they used to, either, so faster GPUs aren't as badly needed as we may think at times. I needed a high-end CPU and GPU to achieve stable 60 FPS at 1080p in the latest games 10 years ago. Nowadays, a Ryzen 5 / Core i5 and a x60-class GPU can do it. Bump up the resolution to 1440p, and you're still only at x70-class at most.
Part of the problem might be that chip makers seem to be giving up on new nodes as the development costs are getting too great. TSMC seems to have a monopoly on the most advanced 3-7 nm ones, but even they're increasing prices way beyond the benefits. You can see this in cards like the RX 7600, which isn't just a node shrink of the 6600 XT but also a new architecture, yet it performs and costs the same. AMD and Nvidia might be in a position where they can't rely on the good old "let's just cram more cores into a denser die" method anymore, yet they're still doing just that (if you don't count all the DLSS/AI bullshit). They need something far more architectural to improve future generations, which also drives up costs.
IMO, what we need is better surfaces on humanoids and other living things, especially in the rain, and better animations. Map detail and lighting are awesome, but humans still look and act like porcelain dolls.
I'm not that bothered by TAA, to be honest, but I do agree on the hardware-agnostic part. I think the big question is the "meaningful" part. What does it mean? Meaningful enough to notice the difference? Or meaningful enough to buy a $1,600 graphics card? My answer to the first question is mostly no, sometimes maybe, but for the second one, it's a definite never.
The humans in Hellblade 2 are the best I've ever seen in real time; it'll fool you into thinking it's CGI at times.
I mostly meant that I doubt we'll see open-world games with the same fidelity as Hellblade 2 for a long time, mostly due to the time and resources it would take to craft a whole world to that level. It's going to take either multi-bounce path tracing or something new that doesn't exist yet to have a greater visual impact. I'm interested to see what The Coalition does with the engine, though; their work getting Gears 5 basically running on an HD 7770 and some mobile cores is insane.
I see we have journeyed into green vs red and RT fantasy land... I guess that's all this thread's gonna give now.
I notice (and appreciate) the difference in Cyberpunk and Control, but I don't need it to enjoy the game. I also tried RT in Alan Wake 2, but that game looks so awesome even without it that I honestly don't care. :) And that's why I think extra VRAM (and games building around it) is more useful than RT, AI upscaling and other gimmicks... at least to me. Lights and shadows can be faked to look good, but textures can't. I haven't tried Hellblade 2, but I've heard good things about it. Unfortunately, online-only games are on my no-buy list.
You must learn from the global community which has much more knowledge than you.
You might do well to take some of your own advice, I sincerely hope you learn from all this.
I had already read this thread and thought about waiting for the 5000 series, but everything seems to point to even bigger cards and even bigger power requirements, so I will keep to my plans and see what I end up getting for the 3080 to offset the cost.
Adrenalin has an option to make the colors super saturated; you just click and it enables it (or it's on by default, not sure), which could explain any differences. I have my 6900HS laptop and my 4090 desktop connected to the same screen and the colors are identical.
Personal preference. If you wanna buy a card and keep it for a long time, AMD is better, because 2-3 years down the line RT won't be playable on your old card even if it's an NVIDIA one. If you upgrade every gen, NVIDIA is clearly the better option though.