Monday, June 6th 2022

NVIDIA RTX 4080 Rumored To Feature 420 W TDP

The upcoming generation of graphics cards from NVIDIA looks set to feature significantly higher power budgets than its predecessors, according to a recent claim from leaker kopite7kimi. The RTX 4090 has been rumored to feature a TDP above 400 W for some time, and this latest leak indicates that the RTX 4080 may also ship with an increased power requirement of 420 W. This RTX 4080 (PG139-SKU360) would represent an increase of 100 W over the RTX 3080, with power rises also expected for the RTX 4070 and RTX 4060. The RTX 4070 could see a power budget as high as 400 W if NVIDIA chooses to use GDDR6X memory for the card, while the RTX 4060 is rumored to see a 50 W increase to at least 220 W. The preliminary rumors indicate a launch date for these cards in late 2022.
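For context, here is a rough sketch of how these rumored figures compare with the current generation's reference TDPs. The Ampere numbers below are NVIDIA's published reference specs (RTX 3080 at 320 W, RTX 3070 at 220 W, RTX 3060 at 170 W); the Ada figures are rumors only, and the pairing of cards is an assumption for illustration.

# Rumored next-gen TDPs from this leak vs. current-generation reference TDPs.
# Rumored values come from the article above; the Ampere figures are NVIDIA's
# published reference specs. The card pairing is only for illustration.
rumored = {"RTX 4080": 420, "RTX 4070": 300, "RTX 4060": 220}  # W (RTX 4070 possibly 400 W with GDDR6X)
current = {"RTX 3080": 320, "RTX 3070": 220, "RTX 3060": 170}  # W

for new_card, old_card in zip(rumored, current):
    delta = rumored[new_card] - current[old_card]
    pct = 100 * delta / current[old_card]
    print(f"{new_card}: {rumored[new_card]} W vs {old_card}: {current[old_card]} W (+{delta} W, +{pct:.0f}%)")

On these assumptions the rumored cards land roughly 30-35% above their predecessors' reference power budgets.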
Sources: @kopite7kimi (via VideoCardz), @kopite7kimi

80 Comments on NVIDIA RTX 4080 Rumored To Feature 420 W TDP

#26
Unregistered
Hands up who will buy this..........................Liars you know you will
#27
ARF
Tigger: Hands up who will buy this..........................Liars you know you will
I am looking forward to the Radeon RX 7000 series :)
Posted on Reply
#28
BSim500
hardcore_gamer: Unfortunately, there is no way around it. Dennard scaling hit a wall while Moore is still going. Transistors/die can still increase significantly but power/transistor isn't going down at the same rate. If you want more performance, you have to give it more power.
Whilst true, I'm far more interested in seeing better optimised games than blindly trying to brute-force it with unwanted space-heaters. The other half of the "My GPU isn't powerful enough for this game" equation is "this game dev can't optimise for sh*t"...

Edit: Looking back on the past 35 years of PC gaming, I remember being blown away in the late 90's to mid 2000's when games like Thief, Half-Life and FEAR came out. Not just the graphics, but the whole package, eg, The Dark Engine's audio propagation physics on an Aureal 3D sound card, enemies communicating with each other in FEAR in an actually believable way, etc, and thinking damn, what are games going to be like in 25 years' time. 25 years on, and advances in these areas are flat as hell. Everything is solely about graphics, and even that's starting to experience the same diminishing-excitement issue as Hollywood CGI, ie, once you've seen the 300th similar-looking photo-realistic clone, it's no longer that 'interesting' anymore and you start craving stuff that actually looks and feels different, eg, Dishonored's water-colour or Bioshock's underwater art-deco styles. I'd take more of these types of games that run exceptionally well on an average GPU any day over the 'need' for a 1500 W RTX 6090 Ti Super Titan for Crysis 6's 7-hour-long 'content'...
Posted on Reply
#29
bug
BSim500: Whilst true, I'm far more interested in seeing better optimised games than blindly trying to brute-force it with unwanted space-heaters. The other half of the "My GPU isn't powerful enough for this game" equation is "this game dev can't optimise for sh*t"...
Current cards run two engines: raster and ray tracing. And it will easily be another decade before we can move completely past rasterization.
Posted on Reply
#30
BSim500
bug: Current cards run two engines: raster and ray tracing. And it will easily be another decade before we can move completely past rasterization.
I don't think we ever will "completely move past it". For every AAA FPS that can be ray-traced, there are like 200 indie games, often of a style or genre where real-time ray-tracing hardly adds anything at all to the game.
Posted on Reply
#31
chrcoluk
Prima.Vera: Somebody from the government should regulate this nonsense. This is getting ridiculous. A GPU that consumes more than a standard PSU from a couple of years ago...
It could happen in Europe over the next few years if the energy crisis doesn't ease up; there are already talks of energy rationing in the UK, and people are buying generators in anticipation of power cuts.
Posted on Reply
#32
Guwapo77
Prima.Vera: Somebody from the government should regulate this nonsense. This is getting ridiculous. A GPU that consumes more than a standard PSU from a couple of years ago...
There is no need for government intervention when the consumer can simply refuse to purchase it. There are other options: AMD and Intel.
Posted on Reply
#33
chrcoluk
BSim500: I don't think we ever will "completely move past it". For every AAA FPS that can be ray-traced, there are like 200 indie games, often of a style or genre where real-time ray-tracing hardly adds anything at all to the game.
Pretty much this. People sometimes forget that the hand-picked AAA games reviewers typically use to showcase graphics tech are something like 0.01% of the market.
Posted on Reply
#34
ARF
And not only this, but also process nodes and transistor counts. You have to throw many, many more transistors at the problem, while in the next decade there will be two, maybe three, process node shrinks at most. So, no, it won't happen, if ever...
Posted on Reply
#35
Guwapo77
ARF: Err :eek::twitch:

RTX Titan - 900 W
RTX 4090 Ti - 700 W
RTX 4090 - 600 W
RTX 4080 - 420 W
RTX 4070 - 300 W
RTX 4060 - 250 W
RTX 4050 - 170 W
GeForce RTX 4000 “Lovelace“ power draw might exceed 800 W - HWCooling.net
Nvidia GeForce RTX 4090 GPU may demand 600W from your PSU | PCGamesN

Not nice :kookoo:
I thought those wattages were debunked. But if they are true, Nvidia is certainly taking the wrong path moving forward. We might end up with Nvidia-certified PSUs again...
Posted on Reply
#36
looniam
Yep, Nvidia: where every Fermi meme is a challenge and smoldering GPUs will be a feature, adding real-life ambient occlusion. :p
Posted on Reply
#37
bug
BSim500: I don't think we ever will "completely move past it". For every AAA FPS that can be ray-traced, there are like 200 indie games, often of a style or genre where real-time ray-tracing hardly adds anything at all to the game.
There will be a point where ray tracing will be easy enough to implement that it won't matter. E.g., a lot of games don't need Pixel Shader 3.0, but since it's there and easy to use, why not?
Posted on Reply
#38
randomName
I do also agree that this power consumption needs to be regulated, because it is really getting out of hand. Game makers can easily add a gazillion extra polygons, so GPUs have a hard time processing them, so GPU makers will make even bigger and even more power-hungry chips, then games will be made even more demanding, and this will never end. Also add cryptocurrency to that.

High-power CPUs and GPUs should be licensed and used by those who really need them, not by simple end users who want to play games.
Posted on Reply
#39
ZoneDymo
Tigger: Hands up who will buy this..........................Liars you know you will
I'm not supporting Nvidia, if only because too many are already doing so.
Posted on Reply
#40
MarsM4N

https://wccftech.com/nvidia-ada-lovelace-gpus-4n-process-node-advantage-over-5nm-amd-rdna-3/

"The reasons why NVIDIA may have selected TSMC's 4N as the candidate for its next-gen gaming GPU lineup are kind of obvious. The upcoming cards will be real power-hungry and NVIDIA & the company is going to optimize them as much as they can by utilizing the 4N process node. AMD on the other hand will be utilizing a mix of TSMC 5nm and 6nm process nodes for its upcoming MCM and monolithic GPUs based on the RDNA 3 graphics architecture and while they don't bring the optimizations that 4N does, they will feature an MCM approach that is expected to be highly efficient. So at the end of the day, NVIDIA gets the better node while AMD delivers a better design approach."

Well, let's wait & see who has the more efficient package this round. :cool: Power efficiency will be a big selling point for many.
AlwaysHope: That's an assumption if ever there was one...
Where I live, all domestic electricity is hydro generated. :D
Can you even see your monitor with the bush fires raging down there? :wtf:
Posted on Reply
#41
BSim500
bug: There will be a point where ray tracing will be easy enough to implement that it won't matter. E.g., a lot of games don't need Pixel Shader 3.0, but since it's there and easy to use, why not?
It won't be "easy enough" anywhere near that justifies a drastic loss in frame-rate for games that won't benefit if we hit an obvious wall for lack of efficiency increase (and we have, GTX 960 -> GTX 1060 = +70-90% FPS increase in one year at same wattage and only +20% price. GTX 1660Ti -> RTX 3050 = barely +5-10% FPS increase after 3 years for actually same price-tier, even using 20% lower than reality MSRP for the latter). Everything else only got faster by pumping up the wattage (GTX1660Ti -> RTX3060 = 50% faster purely because 180w vs 120w). Perf-per-watt improvements have already ground to a halt on the most popular price-tier of GPU's, and yet most of us still don't want space-heaters. Nor is creating games the most inefficient way possible going to be used by default for the bulk of games that won't benefit and games devs are not that stupid they'd voluntarily kill off 70% of their sales for the bulk of the sub $500 GPU market.

Here's a healthy reality check of what most normal people are actually playing, ie, ray-traced Cyberpunk 2077 is barely more popular than "Russian Fishing 4" and on the brink of falling out of the chart. In the middle of the chart, Skyrim, Age of Empires 2, Don't Starve, Stardew Valley, etc have been there for years. No one cares about ray-tracing for those. Now look at the top of the chart. All the competitive CS:GO, Fortnite, etc, players are going to do is turn RT off for better FPS. It really is a literal truth to say less than 5% of the gaming market is interested in RT (and not all of those are interested in it "at any cost"). Perceptions of "everyone will want this" future popularity of the newest gimmick have often been heavily skewed on hardware-enthusiast tech forums. Ray-tracing is like VR, 3D glasses, etc: some will want it and it'll be a premium feature for those who do, but many won't, and just like VR it certainly isn't going to become a new game-development "minimum requirement baseline" for the bulk of the market, and not just due to GPU cost.
Posted on Reply
#42
ARF
MarsM4N:

https://wccftech.com/nvidia-ada-lovelace-gpus-4n-process-node-advantage-over-5nm-amd-rdna-3/

"The reasons why NVIDIA may have selected TSMC's 4N as the candidate for its next-gen gaming GPU lineup are kind of obvious. The upcoming cards will be real power-hungry, and the company is going to optimize them as much as it can by utilizing the 4N process node. AMD, on the other hand, will be utilizing a mix of TSMC 5nm and 6nm process nodes for its upcoming MCM and monolithic GPUs based on the RDNA 3 graphics architecture, and while they don't bring the optimizations that 4N does, they will feature an MCM approach that is expected to be highly efficient. So at the end of the day, NVIDIA gets the better node while AMD delivers a better design approach."

Well, let's wait & see who has the more efficient package this round. :cool: Power efficiency will be a big selling point for many.
TSMC N4 is a rebranded N5 process, offering maybe +5% in the best case. It is essentially N5+.
If the NVIDIA micro-architecture is much worse, then those 5% won't matter at all.
Posted on Reply
#43
Bomby569
I heard a rumour that you should not pay attention to rumours, even more so as this is just another rumour, and this one contradicts the previous rumour (that the numbers were much higher).

Idiots after their 15 minutes of fame (and don't come after me, because at least one of them is, and maybe both).
Posted on Reply
#44
bug
BSim500: It won't be "easy enough" anywhere near enough to justify a drastic loss in frame rate for games that won't benefit, if we've hit an obvious wall for lack of efficiency increases (and we have: GTX 960 -> GTX 1060 = +70-90% FPS increase in one year at the same wattage and only +20% price; GTX 1660 Ti -> RTX 3050 = barely +5-10% FPS increase after 3 years at effectively the same price tier, even using an MSRP for the latter that's 20% below street reality). Everything else only got faster by pumping up the wattage (GTX 1660 Ti -> RTX 3060 = 50% faster purely because 180 W vs 120 W). Perf-per-watt improvements have already ground to a halt in the most popular price tier of GPUs, and yet most of us still don't want space heaters. Nor is creating games in the most inefficient way possible going to become the default for the bulk of games that won't benefit; game devs are not so stupid that they'd voluntarily kill off 70% of their sales across the sub-$500 GPU market.

Here's a healthy reality check of what most normal people are actually playing, ie, ray-traced Cyberpunk 2077 is barely more popular than "Russian Fishing 4" and on the brink of falling out of the chart. In the middle of the chart, Skyrim, Age of Empires 2, Don't Starve, Stardew Valley, etc have been there for years. No one cares about ray-tracing for those. Now look at the top of the chart. All the competitive CS:GO, Fortnite, etc, players are going to do is turn RT off for better FPS. It really is a literal truth to say less than 5% of the gaming market is interested in RT (and not all of those are interested in it "at any cost"). Perceptions of "everyone will want this" future popularity of the newest gimmick have often been heavily skewed on hardware-enthusiast tech forums. Ray-tracing is like VR, 3D glasses, etc: some will want it and it'll be a premium feature for those who do, but many won't, and just like VR it certainly isn't going to become a new game-development "minimum requirement baseline" for the bulk of the market, and not just due to GPU cost.
You're assuming AMD and Nvidia will want to drag two engines with them forever. They won't. They'll find a breaking point and go full RT. But again, 10 years is a (very) optimistic estimate for when that will happen.
Posted on Reply
#45
big_glasses
BSim500: Here's a healthy reality check of what most normal people are actually playing, ie, ray-traced Cyberpunk 2077 is barely more popular than "Russian Fishing 4" and on the brink of falling out of the chart. In the middle of the chart, Skyrim, Age of Empires 2, Don't Starve, Stardew Valley, etc have been there for years. No one cares about ray-tracing for those. Now look at the top of the chart. All the competitive CS:GO, Fortnite, etc, players are going to do is turn RT off for better FPS. It really is a literal truth to say less than 5% of the gaming market is interested in RT (and not all of those are interested in it "at any cost"). Perceptions of "everyone will want this" future popularity of the newest gimmick have often been heavily skewed on hardware-enthusiast tech forums. Ray-tracing is like VR, 3D glasses, etc: some will want it and it'll be a premium feature for those who do, but many won't, and just like VR it certainly isn't going to become a new game-development "minimum requirement baseline" for the bulk of the market, and not just due to GPU cost.
While I agree with you on RT, you are pitting an SP game with (imo) limited replayability against MP games (and Skyrim with its massive mod base); at its peak it had 800k+ players.
And it's also during the times people do play SP games that they want the gorgeous graphics, and I'd argue that most people don't want to turn down too many details (unless they are in the top percentile).

And a couple of those still require a fair bit of power to output high FPS.
Posted on Reply
#46
ARF
Bomby569: I heard a rumour that you should not pay attention to rumours, even more so as this is just another rumour, and this one contradicts the previous rumour (that the numbers were much higher).

Idiots after their 15 minutes of fame (and don't come after me, because at least one of them is, and maybe both).
A single 16-pin power connector is also present. It is too early to speculate on RTX 40 series power draw, but it is quite clear we will be seeing a 600 W card in this generation, such as an RTX 4090 Ti or a new TITAN; it may simply not launch in the coming months but later. Such a TDP is still within the capability of a single 16-pin connector, which, as far as we know, should now debut on all RTX 40 cards.
Alleged GeForce RTX 4090 series PCB leak shows NVLink and 16-pin power connectors but no USB Type-C display output - VideoCardz.com

Posted on Reply
#48
ARF
Bomby569"It is too early to speculate on RTX 40 series power draw, but it is quite clear we will be seeing a 600W card in this generation"

It's too early, but let me speculate anyway :D:D:D
I think what they mean is where exactly it will fall between 600 and 1000 watts TGP. :D
Posted on Reply
#49
InVasMani
MarsM4N:

https://wccftech.com/nvidia-ada-lovelace-gpus-4n-process-node-advantage-over-5nm-amd-rdna-3/

"The reasons why NVIDIA may have selected TSMC's 4N as the candidate for its next-gen gaming GPU lineup are kind of obvious. The upcoming cards will be real power-hungry, and the company is going to optimize them as much as it can by utilizing the 4N process node. AMD, on the other hand, will be utilizing a mix of TSMC 5nm and 6nm process nodes for its upcoming MCM and monolithic GPUs based on the RDNA 3 graphics architecture, and while they don't bring the optimizations that 4N does, they will feature an MCM approach that is expected to be highly efficient. So at the end of the day, NVIDIA gets the better node while AMD delivers a better design approach."

Well, let's wait & see who has the more efficient package this round. :cool: Power efficiency will be a big selling point for many.
If RDNA 3 ends up being more efficient, it's a huge success for AMD and a huge failure for Nvidia, given the node difference. We'll have to see how big an impact MCM provides for AMD, but we've seen with CPUs that it's worked really nicely.
Posted on Reply
#50
Chaitanya
BSim500: Whilst true, I'm far more interested in seeing better optimised games than blindly trying to brute-force it with unwanted space-heaters. The other half of the "My GPU isn't powerful enough for this game" equation is "this game dev can't optimise for sh*t"...

Edit: Looking back on the past 35 years of PC gaming, I remember being blown away in the late 90's to mid 2000's when games like Thief, Half-Life and FEAR came out. Not just the graphics, but the whole package, eg, The Dark Engine's audio propagation physics on an Aureal 3D sound card, enemies communicating with each other in FEAR in an actually believable way, etc, and thinking damn, what are games going to be like in 25 years' time. 25 years on, and advances in these areas are flat as hell. Everything is solely about graphics, and even that's starting to experience the same diminishing-excitement issue as Hollywood CGI, ie, once you've seen the 300th similar-looking photo-realistic clone, it's no longer that 'interesting' anymore and you start craving stuff that actually looks and feels different, eg, Dishonored's water-colour or Bioshock's underwater art-deco styles. I'd take more of these types of games that run exceptionally well on an average GPU any day over the 'need' for a 1500 W RTX 6090 Ti Super Titan for Crysis 6's 7-hour-long 'content'...
For open world there was GTA: San Andreas, a big upgrade over the previous two 3D GTA games; then there were games like Quake and Doom (up to 3, and 3 also scaled well with SLI), etc. And games back then were real purchases, unlike today's rental system thanks to Steam, Epic and others.
Posted on Reply