Monday, April 3rd 2023

NVIDIA GeForce RTX 4070 has an Average Gaming Power Draw of 186 W

The latest leaked slide for the GeForce RTX 4070 confirms most of the specifications and reveals some previously unknown details, including the 186 W average power draw. While the specification list does not mention the number of CUDA cores, it does confirm the card will be based on the AD104 GPU with 36 MB of L2 cache and come with 12 GB of GDDR6X memory offering 504 GB/s of maximum memory bandwidth, which points to a 192-bit memory interface and 21 Gbps memory.
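The quoted numbers are internally consistent; a quick sanity check of the bandwidth math (all figures taken from the slide):

```python
# Memory bandwidth (GB/s) = (bus width in bits / 8 bits per byte) * per-pin data rate (Gbps)
bus_width_bits = 192       # 192-bit memory interface
data_rate_gbps = 21        # 21 Gbps GDDR6X

bandwidth_gb_s = bus_width_bits / 8 * data_rate_gbps
print(bandwidth_gb_s)      # 504.0, matching the 504 GB/s on the slide
```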

The slide posted by Videocardz also compares the upcoming GeForce RTX 4070 with the previous generation RTX 3070 Ti and RTX 3070, showing a significant increase in shader, RT core, and Tensor core counts, not to mention that the RTX 4070 uses 3rd Gen RT cores and 4th Gen Tensor cores. It will also support DLSS 3 and features AV1 and H.264 NVENC hardware encoders.
The most interesting part of the slide is the power draw comparison, showing a TGP of 200 W, which is lower than on the RTX 3070 Ti and the RTX 3070. It also draws less power during average gaming, video playback, and at idle. According to NVIDIA's own slide, the GeForce RTX 4070 has an average gaming power draw of 186 W, a video playback draw of 16 W, and an idle power draw of 10 W, all lower than on the RTX 3070 Ti and the RTX 3070.

The slide also pretty much confirms the previously reported $599 price tag, at least for some RTX 4070 graphics cards, as some custom models will certainly be priced significantly higher. So far it appears that NVIDIA might not change the launch price, and, as reported earlier, $599 leaves plenty of room for custom RTX 4070 graphics cards without getting close to the least expensive RTX 4070 Ti cards, which sell for close to $800.
Source: Videocardz

56 Comments on NVIDIA GeForce RTX 4070 has an Average Gaming Power Draw of 186 W

#51
oxrufiioxo
Chrispy_: I think you are right, but this time around it will be much longer than two years, so it won't matter as much:

The Xbox and PS5 both have 16 GB of RAM, usually allocating 10-12 GB as VRAM, and both consoles target 4K. Both consoles are "current" for the next 3-4 years, and when their successors appear in 2027 (rumoured), game devs won't instantly swap to optimising for the newest consoles; they tend to transition away from the outgoing generation over a year or so, while the vast majority of their paying customers are still on the older hardware.

IMO 12GB is enough for at least 3 years, maybe even 5. Meanwhile, the 10GB of the 3080 and 8GB of the 3070 were widely questioned at launch - I forget whether it was a Sony or Microsoft presentation that claimed up to 13.5GB of the shared memory could be allocated to graphics, but the point is that we had entire consoles with 13.5GB of VRAM costing less than the GPUs in question that were hobbled out of the gate by miserly amounts of VRAM.

The 3070 in particular has been scaling poorly with resolution for a good year now, but it's only in the last couple of months that the 3070 has really struggled. Four big-budget AAA games run like ass on 8GB cards, with 3070 and 3070 Ti owners given the no-win choice between stuttering and significantly lower graphics settings.
Yeah, 12GB is the bare minimum, but it should be ok through the console generation except for terrible ports.

Also, the Series S allots something like 8.5 GB for games and targets 1080p, so all the sub-$400 GPUs should be fine for 1080p... although $400 for an 8 GB GPU is kinda sad.
#53
napata
Vya Domus: Frequency and voltages automatically go down as the GPU approaches its power limit; as such it becomes voltage limited because there is no more headroom at those frequency steps. This happens on pretty much every GPU. If you look at TPU's power figures, practically all GPUs run at exactly their power limit.



Because that card is fast enough for games to become CPU limited in a lot more scenarios, a card like the 4070 will almost certainly be power limited 100% of the time.
The maximum stock voltage on a 4090 is 1050 mV, and it will not hit the power limit in most games at that voltage. In general, if a GPU is constantly hitting the power limit, it just means that the chosen power limit is actually too low for the programmed voltage-frequency curve. That's the case with Ampere and RDNA3, but if your power limit is high enough you'll just hit the top of the voltage-frequency curve, and the limit will be voltage.

I mean, my 4090 isn't CPU bottlenecked in Port Royal and it'll still draw less than 400 W in some scenes. I don't think it ever hits 450 W in that test. Why? Because it's hitting the top of the voltage-frequency curve before it hits the power limit. That's a voltage limit. That's also how most games work. I can show you some screenshots at DSR 4K resolutions if you want.

The Igor's Lab results are not because of a CPU bottleneck. A 4070 Ti isn't going to be CPU bottlenecked at 4K.

You don't seem to understand that a power limit is just a number chosen by Nvidia or AMD. The 4090 could've easily been a 200W TDP card if Nvidia wanted to but then it would've indeed always been power limited. With 450W that's not the case and the same might be true for 200W on the 4070.
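The argument in this post, that the effective clock is set by whichever of the two limits binds first, can be sketched as a toy model (hypothetical curve and power numbers chosen for illustration, not NVIDIA's actual boost algorithm):

```python
def effective_clock(vf_curve, power_limit_w, power_at):
    """Walk the V/F curve from the bottom and return the highest step
    whose estimated power fits under the limit; if every step fits,
    the card is voltage limited at the top of the curve."""
    best = vf_curve[0][1]
    for voltage_mv, freq_mhz in vf_curve:
        if power_at(voltage_mv, freq_mhz) > power_limit_w:
            break               # this step busts the budget: power limited
        best = freq_mhz         # still within budget
    return best

# Hypothetical curve topping out at 1050 mV / 2790 MHz (numbers from the post)
curve = [(900, 2520), (950, 2640), (1000, 2730), (1050, 2790)]
light = lambda v, f: 0.10 * f * (v / 1000) ** 2   # toy power models, watts
heavy = lambda v, f: 0.16 * f * (v / 1000) ** 2

print(effective_clock(curve, 450, light))   # 2790: voltage limited, under 450 W
print(effective_clock(curve, 450, heavy))   # 2730: power limited, clocks pulled back
```

With a light load the card tops out the curve without reaching 450 W (napata's 4090 scenario); with a heavy load the budget runs out one step early (the Ampere/RDNA3 scenario).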
#54
Vya Domus
napata: Because it's hitting the top of the voltage-frequency curve before it hits the power limit. That's a voltage limit. That's also how most games work.
You realize this makes no sense, right?

If the GPU is limited by voltage then it's always going to be that way; it makes no sense to design a card that hits a voltage limit before the power limit is reached.
napata: In general, if a GPU is constantly hitting the power limit, it just means that the chosen power limit is actually too low for the programmed voltage-frequency curve.
You have this completely backwards: the reason the voltage-frequency curve exists in the first place is to regulate power and temperature. GPUs are specifically designed to hit their designated power targets; it's the power and temperature limits that dictate how high the frequencies go, not the other way around.
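The physics behind this position is the CMOS dynamic power relation, P ≈ C·V²·f: because voltage rises with frequency along the curve, backing off a step cuts power superlinearly, which is what lets a governor hold a power target by trimming clocks. Illustrative numbers only (the capacitance constant is made up so the top step lands near a 200 W TGP):

```python
# Dynamic power in CMOS logic scales as P ~ C * V^2 * f, so dropping one
# V/F step cuts power disproportionately to the frequency lost.
C = 6.5e-8  # effective switched capacitance in farads (made-up constant)

def dyn_power(v_volts, f_hz):
    return C * v_volts**2 * f_hz

top  = dyn_power(1.05, 2.79e9)   # top of a hypothetical curve
step = dyn_power(1.00, 2.73e9)   # one step down
print(round(top), round(step))   # 200 177: ~2% less clock, ~11% less power
```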
#55
napata
Vya Domus: You realize this makes no sense, right?

If the GPU is limited by voltage then it's always going to be that way; it makes no sense to design a card that hits a voltage limit before the power limit is reached.

You have this completely backwards: the reason the voltage-frequency curve exists in the first place is to regulate power and temperature. GPUs are specifically designed to hit their designated power targets; it's the power and temperature limits that dictate how high the frequencies go, not the other way around.
Well, that's how Nvidia designed it. I have a 4090, so I know what I'm talking about: I can see it has trouble hitting 450 W in most workloads. It makes things less efficient, but plenty of hardware works like that; it's not that uncommon. A lot of AMD CPUs work like that in games: they'll hit the top of the frequency curve before they hit the power limit.

I can assure you there's no CPU bottleneck happening here.

I can show you Spider-Man with RT at 8K at sub-40 fps not hitting 450 W if you want a personal example. I don't get why you keep denying reality when there are so many examples you can check. It's fine to think it makes no sense, but I think a CPU that hits TJmax before hitting the power limit or voltage limit makes even less sense, and yet that also exists. It's really weird how you act as if you know better than people who actually own the card and can test it.

Also, I'm pretty sure a voltage-frequency curve is mostly dependent on the process node in combination with the architecture.

This is my stock voltage curve: once I hit 2790 MHz it won't go up no matter how little power I'm using, as the max voltage Nvidia allows at stock is 1050 mV.
#56
Vya Domus
napata: They'll hit the top of the frequency curve before they hit the power limit.
I mean, I am not going to go over this a million times, but that's absolutely not how most cards are designed to operate. I am 100% sure that if you simply increase the power target your card will run at higher average clock speeds even though nothing about the frequency curve would change, proving that I am right and that these GPUs are designed to adhere to the power limit above all else.

I can guarantee you the 4070 will run at its designated power target all the time, the same way the 4070 Ti does.

napata: I can show you Spider-Man with RT at 8K at sub-40 fps not hitting 450 W if you want a personal example
No, because it wouldn't prove anything: RT cores would be the bottleneck in that case, leaving shaders underutilized, and it's well known that RT workloads result in lower power consumption than pure rasterized workloads on Nvidia cards. Also, Spider-Man is known to be pretty heavy on the CPU under any circumstances, so it would be a bad choice anyway.