
NVIDIA GeForce RTX 4070 has an Average Gaming Power Draw of 186 W

Joined
Feb 20, 2019
Messages
8,332 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
12GB is a repeat of the 3070. The card could just barely fit new AAA games in its VRAM buffer at the time, but not even two years later we are already seeing it forced to drop settings and suffer stuttering issues. The same is likely to happen to the 4070 / 4070 Ti. This kind of price for a card that will last less than two years is not what I'd call acceptable, and it'll kill PC gaming, as the vast majority of people cannot afford to drop that kind of money on just their GPU every two years or less.
I think you are right but this time around it will be much longer than two years, so it won't matter as much:

The Xbox Series X and PS5 both have 16GB of shared RAM, usually allocating 10-12GB as VRAM, and both consoles target 4K. Both consoles are "current" for the next 3-4 years, and when their successors appear in 2027 (rumoured), game devs won't instantly swap to optimising for the newest consoles; they tend to shift away from the outgoing generation over a year or so, while the vast majority of their paying customers are still on the older hardware.

IMO 12GB is enough for at least 3 years, maybe even 5. Meanwhile, the 10GB of the 3080 and 8GB of the 3070 were widely questioned at launch - I forget whether it was a Sony or Microsoft presentation that claimed up to 13.5GB of the shared memory could be allocated to graphics, but the point is that we had entire consoles with 13.5GB of VRAM costing less than the GPUs in question that were hobbled out of the gate by miserly amounts of VRAM.
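
To put rough numbers on that comparison, here's a quick back-of-the-envelope sketch in Python; the totals are public, but the OS and CPU-side reservation figures are my own assumptions for illustration, not official platform specs:

```python
# Back-of-the-envelope graphics memory budgets. Total shared RAM per console is
# public; the OS / CPU-side game-data reservations are rough assumptions.
consoles = {
    # name: (total shared GB, assumed OS + CPU-side game data GB)
    "PS5":           (16.0, 4.0),
    "Xbox Series X": (16.0, 4.5),
    "Xbox Series S": (10.0, 2.0),
}
gpus = {"RTX 3070": 8, "RTX 3080": 10, "RTX 4070 / 4070 Ti": 12}

for name, (total, reserved) in consoles.items():
    print(f"{name}: ~{total - reserved:.1f} GB plausibly available for graphics")
for name, vram in gpus.items():
    print(f"{name}: {vram} GB dedicated VRAM")
```

On those assumptions the big consoles land at roughly 11-12GB for graphics, which is why 8GB and 10GB cards looked tight from day one while 12GB is at least in the same ballpark.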

The 3070 in particular has been scaling poorly with resolution for a good year now, but it's only in the last couple of months that the 3070 has really struggled. In 2023 we've had four big-budget AAA games which run like ass at maximum texture quality on 8GB cards, with 3070 and 3070 Ti owners given the no-win choice between stuttering and significantly lower graphics settings.
 
Joined
Sep 10, 2018
Messages
6,959 (3.03/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
I think you are right but this time around it will be much longer than two years, so it won't matter as much:

The Xbox Series X and PS5 both have 16GB of shared RAM, usually allocating 10-12GB as VRAM, and both consoles target 4K. Both consoles are "current" for the next 3-4 years, and when their successors appear in 2027 (rumoured), game devs won't instantly swap to optimising for the newest consoles; they tend to shift away from the outgoing generation over a year or so, while the vast majority of their paying customers are still on the older hardware.

IMO 12GB is enough for at least 3 years, maybe even 5. Meanwhile, the 10GB of the 3080 and 8GB of the 3070 were widely questioned at launch - I forget whether it was a Sony or Microsoft presentation that claimed up to 13.5GB of the shared memory could be allocated to graphics, but the point is that we had entire consoles with 13.5GB of VRAM costing less than the GPUs in question that were hobbled out of the gate by miserly amounts of VRAM.

The 3070 in particular has been scaling poorly with resolution for a good year now, but it's only in the last couple of months that the 3070 has really struggled. Four big-budget AAA games run like ass on 8GB cards, with 3070 and 3070 Ti owners given the no-win choice between stuttering and significantly lower graphics settings.

Yeah, 12GB is the bare minimum, but it should be OK through the console generation except with terrible ports.

Also, the Series S gives games something like 8.5 GB and targets 1080p, so all the sub-$400 GPUs should be fine for 1080p... although $400 for an 8GB GPU is kinda sad.
 
Joined
Sep 27, 2008
Messages
1,210 (0.20/day)
RX 6800 can be had for around $500 on Amazon in the US so it's going to be cheaper. It also has more VRAM, which is important given games are already using more than the 4070 Ti's 12GB frame buffer.

$485 on Newegg atm
 
Joined
Dec 5, 2020
Messages
203 (0.14/day)
Frequency and voltage automatically go down as the GPU approaches its power limit; at that point it becomes voltage limited because there is no more headroom at those frequency steps, and this happens on pretty much every GPU. If you look at TPU's power figures, practically all GPUs run at exactly their power limit.



Because that card is fast enough for games to become CPU limited in a lot more scenarios, a card like the 4070 will almost certainly be power limited 100% of the time.
The maximum stock voltage on a 4090 is 1050 mV, and it will not hit the power limit in most games at that voltage. In general, if a GPU is constantly hitting the power limit, it just means that the chosen power limit is actually too low for the programmed voltage-frequency curve. That's the case with Ampere and RDNA3, but if your power limit is high enough you'll just hit the top of the voltage-frequency curve and the limiting factor will be voltage.

I mean, my 4090 isn't CPU bottlenecked in Port Royal and it'll still draw less than 400W in some scenes. I don't think it ever hits 450W in that test. Why? Because it's hitting the top of the voltage-frequency curve before it hits the power limit. That's a voltage limit. That's also how most games work. I can show you some screenshots at DSR 4K resolutions if you want?

The Igor's Lab results are not because of a CPU bottleneck. A 4070 Ti isn't going to be CPU bottlenecked at 4K.

You don't seem to understand that a power limit is just a number chosen by Nvidia or AMD. The 4090 could easily have been a 200W TDP card if Nvidia had wanted, but then it would indeed always have been power limited. With 450W that's not the case, and the same might be true for 200W on the 4070.
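
To make the mechanism concrete, here's a toy Python sketch of a boost algorithm that walks an ascending voltage-frequency curve and stops at whichever comes first: the board power limit or the top of the curve. The curve points and the power model are invented numbers for illustration, not Nvidia's actual tables.

```python
# Toy model of GPU boost: pick the highest point on a voltage-frequency (V/F)
# curve whose estimated board power stays under the power limit.
# All numbers are illustrative, not real Ada/Ampere data.

# (frequency in MHz, voltage in volts), ascending, capped at 1.050 V stock.
VF_CURVE = [(2200, 0.85), (2400, 0.90), (2600, 0.95), (2700, 1.00), (2790, 1.05)]

def estimated_power(freq_mhz, volts, load=1.0):
    """Dynamic power scales roughly with f * V^2; 'load' models how hard the
    workload actually drives the shaders (lighter scenes draw less)."""
    k = 0.135  # arbitrary constant chosen so the outputs look GPU-like (watts)
    return k * freq_mhz * volts ** 2 * load

def boost(power_limit_w, load=1.0):
    best = VF_CURVE[0]
    for point in VF_CURVE:
        if estimated_power(*point, load) <= power_limit_w:
            best = point
    freq, volts = best
    reason = "voltage (top of curve)" if best == VF_CURVE[-1] else "power"
    return freq, volts, reason

print(boost(450))  # -> (2790, 1.05, 'voltage (top of curve)'): voltage limited
print(boost(285))  # -> (2400, 0.9, 'power'): power limited
```

With these made-up numbers, a 450W limit tops out the curve (voltage limited) while a 285W limit stops a few bins lower (power limited), which is exactly the distinction being argued over here.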
 
Joined
Jan 8, 2017
Messages
9,499 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 MHz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Because it's hitting the top of the voltage-frequency curve before it hits the power limit. That's a voltage limit. That's also how most games work.
You realize this makes no sense, right?

If the GPU is limited by voltage then it's always going to be that way; it makes no sense to design a card that hits a voltage limit before the power limit is reached.
In general, if a GPU is constantly hitting the power limit, it just means that the chosen power limit is actually too low for the programmed voltage-frequency curve.
You have this completely backwards: the reason the voltage-frequency curve exists in the first place is to regulate power and temperature. GPUs are specifically designed to hit their designated power targets; it's the power and temperature limits that dictate how high the frequencies go, not the other way around.
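
For what it's worth, the reason stepping down the curve reins in power is the usual dynamic-power relationship, roughly P ∝ f·V². A quick illustrative calculation, with made-up operating points rather than real silicon data:

```python
# Dynamic power scales roughly with frequency * voltage^2, which is why dropping
# down the V/F curve is how a GPU pulls itself back under its power target.
def relative_power(freq_mhz, volts):
    return freq_mhz * volts ** 2

top  = relative_power(2790, 1.05)  # top of a hypothetical stock curve
down = relative_power(2600, 0.95)  # a few bins lower
print(f"Dropping a few bins cuts dynamic power by ~{(1 - down / top) * 100:.0f}%")
```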
 
Joined
Dec 5, 2020
Messages
203 (0.14/day)
You realize this makes no sense, right?

If the GPU is limited by voltage then it's always going to be that way; it makes no sense to design a card that hits a voltage limit before the power limit is reached.

You have this completely backwards: the reason the voltage-frequency curve exists in the first place is to regulate power and temperature. GPUs are specifically designed to hit their designated power targets; it's the power and temperature limits that dictate how high the frequencies go, not the other way around.
Well, that's how Nvidia designed it. I have a 4090, so I know what I'm talking about: I can see that it has trouble hitting 450W in most things. It makes things less efficient, but plenty of hardware works like that; it's not that uncommon. A lot of AMD CPUs work like that in games. They'll hit the top of the frequency curve before they hit the power limit.

I can assure you there's no CPU bottleneck happening here:

I can show you Spider-Man with RT at 8K at sub 40 fps not hitting 450W if you want a personal example? I don't get why you keep denying reality when there are so many examples you can check. It's fine to think it makes no sense, but I think a CPU that hits TJmax before hitting the power limit or voltage limit makes even less sense, and yet that also exists. It's really weird how you act as if you know better than people who actually own the card and can test it.

Also, I'm pretty sure a voltage-frequency curve mostly depends on the process node in combination with the architecture.

This is my stock voltage curve; once I hit 2790 MHz it won't go any higher no matter how little power I'm using, as the maximum voltage Nvidia allows at stock is 1050 mV.
[Screenshot: stock voltage-frequency curve]
 
Joined
Jan 8, 2017
Messages
9,499 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 MHz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
They'll hit the top of the frequency curve before they hit the power limit.
I mean, I am not going to go over this a million times, but that's absolutely not how most cards are designed to operate. I am 100% sure that if you simply increase the power target, your card will run at higher average clock speeds even though nothing about the frequency curve would change, proving that I am right and that these GPUs are designed to adhere to the power limit above all else.

I can guarantee you the 4070 will run at its designated power target all the time, in the same way the 4070 Ti does:

[Chart: RTX 4070 Ti power consumption]


I can show you Spider-Man with RT at 8K at sub 40 fps not hitting 450W if you want a personal example
No, because it wouldn't prove anything: the RT cores would be the bottleneck in that case, leaving the shaders underutilized, and it's well known that RT workloads result in lower power consumption than pure rasterized workloads on Nvidia cards. Also, Spider-Man is known to be pretty heavy on the CPU under any circumstances, so it would be a bad choice anyway.
 