
GDDR6 GeForce RTX 4070 Tested, Loses 0-1% Performance Against RTX 4070 with GDDR6X

G6 uses less power than GDDR6X because it's slower


Not really. The GDDR6X on my 4090 sits at around 60-65 °C in demanding games, meaning 100% GPU usage
Yeah, a 4090, but I was referring to the cooler on something like a cheap AIB model of a 3060 Ti. In the case of some cheap 4070s, the ones made by certain companies have weaker cooling solutions. It's not uncommon for some companies to skimp in the cooling department, like not putting thermal pads on the memory modules.
 
Yeah, a 4090, but I was referring to the cooler on something like a cheap AIB model of a 3060 Ti. In the case of some cheap 4070s, the ones made by certain companies have weaker cooling solutions. It's not uncommon for some companies to skimp in the cooling department, like not putting thermal pads on the memory modules.
Not a problem at all - https://www.techpowerup.com/review/msi-geforce-rtx-4070-gaming-x-trio/37.html

The 3060 Ti did not use GDDR6X; only the 3070 Ti did in the 3060/3070 range

TjMax for GDDR6X is around 95-100 °C (some sources even list 110-120 °C), but most cards run it in the 60-80 °C range
 
This raises the question: why did Nvidia release the 4070 with GDDR6X and not non-X in the first place? An artificial price markup? Bragging rights?
Wasn't G6X significantly faster (in terms of clock speeds) at the time, and G6 has only recently caught up?
 
Wasn't G6X significantly faster (in terms of clock speeds) at the time, and G6 has only recently caught up?
Maybe. As G6X is just a slightly faster spinoff version of G6, I never paid much attention to it.
 
Seems like nVidia saves some money on this. Probably the manufacturers as well. The ones who won't benefit from the cost savings are the consumers. But hey, at least LJM can buy a couple more jackets...
 
Power efficiency should be checked at the same memory clock speed; otherwise people might get the wrong impression that GDDR6 is inherently more efficient, when really it's just that GDDR6X is clocked well past its efficiency sweet spot.
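
As a rough illustration of the sweet-spot argument (a sketch using the standard CMOS dynamic-power approximation P ≈ C·V²·f; the voltage and clock numbers below are made up, not actual GDDR6/6X figures):

```python
# Rough sketch: why clocking memory past its efficiency sweet spot hurts perf/W.
# Based on the standard CMOS dynamic-power approximation P ~ C * V^2 * f.
# The voltage/frequency numbers are illustrative, NOT real GDDR6/6X specs.

def dynamic_power(freq_ghz: float, voltage: float, c: float = 1.0) -> float:
    """Relative dynamic power: P ~ C * V^2 * f."""
    return c * voltage ** 2 * freq_ghz

# Hypothetical operating points: higher clocks typically require higher voltage.
sweet_spot = dynamic_power(freq_ghz=2.0, voltage=1.00)  # near the sweet spot
pushed = dynamic_power(freq_ghz=2.1, voltage=1.10)      # +5% clock, +10% voltage

print(f"Clock gain: {2.1 / 2.0 - 1:+.1%}")            # +5.0%
print(f"Power gain: {pushed / sweet_spot - 1:+.1%}")  # roughly +27%
```

The point being that a small clock deficit can buy a disproportionately large power saving, which is why comparing the two memory types at different clocks says little about the memory technology itself.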
 
Didn't GDDR6X run hot AF? I heard that on 30-series cards this memory type ran much hotter
They also just had crap cooling for the memory.
 
As expected. The main question is how this affects power efficiency. Can you squeeze even more FPS per W with G6?

It's 4% in HWUB's bench, which turns out to be almost as much as the reduction in bandwidth:


To me this indicates that the 4070 is definitely memory bottlenecked in some titles, because if it wasn't you wouldn't have seen a near 1:1 drop-off in performance.
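
For context, a quick back-of-the-envelope check (a sketch assuming the widely reported data rates of 21 Gbps for the GDDR6X card and 20 Gbps for the GDDR6 card, both on the 4070's 192-bit bus):

```python
# Back-of-the-envelope memory bandwidth comparison for the two RTX 4070 variants.
# Assumes the widely reported data rates: 21 Gbps (GDDR6X) vs 20 Gbps (GDDR6).

BUS_WIDTH_BITS = 192  # RTX 4070 memory bus width

def peak_bandwidth_gbs(data_rate_gbps: float, bus_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak bandwidth in GB/s = per-pin data rate (Gbps) * bus width / 8 bits per byte."""
    return data_rate_gbps * bus_bits / 8

gddr6x = peak_bandwidth_gbs(21.0)  # 504 GB/s
gddr6 = peak_bandwidth_gbs(20.0)   # 480 GB/s

print(f"GDDR6X: {gddr6x:.0f} GB/s, GDDR6: {gddr6:.0f} GB/s")
print(f"Bandwidth reduction: {1 - gddr6 / gddr6x:.1%}")  # ~4.8%, close to the ~4% fps drop
```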

Mind you, the performance difference is really beside the point; the fact remains that Nvidia is using cheaper, slower memory while not passing the savings on to customers or making it clear in the model name.


Power efficiency should be checked at the same memory clock speed; otherwise people might get the wrong impression that GDDR6 is inherently more efficient, when really it's just that GDDR6X is clocked well past its efficiency sweet spot.

[chart: 4070 GDDR6 vs GDDR6X power consumption]


The 4070 GDDR6 model consumes 1.52% less power while being 4% slower. It also consumes more power at idle. So overall a small reduction in efficiency.
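
Putting rough numbers on that efficiency claim (a quick sketch using only the figures quoted above):

```python
# Relative efficiency (fps per watt) of the GDDR6 model vs the GDDR6X model,
# using the figures from the post above: ~4% slower, ~1.52% less power.

perf_ratio = 1 - 0.04     # 0.96: the GDDR6 model delivers 96% of the fps
power_ratio = 1 - 0.0152  # 0.9848: ...at 98.48% of the power

efficiency_ratio = perf_ratio / power_ratio
print(f"Relative fps/W: {efficiency_ratio:.3f} ({efficiency_ratio - 1:+.1%})")  # ~-2.5%
```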
 
the fact remains that Nvidia is using cheaper, slower memory while not passing the savings on to customers or making it clear in the model name.
That's because they can get away with it. Too obvious to discuss.
overall a small reduction in efficiency.
Not great. At least I know what GPU to avoid, unless the price is really good.
 
It's 4% in HWUB's bench, which turns out to be almost as much as the reduction in bandwidth:


To me this indicates that the 4070 is definitely memory bottlenecked in some titles, because if it wasn't you wouldn't have seen a near 1:1 drop-off in performance.

Mind you, the performance difference is really beside the point; the fact remains that Nvidia is using cheaper, slower memory while not passing the savings on to customers or making it clear in the model name.





The 4070 GDDR6 model consumes 1.52% less power while being 4% slower. It also consumes more power at idle. So overall a small reduction in efficiency.

They tested a Gigabyte AIB card vs the Founders Edition 4070 (as there's no Founders Edition of this GDDR6 version, of course), and if you look at W1zz's tests of various AIB models, the idle power usage can vary model-to-model by around the 2 W listed here. The same goes for the load power usage; these are both within model variation, so there's little to conclude here.

Also, the fps reduction at 1440p, where this card is likely to be used, is only 3% in their testing. The 4% is at 1080p, which isn't really a primary use case for this GPU.

Still, with seemingly no price reduction from the GDDR6X model, this leaves consumers with a slightly lower-value product.
 
They tested a Gigabyte AIB card vs the Founders Edition 4070 (as there's no Founders Edition of this GDDR6 version, of course), and if you look at W1zz's tests of various AIB models, the idle power usage can vary model-to-model by around the 2 W listed here. The same goes for the load power usage; these are both within model variation, so there's little to conclude here.

The 4070 GDDR6 tested in the video is clocked a mere 15 MHz higher than a reference 4070: https://www.gigabyte.com/Graphics-Card/GV-N4070WF3OCV2-12GD/sp#sp

So for all intents and purposes it is a good comparison to the reference model.

FYI, a small difference doesn't mean there's nothing to conclude. You're throwing shade on every model-variant review with a comment like that. You could perhaps advise waiting for a larger sample size, but small doesn't inherently mean the data isn't pointing to a genuine difference.
 
The 4070 GDDR6 tested in the video is clocked a mere 15 MHz higher than a reference 4070: https://www.gigabyte.com/Graphics-Card/GV-N4070WF3OCV2-12GD/sp#sp

So for all intents and purposes it is a good comparison to the reference model.

FYI, a small difference doesn't mean there's nothing to conclude. You're throwing shade on every model-variant review with a comment like that. You could perhaps advise waiting for a larger sample size, but small doesn't inherently mean the data isn't pointing to a genuine difference.

Yes, I'm very much pointing out that every model variant is subject to variations that are larger than shown in the HUB video. For example, Asus Dual 4070 Super vs the FE 4070 Super, both use stock Nvidia clocks:

[chart: idle power draw, Asus Dual RTX 4070 Super vs RTX 4070 Super FE]


A 4 W difference at idle. And I wouldn't be surprised if five different samples of both the FE and the Asus Dual overlapped in idle power. The same goes for load power. There's too little data in any of these examples to draw conclusions, so IMO it's all within normal variance until proven otherwise. I'd love to see a HWInfo chart of the GDDR6 vs GDDR6X power-use difference in the two 4070s though, to see what the actual difference in power draw is.
 