
NVIDIA GeForce RTX 3070 Founders Edition

Nice Review! But how was the experience with coil whine, if any?
I checked again for you. If I run over 200 FPS and put my ear right next to the card I can hear a little bit of coil whine, but it's minimal, similar to other cards, not worth mentioning.
 
I really don't get Nvidia timing this time around. Why show 3070 a day before RDNA2 reveal? AMD will surely have an answer for 3070 in RDNA2 lineup tomorrow. All it has to do is to price it a bit cheaper and pop more vram on top of it to make 3070 look silly. What am I missing?
The key question is whether this is a real release, or another 3080-style move to trick AMD into pricing its own cards lower, while NV shrugs it off with "oh, no availability, sorry" until they have an actual answer to Big Navi.
 
Quite obviously to steal some thunder. AMD can't 'pop more vram' on the card a day before. AMD's card is what it is on that front. Surely clocks are being tweaked, but yeah.
I mean the 3070 is such an easy target for AMD to beat. Adding 4 GB of extra GDDR6 costs 20 bucks at most, and as we've seen from reputable leaks, RDNA2 can hit nearly 2.5 GHz now. All AMD has to offer is the Xbox Series X's 52 CU GPU (less than 300 mm² die size) clocked at 2.2 GHz (the PS5's gaming clock) to marginally beat the 3070 in standard rasterization performance. I'd expect Nvidia to wait for the 6700 XT announcement and then tweak and release the 3070 accordingly. I think Nvidia will lose the xx70 battle this time around if it can't release a competitive 3070 Ti soon after the 6700 XT launches.
 
I mean the 3070 is such an easy target for AMD to beat. Adding 4 GB of extra GDDR6 costs 20 bucks at most, and as we've seen from reputable leaks, RDNA2 can hit nearly 2.5 GHz now. All AMD has to offer is the Xbox Series X's 52 CU GPU (less than 300 mm² die size) clocked at 2.2 GHz (the PS5's gaming clock) to marginally beat the 3070 in standard rasterization performance. I'd expect Nvidia to wait for the 6700 XT announcement and then tweak and release the 3070 accordingly. I think Nvidia will lose the xx70 battle this time around if it can't release a competitive 3070 Ti soon after the 6700 XT launches.
Not the goal I shot at... but ok.
 
From GUru3d:

Much like the 3080, the GeForce RTX 3070 does exhibit coil squeal. Is it annoying? It's at a level you can hear it. In a closed chassis, that noise would fade away in the background. However, with an open chassis, you can hear coil whine/squeal. Graphics cards all make this in some form, especially at high framerates; this can be perceived.

Feels like a lot of words to say nothing, plus it's false that coil whine fades away into background noise, which has lower frequencies. Is it more or less what you'd expect from a high-end card? Reading between the lines, as a reviewer myself, I think it's noticeable, and that's not good.

I checked again for you. If I run over 200 FPS and put my ear right next to the card I can hear a little bit of coil whine, but it's minimal, similar to other cards, not worth mentioning.
The Guru3d mention made it sound like a worse problem. Thanks for your second check, I'll trust you.
 
Thank you for the detailed review. Nice GPU, the 3070 is truly a nice beast.
 
I really don't get Nvidia timing this time around. Why show 3070 a day before RDNA2 reveal? AMD will surely have an answer for 3070 in RDNA2 lineup tomorrow. All it has to do is to price it a bit cheaper and pop more vram on top of it to make 3070 look silly. What am I missing?

1. RTRT performance. AMD will be a lot slower.
2. DLSS (read: free performance, a tier higher performance than your card is actually capable of). Just missing.
 
@W1zzard : why don't you mention the very high multi-monitor power draw in the cons of your Ampere reviews? Compared to Turing or Pascal it's significantly worse. AMD gets constantly bashed for this metric, but nVidia gets a free pass?
 
@W1zzard : why don't you mention the very high multi-monitor power draw in the cons of your Ampere reviews? Compared to Turing or Pascal it's significantly worse. AMD gets constantly bashed for this metric, but nVidia gets a free pass?
It's half of AMD's?

The Guru3d mention made it sound like a worse problem. Thanks for your second check, I'll trust you.
Could be variance between samples
 
Since Nvidia stopped shipping the 3080 and 3090 Founders Editions, the 3070 FE will probably go only to IT influencers.
 
@W1zzard : why don't you mention the very high multi-monitor power draw in the cons of your Ampere reviews? Compared to Turing or Pascal it's significantly worse. AMD gets constantly bashed for this metric, but nVidia gets a free pass?

At least with the NVIDIA cards it's not running the VRAM at max clocks. For some reason, running multi-monitor (and even some single monitors at 144 Hz) causes the VRAM to stay maxed on the AMD cards.
 
1. RTRT performance. AMD will be a lot slower.
2. DLSS (read: free performance, a tier higher performance than your card is actually capable of). Just missing.
1. RT really doesn't matter for now, and the PS5/Xbox Series X will have RDNA chips in them, so expect AMD's implementation of ray tracing to rule the gaming market, with the only exception being Nvidia-sponsored titles.
2. Yeah, AI upscaling could become a valid advantage of Ampere IF DLSS can be implemented at the GPU driver level rather than in game code, as not many devs will walk the extra mile to add it to the code without financial compensation for doing so, I'm afraid :(
 
The Ampere SM has 128 CUDA cores, however only 64 of them can execute FP32 operations concurrently with 64 integer operations in a clock cycle, meaning that in the best-case scenario the Ampere SM has twice the shading power (128 FP32 per cycle) of a Turing one and in the worst case it's about as fast (64 FP32).

In other words the SM has some pretty bad constraints making it less efficient per cycle in terms of FP32.
This is exactly what I was looking for, thank you. So indeed, it seems most games will never fully benefit from the new SM array, as games seldom issue pure FP32 work without INT32 alongside it, correct?
 
This is exactly what I was looking for, thank you. So indeed, it seems most games will never fully benefit from the new SM array, as games seldom issue pure FP32 work without INT32 alongside it, correct?

Graphics workloads are predominantly a mix of FP32 and INT32. When integer instructions are issued, the SM can no longer process 128 FP32 operations in that clock cycle. That happens pretty often since you constantly need to calculate addresses, which needs INT32, so I suspect the SM hardly ever achieves its maximum FP32 throughput.
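
For a rough feel of the numbers, here's a back-of-the-envelope model (my own sketch, not something from the review): one Ampere SM is treated as a 64-wide FP32-only datapath plus a 64-wide datapath that does either FP32 or INT32, versus Turing's separate 64-wide FP32 and 64-wide INT32 paths. The ~36 INT32 instructions per 100 FP32 figure is NVIDIA's commonly quoted average for games, so treat it as an assumption rather than a measurement.

```python
def ampere_fp32_per_cycle(int_per_fp32: float) -> float:
    """Effective FP32 ops/cycle of one Ampere SM under a simple issue model.

    Datapath A: 64 FP32/cycle, always.
    Datapath B: 64 FP32/cycle OR 64 INT32/cycle, so INT32 work steals its cycles.
    """
    # Per 1 unit of FP32 work there are `int_per_fp32` units of INT32 work.
    # Cycles C to retire it: 1 = 64*C + 64*(C - int_per_fp32/64)
    #   =>  C = (1 + int_per_fp32) / 128
    cycles = (1 + int_per_fp32) / 128
    return 1 / cycles


def turing_fp32_per_cycle(int_per_fp32: float) -> float:
    """Turing SM: independent 64-wide FP32 and 64-wide INT32 datapaths."""
    cycles = max(1 / 64, int_per_fp32 / 64)  # whichever path is the bottleneck
    return 1 / cycles


for r in (0.0, 0.36, 1.0):  # 0.36 ~= NVIDIA's "36 INT32 per 100 FP32" claim
    print(f"INT32 per FP32 = {r:4.2f}: "
          f"Ampere ~{ampere_fp32_per_cycle(r):5.1f} FP32/clk, "
          f"Turing ~{turing_fp32_per_cycle(r):5.1f} FP32/clk")
```

With that instruction mix the model lands around 94 FP32/clock for Ampere, i.e. roughly 1.5x a Turing SM rather than the headline 2x, which matches the point above about the SM rarely hitting its peak.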
 
W1zzard has mentioned this and this has been dispelled a thousand times already:

No he has not. Next-gen games haven't even been released yet and we are on the edge. He said it was good enough for now, which isn't what we should be looking at; what matters is the next year, as PS5-level games get ported to PC. There is also a difference between "enough" and providing the hardware so that game makers can ship better texture packs, and nVidia is holding PCs back here. I was able to use more than 4GB the day the 1070 released because game makers released new options and new texture packs.

4 years from the 770 to the 1070 got us 4x more VRAM. 4 years from the 1070 to the 3070 got us nothing at all. That has never happened before and I think we shouldn't be so glib about it.
 
There is only ONE Benchmark I'm interested in.

MICROSOFT FLIGHT SIMULATOR 2020 4K (since no one bothers benchmarking DCS-WORLD).

The 3070 is just below the 2080Ti at 30fps (+/- 5 fps)

For $500 vs. $1200, that's pretty good.

But I have the 3090 and I can't even see 60fps in that game.
 
No he has not. Next-gen games haven't even been released yet and we are on the edge. He said it was good enough for now, which isn't what we should be looking at; what matters is the next year, as PS5-level games get ported to PC. There is also a difference between "enough" and providing the hardware so that game makers can ship better texture packs, and nVidia is holding PCs back here. I was able to use more than 4GB the day the 1070 released because game makers released new options and new texture packs.

4 years from the 770 to the 1070 got us 4x more VRAM. 4 years from the 1070 to the 3070 got us nothing at all. That has never happened before and I think we shouldn't be so glib about it.
It will be OK to hold your breath... seriously.

People need to get over this RAM thing already... so much crying over spilled milk.
 
A wonderful review, a great card, only I cannot quite accept the fact that it's still priced quite high:

GTX 770 - $399
GTX 970 - $329
GTX 1070 - $449

Lastly, it's the most power efficient Ampere GPU, but I expected a much higher overall efficiency for this generation. It's only a few percent more efficient than the GTX 1660 Ti, which is based on a much less advanced node. I expect availability will totally suck for at least the next three months - Ampere cards are so good that several generations of AMD/NVIDIA owners are in line to get them. The RTX 3080 is still nowhere to be seen: https://www.nowinstock.net/computers/videocards/nvidia/rtx3080/

As a consumer, I also want cheaper products.

But keep in mind that there is not only inflation, but so many other factors in the world that have an influence on these prices (like corona). The 770 was released in 2013, that's 7 years ago. How much more performance do we get out of a 3070 than we did from a 770? No other market can keep up with such major iterations. CPUs, cars, planes, internet speeds... nothing can keep up with the major gains we get every year from GPUs. So at least we can be happy about that :)
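
Just to put a rough number on the inflation part (a quick sketch of my own, using the launch MSRPs quoted above and an assumed flat ~1.8% per year rate rather than official CPI data):

```python
# Adjust the quoted launch MSRPs into 2020 dollars.
# INFLATION is an assumed flat annual rate, not an official CPI figure.
launches = {
    "GTX 770":  (2013, 399),
    "GTX 970":  (2014, 329),
    "GTX 1070": (2016, 449),
    "RTX 3070": (2020, 499),
}
INFLATION = 0.018

for card, (year, msrp) in launches.items():
    adjusted = msrp * (1 + INFLATION) ** (2020 - year)
    print(f"{card}: ${msrp} in {year} is roughly ${adjusted:.0f} in 2020 dollars")
```

Under that assumption the 770's $399 comes out to roughly $450 in today's money, so inflation explains part of the gap, but clearly not all of it.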
 
There is only ONE Benchmark I'm interested in.

MICROSOFT FLIGHT SIMULATOR 2020 4K (since no one bothers benchmarking DCS-WORLD).

The 3070 is just below the 2080Ti at 30fps (+/- 5 fps)

For $500 vs. $1200, that's pretty good.

But I have the 3090 and I can't even see 60fps in that game.

The game is currently extremely poorly optimized and not yet ported to DirectX 12 (which is doubly strange since it's published by Xbox Game Studios), which is why Wizz refuses to review it.
 