
NVIDIA GeForce RTX 3080 Founders Edition

Which 4K graph made you think so? The RTX 3080's minimum fps is above the RTX 2080 Ti's average. I don't remember the last time we had such a huge leap in performance. How is it "underwhelming"?

Maybe you could share some graphs to prove your PoV?



It's exactly as powerful as advertised, if not better, at 4K, the resolution it was created for. I've never seen so many AMD trolls in one discussion. Also, it costs $700, so I'm not sure about greed. And no one forces you to buy it.

Speaking of greed:

[attached images, including TPU's relative performance chart at 3840x2160]

Here you go. The 1080 Ti came out at $700 and performed almost twice as fast as a 980 Ti that was also $700. Then the 20 series came out and nearly doubled prices, with the 2080 Ti at around $1,200. Everyone is impressed, but in reality NVIDIA just shifted back to normal pricing because they're actually expecting competition. What everyone here is concerned about is that the 3080 is a cut-down version of their biggest gaming chip running close to the practical power/thermal limit, and at that point you run into diminishing returns as you scale performance any higher. In summary: no, the 3080 is not a 2080 successor, it's a 2080 Ti successor, which was itself a cut-down Titan.
 
Thought about it, decided against it. Old DX11 engine, badly designed, badly optimized, not a proper simulator, small player base. When they add DX12, I might reconsider.
I can agree it's badly optimized at the moment, but not that it has a small player base or isn't a proper simulator. Predicted sales are 2.27 million units over the next three years, and third-party devs will soon offer state-of-the-art, highly detailed planes as add-ons through the MS FS 2020 shop. FS2020 is predestined to become the next X-Plane/Prepar3D kind of sim.
 
And the fact that they practically doubled the shader units from the 2080 Ti (not the 2080 Super, mind you), along with 8 more ROPs and more RT/Tensor cores. I was realistically expecting this thing to exceed 300 W.

That's the catch: the shaders have been doubled, but not the number of SMs. In other words, more execution units now share the same control logic/registers/cache, which are the real power hogs in a chip.

Nvidia has done this before, to a more extreme extent, with Kepler, which had six times the shaders per SM versus Fermi. As a result it was one of the most inefficient architectures ever per SM in terms of performance: GK110 was 90% faster than GF110 despite having nearly 6x the FP32 units (GF110's shaders did run at a faster hot clock; still, they are worlds apart).

It's a similar story here: a lot of shading power, but used rather inefficiently because so many resources are shared. That's why the power consumption looks quite bad when you think about it.
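To put rough numbers on that, here's a back-of-the-envelope sketch in Python using only the figures from the post above (GF110's hot clock is deliberately ignored, as noted):

```python
# Illustrative per-unit scaling, Fermi GF110 -> Kepler GK110.
# The unit counts are the well-known FP32 totals; the 1.90x speedup
# is the ballpark figure quoted above, not a measurement.
gf110_units, gk110_units = 512, 2880
speedup = 1.90                             # GK110 ~90% faster overall

unit_ratio = gk110_units / gf110_units     # 5.625x the FP32 units
per_unit = speedup / unit_ratio            # ~0.34x

print(f"{unit_ratio:.2f}x the units for {speedup:.2f}x the perf "
      f"-> each unit does ~{per_unit:.2f}x the work of a Fermi unit")
```

In other words, each extra shader delivered only about a third of linear scaling, which is the same "shared control logic" effect being described for Ampere.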
 
AMD or Intel, doesn't matter! It's time to upgrade from the 10 series and game on! It's almost a 3x difference in performance for me coming from a 1070 Ti :)
 
Sorry to bother you, but did you take a look at either reducing the power limit or manually altering the voltage/frequency curve in Afterburner or similar? computerbase.de reported only a few percent less performance with a 270 W limit, and someone linked me a video of the curve being changed to good effect on another site.

It honestly sounds like Turing, where you can't drop the voltage with the slider but can with the curve editor.
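For reference, the power-limit half of that experiment can be scripted. A minimal sketch using the pynvml bindings (assumes device index 0 is the 3080 and admin/root rights; this is only a board power cap, not a true undervolt):

```python
# Sketch: replicate the ComputerBase-style 270 W power cap via NVML.
# This lowers the board power limit only; it does not touch the V/F curve.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)   # assumed: the RTX 3080

default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(gpu)
print(f"default limit: {default_mw / 1000:.0f} W")

pynvml.nvmlDeviceSetPowerManagementLimit(gpu, 270_000)  # value in mW
print("limit set to 270 W")

pynvml.nvmlShutdown()
```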
 
@W1zzard

In the overclocking section you briefly mention that undervolting is not possible. Can you elaborate on this very important point? Is it prevented on Ampere at a hardware level, or is it because the necessary tools are not yet available? The inability to undervolt these extremely power-hungry chips would be a serious shortcoming, and I can't believe NVIDIA would remove it for this exact generation after allowing access to the whole voltage/clock curve for so long.

Other than that, an excellent article as usual!
 
Great review as usual.

It's weird that they prioritized maximum performance over energy efficiency, unlike in previous gens. By going from 270 W to 320 W, they sacrificed ~15% energy efficiency for just 4% higher performance.
Kinda pulled an AMD there...
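Checking those numbers (a quick sketch; the exact efficiency loss depends on which configuration you take as the baseline):

```python
# Perf/W at 320 W relative to a hypothetical 270 W configuration,
# using the ~4% performance delta quoted above.
power_ratio = 320 / 270            # ~1.185 -> ~18.5% more power
perf_ratio = 1.04                  # ~4% more performance

eff_320_vs_270 = perf_ratio / power_ratio
print(f"perf/W at 320 W: {eff_320_vs_270:.2f}x of the 270 W config")
# ~0.88x, i.e. ~12% worse at 320 W; equivalently, the 270 W config is
# ~14% more efficient, in the same ballpark as the 15% figure above.
```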
 
At 1440p and 1080p the gains aren't as big, which could be the CPU not keeping up. Could we test again with the upcoming Zen 3?
 
That's the catch: the shaders have been doubled, but not the number of SMs. In other words, more execution units now share the same control logic/registers/cache, which are the real power hogs in a chip.

Nvidia has done this before, to a more extreme extent, with Kepler, which had six times the shaders per SM versus Fermi. As a result it was one of the most inefficient architectures ever per SM in terms of performance: GK110 was 90% faster than GF110 despite having nearly 6x the FP32 units (GF110's shaders did run at a faster hot clock; still, they are worlds apart).

It's a similar story here: a lot of shading power, but used rather inefficiently because so many resources are shared. That's why the power consumption looks quite bad when you think about it.

No doubt that doubling the shader units within the same number of streaming multiprocessors (64/SM in Turing vs 128/SM in Ampere) would increase power consumption. I don't consider it as efficient as Maxwell -> Pascal, per se, but I don't consider it a complete waste of power either. I'm also considering that the RT and Tensor cores add to the weight.

All in all, I believe the slight sacrifice in energy efficiency is justified, as long as it doesn't get to Fury X or 290X levels of wasted power. With this card I can probably play PUBG at 4K 144 Hz with competitive settings: a mix of medium and low with AA maxed out for the visual clarity that matters when spotting opponents.

TL;DR - This card is overkill for those still gaming at 1080p or 1440p. If you're aiming at ultrawide (3440x1440) or 4K, it seems to hit the sweet spot.
 
I don't see why the power consumption is that surprising if you look at the specs. Sure, it's on 8 nm now, but that's coming from 12 nm.

Nitpick: TSMC 16FF (12N) -> Samsung 10LPU (8N). 320 W+ is a ~30% increase in power for a ~30% increase in perf.

No doubt that doubling the shader units within the same number of streaming multiprocessors (64/SM in Turing vs 128/SM in Ampere) would increase power consumption. I don't consider it as efficient as Maxwell -> Pascal, per se, but I don't consider it a complete waste of power either. I'm also considering that the RT and Tensor cores add to the weight.

Means little when it's dark silicon. It's largely useful for compute/CUDA workloads and at 4K, where each frame becomes more ALU-limited. The increased count and revised ROP partitions help at 4K.

We may be doing GA a disservice, given RT & denoising should show gains once games and the compiler are better optimised.

GPU-Z already shows the same data, using NVAPI, too.

With a nice in-game overlay? :)
 
I've had 1080 SLI for 3 years already and it's great for 4K: a minimum of 70-80 fps on Ultra in most games...
 
Great review, I cannot wait for the RTX 3090 review!

It's interesting how this cooler performs; it seems to be a meaningful improvement over past FE coolers, especially given the power consumption of these cards. I am a little disappointed in the overclocking, though. I mean, overclocking has been meh for a while, but this one seems even more limited than normal. Granted, the memory moved up and shows some decent performance gains, but it looks like these cards are already pushed to their limit out of the box, with only minor improvement to expect from aftermarket cards.

Still can't wait!
 
The important increase is in transistors. 28,000 million (with ~5,000 million disabled) is 70% more than the 2080 had and 25% more than the 2080 Ti, and 25% matches the performance increase. Now, on 7 nm DUV this would be a ~426 mm² die instead of 628 mm². The main reason to hold off is that shrinks are imminent at this point, and that's what we didn't get, sadly. On 6 nm EUV this would be ~360 mm² with +50% clock speed at the same power. So this is just another Titan: big and powerful, but it will fall inevitably, sometimes in less than 10-12 months. So yeah, $700 is not as good as you think, except in the moment; in the moment, it's everything.
 
Nitpick: TSMC 16FF (12N) -> Samsung 10LPU (8N). 320 W+ is a ~30% increase in power for a ~30% increase in perf.

Means little when it's dark silicon. It's largely useful for compute/CUDA workloads and at 4K, where each frame becomes more ALU-limited. The increased count and revised ROP partitions help at 4K.

We may be doing GA a disservice, given RT & denoising should show gains once games and the compiler are better optimised.

With a nice in-game overlay? :)

Hmm... I know the TDP of the 2080 Ti is 250 W, but you can see it using around 273 W (average gaming) in @W1zzard's charts. The 3080 is rated at 320 W, but it seems to hover around 303 W. Maybe NVIDIA is just playing loose with their stated specs?
 
I hate to say it, but it's kinda weird with all those charts. Hype all the way. LOL
Unimpressive performance; feels like they released this card in a hurry. :oops:

Nice review, thanks.
 
Hardware unboxed numbers (similar results):

RTX 3080 vs 2080 Ti: 14-game average at 1440p = +21%
RTX 3080 vs 2080 Ti: 14-game average at 4K = +31%

RTX 3080 vs RTX 2080: 14-game average at 1440p = +47%
RTX 3080 vs RTX 2080: 14-game average at 4K = +68%

Very nice gains at 4K and an average generational gain at 1440p (excluding Turing)...


Total (at the wall) system power consumption: 523 W
GPU only, measured with NVIDIA's PCAT (Power Capture Analysis Tool) while playing DOOM: 327 W
An 8% performance-per-watt gain over Turing -> far from impressive given Ampere moved to a new node
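That ~8% figure roughly checks out if you combine numbers quoted elsewhere in this thread (a rough cross-check only, since it mixes measurements from different test setups):

```python
# 4K perf gain over the 2080 Ti (HUB figures above) vs. the power delta
# (~273 W average gaming for the 2080 Ti per TPU, 327 W on PCAT here).
perf_gain = 1.31
power_gain = 327 / 273

perf_per_watt_gain = perf_gain / power_gain - 1
print(f"perf/W gain over Turing: {perf_per_watt_gain:.0%}")  # ~9%
```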
 
The fan noise levels are enough for me to pass on the FE.

Looks like I will have to find a partner board that fits my Arctic Accelero Xtreme 3.
 
Here you go. The 1080 Ti came out at $700 and performed almost twice as fast as a 980 Ti that was also $700.
Math, my man. :(

That is 46% faster. You realize that 2x = 100%, right? For example, if card A runs at 100 FPS and card B runs at 146 FPS, card B is 46% faster than card A. If it were "double", it would be 100%.
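In code form, since this trips people up constantly (a trivial sketch):

```python
def percent_faster(new_fps: float, old_fps: float) -> float:
    """Relative speedup in percent; 'double' would return 100."""
    return (new_fps / old_fps - 1) * 100

print(percent_faster(146, 100))  # 46.0 -> "46% faster", not "almost 2x"
```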
 
It didn't seem to have that damn adhesive you need to heat up to access the fasteners, like the 10-series reference cards, at least. Unless I missed the picture with that.

The 900-series reference had those damn plastic-type hex screws that stripped if you coughed near them.
 
At 60 Hz, yes.

My 1440p G-Sync monitor goes up to 165 Hz; I wouldn't say overkill for that. It depends on the game.


True. Mine goes up to 144 Hz, so actually, looking at the figures, it may make sense for it. Especially with future-proofing concerns. I suppose it depends mostly on Cyberpunk 2077's performance, though. Luckily, I have time until we have some information from the competition.
 
True. Mine goes up to 144 Hz, so actually, looking at the figures, it may make sense for it. Especially with future-proofing concerns. I suppose it depends mostly on Cyberpunk 2077's performance, though. Luckily, I have time until we have some information from the competition.

I'm gonna throw my guess out there: the RTX 3080 will probably do around 100 FPS in Cyberpunk 2077 at 4K. This is based on the improvements over the Witcher 3 engine and the fact that they're still optimizing it for the current-gen (PS4/XB1) consoles.
 
Hmm... I know the TDP of the 2080 Ti is 250 W, but you can see it using around 273 W (average gaming) in @W1zzard's charts. The 3080 is rated at 320 W, but it seems to hover around 303 W. Maybe NVIDIA is just playing loose with their stated specs?

It's simple to explain:
The 2080 Ti FE's 260 W TDP puts the chip in the lower (more efficient) region of the perf/power curve, while the 3080's 320 W TGP sits at a higher point on that curve.
That means it's easy to overclock the 2080 Ti by simply raising the power limit, while raising the power limit on the 3080 does almost nothing (as every review has pointed out; very similar to the 5700 XT).
It also means that lowering the TGP of the 3080 to a level similar to the 2080 Ti's, like ComputerBase did, will not lower its performance by much, improving the 3080's efficiency if you so require.

[image: NVIDIA Max-Q efficiency curve]
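As a toy illustration of that curve (an assumed logarithmic shape with a made-up slope, not measured data):

```python
# Toy model: performance grows roughly logarithmically with board power
# near the top of the V/F curve. The 0.25 slope is an assumption chosen
# to mimic the behaviour the reviews describe, nothing more.
import math

def rel_perf(watts: float, ref: float = 320.0) -> float:
    return 1 + 0.25 * math.log(watts / ref)

for w in (220, 270, 320, 370):
    print(f"{w:3d} W -> {rel_perf(w):.3f}x")
# 270 W lands only ~4% below stock, while pushing to 370 W buys ~3.5%:
# a flat top end, which is why raising the power limit does so little.
```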
 