
The Reason Why NVIDIA's GeForce RTX 3080 GPU Uses 19 Gbps GDDR6X Memory and not Faster Variants

Obviously, but that is not what this topic is about, is it... Nobody ever said 'buy an FE'. The article here is specifically talking about temps on the FE.

And we both know expecting 5-6 years of life out of a GPU is not a strange idea at all. Obviously it won't run everything beautifully, but it certainly should not be defective before then. Broken or crappy fan over time? Sure. Chip and memory issues? Bad design.

Now, when it comes to those AIB cards... the limitations of the FE do translate to those as well, since they're also 19 Gbps cards because 'the FE has it'.

Yeah; I just read the original article, and it seems like the FE card suffers from stability issues under certain intensive workloads.
I expected so much more from Nvidia in this day and age...
 

Like I said, I can smell Intel-CPU-style nonsense here. Nice burst, shit sustained performance unless you put a monstrous cooler on it.

This new cutting edge we're getting stinks a little bit, if you ask me.
 
These cards are not memory-bandwidth starved anyway; preliminary memory overclock tests show very little performance gain. The missing extra performance is not a real problem; however, the existing temps already seem to be one.

I remember some 5700 XT cards having to sell for very low prices because they ran their memory too hot (the ASUS TUF and the first-gen MSI Evoke), and those temps were slightly under 100°C.

Memory bandwidth is the bottleneck if we take Gamers Nexus' results as accurate.

He did manual overclocks and found that increasing the GPU clock gave basically no extra performance.
He then boosted the memory clock and got a measurable increase; it was sub-5%, but it was there.

Even my 1080 Ti gets bigger gains from memory clocks than from GPU core clocks, so these rumours that memory clock speed is pointless seem wrong.
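
For a rough sanity check on that sub-5% figure, here is a minimal back-of-the-envelope sketch in Python of how much raw bandwidth a memory overclock buys on the 3080's 320-bit bus (the +1 Gbps overclock is an assumed example value, not Gamers Nexus' exact setting):

    # Peak GDDR6X bandwidth scales linearly with the per-pin data rate, so the
    # best case an overclock can deliver is the same percentage as the rate bump.
    BUS_WIDTH_BITS = 320          # RTX 3080 memory bus width
    STOCK_RATE_GBPS = 19.0        # stock GDDR6X data rate per pin
    OC_RATE_GBPS = 20.0           # hypothetical overclocked data rate (assumed)

    def bandwidth_gbs(rate_gbps: float) -> float:
        # Peak bandwidth in GB/s: per-pin rate times bus width, bits -> bytes.
        return rate_gbps * BUS_WIDTH_BITS / 8

    stock = bandwidth_gbs(STOCK_RATE_GBPS)   # 760 GB/s
    oc = bandwidth_gbs(OC_RATE_GBPS)         # 800 GB/s
    print(f"stock {stock:.0f} GB/s, OC {oc:.0f} GB/s, gain {oc / stock - 1:.1%}")
    # A ~5% bandwidth bump, which lines up with the sub-5% FPS gains seen
    # when games are only partially bandwidth-bound.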

The most likely reason the fastest GDDR6X chips are not being used is that they will instead be used on a future 3000-series product; NVIDIA doesn't show its best hand on the early products of a generation. They probably cost more, for starters.
 
It is unfortunate that NVIDIA's engineers do not keep track of the TPU forum boards.
I have already written plenty of hints they should follow so that this "hot pan" GPU design model changes.

In the product design of the RTX 3080, every supporting technology was pushed to its maximum:
Power usage is maxed out to the point that a power limiter has to be enforced.
The air cooling system was developed to the maximum performance obtainable with air.
Memory modules that run thermally cooler and use less energy were selected.
The highest GPU frequency that does not cause destructive power usage is enforced on the top-performing variants.

In summary: dear NVIDIA, in 2020 you have succeeded in driving yourself into a DEAD END for any further GPU development.
I bet 1000 Euro that your R&D team is pulling its hair out today in desperation over what to use to develop the next BIG thing.
 
On the 3080 the memory plane draws 70 W; on the 3090, probably up to 170 W. So what is the issue? Well, the same way there were 6, 7, 8, and 9 Gbps grades of GDDR5, there are now 19, 20, 21, 22, and 23 Gbps grades of GDDR6X, and it comes down to quality and price. With time, clocks will improve. Perhaps error correction causes performance to drop at those temperatures and at clock speeds above 19 Gbps.
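
For what it's worth, those two numbers are at least self-consistent. A minimal sketch, assuming the memory-plane power simply scales with module count (the module counts are the cards' actual configurations; the 70 W figure is the one from the post, and the even per-module split is an assumption):

    # Scale the claimed 3080 memory-plane power by module count to
    # sanity-check the 3090 estimate.
    MODULES_3080 = 10                 # 10 x 1 GB GDDR6X on the RTX 3080
    MODULES_3090 = 24                 # 24 x 1 GB GDDR6X (both PCB sides) on the 3090

    MEM_PLANE_3080_W = 70.0           # memory-plane draw claimed above

    per_module_w = MEM_PLANE_3080_W / MODULES_3080    # ~7 W per module on that rail
    est_3090_w = per_module_w * MODULES_3090          # ~168 W

    print(f"~{per_module_w:.1f} W/module -> estimated 3090 memory plane ~{est_3090_w:.0f} W")
    # ~168 W, in line with the "up to 170 W" guess above.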
 
On the 3080 the memory plane draws 70 W; on the 3090, probably up to 170 W.
So... if that is even remotely true, how is the 3090 spec'd at only 30 W higher? The significantly increased SP count and slightly lower clocks don't account for the ~70 W of power savings that would be needed elsewhere.
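
To make that objection concrete, here is a minimal back-of-the-envelope sketch (the 320 W / 350 W board TDPs are the official specs; the memory figures are simply the ones quoted above):

    # Subtract the quoted memory-plane power from each card's official board
    # power and compare what is left for the GPU core and the rest of the board.
    TDP_3080_W, TDP_3090_W = 320.0, 350.0   # official total board power specs
    MEM_3080_W, MEM_3090_W = 70.0, 170.0    # memory-plane figures quoted above

    rest_3080 = TDP_3080_W - MEM_3080_W     # 250 W left for core + board
    rest_3090 = TDP_3090_W - MEM_3090_W     # 180 W left for core + board

    print(f"non-memory budget: 3080 {rest_3080:.0f} W vs 3090 {rest_3090:.0f} W")
    # The 3090 would need ~70 W of savings outside the memory despite having
    # more SMs, so at least one of the quoted memory figures is likely too high.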
 