
NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

Something I noticed not being mentioned is that they seem to be settling on advertising these as PCIe 4.0 ready (whether or not it's really necessary yet), which likely answers the question/dilemma GamersNexus touched upon regarding how NVIDIA would advertise their cards. While it's an insignificant detail for those in the know, it's a big thing for the masses who'd just look at it and then panic because their Intel mobo doesn't have PCIe 4.0 capability, and the only platforms on the market with PCIe 4.0 at the time of release are AMD's.

I'm actually hoping NVIDIA decides to completely follow through on that marketing, if only to see how Intel would spin things in order to stop would-be RTX 3000 series buyers from panic-buying an AMD build to go along with their new GPU (which would hilariously also benefit the same mobo makers frustrated with Intel's failure to deliver its intended PCIe 4.0 support).
 

I definitely thought about this. There will certainly be some people who go with AMD/X570 just to get PCIe 4.0 (unnecessarily), which is funny because this launch could be a bit of a boost for AMD solely because of that. I'm kind of in that same boat: I'm on X370 now, and while I'd like to stay with it, I'm probably going to end up getting the 3090 after the dust settles, depending on benchmarks, RDNA2, etc. But one thing I need to know first is whether there's any benefit to running it on PCIe 4.0. There may not be, but I want a definitive answer on that before I do anything.
 
I want to see some funky fish-tailed cards.
 
So 7 nm, woop woop, and the new tensor cores are able to calculate FP32, so we'll have a GPU with triple the TFLOPS xD. Let's see.
 
Well, it seems we were right to be skeptical of the pricing rumors, as while the prices are still high, they're not as big a jump at the top as thought. Now I only have one major concern...

That 12-pin. I intend to water cool this card, but it seems I'm going to be stuck one of two ways:
1: Buy the reference card, but have to purchase a new PSU due to the 12-pin.
2: Hope to god I pick a non-reference design that gets a water block and uses the three 8-pin connections.

Guess I'll have to wait and see.
 

Why would you have to buy a new PSU? Why can't you use an adapter? Especially if it's a modular PSU you can probably buy a new cable direct from your PSU manufacturer eventually.
 
Reference cards are supposedly shipping with a 2x8pin->12pin adaptor.
 
No poll option for "Waiting for the reviews"?
Looks like it was added, and most are voting for it. I'm in with that vote. But in addition to reviews, price points are important. In the current economic situation, most people are going to be less able to afford the kind of prices NVIDIA was charging for the RTX 20xx line-up.

(Before anyone flames me for repeating what might have already been said, TLDR. Yes, I know I'm late to the party again..)
Holy Hannah, the wattage on these cards! Granted, they're going to be premium performance cards, but unless buyers already have beefy PSUs, they'll have to budget for a new PSU on top of the cost of the GPU. Of course those costs haven't been stated, but we can safely presume they'll be very similar to the RTX 20xx series.

One also has to wonder whether NVIDIA has a plan for the GTX line-up and what those cards might look like.
 
This TGP rather than TDP is going to be annoying and cause confusion for people... you can see it in this thread already.

I mean, how do you compare TGP to the old TDP values? Is the 350 W TGP going to be similar to the 250 W TDP of the 2080 Ti, plus extra for board power? But then, can you really see the rest of the components using another 100 W?
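Purely as an illustration (nothing here is official), if you read the 350 W TGP as whole-board power and the old 250 W number as the GPU alone, the subtraction the question implies is simple:

```python
# Back-of-the-envelope sketch of the question above. Assumption (not
# confirmed anywhere): the 350 W TGP covers the whole board, while the
# 2080 Ti's 250 W figure is read as the GPU alone.
tgp_3090_w = 350      # leaked "total graphics power"
tdp_2080ti_w = 250    # old-style TDP figure

# If that reading were right, the rest of the board (GDDR6X, VRM
# losses, fans) would have to soak up the entire difference:
board_overhead_w = tgp_3090_w - tdp_2080ti_w
print(f"Implied non-GPU board power: {board_overhead_w} W")  # -> 100 W
```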

TGP?

Trying to pull 2-year-old arguments? Hardly. Given that many AAA titles are raytracing-enabled, that major game engines like UE now support raytracing, and that some of the biggest games are going to be raytraced - Cyberpunk 2077 and Minecraft, just for example - nobody believes those old arguments anymore. And you seem to have a bit of a split personality - your beloved AMD is saying the new consoles and their RDNA2 GPUs will support raytracing as well. So what are you really trying to say? Are you trying to prepare for the eventuality that AMD's raytracing performance sucks?

But it's a good question. If I turn RT off, does the card use less power? If not, why not, given that the RT hardware is supposedly the reason for the power draw?
 
The leaked specs of the RTX 3080 don't make sense. Knowing the GA100/A100 chip configuration, it's easy to deduce that a fully enabled GA102 chip would have 6144 CUDA cores split between 96 SMs, which are further organized into 6 GPCs. If the RTX 3080 uses only 68 SMs (~71%), then a lot of silicon is wasted. I think there is space between the GA102 and GA104 chips for a GA103 chip, which fully enabled would have 5120 CUDA cores (80 SMs, 5 GPCs). The RTX 3080 will probably be based on the GA103 chip if it indeed has 4352 CUDA cores.
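A quick sketch of that arithmetic, under the post's own assumption that GA102 keeps GA100's 64 FP32 CUDA cores per SM (not a confirmed figure):

```python
# Sketch of the SM arithmetic above, assuming GA102 keeps GA100's
# layout of 64 FP32 CUDA cores per SM (the post's premise, not a
# confirmed spec).
cores_per_sm = 64

full_ga102_sms = 96
full_ga102_cores = full_ga102_sms * cores_per_sm                   # 6144

rtx3080_cores = 4352
rtx3080_sms = rtx3080_cores // cores_per_sm                        # 68
enabled_fraction = rtx3080_sms / full_ga102_sms                    # ~71%

hypothetical_ga103_sms = 80                                        # 5 GPCs x 16 SMs
hypothetical_ga103_cores = hypothetical_ga103_sms * cores_per_sm   # 5120

print(full_ga102_cores, rtx3080_sms, f"{enabled_fraction:.0%}",
      hypothetical_ga103_cores)
```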
 
What's with the power consumption on this 7 nm process? 3080 vs 2080 Ti: similar specs, +150 MHz on the GPU and faster GDDR, but maybe fewer RAM chips... and the result is +100 W on a 7 nm node. I wouldn't expect the power envelope to expand by a hundred watts even on the same node.
 
Without benchmarks to show how small the performance difference between the 3080 and the 3090 is at the same settings, we're going to be fighting over nothing here. It's obvious the 3080 would be the best choice if it were priced where it should be. The 3090 is the 'Ti' variant that the 2080 Ti people will purchase and see performance gains from. Then, sometime around March 2021, the real 3090 Ti variant releases and prices adjust accordingly.

I have a 1080 Ti and I want a performance increase (3080+) without spending a kidney. This 1080 Ti was $550. Someone tell me what I can purchase for $500-700 in the next month that will give me an increase in performance in my sexsimulator69?
 
This is just an enterprise card designed for AI / DL / whatever workload being pushed into gaming. These cards normally fail the enterprise quality stamp, so having up to 350 W of TDP/TBP is not unknown. It's like Linus Torvalds said about Intel: stop putting stuff in chips that only makes them look good in really specific (AVX-512) workloads. These RT/Tensor cores proberly account for a big chunk of the extra power consumption.

Price is proberly in between $1000 and $2000. NVIDIA is the new Apple.
Do you mean *probably*?
 
Ah, beloved Gainward, aka the Palit division for the EU; good to see it still going strong.
 
3090 launch price: $1400
2080 Ti launch price (standard edition): $999



I laugh at some of the people who really believed NVIDIA when it said "Ampere will be cheaper than what Turing was".
Even some tech sites mentioned that. It's so funny when people believe in hope rather than in reality. And the "70% faster than Volta" NVIDIA claims is a joke. The difference won't be that big even in RTX scenarios. Maybe it will be that much in CUDA processing, and that's all.

P.S. That's a 40% increase in price. I bet the difference in performance (in gaming) will be less than that (maybe 25-30% in real-world scenarios). Mark this post for reference when reviews come out. Over and out.
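For reference, the ~40% figure follows straight from the two quoted launch prices:

```python
# The ~40% figure above, computed from the two quoted launch prices.
price_3090 = 1400
price_2080ti = 999
increase = (price_3090 - price_2080ti) / price_2080ti
print(f"{increase:.1%}")  # -> 40.1%
```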
 
...
RTX 3090 5248 CUDA, 1695 MHz boost, 24 GB, 936 GB/s, 350 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process
...

RTX 3080 4352 CUDA, 1710 MHz boost, 10 GB, 760 GB/s, 320 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

Almost no difference between these cards except on the RT and Tensor side. If the price is much lower than $1000 for the 3080 then you can get 2080 Ti performance on the 'cheap'.
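For a rough sense of scale, here is the theoretical FP32 throughput those leaked numbers imply, assuming the usual 2 FLOPs per CUDA core per clock (FMA) and that the leaked shading-unit counts are what actually ships:

```python
# Theoretical FP32 throughput implied by the leaked figures above,
# assuming 2 FLOPs per CUDA core per clock (FMA) and that the leaked
# shading-unit counts are what actually ships.
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

cards = {
    "RTX 3090":    (5248, 1695),
    "RTX 3080":    (4352, 1710),
    "RTX 2080 Ti": (4352, 1545),
}
for name, (cores, mhz) in cards.items():
    print(f"{name}: {fp32_tflops(cores, mhz):.1f} TFLOPS")
# -> roughly 17.8, 14.9 and 13.4 TFLOPS respectively
```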

Yes, it's not a bad upgrade, but it is predictable. Basically everything in their lineup shifts one level: 3080 = 2080 Ti + higher clocks, 3070 = 2080 + higher clocks. If the pattern follows to the midrange, which I think it will, we'll see slightly higher than 2070 / 2070 Super performance from the 3060 / 3060 Ti.

The biggest impact to future PC games and the capabilities of PC games will be if they take the 1650/1660 series and include ray tracing and DLSS at 2060+ levels of performance. That will be a mainstream card and would become a new baseline for developers to target for games to be released in 2-3 years.

Still, I suspect we will see the same performance at the same price points for a while (~6 months after release), regardless of what the names are. Only the 3090 offers significantly more performance, and that's very much a niche product with more marketing value than anything, as those types of cards typically garner 0.1% of the market.

Ironically the pricing situation may hinge a lot on potential competition from Intel and its new Xe discrete GPU.
 
Why would you have to buy a new PSU? Why can't you use an adapter? Especially if it's a modular PSU you can probably buy a new cable direct from your PSU manufacturer eventually.
The PSU is no longer supported, I guarantee it. It's a modular design, but it uses screw-in round connectors and I've had it for a while. It's a 1300 W Gold PSU.


Reference cards are supposedly shipping with a 2x8pin->12pin adaptor.
Oh, I missed that. Then I'm no longer worried; I'll just get a reference 3090 and a water block and use the adaptor for a while.
 
Can't wait to know more about RDNA2 so I can make an informed decision... NV or AMD.

My Vega 64 can't do 4K 30 fps in Project CARS 3. I get 26 fps, and it's still smooth thanks to FreeSync, but I can't wait to upgrade.
 
NVLink SLI is only available on the 3090. Why would you want to SLI a beast like that, NVIDIA?
 

If you've got unlimited cash to burn.
 
2080 SUPER
Transistors: 13,600 million | Shading Units: 3072 | RT Cores: 48

3070
Transistors: 30,000 million | Shading Units: 3072 | RT Cores: 96

What, other than doubling the RT cores from 48 to 96, did doubling the transistor count actually buy? OMG, this could have been a 6144 CUDA core count for the transistor budget it has.
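Using only the numbers quoted above, the ratios work out roughly like this (a quick sketch, nothing official):

```python
# Ratios implied by the two spec lists quoted above (nothing official).
transistors_2080s_m = 13_600   # millions
transistors_3070_m  = 30_000   # millions
rt_cores_2080s, rt_cores_3070 = 48, 96

print(f"Transistor budget: {transistors_3070_m / transistors_2080s_m:.1f}x")  # ~2.2x
print(f"RT cores:          {rt_cores_3070 / rt_cores_2080s:.1f}x")            # 2.0x
# Shading units stay at 3072 on both lists, which is exactly why the
# post asks where the extra ~16 billion transistors went.
```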
 
Raytracing should take visual fidelity a step further.
Had that been the case, people wouldn't have needed to ask Epic whether the Unreal PS5 demo was using DXR-like calls or not.

RT fails to deliver on its main promise: _easier_to_develop_ realistic reflections/shadows.

Short term, it could evaporate the way PhysX did.
 


PhysX did not evaporate. It became ubiquitous to the point that people don't even know it's there anymore. It's used by Unreal Engine 3+, Unity, and a host of others.
 