Friday, August 28th 2020

NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

Just ahead of the September launch, specifications of NVIDIA's upcoming RTX Ampere lineup have been leaked by industry sources over at VideoCardz. According to the website, three alleged GeForce SKUs are launching in September: the RTX 3090, RTX 3080, and RTX 3070. The new lineup brings major improvements, including 2nd-generation ray-tracing cores and 3rd-generation tensor cores built for AI and machine-learning workloads. On the connectivity and I/O front, the new cards use the PCIe 4.0 interface and support the latest display outputs such as HDMI 2.1 and DisplayPort 1.4a.

The GeForce RTX 3090 comes with 24 GB of GDDR6X memory running on a 384-bit bus at 19.5 Gbps, which works out to 936 GB/s of memory bandwidth. The card features the GA102-300 GPU with 5,248 CUDA cores running at 1695 MHz, and is rated for 350 W TGP (total board power). While the Founders Edition cards will use NVIDIA's new 12-pin power connector, non-Founders Edition cards from board partners like ASUS, MSI, and Gigabyte will be powered by two 8-pin connectors. Next up is the GeForce RTX 3080, a GA102-200-based card with 4,352 CUDA cores running at 1710 MHz, paired with 10 GB of GDDR6X memory running at 19 Gbps on a 320-bit bus for 760 GB/s of bandwidth. The board is rated at 320 W, and the card is designed to be powered by dual 8-pin connectors. And finally, there is the GeForce RTX 3070, built around the GA104-300 GPU with an as-yet-unknown number of CUDA cores. We only know that it uses older non-X GDDR6 memory running at 16 Gbps on a 256-bit bus. The GPUs are supposedly manufactured on TSMC's 7 nm process, possibly the EUV variant.
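The bandwidth figures quoted above follow directly from bus width and data rate. Below is a minimal sketch of the arithmetic in Python; the RTX 3070 number is derived here from the leaked bus width and memory speed rather than stated in the leak itself.

# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps)
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(384, 19.5))  # RTX 3090 (leaked): 936.0 GB/s
print(bandwidth_gb_s(320, 19.0))  # RTX 3080 (leaked): 760.0 GB/s
print(bandwidth_gb_s(256, 16.0))  # RTX 3070 (derived): 512.0 GB/s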
Source: VideoCardz

216 Comments on NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

#151
Caring1
I want to see some funky fish tailed cards.
Posted on Reply
#152
sYn
So 7 nm, woop woop. And if the new tensor cores are able to calculate FP32, we will have a GPU with triple the TFLOPS xD. Let's see.
Posted on Reply
#153
GhostRyder
Well, it seems we were right to be skeptical of the earlier pricing rumors; while the prices are still high, they are not as big a jump at the top end as thought. Now I only have one major concern...

That 12-pin. I intend to water cool this card, but it seems I am going to be stuck one of two ways:
1: Buy the reference card, but have to purchase a new PSU due to the 12-pin.
2: Hope to god I pick a non-reference design that gets a water block and uses three 8-pin connections.

Guess I'll have to wait and see.
Posted on Reply
#154
Hotobu
GhostRyder: Well, it seems we were right to be skeptical of the earlier pricing rumors; while the prices are still high, they are not as big a jump at the top end as thought. Now I only have one major concern...

That 12-pin. I intend to water cool this card, but it seems I am going to be stuck one of two ways:
1: Buy the reference card, but have to purchase a new PSU due to the 12-pin.
2: Hope to god I pick a non-reference design that gets a water block and uses three 8-pin connections.

Guess I'll have to wait and see.
Why would you have to buy a new PSU? Why can't you use an adapter? Especially if it's a modular PSU, you can probably buy a new cable directly from your PSU manufacturer eventually.
Posted on Reply
#155
bubbleawsome
GhostRyder: Well, it seems we were right to be skeptical of the earlier pricing rumors; while the prices are still high, they are not as big a jump at the top end as thought. Now I only have one major concern...

That 12-pin. I intend to water cool this card, but it seems I am going to be stuck one of two ways:
1: Buy the reference card, but have to purchase a new PSU due to the 12-pin.
2: Hope to god I pick a non-reference design that gets a water block and uses three 8-pin connections.

Guess I'll have to wait and see.
Reference cards are supposedly shipping with a 2x8pin->12pin adaptor.
Posted on Reply
#156
lexluthermiester
TheLostSwede: No poll option for "Waiting for the reviews"?
Looks like it was added, and most are voting for it. I'm in with that vote. But in addition to reviews, price points are important. In the current economic climate, most people are going to be less able to afford the prices NVidia was charging for the RTX 20xx line-up.

(Before anyone flames me for repeating what might have already been said, TL;DR. Yes, I know I'm late to the party again..)
Holy hannah, the wattage on these cards! Granted, they are going to be premium performance cards, but unless buyers already have beefy PSUs, they must also factor in the cost of a PSU on top of the cost of the GPU. Of course those costs haven't been stated, but we can safely presume they will be similar to the RTX 20XX series.

One also has to wonder if NVidia has a plan for the GTX line-up and what they might look like.
Posted on Reply
#157
Frick
Fishfaced Nincompoop
Flying Fish: This TGP rather than TDP is going to be annoying and cause confusion for people... Can see it in this thread already.

I mean, how do you compare TGP to the old TDP values? Is the 350 W TGP going to be similar to the 250 W TDP of the 2080 Ti, plus extra for board power? But then, can you see the rest of the components using another 100 W?
TGP?
Jinxed: Trying to pull two-year-old arguments? Hardly. Given that many AAA titles are ray-tracing enabled, that major game engines like UE now support ray tracing, and that some of the biggest games are going to be ray traced (Cyberpunk 2077 and Minecraft, just for example), nobody believes those old arguments anymore. And you seem to have a bit of a split personality: your beloved AMD is saying the new consoles and their RDNA2 GPUs will support ray tracing as well. So what are you really trying to say? Are you trying to prepare for the eventuality that AMD's ray-tracing performance sucks?
But it's a good question. If I turn RT off, does the card use less power? If not, why not, given that the reason for the power use is the RT hardware?
Posted on Reply
#158
calkapokole
The leaked specs of the RTX 3080 don't make sense. Knowing the GA100 and A100 chip configuration, it's easy to deduce that a fully enabled GA102 chip will have 6,144 CUDA cores split between 96 SMs, which are further organized into 6 GPCs. If the RTX 3080 uses only 68 SMs (~71%), then a lot of silicon is wasted. I think there is space between the GA102 and GA104 chips for a GA103 chip which, fully enabled, would have 5,120 CUDA cores (80 SMs, 5 GPCs). The RTX 3080 will probably be based on the GA103 chip if it indeed has 4,352 CUDA cores.
Posted on Reply
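A quick sanity check of the SM arithmetic in the post above, assuming the 64 FP32 CUDA cores per SM and 16 SMs per GPC that the poster's figures imply; this is a sketch of the speculation, not confirmed silicon.

CORES_PER_SM = 64        # assumption carried over from the post above
SMS_PER_GPC = 16         # implied by 96 SMs spread across 6 GPCs

full_ga102_sms = 6 * SMS_PER_GPC         # 96 SMs on a fully enabled GA102
print(full_ga102_sms * CORES_PER_SM)     # 6144 CUDA cores for a full GA102
print(4352 // CORES_PER_SM)              # 68 SMs for the rumored RTX 3080
print(round(68 / full_ga102_sms, 2))     # 0.71 -> roughly 71% of the full die
print(5 * SMS_PER_GPC * CORES_PER_SM)    # 5120 cores for the speculated full GA103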
#159
BiggieShady
What's with the 7 nm process power consumption? 3080 vs 2080 Ti: similar specs, +150 MHz on the GPU and faster GDDR but maybe fewer RAM chips... and the result is +100 W on a 7 nm node... I wouldn't expect the power envelope to expand by a hundred watts even on the same node.
Posted on Reply
#160
saki630
Without benchmarks to show the performance difference between the 3080 and the 3090 at the same settings, we are going to be fighting over nothing here. It's obvious the 3080 is the best choice if it's priced where it should be. The 3090 is the 'Ti' variant that the 2080 Ti people will purchase and see performance gains from. Then some time around March 2021, the real 3090 Ti variant releases and prices adjust accordingly.

I have a 1080 Ti and I want a performance increase (3080+) without spending a kidney. This 1080 Ti was $550. Someone tell me what I can purchase for $500-700 in the next month that will give me an increase in performance in my sexsimulator69?
Posted on Reply
#161
Hardware Geek
Jism: This is just a enterprise card designed for AI / DL / whatever workload being pushed into gaming. These cards normally fail the enterprise quality stamp. So having up to 350W of TDP / TBP is not unknown. It's like Linus torwards said about Intel: Stop putting stuff in chips that only make themself look good in really specific (AVX-512) workloads. These RT/Tensor cores proberly count up big for the extra power consumption.

Price is proberly in between 1000 and 2000$. Nvidia is the new apple.
Do you mean *probably*?
Posted on Reply
#163
BiggieShady
Ah, beloved Gainward, aka Palit's division for the EU; good to see it still going strong.
Posted on Reply
#164
CandymanGR
3090 launch price: 1400$
2080 ti launch price (standard edition): 999$



I laugh at some people who really believed NVIDIA when it said: "Ampere will be cheaper than what Turing was". (edited)
Even some tech sites mentioned that. It is so funny how people believe in hope rather than reality. And the "70% faster than Volta" NVIDIA claims is a joke. Not even in RTX scenarios will the difference be that big. Maybe it will be that much in CUDA processing, and that's all.

P.S. That's a 40% increase in price. I bet the difference in performance (in gaming) will be less than that (maybe 25-30% in real-world scenarios). Mark this post for reference when reviews come out. Over and out.
Posted on Reply
#165
RandallFlagg
Mark Little: ...
RTX 3090 5248 CUDA, 1695 MHz boost, 24 GB, 936 GB/s, 350 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process
...

RTX 3080 4352 CUDA, 1710 MHz boost, 10 GB, 760 GB/s, 320 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

Almost no difference between these cards except on the RT and Tensor side. If the price is much lower than $1000 for the 3080 then you can get 2080 Ti performance on the 'cheap'.
Yes, it's not a bad upgrade, but it is predictable. Basically everything in their lineup shifts one level: 3080 = 2080 Ti + higher clock, 3070 = 2080 + higher clock. If the pattern follows to the midrange, which I think it will, we'll see slightly higher than 2070 / 2070 Super performance from the 3060 / 3060 Ti.

The biggest impact to future PC games and the capabilities of PC games will be if they take the 1650/1660 series and include ray tracing and DLSS at 2060+ levels of performance. That will be a mainstream card and would become a new baseline for developers to target for games to be released in 2-3 years.

Still, I suspect we will see the same performance at the same price points for a while (~6 months after release), regardless of the name. Only the 3090 offers significantly more performance, and that's very much a niche product with more marketing value, as those types of cards typically garner 0.1% of market share.

Ironically the pricing situation may hinge a lot on potential competition from Intel and its new Xe discrete GPU.
Posted on Reply
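For a rough sense of how close those shader configurations are, peak FP32 throughput can be estimated as 2 x CUDA cores x boost clock (one fused multiply-add per core per clock). The sketch below uses the leaked figures quoted above; these are raw-throughput estimates, not benchmark results, and they ignore any architectural changes Ampere may bring.

def fp32_tflops(cuda_cores, boost_mhz):
    # two floating-point ops (one FMA) per core per clock
    return 2 * cuda_cores * boost_mhz * 1e6 / 1e12

print(round(fp32_tflops(5248, 1695), 1))  # rumored RTX 3090: ~17.8 TFLOPS
print(round(fp32_tflops(4352, 1710), 1))  # rumored RTX 3080: ~14.9 TFLOPS
print(round(fp32_tflops(4352, 1545), 1))  # RTX 2080 Ti:      ~13.4 TFLOPS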
#166
GhostRyder
Hotobu: Why would you have to buy a new PSU? Why can't you use an adapter? Especially if it's a modular PSU, you can probably buy a new cable directly from your PSU manufacturer eventually.
My PSU is no longer supported, I guarantee it. It's a modular design, but it uses screw-in round connectors and I have had it for a while. It's a 1300-watt Gold PSU.
bubbleawsome: Reference cards are supposedly shipping with a 2x8pin->12pin adaptor.
Oh, I missed that. Then I am no longer worried; I'll just get a reference 3090 and a water block and use the adaptors for a while.
Posted on Reply
#167
SKD007
Can't wait to know more about RDNA2 so I can make an informed decision... NV or AMD.

My Vega 64 can't do 4K 30 fps in Project CARS 3. I get 26 fps, but it's still smooth thanks to FreeSync. Still, I can't wait to upgrade.
Posted on Reply
#168
Prior
NVLink SLI is only available on the 3090. Why would you even want to SLI a beast like that, NVIDIA?
Posted on Reply
#169
P4-630
Prior: NVLink SLI is only available on the 3090. Why would you even want to SLI a beast like that, NVIDIA?
If you got unlimited cash to burn.
Posted on Reply
#170
ppn
2080 SUPER: 13,600 million transistors, 3072 shading units, 48 RT cores

3070: 30,000 million transistors, 3072 shading units, 96 RT cores

What, other than doubling the RT cores from 48 to 96, did doubling the transistor count achieve? OMG, this could have been a 6144 CUDA core count for the transistor budget it has.
Posted on Reply
#171
medi01
Hotobu: Raytracing should take visual fidelity a step further.
Had that been the case, people wouldn't need to ask Epic whether the Unreal PS5 demo was using DXR-like calls or not.

RT fails to deliver on its main promise: _easier_to_develop_ realistic reflections/shadows.

Short term, it could evaporate the way PhysX did.
Posted on Reply
#172
rtwjunkie
PC Gaming Enthusiast
GhostRyder: Buy the reference card, but have to purchase a new PSU due to the 12-pin.
From what I've seen, adapters will be included.
Posted on Reply
#173
RandallFlagg
medi01: Had that been the case, people wouldn't need to ask Epic whether the Unreal PS5 demo was using DXR-like calls or not.

RT fails to deliver on its main promise: _easier_to_develop_ realistic reflections/shadows.

Short term, it could evaporate the way PhysX did.
PhysX did not evaporate. It became ubiquitous to the point that people don't know it's there anymore. It's used by Unreal Engine 3+, Unity, and a host of others.
Posted on Reply
#174
RoutedScripter
I come here as someone who does not like spoilers... Just days away from the reveal, this seems like a total psycho obsession from some of the people who think they're doing something noble with leaks. At the very least the news media, if they are eager to profit off the leaks for the drama and traffic, should put up big spoiler warnings and adopt some standards in this regard. I'm so sick of this. No, I do not know what the leak is; I only came here to say this, and I will be going on a tech-site blackout until I watch the proper reveal. Yes, I was hiding my eyes so as not to take a single peek at the content or any comments; I did not read any posts in this thread either.
Posted on Reply
#175
John Naylor
ZoneDymo: No, but you do have to answer why that gap is so insanely huge. More than twice the RAM? Borderline 2.5x? That is just insane.
And again, the midrange of old: the RX 480 had 8 GB of RAM and the GTX 1060 had 6 GB of RAM... to have an RTX 3080 now with 10 GB is just pathetic, IMO, with an eye on progression and placement.
Peeps have been complaining about VRAM for generations of cards, and real-world testing has not borne it out. Every time test sites have compared the same GPU with different RAM sizes, in almost every game, there was no observable impact on performance.

alienbabeltech.com/main/gtx-770-4gb-vs-2gb-tested/3/
2 GB vs 4 GB GTX 770... when everyone was saying 2 GB was not enough, this test showed otherwise.

"There isn’t a lot of difference between the cards at 1920×1080 or at 2560×1600. We only start to see minimal differences at 5760×1080, and even so, there is rarely a frame or two difference. ... There is one last thing to note with Max Payne 3: It would not normally allow one to set 4xAA at 5760×1080 with any 2GB card as it ***claims*** to require 2750MB. However, when we replaced the 4GB GTX 770 with the 2GB version, the game allowed the setting. And there were no slowdowns, stuttering, nor any performance differences that we could find between the two GTX 770s.

Same here .... www.guru3d.com/articles_pages/gigabyte_geforce_gtx_960_g1_gaming_4gb_review,12.html
Same here .... www.extremetech.com/gaming/213069-is-4gb-of-vram-enough-amds-fury-x-faces-off-with-nvidias-gtx-980-ti-titan-x
Same here .... www.pugetsystems.com/labs/articles/Video-Card-Performance-2GB-vs-4GB-Memory-154/

Yes, you can find some games that will show a difference, mostly sims with bad console ports.

And let's remember... the 3 GB version of the 1060 did just fine. They were not the same GPU; the 3 GB version had 10% fewer shaders, which gave the 6 GB version a VRAM-independent speed advantage. The extra shaders gave the 6 GB version a 6% speed advantage over the 3 GB... So when going to 1440p, if there was even a hint of an impact due to VRAM, that 6% should be much bigger... it wasn't... a difference only showed up at 4K.

Based upon actual testing at lower resolutions and scaling up accordingly, my expectation for the 2080 Ti was 12 GB, so I was surprised at the odd 11 number... For the 3080, I thought they'd do 12... so 10 tells me that Nvidia must know more than we know. No sense putting it in if it's not used... no different than having an 8+6 power connector on a 225-watt card. Just because the connectors and cable can pull 225 watts (+75 from the slot) doesn't mean it will ever happen.

Nvidia’s Brandon Bell has addressed this topic more than once, saying that the utilities that are available "all report the amount of memory requested by the GPU, not the actual memory usage. Cards with larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available." The card manufacturers gave us more RAM because customers would buy it. But for a 1060... the test results proved we don't need more than 3 GB at 1080p; the 6 GB version didn't add anything to the mix other than more shaders.

So now for the "why 10 GB?" question.

When they did the 3 GB 1060, why did they disable 10% of the shaders? It didn't save any money. Let's look at W1zzard's conclusion:

"Typically, GPU vendors use the exact same GPU for SKUs of different memory capacity, just not in this case. NVIDIA decided to reduce the shader count of the GTX 1060 3 GB to 1152 from the 1280 on the 6 GB version. This rough 10% reduction in shaders lets the company increase the performance difference between the 3 GB and 6 GB version, which will probably lure potential customers closer toward the 6 GB version. "

In other words, they needed to kill 10% of the shaders because otherwise... the performance would be the same and folks would have no reason to spring for the extra $$ for the 6 GB card. Same with the 970's 3.5 GB... it was clearly done to gimp the 970 and provide a performance gap with the 980. When I heard there were a 3080 and a 3090 coming, I expected 12 and 16 GB. Now I can't help but wonder... is 12 GB the sweet spot for 4K, and is the use of 10 GB this generation's little "gimp" needed to make the cost increase to the 3090 attractive?
Posted on Reply