Friday, August 28th 2020
NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked
Just ahead of the September launch, specifications of NVIDIA's upcoming RTX Ampere lineup have been leaked by industry sources over at VideoCardz. According to the website, three alleged GeForce SKUs are being launched in September - RTX 3090, RTX 3080, and RTX 3070. The new lineup features major improvements: 2nd generation ray-tracing cores and 3rd generation tensor cores made for AI and ML. When it comes to connectivity and I/O, the new cards use the PCIe 4.0 interface and have support for the latest display outputs like HDMI 2.1 and DisplayPort 1.4a.
The GeForce RTX 3090 comes with 24 GB of GDDR6X memory running on a 384-bit bus at 19.5 Gbps. This gives a memory bandwidth capacity of 936 GB/s. The card features the GA102-300 GPU with 5,248 CUDA cores running at 1695 MHz, and is rated for 350 W TGP (board power). While the Founders Edition cards will use NVIDIA's new 12-pin power connector, non-Founders Edition cards, from board partners like ASUS, MSI and Gigabyte, will be powered by two 8-pin connectors. Next up is specs for the GeForce RTX 3080, a GA102-200 based card that has 4,352 CUDA cores running at 1710 MHz, paired with 10 GB of GDDR6X memory running at 19 Gbps. The memory is connected with a 320-bit bus that achieves 760 GB/s bandwidth. The board is rated at 320 W and the card is designed to be powered by dual 8-pin connectors. And finally, there is the GeForce RTX 3070, which is built around the GA104-300 GPU with a yet unknown number of CUDA cores. We only know that it has the older non-X GDDR6 memory that runs at 16 Gbps speed on a 256-bit bus. The GPUs are supposedly manufactured on TSMC's 7 nm process, possibly the EUV variant.
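The bandwidth figures above follow directly from bus width and per-pin data rate. Here is a minimal sketch of that arithmetic in Python, using only the leaked numbers quoted in the article; the 512 GB/s result for the RTX 3070 is simply the same formula applied to its rumoured memory configuration.

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Leaked figures quoted above (the RTX 3070 core count is still unknown).
leaked = {
    "RTX 3090": (384, 19.5),  # 384-bit GDDR6X at 19.5 Gbps
    "RTX 3080": (320, 19.0),  # 320-bit GDDR6X at 19 Gbps
    "RTX 3070": (256, 16.0),  # 256-bit GDDR6 at 16 Gbps
}

for card, (bus, rate) in leaked.items():
    print(f"{card}: {memory_bandwidth_gbs(bus, rate):.0f} GB/s")
# RTX 3090: 936 GB/s, RTX 3080: 760 GB/s, RTX 3070: 512 GB/s
```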
Source: VideoCardz
216 Comments on NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked
That 12-pin... I intend to water cool this card, but it seems I am going to be stuck one of two ways:
1: Buy the reference card, but have to purchase a new PSU due to the 12-pin.
2: Hope to god I pick a non-reference design that gets a water block and uses the three 8-pin connections.
Guess I'll have to wait and see.
(Before anyone flames me for repeating what might have already been said, TLDR. Yes, I know I'm late to the party again..)
Holy hannah, the wattage on these cards! Granted, they are going to be premium performance cards, but unless buyers already have beefy PSUs, they will also have to factor the cost of a PSU into the cost of the GPU. Of course those costs haven't been stated, but we can safely presume they will very likely be similar to the RTX 20-series.
One also has to wonder if NVIDIA has a plan for the GTX line-up and what it might look like.
I have a 1080 Ti and I want a performance increase (3080+) without spending a kidney. This 1080 Ti was $550. Someone tell me what I can purchase for $500-700 in the next month that will give me an increase in performance in my sexsimulator69?
2080 Ti launch price (standard edition): $999
I laugh at the people who really believed NVIDIA when it said: "Ampere will be cheaper than what Turing was."
Even some tech sites mentioned that. It is so funny how people believe in hope rather than in reality. And the "70% faster than Volta" that NVIDIA claims is a joke. Not even in RTX scenarios will the difference be that big. Maybe it will be that much in CUDA processing, and that's all.
P.S. That's a 40% increase in price. I bet the difference in performance (in gaming) will be less than that (maybe 25-30% in real-world scenarios). Mark this post for reference when reviews come out. Over and out.
The biggest impact to future PC games and the capabilities of PC games will be if they take the 1650/1660 series and include ray tracing and DLSS at 2060+ levels of performance. That will be a mainstream card and would become a new baseline for developers to target for games to be released in 2-3 years.
Still, I suspect we will see the same performance at the same price point for a while (~6 months after release), regardless of what the name is. Only the 3090 offers significantly more performance, and that is very much a niche product with more marketing value, as those types of cards typically garner 0.1% of market share.
Ironically the pricing situation may hinge a lot on potential competition from Intel and its new Xe discrete GPU.
My Vega 64 can't do 4K 30 fps in Project CARS 3. I get 26 fps, but it's still smooth thanks to FreeSync. Can't wait to upgrade.
Transistors: 13,600 million | Shading Units: 3072 | RT Cores: 48
3070:
Transistors: 30,000 million | Shading Units: 3072 | RT Cores: 96
What, other than doubling the RT cores from 48 to 96, did doubling the transistor count buy? OMG, this could have been a 6144 CUDA core count for the transistor budget it has.
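For what it's worth, a back-of-the-envelope check of those numbers, assuming (naively) that CUDA cores could scale linearly with transistor count and ignoring everything else on the die (cache, RT cores, tensor cores):

```python
# Quoted figures: 13,600 million transistors / 3072 shading units / 48 RT cores
# versus the rumoured 30,000 million / 3072 / 96.
turing_transistors = 13_600_000_000
ampere_transistors = 30_000_000_000
shading_units = 3072

scale = ampere_transistors / turing_transistors
print(f"Transistor budget grew {scale:.2f}x")                     # ~2.21x
print(f"Naive linear CUDA scaling: {shading_units * scale:.0f}")  # ~6776 cores
```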
RT fails to deliver on its main promise: _easier_to_develop_ realistic reflections/shadows.
Short term, it could evaporate the way PhysX did.
alienbabeltech.com/main/gtx-770-4gb-vs-2gb-tested/3/
2 GB vs 4 GB 770... when everyone was saying 2 GB was not enough, this test showed otherwise.
"There isn’t a lot of difference between the cards at 1920×1080 or at 2560×1600. We only start to see minimal differences at 5760×1080, and even so, there is rarely a frame or two difference. ... There is one last thing to note with Max Payne 3: It would not normally allow one to set 4xAA at 5760×1080 with any 2GB card as it ***claims*** to require 2750MB. However, when we replaced the 4GB GTX 770 with the 2GB version, the game allowed the setting. And there were no slowdowns, stuttering, nor any performance differences that we could find between the two GTX 770s."
Same here .... www.guru3d.com/articles_pages/gigabyte_geforce_gtx_960_g1_gaming_4gb_review,12.html
Same here .... www.extremetech.com/gaming/213069-is-4gb-of-vram-enough-amds-fury-x-faces-off-with-nvidias-gtx-980-ti-titan-x
Same here .... www.pugetsystems.com/labs/articles/Video-Card-Performance-2GB-vs-4GB-Memory-154/
Yes, you can find some games that will show a difference, mostly sims with bad console ports.
And let's remember... the 3 GB version of the 1060 did just fine. They were not the same GPU; the 3 GB version had 10% fewer shaders, which gave the 6 GB version a VRAM-independent speed advantage. The extra shaders gave the 6 GB version a 6% speed advantage over the 3 GB... So when going to 1440p, if there was even a hint of impact due to VRAM, that 6% gap should be much bigger... it wasn't. We only saw a difference at 4K.
Based upon actual testing at lower resolutions and scaling up accordingly, my expectation for the 2080 was 12 GB, so I was surprised at the odd 11 number... For the 3080, I thought they'd do 12, so 10 tells me that NVIDIA must know more than we know. No sense putting it in if it's not used... no different than having an 8+6 power connector on a 225 W card. Just because the connectors and cable can pull 225 watts (+75 from the slot) doesn't mean it will ever happen.
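For reference on that connector math, a small sketch using the standard PCI Express power ratings (75 W from the slot, 75 W per 6-pin, 150 W per 8-pin):

```python
# Standard PCIe power ratings: slot 75 W, 6-pin 75 W, 8-pin 150 W.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

# The 8+6-pin, 225 W example above: the cables alone already cover the rated power.
print(EIGHT_PIN_W + SIX_PIN_W)           # 225 W from cables
print(EIGHT_PIN_W + SIX_PIN_W + SLOT_W)  # 300 W total available

# Leaked RTX 3080: dual 8-pin, rated 320 W TGP.
print(2 * EIGHT_PIN_W + SLOT_W)          # 375 W available vs. the 320 W rating
```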
NVIDIA's Brandon Bell has addressed this topic more than once, saying that the utilities that are available "all report the amount of memory requested by the GPU, not the actual memory usage. Cards with larger memory will request more memory, but that doesn't mean that they actually use it. They simply request it because the memory is available." The card manufacturers gave us more RAM because customers would buy it. But for a 1060... the test results proved we don't need more than 3 GB at 1080p; the 6 GB version didn't add anything to the mix other than more shaders.
So now for the "why 10 GB?" question.
When they did the 1060 3 GB, why did they disable 10% of the shaders? It didn't save any money. Let's look at W1zzard's conclusion:
"Typically, GPU vendors use the exact same GPU for SKUs of different memory capacity, just not in this case. NVIDIA decided to reduce the shader count of the GTX 1060 3 GB to 1152 from the 1280 on the 6 GB version. This rough 10% reduction in shaders lets the company increase the performance difference between the 3 GB and 6 GB version, which will probably lure potential customers closer toward the 6 GB version. "
In other words, they needed to kill 10% of the shaders because otherwise the performance would be the same and folks would have no reason to spring for the extra $$ for the 6 GB card. Same with the 970's 3.5 GB... it was clearly done to gimp the 970 and provide a performance gap between it and the 980. When I heard there was a 3080 and a 3090 coming, I expected 12 and 16 GB. Now I can't help but wonder... is 12 GB the sweet spot for 4K, and is the 10 GB this generation's little "gimp" needed to make the cost increase to the 3090 attractive?
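For what it's worth, the shader cut described in that conclusion checks out against the quoted numbers:

```python
# GTX 1060 shader counts from the quoted conclusion above.
shaders_6gb = 1280
shaders_3gb = 1152

cut = 1 - shaders_3gb / shaders_6gb
print(f"Shader reduction on the 3 GB card: {cut:.1%}")  # 10.0%, the "rough 10%" figure
```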