Friday, August 28th 2020
NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked
Just ahead of the September launch, specifications of NVIDIA's upcoming RTX Ampere lineup have been leaked by industry sources over at VideoCardz. According to the website, three alleged GeForce SKUs are launching in September: the RTX 3090, RTX 3080, and RTX 3070. The new lineup brings major improvements, including second-generation ray tracing cores and third-generation Tensor cores built for AI and machine learning workloads. On the connectivity and I/O side, the new cards use the PCIe 4.0 interface and support the latest display outputs, including HDMI 2.1 and DisplayPort 1.4a.
The GeForce RTX 3090 comes with 24 GB of GDDR6X memory running on a 384-bit bus at 19.5 Gbps, for a memory bandwidth of 936 GB/s. The card features the GA102-300 GPU with 5,248 CUDA cores running at 1695 MHz, and is rated for 350 W TGP (board power). While the Founders Edition cards will use NVIDIA's new 12-pin power connector, non-Founders Edition cards from board partners like ASUS, MSI, and Gigabyte will be powered by two 8-pin connectors. Next up is the GeForce RTX 3080, a GA102-200 based card with 4,352 CUDA cores running at 1710 MHz, paired with 10 GB of GDDR6X memory running at 19 Gbps. The memory sits on a 320-bit bus that achieves 760 GB/s of bandwidth. The board is rated at 320 W and is designed to be powered by dual 8-pin connectors. Finally, there is the GeForce RTX 3070, built around the GA104-300 GPU with an as-yet-unknown number of CUDA cores. We only know that it uses older, non-X GDDR6 memory running at 16 Gbps on a 256-bit bus. The GPUs are supposedly manufactured on TSMC's 7 nm process, possibly the EUV variant.
Source: VideoCardz
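For those checking the math, the quoted bandwidth figures follow directly from bus width and per-pin data rate. A minimal sketch in Python (for illustration; the formula is the standard one, not something stated by the leak):

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def mem_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gb_s(384, 19.5))  # RTX 3090: 936.0 GB/s
print(mem_bandwidth_gb_s(320, 19.0))  # RTX 3080: 760.0 GB/s
```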
216 Comments on NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked
RTX 3090 5248 CUDA, 1695 MHz boost, 24 GB, 936 GB/s, 350 W, 7 nm process
RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process
I guess all the potential fab process power savings were erased by the extra RAM, RAM speed, CUDA, RT and tensor cores.
Edit: Maybe comparing to the RTX 3080 is more informative:
RTX 3080 4352 CUDA, 1710 MHz boost, 10 GB, 760 GB/s, 320 W, 7 nm process
RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process
Almost no difference between these cards except on the RT and Tensor side. If the 3080 is priced well below $1,000, you can get 2080 Ti performance on the 'cheap'.
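To put "almost no difference" in numbers, a quick sketch comparing the leaked 3080 figures against the 2080 Ti (Python, illustration only; the inputs are the figures quoted above):

```python
# Percentage deltas, leaked RTX 3080 vs. RTX 2080 Ti
specs = {  # metric: (RTX 3080, RTX 2080 Ti)
    "CUDA cores":        (4352, 4352),
    "Boost clock (MHz)": (1710, 1545),
    "Bandwidth (GB/s)":  (760,  616),
    "Board power (W)":   (320,  250),
}
for metric, (new, old) in specs.items():
    print(f"{metric}: {100 * (new - old) / old:+.1f}%")
# CUDA cores: +0.0%, boost clock: +10.7%, bandwidth: +23.4%, power: +28.0%
```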
- Team Blue: Intel.
- Team Green: AMD.
- Team Lime: Nvidia.
Yeah, there used to be ATI. "But! But! Nvidia already has green!" No, they're lime, and AMD has been around way the hell longer than Nvidia. I can't wait for the 6950XTX.
Might as well say Minecraft. :p
With that said, I am planning on an RTX 3080 as the GPU for what I intend to be an AMD Zen 3 based system. Besides, I just sold my GTX 1080 Ti, so I am currently stuck with a GTX 1060 6 GB. I sold the GTX 1080 Ti while I could still get a decent price for it; the RTX 3000 launch will press its value down.
"Nvidia has the best card at $36,700! So when I spend $170 I'll somehow magically get the best card I can!" - Fanboyism
Now take that together with "Oh, but it's actually smaller than two eight-pins!", which is intended for Nvidia fanboys to basically say, "I don't care that my room is 20 degrees warmer and my electricity bill doubled! I need those 14 FPS, because twice the FPS of my 144 Hz monitor isn't enough for some very objective, unquestionable reason!"
That is the problem with cronies: they know how weaker psychological mindsets work, and they have no qualms about taking advantage of people.
RTX 3090
24 GB of GDDR6X memory running on a 384-bit bus at 19.5 Gbps (memory bandwidth capacity of 936 GB/s)
5,248 CUDA cores, 1695 MHz boost (Gainward Phoenix Golden Sample 3090 will boost to 1725 MHz)
350 W TGP (board power)
RTX 3080
10 GB of GDDR6X memory running on a 320-bit bus at 19 Gbps (memory bandwidth capacity of 760 GB/s)
4,352 CUDA cores, 1710 MHz boost (Gainward Phoenix GS 3080 will boost to 1740 MHz)
320 W TGP (board power)
RTX 3070
? GB of GDDR6 memory running on a 256-bit bus at 16 Gbps (memory bandwidth capacity of 512 GB/s)
? CUDA cores running at ?
? W TGP (board power)
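The 512 GB/s entry above is the one figure that can be derived rather than leaked; it follows from the same arithmetic as the other entries, assuming the 256-bit bus and 16 Gbps rate hold (a quick check in Python, illustration only):

```python
# RTX 3070 peak bandwidth, derived from the leaked bus width and data rate
bus_width_bits = 256
data_rate_gbps = 16.0
print(bus_width_bits / 8 * data_rate_gbps)  # 512.0 GB/s
```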
As a gamer, I want the performance of the 3090 but with the VRAM amount of the 3080. Let's hope for a 3080 Ti with 12 GB of GDDR6X and 5,120 CUDA cores.
RTX 3080
10 GB of GDDR6X memory running on a 320-bit bus at 19 Gbps (memory bandwidth capacity of 760 GB/s)
4,352 CUDA cores running at 1710 MHz
320 W TGP (board power)
-------------------------------------------------------------
FP32 (float) performance = 14.88 TFLOPS
RTX 2080 Ti
11 GB of GDDR6 memory running on a 352-bit bus at 14 Gbps (memory bandwidth capacity of 616 GB/s)
4,352 CUDA cores running at 1545 MHz
250 W TGP (board power)
------------------------------------------------------------
FP32 (float) performance = 13.45 TFLOPS
This can't be right; only a ~10% rasterization performance gain over the 2080 Ti? I REALLY hope Nvidia managed to get some IPC gain from the new node/architecture, or it's going to be Turing déjà vu all over again :(
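For reference, the TFLOPS figures in this comparison come from the standard cores × 2 FMA ops per clock × boost clock formula; a minimal sketch (Python, illustration only, using the leaked clocks and core counts):

```python
# FP32 throughput (TFLOPS) = CUDA cores * 2 ops/clock (FMA) * boost clock (MHz) / 1e6
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * 2 * boost_mhz / 1e6

print(fp32_tflops(4352, 1710))  # RTX 3080:    ~14.88
print(fp32_tflops(4352, 1545))  # RTX 2080 Ti: ~13.45
print(fp32_tflops(5248, 1695))  # RTX 3090:    ~17.79
```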