Friday, August 28th 2020

NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

Just ahead of the September launch, specifications of NVIDIA's upcoming RTX Ampere lineup have been leaked by industry sources over at VideoCardz. According to the website, three alleged GeForce SKUs are launching in September: the RTX 3090, RTX 3080, and RTX 3070. The new lineup features major improvements: 2nd-generation ray-tracing cores and 3rd-generation tensor cores designed for AI and machine-learning workloads. On the connectivity and I/O front, the new cards use the PCIe 4.0 interface and support the latest display outputs, such as HDMI 2.1 and DisplayPort 1.4a.

The GeForce RTX 3090 comes with 24 GB of GDDR6X memory running on a 384-bit bus at 19.5 Gbps, giving a memory bandwidth of 936 GB/s. The card features the GA102-300 GPU with 5,248 CUDA cores running at 1695 MHz, and is rated for 350 W TGP (board power). While the Founders Edition cards will use NVIDIA's new 12-pin power connector, non-Founders Edition cards from board partners like ASUS, MSI, and Gigabyte will be powered by two 8-pin connectors. Next up is the GeForce RTX 3080, a GA102-200 based card with 4,352 CUDA cores running at 1710 MHz, paired with 10 GB of GDDR6X memory running at 19 Gbps. The memory is connected over a 320-bit bus that achieves 760 GB/s of bandwidth. The board is rated at 320 W and is designed to be powered by dual 8-pin connectors. Finally, there is the GeForce RTX 3070, built around the GA104-300 GPU with an as-yet-unknown number of CUDA cores. We only know that it uses the older non-X GDDR6 memory, running at 16 Gbps on a 256-bit bus. The GPUs are supposedly manufactured on TSMC's 7 nm process, possibly the EUV variant.
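The leaked bandwidth figures check out: memory bandwidth is simply bus width times per-pin data rate, divided by eight to convert bits to bytes. A quick sanity check in Python (the RTX 3070 value is derived from the leaked bus width and data rate; the leak itself does not state it):

```python
# Memory bandwidth (GB/s) = bus width (bits) * data rate (Gbps) / 8 bits per byte
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(384, 19.5))  # RTX 3090: 936.0 GB/s
print(bandwidth_gb_s(320, 19.0))  # RTX 3080: 760.0 GB/s
print(bandwidth_gb_s(256, 16.0))  # RTX 3070: 512.0 GB/s (derived, not in the leak)
```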
Source: VideoCardz

216 Comments on NVIDIA GeForce RTX 3090 and 3080 Specifications Leaked

#1
Chaitanya
I will wait to see how these new GPUs perform in Helicon before deciding on an upgrade.
#2
Vya Domus
The card features GA102-300 GPU with 5248 CUDA cores running at 1695 MHz, rated for 350 W TGP.
So it turns out that the A100 was indeed a very good indicator of what its GAXXX counterparts are going to look like; I don't know why everyone thought otherwise. Anyway, that's about 14% more shaders for 25% more power compared to TU102, and given that the larger a processor is, the more power efficient it tends to be, something must have gone wrong with the manufacturing process, or with something else related to the SMs themselves.
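For what it's worth, those percentages line up if the TU102 baseline is the full die as shipped in the Titan RTX (4,608 CUDA cores, 280 W); a quick check under that assumption:

```python
# Shader and power deltas vs. full TU102, assuming Titan RTX figures
# (4,608 CUDA cores, 280 W) as the baseline -- not stated in the post.
ga102_shaders, ga102_watts = 5248, 350
tu102_shaders, tu102_watts = 4608, 280

print(f"{ga102_shaders / tu102_shaders - 1:.0%} more shaders")  # 14% more shaders
print(f"{ga102_watts / tu102_watts - 1:.0%} more power")        # 25% more power
```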
#3
TheLostSwede
News Editor
No poll option for Waiting for the reviews?
#4
koaschten
Chaitanya: I will wait to see how these new GPUs perform in Helicon before deciding on an upgrade.
Also going to wait to see which cards get water-cooling support, because I don't want an extra 350 W of heat stuck in a small metal box.
#5
W1zzard
TheLostSwede: No poll option for Waiting for the reviews?
Added
#6
Daven
So let's see

RTX 3090 5248 CUDA, 1695 MHz boost, 24 GB, 936 GB/s, 350 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

I guess all the potential fab process power savings were erased by the extra RAM, RAM speed, CUDA, RT and tensor cores.

Edit: Maybe comparing to the RTX 3080 is more informative:

RTX 3080 4352 CUDA, 1710 MHz boost, 10 GB, 760 GB/s, 320 W, 7 nm process

RTX 2080 Ti 4352 CUDA, 1545 MHz boost, 11 GB, 616 GB/s, 250 W, 12 nm process

Almost no difference between these cards except on the RT and Tensor side. If the price is much lower than $1000 for the 3080, then you can get 2080 Ti performance on the 'cheap'.
#7
CrAsHnBuRnXp
Good. Still two 8-pin connectors from board partners. Nothing has changed. Just Nvidia being weird.
#8
JAB Creations
Who the hell is team red? Okay, gotta figure this one out...
  • Team Blue: Intel.
  • Team Green: AMD.
  • Team Lime: Nvidia.
Yeah, there used to be ATI. "But! But! Nvidia already has green!" No, they're lime and AMD has been around way the hell longer than Nvidia.

I can't wait for the 6950XTX.
#9
Raendor
Only 10 GB on the 3080? Seriously? And 8 GB on the 3070 is the same we've had for the x70 class ever since Pascal. That's lame.
#10
EarthDog
Chaitanya: I will wait to see how these new GPUs perform in Helicon before deciding on an upgrade.
Helicon? hahahahahahahahahhaha

Might as well say Minecraft. :p
#11
steen
TheLostSwede: No poll option for Waiting for the reviews?
Reviews will be critical. Current bench suites may need reviewing. It'll be interesting to see what 34-billion-odd transistors @ 1.7 GHz can do.
#12
Vya Domus
Mark Little: If the price is much lower than $1000 for the 3080
We all know that ain't gonna happen. "Much lower" will mean $899 or something along those lines, probably.
#13
Tomgang
I just hope RTX 3000 will not be Fermi all over again. A triple-slot cooler for the RTX 3090, a 350 W TDP, and 320 W for the RTX 3080 give some cause for concern about heat and temperatures.

With that said, I am planning on an RTX 3080 as the GPU for what I intend to be an AMD Zen 3 based system. Besides, I just sold my GTX 1080 Ti, so I am currently stuck with a GTX 1060 6 GB. I sold it while I could still get a decent price, since the RTX 3000 launch will press its value down.
#14
JAB Creations
CrAsHnBuRnXp: Good. Still two 8-pin connectors from board partners. Nothing has changed. Just Nvidia being weird.
They're not being weird; they're trying to distract people. They feel threatened by an AMD whose stock is no longer stuck under $2 from the crony tactics of both Intel and Nvidia, so naturally they're going to do their absolute best. Why?

"Nvidia has the best card at $36,700! So when I spend $170 I'll somehow magically get the best card I can!" - Fanboyism

Now combine that with "Oh, but it's actually smaller than two eight-pins!", which is intended for Nvidia fanboys to basically say, "I don't care that my room is 20 degrees warmer and my electricity bill doubled! I need those 14 FPS, because twice the FPS of my 144 Hz monitor isn't enough for some very objective, unquestionable reason!"

That is the problem with cronies: they know how weaker psychological mindsets work, and they have no qualms about taking advantage of people.
#15
King Mustard
RTX 3090
24 GB of GDDR6X memory running on a 384-bit bus at 19.5 Gbps (memory bandwidth capacity of 936 GB/s)
5,248 CUDA cores, 1695 MHz boost (Gainward Phoenix Golden Sample 3090 will boost to 1725 MHz)
350 W TGP (board power)

RTX 3080
10 GB of GDDR6X memory running on a 320-bit bus at 19 Gbps (memory bandwidth capacity of 760 GB/s)
4,352 CUDA cores, 1710 MHz boost (Gainward Phoenix GS 3080 will boost to 1740 MHz)
320 W TGP (board power)

RTX 3070
? GB of GDDR6 memory running on a 256-bit bus at 16 Gbps (memory bandwidth capacity of ? GB/s)
? CUDA cores running at ?
? W TGP (board power)

As a gamer, I want the performance of the 3090 but with the VRAM amount of the 3080. Let's hope for a 3080 Ti with 12 GB of GDDR6X and 5,120 CUDA cores.
#16
medi01
AleksandarK: The GPUs are supposedly manufactured on TSMC's 7 nm process
Based on what?
#17
kayjay010101
medi01: Based on what?
Based on what the article says. Bottom of the article states the following:
The data that we saw clearly mention the 7nm fabrication node. At this time we are unable to confirm if this is indeed true.
#18
RedelZaVedno
Is 1695 MHz a boost or a base frequency? Very disappointing if it's the boost, as it would mean only 17.8 TFLOPS (2080 Ti + 30%) if there is no arch gain. But even if IPC is increased by 15% due to the new arch, it would still mean only 20.45 TFLOPS (50% more than the 2080 Ti). These rasterization performance gains would maybe justify an $800/1000 price tag, but certainly not $1.4K or more.
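For reference, these TFLOPS figures follow from the standard two FLOPs (one fused multiply-add) per CUDA core per clock; a minimal sketch reproducing them, assuming the leaked 1695 MHz is the boost clock:

```python
# FP32 throughput = 2 FLOPs (one FMA) per CUDA core per clock
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return 2 * cuda_cores * boost_mhz / 1e6

rtx_3090   = fp32_tflops(5248, 1695)  # ~17.8 TFLOPS
rtx_2080ti = fp32_tflops(4352, 1545)  # ~13.45 TFLOPS
print(rtx_3090 / rtx_2080ti)          # ~1.32 -> roughly 2080 Ti + 30%
print(rtx_3090 * 1.15)                # ~20.5 TFLOPS with a hypothetical 15% IPC gain
```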
#19
Chaitanya
koaschten: Also going to wait to see which cards get water-cooling support, because I don't want an extra 350 W of heat stuck in a small metal box.
I think water-cooling support will take 1-2 months after the release of the new GPUs. I am considering an upgrade from my ageing GTX 1060 to something in the same range, not a top-end or enthusiast-class GPU.
EarthDog: Helicon? hahahahahahahahahhaha

Might as well say Minecraft. :p
I don't play games, and one of the main pieces of software I use is Helicon, for photo stacking; it scales and performs very well on GPUs. You can check their website for performance numbers on various CPUs and GPUs.
#20
kayjay010101
Chaitanya: I think water-cooling support will take 1-2 months after the release of the new GPUs. I am considering an upgrade from my ageing GTX 1060 to something in the same range, not a top-end or enthusiast-class GPU.
EK and Alphacool have already stated they have designs ready for launch. I wouldn't be surprised if we see AIB cards like MSI's Sea Hawk, which uses an EK block, released on day one.
#21
EarthDog
Chaitanya: I don't play games, and one of the main pieces of software I use is Helicon, for photo stacking; it scales and performs very well on GPUs. You can check their website for performance numbers on various CPUs and GPUs.
Oh boy... I was thinking of a game, HeliBORNE, not Helicon... my bad!!! o_O :roll:
#22
xkm1948
Nvidia is seriously leaky this time around. Curious to see the results, as well as the tensor core config. It would be nice if the consumer variant of Ampere also received sparse-matrix FP16 tensor support like the A100.
#23
erek
looks fake, why's the spacing different?

#24
Chomiq
I'll wait for the RDNA2 reviews and then consider my options. NV has two years of experience beta-testing their RTX tech, so it is tempting. I'm planning to buy a PS5 at some point, so AMD will get my money either way.
#25
RedelZaVedno
RTX 3080
10 GB of GDDR6X memory running on a 320-bit bus at 19 Gbps (memory bandwidth capacity of 760 GB/s)
4,352 CUDA cores running at 1710 MHz
320 W TGP (board power)
-------------------------------------------------------------
FP32 (float) performance = 14.88 TFLOPS?!

RTX 2080 Ti

11 GB of GDDR6 memory running on a 352-bit bus at 14 Gbps (memory bandwidth capacity of 616 GB/s)
4,352 CUDA cores running at 1545 MHz
250 W TGP (board power)
------------------------------------------------------------
FP32 (float) performance = 13.45 TFLOPS

This can't be right, only a 10% rasterization performance gain over the 2080 Ti? I REALLY hope Nvidia managed to get some IPC gain from the new node/arch, or it's gonna be Turing déjà vu all over again :(
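The same per-core FMA math reproduces both numbers and, if the leaked clocks and board powers hold, suggests FP32 per watt actually goes slightly backwards on paper:

```python
# Same FP32 formula: 2 FLOPs (one FMA) per CUDA core per clock
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return 2 * cuda_cores * boost_mhz / 1e6

rtx_3080   = fp32_tflops(4352, 1710)     # ~14.88 TFLOPS
rtx_2080ti = fp32_tflops(4352, 1545)     # ~13.45 TFLOPS
print(rtx_3080 / rtx_2080ti - 1)         # ~0.107 -> the ~10% gain noted above
print(rtx_3080 / 320, rtx_2080ti / 250)  # ~0.0465 vs ~0.0538 TFLOPS per watt
```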