Monday, January 20th 2020

Rumor: NVIDIA's Next Generation GeForce RTX 3080 and RTX 3070 "Ampere" Graphics Cards Detailed

NVIDIA's next generation of graphics cards, codenamed Ampere, is set to arrive sometime this year, presumably around GTC 2020, which begins on March 22nd. Before NVIDIA CEO Jensen Huang officially reveals the specifications of these new GPUs, we have the latest round of rumors coming our way. According to VideoCardz, which cites multiple sources, the die configurations of the upcoming GeForce RTX 3070 and RTX 3080 have been detailed. Built on Samsung's 7 nm manufacturing process, this generation of NVIDIA GPUs is said to offer a big improvement over the previous one.

For starters, the two dies that have appeared carry the codenames GA103 and GA104, corresponding to the RTX 3080 and RTX 3070 respectively. Perhaps the biggest surprise is the Streaming Multiprocessor (SM) count. The smaller GA104 die has as many as 48 SMs, resulting in 3072 CUDA cores, while the bigger, oddly named GA103 die has as many as 60 SMs, for 3840 CUDA cores in total. These increases in SM count should translate into a notable performance uplift across the board. Alongside them comes a wider memory bus: the GA104 die that should end up in the RTX 3070 uses a 256-bit bus allowing for 8 or 16 GB of GDDR6 memory, while its bigger brother, the GA103, has a 320-bit bus that allows the card to be configured with either 10 or 20 GB of GDDR6. You can check out the alleged diagrams in the images below and judge their authenticity for yourself; as always, take this rumor with a grain of salt.
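For the curious, here is a minimal sketch of the arithmetic the leak implies, assuming Turing's ratio of 64 CUDA cores per SM and the usual one GDDR6 chip per 32-bit channel with 1 GB or 2 GB chips; none of these assumptions is confirmed for Ampere:

```python
# Back-of-envelope math behind the rumored GA103/GA104 configurations.
# Assumes 64 CUDA cores per SM (Turing's ratio, implied by the leaked
# numbers) and one GDDR6 chip per 32-bit channel -- both assumptions.
CORES_PER_SM = 64
CHANNEL_WIDTH_BITS = 32

def cuda_cores(sm_count: int) -> int:
    return sm_count * CORES_PER_SM

def memory_options(bus_width_bits: int) -> tuple[int, int]:
    """Capacities with 1 GB (8 Gb) or 2 GB (16 Gb) chips per channel."""
    chips = bus_width_bits // CHANNEL_WIDTH_BITS
    return chips * 1, chips * 2

for die, sms, bus in [("GA103", 60, 320), ("GA104", 48, 256)]:
    low, high = memory_options(bus)
    print(f"{die}: {cuda_cores(sms)} CUDA cores, {low} or {high} GB GDDR6")
# GA103: 3840 CUDA cores, 10 or 20 GB GDDR6
# GA104: 3072 CUDA cores, 8 or 16 GB GDDR6
```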
Source: VideoCardz

173 Comments on Rumor: NVIDIA's Next Generation GeForce RTX 3080 and RTX 3070 "Ampere" Graphics Cards Detailed

#1
londiste
Tl;dr
GA103 - 60 SM (3840 SP) - 320-bit VRAM (10/20 GB) - assumed 3080
GA104 - 48 SM (3072 SP) - 256-bit VRAM (8/16 GB) - assumed 3070

7nm has not been favourable to higher clocks so far, but it has been considerably better in power consumption and efficiency. With frequencies around 2 GHz (and some architectural improvements), GA104 would be faster than the 2080 Super and GA103 faster than the 2080 Ti.

Assuming Nvidia's usual chip numeration there should be another higher-end GA102 in the works...
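A crude way to sanity-check that claim is peak FP32 throughput (2 FLOPs per CUDA core per clock); the ~2 GHz Ampere clock below is the assumption from the post above, and this ignores any per-core architectural gains:

```python
# Peak FP32 throughput: 2 FLOPs per core per clock.
# cores * GHz * 2 gives GFLOPS; divide by 1000 for TFLOPS.
def tflops(cores: int, clock_ghz: float) -> float:
    return 2 * cores * clock_ghz / 1000.0

cards = {
    "RTX 2080 Super":  (3072, 1.815),  # official boost clock
    "RTX 2080 Ti":     (4352, 1.545),  # official boost clock
    "GA104 (rumored)": (3072, 2.0),    # assumed ~2 GHz clock
    "GA103 (rumored)": (3840, 2.0),    # assumed ~2 GHz clock
}
for name, (cores, clock) in cards.items():
    print(f"{name}: {tflops(cores, clock):.1f} TFLOPS FP32")
# GA104 lands at ~12.3 TFLOPS (above the 2080 Super's ~11.2);
# GA103 at ~15.4 TFLOPS (above the 2080 Ti's ~13.4).
```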
#2
kapone32
Depending on pricing I may make the jump to Nvidia if this holds true. I cringe at the potential price point for these particular cards.
#3
Unregistered
kapone32Depending on pricing I may make the jump to Nvidia if this holds true. I cringe at the potential price point for these particular cards.
Ya, I will wait and see what the 3080 Ti is priced at when released. It needs to have, at a bare minimum, 16GB of RAM. If they can get it priced appropriately, I'll buy a pair of them. If not, I will continue to be patient and wait for a good deal on eBay down the road.
#4
xkm1948
I do wonder how different the overall design will be from the Turing uArch. Also, I hope they don't cut down the Tensor cores; it has been really nice to use consumer-level GPUs for DL/ML acceleration.
#5
CrAsHnBuRnXp
Razrback16Ya, I will wait and see what the 3080 Ti is priced at when released. It needs to have, at a bare minimum, 16GB of RAM. If they can get it priced appropriately, I'll buy a pair of them. If not, I will continue to be patient and wait for a good deal on eBay down the road.
Why? Dual-card systems are basically dead these days. Yes, there is still support for SLI, but it's at the point where it's unneeded and troublesome.
#6
Otonel88
Not really an expert on the in-depth details, but currently the 2080 Ti has 4352 CUDA cores, while the rumored GA103 die could have 3840 CUDA cores and as many as 60 SMs.
So on paper this is not faster than the 2080 Ti. Right?
Obviously the smaller die would bring other benefits, such as the lower TDP I am really interested in, among other things.
Do you guys think the GA103 card would be under 200W TDP?
I'm always looking at lower-TDP cards: with less heat for the cooler to deal with, they run quieter.
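For what it's worth, a very rough estimate, assuming the commonly quoted ~40% power reduction for a 12 nm to 7 nm shrink (a generic foundry figure, not an NVIDIA number) and scaling the 2080 Super's 250 W board power by GA103's extra cores:

```python
# Very rough TDP guess for GA103 on 7 nm. The 40% power reduction is a
# generic foundry figure for 12 nm -> 7 nm, not an NVIDIA spec, and
# clock differences are ignored entirely.
TURING_BOARD_POWER_W = 250   # RTX 2080 Super board power
POWER_SCALING_7NM = 0.60     # assumed: ~60% of 12 nm power at iso-performance
CORE_RATIO = 3840 / 3072     # GA103 cores vs. 2080 Super cores

estimate = TURING_BOARD_POWER_W * POWER_SCALING_7NM * CORE_RATIO
print(f"~{estimate:.0f} W")  # ~188 W -- plausibly under the 200 W asked about
```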
#7
xkm1948
Otonel88Not really an expert on the in-depth details, but currently the 2080 Ti has 4352 CUDA cores, while the rumored GA103 die could have 3840 CUDA cores and as many as 60 SMs.
So on paper this is not faster than the 2080 Ti. Right?
Obviously the smaller die would bring other benefits, such as the lower TDP I am really interested in, among other things.
Do you guys think the GA103 card would be under 200W TDP?
I'm always looking at lower-TDP cards: with less heat for the cooler to deal with, they run quieter.
Different generations of stream processors cannot be compared just by the numbers.
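To illustrate that point with made-up numbers: a cross-generation comparison needs a per-architecture throughput factor on top of core counts, and that factor is exactly what we don't know yet for Ampere (the ipc_factor values below are placeholders, not measurements):

```python
# Illustrative only: raw CUDA-core counts ignore per-core throughput
# differences between architectures. The ipc_factor values are made up.
def effective_perf(cores: int, clock_ghz: float, ipc_factor: float) -> float:
    return cores * clock_ghz * ipc_factor

pascal = effective_perf(3584, 1.582, 1.00)  # GTX 1080 Ti as the baseline
turing = effective_perf(4352, 1.545, 1.15)  # RTX 2080 Ti, assumed +15% per core
print(f"{turing / pascal:.2f}x")  # ~1.36x, vs ~1.21x from core count alone
```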
#8
Otonel88
xkm1948Different generations of stream processors cannot be compared just by the numbers.
Good point :)
#9
kapone32
CrAsHnBuRnXpWhy? Dual-card systems are basically dead these days. Yes, there is still support for SLI, but it's at the point where it's unneeded and troublesome.
Multi-GPU is for anyone that wants it. I have not replaced my Vega 64s with 5700 XTs for exactly that reason. If multi-GPU were really dead, motherboards would not be touting it and giving you 2-, 3- and 4-way SLI bridges. As much as people complain about it being troublesome, it is not as bad as people make it out to be. There are plenty of games that support multi-GPU anyway. As an example, I get an average of 107 FPS at 4K playing Jedi: Fallen Order at Ultra settings.
#10
CrAsHnBuRnXp
kapone32Multi-GPU is for anyone that wants it. I have not replaced my Vega 64s with 5700 XTs for exactly that reason. If multi-GPU were really dead, motherboards would not be touting it and giving you 2-, 3- and 4-way SLI bridges. As much as people complain about it being troublesome, it is not as bad as people make it out to be. There are plenty of games that support multi-GPU anyway. As an example, I get an average of 107 FPS at 4K playing Jedi: Fallen Order at Ultra settings.
Triple and quad SLI are officially dead, as nvidia no longer supports them.
#11
Otonel88
Any opinions or guesses on the TDP of these new cards on the 7nm process?
#13
kapone32
CrAsHnBuRnXpTriple and quad SLI are officially dead, as nvidia no longer supports them.
Yeah, I know; I was just pointing out that we still get those. Personally I would never go for more than 2 GPUs in a setup. On modern motherboards it would in some cases be nearly impossible to fit more than 2 GPUs anyway (unless you install single-slot water blocks).
#14
Adam Krazispeed
IDK, my thoughts..

GA102 - ?? SM (????? SP) - 108 RT cores - 384-bit VRAM - assumed RTX Titan XXX
GA104 - 68 SM (4352 SP) - 88 RT cores - 320-bit VRAM - assumed 3080 Ti - 12/16/32 GB
GA106 - 46 SM (2944 SP) - 68 RT cores - 256-bit VRAM - assumed 3070 Ti - 12/16 GB

7nm has not brought much higher clocks so far, but it has been much better in power consumption/efficiency. With frequencies of 1.7-2 GHz, GA104 could be faster than a 2080 Super and GA102 faster than the 2080 Ti; maybe 20-30% at most, but up to 50% in real-time ray tracing.

Assuming Nvidia's usual chip numeration there should be another higher-end GA102 in the works... a TITAN model!!
It also depends on whom nvidia is using: TSMC or Samsung. It's rumored that TSMC's 7nm is a bit better than Samsung's at the moment (whether EUV or DUV).
#16
VrOtk
That 320-bit bus looks weird: by allowing your xx80-grade card such high memory bandwidth, you start to cripple your xx80 Ti's advantage at higher resolutions (unless it uses a 512-bit bus or HBM2).
Though I'd be happy to be wrong, as better-grade products at the same or a cheaper price are always welcome.
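For reference, the bandwidth arithmetic behind that concern, assuming 14 Gbps GDDR6 throughout (the memory speed is not part of the leak):

```python
# Bandwidth in GB/s = bus width in bits / 8 * per-pin data rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(320, 14.0))  # rumored GA103 "3080":    560 GB/s
print(bandwidth_gbs(352, 14.0))  # RTX 2080 Ti, for scale:  616 GB/s
print(bandwidth_gbs(384, 14.0))  # a full 384-bit bus:      672 GB/s
print(bandwidth_gbs(512, 14.0))  # the 512-bit case above:  896 GB/s
```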
#18
efikkan
GTC is a compute/professional-graphics conference. I can't remember the last time Nvidia launched consumer products there, so this sounds unlikely, but not impossible. Keep in mind that we've had similar rumors about GTC in previous years too.
But I would expect them to show data-center products and perhaps give some hints about the upcoming consumer products.
#19
Gungar
VrOtkThat 320-bit bus looks weird: by allowing your xx80-grade card such high memory bandwidth, you start to cripple your xx80 Ti's advantage at higher resolutions (unless it uses a 512-bit bus or HBM2).
Though I'd be happy to be wrong, as better-grade products at the same or a cheaper price are always welcome.
Yeah, I was thinking the same, but it really wouldn't surprise me if the new 3080 Ti comes with HBM.
#20
EarthDog
kapone32Multi-GPU is for anyone that wants it.
True... for anyone that is a masochist :p. I mean, if people want to put themselves through a spotty/inconsistent performance uptick while always drawing 2x the power and dealing with 2x the heat... go for it. mGPU just needs to die already, or they need to go all in and get it working. For far too long this technology has just been 'MEH'.
Gungarit really wouldn't surprise me if the new 3080 Ti comes with HBM.
I don't see this happening...

Does anyone else see HBM making any headway in consumer graphics? When it was first released, AMD touted it as the second coming... but it turned out to be fairly useless comparatively (I mean, it did shrink the gaps to NV cards at high resolutions, but the cards never had the core horsepower for anything past 1440p in the first place).
#21
_Flare
They could go for 4 SMs per TPC, leading to doubled RT performance,
or they could place 8 or even 12 TPCs per GPC, also leading to 2x performance.
People shouldn't forget that nvidia gets about 2x the effective area going from 12nm to 7nm.
I bet nvidia has plenty of ideas for getting performance out of that extra area, and doubled RT performance scales 1:1 with SMs.
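To make the layout math concrete: total SM count is GPCs × TPCs-per-GPC × SMs-per-TPC, so the rumored counts can be reached several ways (all of the layouts below are speculation):

```python
# Speculative GPC/TPC/SM layouts that reach the rumored SM counts.
# Turing used 2 SMs per TPC; the 4-per-TPC variant is the idea above.
def total_sms(gpcs: int, tpcs_per_gpc: int, sms_per_tpc: int) -> int:
    return gpcs * tpcs_per_gpc * sms_per_tpc

print(total_sms(6, 4, 2))  # 48 -- a Turing-like (TU104-style) GA104
print(total_sms(6, 5, 2))  # 60 -- one possible GA103 arrangement
print(total_sms(6, 2, 4))  # 48 -- same SM count via 4 SMs per TPC
```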
#23
Gungar
EarthDogDoes anyone else see HBM making any headway in consumer graphics? When it was first released, AMD touted it as the second coming... but it turned out to be fairly useless comparatively (I mean, it did shrink the gaps to NV cards at high resolutions, but the cards never had the core horsepower for anything past 1440p in the first place).
HBM is by far the most powerful and efficient graphics memory out there right now. AMD has just been incapable of producing a good GPU for some time now.
#24
EarthDog
DeathtoGnomesWonder what AMD will counter this with.
Big Navi is due out at the same time...

... wondering if it will still be 'big' compared to these (or whatever NV's high-end will be) or what...
#25
Otonel88
I think whatever the potential of the Ampere architecture, the performance will be around 10-15% better than the current generation.
This is so Nvidia can 'milk' the architecture as much as possible in the coming years. If they run tests and realise that they could push up to 60% more performance, they will release that performance in batches over the next few generations of cards. So yeah, I am excited about the new cards, but Nvidia is always a business, and they won't release the full performance in the first batch, as that would leave them empty-handed.