Monday, December 28th 2020

NVIDIA's Next-Gen Big GPU AD102 Features 18,432 Shaders

The rumor mill has begun grinding with details about NVIDIA's next-gen graphics processors based on the "Lovelace" architecture, with Kopite7kimi (a reliable source for NVIDIA leaks) predicting a 71% increase in shader units for the "AD102" GPU that succeeds the "GA102": 12 GPCs, each holding 6 TPCs (12 SMs). 3DCenter.org extrapolates from this to predict a CUDA core count of 18,432 spread across 144 streaming multiprocessors, which at a theoretical 1.80 GHz core clock could put out an FP32 compute throughput of around 66 TFLOP/s.
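3DCenter.org's 66 TFLOP/s figure follows from the standard back-of-envelope FP32 formula (cores × 2 operations per clock × clock speed); a quick sketch, assuming Ampere-style cores that issue one FMA (two FP32 ops) per clock:

```python
# Back-of-envelope FP32 throughput for the rumored AD102 configuration.
# Assumes 2 FP32 operations (one fused multiply-add) per CUDA core per clock.
shaders = 18_432     # rumored CUDA core count (144 SMs x 128 cores)
clock_ghz = 1.80     # theoretical core clock used in the extrapolation

tflops = shaders * 2 * clock_ghz / 1000  # GFLOP/s -> TFLOP/s
print(f"{tflops:.1f} TFLOP/s")  # ~66.4 TFLOP/s
```

The same formula applied to the GA102's 10,752 cores at similar clocks lands near 38-40 TFLOP/s, which is where the roughly 71% uplift comes from.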

The timing of this leak is interesting, as it comes only three months into the market cycle of "Ampere." NVIDIA appears unsettled by AMD's RDNA2 being competitive with "Ampere" in the enthusiast segment, and is probably bringing out its successor, "Lovelace" (after Ada Lovelace), sooner than expected. The previous-generation "Turing" architecture saw market presence for close to two years. "Lovelace" could leverage the 5 nm silicon fabrication process and its significantly higher transistor density to step up performance.
Sources: Kopite7kimi (Twitter), 3DCenter.org (Twitter), VideoCardz

43 Comments on NVIDIA's Next-Gen Big GPU AD102 Features 18,432 Shaders

#26
TumbleGeorge
By the end of 2021 H2, Apple will free up the 5 nm production lines; add time to adapt the big cores, plus time to release the GPUs in ready-to-market graphics cards... autumn 2022, or a little earlier?
Posted on Reply
#27
HD64G
That is a first for NVIDIA, methinks! The RTX 3000 cards aren't good enough in efficiency and cost, are still absent from the shelves, and they were forced to intentionally leak critical details of their next arch.
Posted on Reply
#28
Fabio
jabbadap: Honestly, I don't think they care one iota what the end price and availability in the market is. Currently they can sell everything they can produce and are still getting record revenues quarter after quarter. And the more different SKUs they make, the more new cards they can sell...

Sure, from the consumer's POV the current market sucks big time. But that's more because of the absence of true new-gen mid-range cards than because of the pricier high end (I don't count the 3060 Ti as mid-range). The ancient Polaris and lackluster 16-series Turings just won't cut it anymore. Sure, one can get good deals on first-gen Navis, but those aren't really mid-range either, and I think they are EOLed anyway to make room for other 7 nm products. So what the market really needs is a new $200-$250 mid-range champ, a perfect 1080p and capable 1440p card, as the RX 480 or GTX 1060 were at the beginning.
I don't agree. NVIDIA and every other company decide on a plan, and that plan says you have to sell a minimum number of units at a certain price (MSRP) to make enough profit and close the year positively.
If the market is so distorted that you can see GPUs on sale at double their MSRP, things are not good for NVIDIA/AMD, for at least two reasons:
1) Those prices hit the final customer, due to scalping. Whether a custom GPU is sold to the customer at MSRP or at double, it is the same for NVIDIA/AMD.

2) If the numbers are so few that such a price distortion can happen, I can bet whatever you want that AMD and NVIDIA are doing whatever they can to maximize production and reach break-even. I think that almost three months after the presentation they aren't even near that.
Posted on Reply
#29
tfdsaf
Dumpster-tier rumor. While there is no question NVIDIA is working on the next gen, and has probably been working on it since even before the Ampere launch, anything like shader count, SMs, clock speed, VRAM, etc. probably isn't known even to the engineers and designers themselves, as they are probably still testing things out and figuring it out. It takes anywhere from 6 to 18 months to finalize a GPU design.

Plus, Ampere cards just started selling barely two months ago and they aren't even in stock; they haven't even released mid- or low-tier cards yet. So any new generation would wait a good year at the least!
Posted on Reply
#30
r9
And AMD will quadruple the performance, and none of this matters as we can't buy any of these damn cards anyway.
Posted on Reply
#31
dorsetknob
"YOUR RMA REQUEST IS CON-REFUSED"
cellar door: This is WCCFTech-level article stuff.
And the name for the GPU following Ada will be Linda (Lovelace).
The source for this is, of course >> Deep Throat
Posted on Reply
#32
Anymal
More and more it seems that Ampere in GeForce is just an opportunity not missed. How do you utilize 10k CUDA cores efficiently for gaming? They will learn how, utilize 18k cores, make it work, and manufacture it on far better lithography.
Posted on Reply
#33
Metroid
windwhirl: Ironic, since in Hispanic America today is the Day of the Innocent Saints (rough translation; properly Holy Innocents' Day), which is similar to April Fools'.
Interesting, thanks.
Posted on Reply
#34
Nkd
TumbleGeorge: But if the numbers are true, AMD has no chance to compete. With only up to +50% perf/watt for RDNA3, it will be a joke compared with NVIDIA's progress.
The fact that you think this will scale 100% is a joke in itself. NVIDIA will likely get a 50-60% performance bump over Ampere. RDNA3 is not only rumored to carry the same efficiency gain as RDNA2; it is also rumored to be a large architectural upgrade, not just more shaders, with a big upgrade to the geometry engine and such. NVIDIA was rumored to go MCM after Ampere; the fact that they had to delay it is not a great sign if RDNA3 is such a large leap and could possibly be MCM.
Posted on Reply
#35
TumbleGeorge
We will see the next flagship battle in the 8K arena. At 4K, only the middle-class graphics cards will do battle.
Posted on Reply
#36
Anymal
If RDNA3 is from the team that brought us RDNA1, then we should just wait and see.
Posted on Reply
#37
N3M3515
Nkd: The fact that you think this will scale 100% is a joke in itself. NVIDIA will likely get a 50-60% performance bump over Ampere. RDNA3 is not only rumored to carry the same efficiency gain as RDNA2; it is also rumored to be a large architectural upgrade, not just more shaders, with a big upgrade to the geometry engine and such. NVIDIA was rumored to go MCM after Ampere; the fact that they had to delay it is not a great sign if RDNA3 is such a large leap and could possibly be MCM.
This is weird. Ampere has approx. 70% more perf than Turing; if the next gen has 70% more cores, that's like 50% more perf. So a smaller jump than from Turing to Ampere.
Posted on Reply
#38
windwhirl
N3M3515: This is weird. Ampere has approx. 70% more perf than Turing, why is 50% of this news any good?
Is it always possible to push +50% IPC improvements from gen to gen?
Posted on Reply
#39
N3M3515
windwhirl: Is it always possible to push +50% IPC improvements from gen to gen?
As I understand it, they aren't talking about IPC but just an outright increase in cores.
Posted on Reply
#40
windwhirl
N3M3515: As I understand it, they aren't talking about IPC but just an outright increase in cores.
Ah, sorry, I completely misunderstood.
Posted on Reply
#41
saki630
Is this WccfTech? Last I checked, 99% of all RTX 3080 buyers still cannot order their graphics cards. Not to mention 5% of those buyers will die from Covid before even getting their hands on one. Yet someone finds time to speculate that the RTX 4080 is going to debut in 1H 2021 -- to what end?

What's unsettling is not that AMD's cards are doing well (you still can't buy either), but that the only people who need the next-gen cards are those strictly playing at true 4K 120 Hz (or higher) with an additional monitor. The green and red teams will achieve 100+ FPS at 1440p with high graphics settings and not even struggle. Where is the artificial demand? That is what we should be afraid of.
Posted on Reply
#42
Vayra86
Steevo: Why stop at 18K, why not 24K or 48K? If they really roll out a new GPU in a few months, I can't wait to hear the justifications from Nvidiots paying through the nose for another GPU, or their reduced e-peen from not paying through the nose for the latest again.

Also, NVIDIA isn't going to TSMC, they are staying with Samsung, so unless Samsung improves their node they are still going to be limited by the process more than the architecture again, and will have an even more power-hungry chip which, if larger, will mean more defective dies and thus higher prices... Competition is glorious: AMD is held back by wafer supply, NVIDIA is held back by wafer supply, and we the consumers have to pay to play.
"10 Gigarays" did sound better to me; at least you could readily see the nonsense and laugh about it. With Ampere we never heard of those again, but shouldn't that be at least 19 Gigarays now? Wait, no... 38? (1.9x perf and double the shader count, after all.)
HD64G: That is a first for NVIDIA, methinks! The RTX 3000 cards aren't good enough in efficiency and cost, are still absent from the shelves, and they were forced to intentionally leak critical details of their next arch.
This man gets it. I damn well knew it the moment I saw those die shots/sizes and the accompanying specs + TDP. NVIDIA hoped to sell a lot of pre-orders and pre-empt everything, because once the dust truly settles, this whole gen will be obsolete faster than Turing. Inb4 the refresh with a supposedly better Samsung node underneath, or even a last-minute change to TSMC. All options are open at this time, really, but it's clear NVIDIA will move in one way or another. They have already priced themselves back to Pascal levels, which speaks volumes.
Posted on Reply
#43
quadibloc
Whether or not it is feasible to put that many shaders on a GPU a year from now, it seems perfectly logical to me that, since NVIDIA has found that beating AMD in the ray-tracing department hasn't generated the expected enthusiasm among consumers, their next generation will emphasize conventional muscle in things like shaders.
Posted on Reply