Monday, July 26th 2021

NVIDIA "Ada Lovelace" Architecture Designed for N5, GeForce Returns to TSMC

NVIDIA's upcoming "Ada Lovelace" architecture, spanning both compute and graphics, is reportedly being designed for TSMC's 5 nanometer (N5) silicon fabrication node. This marks NVIDIA's return to the Taiwanese foundry after its brief excursion to Samsung with the 8 nm "Ampere" graphics architecture; "Ampere" compute dies continue to be built on TSMC's 7 nm node. NVIDIA is looking to double compute performance on its next-generation GPUs, with throughput approaching 70 TFLOP/s, driven by a near-doubling of CUDA core counts generation-over-generation. These cores are also expected to run at clock speeds above 2 GHz. "Ada Lovelace" is not expected before 2022, as TSMC N5 matures.
Source: HotHardware
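As a rough sanity check of the ~70 TFLOP/s figure, here is a minimal sketch of the peak FP32 math; the core count and clock used below are rumored/assumed values, not confirmed by NVIDIA:

# Peak FP32 throughput = cores x 2 ops per clock (FMA) x clock
cuda_cores = 18432              # rumored top "Ada Lovelace" core count (assumption)
clock_ghz = 1.9                 # assumed boost clock; "above 2 GHz" would land even higher
flops_per_core_per_clock = 2    # one fused multiply-add counts as two operations
tflops = cuda_cores * flops_per_core_per_clock * clock_ghz * 1e9 / 1e12
print(f"Peak FP32 throughput: {tflops:.1f} TFLOP/s")  # ~70 TFLOP/s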

26 Comments on NVIDIA "Ada Lovelace" Architecture Designed for N5, GeForce Returns to TSMC

#1
Hardcore Games
Still no NVIDIA cards at the local Newegg, etc.

Lots of Radeon cards at double MSRP, however.
#2
matar
I bought an RTX 2070 Super this year and it's rocking at 3440x1440, so I'm not upgrading for a while. I even tried an RTX 3060 but wasn't happy.
#3
watzupken
If this is true, then I think Nvidia is truly worried about AMD's progress in the GPU space. The reason I say that is that Nvidia's "gaming" GPUs have not been manufactured on near-cutting-edge nodes while Nvidia was dominating the high-end GPU space over the last few years. When AMD introduced their first TSMC N7 GPU, Turing was introduced on TSMC 12 nm (basically a 16 nm), and then they slowly moved to Samsung 8 nm (essentially a 10 nm) even though AMD had already been using N7 for a year or two. So now, with competition heating up, continuing to go for cheaper nodes is not going to do them any favors.
matar: I bought an RTX 2070 Super this year and it's rocking at 3440x1440, so I'm not upgrading for a while. I even tried an RTX 3060 but wasn't happy.
The RTX 2070 Super is faster than an RTX 3060 for sure. The only benefit of going with the RTX 3060 is the 50% increase in VRAM (12 GB vs 8 GB), which may be more beneficial in the longer run.
#4
matar
watzupken: If this is true, then I think Nvidia is truly worried about AMD's progress in the GPU space. The reason I say that is that Nvidia's "gaming" GPUs have not been manufactured on near-cutting-edge nodes while Nvidia was dominating the high-end GPU space over the last few years. When AMD introduced their first TSMC N7 GPU, Turing was introduced on TSMC 12 nm (basically a 16 nm), and then they slowly moved to Samsung 8 nm (essentially a 10 nm) even though AMD had already been using N7 for a year or two. So now, with competition heating up, continuing to go for cheaper nodes is not going to do them any favors.

The RTX 2070 Super is faster than an RTX 3060 for sure. The only benefit of going with the RTX 3060 is the 50% increase in VRAM (12 GB vs 8 GB), which may be more beneficial in the longer run.
Oh yes, I know the 2070 Super is faster, but I gave the 3060 a try for testing; it's not that I was going to replace my 2070 Super with it.
#5
ratirt
Is that supposed to be the multi-chip design?
#6
nguyen
matar: Oh yes, I know the 2070 Super is faster, but I gave the 3060 a try for testing; it's not that I was going to replace my 2070 Super with it.
I would prefer the 3060 over the 2070 Super because of HDMI 2.1; that makes the 3060 very suitable for an HTPC, and it goes along well with an OLED TV too :D
#7
TumbleGeorge
watzupken: Samsung 8 nm (essentially a 10 nm)
"10nm class"(not real 10nm) also isn't exactly 10nm :D
#8
Tomorrow
ratirt: Is that supposed to be the multi-chip design?
Nope, Lovelace is supposed to be monolithic. They also have Hopper, which is MCM, but that is for data center and HPC customers.
#9
BorisDG
Ampere is such a flop with Samsung's 8nm.
#10
Tomorrow
BorisDG: Ampere is such a flop with Samsung's 8nm.
Agreed. Though there are people who argue that it's not that much worse than TSMC's 7 nm, that argument only looks at density and not the power characteristics, output quantity or yields. It does not help matters that Micron's G6X is also very power hungry for a small bump in effective speed over standard 16 Gbps (18 Gbps G6 has existed since Turing).
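To put rough numbers on that "small bump", here is a quick sketch comparing memory bandwidth at those per-pin speeds; it assumes the RTX 3080's 320-bit bus purely for illustration:

# Bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps
bus_width_bits = 320                            # RTX 3080 bus width
def bandwidth_gb_s(bus_bits, pin_rate_gbps):
    return bus_bits / 8 * pin_rate_gbps

print(bandwidth_gb_s(bus_width_bits, 19))       # 760.0 GB/s, 19 Gbps G6X as shipped
print(bandwidth_gb_s(bus_width_bits, 18))       # 720.0 GB/s, 18 Gbps G6 (Turing era)
print(bandwidth_gb_s(bus_width_bits, 16))       # 640.0 GB/s, standard 16 Gbps G6

So G6X at 19 Gbps is only about 5-6% ahead of the 18 Gbps G6 that already existed.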

I hope Lovelace, or whatever it ends up being called, uses TSMC once again, and that Micron fixes their G6X power draw or Samsung comes out with 20 Gbps G6 to replace G6X. Turing was an insult, with a nonexistent feature (RT), a bad one (DLSS 1.0) and a high price. Ampere is just expensive to produce, hot, low-yielding and power hungry. Samsung's 8 nm process was never meant to produce such large chips. Even in smartphones, Samsung's 8 nm was always losing to TSMC.
The only reason Ampere is half decent is Nvidia's architecture and the monstrous cooling solutions from Nvidia and AIBs that keep it in check.

If we were not in the middle of a global pandemic, supply shortage and mining boom, the low (at least lower than Turing) MSRPs would have made Ampere tolerable. But not as great as Maxwell or Pascal were, especially the 1080 Ti when it came out. $700 was a steal for it, and even years later Nvidia could only produce a 2080 Ti that was slightly faster. Only with Ampere was the 1080 Ti defeated by midrange cards. Cards that cost more than $700....
#11
wolf
Better Than Native
BorisDG: Ampere is such a flop with Samsung's 8nm.
Desktop Ampere is not what it *could* have been on, for example, TSMC 7nm, but a flop?

*checks notes*

Sure doesn't seem that way.
#12
Minus Infinity
Tomorrow: Nope, Lovelace is supposed to be monolithic. They also have Hopper, which is MCM, but that is for data center and HPC customers.
That is not a given. There are leaks suggesting that if RDNA3 is as good as rumored, Nvidia will skip Lovelace and go straight to Hopper for desktop. RDNA3 will be MCM on Big Navi at least, but Lovelace is just an evolution of Ampere. It is reportedly 60-80% faster than Ampere, but RDNA3 is at least 100% faster, and the biggest Navi 31 could be 200% faster, albeit at an obscene $2K price.
#13
nguyen
Minus Infinity: That is not a given. There are leaks suggesting that if RDNA3 is as good as rumored, Nvidia will skip Lovelace and go straight to Hopper for desktop. RDNA3 will be MCM on Big Navi at least, but Lovelace is just an evolution of Ampere. It is reportedly 60-80% faster than Ampere, but RDNA3 is at least 100% faster, and the biggest Navi 31 could be 200% faster, albeit at an obscene $2K price.
I wouldn't bet on an MCM design for gaming at this early stage; SLI and Xfire died for a reason LOL.
#14
Minus Infinity
nguyen: I wouldn't bet on an MCM design for gaming at this early stage; SLI and Xfire died for a reason LOL.
I'm only talking about MCM in the flagship, not the mainstream. They might have a 7950 XT, 7900 XT and 7800 XT. The 7950 XT would be $2K and just for bragging rights. I doubt the 4090 would get near it if the specs are to be believed.
#15
Tomorrow
Minus Infinity: That is not a given. There are leaks suggesting that if RDNA3 is as good as rumored, Nvidia will skip Lovelace and go straight to Hopper for desktop. RDNA3 will be MCM on Big Navi at least, but Lovelace is just an evolution of Ampere. It is reportedly 60-80% faster than Ampere, but RDNA3 is at least 100% faster, and the biggest Navi 31 could be 200% faster, albeit at an obscene $2K price.
We don't know. Nvidia is a black (green?) box when it comes to keeping these things close to its chest. The leaks about AMD and Intel products tend to be far more reliable.
nguyen: I wouldn't bet on an MCM design for gaming at this early stage; SLI and Xfire died for a reason LOL.
MCM is invisible to the OS and games. It's a hardware solution that does not depend on OS or game developers optimizing for it; as far as they are concerned, they see one monolithic chip. Load balancing is done in hardware, at least according to what AMD's patents have shown so far. SLI and Crossfire being dead is good. Nothing good ever came out of those.
#16
nguyen
Tomorrow: MCM is invisible to the OS and games. It's a hardware solution that does not depend on OS or game developers optimizing for it; as far as they are concerned, they see one monolithic chip. Load balancing is done in hardware, at least according to what AMD's patents have shown so far. SLI and Crossfire being dead is good. Nothing good ever came out of those.
If the MCM design leads to unwanted stuttering, I would rather stick to a huge monolithic chip.
Between 120 FPS with mad stuttering and a smooth 80 FPS I would pick the latter LOL. I play games, not benchmarks; same reason I haven't gone back to SLI ever since I bought the first-ever SLI GPU (7950 GX2).
#17
Tomorrow
nguyen: If the MCM design leads to unwanted stuttering, I would rather stick to a huge monolithic chip.
Between 120 FPS with mad stuttering and a smooth 80 FPS I would pick the latter LOL. I play games, not benchmarks; same reason I haven't gone back to SLI ever since I bought the first-ever SLI GPU (7950 GX2).
Why would MCM lead to stuttering? MCM CPUs have been fine, for example. Monolithic chips are getting more and more expensive and have an effective limit of around 800 mm². MCMs can scale higher, for example four 400 mm² chips, though the first iterations will use two, at least in gaming.
#18
nguyen
Tomorrow: Why would MCM lead to stuttering? MCM CPUs have been fine, for example. Monolithic chips are getting more and more expensive and have an effective limit of around 800 mm². MCMs can scale higher, for example four 400 mm² chips, though the first iterations will use two, at least in gaming.
Well, MCM will have higher latency than monolithic, that's for sure.
The overhead associated with MCM for gaming is not yet known at this point. Nvidia and AMD have probably thought about MCM for a long time and just waited for the right kind of interconnect technology to make it possible.
While AMD is going to use a big pool of Infinity Cache, Nvidia will probably use networking tech from Mellanox, like the PAM4 signaling on GDDR6X. No one knows which interconnect will allow the better MCM design at this point, or whether MCM is suitable for gaming at all or just meant for workstation tasks.
#19
wolf
Better Than Native
nguyen: If the MCM design leads to unwanted stuttering, I would rather stick to a huge monolithic chip.
I guess I'd have to hope, and to an extent bank on, the idea that if they are going to do it, they've figured that out, because nobody wants that stuttery mess.
#20
Tomorrow
nguyen: Well, MCM will have higher latency than monolithic, that's for sure.
The overhead associated with MCM for gaming is not yet known at this point. Nvidia and AMD have probably thought about MCM for a long time and just waited for the right kind of interconnect technology to make it possible.
While AMD is going to use a big pool of Infinity Cache, Nvidia will probably use networking tech from Mellanox, like the PAM4 signaling on GDDR6X. No one knows which interconnect will allow the better MCM design at this point, or whether MCM is suitable for gaming at all or just meant for workstation tasks.
Roughly 40 ns vs 60 ns for monolithic vs MCM, at least on CPUs. On GPUs, latency is far less of an issue: GDDR6 itself has much higher latency than DDR4, for example, but despite that it is still used as system RAM on consoles. GPUs are more about bandwidth and throughput. If they are bringing out MCM GPUs, then I'm assuming it's OK.
#21
ppn
Looking at GA100's 65.6 MTr/mm² density on N7, this new N5 should land around 118 MTr/mm²; Ampere on Samsung 8 nm sits at only 44 MTr/mm². That means the maximum EUV die of 421 mm² could contain about 50 billion transistors, which is just mind-blowing.
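For what it's worth, that arithmetic checks out; a quick sketch using the figures above (the ~1.8x N7-to-N5 density scaling is an assumption implied by the post, not an official number):

# Transistor-density arithmetic from the post above (all values approximate)
ga100_density_n7 = 65.6                          # MTr/mm^2, GA100 on TSMC N7
n5_scaling = 1.8                                 # assumed N7 -> N5 density improvement
n5_density = ga100_density_n7 * n5_scaling       # ~118 MTr/mm^2
die_area_mm2 = 421                               # quoted maximum EUV die size
transistors_billion = n5_density * die_area_mm2 / 1000
print(f"{n5_density:.0f} MTr/mm^2 -> {transistors_billion:.1f}B transistors")  # ~118, ~49.7B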
#22
chodaboy19
Would it make sense to stop working with Samsung after just one product launch? I would think that, given the supply constraints, Nvidia would continue to use both TSMC and Samsung. Samsung themselves are investing many billions to fix their manufacturing issues, so how much validity does this news item carry?
#23
THU31
Good. The Samsung 8N process is trash for big chips. Ampere efficiency is garbage without severe undervolting.

My 3080 can push over 270 W with ray-tracing, at just 1800 MHz and 0.8 V. That is crazy.

Regular games do 200-230 W. Vsynced, rarely getting past 70-80% GPU usage.

At stock settings the clock can actually drop below 1800 MHz with ray tracing while drawing over 350 W. That is madness.
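A rough way to see why the undervolt helps so much: dynamic power scales roughly with V² x f. A minimal sketch, assuming a stock voltage of about 0.95 V at the same ~1800 MHz (that stock voltage is my assumption, not a measured value, and leakage is ignored):

# Dynamic power scales roughly as P ~ C * V^2 * f (leakage ignored)
def relative_power(v, f, v_ref, f_ref):
    return (v / v_ref) ** 2 * (f / f_ref)

stock_v, stock_f = 0.95, 1800    # assumed stock operating point (voltage is a guess)
uv_v, uv_f = 0.80, 1800          # undervolted operating point from the post above
ratio = relative_power(uv_v, uv_f, stock_v, stock_f)
print(f"Undervolt draws ~{ratio:.0%} of stock power at the same clock")  # ~71%
print(f"~350 W stock -> ~{350 * ratio:.0f} W")                           # ~248 W

That ballpark lines up with the ~270 W observed after undervolting versus 350 W+ at stock.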
#24
Tomorrow
THU31: Good. The Samsung 8N process is trash for big chips. Ampere efficiency is garbage without severe undervolting.

My 3080 can push over 270 W with ray-tracing, at just 1800 MHz and 0.8 V. That is crazy.

Regular games do 200-230 W. Vsynced, rarely getting past 70-80% GPU usage.

At stock settings the clock can actually drop below 1800 MHz with ray tracing while drawing over 350 W. That is madness.
That's crazy. A TSMC 12 nm 2080 Ti with a 380 W limit BIOS can do 2050 MHz+ at 380 W. 1800 MHz stock at 350 W is just bad for an "8 nm" process.
#25
THU31
I am always getting lower clocks than reviewers do, even though temperatures are good. Non-RT games usually stayed just above 1900 MHz at stock (while pretty much constantly drawing 350 W); sometimes the clock dropped below 1900 MHz.

I would actually not use this card if stock settings were the only option. The amount of heat is not acceptable to me. I got the card knowing I would be severely undervolting it, and it is still super fast with good efficiency.
I undervolted my 1080 and 2070 SUPER too, but their power draw was low enough that I got higher-than-stock performance. I will probably always undervolt from now on, but hopefully Lovelace will get results similar to Pascal and Turing.