Monday, July 26th 2021
NVIDIA "Ada Lovelace" Architecture Designed for N5, GeForce Returns to TSMC
NVIDIA's upcoming "Ada Lovelace" architecture, spanning both compute and graphics, is reportedly being designed for TSMC's 5 nanometer (N5) silicon fabrication node. This marks NVIDIA's return to the Taiwanese foundry after its brief excursion to Samsung with the 8 nm "Ampere" graphics architecture; "Ampere" compute dies continue to be built on TSMC's 7 nm node. NVIDIA is looking to double compute performance on its next-generation GPUs, with throughput approaching 70 TFLOP/s, from a near-doubling of CUDA cores generation-over-generation, running at clock speeds above 2 GHz. Expect "Ada Lovelace" no earlier than 2022, as TSMC N5 matures.
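For context on the 70 TFLOP/s figure: peak FP32 throughput is conventionally computed as core count × 2 FLOPs per clock (one fused multiply-add) × clock speed. A minimal sketch of that arithmetic; the doubled core count below is an assumption extrapolated from the full GA102's 10,752 cores, not a confirmed spec:

```python
# Back-of-the-envelope peak FP32 throughput: cores x 2 FLOPs (FMA) x clock.
# The next-gen core count is an illustrative assumption, not a confirmed spec.
def peak_fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """Theoretical peak: each core retires one FMA (2 FLOPs) per clock."""
    return cuda_cores * 2 * clock_ghz / 1000.0

print(peak_fp32_tflops(10752, 1.7))  # full GA102 at ~1.7 GHz: ~36.6 TFLOP/s
print(peak_fp32_tflops(18432, 2.0))  # hypothetical doubled part: ~73.7 TFLOP/s
```

At roughly twice the cores and above 2 GHz, the arithmetic lands right in the reported ~70 TFLOP/s ballpark.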
Source:
HotHardware
26 Comments on NVIDIA "Ada Lovelace" Architecture Designed for N5, GeForce Returns to TSMC
Lots of Radeon cards at double MSRP, however.
I hope that Lovelace, or whatever it ends up being called, uses TSMC once again, and that Micron fixes their G6X power draw or Samsung comes out with 20 Gbps G6 to replace G6X. Turing was an insult, with nonexistent (RT) and bad (DLSS 1.0) features and a high price. Ampere is just expensive to produce, hot, low-yielding and power hungry. Samsung's 8 nm process was never meant to produce such large chips. Even in smartphones, Samsung's 8 nm was always losing to TSMC.
The only reason Ampere is half decent is Nvidia's architecture, plus the monstrous cooling solutions from Nvidia and AIBs that keep it in check.
If we were not in the middle of a global pandemic, supply shortage and mining boom, the low (at least lower than Turing's) MSRPs would have made Ampere tolerable. But not as great as Maxwell or Pascal were, especially the 1080 Ti when it came out. $700 was a steal for it, and even years later Nvidia could only produce the 2080 Ti, which was only slightly faster. Only with Ampere was the 1080 Ti finally beaten by midrange cards. Cards that cost more than $700...
*checks notes*
Sure doesn't seem that way.
Between 120 FPS with mad stuttering and a smooth 80 FPS, I would pick the latter, LOL. I play games, not benchmarks; same reason I haven't gone back to SLI ever since I bought the first ever SLI GPU (7950GX2).
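The smooth-80-over-stuttery-120 point is exactly what frame-time percentiles capture: average FPS hides spikes, while 1% lows expose them. A minimal sketch with made-up frame times (illustrative numbers, not measured data):

```python
# Sketch: why average FPS can hide stutter. Frame times in milliseconds
# below are illustrative, not measurements.
def fps_stats(frame_times_ms: list[float]) -> tuple[float, float]:
    """Return (average FPS, 1% low FPS) from per-frame render times."""
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    worst = sorted(frame_times_ms, reverse=True)
    one_pct = worst[: max(1, len(worst) // 100)]  # slowest 1% of frames
    low_fps = 1000.0 * len(one_pct) / sum(one_pct)
    return avg_fps, low_fps

smooth   = [12.5] * 100               # steady 12.5 ms per frame = 80 FPS
stuttery = [6.0] * 95 + [60.0] * 5    # fast frames punctuated by spikes
print(fps_stats(smooth))    # (80.0, 80.0): smooth throughout
print(fps_stats(stuttery))  # ~115 FPS average, but 1% lows around 17 FPS
```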
The overhead associated with MCM for gaming is not yet known. Nvidia and AMD have probably been thinking about MCM for a long time and were just waiting for the right kind of interconnect technology to make it possible.
While AMD is going to use a big pool of Infinity Cache, Nvidia will probably use networking tech from Mellanox, like the PAM4 signalling on GDDR6X. No one knows which interconnect will allow the better MCM design at this point, or whether MCM is suitable for gaming at all or just meant for workstation tasks.
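On the PAM4 mention: PAM4 signalling packs two bits into each of four amplitude levels, doubling per-pin data rate over two-level NRZ at the same symbol rate, which is how GDDR6X gets its bandwidth. A toy encoder sketch; the Gray-coded level mapping illustrates the general technique, not Micron's or NVIDIA's actual implementation:

```python
# Toy PAM4 encoder: 2 bits per symbol, four amplitude levels.
# Gray coding (adjacent levels differ by one bit) limits the damage when
# a level is misread as its neighbour. Illustrative only.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def encode_pam4(bits: list[int]) -> list[int]:
    """Map a bit stream to PAM4 symbols, two bits at a time."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# Eight bits become four symbols: the same symbol rate carries twice the
# data of two-level (NRZ) signalling.
print(encode_pam4([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
```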
My 3080 can push over 270 W with ray tracing, at just 1800 MHz and 0.8 V. That is crazy.
Regular games do 200-230 W when Vsynced, rarely getting past 70-80% GPU usage.
At stock settings the clock can actually drop below 1800 MHz with ray tracing while drawing over 350 W. That is madness.
I would actually not use this card if the stock settings were the only option; the amount of heat is not acceptable to me. I got the card knowing I would be severely undervolting it, and it is still super fast with good efficiency.
I undervolted my 1080 and 2070 SUPER too, but there the power draw was low enough that I got higher-than-stock performance. I will probably always undervolt from now on; hopefully Lovelace will get similar results to Pascal and Turing.
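The disproportionate payoff from undervolting follows from the standard dynamic-power relation, P_dyn ≈ C·V²·f: power falls with the square of voltage but only linearly with clock. A minimal sketch, assuming a hypothetical 1.05 V / 1900 MHz stock point for the ~350 W figure above (both stock values are guesses for illustration):

```python
# Dynamic power scales roughly with voltage squared times frequency:
# P_dyn ~ C * V^2 * f. Stock voltage and clock here are assumptions
# for illustration, not measured values.
def scaled_power(p_stock_w: float, v_stock: float, f_stock_mhz: float,
                 v_new: float, f_new_mhz: float) -> float:
    """Estimate dynamic power after a voltage/frequency change."""
    return p_stock_w * (v_new / v_stock) ** 2 * (f_new_mhz / f_stock_mhz)

# ~350 W at an assumed 1.05 V / 1900 MHz stock point,
# undervolted to 0.80 V / 1800 MHz:
print(scaled_power(350, 1.05, 1900, 0.80, 1800))  # ~192 W (dynamic part only)
```

The model covers only dynamic power, so real draw lands higher (closer to the ~270 W reported above) once leakage and workload variation are included, but the quadratic voltage term is why dropping to 0.8 V buys so much.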