Tuesday, March 1st 2022

NVIDIA to Split Graphics and Compute Architecture Naming, "Blackwell" Architecture Spotted

The recent NVIDIA data-leak surfaces information on various upcoming graphics parts. Besides "Ada Lovelace" and "Hopper," we come across a new codename, "Blackwell." It turns out that NVIDIA is splitting the graphics and compute architecture naming with the next generation, not unlike what AMD did with its RDNA and CDNA series. The current "Ampere" architecture is used for both compute and graphics, with the streaming multiprocessors for the two being slightly different—the compute "Ampere" has more FP64 and Tensor components, while the graphics "Ampere" does away with these in favor of RT cores and graphics-relevant components.

The graphics architecture to succeed GeForce "Ampere" will be GeForce "Ada Lovelace." GPUs in this series are identified in the leaked code as "AD102," "AD103," "AD104," "AD106," "AD107," and "AD10B," succeeding a similar numbering for parts in the "A" (GeForce Ampere) series. The compute architecture succeeding "Ampere" will be codenamed "Hopper," with parts in the series codenamed "GH100" and "GH202." Another compute or datacenter architecture is "Blackwell," with parts codenamed "GB100" and "GB102." From all accounts, NVIDIA is planning to launch the GeForce 40-series "Ada" graphics card lineup in the second half of 2022. The company is in need of a similar refresh for its compute product lineup, and could debut "Hopper" either toward the end of 2022 or next year. "Blackwell" could follow "Hopper."
Source: VideoCardz

14 Comments on NVIDIA to Split Graphics and Compute Architecture Naming, "Blackwell" Architecture Spotted

#1
bug
From all accounts, NVIDIA is planning to launch the GeForce 40-series "Ada" graphics card lineup in the second half of 2022.
And by "launch" you mean put the name on some slides, right? Because actual cards that people can buy are so 2018...
Posted on Reply
#2
DrCR
bug: And by "launch" you mean put the name on some slides, right? Because actual cards that people can buy are so 2018...
Nah, you’ll definitely be able to get one … if you trade in your car for one.
Posted on Reply
#3
zlobby
'Unveiled'... Why it seems that the recent leak from the hack messed nvidia's plans a bit?
Posted on Reply
#4
bug
DrCR: Nah, you’ll definitely be able to get one … if you trade in your car for one.
That's true. But how many people drive a Ferrari?
Posted on Reply
#5
W1zzard
zlobby: 'Unveiled'... Why it seems that the recent leak from the hack messed nvidia's plans a bit?
Reworded to "Spotted", to not suggest that there was an official NVIDIA statement
Posted on Reply
#6
Chaitanya
bug: And by "launch" you mean put the name on some slides, right? Because actual cards that people can buy are so 2018...
There are no more kidneys left for people to sell with the absurd prices we are seeing.
Posted on Reply
#7
Guwapo77
Interesting how Nvidia is following AMD's lead here with the split of architectures and adopting a MCM design. Looks like AMD was really on to something...
Posted on Reply
#8
bug
Guwapo77: Interesting how Nvidia is following AMD's lead here with the split of architectures and adopting a MCM design. Looks like AMD was really on to something...
There's no lead to follow here. Nvidia was on to something when they managed to design one architecture that would scale from mobile all the way to datacenter. That was some serious cost reduction. Apparently that doesn't work anymore. Hopefully it leads to smaller dies.
Posted on Reply
#9
zlobby
W1zzard: Reworded to "Spotted", to not suggest that there was an official NVIDIA statement
Ah, thanks! As it is now evident I only saw the title. :D
Posted on Reply
#10
Oberon
Things that aren't actually news.
Posted on Reply
#11
Guwapo77
bug: There's no lead to follow here. Nvidia was on to something when they managed to design one architecture that would scale from mobile all the way to datacenter. That was some serious cost reduction. Apparently that doesn't work anymore. Hopefully it leads to smaller dies.
I'm not saying Nvidia didn't make a stellar design...they did. However, they are following AMD when it comes to splitting up their architecture. Now they have an Infinity Cache as well? I don't know man, it certainly seems like they are following AMD. The question then becomes, who's design will reign supreme?
Posted on Reply
#12
bug
Guwapo77: I'm not saying Nvidia didn't make a stellar design...they did. However, they are following AMD when it comes to splitting up their architecture. Now they have an Infinity Cache as well? I don't know man, it certainly seems like they are following AMD. The question then becomes, who's design will reign supreme?
Like I said, "split" architecture was the status quo until Nvidia made something better. Going back to "split" is not really following anything. And this may come as a little surprise, but AMD didn't invent bigger caches either. Caches are always a game of balance between size and latency. And they're highly sensitive to workloads as well. There's no right solution here, just solutions that will hold for a few years.
Posted on Reply
#13
Guwapo77
bug: Like I said, "split" architecture was the status quo until Nvidia made something better. Going back to "split" is not really following anything. And this may come as a little surprise, but AMD didn't invent bigger caches either. Caches are always a game of balance between size and latency. And they're highly sensitive to workloads as well. There's no right solution here, just solutions that will hold for a few years.
Are you talking about like the A100? Has Nvidia ever split their whole product range? I don't recall that happening. It's just really interesting how these decisions are being made after AMD has found success from this strategy. /shrugs Anywho, next generation has me really excited and I'm going with whoever makes the best GPU at 4K gaming.
Posted on Reply
#14
bug
Guwapo77: Are you talking about like the A100? Has Nvidia ever split their whole product range? I don't recall that happening. It's just really interesting how these decisions are being made after AMD has found success from this strategy. /shrugs Anywho, next generation has me really excited and I'm going with whoever makes the best GPU at 4K gaming.
I believe it was around Maxwell (can't recall exactly, sorry) when Nvidia was able to use the same architecture/shaders across their entire lineup, from mobile to servers. Before that, mobile would require different shaders from the desktop SKUs (it wasn't called a different architecture back then, but the silicon was different). Up until now, that unification also covered the HPC parts, but now apparently HPC has grown different enough to warrant its own spin on the silicon.
Posted on Reply