Monday, August 14th 2023
NVIDIA Blackwell Graphics Architecture GPU Codenames Revealed, AD104 Has No Successor
The next-generation GeForce RTX 50-series graphics cards will be powered by the Blackwell graphics architecture, named after American mathematician David Blackwell. kopite7kimi, a reliable source for NVIDIA leaks, revealed what the lineup of GPUs behind the series could look like. It will reportedly be led by the GB202, followed by the GB203, then the GB205 and GB206, with the GB207 at the entry level. What's surprising here is the lack of a "GB204" succeeding the AD104, GA104, TU104, and a long line of successful performance-segment GPUs from NVIDIA.
The GeForce Blackwell ASIC series begins with "GB" (GeForce Blackwell) followed by a 200-series number. The last time NVIDIA used a 200-series ASIC number for GeForce GPUs was with "Maxwell," as those GPUs ended up being built on a more advanced node, and with a few more advanced features, than what the architecture was originally conceived for. For "Blackwell," the GB202 logically succeeds the AD102, GA102, TU102, and a long line of "big chips" that have powered the company's flagship client graphics cards. The GB203 succeeds the AD103 as a high SIMD-count GPU with a narrower memory bus than the GB202, powering the #2 and #3 SKUs in the series. What's curiously missing is a "GB104."

NVIDIA's xx04 ASICs have powered a long line of successful performance-segment through high-end SKUs, such as the TU104 powering the RTX 2080, and the GP104 powering the immensely popular GTX 1080 and GTX 1070 series. That denominator has been missing the mark for the past two generations, though. The "Ampere"-based GA104 powering the RTX 3070 may have sold in volume, but its maxed-out RTX 3070 Ti hasn't quite sold in numbers, and missed the mark against the similarly priced Radeon RX 6800. Even with Ada, while the AD104 powering the RTX 4070 may be selling in numbers, the maxed-out chip powering the RTX 4070 Ti misses the mark against the similarly priced RX 7900 XT. This prompted NVIDIA to introduce the AD103 in the desktop segment, a high CUDA core-count silicon with a mainstream 256-bit memory bus, to justify high-end pricing, a strategy that will continue in the GeForce Blackwell generation with the GB203.
As with the AD103, NVIDIA will leverage the high SIMD power of the GB203 to power high-end mobile SKUs. The introduction of the GB205 ASIC could be an indication that NVIDIA's next performance-segment GPU will come with a feature set that avoids the kind of controversy NVIDIA faced when trying to carve the original "RTX 4080 12 GB" out of the AD104 and its narrow 192-bit memory interface.
Given NVIDIA's 2-year cadence for new client graphics architectures, one can expect Blackwell to debut toward Q4-2024, to align with mass-production availability of the 3 nm foundry node.
Source: VideoCardz
71 Comments on NVIDIA Blackwell Graphics Architecture GPU Codenames Revealed, AD104 Has No Successor
I wasn't comparing anything here, you totally misunderstood. I was talking about an idea: instead of using one big monolithic chip, you can use multiple smaller chips or chiplets, where, in theory, you can double the performance of a video card just by adding more of them at a 2:1 ratio, therefore making high-end or enthusiast cards quite "easy" to manufacture.
3dfx weren't magical visionaries decades ahead of everyone else, the simple reason they developed SLI is that their GPUs were no longer competitive and SLI was a way to make up this performance gap without a fundamental redesign. SLI worked for 3dfx because GPUs were a lot simpler back then, so a multi-GPU implementation could also be simple.
It was an interesting stopgap at the right time, but ultimately it was another bad decision that killed the company - because 3dfx ended up focusing on it as the magic bullet to overcome the fundamental limitations of their architecture, instead of making the necessary design changes to regain competitiveness. Thus the doomed and company-dooming Voodoo 5, because it turns out that when your graphics card needs 4 GPUs to be competitive with a single GPU from NVIDIA or ATI, it makes that graphics card really freaking expensive for the same level of performance - and nobody's going to pay a lot more for the same performance.
Everything worked and works, including SLI (both 3dfx's and NVIDIA's), NVLink, chiplet designs, data center GPU farms, etc. What are you even talking about??
Hell, even inside a big GPU there are smaller blocks or cores that are interconnected... The key is to use different packaging to stack them without adding latency or reducing efficiency. The latest take on this is the chiplet design AMD is going to use, and there is even a brand-new article about it linked on this news forum.
Btw, you know what's funny? NVIDIA's RTX 4090 has the power of 16,384 Voodoo2 cards, for example. Considering that a Voodoo2 GPU had ~4 million transistors, when you do the math, it works out to almost exactly the transistor count of a modern monolithic GPU...
Wonders of miniaturization.
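For what it's worth, the arithmetic roughly checks out. A quick sanity check (the per-chip and AD102 transistor counts are approximate figures I'm assuming, not from the post above):

```python
# Back-of-the-envelope check of the Voodoo2 comparison above.
# Assumed figures: ~4 million transistors per Voodoo2 chip,
# ~76.3 billion for the AD102 die in the RTX 4090.
voodoo2_transistors = 4_000_000
cards = 16_384
ad102_transistors = 76_300_000_000

total = voodoo2_transistors * cards        # 65,536,000,000 (~65.5 billion)
print(f"{total / ad102_transistors:.0%}")  # ~86% of an AD102, same ballpark
```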
a) 3dfx had huge success initially, for years - not exactly a "flash in the pan." Especially as the IT sector can't be compared to the car sector, which, again, is laughable. 3dfx was a pioneer of the 3D graphics industry, well respected among people who understand a thing about PCs.
b) they decided to buy a factory and produce cards themselves, also refusing to let anyone else produce cards for them (for the Voodoo 4 and 5 line, which makes it obvious how late this was)
c) as the Voodoo 4 and 5 cards weren't huge successes and they overspent on that endeavor as well, they basically lost the company by going bankrupt.

If you aren't interested in the discussion, then you don't have to partake. But save us your boring attitude, please - aside from the many mistakes you make while being so sure of yourself. Nobody ever said that, are you sleeping or reading this discussion? Sorry, but that's utter nonsense. The fact is that SLI was their philosophy of doing tech back then, as simple as that, and not whatever you just made up here for the sake of having an argument that you can't win (since history and the facts still don't agree with you). SLI scaled so well that no other company could compete with it; there were arcade machines that used 8 of these chips in SLI together. Calling this "inferior tech" is just ridiculous, aside from the fact that they used this tech for many years, and it has nothing to do with going "SOS" on tech. Not really, but I have already explained what their ultimate failure was.
I will add to it more explanations:
- marketing: Nvidia's marketing was strong, same as it is now (even more so - their marketing was very toxic as well), and they convinced everyone that 3dfx cards were inferior because they lacked a few features that mattered zero back then. While also touting their "T&L" horn (among other things), which also wasn't very relevant - a handful of games (if that) supported it. Too bad that this is utterly wrong as well.
- The Voodoo 5 6000 was never released, so it never played any role in anything; you're basically talking up fantasies that never happened
- The Voodoo 5 6000 was proven to be WAY FASTER than any other card, IF it had been released, by reviewers who much later happened to get a few of these cards in working shape and review them.
- If the Voodoo 5 6000 had been released with way higher performance than the competition, the top dollar it would've cost would've been worth it. So you're wrong with that assertion as well.
- The Voodoo 5 5500 was fast enough to compete with the competition at the time. 3DFX lost the company due to other reasons already explained in this post.
Ultimately, if 3dfx hadn't invested in buying up a company to produce their own cards, they wouldn't have gone bankrupt. God knows what would've happened later, as 3dfx was already leaving the SLI route for the next generation and moving back to single-GPU cards with the successor to the "VSA-100" chip and architecture.
And that initial success was solely due to the fact that they had no competition. As soon as NVIDIA and ATI stepped up, 3dfx began to falter.

No other company competed with it because they didn't need to. Those other companies had simpler technology that could accomplish the same performance. It was inferior because it was unnecessary.

It was never released because it doomed the company, and a company that no longer exists can't release products. Its speed was not commensurate with its price point, and it lacked hardware T&L, which despite your earlier claim was incredibly important. See previous point.

The STB purchase was a manifestly stupid idea in multiple respects, but ultimately 3dfx simply lacked a strategy to compete with NVIDIA and ATI. That would've doomed them regardless of how much or how little capital they had. What happened was that NVIDIA put the 3dfx engineers to work on GeForce, and the result was the GeForce FX series, widely considered to be one of, if not the, worst series of GPUs ever released by the company.
Well, at least if the chosen API were Vulkan and not some proprietary one like Glide :P
RTX 5090 specs were leaked, rumoured performance uplift is 70% :eek:
videocardz.com/newz/nvidia-geforce-rtx-5090-rumors-2-9-ghz-boost-clock-1-5-tb-s-bandwidth-and-128mb-of-l2-cache
If this is real, and AMD abandons the high end altogether, then it's safe to assume that NVIDIA has a perfect chance to make the whole AMD lineup obsolete and dead as is...
1. If not making the single fastest GPU in the world could kill a company, then AMD would be a name long forgotten by now.
2. Even if AMD decided to stop selling GPUs altogether (which they won't as long as Sony and Microsoft are a thing), that would help you how exactly?
Ok, I'll break it down for your convenience.
A 70% uplift at the highest end means that more or less the whole RTX 5000 generation will move up by roughly the same 70%.
When AMD's fastest is still that pathetic garbage RX 7900 XTX, it will line up somewhere closer to the RTX 5050 than the RTX 5090.
Game over for AMD :D

I'm beginning to think that you are a troll.
The RTX 5090 will have a strong influence on the overall company reputation and will sell the other parts well.
And even if "only" 5% of the gamers buy the RTX 5090, it still means that large profit margins and revenue will flow towards nvidia, not towards AMD.
If the 7900 XTX is pure garbage, then so is the RTX 4080 and everything below. If we follow that logic, 0.76% of gamers have a proper video card (4090) according to the Steam survey, and what the rest of us use is garbage. The way you think the one single flagship GPU determines a whole company's success or failure is mind-boggling!
Like I said, it's not over for AMD as long as Sony and Microsoft keep buying their GPUs for their consoles, and as long as they price their cards right.
You also haven't answered the question: how would you benefit from AMD's demise? Do you want to pay $3,000 for a 5090 and maybe $4,000 for a 6090? As it has been for decades. And?
Don't forget that AMD is a much smaller company than Nvidia, with a lot less spending. They don't need as much cash to make a profit.
By the way, I'm not talking for AMD. I'm talking against your nonsense.
That bubble will explode sooner or later. Nothing on Earth is eternal, not even businesses :D
I think you need to reread everything and take some time to rethink and reconsider the whole situation.
The RTX 4090 is already 24% faster than the RX 7900 XTX, and that's before we take RT games into account, which makes the gap even wider - 50% faster than the RX 7900 XTX.
Do you know what will happen when you add 70% on top of those 50%?
I'll let your imagination work out the answer...
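For the record, here's the compounding spelled out, taking the 50% RT gap and the rumored 70% uplift above at face value (both are claims from this thread, not benchmarks):

```python
# Compounding the two claims above (both thread rumors, not measured data):
rt_gap = 1.50    # RTX 4090 vs RX 7900 XTX with RT, per the post above
uplift = 1.70    # rumored RTX 5090 gain over the RTX 4090

ratio = rt_gap * uplift
print(f"{ratio:.2f}x, i.e. {ratio - 1:.0%} faster")  # 2.55x, i.e. 155% faster
```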
And because Radeon is weak at mining and RT, lacks DLSS, and lacks other valued features...
Because GeForce is the famous luxury brand, while Radeon is more like the low-quality, cheap alternative in the eyes of buyers.
The bad thing is that so many years have passed and AMD did nothing to change - its marketing must be dismissed as useless.

They are not. They are direct competitors. But since AMD has serious problems with the RDNA 3 architecture, that card is much slower.
Speculation time (see the compounding sketch below the ladder):
RTX 4060 - 100% (baseline) for $300
RTX 4060 Ti - +20% for $400-500
RTX 4070 - +30% for $600
RTX 4070 Ti - +25% for $800
RTX 4080 - +30% for $1200
RTX 4090 - +30% for $1600
So... if:
RTX 5090 = RTX 4090 +70%
RTX 5080 = RTX 4080 +70%
RTX 5070 Ti = RTX 4070 Ti +70%
RTX 5070 = RTX 4070 +70%
RTX 5060 Ti = RTX 4060 Ti +70%
RTX 5060 = RTX 4060 +70%
then:
RTX 5060 for $300 ~ RX 7800 XT ~ RX 6900 XT
RTX 5060 Ti for $450 ~ RX 7900 XT
RTX 5070 for $600 ~ RX 7900 XTX OCed
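A minimal sketch of that compounding, assuming each step in the ladder is a gain over the previous card and the flat 70% uplift holds (all of it speculation from this post, not benchmark data):

```python
# Each entry: claimed gain over the previous card in the ladder above.
steps = [
    ("RTX 4060",    0.00),   # baseline = 100
    ("RTX 4060 Ti", 0.20),
    ("RTX 4070",    0.30),
    ("RTX 4070 Ti", 0.25),
    ("RTX 4080",    0.30),
    ("RTX 4090",    0.30),
]

score = 100.0
for card, gain in steps:
    score *= 1.0 + gain
    # Apply the rumored flat +70% to get the speculated RTX 50 counterpart.
    print(f"{card}: {score:.0f} -> 50-series equivalent: {score * 1.7:.0f}")
```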
Enjoy looking smug in your $35k BMW 5-series and all of its "extras and luxuries" cruising down the highway while my $15k Fiesta ST takes me to my destination just as quickly with just as much fun. ;)
The Radeon brand is weaker than GeForce only because of Nvidia's strong marketing campaign around its "luxuries". People believe that you need RT and DLSS and DL-whatnot to play games because that's what they're told by the media, when in fact, you don't. A $900 GPU directly competing with a $1600 one? What drugs have you been taking? Or is the 7600 directly competing with the 4070 Ti now? And the 6400 with the 3060 Ti? :roll:
videocardz.com/newz/nvidia-geforce-rtx-50-gb202-gpu-rumored-to-feature-192-sms-and-512-bit-memory-bus-according-to-kopite7kimi
RTX 5090 (GB202) could be coming with 512-bit 32 GB GDDR7 VRAM.
I guess NVIDIA must instead make the lower-end GB203, GB205, etc. parts come with more VRAM... because 12 GB won't work anymore... not that it works now...
I am not buying any 12 GB card today..
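As a side note, the ~1.5 TB/s figure in that link lines up with a 512-bit bus; a quick check (the 24 Gbps pin speed is my assumption - it's the data rate that makes the rumored numbers consistent, not a confirmed GDDR7 spec):

```python
# Sanity check on the rumored RTX 5090 memory figures. Bus width and the
# ~1.5 TB/s bandwidth come from the linked rumor; the 24 Gbps GDDR7 pin
# speed is assumed, chosen because it makes the two numbers line up.
bus_width_bits = 512
pin_speed_gbps = 24

bandwidth_gb_s = bus_width_bits / 8 * pin_speed_gbps
print(bandwidth_gb_s)  # 1536.0 GB/s, i.e. ~1.5 TB/s

# Capacity: 512-bit / 32-bit per chip = 16 chips; 16 x 2 GB = 32 GB.
print(512 // 32 * 2)   # 32 (GB)
```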