Monday, December 23rd 2024

NVIDIA GB300 "Blackwell Ultra" Will Feature 288 GB HBM3E Memory, 1400 W TDP

NVIDIA's "Blackwell" series is barely out, with B100, B200, and GB200 chips shipping to OEMs and hyperscalers, but the company is already setting its upgraded "Blackwell Ultra" plans in motion with its upcoming GB300 AI server. According to UDN, the next-generation NVIDIA system will be powered by the B300 GPU, operating at 1,400 W and delivering a remarkable 1.5x improvement in FP4 performance per card compared to its B200 predecessor. One of the most notable upgrades is the memory configuration, with each GPU now sporting 288 GB of HBM3E memory, a substantial increase from the 192 GB of the GB200. The new design implements a 12-layer stack architecture, advancing from the GB200's 8-layer configuration. The system's cooling infrastructure has been completely reimagined, incorporating advanced water cooling plates and enhanced quick disconnects in the liquid cooling system.

Networking capabilities have also seen a substantial upgrade, with ConnectX-8 network cards replacing the previous ConnectX-7 generation, while optical modules have been upgraded from 800G to 1.6T, ensuring faster data transmission. Regarding power management and reliability, the GB300 NVL72 cabinet will standardize the capacitor tray implementation, with an optional Battery Backup Unit (BBU) system. Each BBU module costs approximately $300 to manufacture, with a complete GB300 system's BBU configuration totaling around $1,500. The system's supercapacitor requirements are equally substantial, with each NVL72 rack requiring over 300 units, priced at $20-25 per unit during production due to their high-power nature. The GB300, pairing a Grace CPU with a Blackwell Ultra GPU, also introduces LPCAMM on its compute boards, indicating that the LPCAMM memory standard is about to take over servers, not just laptops and desktops. We will have to wait for the official launch to see the exact LPCAMM memory configurations.
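For readers keeping score, the rumored figures are internally consistent. A quick back-of-envelope check (all values are the unofficial numbers from the report above, not confirmed specifications):

```python
# Sanity-checking the rumored GB300 figures (all values from the report,
# none official).

# Memory: capacity scales in step with HBM stack height.
hbm_gb200, hbm_gb300 = 192, 288          # GB of HBM3E per GPU
stack_gb200, stack_gb300 = 8, 12         # layers per HBM stack
capacity_ratio = hbm_gb300 / hbm_gb200   # 1.5x
stack_ratio = stack_gb300 / stack_gb200  # also 1.5x, so per-layer density is unchanged

# BBU cost: ~$1,500 per system at ~$300 per module implies about 5 modules.
bbu_modules = 1500 / 300

# Supercapacitors: 300+ units per NVL72 rack at $20-25 each,
# so at least $6,000-7,500 per rack in supercaps alone.
supercap_cost_range = (300 * 20, 300 * 25)

print(capacity_ratio, stack_ratio, bbu_modules, supercap_cost_range)
```

In other words, the 1.5x capacity jump comes entirely from the taller 12-layer stacks, and the battery/supercapacitor line items are rounding errors next to the cost of the GPUs themselves.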
Source: UDN

10 Comments on NVIDIA GB300 "Blackwell Ultra" Will Feature 288 GB HBM3E Memory, 1400 W TDP

#1
KLMR
HBM3E
288GB
etc..
and people are still discussing whether 16GB is enough, or whether GDDR7Xplusultra will bring bandwidth to the consumer market if a 384-bit bus is used.

All I see is that they've translated the industrial cost per compute unit to the consumer market.
#2
Daven
KLMR said:
HBM3E
288GB
etc..
and people are still discussing whether 16GB is enough, or whether GDDR7Xplusultra will bring bandwidth to the consumer market if a 384-bit bus is used.

All I see is that they've translated the industrial cost per compute unit to the consumer market.
RAM comparisons between GPU compute and gaming GPUs are like comparing an orange to a Klingon. One has nothing to do with the other.
#3
Timbaloo
Daven said:
RAM comparisons between GPU compute and gaming GPUs are like comparing an orange to a Klingon. One has nothing to do with the other.
Not until you've seen a Klingon with liver failure, that is.
#4
Nostras
KLMR said:
HBM3E
288GB
etc..
and people are still discussing whether 16GB is enough, or whether GDDR7Xplusultra will bring bandwidth to the consumer market if a 384-bit bus is used.

All I see is that they've translated the industrial cost per compute unit to the consumer market.
It's almost as if Nvidia is doing this intentionally to prevent consumer sales from eating into their B2B products. Crazy.
#5
Daven
Nostras said:
It's almost as if Nvidia is doing this intentionally to prevent consumer sales from eating into their B2B products. Crazy.
Again, one has nothing to do with the other. It's not even the same kind of RAM.
#6
Punkenjoy
Daven said:
RAM comparisons between GPU compute and gaming GPUs are like comparing an orange to a Klingon. One has nothing to do with the other.
I disagree. HBM memory has already been found on gaming GPUs, and we will probably see it again in the future once prices come down.

The main reason Nvidia isn't putting more RAM on their gaming GPUs is that they don't want to hurt their margins too much. RAM scaling has slowed down drastically, and we no longer see the big density increases we saw in the past. Stacking is probably the only option there, but it's still costly. Until prices come down, we will stagnate.

On top of that, Nvidia's whole marketing strategy is to make you feel sorry for not buying a *090-class GPU.

They can put that much memory on these chips because they sell them at crazy prices. Their margins are still increasingly good. But gamers now have to adapt to the new reality that the primary clients of both Radeon and GeForce are no longer consumers, but businesses with deep pockets.

Gamers will now just get leftovers. Get used to it.
#7
Ruru
S.T.A.R.S.
Punkenjoy said:
I disagree. HBM memory has already been found on gaming GPUs, and we will probably see it again in the future once prices come down.

The main reason Nvidia isn't putting more RAM on their gaming GPUs is that they don't want to hurt their margins too much. RAM scaling has slowed down drastically, and we no longer see the big density increases we saw in the past. Stacking is probably the only option there, but it's still costly. Until prices come down, we will stagnate.

On top of that, Nvidia's whole marketing strategy is to make you feel sorry for not buying a *090-class GPU.

They can put that much memory on these chips because they sell them at crazy prices. Their margins are still increasingly good. But gamers now have to adapt to the new reality that the primary clients of both Radeon and GeForce are no longer consumers, but businesses with deep pockets.

Gamers will now just get leftovers. Get used to it.
Nah, dunno... Fury and Vega were pretty meh even with HBM, compared to Ngreedia cards in the same price range.

Though not gonna lie, they were way more interesting than the 900/1000 series cards, even though they weren't as fast.
#8
Punkenjoy
Fury and Vega used HBM way too early, a bit like RDNA 3 was too early on chiplets.

That seems to be a tendency on AMD's side: early-adopting newer manufacturing tech in the hope of gaining an advantage. It does not seem to have worked out for them yet, and I suspect it is related to the purge AMD made in their GPU division. (This, and also the fact that they will go with just one uArch in the future.)

But denser stacked memory will come, and sooner rather than later the memory will be on package. GDDRX memory already needs to be really close to the GPU; at some point you can't maintain huge speeds over long distances.

In the next decade, we will certainly see GPU packages with multiple dies and stacked memory. The huge initial cost is currently being paid by those DC/cloud providers and other AI shops.

It's sad that there is stagnation right now, but this high-end tech will one day be within reach for gaming folks.
#9
JustBenching
Punkenjoy said:
I disagree. HBM memory has already been found on gaming GPUs, and we will probably see it again in the future once prices come down.

The main reason Nvidia isn't putting more RAM on their gaming GPUs is that they don't want to hurt their margins too much. RAM scaling has slowed down drastically, and we no longer see the big density increases we saw in the past. Stacking is probably the only option there, but it's still costly. Until prices come down, we will stagnate.

On top of that, Nvidia's whole marketing strategy is to make you feel sorry for not buying a *090-class GPU.

They can put that much memory on these chips because they sell them at crazy prices. Their margins are still increasingly good. But gamers now have to adapt to the new reality that the primary clients of both Radeon and GeForce are no longer consumers, but businesses with deep pockets.

Gamers will now just get leftovers. Get used to it.
I think the main reason Nvidia isn't adding much VRAM isn't to screw with gamers; they want to screw with professionals who don't care that much about compute performance but need the VRAM. As a consequence, yes, gamers are also screwed. Nvidia is trying to solve the issue with some "AI textures" crap so they can keep selling low-VRAM cards while pros move to the higher-end GPUs.
#10
Chrispy_
LOL, talking about the $25 cost per supercapacitor and the $1,500-per-rack cost of batteries seems like ignoring the elephant(s) in the room:
  • A rack of GB200s costs about 3 million bucks, if you are placing an order big enough to let you negotiate a good price (i.e., you are Apple).
  • One single GB200 blade? $60,000 for just the GPU and no supporting hardware.
  • A rack of GB200s uses about 120 kW!
I am fairly certain GB200 won't suddenly stop being relevant, so expect GB300 to cost more by at least the proportional difference in performance, and more realistically a huge premium on top of that.