
TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

AleksandarK

News Editor
Staff member
Joined
Aug 19, 2017
Messages
2,657 (0.99/day)
During the European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This significant development is expected to substantially improve the performance, power efficiency, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. The shift from a traditional 1024-bit interface to an ultra-wide 2048-bit interface is a key aspect of the new HBM4 standard. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective solution for achieving HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to install HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures the integrity of HBM4 channel signals, thermal accuracy, and electromagnetic interference (EMI) in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, meaning that massive high-performance chips are already utilizing this technology and getting ready for volume manufacturing.
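
As a sanity check on those headline numbers, here is a quick back-of-the-envelope sketch (the per-pin data rate and per-die capacity are assumptions for illustration, since the HBM4 spec was not final at the time):

```python
# Rough HBM4 per-stack math (assumed figures, not from TSMC's announcement)
bus_width_bits = 2048        # HBM4 doubles the interface from 1024 to 2048 bits
data_rate_gbps = 8           # assumed per-pin data rate in Gbit/s
die_capacity_gb = 4          # assumed 32 Gbit (4 GB) per DRAM die

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8   # bits -> bytes
print(f"Per-stack bandwidth: ~{bandwidth_gbs / 1000:.1f} TB/s")   # ~2.0 TB/s

for layers in (12, 16):      # 12-Hi and 16-Hi stacks
    print(f"{layers}-Hi stack: {layers * die_capacity_gb} GB")    # 48 GB, 64 GB
```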



View at TechPowerUp Main Site | Source
 
Joined
Jan 3, 2021
Messages
3,612 (2.49/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
It's not at all clear what logic the HBM4 base (logic) die contains. The memory controller is located on the GPU or whatever the main processor is, so what remains are some signal buffers, testing logic and maybe power management. Nothing that would require 12nm. This, basically:

View attachment 347708


Or is this old architecture about to change substantially? The guys at SemiEngineering have a thorough understanding of what they write about, and they say, "SK hynix said the collaboration will enable breakthroughs in memory performance with increased density of the memory controller at the base of the HBM stack."
 
Joined
Nov 6, 2016
Messages
1,777 (0.60/day)
Location
NH, USA
System Name Lightbringer
Processor Ryzen 7 2700X
Motherboard Asus ROG Strix X470-F Gaming
Cooling Enermax Liqmax Iii 360mm AIO
Memory G.Skill Trident Z RGB 32GB (8GBx4) 3200Mhz CL 14
Video Card(s) Sapphire RX 5700XT Nitro+
Storage Hp EX950 2TB NVMe M.2, HP EX950 1TB NVMe M.2, Samsung 860 EVO 2TB
Display(s) LG 34BK95U-W 34" 5120 x 2160
Case Lian Li PC-O11 Dynamic (White)
Power Supply BeQuiet Straight Power 11 850w Gold Rated PSU
Mouse Glorious Model O (Matte White)
Keyboard Royal Kludge RK71
Software Windows 10
64GB stacks? So then we could see 6x stacks for a total of 384 GB of HBM4? That's wild, can you imagine what the next super APU from AMD will contain? It really will be an entire system on an AIC.
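
Quick napkin math on that (the six-stack part is hypothetical; the per-stack figures are just the ones quoted in the article):

```python
# Hypothetical accelerator with six 16-Hi HBM4 stacks (illustrative only)
stacks = 6
capacity_per_stack_gb = 64      # 16-Hi stack from the article
bandwidth_per_stack_tbs = 2.0   # ">2 TB/s" per stack

print(f"Total capacity:  {stacks * capacity_per_stack_gb} GB")           # 384 GB
print(f"Total bandwidth: {stacks * bandwidth_per_stack_tbs:.0f}+ TB/s")  # 12+ TB/s
```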

It's a shame AI is eating up all the HBM, can you imagine what that new Strix Halo APU would be capable of with 16GB of HBM on package?
 
Joined
Apr 13, 2022
Messages
1,197 (1.21/day)
64GB stacks? So then we could see 6x stacks for a total of 384 GB of HBM4? That's wild, can you imagine what the next super APU from AMD will contain? It really will be an entire system on an AIC.

It's a shame AI is eating up all the HBM, can you imagine what that new Strix Halo APU would be capable of with 16GB of HBM on package?
AI is not eating all the HBM; it's also available on pro cards, aka Quadro. The issue is that the cost of it is not really a good idea for consumer cards. And for all the footsies stompsies people already throw about the cost of GPUs, imagine the billow biting and tantrums with the added costs of HBM!
 
Joined
Dec 30, 2010
Messages
2,200 (0.43/day)
64GB stacks? So then we could see 6x stacks for a total of 384 GB of HBM4? That's wild, can you imagine what the next super APU from AMD will contain? It really will be an entire system on an AIC.

It's a shame AI is eating up all the HBM, can you imagine what that new Strix Halo APU would be capable of with 16GB of HBM on package?

Adding more HBM to an APU is not going to change its performance just like that. You still need to have a GPU that's capable of taking advantage of all that extra memory bandwidth.

And if a GPU is rated for over 150 W of power, what do you have left for the CPU? You're starting to get into absurd cooling territory here if it were just an APU.
 
Joined
Mar 13, 2021
Messages
476 (0.34/day)
Processor AMD 7600x
Motherboard Asrock x670e Steel Legend
Cooling Silver Arrow Extreme IBe Rev B with 2x 120 Gentle Typhoons
Memory 4x16Gb Patriot Viper Non RGB @ 6000 30-36-36-36-40
Video Card(s) XFX 6950XT MERC 319
Storage 2x Crucial P5 Plus 1Tb NVME
Display(s) 3x Dell Ultrasharp U2414h
Case Coolermaster Stacker 832
Power Supply Thermaltake Toughpower PF3 850 watt
Mouse Logitech G502 (OG)
Keyboard Logitech G512
It's not at all clear what logic the HBM4 base (logic) die contains. The memory controller is located on the GPU or whatever the main processor is, so what remains are some signal buffers, testing logic and maybe power management. Nothing that would require 12nm. This, basically:

View attachment 347708

Or is this old architecture about to change substantially? The guys at SemiEngineering have a thorough understanding of what they write about, and they say, "SK hynix said the collaboration will enable breakthroughs in memory performance with increased density of the memory controller at the base of the HBM stack."


Correct me if I am wrong, but I was under the impression that the memory controller of the device spoke to the logic die, the logic die spoke to the HBM banks, and there wasn't a direct connection between the memory and the memory controller.

https://upcommons.upc.edu/bitstream/handle/2117/362010/HBM.pdf
HBM STRUCTURE AND FEATURES
Point B



I wonder, then, if they are going to move from the 1024-bit interface to, say, 1536 or 2048 bits per die? Does that just mean you can go beyond 8 "stacks" per die, which is what SK Hynix was alluding to?
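
For reference, here's how the interface width would break down into channels if HBM4 kept an HBM3-style channel layout (an assumption on my part, not something from the spec):

```python
# Interface width vs. channel count, assuming HBM3-style 64-bit channels
# split into two 32-bit pseudo-channels each (HBM4's actual layout is assumed here)
def channels(total_width_bits, channel_width_bits=64):
    return total_width_bits // channel_width_bits

for width in (1024, 1536, 2048):
    n = channels(width)
    print(f"{width}-bit interface -> {n} channels ({n * 2} pseudo-channels)")
# 1024-bit -> 16 channels (32 pseudo-channels), as in HBM3
# 2048-bit -> 32 channels (64 pseudo-channels)
```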
 
Joined
Mar 28, 2024
Messages
115 (0.42/day)
Processor AMD 7800X3D
Motherboard MSI B650 Tomahawk
Cooling Noctua NHU12S
Memory 2x16 GB GSKILL 6000MHZ CL28
Video Card(s) Powercolor 7900 GRE
Storage 1TB Samsung 980 PRO
Display(s) LG 32GP750 31.5" 2K QHD (2560 x 1440) 165Hz Gaming Monitor
Case Coolermater HAF 650
Audio Device(s) BeyerDynamic Amiron Home
Power Supply Seasonic 850W Gold
Adding more HBM to an APU is not going to change its performance just like that. You still need to have a GPU that's capable of taking advantage of all that extra memory bandwidth.

AMD's 8000G series seems to scale pretty linearly with memory bandwidth. I know HBM4 would be massive overkill for this, but fast RAM does help.
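
For a rough sense of the gap an iGPU is working with, here's a sketch comparing peak theoretical bandwidth of dual-channel DDR5 against a single HBM stack (the DDR5 speeds are common retail kits; the HBM figures are approximate):

```python
# Peak theoretical bandwidth: dual-channel DDR5 vs. a single HBM stack
def ddr5_dual_channel_gbs(mt_per_s):
    return 2 * 64 * mt_per_s / 8 / 1000   # 2 channels x 64 bits, bits -> bytes

for speed in (4800, 6000, 8000):
    print(f"DDR5-{speed} dual channel: {ddr5_dual_channel_gbs(speed):.0f} GB/s")
# DDR5-4800: 77 GB/s, DDR5-6000: 96 GB/s, DDR5-8000: 128 GB/s

print("Single HBM3E stack: ~1200 GB/s; single HBM4 stack: ~2000 GB/s")
```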
 
Joined
Jul 13, 2016
Messages
3,337 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage P5800X 1.6TB 4x 15.36TB Micron 9300 Pro 4x WD Black 8TB M.2
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) JDS Element IV, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse PMM P-305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
The issue is that the cost of it is not really a good idea for consumer cards.

It costs more than GDDR, but to the point of being illogical for consumer products? I don't think so; Vega 56 was $400 at launch. If AMD could slip HBM into cards at that price point, with the economies of scale being much more in favor of HBM today, it should absolutely be possible to put it on consumer cards, especially after you consider the price of current video cards.

Consumer cards don't need anywhere near the latest HBM either. They'd be fine going with last-gen to save money.


And for all the footsies stompsies people already throw about the cost of GPUs, imagine the billow biting and tantrums with the added costs of HBM!

People are mad at the unjustified inflation of GPU prices, which makes sense given performance per dollar metrics have hardly been improving (particularly on the "low" end). Meanwhile game requirements are increasing, which has translated to a regression in how much performance people's budgets get them in new games.

Demeaning these complaints as "billow biting" and "tantrums" is simply ignoring these facts and is nothing more than an ad hominem attack designed to discredit anyone who disagrees with your argument without actually having to debate the facts.

I really don't think a price increase would be required at all given Nvidia operates at a gross margin of 64% (vastly higher than even Apple), but I also don't think people would be mad at a $30 price increase on their $1,000 GPU if it meant more memory bandwidth and lower power usage. Nvidia has already made the argument that new nodes are a large part of why pricing has increased (an argument that is dubious, in part because their profit margins have increased in step with their price increases and because wafer cost is only a portion of their total expenses). Following Nvidia's own logic, nodes are expensive and therefore the cost to implement HBM becomes increasingly trivial compared to the rising cost of nodes, especially when you consider that a cash-starved AMD did it back when they were making only a third of the margins Nvidia is now.
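
To put rough numbers on that (purely illustrative; the $30 delta and the 64% margin are just the figures discussed above, not disclosed BOM data):

```python
# How a hypothetical $30 HBM cost delta could flow through a $1,000 GPU
# (all inputs are assumptions for illustration, not actual BOM figures)
msrp = 1000
gross_margin = 0.64          # reported company-wide gross margin
bom_delta_hbm = 30           # assumed extra memory cost vs. GDDR

cost_basis = msrp * (1 - gross_margin)   # ~$360 of cost per $1,000 of revenue
margin_if_absorbed = 1 - (cost_basis + bom_delta_hbm) / msrp
price_to_keep_margin = (cost_basis + bom_delta_hbm) / (1 - gross_margin)

print(f"Cost basis at 64% margin:        ${cost_basis:.0f}")
print(f"Margin if the delta is absorbed: {margin_if_absorbed:.0%}")     # ~61%
print(f"Price to keep a 64% margin:      ${price_to_keep_margin:.0f}")  # ~$1083
```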
 
Last edited:
Joined
Jun 1, 2010
Messages
392 (0.07/day)
System Name Very old, but all I've got ®
Processor So old, you don't wanna know... Really!
AI is not eating all the HBM it's also available on pro cards aka quadro. The issue is that the cost of it is not really a good idea for consumer cards. And for all the footsies stompsies people already throw about the cost of GPUs imagine the billow biting and tantrums with the added costs of HBM!
I mean, at the price of these top consumer solutions like the 4090, or other products worth almost two grand, they had better have HBM on board instead of GDDR6X. A limited series of 4090/7900 XTX HBM versions could easily find buyers (these cards already ship in much lower volumes anyway). Not to mention that with the amounts of memory these top cards use, HBM would suit them even better, especially by reducing the thermal footprint, along with a much wider bus. And as it was beneficial to GCN, it might also benefit Ada/Blackwell and RDNA 3/4/5, especially since RDNA MCDs can be altered to any memory type/configuration.

It costs more than GDDR, but to the point of being illogical for consumer products? I don't think so; Vega 56 was $400 at launch. If AMD could slip HBM into cards at that price point, with the economies of scale being much more in favor of HBM today, it should absolutely be possible to put it on consumer cards, especially after you consider the price of current video cards.

Consumer cards don't need anywhere near the latest HBM either. They'd be fine going with last-gen to save money.




People are mad at the unjustified inflation of GPU prices, which makes sense given performance per dollar metrics have hardly been improving (particularly on the "low" end). Meanwhile game requirements are increasing, which has translated to a regression in how much performance people's budgets get them in new games.

Demeaning these complaints as "billow biting" and "tantrums" is simply ignoring these facts and is nothing more than an ad hominem attack designed to discredit anyone who disagrees with your argument without actually having to debate the facts.

I really don't think a price increase would be required at all given Nvidia operates at a gross margin of 64% (vastly higher than even Apple), but I also don't think people would be mad at a $30 price increase on their $1,000 GPU if it meant more memory bandwidth and lower power usage. Nvidia has already made the argument that new nodes are a large part of why pricing has increased (an argument that is dubious, in part because their profit margins have increased in step with their price increases and because wafer cost is only a portion of their total expenses). Following Nvidia's own logic, nodes are expensive and therefore the cost to implement HBM becomes increasingly trivial compared to the rising cost of nodes, especially when you consider that a cash-starved AMD did it back when they were making only a third of the margins Nvidia is now.
People don't just complain. They are no longer able to afford the low- and mid-range cards they used to buy for decades, because those cards suddenly got premium price tags. Considering the margins, the entire range of GPUs is highly overpriced. If the BOM had risen that much, it should have eaten a significant portion of those margins. But it didn't.

And this isn't only demeaning; it discards any reasoning. It's as if no one is allowed to question these companies about their behavior, and we should just take everything they say and report as a given.

P.S.: Intel used to make Kaby Lake-G with a single HBM2 stack on board. These are still being used in specific embedded systems, like this, for around $300. Obviously these APUs are old, and the HBM2 stacks were installed more than five years ago, when HBM prices were nowhere near as high. But there's nothing wrong with the concept, and the NUCs based on KBL-G had impressive performance. I don't see why AMD can't revive this approach, since they'd benefit from it like no one else.
 
Last edited:
Joined
Mar 13, 2021
Messages
476 (0.34/day)
Processor AMD 7600x
Motherboard Asrock x670e Steel Legend
Cooling Silver Arrow Extreme IBe Rev B with 2x 120 Gentle Typhoons
Memory 4x16Gb Patriot Viper Non RGB @ 6000 30-36-36-36-40
Video Card(s) XFX 6950XT MERC 319
Storage 2x Crucial P5 Plus 1Tb NVME
Display(s) 3x Dell Ultrasharp U2414h
Case Coolermaster Stacker 832
Power Supply Thermaltake Toughpower PF3 850 watt
Mouse Logitech G502 (OG)
Keyboard Logitech G512
HBM, when Vega/Kaby Lake-G were being released, didn't really have mass-market appeal anywhere at the time, so it didn't command a massive premium over GDDR like it does now.

HBM now, due to the AI craze etc., cannot keep up with demand. I mean, all of SK Hynix's 2025 production has already been purchased, if I remember correctly from a few weeks ago.

So that "cheap" HBM that was offered back at the HBM1/2 days is NOT there anymore and it asks a hell of a premium vs GDDR.

Do you want to sell a $70k B200 or a $3k 5090? You see what I'm getting at.
 
Joined
Jun 1, 2010
Messages
392 (0.07/day)
System Name Very old, but all I've got ®
Processor So old, you don't wanna know... Really!
HBM, when Vega/Kaby Lake-G were being released, didn't really have mass-market appeal anywhere at the time, so it didn't command a massive premium over GDDR like it does now.

HBM now, due to the AI craze etc., cannot keep up with demand. I mean, all of SK Hynix's 2025 production has already been purchased, if I remember correctly from a few weeks ago.

So that "cheap" HBM that was offered back at the HBM1/2 days is NOT there anymore and it asks a hell of a premium vs GDDR.

Do you want to sell a $70k B200 or a $3k 5090? You see what I'm getting at.
This is true. But part of the "issue" is that AMD didn't secure even tiny amounts of HBM, even for an "experimental" APU series. I mean, Intel's Kaby Lake-G didn't live long, and only a handful were made. But they still find a place in custom products, years later.

Of course, HBM isn't viable inside desktop APUs, as those are direct derivatives of mobile silicon. But still, there could be laptops and mini-PCs where there's plenty of space for a bigger package substrate/die area (not restricted by an IHS/socket), which would benefit such devices, since it would lower (not remove) the need for fast and expensive SODIMMs, at least for the iGPU.

But it's hard to tell whether this approach is worthwhile or not, since neither AMD nor Intel has any consumer-grade APU+HBM product on the market right now, even in limited amounts, because, yeah, the entire HBM supply goes solely into the enterprise stuff. I mean, if they had at least some Hawk Point + HBM products, it might have demonstrated their viability and consumer interest.

But that's my take.

P.S.: "3K for 5090" is still too overpriced. There's no reason consumer cards with GDDR having such price tags. I doubt, that even with that that pricey exclusive GDD6x, costs a grand. And it's even more ridiculous, to see AMD cards being neck-and-neck in price with nVidia counterparts, while having "regular" slower GRDDR6.
 
Last edited:
Joined
Jul 13, 2016
Messages
3,337 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage P5800X 1.6TB 4x 15.36TB Micron 9300 Pro 4x WD Black 8TB M.2
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) JDS Element IV, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse PMM P-305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
HBM, when Vega/Kaby Lake-G were being released, didn't really have mass-market appeal anywhere at the time, so it didn't command a massive premium over GDDR like it does now.

HBM now, due to the AI craze etc., cannot keep up with demand. I mean, all of SK Hynix's 2025 production has already been purchased, if I remember correctly from a few weeks ago.

So that "cheap" HBM that was offered back at the HBM1/2 days is NOT there anymore and it asks a hell of a premium vs GDDR.

The AI craze started after the 4000 series was released, and well after the 3000 series release. By extension, it's almost irrelevant to point out that HBM is supply-constrained today, as it simply wasn't a factor when these generations were being designed.

Consider also that HBM allocation is secured years in advance to ensure partners are able to build out production to meet customer requirements. The current shortfall is due to unexpected demand, and that's where you start seeing supply constraints in the semiconductor industry, as production cannot just be ramped up willy-nilly. If Nvidia had wanted some 4000 series cards to have HBM, it would have secured allocation at least a year or more ahead of time. They would have signed a contract agreeing to reasonable rates and allotment at that time, meaning they would have gotten much better pricing than you do in the current market. What you are saying would only be relevant if Nvidia wanted some 5000 series cards to have HBM, which indeed is unlikely due to the price increases and short supply. That said, it doesn't apply to the 4000 series, where Nvidia signed contracts securing its wafers and memory well before HBM supply became an issue.

Do you want to sell a $70k B200 or a $3k 5090? You see what I'm getting at.

I'd assume this is indeed the reason why Nvidia is putting everything into enterprise / AI. The price they are fetching for those cards is insane.
 
Joined
Mar 13, 2021
Messages
476 (0.34/day)
Processor AMD 7600x
Motherboard Asrock x670e Steel Legend
Cooling Silver Arrow Extreme IBe Rev B with 2x 120 Gentle Typhoons
Memory 4x16Gb Patriot Viper Non RGB @ 6000 30-36-36-36-40
Video Card(s) XFX 6950XT MERC 319
Storage 2x Crucial P5 Plus 1Tb NVME
Display(s) 3x Dell Ultrasharp U2414h
Case Coolermaster Stacker 832
Power Supply Thermaltake Toughpower PF3 850 watt
Mouse Logitech G502 (OG)
Keyboard Logitech G512
GDDR6X was a pricey way of getting extra bandwidth from the same-width bus. GDDR7 is a halfway house between 6X and 6. These sorts of measures keep the "cheaper" GDDR relevant in modern times and have meant things like HBM/HMC get sidelined in the consumer space.
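
Roughly what that ladder looks like on a common 256-bit card (a sketch; the per-pin rates are typical shipping speeds, used only for illustration):

```python
# Peak bandwidth on a 256-bit bus for different GDDR generations
# (per-pin data rates are typical shipping speeds, for illustration only)
bus_width_bits = 256
rates_gbps = {"GDDR6": 16, "GDDR6X": 21, "GDDR7": 32}

for mem, rate in rates_gbps.items():
    print(f"{mem:7s} @ {rate} Gbps: {bus_width_bits * rate / 8:.0f} GB/s")
# GDDR6: 512 GB/s, GDDR6X: 672 GB/s, GDDR7: 1024 GB/s
```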

Of course, HBM isn't viable inside desktop APUs, as those are direct derivatives of mobile silicon.
What would have been a big push for it is if they had decided to go for HBM on, say, the PS5/Xbox Series X. AMD had already seen the benefits of HBM and had experience with it from Vega, but again, cost/supply would have been the biggest constraint for it in the console space.

So what do you do? Make it a halo-only product again, with maybe the 69xx getting HBM? You either go for the top end of HBM2 available at the time, which was limited to 8 GB per package, or you're looking at 4 stacks of the more common 4 GB layout to make up the 16 GB of the eventual 6900 XT. Now you are talking about a 4096-bit bus on a consumer GPU. How big is this die going to end up? HBM2E/HBM3 upped the dies per package to 12, drastically increasing the memory density available, which is why we are now seeing insane memory capacities on the MI250/H100 and their upcoming replacements, but those had only just been announced when the 6900 XT was being released.
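
Spelling out that trade-off (a sketch; the stack options are HBM2-era capacities, and an HBM 6900 XT is purely hypothetical):

```python
# Hypothetical ways to reach 16 GB on a 6900 XT-class GPU with HBM2
# (illustrative only; no such product existed)
target_gb = 16
options = [
    {"stack_gb": 8, "per_stack_bus": 1024},   # top-end 8 GB HBM2 stacks
    {"stack_gb": 4, "per_stack_bus": 1024},   # more common 4 GB stacks
]

for opt in options:
    stacks = target_gb // opt["stack_gb"]
    print(f"{stacks} x {opt['stack_gb']} GB stacks -> "
          f"{stacks * opt['per_stack_bus']}-bit total bus")
# 2 x 8 GB -> 2048-bit bus; 4 x 4 GB -> 4096-bit bus
```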

The AI craze started after the 4000 series was released, and well after the 3000 series release.
AI went into HYPERDRIVE at that point, but there had been a build-up of AI focus since 2016, when nVidia gifted its first DGX-1 to OpenAI. It was always going to happen; it was just a question of when it would become viable outside of supercomputers. The 3080, and especially the 3090, gave people the ability to run LLMs on their home systems, and the 4080/4090 are even better at it due to increased power and VRAM.

I'd assume this is indeed the reason why Nvidia is putting everything into enterprise / AI. The price they are fetching for those cards is insane.
It's why everyone, not just nVidia, is focusing so heavily on it. It has demand similar to the crypto boom, but the difference is that people are willing to add zeros to cheques to guarantee they get what they want when they want it. It's why you're seeing things like AMD helping TinyCorp try to get RDNA3 into the datacenter, as getting more AMD products into the datacenter means more EPYC and Instinct accelerators in there as well in the longer term.
 