
SK hynix Develops Industry's First 12-Layer HBM3, Provides Samples To Customers

btarunr

Editor & Senior Moderator
SK hynix announced today that it has become the first in the industry to develop a 12-layer HBM3 product with 24 gigabytes (GB) of memory capacity, currently the largest available, and said that customers' performance evaluation of samples is underway. HBM (High Bandwidth Memory) is a high-value, high-performance memory that vertically interconnects multiple DRAM chips, dramatically increasing data processing speed compared to traditional DRAM products. HBM3 is the fourth-generation product, succeeding HBM, HBM2, and HBM2E.

"The company succeeded in developing the 24 GB package product that increased the memory capacity by 50% from the previous product, following the mass production of the world's first HBM3 in June last year," SK hynix said. "We will be able to supply the new products to the market from the second half of the year, in line with growing demand for premium memory products driven by the AI-powered chatbot industry." SK hynix engineers improved process efficiency and performance stability by applying Advanced Mass Reflow Molded Underfill (MR-MUF)# technology to the latest product, while Through Silicon Via (TSV)## technology reduced the thickness of a single DRAM chip by 40%, achieving the same stack height level as the 16 GB product.
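The 50% capacity increase and the unchanged stack height both fall out of simple arithmetic if one assumes the common HBM3 configuration of 16 Gb DRAM dies, with the previous 16 GB product being an 8-layer stack (assumptions; the article states only the capacities and the 40% thinning). A quick sketch:

```python
# Back-of-the-envelope check of the figures above.
# The per-die density (16 Gb) and the 8-layer baseline are assumptions
# based on common HBM3 configurations; the article does not state them.

GBIT_PER_DIE = 16  # assumed DRAM die density in gigabits

cap_8high_gb = 8 * GBIT_PER_DIE / 8    # previous 8-layer product -> 16 GB
cap_12high_gb = 12 * GBIT_PER_DIE / 8  # new 12-layer product -> 24 GB

increase = (cap_12high_gb - cap_8high_gb) / cap_8high_gb
print(cap_8high_gb, cap_12high_gb, increase)  # 16.0 24.0 0.5

# Thinning each die by 40% is what keeps 12 layers at the old height:
# 12 dies at 0.6 relative thickness = 7.2 "old die" units of silicon,
# under the 8 units of the 8-layer stack.
print(12 * (1 - 0.40))  # 7.2
```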



# MR-MUF (Mass Reflow Molded Underfill): A method of placing multiple chips on the lower substrate and bonding them at once through reflow, then simultaneously filling the gaps between the chips, and between the chips and the substrate, with a mold material.

## TSV (Through Silicon Via): An interconnect technology used in advanced packaging that links the upper and lower chips with electrodes that vertically pass through thousands of fine holes in the DRAM chips. SK hynix's HBM3, which integrates this technology, can process up to 819 GB per second, meaning that 163 FHD (Full-HD) movies can be transmitted in a single second.
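The movie figure implies roughly 5 GB per FHD film, and the 819 GB/s figure itself matches a 1024-bit HBM3 stack interface running at 6.4 Gb/s per pin (the per-pin rate is an assumption from typical HBM3 parts, not stated in the footnote):

```python
# Sanity-checking the TSV footnote's bandwidth claim.
# The 6.4 Gb/s per-pin rate is assumed from typical HBM3 devices.

bus_width_bits = 1024  # one HBM3 stack interface
pin_rate_gbps = 6.4    # assumed per-pin data rate

bandwidth_gb_s = bus_width_bits * pin_rate_gbps / 8
print(bandwidth_gb_s)  # 819.2 -> the "819 GB per second" figure

# 163 FHD movies per second implies the assumed movie size:
print(bandwidth_gb_s / 163)  # ~5.03 GB per movie
```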


The HBM, first developed by SK hynix in 2013, has drawn broad attention from the memory chip industry for its crucial role in implementing generative AI that operates in high-performance computing (HPC) systems.

The latest HBM3 standard, in particular, is considered the optimal product for rapid processing of large volumes of data, and therefore its adoption by major global tech companies is on the rise.

SK hynix has provided samples of its 24 GB HBM3 product to multiple customers, which have expressed high expectations for the latest product, while performance evaluation of the product is in progress.

"SK hynix was able to continuously develop a series of ultra-high-speed, high-capacity HBM products through its leading technologies in the back-end process," said Sang Hoo Hong, Head of Package & Test at SK hynix. "The company plans to complete mass-production preparation for the new product within the first half of the year to further solidify its leadership in the cutting-edge DRAM market in the era of AI."

View at TechPowerUp Main Site
 

Calatinus

Maybe it's time for AMD to dust off the HBM R9 Fury / HBM2 Vega 64 development paths and reconsider HBM3 as a good source of more VRAM, ending the planned clown-obsolescence of memory-bottlenecked graphics cards. Nvidia should take notes too.
 
HBM2E is plenty used in the industry with one of the biggest customers being NVIDIA actually.
I'm sure they will be in the front row for HBM3.
 
Maybe it's time for AMD to dust off the HBM R9 Fury / HBM2 Vega 64 development paths and reconsider HBM3 as a good source of more VRAM, ending the planned clown-obsolescence of memory-bottlenecked graphics cards. Nvidia should take notes too.
If you're willing to pay twice as much for GPUs. HBM is super expensive.
 
If you're willing to pay twice as much for GPUs. HBM is super expensive.
It's great if you want to put a Waterblock on the card though.
 
I am not sure that will ever really be available for consumer GPUs.

For sure, the margins on the high end are so ridiculous that they could still put this on their boards and make huge profits. But it looks like they don't want to affect their margins too much.

Still, with the huge increase in sales of AI silicon, which very frequently uses HBM, there might be enough mass production to bring the price down for high-end GPUs while keeping margins high.

We will see. In the past, the driving factor for GPUs was gaming. These days, it's almost anything but gaming!
 
Isn't this 24 GB in a single stack, though? That would be horrible for consumer cards. Two stacks of this HBM3 have about the same bandwidth as 384-bit GDDR7, so we would need at least two stacks. But why would you want that much memory on a consumer card? It would save on power budget compared to GDDR7, but cost a lot more to make. And now, with chiplet GPUs coming, the transistor savings of HBM are largely mitigated too, since you can move memory controllers and cache off the main die.
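The comparison in this comment can be sketched with rough numbers, assuming 6.4 Gb/s per pin for HBM3 and about 32 Gb/s per pin for GDDR7 (both per-pin rates are assumptions, not figures from the thread):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * gbps_per_pin / 8

# Two 1024-bit HBM3 stacks vs. a 384-bit GDDR7 bus (per-pin rates assumed).
hbm3_two_stacks = 2 * peak_bandwidth_gb_s(1024, 6.4)
gddr7_384bit = peak_bandwidth_gb_s(384, 32)

print(hbm3_two_stacks)  # 1638.4 GB/s
print(gddr7_384bit)     # 1536.0 GB/s
```

The two come out within a few percent of each other, which is the basis of the "same bandwidth" claim above.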
 