Thursday, January 16th 2025
SK hynix Ships HBM4 Samples to NVIDIA in June, Mass Production Slated for Q3 2025
SK hynix has accelerated its HBM4 development schedule, according to a report from ZDNet. The company now intends to ship HBM4 samples to NVIDIA this June, earlier than originally planned, and hopes to begin supplying products by the end of Q3 2025. The push is likely aimed at securing a head start in the next-generation HBM market. To meet the accelerated schedule, SK hynix has set up a dedicated HBM4 development team to supply NVIDIA. Industry sources indicated on January 15th that SK hynix plans to deliver its first customer samples of HBM4 in early June this year. The company reached a major milestone when it completed the HBM4 tape-out, the final design step, in Q4 2024.
HBM4 marks the sixth generation of high-bandwidth memory, which stacks DRAM dies vertically. It follows HBM3E, the current fifth generation, and large-scale production is expected to begin in late 2025 at the earliest. HBM4 doubles the interface width to 2,048 I/O channels, up from 1,024 in its predecessor, roughly doubling per-stack data transfer capability. NVIDIA had planned to use 12-layer stacked HBM4 in its 2026 "Rubin" line of high-performance GPUs, but it has since pulled the "Rubin" timeline forward, aiming for a launch in late 2025. A source familiar with the matter explained, "It seems that NVIDIA's will to launch Rubin early is stronger than expected, to the point that it is pushing forward trial production to the second half of this year." He added, "In line with this, memory companies such as SK hynix are also pushing for early supply of samples. Product supply could be possible as early as the end of the third quarter."
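As a rough illustration of where that bandwidth doubling comes from: per-stack throughput is simply the interface width multiplied by the per-pin data rate. A minimal sketch in Python, where the pin speeds are assumed values for illustration rather than figures from the report:

    # Per-stack bandwidth in GB/s = I/O width (bits) * per-pin data rate (Gbps) / 8 bits per byte
    def stack_bandwidth_gbs(io_width_bits, pin_speed_gbps):
        return io_width_bits * pin_speed_gbps / 8

    # HBM3E uses 1,024 I/O pins per stack; HBM4 doubles that to 2,048 (per the article).
    # The per-pin speeds below are assumptions for illustration only.
    print(stack_bandwidth_gbs(1024, 9.6))  # ~1229 GB/s for an HBM3E stack
    print(stack_bandwidth_gbs(2048, 8.0))  # ~2048 GB/s for an HBM4 stack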
Source: ZDNet
13 Comments on SK hynix Ships HBM4 Samples to NVIDIA in June, Mass Production Slated for Q3 2025
The 5090 with GDDR7 at 512-bit manages 1.8TB/s, which is higher than the A100 40GB PCIe (1.6TB/s) and pretty near the A100 80GB SXM/H100 80GB PCIe (2TB/s), all of which use HBM2e, and even the H100 SXM 64GB (2TB/s, HBM3).
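For context, the 1.8 TB/s figure follows from the same bus-width-times-data-rate arithmetic; the 28 Gbps GDDR7 per-pin rate below is assumed here for illustration:

    # Memory bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8
    bus_width_bits = 512   # RTX 5090 memory bus
    pin_speed_gbps = 28    # GDDR7 per-pin rate, assumed here
    print(bus_width_bits * pin_speed_gbps / 8)  # 1792 GB/s, i.e. ~1.8 TB/s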
To reach such high bandwidth you'd need enough stacks, which would be hella expensive and would also give a consumer GPU way more memory than it needs, capacities usually reserved for enterprise offerings.
I would have applications that need every bit of core speed and don't need much memory.
As for the advantages, HBM is far more power efficient than GDDR of the same generation. One stack of HBM4 would offer 89% of the bandwidth of the 5090's GDDR7 at a fraction of the power. Alternatively, two stacks of HBM3e would exceed that bandwidth and increase capacity. HBM PHYs also require less area than GDDR PHYs, so you could either have a smaller die or spend the saved area and power on more SMs.
I believe this was also the reason why RDNA4 multi-chiplet high-end versions were canned. The lowest 4-Hi stack is 16GB using 4GB layers. So two 16GB stacks would offer 32GB with 3.2TB/s of speed. And HBM3e is cheaper as it's not the latest and greatest.
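A minimal sketch of the stack arithmetic behind these comparisons. The layer capacities and per-pin data rates below are assumptions for illustration; shipping HBM3E and HBM4 parts span a range of speeds, so the exact totals and percentages shift accordingly:

    # Capacity and bandwidth for a stacked-HBM configuration (illustrative assumptions).
    def hbm_config(stacks, layers, gb_per_layer, io_width_bits, pin_speed_gbps):
        capacity_gb = stacks * layers * gb_per_layer
        bandwidth_gbs = stacks * io_width_bits * pin_speed_gbps / 8
        return capacity_gb, bandwidth_gbs

    # Two 4-Hi HBM3E stacks with 4 GB layers at an assumed 9.6 Gbps per pin:
    print(hbm_config(2, 4, 4, 1024, 9.6))   # (32 GB, ~2458 GB/s), well above the 5090's 1792 GB/s

    # One HBM4 stack with 2,048 I/O pins at an assumed 6.25 Gbps per pin:
    print(hbm_config(1, 4, 4, 2048, 6.25))  # (16 GB, 1600 GB/s), ~89% of the 5090's bandwidth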