
AI and HPC Demand Set to Boost HBM Volume by Almost 60% in 2023

TheLostSwede

News Editor
High Bandwidth Memory (HBM) is emerging as the preferred solution for overcoming the transfer speed restrictions imposed by the bandwidth limitations of DDR SDRAM in high-speed computation. HBM is recognized for its exceptional transmission efficiency and plays a pivotal role in allowing core computational components to operate at their maximum capacity. Top-tier AI server GPUs have set a new industry standard by primarily using HBM. TrendForce forecasts that global demand for HBM will grow by almost 60% year-over-year in 2023, reaching 290 million GB, with a further 30% growth expected in 2024.
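For a rough sense of what these growth rates imply, here is a minimal back-of-envelope sketch in Python. Only the 2023 volume (290 million GB) and the two growth percentages come from TrendForce; the 2022 baseline and 2024 figure are simple arithmetic derived from them:

```python
# Back-of-envelope projection of HBM volume from the quoted figures.
# Source figures: ~290 million GB in 2023, ~60% YoY growth in 2023,
# ~30% YoY growth forecast for 2024. The 2022 and 2024 numbers below
# are back-calculated, not independently reported.
hbm_2023_gb = 290e6   # forecast 2023 volume, in GB
growth_2023 = 0.60    # ~60% year-over-year growth in 2023
growth_2024 = 0.30    # ~30% year-over-year growth forecast for 2024

hbm_2022_gb = hbm_2023_gb / (1 + growth_2023)  # implied 2022 baseline
hbm_2024_gb = hbm_2023_gb * (1 + growth_2024)  # projected 2024 volume

print(f"2022 (implied):  {hbm_2022_gb / 1e6:.0f} million GB")  # ~181
print(f"2023 (forecast): {hbm_2023_gb / 1e6:.0f} million GB")  # 290
print(f"2024 (forecast): {hbm_2024_gb / 1e6:.0f} million GB")  # ~377
```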

TrendForce's forecast for 2025 assumes five large-scale AIGC products on the level of ChatGPT, 25 mid-size AIGC products comparable to Midjourney, and 80 small AIGC products; under these assumptions, the minimum computing resources required globally could range from 145,600 to 233,700 Nvidia A100 GPUs. Emerging technologies such as supercomputers, 8K video streaming, and AR/VR are also expected to increase the workload on cloud computing systems as demand for high-speed computing escalates.
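As a purely hypothetical illustration of how a bottom-up estimate like this is assembled, the sketch below assigns assumed per-tier A100 counts and sums them. The per-product numbers are invented for illustration and are not TrendForce's inputs, so the totals only land near the quoted 145,600 to 233,700 range:

```python
# Hypothetical bottom-up estimate of global A100 demand by product tier.
# Product counts (5 / 25 / 80) are from the article; the per-product
# GPU ranges are assumptions made up for this illustration.
tiers = {
    # tier: (number of products, assumed A100s per product: low, high)
    "large (ChatGPT-scale)":  (5, 20_000, 30_000),
    "mid (Midjourney-scale)": (25, 1_000, 2_000),
    "small":                  (80, 250, 500),
}

low  = sum(n * lo for n, lo, _ in tiers.values())
high = sum(n * hi for n, _, hi in tiers.values())

print(f"Estimated global A100 demand: {low:,} to {high:,} GPUs")
# -> 145,000 to 240,000 with these assumed inputs
```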

HBM is unequivocally a superior solution for building high-speed computing platforms, thanks to its higher bandwidth and lower energy consumption compared to DDR SDRAM. The contrast is stark: DDR4 SDRAM and DDR5 SDRAM, released in 2014 and 2020 respectively, differ in bandwidth by only a factor of two. Whether DDR5 or a future DDR6 is used, the quest for higher transmission performance inevitably increases power consumption, which can adversely affect system performance. Taking HBM3 and DDR5 as examples, the former's bandwidth is 15 times that of the latter and can be further enhanced by adding more stacked chips. Furthermore, HBM can replace a portion of GDDR SDRAM or DDR SDRAM, managing power consumption more effectively.
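To make the bandwidth gap concrete, here is a small sketch comparing representative speed grades: a DDR5-4800 DIMM on a 64-bit channel against a single HBM3 stack running 6.4 Gb/s per pin on a 1024-bit interface. The exact ratio depends on which parts are compared, so this particular pairing yields roughly 21x rather than the 15x quoted above:

```python
# Peak bandwidth = data rate per pin (GT/s) * bus width (bits) / 8 bits per byte.
def bandwidth_gb_s(data_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return data_rate_gtps * bus_width_bits / 8

ddr5 = bandwidth_gb_s(4.8, 64)     # DDR5-4800 DIMM, 64-bit channel
hbm3 = bandwidth_gb_s(6.4, 1024)   # HBM3 stack, 1024-bit interface

print(f"DDR5-4800 DIMM: {ddr5:6.1f} GB/s")    # ~38.4 GB/s
print(f"HBM3 stack:     {hbm3:6.1f} GB/s")    # ~819.2 GB/s
print(f"Ratio:          {hbm3 / ddr5:.1f}x")  # ~21x for this pairing
```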

TrendForce concludes that the current driving force behind rising demand is AI servers equipped with Nvidia A100 and H100 GPUs and AMD MI300 accelerators, along with large CSPs such as Google and AWS that are developing their own ASICs. It is estimated that shipments of AI servers, including those equipped with GPUs, FPGAs, and ASICs, will reach nearly 1.2 million units in 2023, an annual growth rate of almost 38%. TrendForce also anticipates a concurrent surge in AI chip shipments, with growth potentially exceeding 50%.
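A quick back-calculation shows what the quoted growth rate implies for the 2022 baseline; only the 2023 volume and growth rate are from TrendForce:

```python
# Implied 2022 AI server shipments, back-calculated from the 2023 forecast.
shipments_2023 = 1_200_000  # ~1.2 million units forecast for 2023
growth_2023 = 0.38          # ~38% year-over-year growth

shipments_2022 = shipments_2023 / (1 + growth_2023)
print(f"Implied 2022 shipments: {shipments_2022:,.0f} units")  # ~869,565
```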

View at TechPowerUp Main Site | Source
 
Cool, more volume means it will be more affordable to use in consumer products as well.
 