Monday, November 20th 2023
SK hynix Showcases Next-Gen AI and HPC Solutions at SC23
SK hynix presented its leading AI and high-performance computing (HPC) solutions at Supercomputing 2023 (SC23), held in Denver, Colorado from November 12 to 17. Organized by the Association for Computing Machinery and the IEEE Computer Society since 1988, the annual SC conference showcases the latest advancements in HPC, networking, storage, and data analysis. SK hynix marked its first appearance at the conference by introducing its groundbreaking memory solutions to the HPC community. During the six-day event, several SK hynix employees also gave presentations on the impact of the company's memory solutions on AI and HPC.
Displaying Advanced HPC & AI Products
At SC23, SK hynix showcased its products tailored for AI and HPC to underline its leadership in the AI memory field. Among these next-generation products, HBM3E attracted attention as the HBM solution meets the industry's highest standards of speed, capacity, heat dissipation, and power efficiency. These capabilities make it particularly suitable for data-intensive AI server systems. HBM3E was presented alongside NVIDIA's H100, a high-performance GPU for AI that uses HBM3 for its memory.
SK hynix also demonstrated AiMX, the company's generative AI accelerator card specialized for large language models (LLMs), which uses GDDR6-AiM chips that leverage processing-in-memory (PIM) technology. This product is set to play a key role in the advancement of data-intensive generative AI inference systems, as it significantly reduces AI inference time compared to GPU-based server systems while also consuming less power.
CXL was another highlight at SK hynix's booth. Based on PCIe, Compute Express Link (CXL) is a standardized interface that helps increase the efficiency of HPC systems. Offering flexible memory expansion, CXL is a promising interface for HPC workloads such as AI and big data applications. In particular, SK hynix's Niagara CXL disaggregated memory prototype platform was showcased as a pooled memory solution that can improve system performance in AI and big data distributed processing systems.
Additionally, SK hynix presented the results of its collaboration with Los Alamos National Laboratory (LANL) to improve the performance and reduce the energy requirements of HPC physics applications. Called the CXL-based computational memory solution (CMS), the product can accelerate indirect memory accesses while significantly reducing data movement. Such enhancements are also applicable to other memory-intensive domains such as AI and graph analytics.
Lastly, object-based computational storage (OCS) was shown as part of SK hynix's efforts to develop an analytics ecosystem with multiple partners. OCS minimizes data movement between analytics application systems and storage, lightens the storage software stack, and accelerates data analysis. Through a demonstration, the company showed how its interface technology enhances data processing capabilities in OCS.
Innovative Data Center & eSSD Solutions
SK hynix also displayed a range of its data center solutions at the conference, including its DDR5 Registered Dual In-line Memory Module (RDIMM). Built on the 1b nm node, the fifth generation of the 10 nm process technology, the DDR5 RDIMM reaches speeds of up to 6,400 megabits per second (Mbps). The display also featured the DDR5 Multiplexer Combined Ranks (MCR) DIMM, which reaches speeds of up to 8,800 Mbps. With such rapid speeds, these DDR5 solutions are suited for AI computing in high-performance servers.
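As a rough sense of scale for those per-pin data rates, a module's peak bandwidth can be estimated from the data rate and the standard 64-bit DDR5 data path. This is a simplified back-of-the-envelope sketch, ignoring ECC bits and the split into two 32-bit subchannels (which does not change the total):

```python
# Rough peak-bandwidth estimate for a DIMM:
# data rate (Mbps per pin) x bus width (bits) / 8 bits-per-byte
# Assumes a standard 64-bit DDR5 data path; ECC pins are excluded.

def peak_bandwidth_gbps(data_rate_mbps: int, bus_width_bits: int = 64) -> float:
    """Return approximate peak bandwidth in GB/s (decimal gigabytes)."""
    return data_rate_mbps * bus_width_bits / 8 / 1000

print(peak_bandwidth_gbps(6400))  # DDR5 RDIMM at 6,400 Mbps -> 51.2 GB/s
print(peak_bandwidth_gbps(8800))  # MCR DIMM at 8,800 Mbps   -> 70.4 GB/s
```

By this estimate, a single MCR DIMM at 8,800 Mbps offers roughly 37% more peak bandwidth than a 6,400 Mbps RDIMM, which is why these modules target bandwidth-hungry AI servers.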
Visitors to the SK hynix booth could also see its latest enterprise SSD (eSSD) products, including the PCIe Gen 5-based PS1010 E3.S and PS1030. In particular, the PS1030 delivers the industry's fastest sequential read speed of 14,800 megabytes per second (MBps), which makes it ideal for big data and machine learning.
Sharing the Potential of SK hynix's Memory Solutions
During the conference, SK hynix employees also held presentations on the application of the company's memory solutions for AI and HPC. On the fourth day of the conference, Yongkee Kwon, Technical Leader of PIM Hardware in the Solution Advanced Technology division, gave a talk titled "Cost-Effective Large Language Model (LLM) Inference Solution Using SK hynix's AiM." Kwon revealed how SK hynix's AiM, a PIM device specialized for LLMs, can significantly improve the performance and energy efficiency of LLM inference. When applied to Meta's Open Pre-trained Transformers (OPT) language model, an open-source alternative to OpenAI's GPT-3, AiM can reach speeds up to ten times higher than state-of-the-art GPU systems while also offering lower costs and energy consumption.
On the same day, Hokyoon Lee, Technical Leader of System Software in Memory Forest R&D, gave a presentation titled "CXL-based Memory Disaggregation for HPC and AI Workloads." SK hynix's Niagara addresses the issue of stranded memory, or unused memory in one server that cannot be utilized by other servers, with its elastic memory feature. Additionally, Niagara's memory sharing feature provides a solution to heavy network traffic in conventional distributed computing. In the session, the presenters demonstrated the effectiveness of memory sharing in a real simulation with the Ray distributed AI framework, which is used in ChatGPT.
A day later, Jongryool Kim, Director and Technical Leader of SOLAB in Memory Forest R&D, presented "Accelerating Data Analytics Using Object Based Computational Storage in an HPC." Introducing SK hynix's collaboration with LANL on computational storage technologies, Kim proposed object-based computational storage (OCS) as a new computational storage platform for data analytics in HPC. Due to its high scalability and data-aware characteristics, OCS can perform analytics independently without help from compute nodes, highlighting its potential as the future of computational storage in HPC.
As a globally leading AI memory provider, SK hynix will continue to develop solutions that enable the advancement of AI and HPC.
Source:
SK hynix