News Posts matching #HBM3E


ASRock Rack Unveils New Server Platforms Supporting AMD EPYC 9005 Series Processors and AMD Instinct MI325X Accelerators at AMD Advancing AI 2024

ASRock Rack Inc., a leading innovative server company, announced upgrades to its extensive lineup to support AMD EPYC 9005 Series processors. Among these updates is the introduction of the new 6U8M-TURIN2 GPU server. This advanced platform features AMD Instinct MI325X accelerators, specifically optimized for intensive enterprise AI applications, and will be showcased at AMD Advancing AI 2024.

ASRock Rack Introduces GPU Servers Powered by AMD EPYC 9005 Series Processors
AMD today revealed the 5th Generation AMD EPYC processors, offering a wide range of core counts (up to 192 cores), frequencies (up to 5 GHz), and expansive cache capacities. Select high-frequency processors, such as the AMD EPYC 9575F, are optimized for use as host CPUs in GPU-enabled systems. Additionally, the just-launched AMD Instinct MI325X accelerators feature substantial HBM3E memory and 6 TB/s of memory bandwidth, enabling quick access and efficient handling of large datasets and complex computations.

GIGABYTE Releases Servers with AMD EPYC 9005 Series Processors and AMD Instinct MI325X GPUs

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced support for AMD EPYC 9005 Series processors with the release of new GIGABYTE servers alongside BIOS updates for some existing GIGABYTE servers using the SP5 platform. This first wave of updates covers over 60 servers and motherboards that deliver exceptional performance with 5th Generation AMD EPYC processors. In addition, with the launch of the AMD Instinct MI325X accelerator, GIGABYTE has created a newly designed server, which will be showcased at SC24 (Nov. 19-21) in Atlanta.

New GIGABYTE Servers and Updates
To cover all possible workload scenarios, from modular-design servers to edge servers to enterprise-grade motherboards, these new solutions will ship with support for AMD EPYC 9005 Series processors already in place. The XV23-ZX0, one of the many new solutions, is notable for its modularized server design: it uses two AMD EPYC 9005 processors and supports up to four GPUs plus three additional FHFL slots. It also has 2+2 redundant power supplies on the front side for ease of access.

AMD Launches Instinct MI325X Accelerator for AI Workloads: 256 GB HBM3E Memory and 2.6 PetaFLOPS FP8 Compute

During its "Advancing AI" conference today, AMD has updated its AI accelerator portfolio with the Instinct MI325X accelerator, designed to succeed its MI300X predecessor. Built on the CDNA 3 architecture, Instinct MI325X brings a suite of improvements over the old SKU. Now, the MI325X features 256 GB of HBM3E memory running at 6 TB/s bandwidth. The capacity memory alone is a 1.8x improvement over the old MI300 SKU, which features 192 GB of regular HBM3 memory. Providing more memory capacity is crucial as upcoming AI workloads are training models with parameter counts measured in trillions, as opposed to billions with current models we have today. When it comes to compute resources, the Instinct MI325X provides 1.3 PetaFLOPS at FP16 and 2.6 PetaFLOPS at FP8 training and inference. This represents a 1.3x improvement over the Instinct MI300.

A chip alone is worthless without a good platform, so AMD made the Instinct MI325X OAM modules pin-compatible, drop-in replacements for the current MI300X platform. In systems packing eight MI325X accelerators, there are 2 TB of HBM3E memory with 48 TB/s of aggregate memory bandwidth; such a system achieves 10.4 PetaFLOPS of FP16 and 20.8 PetaFLOPS of FP8 compute performance. The company uses NVIDIA's H200 HGX as the reference for its competitive claims, stating that the Instinct MI325X system outperforms it by 1.3x across the board in memory bandwidth and FP16/FP8 compute performance, and by 1.8x in memory capacity.
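The eight-GPU platform numbers follow from linear scaling of the per-accelerator figures; a minimal Python sketch using only values quoted above:

```python
# Linear scaling of AMD's per-accelerator MI325X figures (from the article)
# to the eight-GPU platform, plus the capacity ratio versus MI300X.
per_gpu_hbm_gb = 256    # GB of HBM3E per MI325X
per_gpu_bw_tbs = 6.0    # TB/s memory bandwidth per MI325X
per_gpu_fp16_pf = 1.3   # PetaFLOPS, FP16
per_gpu_fp8_pf = 2.6    # PetaFLOPS, FP8
gpus = 8

print(f"Platform memory:    {per_gpu_hbm_gb * gpus / 1024:.0f} TB")  # 2 TB
print(f"Platform bandwidth: {per_gpu_bw_tbs * gpus:.0f} TB/s")       # 48 TB/s
print(f"Platform FP16:      {per_gpu_fp16_pf * gpus:.1f} PFLOPS")    # 10.4
print(f"Platform FP8:       {per_gpu_fp8_pf * gpus:.1f} PFLOPS")     # 20.8
print(f"Capacity vs. 192 GB MI300X: {256 / 192:.2f}x")               # 1.33x
```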

Micron Updates Corporate Logo with "Ahead of The Curve" Design

Today, Micron updated its corporate logo with new symbolism. The redesign comes as Micron celebrates over four decades of technological advancement in the semiconductor industry. The new logo features a distinctive silicon color, paying homage to the wafers at the core of Micron's memory and storage products. Its curved lettering represents the company's ability to stay ahead of industry trends and adapt to rapid technological change, while the design incorporates vibrant gradient colors inspired by light reflections on wafers.

This rebranding effort coincides with Micron's expanding role in AI, where memory and storage innovations are increasingly crucial. The company has positioned itself beyond a commodity memory supplier, now offering leadership in solutions for AI data centers, high-performance computing, and AI-enabled devices. The company has come far from its original 64K DRAM in 1981 to HBM3E DRAM today. Micron offers different HBM memory products, graphics memory powering consumer GPUs, CXL memory modules, and DRAM components and modules.

Synopsys and TSMC Pave the Path for Trillion-Transistor AI and Multi-Die Chip Design

Synopsys, Inc. today announced its continued, close collaboration with TSMC to deliver advanced EDA and IP solutions on TSMC's most advanced process and 3DFabric technologies to accelerate innovation for AI and multi-die designs. The relentless computational demands of AI applications require semiconductor technologies to keep pace. From an industry-leading, AI-driven EDA suite powered by Synopsys.ai for enhanced productivity and silicon results, to complete solutions that facilitate the migration to 2.5D/3D multi-die architectures, Synopsys and TSMC have worked closely for decades to pave the path for the future of billion- to trillion-transistor AI chip designs.

"TSMC is excited to collaborate with Synopsys to develop pioneering EDA and IP solutions tailored for the rigorous compute demands of AI designs on TSMC advanced process and 3DFabric technologies," said Dan Kochpatcharin, head of the Ecosystem and Alliance Management Division at TSMC. "The results of our latest collaboration across Synopsys' AI-driven EDA suite and silicon-proven IP have helped our mutual customers significantly enhance their productivity and deliver remarkable performance, power, and area results for advanced AI chip designs.

SK Hynix Begins Mass-production of 12-layer HBM3E Memory

SK hynix Inc. announced today that it has begun mass production of the world's first 12-layer HBM3E product with a 36 GB capacity, the largest of any HBM to date. The company plans to supply mass-produced products to customers within the year, proving its overwhelming technology leadership once again just six months after delivering the industry's first HBM3E 8-layer product to customers in March this year.

SK hynix is the only company in the world that has developed and supplied the entire HBM lineup from the first generation (HBM1) to the fifth generation (HBM3E), since releasing the world's first HBM in 2013. The company plans to continue its leadership in the AI memory market, addressing the growing needs of AI companies by being the first in the industry to mass-produce the 12-layer HBM3E.

Micron Announces 12-high HBM3E Memory, Bringing 36 GB Capacity and 1.2 TB/s Bandwidth

As AI workloads continue to evolve and expand, memory bandwidth and capacity are increasingly critical for system performance. The latest GPUs in the industry need the highest performance high bandwidth memory (HBM), significant memory capacity, as well as improved power efficiency. Micron is at the forefront of memory innovation to meet these needs and is now shipping production-capable HBM3E 12-high to key industry partners for qualification across the AI ecosystem.

Micron's industry-leading HBM3E 12-high 36 GB delivers significantly lower power consumption than our competitors' 8-high 24 GB offerings, despite having 50% more DRAM capacity in the package
Micron HBM3E 12-high boasts an impressive 36 GB capacity, a 50% increase over current HBM3E 8-high offerings, allowing larger AI models like Llama 2 with 70 billion parameters to run on a single processor. This capacity increase enables faster time to insight by avoiding CPU offload and GPU-to-GPU communication delays. Micron HBM3E 12-high 36 GB delivers significantly lower power consumption than competitors' HBM3E 8-high 24 GB solutions, and offers more than 1.2 terabytes per second (TB/s) of memory bandwidth at a pin speed greater than 9.2 gigabits per second (Gb/s). These combined advantages offer maximum throughput with the lowest power consumption, ensuring optimal outcomes for power-hungry data centers. Additionally, Micron HBM3E 12-high incorporates fully programmable MBIST that can run system-representative traffic at full spec speed, providing improved test coverage for expedited validation, enabling faster time to market, and enhancing system reliability.
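The bandwidth figure follows from the pin speed multiplied by the stack's interface width; a minimal sketch, assuming the standard 1,024-bit per-stack HBM3E interface:

```python
# Per-stack HBM3E bandwidth from pin speed, assuming the standard
# 1,024-bit (1,024-pin) interface per stack.
pin_speed_gbps = 9.2        # Gb/s per pin ("greater than 9.2" per Micron)
bus_width_bits = 1024

bandwidth_gbs = pin_speed_gbps * bus_width_bits / 8   # bits -> bytes
print(f"{bandwidth_gbs:.1f} GB/s per stack")          # 1177.6 GB/s, ~1.2 TB/s
```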

SK hynix Presents Extensive AI Memory Lineup at Expanded FMS 2024

SK hynix has returned to Santa Clara, California to present its full array of groundbreaking AI memory technologies at FMS: the Future of Memory and Storage (FMS) 2024 from August 6-8. Previously known as Flash Memory Summit, the conference changed its name to reflect its broader focus on all types of memory and storage products amid growing interest in AI. Bringing together industry leaders, customers, and IT professionals, FMS 2024 covers the latest trends and innovations shaping the memory industry.

Participating in the event under the slogan "Memory, The Power of AI," SK hynix is showcasing its outstanding memory capabilities through a keynote presentation, multiple technology sessions, and product exhibits.

Samsung's 8-layer HBM3E Chips Pass NVIDIA's Tests

Samsung Electronics has achieved a significant milestone in its pursuit of supplying advanced memory chips for AI systems. Its latest fifth-generation high-bandwidth memory (HBM) chips, known as HBM3E, have finally passed all of NVIDIA's tests. This approval will help Samsung catch up with competitors SK Hynix and Micron in the race to supply HBM memory chips to NVIDIA. While a supply deal hasn't been finalized yet, deliveries are expected to start in late 2024.

However, it's worth noting that Samsung passed NVIDIA's tests only for the eight-layer HBM3E chips, while the more advanced twelve-layer version is still struggling to pass them. Both Samsung and NVIDIA declined to comment on these developments. Industry expert Dylan Patel notes that while Samsung is making progress, it is still behind SK Hynix, which is already preparing to ship its own twelve-layer HBM3E chips.

Samsung Electronics Announces Results for Second Quarter of 2024

Samsung Electronics today reported financial results for the second quarter ended June 30, 2024. The Company posted KRW 74.07 trillion in consolidated revenue and operating profit of KRW 10.44 trillion as favorable memory market conditions drove higher average selling prices (ASPs), while robust sales of OLED panels also contributed to the results.

Memory Market Continues To Recover; Solid Second Half Outlook Centered on Server Demand
The DS Division posted KRW 28.56 trillion in consolidated revenue and KRW 6.45 trillion in operating profit for the second quarter. Driven by strong demand for HBM as well as conventional DRAM and server SSDs, the memory market as a whole continued its recovery. This increased demand is a result of continued AI investments by cloud service providers and growing demand from businesses for AI on their on-premise servers.

Samsung's HBM3 Chips Approved by NVIDIA for Limited Use

Samsung Electronics' latest high bandwidth memory (HBM) chips have reportedly passed NVIDIA's suitability tests, according to Reuters. This development comes two months after initial reports suggested the chips had failed due to heat and power consumption issues. Despite this approval, NVIDIA plans to use Samsung's memory chips only in its H20 GPUs, a less advanced version of the H100 processors designed for the Chinese market to comply with US export restrictions.

The future of Samsung's HBM3 chips in NVIDIA's other GPU models remains uncertain, with potential additional testing required. Reuters also reported that Samsung's upcoming fifth-generation HBM3E chips are still undergoing NVIDIA's evaluation process. When approached for comment, neither company responded to Reuters. It's worth noting that Samsung previously denied the initial claims of chip failure.

AI Startup Etched Unveils Transformer ASIC Claiming 20x Speed-up Over NVIDIA H100

A new startup emerged out of stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process "Transformers." The transformer is an architecture for designing deep learning models developed by Google and is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic Claude, Google Gemini, and Meta's Llama family. Etched set out to create an ASIC that processes only transformer models, resulting in a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude: where a server with eight NVIDIA H100 GPUs pushes Llama-3 70B at 25,000 tokens per second and the latest eight-GPU B200 "Blackwell" configuration pushes 43,000 tokens per second, an eight-chip Sohu server manages to output 500,000 tokens per second.
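The headline multiples follow directly from the claimed token rates; a quick sketch:

```python
# Throughput ratios implied by Etched's Llama-3 70B serving claims.
claims = {"8x H100": 25_000, "8x B200": 43_000, "8x Sohu": 500_000}  # tokens/s

sohu = claims["8x Sohu"]
for system in ("8x H100", "8x B200"):
    print(f"Sohu vs {system}: {sohu / claims[system]:.1f}x")
# -> 20.0x over H100; ~11.6x over B200 (rounded to ~10x in the article)
```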

Why is this important? Not only does the ASIC outperform Hopper by 20x and Blackwell by 10x, it also serves so many tokens per second that it enables an entirely new fleet of AI applications requiring real-time output. The Sohu architecture is so efficient that 90% of its FLOPS can be utilized, while traditional GPUs typically achieve 30-40% FLOPS utilization. That gap translates into inefficiency and wasted power, which Etched hopes to solve by building an accelerator dedicated to powering transformers (the "T" in GPT) at massive scale. Given that frontier model development costs more than one billion US dollars, and hardware costs are measured in tens of billions of US dollars, an accelerator dedicated to a specific application can help advance AI faster. AI researchers often say that "scale is all you need" (echoing the legendary "Attention Is All You Need" paper), and Etched wants to build on that.

SK hynix Showcases Its New AI Memory Solutions at HPE Discover 2024

SK hynix has returned to Las Vegas to showcase its leading AI memory solutions at HPE Discover 2024, Hewlett Packard Enterprise's (HPE) annual technology conference. Held from June 17-20, HPE Discover 2024 features a packed schedule with more than 150 live demonstrations, as well as technical sessions, exhibitions, and more. This year, attendees can also benefit from three new curated programs on edge computing and networking, hybrid cloud technology, and AI. Under the slogan "Memory, The Power of AI," SK hynix is displaying its latest memory solutions at the event including those supplied to HPE. The company is also taking advantage of the numerous networking opportunities to strengthen its relationship with the host company and its other partners.

The World's Leading Memory Solutions Driving AI
SK hynix's booth at HPE Discover 2024 consists of three product sections and a demonstration zone which showcase the unprecedented capabilities of its AI memory solutions. The first section features the company's groundbreaking memory solutions for AI, including HBM solutions. In particular, the industry-leading HBM3E has emerged as a core product to meet the growing demands of AI systems due to its exceptional processing speed, capacity, and heat dissipation. A key solution from the company's CXL lineup, CXL Memory Module-DDR5 (CMM-DDR5), is also on display in this section. In the AI era where high performance and capacity are vital, CMM-DDR5 has gained attention for its ability to expand system bandwidth by up to 50% and capacity by up to 100% compared to systems only equipped with DDR5 DRAM.
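To illustrate what those percentages mean at the system level, here is a minimal Python sketch; the DDR5-only baseline is hypothetical (a 12-channel DDR5-6400 server), used purely for illustration:

```python
# Illustrative effect of CMM-DDR5 expansion on a DDR5-only system, using
# the up-to-50% bandwidth and up-to-100% capacity gains SK hynix cites.
# Baseline figures are hypothetical: 12 channels of DDR5-6400, 64 GB each.
channels = 12
channel_bw_gbs = 6.4 * 8                  # DDR5-6400: 6400 MT/s x 8 B = 51.2 GB/s
baseline_bw = channels * channel_bw_gbs   # 614.4 GB/s
baseline_cap = channels * 64              # 768 GB

print(f"Bandwidth: {baseline_bw:.1f} -> up to {baseline_bw * 1.5:.1f} GB/s")
print(f"Capacity:  {baseline_cap} -> up to {baseline_cap * 2} GB")
```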

SK hynix Showcases Its Next-Gen Solutions at Computex 2024

SK hynix presented its leading AI memory solutions at COMPUTEX Taipei 2024 from June 4-7. As one of Asia's premier IT shows, COMPUTEX Taipei 2024 welcomed around 1,500 global participants including tech companies, venture capitalists, and accelerators under the theme "Connecting AI". Making its debut at the event, SK hynix underlined its position as a first mover and leading AI memory provider through its lineup of next-generation products.

"Connecting AI" With the Industry's Finest AI Memory Solutions
Themed "Memory, The Power of AI," SK hynix's booth featured its advanced AI server solutions, groundbreaking technologies for on-device AI PCs, and outstanding consumer SSD products. HBM3E, the fifth generation of HBM1, was among the AI server solutions on display. Offering industry-leading data processing speeds of 1.18 terabytes (TB) per second, vast capacity, and advanced heat dissipation capability, HBM3E is optimized to meet the requirements of AI servers and other applications. Another technology which has become crucial for AI servers is CXL as it can increase system bandwidth and processing capacity. SK hynix highlighted the strength of its CXL portfolio by presenting its CXL Memory Module-DDR5 (CMM-DDR5), which significantly expands system bandwidth and capacity compared to systems only equipped with DDR5. Other AI server solutions on display included the server DRAM products DDR5 RDIMM and MCR DIMM. In particular, SK hynix showcased its tall 128-gigabyte (GB) MCR DIMM for the first time at an exhibition.

Details Revealed about SK Hynix HBM4E: Computing and Caching Features Integrated Directly

SK Hynix, the leader in HBM3E memory, has now shared more details about HBM4E. According to fresh reports from Wccftech and ET News, SK Hynix plans to build an HBM memory type that integrates computing, caching, and network memory functions within the same package, setting it apart from competitors. The idea is still in its early stages, but SK Hynix has begun gathering the design information needed to support these goals. The reports say SK Hynix wants to lay the groundwork for a versatile HBM with its upcoming HBM4 design: the company reportedly plans to include an on-board memory controller, which would enable new computing abilities in its seventh-generation HBM4E memory.

With SK Hynix's method, everything is unified as a single unit. This not only makes data transfers faster, because there is less distance between parts, but also more energy-efficient. Back in April, SK Hynix announced it has been working with TSMC to produce the next generation of HBM and to improve how logic chips and HBM work together through advanced packaging. In late May, SK Hynix disclosed yield details for HBM3E for the first time, with the memory giant reporting that it had cut the time needed for mass production of HBM3E chips by 50% while closing in on its target yield of 80%. The company plans to keep developing HBM4, which is expected to enter mass production in 2026.

Samsung Could Start 1nm Mass Production Sooner Than Expected

Samsung's Foundry business is set to announce its technology roadmap and plans to strengthen the foundry ecosystem at the Foundry and SAFE Forum in Silicon Valley from June 12 to 13. Notably, Samsung is expected to pull in its 1 nm mass production plan, originally scheduled for 2027, to 2026. This move could come as a surprise given recent rumors (denied by Samsung) about HBM3 and HBM3E chips running too hot and failing NVIDIA's validation.

Previously, Samsung became the first foundry in the world to mass-produce 3 nm wafers, in June 2022. The company plans to start mass production of its second-generation 3 nm process in 2024 and its 2 nm process in 2025. Speculation suggests Samsung may integrate these nodes and potentially begin mass-producing 2 nm chips as early as the second half of 2024. In comparison, rival TSMC aims to reach the A16 node (1.6 nm) in 2027 and start mass production of its 1.4 nm process around 2027-2028.

NVIDIA Reportedly Having Issues with Samsung's HBM3 Chips Running Too Hot

According to Reuters, NVIDIA is having some major issues with Samsung's HBM3 chips, as NVIDIA hasn't managed to finalise its validation of the chips. Reuters cites multiple sources familiar with the matter, and if they are correct, Samsung is having some serious issues with its HBM3 chips. Not only do the chips run hot, which is a big issue in itself given that NVIDIA already struggles to cool some of its higher-end products, but the power consumption is apparently not where it should be either. According to Reuters' sources, Samsung has been trying to get its HBM3 and HBM3E parts validated by NVIDIA since sometime in 2023, which suggests that there have been issues for at least six months, if not longer.

The sources claim there are issues with both the 8- and 12-layer stacks of HBM3E parts from Samsung, suggesting that NVIDIA might only be able to source parts from Micron and SK Hynix for now; the latter has been supplying HBM3 chips to NVIDIA since the middle of 2022 and HBM3E chips since March of this year. It's unclear whether this is a production issue at Samsung's DRAM fabs, a packaging-related issue, or something else entirely. The Reuters piece goes on to speculate that Samsung hasn't had enough time to develop its HBM parts compared to its competitors and that the product was rushed, but Samsung told the publication that it is a matter of customising the product for its customers' needs, saying it is in "the process of optimising its products through close collaboration with customers", without naming the customer(s). Samsung issued a further statement saying that "claims of failing due to heat and power consumption are not true" and that testing was going as expected.

SK hynix CEO Says HBM from 2025 Production Almost Sold Out

SK hynix held a press conference unveiling its vision and strategy for the AI era today at its headquarters in Icheon, Gyeonggi Province, to share the details of its investment plans for the M15X fab in Cheongju and the Yongin Semiconductor Cluster in Korea and the advanced packaging facilities in Indiana, U.S.

The event, hosted by the Chief Executive Officer Kwak Noh-Jung three years before the May 2027 completion of the first fab in the Yongin Cluster, was attended by key executives including the Head of AI Infra Justin (Ju-Seon) Kim, Head of DRAM Development Kim Jonghwan, Head of the N-S Committee Ahn Hyun, Head of Manufacturing Technology Kim Yeongsik, Head of Package & Test Choi Woojin, Head of Corporate Strategy & Planning Ryu Byung Hoon, and the Chief Financial Officer Kim Woo Hyun.

SK hynix Strengthens AI Memory Leadership & Partnership With Host at the TSMC 2024 Tech Symposium

SK hynix showcased its next-generation technologies and strengthened key partnerships at the TSMC 2024 Technology Symposium held in Santa Clara, California on April 24. At the event, the company displayed its industry-leading HBM AI memory solutions and highlighted its collaboration with TSMC involving the host's CoWoS advanced packaging technology.

TSMC, a global semiconductor foundry, invites its major partners to this annual conference in the first half of each year so they can share their new products and technologies. Attending the event under the slogan "Memory, the Power of AI," SK hynix received significant attention for presenting the industry's most powerful AI memory solution, HBM3E. The product has recently demonstrated industry-leading performance, achieving input/output (I/O) transfer speeds of up to 10 gigabits per second (Gbps) in an AI system during a performance validation evaluation.

SK Hynix Announces 1Q24 Financial Results

SK hynix Inc. announced today that it recorded 12.43 trillion won in revenues, 2.886 trillion won in operating profit (an operating margin of 23%), and 1.917 trillion won in net profit (a net margin of 15%) in the first quarter. With revenues marking an all-time high for a first quarter and operating profit the second-highest ever, behind only the first quarter of 2018, SK hynix believes it has entered a clear rebound phase following a prolonged downturn.

The company said that an increase in sales of AI server products, backed by its leadership in AI memory technology including HBM, and continued efforts to prioritize profitability led to a 734% on-quarter jump in operating profit. With the sales ratio of eSSD, a premium product, on the rise and average selling prices climbing, the NAND business also achieved a meaningful turnaround in the same period.
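The margins, and the prior quarter's operating profit implied by the 734% jump, can be checked from the reported figures; a quick sketch (the 4Q23 number is derived here, not taken from the release):

```python
# Sanity-check SK hynix's 1Q24 margins (figures in trillion won).
revenue, op_profit, net_profit = 12.43, 2.886, 1.917

print(f"Operating margin: {op_profit / revenue:.0%}")   # 23%
print(f"Net margin:       {net_profit / revenue:.0%}")  # 15%

# A 734% on-quarter jump means 1Q24 = 8.34x the prior quarter (derived).
print(f"Implied 4Q23 operating profit: ~{op_profit / 8.34:.2f} trillion won")
```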

Samsung Signs $3 Billion HBM3E 12H Supply Deal with AMD

Korean media reports that Samsung Electronics has signed a 4.134 trillion won ($3 billion) agreement with AMD to supply 12-high HBM3E stacks. AMD uses HBM stacks in its AI and HPC accelerators based on its CDNA architecture. This deal is significant, as it gives analysts some idea of the volume of AI GPUs AMD is preparing to push into the market, provided they can estimate what share of an AI GPU's bill of materials the memory stacks represent. AMD has probably negotiated a good price for Samsung's HBM3E 12H stacks, given that rival NVIDIA almost exclusively uses HBM3E made by SK Hynix.

The AI GPU market is expected to heat up with the ramp of NVIDIA's "Hopper" H200 series, the advent of "Blackwell," AMD's MI350X CDNA3, and Intel's Gaudi 3 generative AI accelerator. Samsung debuted its HBM3E 12H memory in February 2024. Each stack features 12 layers, a 50% increase over the first generation of HBM3E, and offers a density of 36 GB per stack. An AMD CDNA3 chip with 8 such stacks would have 288 GB of memory on package. AMD is expected to launch the MI350X in the second half of 2024. The star attraction with this chip is its refreshed GPU tiles built on the TSMC 4 nm EUV foundry node. This seems like the ideal product for AMD to debut HBM3E 12H on.
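Two bits of back-of-envelope arithmetic follow from the reported figures: the won-to-dollar rate implied by the deal, and the per-accelerator memory that eight 12-high stacks would provide. A minimal sketch:

```python
# Implied exchange rate and per-accelerator memory from the reported deal.
deal_krw = 4.134e12   # 4.134 trillion won
deal_usd = 3e9        # $3 billion
stack_gb = 36         # HBM3E 12H capacity per stack
stacks = 8            # stacks per CDNA3 accelerator (per the article)

print(f"Implied rate: ~{deal_krw / deal_usd:,.0f} KRW/USD")   # ~1,378
print(f"Memory per accelerator: {stack_gb * stacks} GB")      # 288 GB
```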

SK Hynix Plans a $4 Billion Chip Packaging Facility in Indiana

SK Hynix is planning a large $4 billion chip-packaging and testing facility in Indiana, USA. The company is still in the planning stage: "[the company] is reviewing its advanced chip packaging investment in the US, but hasn't made a final decision yet," a company spokesperson told the Wall Street Journal. The primary product focus for this plant will be stacked HBM memory destined for the AI GPU and self-driving automobile industries. The plant could also take on other exotic memory types, such as high-density server memory, and perhaps even compute-in-memory. It is expected to start operations in 2028 and create up to 1,000 skilled jobs. SK Hynix is counting on state and federal tax incentives, under government initiatives such as the CHIPS Act, to propel this investment. SK Hynix is a significant supplier of HBM to NVIDIA for its AI GPUs; its HBM3E features in the latest NVIDIA "Blackwell" GPUs.

Samsung Demonstrates New CXL Capabilities and Introduces New Memory Module for Scalable, Composable Disaggregated Infrastructure

Samsung Electronics, a world leader in advanced semiconductor technology, unveiled the expansion of its Compute Express Link (CXL) memory module portfolio and showcased its latest HBM3E technology, reinforcing leadership in high-performance and high-capacity solutions for AI applications.

In a keynote address to a packed crowd at Santa Clara's Computer History Museum, Jin-Hyeok Choi, Corporate Executive Vice President, Device Solutions Research America - Memory at Samsung Electronics, along with SangJoon Hwang, Corporate Executive Vice President, Head of DRAM Product and Technology at Samsung Electronics, took the stage to introduce new memory solutions and discuss how Samsung is leading HBM and CXL innovations in the AI era. Joining Samsung on stage were Paul Turner, Vice President, Product Team, VCF Division at VMware by Broadcom, and Gunnar Hellekson, Vice President and General Manager at Red Hat, who discussed how their software solutions combined with Samsung's hardware technology are pushing the boundaries of memory innovation.

SK hynix Presents the Future of AI Memory Solutions at NVIDIA GTC 2024

SK hynix is displaying its latest AI memory technologies at NVIDIA's GPU Technology Conference (GTC) 2024 held in San Jose from March 18-21. The annual AI developer conference is proceeding as an in-person event for the first time since the start of the pandemic, welcoming industry officials, tech decision makers, and business leaders. At the event, SK hynix is showcasing new memory solutions for AI and data centers alongside its established products.

Showcasing the Industry's Highest Standard of AI Memory
The AI revolution has continued to pick up pace as AI technologies spread their reach into various industries. In response, SK hynix is developing AI memory solutions capable of handling the vast amounts of data and processing power required by AI. At GTC 2024, the company is displaying some of these products, including its 12-layer HBM3E and Compute Express Link (CXL) solutions, under the slogan "Memory, The Power of AI". HBM3E, the fifth generation of HBM, is the highest-specification DRAM for AI applications on the market. It offers the industry's highest capacity of 36 gigabytes (GB), a processing speed of 1.18 terabytes (TB) per second, and exceptional heat dissipation, making it particularly suitable for AI systems. On March 19, SK hynix announced it had become the first in the industry to mass-produce HBM3E.

SK hynix Unveils Highest-Performing SSD for AI PCs at NVIDIA GTC 2024

SK hynix unveiled a new consumer product based on its latest solid-state drive (SSD), PCB01, which boasts industry-leading performance, at GPU Technology Conference (GTC) 2024. Hosted by NVIDIA in San Jose, California from March 18-21, GTC is one of the world's leading conferences for AI developers. Aimed at on-device AI PCs, PCB01 is a PCIe Gen 5 SSD which recently had its performance and reliability verified by a major global customer. After completing product development in the first half of 2024, SK hynix plans to launch two versions of PCB01 by the end of the year, targeting both major technology companies and general consumers.

Optimized for AI PCs, Capable of Loading LLMs Within One Second
Offering the industry's highest sequential read speed of 14 gigabytes per second (GB/s) and a sequential write speed of 12 GB/s, PCB01 doubles the speed specifications of its previous generation. This enables the loading of LLMs required for AI learning and inference in less than one second. To make on-device AI operational, PC manufacturers create a structure that stores an LLM in the PC's internal storage and quickly transfers the data to DRAM for AI tasks; in this process, the PCB01 inside the PC efficiently supports the loading of LLMs. SK hynix expects these characteristics of its latest SSD to greatly increase the speed and quality of on-device AI.
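The sub-second claim is plausible at the quoted read speed for models that fit in roughly 14 GB; a rough sketch with a hypothetical 7-billion-parameter FP16 model:

```python
# Rough LLM load-time estimate at PCB01's quoted 14 GB/s sequential read.
# The model is hypothetical: 7 billion parameters stored at FP16 (2 bytes each).
params = 7e9
bytes_per_param = 2
read_speed_gbs = 14

model_gb = params * bytes_per_param / 1e9     # 14 GB
print(f"Model size: {model_gb:.0f} GB, load time: ~{model_gb / read_speed_gbs:.1f} s")
```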