News Posts matching #HBM

NVIDIA B200 "Blackwell" Records 2.2x Performance Improvement Over its "Hopper" Predecessor

We know that NVIDIA's latest "Blackwell" GPUs are fast, but how much faster are they than the previous-generation "Hopper"? Thanks to the latest MLPerf Training v4.1 results, NVIDIA's HGX B200 Blackwell platform has demonstrated massive performance gains, measuring up to a 2.2x improvement per GPU compared to the Hopper-based HGX H200. The latest results, verified by MLCommons, reveal impressive achievements in large language model (LLM) training. The Blackwell architecture, featuring HBM3e high-bandwidth memory and fifth-generation NVLink interconnect technology, achieved double the performance per GPU for GPT-3 pre-training and a 2.2x boost for Llama 2 70B fine-tuning compared to the previous Hopper generation. Each benchmark system incorporated eight Blackwell GPUs operating at a 1,000 W TDP, connected via NVLink Switch for scale-up.

The network infrastructure utilized NVIDIA ConnectX-7 SuperNICs and Quantum-2 InfiniBand switches, enabling high-speed node-to-node communication for distributed training workloads. While previous Hopper-based systems required 256 GPUs to optimize performance for the GPT-3 175B benchmark, Blackwell accomplished the same task with just 64 GPUs, leveraging its larger HBM3e memory capacity and bandwidth. One thing to look out for is the upcoming GB200 NVL72 system, which promises even greater gains beyond the 2.2x figure. It features expanded NVLink domains, higher memory bandwidth, and tight integration with NVIDIA Grace CPUs, complemented by ConnectX-8 SuperNIC and Quantum-X800 switch technologies. With faster switching and better data movement through Grace-Blackwell integration, we could see further software optimization from NVIDIA to push the performance envelope.
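A per-GPU figure like the 2.2x above comes from normalizing time-to-train by accelerator count. Below is a minimal sketch of that normalization; the GPU counts come from the article, but the time-to-train values are hypothetical placeholders, not official MLPerf submissions:

```python
# Illustrative only: how a per-GPU speedup is derived from MLPerf-style
# results. The GPU counts (256 Hopper vs. 64 Blackwell for GPT-3 175B)
# come from the article; the time-to-train values are hypothetical
# placeholders, NOT official MLPerf numbers.

def per_gpu_speedup(base_gpus, base_minutes, new_gpus, new_minutes):
    """Per-GPU throughput is proportional to 1 / (gpu_count * time_to_train)."""
    return (base_gpus * base_minutes) / (new_gpus * new_minutes)

speedup = per_gpu_speedup(base_gpus=256, base_minutes=100.0,  # hypothetical time
                          new_gpus=64, new_minutes=200.0)     # hypothetical time
print(f"per-GPU speedup: {speedup:.1f}x")  # -> 2.0x with these placeholder times
```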

Samsung Hopes PIM Memory Technology Can Replace HBM in Next-Gen AI Applications

The 8th edition of the Samsung AI Forum was held on November 4th and 5th in Seoul, and among all the presentations and keynote speeches, one piece of information caught our attention. As reported by The Chosun Daily, Samsung is (again) turning its attention to Processing-in-Memory (PIM) technology, in what appears to be the company's latest attempt to keep up with its rival SK Hynix in this area. In 2021, Samsung introduced the world's first HBM-PIM, with the chips showing impressive performance gains (nearly double) while reducing energy consumption by almost 50% on average. PIM technology essentially adds the processor functions necessary for computational tasks directly into the memory, reducing data transfer between the CPU and memory.
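To illustrate the data-movement argument, here is a conceptual sketch (not Samsung's HBM-PIM interface; the vector length and element size are hypothetical) of how computing a reduction inside the memory shrinks bus traffic:

```python
# Conceptual sketch only (not Samsung's HBM-PIM interface): why computing
# inside the memory reduces bus traffic. For a reduction such as a dot
# product over n operands, a conventional system streams every operand to
# the CPU, while a PIM bank computes locally and returns a single result.

ELEM_BYTES = 4    # assume 32-bit operands
n = 1_000_000     # hypothetical vector length resident in one memory bank

conventional_traffic = n * ELEM_BYTES  # every operand crosses the CPU-memory bus
pim_traffic = ELEM_BYTES               # only the reduced result crosses the bus

print(f"conventional: {conventional_traffic / 1e6:.1f} MB over the bus")
print(f"PIM:          {pim_traffic} bytes over the bus")
# The arithmetic is identical either way; only the data movement (and the
# energy spent on it) shrinks, which is where PIM's claimed savings originate.
```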

Now, the company hopes that PIM memory chips could replace HBM in the future, based on the advantages this next-generation memory technology possesses, mainly for artificial intelligence (AI) applications. "AI is transforming our lives at an unprecedented rate, and the question of how to use AI more responsibly is becoming increasingly important," said Samsung Electronics CEO Han Jong-hee in his opening remarks. "Samsung Electronics is committed to fostering a more efficient and sustainable AI ecosystem." During the event, Samsung also highlighted its partnership with AMD, which it reportedly supplies with its fifth-generation HBM, the HBM3E.

NVIDIA CEO Jensen Huang Asks SK hynix to Speed Up HBM4 Delivery by Six Months

SK hynix announced the industry's first 48 GB 16-high HBM3E at the SK AI Summit in Seoul today. During the event, news emerged about new plans for its next-gen memory tech. Reuters and ZDNet Korea reported that NVIDIA CEO Jensen Huang asked SK hynix to speed up its HBM4 delivery by six months. SK Group Chairman Chey Tae-won shared this information at the Summit. The company had earlier said it would deliver HBM4 chips to customers in the second half of 2025.

When ZDNet asked about this sped-up plan, SK hynix President Kwak Noh-Jung gave a careful answer, saying "We will give it a try." A company spokesperson told Reuters that this new schedule would be quicker than first planned, but didn't share more details. In a video interview shown at the Summit, NVIDIA's Jensen Huang highlighted the strong partnership between the companies. He said working with SK hynix has helped NVIDIA go beyond Moore's Law performance gains, and stressed that NVIDIA will keep needing SK hynix's HBM tech for future products. SK hynix plans to supply the latest 12-layer HBM3E to an undisclosed customer this year, and will start sampling the 16-layer HBM3E early next year.

Samsung Electronics Announces Results for Third Quarter of 2024, 7 Percent Revenue Increase

Samsung Electronics today reported financial results for the third quarter ended Sept. 30, 2024. The Company posted KRW 79.1 trillion in consolidated revenue, an increase of 7% from the previous quarter, on the back of the launch effects of new smartphone models and increased sales of high-end memory products. Operating profit declined to KRW 9.18 trillion, largely due to one-off costs, including the provision of incentives in the Device Solutions (DS) Division. The strength of the Korean won against the U.S. dollar resulted in a negative impact on company-wide operating profit of about KRW 0.5 trillion compared to the previous quarter.

In the fourth quarter, while memory demand for mobile and PC may encounter softness, growth in AI will keep demand at robust levels. Against this backdrop, the Company will concentrate on driving sales of High Bandwidth Memory (HBM) and high-density products. The Foundry Business aims to increase order volumes by enhancing advanced process technologies. Samsung Display Corporation (SDC) expects demand for flagship products from major customers to continue, while maintaining a conservative outlook on its performance. The Device eXperience (DX) Division will continue to focus on premium products, but sales are expected to decline slightly compared to the previous quarter.

HBM5 20hi Stack to Adopt Hybrid Bonding Technology, Potentially Transforming Business Models

TrendForce reports that the focus on HBM products in the DRAM industry is increasingly turning attention toward advanced packaging technologies like hybrid bonding. Major HBM manufacturers are considering whether to adopt hybrid bonding for HBM4 16hi stack products but have confirmed plans to implement this technology in the HBM5 20hi stack generation.

Hybrid bonding offers several advantages when compared to the more widely used micro-bumping. Since it does not require bumps, it allows for more stacked layers and can accommodate thicker chips that help address warpage. Hybrid-bonded chips also benefit from faster data transmission and improved heat dissipation.

Global Silicon Wafer Shipments to Remain Soft in 2024 Before Strong Expected Rebound in 2025, SEMI Reports

Global shipments of silicon wafers are projected to decline 2% in 2024 to 12,174 million square inches (MSI), with a strong rebound of 10% delayed until 2025, reaching 13,328 MSI, as wafer demand continues to recover from the downcycle, SEMI reported today in its annual silicon shipment forecast.

Strong silicon wafer shipment growth is expected to continue through 2027 to meet increasing demand related to AI and advanced processing, driving improved fab utilization rate for global semiconductor production capacity. Moreover, new applications in advanced packaging and high-bandwidth memory (HBM) production, which require additional wafers, are contributing to the rising need for silicon wafers. Such applications include temporary or permanent carrier wafers, interposers, device separation into chiplets, and memory/logic array separation.

ASML Reports €7.5 Billion Total Net Sales and €2.1 Billion Net Income in Q3 2024

Today, ASML Holding NV (ASML) has published its 2024 third-quarter results.
  • Q3 total net sales of €7.5 billion, gross margin of 50.8%, net income of €2.1 billion
  • Quarterly net bookings in Q3 of €2.6 billion of which €1.4 billion is EUV
  • ASML expects Q4 2024 total net sales between €8.8 billion and €9.2 billion, and a gross margin between 49% and 50%
  • ASML expects 2024 total net sales of around €28 billion
  • ASML expects 2025 total net sales to be between €30 billion and €35 billion, with a gross margin between 51% and 53%
CEO statement and outlook
"Our third-quarter total net sales came in at €7.5 billion, above our guidance, driven by more DUV and Installed Base Management sales. The gross margin came in at 50.8%, within guidance. While there continue to be strong developments and upside potential in AI, other market segments are taking longer to recover. It now appears the recovery is more gradual than previously expected. This is expected to continue in 2025, which is leading to customer cautiousness. Regarding Logic, the competitive foundry dynamics have resulted in a slower ramp of new nodes at certain customers, leading to several fab push outs and resulting changes in litho demand timing, in particular EUV. In Memory, we see limited capacity additions, with the focus still on technology transitions supporting the HBM and DDR5 AI-related demand."

GIGABYTE Releases Servers with AMD EPYC 9005 Series Processors and AMD Instinct MI325X GPUs

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced support for AMD EPYC 9005 Series processors with the release of new GIGABYTE servers alongside BIOS updates for some existing GIGABYTE servers using the SP5 platform. This first wave of updates covers over 60 servers and motherboards for customers to choose from, delivering exceptional performance with 5th Generation AMD EPYC processors. In addition, with the launch of the AMD Instinct MI325X accelerator, a newly designed GIGABYTE server was created, and it will be showcased at SC24 (Nov. 19-21) in Atlanta.

New GIGABYTE Servers and Updates
To cover all possible workload scenarios, from modular-design servers to edge servers to enterprise-grade motherboards, these new solutions will ship with support for AMD EPYC 9005 Series processors. The XV23-ZX0 is one of the many new solutions, notable for its modularized server design using two AMD EPYC 9005 processors and supporting up to four GPUs and three additional FHFL slots. It also has 2+2 redundant power supplies on the front side for ease of access.

Slowing Demand Growth Constrains Q4 Memory Price Increases

TrendForce's latest findings reveal that weaker consumer demand has persisted through 3Q24, leaving AI servers as the primary driver of memory demand. This dynamic, combined with HBM production displacing conventional DRAM capacity, has led suppliers to maintain a firm stance on contract price hikes.

Smartphone brands remain cautious even as some server OEMs continue to show purchasing momentum. Consequently, TrendForce forecasts that Q4 memory prices will see a significant slowdown in growth, with conventional DRAM expected to increase by only 0-5%. However, benefiting from the rising share of HBM, the average price of overall DRAM is projected to rise 8-13%, a marked deceleration compared to the previous quarter.
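The gap between the two figures is a mix-shift effect: the overall ASP is a revenue-weighted average across product types. A back-of-the-envelope sketch with hypothetical weights (the revenue shares and HBM price change below are assumptions, not TrendForce figures):

```python
# Back-of-the-envelope mix-shift check. The revenue shares and the HBM
# price change below are hypothetical, NOT TrendForce data; only the
# 0-5% conventional DRAM range comes from the article.

def blended_change(mix):
    """mix: iterable of (revenue_share, price_change); shares sum to 1."""
    return sum(share * change for share, change in mix)

mix = [
    (0.30, 0.25),  # HBM: assumed 30% revenue share at ~25% higher pricing
    (0.70, 0.03),  # conventional DRAM: 70% share, +3% (midpoint of 0-5%)
]
print(f"blended DRAM ASP change: {blended_change(mix):+.1%}")  # -> +9.6%, inside 8-13%
```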

Micron Updates Corporate Logo with "Ahead of The Curve" Design

Today, Micron updated its corporate logo with new symbolism. The redesign comes as Micron celebrates over four decades of technological advancement in the semiconductor industry. The new logo features a distinctive silicon color, paying homage to the wafers at the core of Micron's products. Its curved lettering represents the company's ability to stay ahead of industry trends and adapt to rapid technological changes. The design also incorporates vibrant gradient colors inspired by light reflections on the wafers at the heart of Micron's memory and storage products.

This rebranding effort coincides with Micron's expanding role in AI, where memory and storage innovations are increasingly crucial. The company has positioned itself beyond a commodity memory supplier, now offering leadership in solutions for AI data centers, high-performance computing, and AI-enabled devices. The company has come far from its original 64K DRAM in 1981 to HBM3E DRAM today. Micron offers different HBM memory products, graphics memory powering consumer GPUs, CXL memory modules, and DRAM components and modules.

NVIDIA Cancels Dual-Rack NVL36x2 in Favor of Single-Rack NVL72 Compute Monster

NVIDIA has reportedly discontinued its dual-rack GB200 NVL36x2 GPU model, opting to focus on the single-rack GB200 NVL72 and NVL36 models. This shift, revealed by industry analyst Ming-Chi Kuo, aims to simplify NVIDIA's offerings in the AI and HPC markets. The decision was influenced by major clients like Microsoft, who prefer the NVL72's improved space efficiency and potential for enhanced inference performance. While both models perform similarly in AI large language model (LLM) training, the NVL72 is expected to excel in non-parallelizable inference tasks. As a reminder, the NVL72 features 36 Grace CPUs delivering 2,592 Arm Neoverse V2 cores, with 17 TB of LPDDR5X memory offering 18.4 TB/s of aggregate bandwidth. Additionally, it includes 72 Blackwell GB200 SXM GPUs with a massive 13.5 TB of HBM3e combined, running at 576 TB/s of aggregate bandwidth.
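Those aggregates follow directly from the per-unit figures. A quick arithmetic check (the 8 TB/s per-GPU HBM3e bandwidth is NVIDIA's published Blackwell number; the per-GPU capacity is simply derived from the 13.5 TB total):

```python
# Checking the quoted NVL72 aggregates against per-unit figures.
grace_cpus, cores_per_grace = 36, 72
gpus = 72
hbm_total_tb = 13.5
hbm_bw_per_gpu_tbps = 8.0   # published per-GPU HBM3e bandwidth for Blackwell

print(grace_cpus * cores_per_grace)   # 2592 Arm Neoverse V2 cores
print(hbm_total_tb / gpus * 1000)     # ~187.5 GB of HBM3e per GPU
print(gpus * hbm_bw_per_gpu_tbps)     # 576.0 TB/s aggregate HBM bandwidth
```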

However, this shift presents significant challenges. The NVL72's power consumption of around 120 kW far exceeds typical data center capabilities, potentially limiting its immediate widespread adoption. The discontinuation of the NVL36x2 has also sparked concerns about NVIDIA's execution capabilities and may disrupt the supply chain for assembly and cooling solutions. Despite these hurdles, industry experts view this as a pragmatic approach to product planning in the dynamic AI landscape. While some customers may be disappointed by the dual-rack model's cancellation, NVIDIA's long-term outlook in the AI technology market remains strong. The company continues to work with clients and listen to their needs as it positions itself as a leader in high-performance computing solutions.

Intel Launches Gaudi 3 AI Accelerator and P-Core Xeon 6 CPU

As AI continues to revolutionize industries, enterprises are increasingly in need of infrastructure that is both cost-effective and available for rapid development and deployment. To meet this demand head-on, Intel today launched Xeon 6 with Performance-cores (P-cores) and Gaudi 3 AI accelerators, bolstering the company's commitment to deliver powerful AI systems with optimal performance per watt and lower total cost of ownership (TCO).

"Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools," said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group. "With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency and security."

SK hynix Applies CXL Optimization Solution to Linux

SK hynix Inc. announced today that the key features of its Heterogeneous Memory Software Development Kit (HMSDK) are now available on Linux, the world's largest open-source operating system. HMSDK is SK hynix's proprietary software for optimizing the operation of Compute Express Link (CXL), which is gaining attention as a next-generation AI memory technology along with High Bandwidth Memory (HBM). Having received global recognition for HMSDK's performance, SK hynix is now integrating it with Linux. This accomplishment marks a significant milestone, highlighting the company's competitiveness in software alongside the recognition for its high-performance memory hardware such as HBM.

In the future, developers around the world working on Linux will be able to use SK hynix's technology as the industry standard for CXL memory, putting the company in an advantageous position for global collaboration on next-generation memory. SK hynix's HMSDK enhances memory bandwidth by over 30% without modifying existing applications. It achieves this by selectively allocating memory based on the bandwidth of existing memory and expanded CXL memory. Additionally, the software improves performance by more than 12% over conventional systems through optimization based on access frequency, a feature which relocates frequently accessed data to faster memory.
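A minimal sketch of those two mechanisms follows; this is not SK hynix's actual HMSDK code, and the tier bandwidths and hot-page threshold are hypothetical:

```python
# Minimal sketch of the two mechanisms described above, NOT SK hynix's code.
# Tier bandwidths and the hot-page threshold are hypothetical.

from collections import Counter
from itertools import chain

TIERS = {"dram": 300, "cxl": 90}  # GB/s per tier (illustrative)

# (1) Bandwidth-aware allocation: interleave pages in proportion to tier
# bandwidth. 300:90 reduces to 10:3, so 10 of every 13 pages go to DRAM.
weights = {tier: bw // 30 for tier, bw in TIERS.items()}
PATTERN = list(chain.from_iterable([tier] * w for tier, w in weights.items()))

placement = {}          # page_id -> tier
access_counts = Counter()
HOT_THRESHOLD = 1000    # accesses before a page counts as "hot" (arbitrary)

def allocate(page_id: int) -> str:
    placement[page_id] = PATTERN[page_id % len(PATTERN)]
    return placement[page_id]

def on_access(page_id: int) -> None:
    """(2) Access-frequency tiering: promote hot CXL pages to faster DRAM."""
    access_counts[page_id] += 1
    if placement.get(page_id) == "cxl" and access_counts[page_id] >= HOT_THRESHOLD:
        placement[page_id] = "dram"  # a real kernel would migrate the page here
```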

Micron is Buying More Production Plants in Taiwan to Expand HBM Memory Production

Micron has been on a spending spree in Taiwan, where the company has been looking for new facilities. Micron has agreed to buy no fewer than three LCD plants from display maker AUO, located in the central Taiwanese city of Taichung, for NT$8.1 billion (~US$253.3 million). Initially, Micron was interested in buying another plant in Tainan from Innolux, but was turned down, so it turned to AUO for the purchases. Earlier this year, TSMC spent NT$17 billion (~US$531.6 million) to buy a similar facility from Innolux, but it seems that Innolux wasn't willing to part with any more facilities this year.

The three AUO plants are said to have produced LCD colour filters, and two of them had already ceased production earlier this month. However, it appears that the plant still in operation will be leased back by AUO, which will continue producing colour filters in the factory. The larger plant measures 146,033 square metres, the smaller 32,500 square metres. As for Micron's plans, not much is known at this point, but the company has announced that it is planning to use at least some of the space for front-end wafer testing, and that the new plants will support its current and upcoming DRAM production fabs in Taichung and Taoyuan, which the company is currently expanding. Market sources in Taiwan are quoted as saying that the focus will be on HBM memory, due to the high demand from various AI products on the market, not least from NVIDIA. The deal is expected to be finalised by the end of the year.

Samsung's 8-layer HBM3E Chips Pass NVIDIA's Tests

Samsung Electronics has achieved a significant milestone in its pursuit of supplying advanced memory chips for AI systems. Its latest fifth-generation high-bandwidth memory (HBM) chips, known as HBM3E, have finally passed all of NVIDIA's tests. This approval will help Samsung catch up with competitors SK Hynix and Micron in the race to provide HBM memory chips to NVIDIA. While a supply deal hasn't been finalized yet, deliveries are expected to start in late 2024.

However, it's worth noting that Samsung passed NVIDIA's tests for the eight-layer HBM3E chips, while the more advanced twelve-layer version of the HBM3E chips is still struggling to pass those tests. Both Samsung and NVIDIA declined to comment on these developments. Industry expert Dylan Patel notes that while Samsung is making progress, it is still behind SK Hynix, which is already preparing to ship its own twelve-layer HBM3E chips.

NVIDIA's New B200A Targets OEM Customers; High-End GPU Shipments Expected to Grow 55% in 2025

Despite recent rumors speculating on NVIDIA's supposed cancellation of the B100 in favor of the B200A, TrendForce reports that NVIDIA is still on track to launch both the B100 and B200 in 2H24, as it aims to target CSP customers. Additionally, a scaled-down B200A is planned for other enterprise clients, focusing on edge AI applications.

TrendForce reports that NVIDIA will prioritize the B100 and B200 for CSP customers with higher demand due to the tight production capacity of CoWoS-L. Shipments are expected to commence after 3Q24. In light of yield and mass production challenges with CoWoS-L, NVIDIA is also planning the B200A for other enterprise clients, utilizing CoWoS-S packaging technology.

NEO Semiconductor Announces 3D X-AI Chip as HBM Successor

NEO Semiconductor, a leading developer of innovative technologies for 3D NAND flash memory and 3D DRAM, announced today the development of its 3D X-AI chip technology, targeted to replace the current DRAM chips inside high bandwidth memory (HBM) to solve data bus bottlenecks by enabling AI processing in 3D DRAM. 3D X-AI can reduce the huge amount of data transferred between HBM and GPUs during AI workloads. NEO's innovation is set to revolutionize the performance, power consumption, and cost of AI Chips for AI applications like generative AI.

AI Chips with NEO's 3D X-AI technology can achieve:
  • 100X Performance Acceleration: contains 8,000 neuron circuits to perform AI processing in 3D memory.
  • 99% Power Reduction: minimizes the requirement of transferring data to the GPU for calculation, reducing power consumption and heat generation by the data bus.
  • 8X Memory Density: contains 300 memory layers, allowing HBM to store larger AI models.

Ampere Announces 512-Core AmpereOne Aurora CPU for AI Computing

Ampere has announced a significant update to its product roadmap, highlighting the upcoming 512-core AmpereOne Aurora processor. This new chip is specifically designed to address the growing demands of cloud-native AI computing.

The newly announced 512-core AmpereOne Aurora processor integrates AI acceleration and on-chip High Bandwidth Memory (HBM), promising three times the performance per rack compared to current AmpereOne processors. Aurora is designed to handle both AI training and inference workloads, indicating Ampere's commitment to becoming a major player in the AI computing space.

SK hynix Board Approves Yongin Semiconductor Cluster Investment Plan

SK hynix Inc. announced today that it has decided to invest about 9.4 trillion won in building the first fab and business facilities of the Yongin Semiconductor Cluster, following a board resolution on the 26th. SK hynix plans to start construction of the first fab in the Yongin cluster in March next year and complete it in May 2027, having received investment approval from the board of directors beforehand. The company will make every effort to build the fab to lay the foundation for its future growth and respond to the rapidly increasing demand for AI memory semiconductors.

The Yongin Cluster, which will be built on a 4.15 million square meter site in Wonsam-myeon, Yongin, Gyeonggi Province, is currently under site preparation and infrastructure construction. SK hynix has decided to build four state-of-the-art fabs that will produce next-generation semiconductors, and a semiconductor cooperation complex with more than 50 small local companies. After the construction of the 1st fab, the company aims to complete the remaining three fabs sequentially to grow the Yongin Cluster into a "Global AI semiconductor production base."

Memory Industry Revenue Expected to Reach Record High in 2025 Due to Increasing Average Prices and the Rise of HBM and QLC

TrendForce's latest report on the memory industry reveals that DRAM and NAND Flash revenues are expected to see significant increases of 75% and 77%, respectively, in 2024, driven by increased bit demand, an improved supply-demand structure, and the rise of high-value products like HBM.

Furthermore, industry revenues are projected to continue growing in 2025, with DRAM expected to increase by 51% and NAND Flash by 29%, reaching record highs. This growth is anticipated to revive capital expenditures and boost demand for upstream raw materials, although it will also increase cost pressure for memory buyers.

Micron Technology, Inc. Reports Results for the Third Quarter of Fiscal 2024

Micron Technology, Inc. (Nasdaq: MU) today announced results for its third quarter of fiscal 2024, which ended May 30, 2024.

Fiscal Q3 2024 highlights
  • Revenue of $6.81 billion versus $5.82 billion for the prior quarter and $3.75 billion for the same period last year
  • GAAP net income of $332 million, or $0.30 per diluted share
  • Non-GAAP net income of $702 million, or $0.62 per diluted share
  • Operating cash flow of $2.48 billion versus $1.22 billion for the prior quarter and $24 million for the same period last year
"Robust AI demand and strong execution enabled Micron to drive 17% sequential revenue growth, exceeding our guidance range in fiscal Q3," said Sanjay Mehrotra, President and CEO of Micron Technology. "We are gaining share in high-margin products like High Bandwidth Memory (HBM), and our data center SSD revenue hit a record high, demonstrating the strength of our AI product portfolio across DRAM and NAND. We are excited about the expanding AI-driven opportunities ahead, and are well positioned to deliver a substantial revenue record in fiscal 2025."

DRAM Prices Expected to Increase by 8-13% in Q3

TrendForce reports that a recovery in demand for general servers—coupled with an increased production share of HBM by DRAM suppliers—has led suppliers to maintain their stance on hiking prices. As a result, the ASP of DRAM in the third quarter is expected to continue rising, with an anticipated increase of 8-13%. The price of conventional DRAM is expected to rise by 5-10%, showing a slight contraction compared to the increase in the second quarter.

TrendForce notes that buyers were more conservative about restocking in the second quarter, and inventory levels on both the supplier and buyer sides did not show significant changes. Looking ahead to the third quarter, there is still room for inventory replenishment for smartphones and CSPs, and the peak season for production is soon to commence. Consequently, it is expected that smartphones and servers will drive an increase in memory shipments in the third quarter.

AI Startup Etched Unveils Transformer ASIC Claiming 20x Speed-up Over NVIDIA H100

A new startup emerged out of stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process "Transformers." The transformer is an architecture for designing deep learning models developed by Google and is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic Claude, Google Gemini, and Meta's Llama family. Etched set out to create an ASIC that processes only transformer models, resulting in a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude: where a server configuration with eight NVIDIA H100 GPUs pushes Llama-3 70B models at 25,000 tokens per second, and the latest eight-GPU B200 "Blackwell" cluster pushes 43,000 tokens/s, eight Sohu chips manage to output 500,000 tokens per second.

Why is this important? Not only does the ASIC outperform Hopper by 20x and Blackwell by 10x, but it also serves so many tokens per second that it enables an entirely new fleet of AI applications requiring real-time output. The Sohu architecture is so efficient that 90% of its FLOPS can be used, while traditional GPUs reach a 30-40% FLOPS utilization rate. That gap translates into inefficiency and wasted power, which Etched hopes to solve by building an accelerator dedicated to powering transformers (the "T" in GPT) at massive scale. Given that frontier model development costs more than one billion US dollars, and hardware costs are measured in tens of billions of US dollars, having an accelerator dedicated to powering a specific application can help advance AI faster. AI researchers often say that "scale is all you need" (echoing the legendary "Attention Is All You Need" paper), and Etched wants to build on that.
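The headline ratios can be reproduced from the figures above. Note that, assuming comparable peak FLOPS per chip (an assumption, since Etched has not published Sohu's peak throughput), the utilization gap alone explains only part of the claimed gain; the rest would have to come from the specialized transformer datapath:

```python
# Reproducing the claims from the throughput figures quoted above
# (Llama-3 70B tokens/s for eight-chip server configurations).
sohu, b200_cluster, h100_cluster = 500_000, 43_000, 25_000

print(f"vs. 8x H100: {sohu / h100_cluster:.0f}x")  # -> 20x ("Hopper by 20x")
print(f"vs. 8x B200: {sohu / b200_cluster:.1f}x")  # -> 11.6x (roughly 10x)

# Assuming comparable peak FLOPS per chip, 90% vs. 30-40% utilization
# accounts for only a fraction of that gap:
print(f"utilization factor: {0.90 / 0.35:.1f}x")   # -> ~2.6x at the 35% midpoint
```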

Samsung, SK Hynix, and Micron Compete for GDDR7 Dominance

Competition among Samsung, SK Hynix, and Micron is intensifying, with a focus on enhancing processing speed and efficiency in graphics DRAM (GDDR) for AI accelerators and cryptocurrency mining. Compared with High Bandwidth Memory (HBM), GDDR7 offers fast data processing at a relatively low price. Since NVIDIA is expected to use next-generation GDDR7 with its GeForce RTX 50-series "Blackwell" GPUs, competition will likely be as strong as the demand. This is evident, for example, in the pace of new GDDR7 releases over the past two years.

In July 2023, Samsung Electronics developed the industry's first 32 Gbps GDDR7 DRAM, capable of processing up to 1.5 TB of data per second, a 1.4-times speed increase and 20% better energy efficiency compared to GDDR6. In February 2024, Samsung demonstrated its first GDDR7 DRAM with a pin rate of 37 Gbps. On June 4, Micron launched its new GDDR7 at Computex 2024, with speeds up to 32 Gbps, a 60% increase in bandwidth, and a 50% improvement in energy efficiency over the previous generation. Shortly after, SK Hynix introduced a 40 Gbps GDDR7, showcased again at Computex 2024, doubling the previous generation's per-chip bandwidth to 160 GB per second and improving energy efficiency by 40%.
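For reference, the per-pin rates convert to bandwidth as rate × bus width / 8. A short worked example follows; the 384-bit bus behind the 1.5 TB/s figure is an assumption based on standard high-end graphics card configurations, as GDDR7 devices themselves use a 32-bit interface:

```python
# Converting per-pin data rates into bandwidth: GB/s = Gbps * bus_width / 8.

def bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(32, 32))   # 128.0 GB/s per 32-bit GDDR7 chip
print(bandwidth_gb_s(40, 32))   # 160.0 GB/s per chip at SK Hynix's 40 Gbps
print(bandwidth_gb_s(32, 384))  # 1536.0 GB/s ~= 1.5 TB/s across a 384-bit card
```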

SK hynix Showcases Its New AI Memory Solutions at HPE Discover 2024

SK hynix has returned to Las Vegas to showcase its leading AI memory solutions at HPE Discover 2024, Hewlett Packard Enterprise's (HPE) annual technology conference. Held from June 17-20, HPE Discover 2024 features a packed schedule with more than 150 live demonstrations, as well as technical sessions, exhibitions, and more. This year, attendees can also benefit from three new curated programs on edge computing and networking, hybrid cloud technology, and AI. Under the slogan "Memory, The Power of AI," SK hynix is displaying its latest memory solutions at the event including those supplied to HPE. The company is also taking advantage of the numerous networking opportunities to strengthen its relationship with the host company and its other partners.

The World's Leading Memory Solutions Driving AI
SK hynix's booth at HPE Discover 2024 consists of three product sections and a demonstration zone which showcase the unprecedented capabilities of its AI memory solutions. The first section features the company's groundbreaking memory solutions for AI, including HBM solutions. In particular, the industry-leading HBM3E has emerged as a core product to meet the growing demands of AI systems due to its exceptional processing speed, capacity, and heat dissipation. A key solution from the company's CXL lineup, CXL Memory Module-DDR5 (CMM-DDR5), is also on display in this section. In the AI era where high performance and capacity are vital, CMM-DDR5 has gained attention for its ability to expand system bandwidth by up to 50% and capacity by up to 100% compared to systems only equipped with DDR5 DRAM.
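To put those percentages in perspective, here is illustrative arithmetic only; the DDR5 baseline below (channel count, per-channel bandwidth, capacity) is hypothetical and not from SK hynix:

```python
# Illustrative arithmetic only: the DDR5 baseline is hypothetical.
ddr5_channels = 8
ddr5_bw_per_channel = 38.4   # GB/s per channel for DDR5-4800 (assumed)
ddr5_capacity_gb = 512       # hypothetical baseline capacity

baseline_bw = ddr5_channels * ddr5_bw_per_channel  # 307.2 GB/s
expanded_bw = baseline_bw * 1.5                    # "up to 50%" more with CMM-DDR5
expanded_capacity = ddr5_capacity_gb * 2           # "up to 100%" more capacity

print(f"bandwidth: {baseline_bw:.0f} -> {expanded_bw:.0f} GB/s")
print(f"capacity:  {ddr5_capacity_gb} -> {expanded_capacity} GB")
```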