News Posts matching #HBM

Numem to Showcase Next-Gen Memory Solutions at the Upcoming Chiplet Summit

Numem, an innovator focused on accelerating memory for AI workloads, will be at the upcoming Chiplet Summit to showcase its high-performance solutions. By accelerating the delivery of data via new memory subsystem designs, Numem solutions are re-architecting the hierarchy of AI memory tiers to eliminate the bottlenecks that negatively impact power and performance.

The rapid growth of AI workloads and AI processors/GPUs is exacerbating the memory bottleneck caused by the slowing performance and scalability improvements of SRAM and DRAM, presenting a major obstacle to maximizing system performance. To overcome this, there is a pressing need for intelligent memory solutions that offer higher power efficiency and greater bandwidth, coupled with a reevaluation of traditional memory architectures.

SK hynix Ships HBM4 Samples to NVIDIA in June, Mass Production Slated for Q3 2025

SK hynix has sped up its HBM4 development plans, according to a report from ZDNet. The company wants to start shipping HBM4 samples to NVIDIA this June, earlier than the original timeline, and hopes to begin supplying products by the end of Q3 2025. The push likely aims to secure a head start in the next-gen HBM market. To meet this accelerated schedule, SK hynix has set up a dedicated HBM4 development team to supply NVIDIA. Industry sources indicated on January 15th that SK hynix plans to deliver its first customer samples of HBM4 in early June this year. The company hit a major milestone when it completed the HBM4 tapeout, the final design step before manufacturing, in Q4 2024.

HBM4 marks the sixth generation of high-bandwidth memory built on stacked DRAM architecture. It follows HBM3E, the current fifth-generation version, with large-scale production likely to kick off in late 2025 at the earliest. HBM4 promises a big leap forward, doubling data transfer capability with 2,048 I/O channels, up from 1,024 in its predecessor. NVIDIA originally planned to use 12-layer stacked HBM4 in its 2026 "Rubin" line of high-performance GPUs, but has since moved up the "Rubin" timeline, aiming for a late-2025 launch.
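
As a back-of-the-envelope illustration of what the wider interface means, per-stack bandwidth scales with interface width times per-pin data rate. The sketch below uses Python; the HBM3E pin rate reflects current top-bin parts, while the HBM4 pin rate is an assumption, since final speeds had not been announced.

```python
# Rough per-stack HBM bandwidth: interface width (bits) x per-pin data rate.
# The HBM4 pin rate below is an illustrative assumption, not a published spec.

def stack_bandwidth_gbs(io_width_bits: int, pin_rate_gbps: float) -> float:
    """Theoretical peak bandwidth of one HBM stack in GB/s."""
    return io_width_bits * pin_rate_gbps / 8  # divide by 8: bits -> bytes

hbm3e = stack_bandwidth_gbs(1024, 9.6)  # ~1,229 GB/s (top-bin HBM3E)
hbm4 = stack_bandwidth_gbs(2048, 6.4)   # ~1,638 GB/s even at a lower pin rate

print(f"HBM3E (1,024-bit @ 9.6 Gb/s): {hbm3e:,.0f} GB/s")
print(f"HBM4  (2,048-bit @ 6.4 Gb/s): {hbm4:,.0f} GB/s")
```

The point of the doubled width is that HBM4 can raise per-stack bandwidth substantially even at conservative per-pin speeds.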

Micron Breaks Ground on New HBM Advanced Packaging Facility in Singapore

Micron Technology, Inc. (Nasdaq: MU) broke ground today on a new High-Bandwidth Memory (HBM) advanced packaging facility adjacent to the company's current facilities in Singapore. Micron marked the occasion with a ceremony attended by Gan Kim Yong, Deputy Prime Minister and Minister for Trade and Industry of Singapore, Png Cheong Boon, Chairman of the Singapore Economic Development Board, Pee Beng Kong, Executive Vice President of the Singapore Economic Development Board, and Tan Boon Khai, CEO of JTC Corporation.

The new HBM advanced packaging facility will be the first facility of its kind in Singapore. Operations for the new facility are scheduled to begin in 2026, with meaningful expansion of Micron's total advanced packaging capacity beginning in calendar 2027 to meet the demands of AI growth. The launch of this facility will further strengthen Singapore's local semiconductor ecosystem and innovation.

SK hynix Showcases AI-Driven Innovations for a Sustainable Tomorrow at CES 2025

SK hynix has returned to Las Vegas for the Consumer Electronics Show (CES) 2025, showcasing its latest AI memory innovations that are reshaping the industry. Held from January 7-10, CES 2025 brings together the brightest minds and groundbreaking technologies from the world's leading tech companies. This year, the event's theme is "Dive In," inviting attendees to immerse themselves in the next wave of technological advancement. SK hynix is emphasizing how it is driving this wave through a display of leading AI memory technologies at the SK Group exhibit. Along with SK Telecom, SKC, and SK Enmove, the company is highlighting how the Group's AI infrastructure brings about true change under the theme "Innovative AI, Sustainable Tomorrow."

Groundbreaking Memory Tech Driving Change in the AI Era
Visitors enter SK Group's exhibit through the Innovation Gate, greeted by a video of dynamic wave-inspired visuals that symbolize the power of AI. The video shows the transformation of binary data into a wave flowing through the exhibition, highlighting how data and AI drive change across industries. Continuing deeper into the exhibit, attendees make their way into the AI Data Center area, the focal point of SK hynix's display. This area features the company's transformative memory products driving progress in the AI era. Among the cutting-edge AI memory technologies on display are SK hynix's HBM, server DRAM, eSSD, CXL, and PIM products.

Passive Buyer Strategies Drive DRAM Contract Prices Down Across the Board in 1Q25

TrendForce's latest investigations reveal that the DRAM market is expected to face downward pricing pressure in 1Q25 as seasonal weakness aligns with sluggish consumer demand for products like smartphones. Additionally, early stockpiling by notebook manufacturers, ahead of potential import tariffs under the Trump administration, has further exacerbated the pricing decline.

Conventional DRAM prices are projected to drop by 8% to 13%. However, if HBM products are included, the anticipated price decline will range from 0% to 5%.
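
The gap between the two ranges is simple mix arithmetic: if HBM pricing holds firm or rises while conventional DRAM falls, the blended average declines far less. A minimal sketch, assuming a hypothetical 40% HBM revenue share and a 5% HBM price increase (TrendForce does not publish these weights):

```python
# Blended DRAM price change as a weighted mix of conventional DRAM and HBM.
# The 40% HBM share and +5% HBM price change are illustrative assumptions.

def blended_change(conv: float, hbm: float, hbm_share: float) -> float:
    """Weighted-average price change across the two product classes."""
    return conv * (1 - hbm_share) + hbm * hbm_share

for conv in (-0.08, -0.13):
    print(f"conventional {conv:+.0%} -> blended {blended_change(conv, 0.05, 0.40):+.1%}")
# conventional -8% -> blended -2.8%; conventional -13% -> blended -5.8%
```

Under these assumed weights, the blended figure lands roughly in the quoted 0% to -5% band.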

CXMT Achieves 80% Yield for DDR5 Chips, HBM2 Production and Capacity Expansion Underway

According to a recent Citigroup analysis, CXMT, China's domestic memory chipmaker, is demonstrating significant progress in its DDR5 production yields. The company's DDR5 yield rates have reached approximately 80%, marking a substantial improvement from the initial 50% yields when production began. This progress builds on CXMT's experience with DDR4 manufacturing, where the company has achieved yields of around 90%. The company currently operates two fab facilities in Hefei: Fab 1 is dedicated to DDR4 production on 19 nm process technology with a 100,000 wafer-per-month capacity, while Fab 2 focuses on DDR5 production using 17 nm technology, with a current capacity of 50,000 wafers per month. According to the analysis, CXMT's DDR5 yields could improve further to approximately 90% by the end of 2025.

Despite these improvements, CXMT faces technological challenges compared to industry leaders. The company's current production processes, 19 nm for DDR4 and 17 nm for DDR5, lag behind competitors like Samsung and SK hynix, which manufacture 12 nm DDR5 chips. This technology gap results in higher power consumption and less favorable form factors for CXMT's products. The company primarily targets domestic Chinese smartphone and computing OEM customers. Looking ahead, CXMT plans to expand its DDR5 and HBM capabilities, with potential additional capacity of 50,000 wafers per month at Fab 2 in 2025 if market conditions prove favorable. The company is also making progress on HBM2 development, with customer sampling underway and low-volume production expected to begin in mid-2025.

Nanya Technology Partners With PieceMakers to Develop Customized Ultra-High-Bandwidth Memory

Nanya Technology's Board of Directors today approved a strategic partnership with PieceMakers Technology, Inc. ("PieceMakers") to jointly develop customized ultra-high-bandwidth memory solutions. As part of the collaboration, Nanya Technology will subscribe to a cash capital increase of up to NT$660 million, purchasing up to 22 million common shares at NT$30 per share in PieceMakers. Upon completion of the capital increase, Nanya Technology is expected to hold up to approximately a 38% stake in PieceMakers.

To meet the growing demand for high-performance memory driven by AI and edge computing, this collaboration will combine Nanya Technology's 10 nm-class DRAM innovation with PieceMakers' expertise in customized DRAM design to develop high-value, high-performance, and low-power customized ultra-high-bandwidth memory solutions, unlocking new opportunities in AI and high-performance computing markets.

Marvell Announces Custom HBM Compute Architecture for AI Accelerators

Marvell Technology, Inc. (NASDAQ: MRVL), a leader in data infrastructure semiconductor solutions, today announced that it has pioneered a new custom HBM compute architecture that enables XPUs to achieve greater compute and memory density. The new technology is available to all of its custom silicon customers to improve the performance, efficiency and TCO of their custom XPUs. Marvell is collaborating with its cloud customers and leading HBM manufacturers, Micron, Samsung Electronics, and SK hynix to define and develop custom HBM solutions for next-generation XPUs.

HBM is a critical component integrated within the XPU using advanced 2.5D packaging technology and high-speed industry-standard interfaces. However, the scaling of XPUs is limited by the current standard interface-based architecture. The new Marvell custom HBM compute architecture introduces tailored interfaces to optimize performance, power, die size, and cost for specific XPU designs. This approach considers the compute silicon, HBM stacks, and packaging. By customizing the HBM memory subsystem, including the stack itself, Marvell is advancing customization in cloud data center infrastructure. Marvell is collaborating with major HBM makers to implement this new architecture and meet cloud data center operators' needs.

Broadcom Delivers Industry's First 3.5D F2F Technology for AI XPUs

Broadcom Inc. today announced the availability of its 3.5D eXtreme Dimension System in Package (XDSiP) platform technology, enabling consumer AI customers to develop next-generation custom accelerators (XPUs). The 3.5D XDSiP integrates more than 6,000 mm² of silicon and up to 12 high-bandwidth memory (HBM) stacks in one packaged device to enable high-efficiency, low-power computing for AI at scale. Broadcom has achieved a significant milestone by developing and launching the industry's first Face-to-Face (F2F) 3.5D XPU.

The immense computational power required for training generative AI models relies on massive clusters of 100,000 XPUs, growing toward 1 million. These XPUs demand increasingly sophisticated integration of compute, memory, and I/O capabilities to achieve the necessary performance while minimizing power consumption and cost. Traditional methods like Moore's Law and process scaling are struggling to keep up with these demands. Therefore, advanced system-in-package (SiP) integration is becoming crucial for next-generation XPUs. Over the past decade, 2.5D integration, which places multiple chiplets totaling up to 2,500 mm² of silicon and up to 8 HBM modules on an interposer, has proven valuable for XPU development. However, as new and increasingly complex LLMs are introduced, their training necessitates 3D silicon stacking for better size, power, and cost. Consequently, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, is poised to become the technology of choice for next-generation XPUs in the coming decade.
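
The quoted figures make the step up easy to quantify; a trivial comparison of the 2.5D and 3.5D envelopes described above:

```python
# Relative integration capacity: typical 2.5D packages vs. Broadcom's 3.5D
# XDSiP, using the figures quoted above.

silicon_2p5d, silicon_3p5d = 2500, 6000  # mm^2 of integrated silicon
hbm_2p5d, hbm_3p5d = 8, 12               # HBM stacks per package

print(f"Silicon area: {silicon_3p5d / silicon_2p5d:.1f}x")  # 2.4x
print(f"HBM stacks:   {hbm_3p5d / hbm_2p5d:.1f}x")          # 1.5x
```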

SK hynix Shifts to 3 nm Process for Its HBM4 Base Die in 2025

SK hynix plans to produce its sixth-generation high-bandwidth memory chips (HBM4) using TSMC's 3 nm process, a change from initial plans to use 5 nm technology. The Korea Economic Daily reports that these chips will be delivered to NVIDIA in the second half of 2025. NVIDIA's current GPU products are based on a 4 nm process. The HBM4 prototype chip SK hynix launched in March features vertical stacking on a 3 nm base die; compared to a 5 nm base die, the new 3 nm-based HBM chip is expected to offer a 20-30% performance improvement. However, SK hynix's general-purpose HBM4 and HBM4E chips will continue to use the 12 nm process in collaboration with TSMC.

While SK hynix's fifth-generation HBM3E chips used its own base-die technology, the company has chosen TSMC's 3 nm technology for HBM4. This decision is anticipated to significantly widen the performance gap with competitor Samsung Electronics, which plans to manufacture its HBM4 chips using a 4 nm process. SK hynix currently leads the global HBM market with almost 50% market share, with most of its HBM products delivered to NVIDIA.

Server DRAM and HBM Boost 3Q24 DRAM Industry Revenue by 13.6% QoQ

TrendForce's latest investigations reveal that the global DRAM industry revenue reached US$26.02 billion in 3Q24, marking a 13.6% QoQ increase. The rise was driven by growing demand for DDR5 and HBM in data centers, despite a decline in LPDDR4 and DDR4 shipments due to inventory reduction by Chinese smartphone brands and capacity expansion by Chinese DRAM suppliers. ASPs continued their upward trend from the previous quarter, with contract prices rising by 8% to 13%, further supported by HBM's displacement of conventional DRAM production.

Looking ahead to 4Q24, TrendForce projects a QoQ increase in overall DRAM bit shipments. However, the capacity constraints caused by HBM production are expected to have a weaker-than-anticipated impact on pricing. Additionally, capacity expansions by Chinese suppliers may prompt PC OEMs and smartphone brands to aggressively deplete inventory to secure lower-priced DRAM products. As a result, contract prices for conventional DRAM and blended prices for conventional DRAM and HBM are expected to decline.

AMD Custom Makes CPUs for Azure: 88 "Zen 4" Cores and HBM3 Memory

Microsoft has announced its new Azure HBv5 virtual machines, featuring unique custom hardware made by AMD. CEO Satya Nadella made the announcement during Microsoft Ignite, introducing a custom-designed AMD processor solution that achieves remarkable performance metrics. The new HBv5 virtual machines deliver an extraordinary 6.9 TB/s of memory bandwidth, utilizing four specialized AMD processors equipped with HBM3 technology. This represents an eightfold improvement over existing cloud alternatives and a staggering 20-fold increase compared to previous Azure HBv3 configurations. Each HBv5 virtual machine boasts impressive specifications, including up to 352 AMD EPYC "Zen 4" CPU cores capable of reaching 4 GHz peak frequencies. The system provides users with 400-450 GB of HBM3 RAM and features doubled Infinity Fabric bandwidth compared to any previous AMD EPYC server platform. Given that each VM has four CPUs, this yields 88 "Zen 4" cores per CPU socket, with up to 9 GB of memory per core.
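
The headline numbers are easy to cross-check. In the sketch below, the per-core figures are derived from the quoted totals, and the ~44-core result is an inference that the "up to 9 GB per core" figure refers to reduced-core VM sizes rather than a published spec:

```python
# Sanity-check arithmetic on the Azure HBv5 figures quoted above.

cores_total = 352   # EPYC "Zen 4" cores per VM, across four custom CPUs
sockets = 4
hbm_gb = 400        # lower end of the 400-450 GB HBM3 range

print(cores_total // sockets)          # 88 cores per socket
print(f"{hbm_gb / cores_total:.2f}")   # ~1.14 GB of HBM3 per core, all cores active
print(f"{hbm_gb / 9:.0f}")             # ~44 active cores implied by "9 GB per core"
```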

The architecture includes 800 Gb/s of NVIDIA Quantum-2 InfiniBand connectivity and 14 TB of local NVMe SSD storage. The development marks a strategic shift in addressing memory performance limitations, which Microsoft identifies as a critical bottleneck in HPC applications. This custom design particularly benefits sectors requiring intensive computational resources, including automotive design, aerospace simulation, weather modeling, and energy research. While the CPU appears custom-designed for Microsoft's needs, it bears similarities to previously rumored AMD processors, suggesting a possible connection to the speculated MI300C chip architecture. The system's design choices, including disabled SMT and a single-tenant configuration, clearly focus on optimizing performance for specific HPC workloads. As readers may recall, Intel has also made customized Xeons for AWS; such arrangements are normal in the hyperscaler space, given that hyperscalers drive most of the revenue.

Samsung Reaches Key Milestone at New Semiconductor R&D Complex

Samsung Electronics Co., Ltd. today announced that it held a tool-in ceremony for its new semiconductor research and development complex (NRD-K) at its Giheung campus, marking a significant leap into the future. About 100 guests, including those from suppliers and customers, were in attendance to celebrate the milestone. As a state-of-the-art facility, NRD-K broke ground in 2022 and is set to become a key research base for Samsung's memory, system LSI, and foundry semiconductor R&D. With its advanced infrastructure, research and product-level verification will be able to take place under one roof. Samsung plans to invest about KRW 20 trillion by 2030 in the complex, which covers about 109,000 square meters (m²) within its Giheung campus. The complex will also include an R&D-dedicated line scheduled to begin operation in mid-2025.

"NRD-K will bolster our development speed, enabling the company to create a virtuous cycle to accelerate fundamental research on next generation technology and mass production. We will lay the foundation for a new leap forward in Giheung, where Samsung Electronics' 50-year history of semiconductors began, and create a new future for the next 100 years," said Young Hyun Jun, Vice Chairman and Head of the Device Solutions Division at Samsung Electronics.

NVIDIA B200 "Blackwell" Records 2.2x Performance Improvement Over its "Hopper" Predecessor

We know that NVIDIA's latest "Blackwell" GPUs are fast, but how much faster are they than the previous-generation "Hopper"? Thanks to the latest MLPerf Training v4.1 results, NVIDIA's HGX B200 Blackwell platform has demonstrated massive performance gains, measuring up to a 2.2x improvement per GPU compared to the HGX H200 Hopper platform. The latest results, verified by MLCommons, reveal impressive achievements in large language model (LLM) training. The Blackwell architecture, featuring HBM3e high-bandwidth memory and fifth-generation NVLink interconnect technology, achieved double the performance per GPU for GPT-3 pre-training and a 2.2x boost for Llama 2 70B fine-tuning compared to the previous Hopper generation. Each benchmark system incorporated eight Blackwell GPUs operating at a 1,000 W TDP, connected via NVLink Switch for scale-up.

The network infrastructure utilized NVIDIA ConnectX-7 SuperNICs and Quantum-2 InfiniBand switches, enabling high-speed node-to-node communication for distributed training workloads. While previous Hopper-based systems required 256 GPUs to optimize performance for the GPT-3 175B benchmark, Blackwell accomplished the same task with just 64 GPUs, leveraging its larger HBM3e memory capacity and bandwidth. One thing to look out for is the upcoming GB200 NVL72 system, which promises even more significant gains beyond the 2.2x. It features expanded NVLink domains, higher memory bandwidth, and tight integration with NVIDIA Grace CPUs, complemented by ConnectX-8 SuperNIC and Quantum-X800 switch technologies. With faster switching and better data movement from Grace-Blackwell integration, we could see even more software optimization from NVIDIA to push the performance envelope.

Samsung Hopes PIM Memory Technology Can Replace HBM in Next-Gen AI Applications

The 8th edition of the Samsung AI Forum was held on November 4th and 5th in Seoul, and among all the presentations and keynote speeches, one piece of information caught our attention. As reported by The Chosun Daily, Samsung is (again) turning its attention to Processing-in-Memory (PIM) technology, in what appears to be the company's latest attempt to keep up with its rival SK hynix in this area. In 2021, Samsung introduced the world's first HBM-PIM, with the chips showing impressive gains in performance (nearly double) while reducing energy consumption by almost 50% on average. PIM technology essentially adds the processing functions needed for computational tasks into the memory itself, reducing data transfer between the CPU and memory.

Now, the company hopes that PIM memory chips could replace HBM in the future, based on the advantages this next-generation memory technology possesses, mainly for artificial intelligence (AI) applications. "AI is transforming our lives at an unprecedented rate, and the question of how to use AI more responsibly is becoming increasingly important," said Samsung Electronics CEO Han Jong-hee in his opening remarks. "Samsung Electronics is committed to fostering a more efficient and sustainable AI ecosystem." During the event, Samsung also highlighted its partnership with AMD, which it reportedly supplies with fifth-generation HBM, the HBM3E.

NVIDIA CEO Jensen Huang Asks SK hynix to Speed Up HBM4 Delivery by Six Months

SK hynix announced the industry's first 48 GB 16-high HBM3E at the SK AI Summit in Seoul today. During the event, news emerged about plans to accelerate development of the company's next-gen memory technology. Reuters and ZDNet Korea reported that NVIDIA CEO Jensen Huang asked SK hynix to bring forward HBM4 delivery by six months. SK Group Chairman Chey Tae-won shared this information at the Summit. The company had earlier said it would deliver HBM4 chips to customers in the second half of 2025.

When ZDNet asked about the accelerated plan, SK hynix President Kwak Noh-Jung gave a careful answer, saying "We will give it a try." A company spokesperson told Reuters that the new schedule would be quicker than first planned, but didn't share more details. In a video interview shown at the Summit, NVIDIA's Jensen Huang pointed out the strong partnership between the companies. He said working with SK hynix has helped NVIDIA go beyond Moore's Law performance gains, and stressed that NVIDIA will keep needing SK hynix's HBM tech for future products. SK hynix plans to supply the latest 12-layer HBM3E to an undisclosed customer this year and will start sampling the 16-layer HBM3E early next year.

Samsung Electronics Announces Results for Third Quarter of 2024, 7 Percent Revenue Increase

Samsung Electronics today reported financial results for the third quarter ended Sept. 30, 2024. The Company posted KRW 79.1 trillion in consolidated revenue, an increase of 7% from the previous quarter, on the back of the launch effects of new smartphone models and increased sales of high-end memory products. Operating profit declined to KRW 9.18 trillion, largely due to one-off costs, including the provision of incentives in the Device Solutions (DS) Division. The strength of the Korean won against the U.S. dollar resulted in a negative impact on company-wide operating profit of about KRW 0.5 trillion compared to the previous quarter.

In the fourth quarter, while memory demand for mobile and PC may encounter softness, growth in AI will keep demand at robust levels. Against this backdrop, the Company will concentrate on driving sales of High Bandwidth Memory (HBM) and high-density products. The Foundry Business aims to increase order volumes by enhancing advanced process technologies. Samsung Display Corporation (SDC) expects demand for flagship products from major customers to continue, while maintaining a conservative outlook on its performance. The Device eXperience (DX) Division will continue to focus on premium products, but sales are expected to decline slightly compared to the previous quarter.

HBM5 20hi Stack to Adopt Hybrid Bonding Technology, Potentially Transforming Business Models

TrendForce reports that the focus on HBM products in the DRAM industry is increasingly turning attention toward advanced packaging technologies like hybrid bonding. Major HBM manufacturers are considering whether to adopt hybrid bonding for HBM4 16hi stack products but have confirmed plans to implement this technology in the HBM5 20hi stack generation.

Hybrid bonding offers several advantages when compared to the more widely used micro-bumping. Since it does not require bumps, it allows for more stacked layers and can accommodate thicker chips that help address warpage. Hybrid-bonded chips also benefit from faster data transmission and improved heat dissipation.

Global Silicon Wafer Shipments to Remain Soft in 2024 Before Strong Expected Rebound in 2025, SEMI Reports

Global shipments of silicon wafers are projected to decline 2% in 2024 to 12,174 million square inches (MSI), followed by a strong 10% rebound in 2025 to 13,328 MSI as wafer demand continues to recover from the downcycle, SEMI reported today in its annual silicon shipment forecast.
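
The percentages and absolute figures line up, as a quick check shows (the implied 2023 base is a derivation from the quoted numbers, not a SEMI-published figure):

```python
# Quick consistency check on SEMI's wafer shipment forecast
# (MSI = millions of square inches).

msi_2024 = 12_174
msi_2025 = 13_328

msi_2023 = msi_2024 / (1 - 0.02)       # implied 2023 base: ~12,422 MSI
growth_2025 = msi_2025 / msi_2024 - 1  # ~9.5%, rounded to 10% in the forecast

print(f"Implied 2023 shipments: {msi_2023:,.0f} MSI")
print(f"2025 growth: {growth_2025:.1%}")
```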

Strong silicon wafer shipment growth is expected to continue through 2027 to meet increasing demand related to AI and advanced processing, driving improved fab utilization rate for global semiconductor production capacity. Moreover, new applications in advanced packaging and high-bandwidth memory (HBM) production, which require additional wafers, are contributing to the rising need for silicon wafers. Such applications include temporary or permanent carrier wafers, interposers, device separation into chiplets, and memory/logic array separation.

ASML Reports €7.5 Billion Total Net Sales and €2.1 Billion Net Income in Q3 2024

Today, ASML Holding NV (ASML) has published its 2024 third-quarter results.
  • Q3 total net sales of €7.5 billion, gross margin of 50.8%, net income of €2.1 billion
  • Quarterly net bookings in Q3 of €2.6 billion of which €1.4 billion is EUV
  • ASML expects Q4 2024 total net sales between €8.8 billion and €9.2 billion, and a gross margin between 49% and 50%
  • ASML expects 2024 total net sales of around €28 billion
  • ASML expects 2025 total net sales to be between €30 billion and €35 billion, with a gross margin between 51% and 53%
CEO statement and outlook
"Our third-quarter total net sales came in at €7.5 billion, above our guidance, driven by more DUV and Installed Base Management sales. The gross margin came in at 50.8%, within guidance. While there continue to be strong developments and upside potential in AI, other market segments are taking longer to recover. It now appears the recovery is more gradual than previously expected. This is expected to continue in 2025, which is leading to customer cautiousness. Regarding Logic, the competitive foundry dynamics have resulted in a slower ramp of new nodes at certain customers, leading to several fab push outs and resulting changes in litho demand timing, in particular EUV. In Memory, we see limited capacity additions, with the focus still on technology transitions supporting the HBM and DDR5 AI-related demand."

GIGABYTE Releases Servers with AMD EPYC 9005 Series Processors and AMD Instinct MI325X GPUs

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced support for AMD EPYC 9005 Series processors with the release of new GIGABYTE servers alongside BIOS updates for some existing GIGABYTE servers using the SP5 platform. This first wave of updates covers over 60 servers and motherboards that deliver exceptional performance with 5th Generation AMD EPYC processors. In addition, with the launch of the AMD Instinct MI325X accelerator, a newly designed GIGABYTE server was created, and it will be showcased at SC24 (Nov. 19-21) in Atlanta.

New GIGABYTE Servers and Updates
To cover all possible workload scenarios, from modular-design servers to edge servers to enterprise-grade motherboards, these new solutions will ship with support for AMD EPYC 9005 Series processors. The XV23-ZX0 is one of many new solutions, notable for its modularized design using two AMD EPYC 9005 processors and supporting up to four GPUs plus three additional FHFL slots. It also has 2+2 redundant power supplies on the front side for ease of access.

Slowing Demand Growth Constrains Q4 Memory Price Increases

TrendForce's latest findings reveal that weaker consumer demand has persisted through 3Q24, leaving AI servers as the primary driver of memory demand. This dynamic, combined with HBM production displacing conventional DRAM capacity, has led suppliers to maintain a firm stance on contract price hikes.

Smartphone brands remain cautious despite some server OEMs continuing to show purchasing momentum. Consequently, TrendForce forecasts that Q4 memory prices will see a significant slowdown in growth, with conventional DRAM expected to increase by only 0-5%. However, benefiting from the rising share of HBM, the average price of overall DRAM is projected to rise 8-13%, a marked deceleration compared to the previous quarter.

Micron Updates Corporate Logo with "Ahead of The Curve" Design

Today, Micron updated its corporate logo with new symbolism. The redesign comes as Micron celebrates over four decades of technological advancement in the semiconductor industry. The new logo features a distinctive silicon color, paying homage to the wafers at the core of Micron's products. Its curved lettering represents the company's ability to stay ahead of industry trends and adapt to rapid technological changes. The design also incorporates vibrant gradient colors inspired by light reflections on wafers, which sit at the heart of Micron's memory and storage products.

This rebranding effort coincides with Micron's expanding role in AI, where memory and storage innovations are increasingly crucial. The company has positioned itself beyond a commodity memory supplier, now offering leadership in solutions for AI data centers, high-performance computing, and AI-enabled devices. The company has come far from its original 64K DRAM in 1981 to HBM3E DRAM today. Micron offers different HBM memory products, graphics memory powering consumer GPUs, CXL memory modules, and DRAM components and modules.

NVIDIA Cancels Dual-Rack NVL36x2 in Favor of Single-Rack NVL72 Compute Monster

NVIDIA has reportedly discontinued its dual-rack GB200 NVL36x2 GPU model, opting to focus on the single-rack GB200 NVL72 and NVL36 models. This shift, revealed by industry analyst Ming-Chi Kuo, aims to simplify NVIDIA's offerings in the AI and HPC markets. The decision was influenced by major clients like Microsoft, who prefer the NVL72's improved space efficiency and potential for enhanced inference performance. While both models perform similarly in AI large language model (LLM) training, the NVL72 is expected to excel in non-parallelizable inference tasks. As a reminder, the NVL72 features 36 Grace CPUs delivering 2,592 Arm Neoverse V2 cores with 17 TB of LPDDR5X memory at 18.4 TB/s of aggregate bandwidth. Additionally, it includes 72 Blackwell GB200 SXM GPUs with a massive 13.5 TB of HBM3e combined, running at 576 TB/s of aggregate bandwidth.
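
Those rack-level totals break down neatly per device; the per-GPU numbers below are derived from the quoted aggregates and are consistent with 192 GB of HBM3e per Blackwell GPU:

```python
# Per-device breakdown of the GB200 NVL72 rack-level figures quoted above.

grace_cpus, arm_cores = 36, 2_592
gpus, hbm_gb_total, hbm_bw_tbs = 72, 13.5 * 1024, 576  # 13.5 TB ~= 13,824 GB

print(arm_cores // grace_cpus)  # 72 Neoverse V2 cores per Grace CPU
print(hbm_gb_total / gpus)      # 192.0 GB of HBM3e per GPU
print(hbm_bw_tbs / gpus)        # 8.0 TB/s of HBM bandwidth per GPU
```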

However, this shift presents significant challenges. The NVL72's power consumption of around 120 kW far exceeds typical data center capabilities, potentially limiting its immediate widespread adoption. The discontinuation of the NVL36x2 has also sparked concerns about NVIDIA's execution capabilities and may disrupt the supply chain for assembly and cooling solutions. Despite these hurdles, industry experts view this as a pragmatic approach to product planning in the dynamic AI landscape. While some customers may be disappointed by the dual-rack model's cancellation, NVIDIA's long-term outlook in the AI technology market remains strong. The company continues to work with clients and listen to their needs as it positions itself as a leader in high-performance computing solutions.