News Posts matching #HBM3E


NVIDIA Supercharges Hopper, the World's Leading AI Computing Platform

NVIDIA today announced it has supercharged the world's leading AI computing platform with the introduction of the NVIDIA HGX H200. Based on NVIDIA Hopper architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e - faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141 GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100. H200-powered systems from the world's leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.
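The quoted gains line up with the A100's published 80 GB SXM specification (roughly 2 TB/s of HBM2e bandwidth); a minimal sanity check, assuming those A100 figures:

```python
# Sanity check of the quoted H200-vs-A100 gains.
# A100 figures (80 GB, ~2.0 TB/s) are assumed from its public 80 GB SXM spec.
h200_capacity_gb, h200_bandwidth_tbps = 141, 4.8
a100_capacity_gb, a100_bandwidth_tbps = 80, 2.0

capacity_ratio = h200_capacity_gb / a100_capacity_gb          # ~1.76x, "nearly double"
bandwidth_ratio = h200_bandwidth_tbps / a100_bandwidth_tbps   # 2.4x

print(f"capacity: {capacity_ratio:.2f}x, bandwidth: {bandwidth_ratio:.1f}x")
```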

Samsung Electronics Announces Third Quarter 2023 Results

Samsung Electronics today reported financial results for the third quarter ended September 30, 2023. Total consolidated revenue was KRW 67.40 trillion, a 12% increase from the previous quarter, mainly due to new smartphone releases and higher sales of premium display products. Operating profit rose sequentially to KRW 2.43 trillion based on strong sales of flagship models in mobile and strong demand for displays, as losses at the Device Solutions (DS) Division narrowed.

The Memory Business reduced losses sequentially as sales of high value-added products increased and average selling prices rose somewhat. Earnings in system semiconductors were impacted by a delay in demand recovery for major applications, but the Foundry Business posted a new quarterly high in backlog from design wins. The mobile panel business reported a significant increase in earnings on the back of new flagship model releases by major customers, while the large panel business narrowed losses in the quarter. The Device eXperience (DX) Division achieved solid results on robust sales of premium smartphones and TVs. Revenue at the Networks Business declined in major overseas markets as mobile operators scaled back investments.

Samsung Electronics Holds Memory Tech Day 2023 Unveiling New Innovations To Lead the Hyperscale AI Era

Samsung Electronics Co., Ltd., a world leader in advanced memory technology, today held its annual Memory Tech Day, showcasing industry-first innovations and new memory products to accelerate technological advancements across future applications—including the cloud, edge devices and automotive.

Attended by about 600 customers, partners and industry experts, the event served as a platform for Samsung executives to expand on the company's vision for "Memory Reimagined," covering long-term plans to continue its memory technology leadership, outlook on market trends and sustainability goals. The company also presented new product innovations such as the HBM3E Shinebolt, LPDDR5X CAMM2 and Detachable AutoSSD.

SK hynix Displays Next-Gen Solutions Set to Unlock AI and More at OCP Global Summit 2023

SK hynix showcased its next-generation memory semiconductor technologies and solutions at the OCP Global Summit 2023 held in San Jose, California from October 17-19. The OCP Global Summit is an annual event hosted by the world's largest data center technology community, the Open Compute Project (OCP), where industry experts gather to share various technologies and visions. This year, SK hynix and its subsidiary Solidigm showcased advanced semiconductor memory products that will lead the AI era under the slogan "United Through Technology".

SK hynix presented a broad range of its solutions at the summit, including its leading HBM (HBM3/3E), CXL, and AiM products for generative AI. The company also unveiled some of the latest additions to its product portfolio including its DDR5 RDIMM, MCR DIMM, enterprise SSD (eSSD), and LPDDR CAMM devices. Visitors to the HBM exhibit could see HBM3, which is utilized in NVIDIA's H100, a high-performance GPU for AI, and also check out the next-generation HBM3E. With their low power consumption and ultra-high performance, these HBM solutions are more eco-friendly and are particularly suitable for power-hungry AI server systems.

Samsung Notes: HBM4 Memory is Coming in 2025 with New Assembly and Bonding Technology

According to a blog post by SangJoon Hwang, Executive Vice President and Head of the DRAM Product & Technology Team at Samsung Electronics, High-Bandwidth Memory 4 (HBM4) is coming in 2025. In the recent timeline of HBM development, we saw the first appearance of HBM memory in 2015 with the AMD Radeon R9 Fury X. The second-generation HBM2 appeared with NVIDIA Tesla P100 in 2016, and the third-generation HBM3 saw the light of day with the NVIDIA Hopper GH100 GPU in 2022. Currently, Samsung has developed 9.8 Gbps HBM3E memory, which will start sampling to customers soon.

However, Samsung is more ambitious with development timelines this time, and the company expects to announce HBM4 in 2025, possibly with commercial products in the same calendar year. Interestingly, the HBM4 memory will use technologies optimized for high thermal loads, such as non-conductive film (NCF) assembly and hybrid copper bonding (HCB). NCF is a polymer layer that stabilizes the micro bumps and TSVs in the stack, protecting the dies' solder bumps from mechanical shock. Hybrid copper bonding is an advanced semiconductor packaging method that creates direct copper-to-copper connections between semiconductor components, enabling high-density, 3D-like packaging. It offers high I/O density, enhanced bandwidth, and improved power efficiency. It uses copper as the conductor and an oxide as the insulator in place of regular micro bumps, increasing the connection density needed for HBM-like structures.

SK hynix Presents Advanced Memory Technologies at Intel Innovation 2023

SK hynix announced on September 22 that it showcased its latest memory technologies and products at Intel Innovation 2023 held September 19-20 in the western U.S. city of San Jose, California. Hosted by Intel since 2019, Intel Innovation is an annual IT exhibition which brings together the technology company's customers and partners to share the latest developments in the industry. At this year's event held at the San Jose McEnery Convention Center, SK hynix showcased its advanced semiconductor memory products which are essential in the generative AI era under the slogan "Pioneer Tomorrow With the Best."

Products that garnered the most interest were HBM3, which supports the high-speed performance of AI accelerators, and DDR5 RDIMM, a DRAM module for servers with 1bnm process technology. As one of SK hynix's core technologies, HBM3 has established the company as a trailblazer in AI memory. SK hynix plans to further strengthen its position in the market by mass-producing HBM3E (Extended) from 2024. Meanwhile, DDR5 RDIMM with 1bnm, or the 5th generation of the 10 nm process technology, also offers outstanding performance. In addition to supporting unprecedented transfer speeds of more than 6,400 megabits per second (Mbps), this low-power product helps customers simultaneously reduce costs and improve ESG performance.

SK hynix Develops World's Best Performing HBM3E Memory

SK hynix Inc. announced today that it successfully developed HBM3E, the next generation of the highest-specification DRAM for AI applications currently available, and said a customer's evaluation of samples is underway. The company said that the successful development of HBM3E, the extended version of HBM3 which delivers the world's best specifications, comes on top of its experience as the industry's sole mass provider of HBM3. With its experience as the supplier of the industry's largest volume of HBM products and its mass-production readiness, SK hynix plans to mass produce HBM3E from the first half of next year and solidify its unrivaled leadership in the AI memory market.

According to the company, the latest product not only meets the industry's highest standards of speed, the key specification for AI memory products, but also leads in all other categories including capacity, heat dissipation and user-friendliness. In terms of speed, the HBM3E can process data at up to 1.15 terabytes a second, which is equivalent to processing more than 230 Full-HD movies of 5 GB each in a second.
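The movie comparison is simple unit arithmetic; a minimal sketch, assuming the stated 5 GB per Full-HD movie and decimal (1 TB = 1000 GB) units:

```python
# HBM3E throughput expressed as 5 GB Full-HD movies per second.
bandwidth_gb_per_s = 1.15 * 1000   # 1.15 TB/s in GB/s (decimal units)
movie_size_gb = 5

movies_per_second = bandwidth_gb_per_s / movie_size_gb
print(movies_per_second)  # 230.0
```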

NVIDIA Unveils Next-Generation GH200 Grace Hopper Superchip Platform With HBM3e

NVIDIA today announced the next-generation NVIDIA GH200 Grace Hopper platform - based on a new Grace Hopper Superchip with the world's first HBM3e processor - built for the era of accelerated computing and generative AI. Created to handle the world's most complex generative AI workloads, spanning large language models, recommender systems and vector databases, the new platform will be available in a wide range of configurations. The dual configuration - which delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation offering - comprises a single server with 144 Arm Neoverse cores, eight petaflops of AI performance and 282 GB of the latest HBM3e memory technology.
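The dual configuration's 282 GB figure is consistent with two HBM3e Grace Hopper Superchips at 141 GB each, the same per-GPU capacity quoted for the H200; a quick hedged check (the per-superchip figure is an assumption drawn from the H200 announcement, not stated in this release):

```python
# Per-superchip HBM3e capacity assumed from the H200 announcement (141 GB).
per_superchip_hbm3e_gb = 141
dual_config_gb = 2 * per_superchip_hbm3e_gb

print(dual_config_gb)  # 282, matching the quoted dual-configuration total
```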

"To meet surging demand for generative AI, data centers require accelerated computing platforms with specialized needs," said Jensen Huang, founder and CEO of NVIDIA. "The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center."

Insider Info Alleges SK hynix Preparing HBM3E Samples for NVIDIA

Industry insiders in South Korea have informed news publications that NVIDIA has requested that SK hynix submit samples of next-generation high bandwidth memory (HBM) for evaluation purposes—according to Business Korea's article, workers were preparing an initial batch of HBM3E prototypes for shipment this week. SK hynix has an existing relationship with NVIDIA—it fended off tough competition last year and has since produced (current gen) HBM3 DRAM for the H100 "Hopper" Tensor Core GPU.

The memory manufacturer is hoping to maintain its position as the HBM market leader with fifth generation products in the pipeline—vice president Park Myung-soo revealed back in April that: "we are preparing 8 Gbps HBM3E product samples for the second half of this year and are preparing for mass production in the first half of next year." A new partnership with NVIDIA could help SK hynix widen the gulf between it and its nearest competitor, Samsung, in the field of HBM production.

SK hynix Enters Industry's First Compatibility Validation Process for 1bnm DDR5 Server DRAM

SK hynix Inc. announced today that it has completed the development of the industry's most advanced 1bnm, the fifth-generation of the 10 nm process technology, while the company and Intel began a joint evaluation of 1bnm and validation in the Intel Data Center Certified memory program for DDR5 products targeted at Intel Xeon Scalable platforms.

The move comes after SK hynix became the first in the industry to reach 1anm readiness and completed Intel's system validation of the 1anm DDR5, the fourth generation of the 10 nm technology. The DDR5 products provided to Intel run at the world's fastest speed of 6.4 Gbps (Gigabits per second), representing a 33% improvement in data processing speed compared with test products from the early days of DDR5 development.