News Posts matching #HBM3E


SK hynix Announces 4Q24 Financial Results

SK hynix Inc. announced today that it recorded its best-ever yearly performance, with 66.1930 trillion won in revenues, 23.4673 trillion won in operating profit (an operating margin of 35%), and 19.7969 trillion won in net profit (a net margin of 30%). Yearly revenues marked an all-time high, exceeding the previous record set in 2022 by over 21 trillion won, while operating profit surpassed the 2018 record set during the semiconductor super boom.

In particular, fourth quarter revenues rose 12% from the previous quarter to 19.7670 trillion won, operating profit rose 15% to 8.0828 trillion won (an operating margin of 41%), and net profit came to 8.0065 trillion won (a net margin of 41%). SK hynix emphasized that, amid prolonged strong demand for AI memory, it achieved this all-time high result through world-leading HBM technology and profitability-oriented operation. HBM continued its high growth in the fourth quarter, accounting for over 40% of total DRAM revenue, while eSSD sales also rose steadily. With profitability-oriented operation built on remarkable product competitiveness, the company established a stable financial position, which led to the improved outcome.
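As a quick sanity check, the quoted margins follow directly from the revenue and profit figures above; the short Python sketch below (ours, purely illustrative, using only the numbers quoted in this report) reproduces them:

```python
# Illustrative check of the margins quoted in SK hynix's 4Q24 results.
# Figures are in trillions of won, as reported above.
full_year = {"revenue": 66.1930, "operating_profit": 23.4673, "net_profit": 19.7969}
q4 = {"revenue": 19.7670, "operating_profit": 8.0828, "net_profit": 8.0065}

def margins(figures):
    """Return (operating margin %, net margin %) for a set of results."""
    rev = figures["revenue"]
    return (100 * figures["operating_profit"] / rev,
            100 * figures["net_profit"] / rev)

print("FY2024: operating %.0f%%, net %.0f%%" % margins(full_year))  # 35%, 30%
print("4Q24:   operating %.0f%%, net %.0f%%" % margins(q4))         # 41%, 41%
```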

SK hynix Showcases AI-Driven Innovations for a Sustainable Tomorrow at CES 2025

SK hynix has returned to Las Vegas for the Consumer Electronics Show (CES) 2025, showcasing its latest AI memory innovations reshaping the industry. Held from January 7-10, CES 2025 brings together the brightest minds and groundbreaking technologies from the world's leading tech companies. This year, the event's theme is "Dive In," inviting attendees to immerse themselves in the next wave of technological advancement. SK hynix is emphasizing how it is driving this wave through a display of leading AI memory technologies at the SK Group exhibit. Along with SK Telecom, SKC, and SK Enmove, the company is highlighting how the Group's AI infrastructure brings about true change under the theme "Innovative AI, Sustainable Tomorrow."

Groundbreaking Memory Tech Driving Change in the AI Era
Visitors enter SK Group's exhibit through the Innovation Gate, where they are greeted by a video of dynamic wave-inspired visuals symbolizing the power of AI. The video shows the transformation of binary data into a wave that flows through the exhibition, highlighting how data and AI drive change across industries. Continuing deeper into the exhibit, attendees reach the AI Data Center area, the focal point of SK hynix's display. This area features the company's transformative memory products driving progress in the AI era. Among the cutting-edge AI memory technologies on display are SK hynix's HBM, server DRAM, eSSD, CXL, and PIM products.

SK hynix to Unveil Full Stack AI Memory Provider Vision at CES 2025

SK hynix Inc. announced today that it will showcase its innovative AI memory technologies at CES 2025, to be held in Las Vegas from January 7 to 10 (local time). A large number of C-level executives, including CEO Kwak No-jung, Chief Marketing Officer (CMO) Justin Kim, and Chief Development Officer (CDO) Ahn Hyun, will attend the event. "We will broadly introduce solutions optimized for on-device AI and next-generation AI memories, as well as representative AI memory products such as HBM and eSSD at this CES," said Justin Kim. "Through this, we will publicize our technological competitiveness to prepare for the future as a Full Stack AI Memory Provider."

SK hynix will also run a joint exhibition booth with SK Telecom, SKC and SK Enmove under the theme "Innovative AI, Sustainable Tomorrow." The booth will showcase how SK Group's AI infrastructure and services are transforming the world, represented in waves of light. SK hynix, the first in the world to mass-produce 12-layer fifth-generation HBM (HBM3E) and supply it to customers, will showcase samples of its 16-layer HBM3E, whose development was officially announced in November last year. This product uses the advanced MR-MUF process to achieve the industry's highest 16-layer configuration while controlling chip warpage and maximizing heat dissipation performance.

Samsung Hopes PIM Memory Technology Can Replace HBM in Next-Gen AI Applications

The 8th edition of the Samsung AI Forum was held on November 4th and 5th in Seoul, and among all the presentations and keynote speeches, one piece of information caught our attention. As reported by The Chosun Daily, Samsung is (again) turning its attention to Processing-in-Memory (PIM) technology, in what appears to be the company's latest attempt to keep up with its rival SK Hynix in this area. In 2021, Samsung introduced the world's first HBM-PIM, with the chips showing impressive performance gains (nearly double) while reducing energy consumption by almost 50% on average. PIM technology adds the processing functions necessary for computational tasks directly to the memory, reducing data transfer between the CPU and memory.
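To illustrate the idea, here is a toy Python model of why cutting CPU-memory data transfer saves energy; the per-operation and per-byte coefficients and the traffic reduction are hypothetical assumptions of ours, not Samsung's published figures:

```python
# Toy energy model for Processing-in-Memory (all coefficients are
# hypothetical illustrations, not Samsung's published numbers).
COMPUTE_PJ_PER_OP = 1.0      # energy per arithmetic operation (picojoules, assumed)
TRANSFER_PJ_PER_BYTE = 10.0  # energy to move one byte between CPU and DRAM (assumed)

def total_energy_pj(ops, bytes_moved):
    """Total energy = compute term + data-movement term."""
    return ops * COMPUTE_PJ_PER_OP + bytes_moved * TRANSFER_PJ_PER_BYTE

ops, data = 1_000_000, 400_000
baseline = total_energy_pj(ops, data)
# PIM executes operations next to the DRAM arrays, so far fewer bytes cross
# the CPU-memory interface (assume 10% of the original traffic remains).
pim = total_energy_pj(ops, data // 10)
print(f"energy saved vs. baseline: {100 * (1 - pim / baseline):.0f}%")  # ~72% in this toy model
```

The exact savings depend entirely on how transfer-dominated the workload is, which is why memory-bound AI inference is the use case PIM vendors target.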

Now, the company hopes that PIM memory chips could replace HBM in the future, based on the advantages this next-generation memory technology offers, mainly for artificial intelligence (AI) applications. "AI is transforming our lives at an unprecedented rate, and the question of how to use AI more responsibly is becoming increasingly important," said Samsung Electronics CEO Han Jong-hee in his opening remarks. "Samsung Electronics is committed to fostering a more efficient and sustainable AI ecosystem." During the event, Samsung also highlighted its partnership with AMD, which it reportedly supplies with its fifth-generation HBM, HBM3E.

NVIDIA CEO Jensen Huang Asks SK hynix to Speed Up HBM4 Delivery by Six Months

SK hynix announced the industry's first 48 GB 16-high HBM3E at the SK AI Summit in Seoul today. During the event, news also emerged about the company's plans for its next-generation memory: Reuters and ZDNet Korea reported that NVIDIA CEO Jensen Huang asked SK hynix to pull HBM4 delivery forward by six months. SK Group Chairman Chey Tae-won shared this information at the Summit. The company had earlier said it would deliver HBM4 chips to customers in the second half of 2025.

When ZDNet asked about this accelerated plan, SK hynix President Kwak Noh-Jung gave a careful answer, saying "We will give it a try." A company spokesperson told Reuters that the new schedule would be quicker than first planned, but shared no further details. In a video interview shown at the Summit, NVIDIA's Jensen Huang highlighted the strong partnership between the two companies. He said working with SK hynix has helped NVIDIA go beyond Moore's Law performance gains, and stressed that NVIDIA will keep needing SK hynix's HBM technology for future products. SK hynix plans to supply the latest 12-layer HBM3E to an undisclosed customer this year, and will start sampling the 16-layer HBM3E early next year.

SK hynix Introduces World's First 16-High HBM3E at SK AI Summit 2024

SK hynix CEO Kwak Noh-Jung, during his keynote speech titled "A New Journey in Next-Generation AI Memory: Beyond Hardware to Daily Life" at the SK AI Summit in Seoul, announced the development of the industry's first 48 GB 16-high HBM3E, which stacks the world's highest number of layers, surpassing the previous 12-high product. Kwak also shared the company's vision to become a "Full Stack AI Memory Provider", or a provider with a full lineup of AI memory products in both the DRAM and NAND spaces, through close collaboration with interested parties.

Samsung Electronics Announces Results for Third Quarter of 2024, 7 Percent Revenue Increase

Samsung Electronics today reported financial results for the third quarter ended Sept. 30, 2024. The Company posted KRW 79.1 trillion in consolidated revenue, an increase of 7% from the previous quarter, on the back of the launch effects of new smartphone models and increased sales of high-end memory products. Operating profit declined to KRW 9.18 trillion, largely due to one-off costs, including the provision of incentives in the Device Solutions (DS) Division. The strength of the Korean won against the U.S. dollar resulted in a negative impact on company-wide operating profit of about KRW 0.5 trillion compared to the previous quarter.

In the fourth quarter, while memory demand for mobile and PC may encounter softness, growth in AI will keep overall demand at robust levels. Against this backdrop, the Company will concentrate on driving sales of High Bandwidth Memory (HBM) and high-density products. The Foundry Business aims to increase order volumes by enhancing advanced process technologies. Samsung Display Corporation (SDC) expects demand for flagship products from major customers to continue, while maintaining a conservative outlook on its performance. The Device eXperience (DX) Division will continue to focus on premium products, but sales are expected to decline slightly compared to the previous quarter.

SK Hynix Reports Third Quarter 2024 Financial Results

SK hynix Inc. announced today that it recorded 17.5731 trillion won in revenues, 7.03 trillion won in operating profit (with an operating margin of 40%), and 5.7534 trillion won in net profit (with a net margin of 33%) in the third quarter of this year. Quarterly revenues marked an all-time high, exceeding the previous record of 16.4233 trillion won, set in the second quarter of this year, by more than 1 trillion won. Operating profit and net profit also far exceeded the previous records of 6.4724 trillion won and 4.6922 trillion won, respectively, set in the third quarter of 2018 during the semiconductor super boom.

SK hynix emphasized that the demand for AI memory continued to be strong centered on data center customers, and the company marked its highest revenue since its foundation by expanding sales of premium products such as HBM and eSSD. In particular, HBM sales showed excellent growth, up more than 70% from the previous quarter and more than 330% from the same period last year.

SK hynix Showcases Memory Solutions at the 2024 OCP Global Summit

SK hynix is showcasing its leading AI and data center memory products at the 2024 Open Compute Project (OCP) Global Summit held October 15-17 in San Jose, California. The annual summit brings together industry leaders to discuss advancements in open source hardware and data center technologies. This year, the event's theme is "From Ideas to Impact," which aims to foster the realization of theoretical concepts into real-world technologies.

In addition to presenting its advanced memory products at the summit, SK hynix is also strengthening key industry partnerships and sharing its AI memory expertise through insightful presentations. This year, the company is holding eight sessions—up from five in 2023—on topics including HBM and CMS.

ASRock Rack Unveils New Server Platforms Supporting AMD EPYC 9005 Series Processors and AMD Instinct MI325X Accelerators at AMD Advancing AI 2024

ASRock Rack Inc., a leading innovative server company, announced upgrades to its extensive lineup to support AMD EPYC 9005 Series processors. Among these updates is the introduction of the new 6U8M-TURIN2 GPU server. This advanced platform features AMD Instinct MI325X accelerators, specifically optimized for intensive enterprise AI applications, and will be showcased at AMD Advancing AI 2024.

ASRock Rack Introduces GPU Servers Powered by AMD EPYC 9005 Series Processors
AMD today revealed the 5th Generation AMD EPYC processors, offering a wide range of core counts (up to 192 cores), frequencies (up to 5 GHz), and expansive cache capacities. Select high-frequency processors, such as the AMD EPYC 9575F, are optimized for use as host CPUs in GPU-enabled systems. Additionally, the just-launched AMD Instinct MI325X accelerators feature substantial HBM3E memory and 6 TB/s of memory bandwidth, enabling quick access and efficient handling of large datasets and complex computations.

GIGABYTE Releases Servers with AMD EPYC 9005 Series Processors and AMD Instinct MI325X GPUs

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced support for AMD EPYC 9005 Series processors with the release of new GIGABYTE servers, alongside BIOS updates for some existing GIGABYTE servers using the SP5 platform. This first wave of updates covers over 60 servers and motherboards for customers to choose from, delivering exceptional performance with 5th Generation AMD EPYC processors. In addition, with the launch of the AMD Instinct MI325X accelerator, GIGABYTE created a newly designed server, which will be showcased at SC24 (Nov. 19-21) in Atlanta.

New GIGABYTE Servers and Updates
To cover all possible workload scenarios, from modular-design servers to edge servers to enterprise-grade motherboards, these new solutions will ship with support for AMD EPYC 9005 Series processors. The XV23-ZX0 is one of the many new solutions, and it is notable for its modularized server design using two AMD EPYC 9005 processors and supporting up to four GPUs plus three additional FHFL slots. It also has 2+2 redundant power supplies on the front side for ease of access.

AMD Launches Instinct MI325X Accelerator for AI Workloads: 256 GB HBM3E Memory and 2.6 PetaFLOPS FP8 Compute

During its "Advancing AI" conference today, AMD has updated its AI accelerator portfolio with the Instinct MI325X accelerator, designed to succeed its MI300X predecessor. Built on the CDNA 3 architecture, Instinct MI325X brings a suite of improvements over the old SKU. Now, the MI325X features 256 GB of HBM3E memory running at 6 TB/s bandwidth. The capacity memory alone is a 1.8x improvement over the old MI300 SKU, which features 192 GB of regular HBM3 memory. Providing more memory capacity is crucial as upcoming AI workloads are training models with parameter counts measured in trillions, as opposed to billions with current models we have today. When it comes to compute resources, the Instinct MI325X provides 1.3 PetaFLOPS at FP16 and 2.6 PetaFLOPS at FP8 training and inference. This represents a 1.3x improvement over the Instinct MI300.

A chip alone is worthless without a good platform, and AMD made the Instinct MI325X OAM modules a drop-in replacement for the current MI300X platform, as the two are pin-compatible. A system packing eight MI325X accelerators offers 2 TB of HBM3E memory running at 48 TB/s of aggregate memory bandwidth, and achieves 10.4 PetaFLOPS of FP16 and 20.8 PetaFLOPS of FP8 compute performance. AMD uses NVIDIA's H200 HGX system as its reference for performance competitiveness, claiming that the MI325X platform outperforms it by 1.3x across the board in memory bandwidth and FP16/FP8 compute performance, and by 1.8x in memory capacity.
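For reference, the eight-GPU platform figures are straight multiples of the per-accelerator specs; the sketch below (ours) reproduces them, along with the 1.8x capacity claim, taking the H200's 141 GB per GPU as the comparison point:

```python
# Aggregate specs of an 8x MI325X system, derived from the per-GPU
# figures quoted above (capacity in GB, bandwidth in TB/s, compute in PFLOPS).
GPUS = 8
hbm_gb, bw_tb_s, fp16_pflops, fp8_pflops = 256, 6.0, 1.3, 2.6

print(f"HBM3E capacity: {GPUS * hbm_gb / 1024:.0f} TB")    # 2 TB
print(f"bandwidth:      {GPUS * bw_tb_s:.0f} TB/s")        # 48 TB/s
print(f"FP16 compute:   {GPUS * fp16_pflops:.1f} PFLOPS")  # 10.4 PFLOPS
print(f"FP8 compute:    {GPUS * fp8_pflops:.1f} PFLOPS")   # 20.8 PFLOPS

# AMD's 1.8x memory-capacity claim versus NVIDIA's H200 (141 GB per GPU):
print(f"per-GPU capacity vs. H200: {hbm_gb / 141:.2f}x")   # ~1.82x
```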

Micron Updates Corporate Logo with "Ahead of The Curve" Design

Today, Micron updated its corporate logo with new symbolism. The redesign comes as Micron celebrates over four decades of technological advancement in the semiconductor industry. The new logo features a distinctive silicon color, paying homage to the wafers at the core of Micron's products. Its curved lettering represents the company's ability to stay ahead of industry trends and adapt to rapid technological change. The design also incorporates vibrant gradient colors inspired by light reflections on wafers, which are at the core of Micron's memory and storage products.

This rebranding effort coincides with Micron's expanding role in AI, where memory and storage innovations are increasingly crucial. The company has positioned itself beyond a commodity memory supplier, now offering leadership in solutions for AI data centers, high-performance computing, and AI-enabled devices. The company has come far from its original 64K DRAM in 1981 to HBM3E DRAM today. Micron offers different HBM memory products, graphics memory powering consumer GPUs, CXL memory modules, and DRAM components and modules.

Synopsys and TSMC Pave the Path for Trillion-Transistor AI and Multi-Die Chip Design

Synopsys, Inc. today announced its continued, close collaboration with TSMC to deliver advanced EDA and IP solutions on TSMC's most advanced process and 3DFabric technologies to accelerate innovation for AI and multi-die designs. The relentless computational demands of AI applications require semiconductor technologies to keep pace. From an industry-leading AI-driven EDA suite powered by Synopsys.ai for enhanced productivity and silicon results, to complete solutions that facilitate the migration to 2.5D/3D multi-die architectures, Synopsys and TSMC have worked closely for decades to pave the path for the future of billion- to trillion-transistor AI chip designs.

"TSMC is excited to collaborate with Synopsys to develop pioneering EDA and IP solutions tailored for the rigorous compute demands of AI designs on TSMC advanced process and 3DFabric technologies," said Dan Kochpatcharin, head of the Ecosystem and Alliance Management Division at TSMC. "The results of our latest collaboration across Synopsys' AI-driven EDA suite and silicon-proven IP have helped our mutual customers significantly enhance their productivity and deliver remarkable performance, power, and area results for advanced AI chip designs.

SK Hynix Begins Mass Production of 12-layer HBM3E Memory

SK hynix Inc. announced today that it has begun mass production of the world's first 12-layer HBM3E product with 36 GB capacity, the largest of any HBM product to date. The company plans to supply mass-produced products to customers within the year, proving its overwhelming technological edge once again just six months after it became the first in the industry to deliver the 8-layer HBM3E product to customers in March this year.

SK hynix is the only company in the world that has developed and supplied the entire HBM lineup from the first generation (HBM1) to the fifth generation (HBM3E), since releasing the world's first HBM in 2013. The company plans to continue its leadership in the AI memory market, addressing the growing needs of AI companies by being the first in the industry to mass-produce the 12-layer HBM3E.

Micron Announces 12-high HBM3E Memory, Bringing 36 GB Capacity and 1.2 TB/s Bandwidth

As AI workloads continue to evolve and expand, memory bandwidth and capacity are increasingly critical for system performance. The latest GPUs in the industry need the highest performance high bandwidth memory (HBM), significant memory capacity, as well as improved power efficiency. Micron is at the forefront of memory innovation to meet these needs and is now shipping production-capable HBM3E 12-high to key industry partners for qualification across the AI ecosystem.

Micron's industry-leading HBM3E 12-high 36 GB delivers significantly lower power consumption than our competitors' 8-high 24 GB offerings, despite having 50% more DRAM capacity in the package.
Micron HBM3E 12-high boasts an impressive 36 GB capacity, a 50% increase over current HBM3E 8-high offerings, allowing larger AI models like Llama 2 with 70 billion parameters to run on a single processor. This capacity increase enables faster time to insight by avoiding CPU offload and GPU-GPU communication delays. Micron HBM3E 12-high 36 GB delivers significantly lower power consumption than competitors' HBM3E 8-high 24 GB solutions, and offers more than 1.2 terabytes per second (TB/s) of memory bandwidth at a pin speed greater than 9.2 gigabits per second (Gb/s). These combined advantages offer maximum throughput with the lowest power consumption, ensuring optimal outcomes for power-hungry data centers. Additionally, Micron HBM3E 12-high incorporates fully programmable MBIST that can run system-representative traffic at full spec speed, providing improved test coverage for expedited validation, enabling faster time to market, and enhancing system reliability.
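For context, the bandwidth figure follows from the pin speed and the wide interface an HBM stack exposes; a minimal sketch of that calculation (the 1,024-bit bus width is our assumption from the standard HBM layout, not stated by Micron here):

```python
# Per-stack HBM bandwidth = pin speed x interface width.
# HBM3E stacks expose a 1,024-bit data interface (standard HBM bus width,
# assumed here); 9.2 Gb/s is the pin speed Micron quotes above.
PIN_SPEED_GBPS = 9.2     # gigabits per second, per pin
BUS_WIDTH_BITS = 1024    # data pins per stack (assumption from the HBM standard)

bandwidth_gb_s = PIN_SPEED_GBPS * BUS_WIDTH_BITS / 8  # convert bits to bytes
print(f"{bandwidth_gb_s:.1f} GB/s (~{bandwidth_gb_s / 1000:.2f} TB/s) per stack")
# 1177.6 GB/s (~1.18 TB/s); pin speeds above 9.2 Gb/s push this past 1.2 TB/s
```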

SK hynix Presents Extensive AI Memory Lineup at Expanded FMS 2024

SK hynix has returned to Santa Clara, California to present its full array of groundbreaking AI memory technologies at FMS: the Future of Memory and Storage 2024, held from August 6-8. Previously known as Flash Memory Summit, the conference changed its name to reflect its broader focus on all types of memory and storage products amid growing interest in AI. Bringing together industry leaders, customers, and IT professionals, FMS 2024 covers the latest trends and innovations shaping the memory industry.

Participating in the event under the slogan "Memory, The Power of AI," SK hynix is showcasing its outstanding memory capabilities through a keynote presentation, multiple technology sessions, and product exhibits.

Samsung's 8-layer HBM3E Chips Pass NVIDIA's Tests

Samsung Electronics has achieved a significant milestone in its pursuit of supplying advanced memory chips for AI systems. Its latest fifth-generation high-bandwidth memory (HBM) chips, known as HBM3E, have finally passed all of NVIDIA's tests. This approval will help Samsung catch up with competitors SK Hynix and Micron in the race to supply HBM memory chips to NVIDIA. While a supply deal hasn't been finalized yet, deliveries are expected to start in late 2024.

However, it's worth noting that Samsung passed NVIDIA's tests only for the eight-layer HBM3E chips, while the more advanced twelve-layer version is still struggling to pass them. Both Samsung and NVIDIA declined to comment on these developments. Industry expert Dylan Patel notes that while Samsung is making progress, it remains behind SK Hynix, which is already preparing to ship its own twelve-layer HBM3E chips.

Samsung Electronics Announces Results for Second Quarter of 2024

Samsung Electronics today reported financial results for the second quarter ended June 30, 2024. The Company posted KRW 74.07 trillion in consolidated revenue and operating profit of KRW 10.44 trillion, as favorable memory market conditions drove a higher average selling price (ASP), while robust sales of OLED panels also contributed to the results.

Memory Market Continues To Recover; Solid Second Half Outlook Centered on Server Demand
The DS Division posted KRW 28.56 trillion in consolidated revenue and KRW 6.45 trillion in operating profit for the second quarter. Driven by strong demand for HBM as well as conventional DRAM and server SSDs, the memory market as a whole continued its recovery. This increased demand is a result of the continued AI investments by cloud service providers and growing demand for AI from businesses for their on-premise servers.

Samsung's HBM3 Chips Approved by NVIDIA for Limited Use

Samsung Electronics' latest high bandwidth memory (HBM) chips have reportedly passed NVIDIA's suitability tests, according to Reuters. This development comes two months after initial reports suggested the chips had failed due to heat and power consumption issues. Despite this approval, NVIDIA plans to use Samsung's memory chips only in its H20 GPUs, a less advanced version of the H100 processors designed for the Chinese market to comply with US export restrictions.

The future of Samsung's HBM3 chips in NVIDIA's other GPU models remains uncertain, with potential additional testing required. Reuters also reported that Samsung's upcoming fifth-generation HBM3E chips are still undergoing NVIDIA's evaluation process. When approached for comment, neither company responded to Reuters. It's worth noting that Samsung previously denied the initial claims of chip failure.

AI Startup Etched Unveils Transformer ASIC Claiming 20x Speed-up Over NVIDIA H100

A new startup emerged from stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process transformers, the deep-learning architecture developed at Google that is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic's Claude, Google's Gemini, and Meta's Llama family. Etched set out to create an ASIC that processes only transformer models, resulting in a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude: where a server with eight NVIDIA H100 GPUs pushes Llama-3 70B at 25,000 tokens per second, and the latest eight-GPU B200 "Blackwell" system pushes 43,000 tokens per second, an eight-chip Sohu server manages to output 500,000 tokens per second.

Why is this important? Not only does the ASIC outperform Hopper by 20x and Blackwell by roughly 10x, but it also serves so many tokens per second that it enables an entirely new fleet of AI applications requiring real-time output. The Sohu architecture is so efficient that 90% of its FLOPS can be utilized, while traditional GPUs manage a 30-40% FLOPS utilization rate, which translates into inefficiency and wasted power. Etched hopes to solve this by building an accelerator dedicated to powering transformers (the "T" in GPT) at massive scale. Given that frontier model development costs more than one billion US dollars, and hardware costs are measured in tens of billions of US dollars, an accelerator dedicated to powering a specific application can help advance AI faster. AI researchers often say that "scale is all you need" (echoing the legendary "Attention Is All You Need" paper), and Etched wants to build on that.
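The headline multipliers can be checked directly against the throughput figures quoted above; a quick sketch (ours, using only those numbers):

```python
# Llama-3 70B serving throughput, in tokens per second, for the
# eight-chip configurations quoted above.
tokens_per_s = {"8x H100": 25_000, "8x B200": 43_000, "8x Sohu": 500_000}

sohu = tokens_per_s["8x Sohu"]
for system in ("8x H100", "8x B200"):
    print(f"Sohu advantage over {system}: {sohu / tokens_per_s[system]:.1f}x")
# 20.0x over H100; 11.6x over B200 (in line with the roughly 10x claim)

# Etched's claimed FLOPS utilization gap (90% vs. a 35% GPU midpoint):
print(f"utilization advantage: {0.90 / 0.35:.1f}x")  # ~2.6x
```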

SK hynix Showcases Its New AI Memory Solutions at HPE Discover 2024

SK hynix has returned to Las Vegas to showcase its leading AI memory solutions at HPE Discover 2024, Hewlett Packard Enterprise's (HPE) annual technology conference. Held from June 17-20, HPE Discover 2024 features a packed schedule with more than 150 live demonstrations, as well as technical sessions, exhibitions, and more. This year, attendees can also benefit from three new curated programs on edge computing and networking, hybrid cloud technology, and AI. Under the slogan "Memory, The Power of AI," SK hynix is displaying its latest memory solutions at the event including those supplied to HPE. The company is also taking advantage of the numerous networking opportunities to strengthen its relationship with the host company and its other partners.

The World's Leading Memory Solutions Driving AI
SK hynix's booth at HPE Discover 2024 consists of three product sections and a demonstration zone which showcase the unprecedented capabilities of its AI memory solutions. The first section features the company's groundbreaking memory solutions for AI, including HBM solutions. In particular, the industry-leading HBM3E has emerged as a core product to meet the growing demands of AI systems due to its exceptional processing speed, capacity, and heat dissipation. A key solution from the company's CXL lineup, CXL Memory Module-DDR5 (CMM-DDR5), is also on display in this section. In the AI era where high performance and capacity are vital, CMM-DDR5 has gained attention for its ability to expand system bandwidth by up to 50% and capacity by up to 100% compared to systems only equipped with DDR5 DRAM.
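To put the CMM-DDR5 claim in concrete terms, here is an illustrative sketch applying the up-to-50% bandwidth and up-to-100% capacity multipliers to a hypothetical DDR5-only baseline (the baseline configuration is our assumption, not an SK hynix figure):

```python
# Illustrative effect of adding CMM-DDR5 to a DDR5-only server, using the
# "up to +50% bandwidth / up to +100% capacity" figures quoted above.
# The baseline is a hypothetical 8-channel DDR5-4800 configuration.
channels, mt_s, bytes_per_channel = 8, 4800, 8
baseline_bw_gb_s = channels * mt_s * bytes_per_channel / 1000  # 307.2 GB/s
baseline_cap_gb = 512                                          # assumed DRAM fitted

print(f"bandwidth: {baseline_bw_gb_s:.1f} -> {baseline_bw_gb_s * 1.5:.1f} GB/s")
print(f"capacity:  {baseline_cap_gb} -> {baseline_cap_gb * 2} GB")
```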

SK hynix Showcases Its Next-Gen Solutions at Computex 2024

SK hynix presented its leading AI memory solutions at COMPUTEX Taipei 2024 from June 4-7. As one of Asia's premier IT shows, COMPUTEX Taipei 2024 welcomed around 1,500 global participants including tech companies, venture capitalists, and accelerators under the theme "Connecting AI". Making its debut at the event, SK hynix underlined its position as a first mover and leading AI memory provider through its lineup of next-generation products.

"Connecting AI" With the Industry's Finest AI Memory Solutions
Themed "Memory, The Power of AI," SK hynix's booth featured its advanced AI server solutions, groundbreaking technologies for on-device AI PCs, and outstanding consumer SSD products. HBM3E, the fifth generation of HBM1, was among the AI server solutions on display. Offering industry-leading data processing speeds of 1.18 terabytes (TB) per second, vast capacity, and advanced heat dissipation capability, HBM3E is optimized to meet the requirements of AI servers and other applications. Another technology which has become crucial for AI servers is CXL as it can increase system bandwidth and processing capacity. SK hynix highlighted the strength of its CXL portfolio by presenting its CXL Memory Module-DDR5 (CMM-DDR5), which significantly expands system bandwidth and capacity compared to systems only equipped with DDR5. Other AI server solutions on display included the server DRAM products DDR5 RDIMM and MCR DIMM. In particular, SK hynix showcased its tall 128-gigabyte (GB) MCR DIMM for the first time at an exhibition.

Details Revealed About SK Hynix HBM4E with Computing and Caching Features Integrated Directly

SK Hynix, the leader in HBM3E memory, has now shared more details about HBM4E. According to fresh reports by Wccftech and ET News, SK Hynix plans to make an HBM memory type that integrates multiple functions, such as computing, caching, and network memory, within the same package, which would set it apart from its competitors. The idea is still in the early stages, but SK Hynix has started gathering the design information needed to support its goals. The reports say that SK Hynix wants to lay the groundwork for a versatile HBM with its upcoming HBM4 design. The company reportedly plans to include an on-board memory controller, which would enable new computing abilities in its 7th-generation HBM4E memory.

With SK Hynix's approach, everything would be unified as a single unit. This would not only make data transfer faster, because there is less distance between parts, but also more energy-efficient. Back in April, SK Hynix announced that it has been working with TSMC to produce the next generation of HBM and to improve how logic chips and HBM work together through advanced packaging. In late May, SK Hynix disclosed yield details for HBM3E for the first time, reporting that it successfully reduced the time needed for mass production of HBM3E chips by 50% while getting closer to its target yield of 80%. The company plans to keep developing HBM4, which is expected to enter mass production in 2026.

Samsung Could Start 1nm Mass Production Sooner Than Expected

Samsung's Foundry business is set to announce its technology roadmap and plans to strengthen the foundry ecosystem at the Foundry and SAFE Forum in Silicon Valley from June 12 to 13. Notably, Samsung is expected to pull its 1 nm mass production plan, originally scheduled for 2027, forward to 2026. This move may come as a surprise, since recent rumors (denied by Samsung) emerged about HBM3 and HBM3E chips running too hot and failing NVIDIA's validation.

Previously, Samsung became the first in the world to mass-produce chips on a 3 nm process, in June 2022. The company plans to start mass production of its second-generation 3 nm process in 2024 and its 2 nm process in 2025. Speculation suggests Samsung may consolidate these nodes and potentially begin mass-producing 2 nm chips as early as the second half of 2024. In comparison, rival TSMC aims to reach its A16 node (1.6 nm) in 2027 and to start mass production of its 1.4 nm process around 2027-2028.