News Posts matching #HBM3E


SK Hynix Announces 1Q24 Financial Results

SK hynix Inc. announced today that it recorded 12.43 trillion won in revenues, 2.886 trillion won in operating profit (with an operating margin of 23%), and 1.917 trillion won in net profit (with a net margin of 15%) in the first quarter. With revenues marking an all-time high for a first quarter and the operating profit the second-highest ever, trailing only the record set in the first quarter of 2018, SK hynix believes that it has entered a phase of clear rebound following a prolonged downturn.

The company said that an increase in the sales of AI server products, backed by its leadership in AI memory technology including HBM, and continued efforts to prioritize profitability led to a 734% on-quarter jump in operating profit. With the sales ratio of eSSD, a premium product, on the rise and average selling prices climbing, the NAND business also achieved a meaningful turnaround in the same period.
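That 734% figure can be cross-checked against the 4Q23 operating profit of 346 billion won reported further down this page. A minimal sketch of the arithmetic, using only numbers quoted in these two announcements:

```python
# Cross-check of SK hynix's reported 734% on-quarter operating-profit jump,
# using the 1Q24 and 4Q23 figures quoted on this page (in trillion won).
q4_2023_op_profit = 0.346
q1_2024_op_profit = 2.886

jump = (q1_2024_op_profit - q4_2023_op_profit) / q4_2023_op_profit
print(f"On-quarter operating-profit growth: {jump:.0%}")  # ~734%

# The quoted margins also follow from the 12.43 trillion won in revenues:
revenue = 12.43
print(f"Operating margin: {q1_2024_op_profit / revenue:.0%}")  # ~23%
print(f"Net margin: {1.917 / revenue:.0%}")                    # ~15%
```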

Samsung Signs $3 Billion HBM3E 12H Supply Deal with AMD

Korean media reports that Samsung Electronics has signed a 4.134 trillion Won ($3 billion) agreement with AMD to supply 12-high HBM3E stacks. AMD uses HBM stacks in its AI and HPC accelerators based on its CDNA architecture. This deal is significant, as it gives analysts some idea of the volume of AI GPUs AMD is preparing to push into the market, provided they can estimate what percentage of an AI GPU's bill of materials the memory stacks account for. AMD has probably negotiated a good price for Samsung's HBM3E 12H stacks, given that rival NVIDIA almost exclusively uses HBM3E made by SK Hynix.

The AI GPU market is expected to heat up with the ramp of NVIDIA's "Hopper" H200 series, the advent of "Blackwell," AMD's MI350X CDNA3, and Intel's Gaudi 3 generative AI accelerator. Samsung debuted its HBM3E 12H memory in February 2024. Each stack features 12 layers, a 50% increase over the first generation of HBM3E, and offers a density of 36 GB per stack. An AMD CDNA3 chip with 8 such stacks would have 288 GB of memory on package. AMD is expected to launch the MI350X in the second half of 2024. The star attraction with this chip are its refreshed GPU tiles built on the TSMC 4 nm EUV foundry node. This seems like the ideal product for AMD to debut HBM3E 12H on.
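To make the estimation logic concrete, here is a minimal sketch. The per-stack price below is a pure placeholder assumption (no pricing was disclosed); the stack count and density follow the article:

```python
# Hypothetical back-of-envelope estimate of accelerator volumes implied by a
# $3 billion HBM3E supply deal. The per-stack price is an assumption, NOT a
# figure from the report.
DEAL_VALUE_USD = 3_000_000_000
ASSUMED_PRICE_PER_STACK_USD = 500   # placeholder; actual pricing undisclosed
STACKS_PER_PACKAGE = 8              # CDNA3-style package, per the article
GB_PER_STACK = 36                   # HBM3E 12H stack density

stacks = DEAL_VALUE_USD / ASSUMED_PRICE_PER_STACK_USD
print(f"Implied stacks supplied: {stacks:,.0f}")
print(f"Implied accelerator packages: {stacks / STACKS_PER_PACKAGE:,.0f}")
print(f"Memory per package: {STACKS_PER_PACKAGE * GB_PER_STACK} GB")  # 288 GB
```

Swapping in a different assumed stack price scales the implied volume linearly, which is exactly why analysts watch these supply-deal values.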

SK Hynix Plans a $4 Billion Chip Packaging Facility in Indiana

SK Hynix is planning a large $4 billion chip-packaging and testing facility in Indiana, USA. The company is still at the planning stage of its decision to invest in the US. "[The company] is reviewing its advanced chip packaging investment in the US, but hasn't made a final decision yet," a company spokesperson told the Wall Street Journal. The primary product focus for this plant will be stacked HBM memory meant to be consumed by the AI GPU and self-driving automobile industries. The plant could also focus on other exotic memory types, such as high-density server memory, and perhaps even compute-in-memory. The plant is expected to start operations in 2028, and will create up to 1,000 skilled jobs. SK Hynix is counting on state and federal tax incentives, under government initiatives such as the CHIPS Act, to propel this investment. SK Hynix is a significant supplier of HBM to NVIDIA for its AI GPUs. Its HBM3E features in the latest NVIDIA "Blackwell" GPUs.

Samsung Demonstrates New CXL Capabilities and Introduces New Memory Module for Scalable, Composable Disaggregated Infrastructure

Samsung Electronics, a world leader in advanced semiconductor technology, unveiled the expansion of its Compute Express Link (CXL) memory module portfolio and showcased its latest HBM3E technology, reinforcing its leadership in high-performance and high-capacity solutions for AI applications.

In a keynote address to a packed crowd at Santa Clara's Computer History Museum, Jin-Hyeok Choi, Corporate Executive Vice President, Device Solutions Research America - Memory at Samsung Electronics, along with SangJoon Hwang, Corporate Executive Vice President, Head of DRAM Product and Technology at Samsung Electronics, took the stage to introduce new memory solutions and discuss how Samsung is leading HBM and CXL innovations in the AI era. Joining Samsung on stage were Paul Turner, Vice President, Product Team, VCF Division at VMware by Broadcom, and Gunnar Hellekson, Vice President and General Manager at Red Hat, to discuss how their software solutions combined with Samsung's hardware technology are pushing the boundaries of memory innovation.

SK hynix Presents the Future of AI Memory Solutions at NVIDIA GTC 2024

SK hynix is displaying its latest AI memory technologies at NVIDIA's GPU Technology Conference (GTC) 2024 held in San Jose from March 18-21. The annual AI developer conference is proceeding as an in-person event for the first time since the start of the pandemic, welcoming industry officials, tech decision makers, and business leaders. At the event, SK hynix is showcasing new memory solutions for AI and data centers alongside its established products.

Showcasing the Industry's Highest Standard of AI Memory
The AI revolution has continued to pick up pace as AI technologies spread their reach into various industries. In response, SK hynix is developing AI memory solutions capable of handling the vast amounts of data and processing power required by AI. At GTC 2024, the company is displaying some of these products, including its 12-layer HBM3E and Compute Express Link (CXL), under the slogan "Memory, The Power of AI". HBM3E, the fifth generation of HBM, is the highest-specification DRAM for AI applications on the market. It offers the industry's highest capacity of 36 gigabytes (GB), a processing speed of 1.18 terabytes (TB) per second, and exceptional heat dissipation, making it particularly suitable for AI systems. On March 19, SK hynix announced it had become the first in the industry to mass-produce HBM3E.
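Those per-stack numbers follow from HBM's standard 1024-bit interface. A minimal sketch, assuming the commonly cited ~9.2 Gb/s HBM3E pin rate (the article gives only the aggregate figures):

```python
# Deriving the quoted HBM3E per-stack figures from the standard interface.
PINS = 1024            # DQ width of one HBM stack
PIN_RATE_GBPS = 9.2    # assumed per-pin signaling rate for HBM3E

bandwidth_gb_s = PINS * PIN_RATE_GBPS / 8
print(f"Per-stack bandwidth: {bandwidth_gb_s / 1000:.2f} TB/s")  # ~1.18 TB/s

layers, capacity_gb = 12, 36
print(f"Per-die capacity: {capacity_gb / layers:.0f} GB (i.e., 24 Gbit dies)")
```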

SK hynix Unveils Highest-Performing SSD for AI PCs at NVIDIA GTC 2024

At GPU Technology Conference (GTC) 2024, SK hynix unveiled a new consumer product based on its latest solid-state drive (SSD), PCB01, which boasts industry-leading performance levels. Hosted by NVIDIA in San Jose, California from March 18-21, GTC is one of the world's leading conferences for AI developers. Aimed at on-device AI PCs, PCB01 is a PCIe fifth-generation SSD which recently had its performance and reliability verified by a major global customer. After completing product development in the first half of 2024, SK hynix plans to launch two versions of PCB01 by the end of the year, targeting both major technology companies and general consumers.

Optimized for AI PCs, Capable of Loading LLMs Within One Second
Offering the industry's highest sequential read speed of 14 gigabytes per second (GB/s) and a sequential write speed of 12 GB/s, PCB01 doubles the speed specifications of the previous generation. This enables the loading of LLMs required for AI learning and inference in less than one second. To make on-device AI operational, PC manufacturers create a structure that stores an LLM in the PC's internal storage and quickly transfers the data to DRAM for AI tasks. In this process, the PCB01 inside the PC efficiently supports the loading of LLMs. SK hynix expects these characteristics of its latest SSD to greatly increase the speed and quality of on-device AI.
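The one-second claim is easy to bound: load time is just model size divided by sequential read speed. A rough sketch with illustrative (assumed) model sizes:

```python
# Back-of-envelope LLM load times at PCB01's quoted sequential read speed.
READ_SPEED_GB_S = 14  # per the announcement

def load_time_s(params_billion: float, bytes_per_param: int = 2) -> float:
    """Seconds to stream FP16 model weights from the SSD into DRAM."""
    return params_billion * bytes_per_param / READ_SPEED_GB_S

for params in (3, 7, 13):  # hypothetical on-device model sizes
    print(f"{params}B params: {load_time_s(params):.2f} s")
# A ~7B-parameter FP16 model (~14 GB) streams in roughly one second;
# smaller or quantized models load well under that.
```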

Unwrapping the NVIDIA B200 and GB200 AI GPU Announcements

NVIDIA on Monday, at the 2024 GTC conference, unveiled the "Blackwell" B200 and GB200 AI GPUs. These are designed to offer an incredible 5X gain in AI inferencing performance over the current-gen "Hopper" H100, and come with four times the on-package memory. The B200 "Blackwell" is the largest chip physically possible using existing foundry tech, according to its makers. The chip packs an astonishing 208 billion transistors, and is made up of two chiplets, which by themselves are the largest possible chips.

Each chiplet is built on the TSMC N4P foundry node, which is the most advanced 4 nm-class node from the Taiwanese foundry. Each chiplet has 104 billion transistors. The two chiplets have a high degree of connectivity with each other, thanks to a 10 TB/s custom interconnect, which offers enough bandwidth at low enough latency for the two to maintain cache coherency (i.e., each can address the other's memory as if it were its own). Each of the two "Blackwell" chiplets has a 4096-bit memory bus, and is wired to 96 GB of HBM3E spread across four 24 GB stacks, for a total of 192 GB on the B200 package. The GPU has a staggering 8 TB/s of memory bandwidth on tap. The B200 package features a 1.8 TB/s NVLink interface for host connectivity, and connectivity to another B200 chip.
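The package-level totals aggregate straightforwardly from the two chiplets, and the quoted bandwidth implies the per-pin rate. A quick sketch of that arithmetic:

```python
# Aggregating the B200 package figures quoted above from its two chiplets.
CHIPLETS = 2
BUS_BITS_PER_CHIPLET = 4096   # four 1024-bit HBM3E stack interfaces
STACKS_PER_CHIPLET = 4
GB_PER_STACK = 24

total_bus_bits = CHIPLETS * BUS_BITS_PER_CHIPLET
total_gb = CHIPLETS * STACKS_PER_CHIPLET * GB_PER_STACK
print(f"Package: {total_bus_bits}-bit bus, {total_gb} GB HBM3E")  # 8192-bit, 192 GB

# A flat 8 TB/s over 8192 data pins implies ~7.8 Gb/s per pin; at the commonly
# cited 8 Gb/s HBM3E rate the package delivers 8.192 TB/s, which rounds to the
# quoted figure.
pin_rate_gbps = 8000 * 8 / total_bus_bits
print(f"Implied pin rate: {pin_rate_gbps:.1f} Gb/s")
```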

SK Hynix Begins Volume Production of Industry's First HBM3E

SK hynix Inc. announced today that it has begun volume production of HBM3E, the newest AI memory product with ultra-high performance, for supply to a customer from late March. The company made public its success with HBM3E development just seven months ago. By becoming the first provider of HBM3E, a product with the best-performing DRAM chips, SK hynix extends its earlier success with HBM3. The company expects successful volume production of HBM3E, along with its experience as the industry's first provider of HBM3, to help cement its leadership in the AI memory space.

In order to build a successful AI system that processes huge amounts of data quickly, a semiconductor package must be composed so that numerous AI processors and memory devices are interconnected. Global big tech companies have been demanding ever-stronger performance from AI semiconductors, and SK hynix expects its HBM3E to be the optimal choice to meet such growing expectations.

NVIDIA B100 "Blackwell" AI GPU Technical Details Leak Out

Jensen Huang's opening GTC 2024 keynote is scheduled to happen tomorrow afternoon (13:00 Pacific time)—many industry experts believe that the NVIDIA boss will take the stage and formally introduce his company's B100 "Blackwell" GPU architecture. An enlightened few have been treated to preview (AI and HPC) units—including Dell's CEO, Jeff Clarke—but pre-introduction leaks have not flowed out. Team Green is likely enforcing strict conditions upon a fortunate selection of trusted evaluators, within a pool of ecosystem partners and customers.

Today, a brave soul has broken that silence—tech tipster AGF/XpeaGPU, despite fearing repercussions from the leather-jacketed one, revealed a handful of technical details a day prior to Team Green's highly anticipated unveiling: "I don't want to spoil NVIDIA B100 launch tomorrow, but this thing is a monster. 2 dies on (TSMC) CoWoS-L, 8x8-Hi HBM3E stacks for 192 GB of memory." They also crystal-balled an inevitable follow-up card: "one year later, B200 goes with 12-Hi stacks and will offer a beefy 288 GB. And the performance! It's... oh no Jensen is there... me run away!" Reuters has also joined in on the fun, with some predictions and insider information: "NVIDIA is unlikely to give specific pricing, but the B100 is likely to cost more than its predecessor, which sells for upwards of $20,000." Enterprise products are expected to arrive first—possibly later this year—followed by gaming variants, maybe months later.

Samsung Expected to Unveil Enterprise "PBSSD" Subscription Service at GTC

Samsung Electronics is all set to discuss the future of AI, alongside Jensen Huang, at NVIDIA's upcoming GTC 2024 conference. South Korean insiders have leaked the company's intentions, only days before the event's March 18 kickoff time. Their recently unveiled 36 GB HBM3E 12H DRAM product is expected to be the main focus of official presentations—additionally, a new storage subscription service is marked down for a possible live introduction. An overall "Redefining AI Infrastructure" presentation could include—according to BusinessKorea—a planned launch of: "petabyte (PB)-level SSD solution, dubbed 'PBSSD,' along with a subscription service in the US market within the second quarter (of 2024) to address the era of ultra-high-capacity data."

A Samsung statement—likely sourced from leaked material—summarized this business model: "the subscription service will help reduce initial investment costs in storage infrastructure for our customers and cut down on maintenance expenses." Under agreed-upon conditions, customers are not required to purchase ultra-high-capacity SSD solutions outright: "enterprises using the service can flexibly utilize SSD storage without the need to build separate infrastructure, while simultaneously receiving various services from Samsung Electronics related to storage management, security, and upgrades." A special session—"The Value of Storage as a Service for AI/ML and Data Analysis"—is alleged to be on the company's GTC schedule.

Samsung Reportedly Acquiring New Equipment Due to Disappointing HBM Yields

Industry insiders reckon that Samsung Electronics is transitioning to mass reflow molded underfill (MR-MUF) production techniques—rival memory manufacturer SK Hynix champions this chip-making technology. A Reuters exclusive has cited claims made by five industry moles—they believe that Samsung is reacting to underwhelming HBM production yields. The publication proposes that: "one of the reasons Samsung has fallen behind (competing producers) is its decision to stick with chip making technology called non-conductive film (NCF) that causes some production issues, while Hynix switched to the mass reflow molded underfill (MR-MUF) method to address NCF's weakness." The report suggests that Samsung is in the process of ordering new MUF-related equipment.

One anonymous source stated: "Samsung had to do something to ramp up its HBM (production) yields... adopting MUF technique is a little bit of swallow-your-pride type thing for (them), because it ended up following the technique first used by SK Hynix." Reuters managed to extract a response from the giant South Korean multinational—a company spokesperson stated: "we are carrying out our HBM3E product business as planned." They indicated that NCF technology remains in place as an "optimal solution." Post-publication, another official response was issued: "rumors that Samsung will apply MR-MUF to its HBM production are not true." Insiders propose a long testing phase—Samsung is rumored to be sourcing MUF materials, but mass production is not expected to start this year. Three insiders allege that Samsung is planning to "use both NCF and MUF techniques" for a new-generation HBM chip.

NVIDIA's Selection of Micron HBM3E Supposedly Surprises Competing Memory Makers

SK Hynix believes that it leads the industry with the development and production of High Bandwidth Memory (HBM) solutions, but rival memory manufacturers are working hard on equivalent fifth-generation packages. NVIDIA was expected to select SK Hynix as the main supplier of HBM3E parts for utilization on H200 "Hopper" AI GPUs, but a surprise announcement was issued by Micron's press team last month. The American firm revealed that HBM3E volume production had commenced: "(our) 24 GB 8H HBM3E will be part of NVIDIA H200 Tensor Core GPUs, which will begin shipping in the second calendar quarter of 2024. This milestone positions Micron at the forefront of the industry, empowering artificial intelligence (AI) solutions with HBM3E's industry-leading performance and energy efficiency."

According to a Korea JoongAng Daily report, this boast "shocked" the likes of SK Hynix and Samsung Electronics. They believe that Micron's "announcement was a revolt from an underdog, as the US company barely held 10 percent of the global market last year." The article also points out some behind-the-scenes legal wrangling: "the cutthroat competition became more evident when the Seoul court sided with SK Hynix on Thursday (March 7) by granting a non-compete injunction to prevent its former researcher, who specialized in HBM, from working at Micron. He would be fined 10 million won for each day in violation." SK Hynix is likely pinning its next-gen AI GPU hopes on a 12-layer DRAM-stacked HBM3E product—industry insiders posit that evaluation samples were submitted to NVIDIA last month. The outlook for these units is said to be very positive—mass production could start as early as this month.

SK Hynix To Invest $1 Billion into Advanced Chip Packaging Facilities

Lee Kang-Wook, Vice President of Research and Development at SK Hynix, has discussed the increased importance of advanced chip packaging with Bloomberg News. In an interview with the media company's business section, Lee referred to a tradition of prioritizing the design and fabrication of chips: "the first 50 years of the semiconductor industry has been about the front-end." He believes that the latter half of production processes will take precedence in the future: "...but the next 50 years is going to be all about the back-end." He outlined a "more than $1 billion" investment into South Korean facilities—his department is hoping to "improve the final steps" of chip manufacturing.

SK Hynix's Head of Packaging Development pioneered a novel method of packaging the third generation of high bandwidth technology (HBM2E)—that innovation secured NVIDIA as a high-profile, long-term customer. Demand for Team Green's AI GPUs has boosted the significance of HBM technologies—Micron and Samsung are attempting to play catch-up with new designs. South Korea's leading memory supplier is hoping to stay ahead in the next-gen HBM contest—12-layer fifth-generation samples have supposedly been submitted to NVIDIA for approval. SK Hynix's Vice President recently revealed that HBM production volumes for 2024 have sold out—company leadership is currently considering the next steps for market dominance in 2025. The majority of the firm's newly announced $1 billion budget will be spent on the advancement of MR-MUF and TSV technologies, according to their R&D chief.

NVIDIA Reportedly Sampling SK Hynix 12-layer HBM3E

South Korean tech insiders believe that SK Hynix has sent "12-layer DRAM stacked HBM3E (5th generation HBM)" prototype samples to NVIDIA—according to a ZDNET.co.kr article, initial examples were shipped out last month. Reports from mid-2023 suggested that Team Green had sampled 8-layer HBM3E (4th gen) units around summer time—with SK Hynix receiving approval notices soon after. Another South Korean media outlet, DealSite, reckons that NVIDIA's memory qualification process has exposed HBM yield problems across a number of manufacturers. SK Hynix, Samsung and Micron are competing fiercely on the HBM3E front—with hopes of getting their respective products attached to NVIDIA's H200 AI GPU. DigiTimes Asia proposed that SK Hynix is ready to "commence mass production of fifth-generation HBM3E" at some point this month.

SK Hynix is believed to be leading the pack—insiders believe that yield rates are good enough to pass early NVIDIA certification, and advanced 12-layer samples are expected to be approved in the near future. ZDNET reckons that SK Hynix's forward momentum has placed it in an advantageous position: "(They) supplied 8-layer HBM3E samples in the second half of last year and passed recent testing. Although the official schedule has not been revealed, mass production is expected to begin as early as this month. Furthermore, SK Hynix supplied 12-layer HBM3E samples to NVIDIA last month. This sample is an extremely early version and is mainly used to establish standards and characteristics of new products. SK Hynix calls it UTV (Universal Test Vehicle)... Since Hynix has already completed the performance verification of the 8-layer HBM3E, it is expected that the 12-layer HBM3E test will not take much time." SK Hynix's Vice President recently revealed that his company's 2024 HBM production volumes were already sold out, and leadership is already preparing innovations for 2025 and beyond.

Samsung Develops Industry-First 36GB HBM3E 12H DRAM

Samsung Electronics, a world leader in advanced memory technology, today announced that it has developed HBM3E 12H, the industry's first 12-stack HBM3E DRAM and the highest-capacity HBM product to date. Samsung's HBM3E 12H provides an all-time high bandwidth of up to 1,280 gigabytes per second (GB/s) and an industry-leading capacity of 36 gigabytes (GB). In comparison to the 8-stack HBM3 8H, both aspects have improved by more than 50%.

"The industry's AI service providers are increasingly requiring HBM with higher capacity, and our new HBM3E 12H product has been designed to answer that need," said Yongcheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "This new memory solution forms part of our drive toward developing core technologies for high-stack HBM and providing technological leadership for the high-capacity HBM market in the AI era."

Micron Commences Volume Production of Industry-Leading HBM3E Solution

Micron Technology, Inc. (Nasdaq: MU), a global leader in memory and storage solutions, today announced it has begun volume production of its HBM3E (High Bandwidth Memory 3E) solution. Micron's 24 GB 8H HBM3E will be part of NVIDIA H200 Tensor Core GPUs, which will begin shipping in the second calendar quarter of 2024. This milestone positions Micron at the forefront of the industry, empowering artificial intelligence (AI) solutions with HBM3E's industry-leading performance and energy efficiency. As the demand for AI continues to surge, the need for memory solutions to keep pace with expanded workloads is critical.

Micron's HBM3E solution addresses this challenge head-on with:
  • Superior Performance: With pin speed greater than 9.2 gigabits per second (Gb/s), Micron's HBM3E delivers more than 1.2 terabytes per second (TB/s) of memory bandwidth, enabling lightning-fast data access for AI accelerators, supercomputers, and data centers (this pairing of pin speed and bandwidth is sanity-checked in the sketch after this list).
  • Exceptional Efficiency: Micron's HBM3E leads the industry with ~30% lower power consumption compared to competitive offerings. To support increasing demand and usage of AI, HBM3E offers maximum throughput with the lowest levels of power consumption to improve important data center operational expense metrics.
  • Seamless Scalability: With 24 GB of capacity today, Micron's HBM3E allows data centers to seamlessly scale their AI applications. Whether for training massive neural networks or accelerating inferencing tasks, Micron's solution provides the necessary memory bandwidth.
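As flagged in the first bullet, the pin-speed and bandwidth claims are mutually consistent given HBM's standard 1024-bit per-stack interface. A minimal check, solving for the pin rate that 1.2 TB/s requires:

```python
# Pin rate a 1024-bit HBM3E interface needs to reach 1.2 TB/s of bandwidth.
PINS = 1024            # standard HBM per-stack DQ width (assumption)
TARGET_TB_S = 1.2

required_pin_gbps = TARGET_TB_S * 1000 * 8 / PINS
print(f"Pin rate needed for {TARGET_TB_S} TB/s: {required_pin_gbps:.2f} Gb/s")
# ~9.38 Gb/s -- hence "pin speed greater than 9.2 Gb/s" yields "more than 1.2 TB/s".
```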

SK Hynix Targets HBM3E Launch This Year, HBM4 by 2026

SK Hynix has unveiled ambitious High Bandwidth Memory (HBM) roadmaps at SEMICON Korea 2024. Vice President Kim Chun-hwan announced plans to mass-produce the cutting-edge HBM3E within the first half of 2024, touting 8-layer stack samples already supplied to clients. This iteration makes major strides towards fulfilling surging data bandwidth demands, offering 1.2 TB/s per stack and 7.2 TB/s in a 6-stack configuration. VP Kim Chun-hwan cites the rapid emergence of generative AI, forecast to grow at a 35% CAGR, as a key driver. He warns that "fierce survival competition" lies ahead across the semiconductor industry amidst rising customer expectations. With limits approaching on conventional process node shrinks, attention is shifting to next-generation memory architectures and materials to unleash performance.

SK Hynix has already initiated HBM4 development, targeting sampling in 2025 and mass production the following year. According to Micron, HBM4 will leverage a wider 2048-bit interface compared to previous HBM generations to increase per-stack theoretical peak memory bandwidth to over 1.5 TB/s. To achieve these high bandwidths while maintaining reasonable power consumption, HBM4 is targeting a data transfer rate of around 6 GT/s. The wider interface and 6 GT/s speeds allow HBM4 to push bandwidth boundaries significantly compared to prior HBM versions, fueling high-performance computing and AI workloads, while power efficiency is preserved by avoiding impractically high transfer rates. Additionally, Samsung is aligned on a similar 2025/2026 timeline. Beyond pushing bandwidth boundaries, custom HBM solutions will become increasingly crucial. Samsung executive Jaejune Kim reveals that over half of its HBM volume already comprises specialized products. Further tailoring HBM4 to individual client needs through logic integration presents an opportunity to cement leadership. As AI workloads evolve at breakneck speeds, memory innovation must keep pace. With HBM3E prepping for launch and HBM4 on the roadmap, SK Hynix and Samsung are gearing up for the challenges ahead.
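Put side by side, the generational bandwidth figures above all fall out of interface width times per-pin rate. A small sketch, assuming commonly cited HBM3/HBM3E pin rates alongside the HBM4 targets described here:

```python
# Peak per-stack bandwidth across HBM generations: width x pin rate / 8.
GENS = [
    # (name, interface bits, per-pin rate in Gb/s)
    ("HBM3",  1024, 6.4),   # commonly cited spec (assumption)
    ("HBM3E", 1024, 9.6),   # commonly cited spec (assumption)
    ("HBM4",  2048, 6.0),   # projected targets, per the roadmap above
]

for name, bits, rate in GENS:
    print(f"{name:6s} {bits * rate / 8 / 1000:.2f} TB/s per stack")
# HBM4's 2048-bit x 6 GT/s works out to ~1.54 TB/s, matching "over 1.5 TB/s".

# The 6-stack HBM3E aggregate quoted at SEMICON Korea:
print(f"6-stack HBM3E: {6 * 1.2:.1f} TB/s")  # 7.2 TB/s
```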

SK Hynix to Show Off its GDDR7, 48GB 16-layer HBM3E Stacks, and LPDDR5T-10533 Memory at IEEE-SSCC

Samsung isn't the only Korean memory giant showing off its latest tech at the upcoming IEEE Solid State Circuit Conference (SSCC) in February 2024; it will be joined by SK Hynix, which will demo competing tech across both its volatile and non-volatile memory lines. To begin with, SK Hynix will be the second company to show off a GDDR7 memory chip, after Samsung. The SK Hynix chip is capable of 35.4 Gbps speeds, which is lower than the 37 Gbps Samsung is showing off, but at the same 16 Gbit density. This density allows the deployment of 16 GB of video memory across a 256-bit memory bus. Not all next-generation GPUs will max out 37 Gbps; some may run at lower memory speeds, and those have suitable options in the SK Hynix product stack. Much like Samsung, SK Hynix is implementing PAM3 I/O signaling, and a proprietary low-power architecture (though the company wouldn't elaborate on whether it's similar to the four low-speed clock states of the Samsung chips).
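The card-level figures implied by those chip specs are simple to reproduce. A quick sketch, assuming the standard 32-bit interface per GDDR7 device:

```python
# GDDR7 card-level capacity and bandwidth from the chip specs quoted above.
BUS_BITS = 256
DEVICE_BITS = 32     # per-device interface width (standard GDDR, assumption)
DEVICE_GBIT = 16

devices = BUS_BITS // DEVICE_BITS
print(f"{devices} devices -> {devices * DEVICE_GBIT / 8:.0f} GB of video memory")

for vendor, gbps in (("SK Hynix", 35.4), ("Samsung", 37.0)):
    print(f"{vendor}: {BUS_BITS * gbps / 8 / 1000:.2f} TB/s over a 256-bit bus")
```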

GDDR7 is bound to dominate the next generation of graphics cards across the gaming and pro-vis segments; however, the AI/HPC processor market will continue to bank heavily on HBM3E. SK Hynix has innovated here, and will show off a new 16-high 48 GB (384 Gbit) HBM3E stack design that's capable of 1,280 GB/s over a single stack. A processor with even four such stacks would have 192 GB of memory at 5.12 TB/s of bandwidth. The stack implements a new all-around power TSV (through-silicon via) design, and a 6-phase RDQS (read data queue strobe) scheme, for TSV area optimization. Lastly, the SK Hynix sessions will also include the first demo of its ambitious LPDDR5T (LPDDR5 Turbo) memory standard aimed at smartphones, tablets, and thin-and-light notebooks. This chip achieves a data rate of 10.5 Gb/s per pin at a DRAM voltage of 1.05 V. Such high data speeds are possible thanks to a proprietary parasitic capacitance reduction technology, and a voltage-offset-calibrated receiver tech.
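Working backward from the 16-high stack's headline numbers gives the per-die density and implied pin rate. A short sketch, assuming the standard 1024-bit HBM interface:

```python
# Inferring per-die density and pin rate from the 16-high HBM3E stack specs.
LAYERS, STACK_GB, STACK_BW_GB_S = 16, 48, 1280
PINS = 1024   # standard HBM per-stack interface width (assumption)

print(f"Per-die density: {STACK_GB * 8 / LAYERS:.0f} Gbit")       # 24 Gbit
print(f"Implied pin rate: {STACK_BW_GB_S * 8 / PINS:.0f} Gb/s")   # 10 Gb/s

stacks = 4
print(f"{stacks} stacks: {stacks * STACK_GB} GB at "
      f"{stacks * STACK_BW_GB_S / 1000:.2f} TB/s")                # 192 GB, 5.12 TB/s
```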

SK hynix Reports Financial Results for 2023, 4Q23

SK hynix Inc. announced today that it recorded an operating profit of 346 billion won in the fourth quarter of last year amid a recovery of the memory chip market, marking the first quarter of profit following four straight quarters of losses. The company posted revenues of 11.31 trillion won, operating profit of 346 billion won (operating profit margin at 3%), and net loss of 1.38 trillion won (net profit margin at negative 12%) for the three months ended December 31, 2023. (Based on K-IFRS)

SK hynix said that the overall memory market conditions improved in the last quarter of 2023, with demand for AI server and mobile applications increasing and average selling prices (ASP) rising. "We recorded the first quarterly profit in a year following efforts to focus on profitability," it said. The financial results of the last quarter helped narrow the operating loss for the entire year to 7.73 trillion won (operating profit margin at negative 24%) and the net loss to 9.14 trillion won (net profit margin at negative 28%). Full-year revenues were 32.77 trillion won.

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs Cited

Samsung, SK hynix, and Micron are considered to be the top manufacturing sources of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are increasingly in demand, due to widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times proposes that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next-generation models. The aforementioned financial news article cites research conducted by the Gartner group—they predict that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is almost double the projected revenues (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK Hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that stragglers will need to "expand HBM production capacity" in order to stay competitive. SK Hynix has shacked up with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer, outfitted with the South Korean firm's HBM3E parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK Hynix is believed to extend to the internal makeup of Blackwell GB100 data-center GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
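Doubling in two years pins down the implied growth rate. A one-liner check against the cited Gartner figures (the 2023 figure is approximate):

```python
# Implied CAGR behind ~$2.0B (2023) growing to $4.976B (2025) in HBM revenue.
rev_2023, rev_2025, years = 2.0, 4.976, 2   # billion USD; 2023 approximate

cagr = (rev_2025 / rev_2023) ** (1 / years) - 1
print(f"Implied CAGR, 2023-2025: {cagr:.0%}")  # ~58% per year
```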

SK hynix to Exhibit AI Memory Leadership at CES 2024

SK hynix Inc. announced today that it will showcase the technology for ultra-high performance memory products, the core of future AI infrastructure, at CES 2024, the most influential tech event in the world, taking place from January 9 through 12 in Las Vegas. SK hynix said that it will highlight its future vision, represented by its "Memory Centric" concept, at the show, and promote the importance of memory products in accelerating the technological innovation of the AI era, as well as its competitiveness in the global memory markets.

The company will run a space titled SK Wonderland jointly with other major SK Group affiliates including SK Inc., SK Innovation and SK Telecom, and showcase its major AI memory products including HBM3E. SK hynix plans to provide HBM3E, the world's best-performing memory product that it successfully developed in August, to the world's largest AI technology companies by starting mass production from the first half of 2024.

SK hynix Showcases Next-Gen AI and HPC Solutions at SC23

SK hynix presented its leading AI and high-performance computing (HPC) solutions at Supercomputing 2023 (SC23) held in Denver, Colorado between November 12-17. Organized by the Association for Computing Machinery and IEEE Computer Society since 1988, the annual SC conference showcases the latest advancements in HPC, networking, storage, and data analysis. SK hynix marked its first appearance at the conference by introducing its groundbreaking memory solutions to the HPC community. During the six-day event, several SK hynix employees also made presentations revealing the impact of the company's memory solutions on AI and HPC.

Displaying Advanced HPC & AI Products
At SC23, SK hynix showcased its products tailored for AI and HPC to underline its leadership in the AI memory field. Among these next-generation products, HBM3E attracted attention as the HBM solution that meets the industry's highest standards of speed, capacity, heat dissipation, and power efficiency. These capabilities make it particularly suitable for data-intensive AI server systems. HBM3E was presented alongside NVIDIA's H100, a high-performance GPU for AI that uses HBM3 for its memory.

ASUS Demonstrates AI and Immersion-Cooling Solutions at SC23

ASUS today announced a showcase of the latest AI solutions to empower innovation and push the boundaries of supercomputing, at Supercomputing 2023 (SC23) in Denver, Colorado, from 12-17 November, 2023. ASUS will demonstrate the latest AI advances, including generative-AI solutions and sustainability breakthroughs with Intel, to deliver the latest hybrid immersion-cooling solutions, plus lots more - all at booth number 257.

At SC23, ASUS will showcase the latest NVIDIA-qualified ESC N8A-E12 HGX H100 eight-GPU server, empowered by dual-socket AMD EPYC 9004 processors and designed for enterprise-level generative AI with market-leading integrated capabilities. Following NVIDIA's announcement of the latest NVIDIA H200 Tensor Core GPU at SC23, the first GPU to offer HBM3E for faster, larger memory to fuel the acceleration of generative AI and large language models, ASUS will offer an update to its H100-based systems with an H200-based drop-in replacement in 2024.

NVIDIA Supercharges Hopper, the World's Leading AI Computing Platform

NVIDIA today announced it has supercharged the world's leading AI computing platform with the introduction of the NVIDIA HGX H200. Based on NVIDIA Hopper architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

The NVIDIA H200 is the first GPU to offer HBM3e - faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads. With HBM3e, the NVIDIA H200 delivers 141 GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100. H200-powered systems from the world's leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.
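The comparison in that paragraph checks out against the A100 80 GB baseline (80 GB at roughly 2.0 TB/s are the A100 80 GB launch specs, used here as an assumption):

```python
# Ratio check of the quoted H200-vs-A100 comparison.
h200_gb, h200_tb_s = 141, 4.8
a100_gb, a100_tb_s = 80, 2.0   # A100 80 GB baseline (assumed figures)

print(f"Capacity: {h200_gb / a100_gb:.2f}x ('nearly double')")  # ~1.76x
print(f"Bandwidth: {h200_tb_s / a100_tb_s:.1f}x")               # 2.4x
```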

Samsung Electronics Announces Third Quarter 2023 Results

Samsung Electronics today reported financial results for the third quarter ended September 30, 2023. Total consolidated revenue was KRW 67.40 trillion, a 12% increase from the previous quarter, mainly due to new smartphone releases and higher sales of premium display products. Operating profit rose sequentially to KRW 2.43 trillion based on strong sales of flagship models in mobile and strong demand for displays, as losses at the Device Solutions (DS) Division narrowed.

The Memory Business reduced losses sequentially as sales of high value-added products and average selling prices somewhat increased. Earnings in system semiconductors were impacted by a delay in demand recovery for major applications, but the Foundry Business posted a new quarterly high for new backlog from design wins. The mobile panel business reported a significant increase in earnings on the back of new flagship model releases by major customers, while the large panel business narrowed losses in the quarter. The Device eXperience (DX) Division achieved solid results due to robust sales of premium smartphones and TVs. Revenue at the Networks Business declined in major overseas markets as mobile operators scaled back investments.