News Posts matching #HBM3


NVIDIA Reportedly Having Issues with Samsung's HBM3 Chips Running Too Hot

According to Reuters, NVIDIA is having some major issues with Samsung's HBM3 chips, as NVIDIA hasn't managed to finalise its validation of the chips. Reuters is citing multiple sources familiar with the matter, and if the sources are correct, Samsung is having some serious issues with its HBM3 chips. Not only do the chips run hot, which is itself a big issue given that NVIDIA already has trouble cooling some of its higher-end products, but the power consumption is apparently not where it should be either. Samsung is said to have been trying to get its HBM3 and HBM3E parts validated by NVIDIA since sometime in 2023, according to Reuters' sources, which suggests that there have been issues for at least six months, if not longer.

The sources claim there are issues with both the 8- and 12-layer stacks of Samsung's HBM3E parts, suggesting that NVIDIA might only be able to source parts from Micron and SK Hynix for now; the latter has been supplying HBM3 chips to NVIDIA since the middle of 2022 and HBM3E chips since March of this year. It's unclear whether this is a production issue at Samsung's DRAM fabs, a packaging-related issue, or something else entirely. The Reuters piece goes on to speculate that Samsung hasn't had enough time to develop its HBM parts compared to its competitors and that the product was rushed, but Samsung issued a statement to the publication saying it's a matter of customising the product for its customers' needs. Samsung also said that it is in "the process of optimising its products through close collaboration with customers", without naming the customer(s). Samsung issued a further statement saying that "claims of failing due to heat and power consumption are not true" and that testing was proceeding as expected.

HBM3e Production Surge Expected to Make Up 35% of Advanced Process Wafer Input by End of 2024

TrendForce reports that the three largest DRAM suppliers are increasing wafer input for advanced processes. Following a rise in memory contract prices, companies have boosted their capital investments, with capacity expansion focusing on the second half of this year. It is expected that wafer input for 1alpha nm and above processes will account for approximately 40% of total DRAM wafer input by the end of the year.

HBM production will be prioritized due to its profitability and increasing demand. However, limited yields of around 50-60% and a wafer area 60% larger than DRAM products mean a higher proportion of wafer input is required. Based on the TSV capacity of each company, HBM is expected to account for 35% of advanced process wafer input by the end of this year, with the remaining wafer capacity used for LPDDR5(X) and DDR5 products.
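As a rough back-of-envelope illustration of why HBM consumes a disproportionate share of wafer input, the yield and die-area figures above can be combined as follows (a minimal sketch; the commodity-DRAM baseline yield is a hypothetical placeholder, not a TrendForce figure):

```python
# Back-of-envelope: wafers needed for HBM vs. commodity DRAM to ship the
# same number of bits, using the figures cited above (illustrative only).

dram_yield = 0.90       # assumed mature-process DRAM yield (hypothetical)
hbm_yield = 0.55        # midpoint of the 50-60% HBM yield cited above
area_penalty = 1.60     # HBM wafer area ~60% larger than comparable DRAM

# Good bits per wafer, normalized so commodity DRAM = 1.0
dram_bits_per_wafer = 1.0 * dram_yield
hbm_bits_per_wafer = (1.0 / area_penalty) * hbm_yield

wafer_multiplier = dram_bits_per_wafer / hbm_bits_per_wafer
print(f"HBM needs ~{wafer_multiplier:.1f}x the wafers per bit of commodity DRAM")
# -> HBM needs ~2.6x the wafers per bit of commodity DRAM
```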

DRAM Contract Prices for Q2 Adjusted to a 13-18% Increase; NAND Flash around 15-20%

TrendForce's latest forecasts reveal that contract prices for DRAM in the second quarter are expected to increase by 13-18%, while NAND Flash contract prices have been adjusted upward to a 15-20% increase. Only eMMC/UFS will see a smaller price increase of about 10%.

Before the April 3 earthquake, TrendForce had initially predicted that DRAM contract prices would see a seasonal rise of 3-8% and NAND Flash 13-18%, tapering significantly from Q1, as spot price indicators had shown weakening price momentum and reduced transaction volumes. This was primarily due to subdued demand outside of AI applications, particularly with no signs of recovery in demand for notebooks and smartphones. Inventory levels were gradually increasing, especially among PC OEMs. Additionally, with DRAM and NAND Flash prices having risen for two to three consecutive quarters, buyers' willingness to accept further substantial price increases had diminished.

HBM Prices to Increase by 5-10% in 2025, Accounting for Over 30% of Total DRAM Value

Avril Wu, TrendForce Senior Research Vice President, reports that the HBM market is poised for robust growth, driven by significant pricing premiums and increased capacity needs for AI chips. HBM's unit sales price is several times higher than that of conventional DRAM and about five times that of DDR5. This pricing, combined with product iterations in AI chip technology that increase single-device HBM capacity, is expected to dramatically raise HBM's share in both the capacity and market value of the DRAM market from 2023 to 2025. Specifically, HBM's share of total DRAM bit capacity is estimated to rise from 2% in 2023 to 5% in 2024 and surpass 10% by 2025. In terms of market value, HBM is projected to account for more than 20% of the total DRAM market value starting in 2024, potentially exceeding 30% by 2025.
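A quick worked example shows how a small bit share combined with a large price premium produces the value shares quoted above (a minimal sketch, assuming conventional DRAM priced at 1x and the roughly 5x premium over DDR5 cited):

```python
# How a ~5% bit share and a ~5x price-per-bit premium yield a ~20% value
# share, consistent with the figures cited above (illustrative).

def value_share(bit_share: float, price_premium: float) -> float:
    """Share of total DRAM market value captured by HBM."""
    hbm_value = bit_share * price_premium
    rest_value = (1.0 - bit_share) * 1.0   # conventional DRAM priced at 1x
    return hbm_value / (hbm_value + rest_value)

print(f"2024: {value_share(0.05, 5.0):.1%}")   # ~20.8%, i.e. more than 20%
print(f"2025: {value_share(0.10, 5.0):.1%}")   # ~35.7%, i.e. exceeding 30%
```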

2024 sees HBM demand growth rate near 200%, set to double in 2025
Wu also pointed out that negotiations for 2025 HBM pricing have already commenced in 2Q24. However, due to the limited overall capacity of DRAM, suppliers have preliminarily increased prices by 5-10% to manage capacity constraints, affecting HBM2e, HBM3, and HBM3e. This early negotiation phase is attributed to three main factors: Firstly, HBM buyers maintain high confidence in AI demand prospects and are willing to accept continued price increases.

SK hynix Strengthens AI Memory Leadership & Partnership With Host at the TSMC 2024 Tech Symposium

SK hynix showcased its next-generation technologies and strengthened key partnerships at the TSMC 2024 Technology Symposium held in Santa Clara, California on April 24. At the event, the company displayed its industry-leading HBM AI memory solutions and highlighted its collaboration with TSMC involving the host's CoWoS advanced packaging technology.

TSMC, a global semiconductor foundry, invites its major partners to this annual conference in the first half of each year so they can share their new products and technologies. Attending the event under the slogan "Memory, the Power of AI," SK hynix received significant attention for presenting the industry's most powerful AI memory solution, HBM3E. The product has recently demonstrated industry-leading performance, achieving input/output (I/O) transfer speeds of up to 10 gigabits per second (Gbps) in an AI system during a performance validation evaluation.

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, primarily equipped with the NVIDIA Grace CPU and H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that its shipments could exceed millions of units by 2025, potentially making up nearly 40 to 50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.

NVIDIA Hopper Leaps Ahead in Generative AI at MLPerf

It's official: NVIDIA delivered the world's fastest platform in industry-standard tests for inference on generative AI. In the latest MLPerf benchmarks, NVIDIA TensorRT-LLM—software that speeds and simplifies the complex job of inference on large language models—boosted the performance of NVIDIA Hopper architecture GPUs on the GPT-J LLM nearly 3x over their results just six months ago. The dramatic speedup demonstrates the power of NVIDIA's full-stack platform of chips, systems and software to handle the demanding requirements of running generative AI. Leading companies are using TensorRT-LLM to optimize their models. And NVIDIA NIM—a set of inference microservices that includes inferencing engines like TensorRT-LLM—makes it easier than ever for businesses to deploy NVIDIA's inference platform.

Raising the Bar in Generative AI
TensorRT-LLM running on NVIDIA H200 Tensor Core GPUs—the latest, memory-enhanced Hopper GPUs—delivered the fastest performance running inference in MLPerf's biggest test of generative AI to date. The new benchmark uses the largest version of Llama 2, a state-of-the-art large language model packing 70 billion parameters. The model is more than 10x larger than the GPT-J LLM first used in the September benchmarks. The memory-enhanced H200 GPUs, in their MLPerf debut, used TensorRT-LLM to produce up to 31,000 tokens/second, a record on MLPerf's Llama 2 benchmark. The H200 GPU results include up to 14% gains from a custom thermal solution. It's one example of innovations beyond standard air cooling that systems builders are applying to their NVIDIA MGX designs to take the performance of Hopper GPUs to new heights.
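MLPerf reports these results as aggregate tokens per second. As a minimal, framework-agnostic sketch (not the MLPerf harness itself; generate_batch is a hypothetical stand-in for any inference backend, such as a TensorRT-LLM engine), offline throughput is simply total tokens generated divided by wall-clock time:

```python
# Minimal sketch of offline tokens/second measurement. generate_batch()
# is a hypothetical callable standing in for a real inference backend.
import time

def measure_throughput(generate_batch, prompts: list[str]) -> float:
    start = time.perf_counter()
    outputs = generate_batch(prompts)        # list of token-id lists
    elapsed = time.perf_counter() - start
    total_tokens = sum(len(tokens) for tokens in outputs)
    return total_tokens / elapsed            # tokens per second

# Demo with a dummy backend that "generates" 100 tokens per prompt:
dummy = lambda ps: [[0] * 100 for _ in ps]
print(f"{measure_throughput(dummy, ['hello'] * 8):.0f} tokens/sec")
```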

2024 HBM Supply Bit Growth Estimated to Reach 260%, Making Up 14% of DRAM Industry

TrendForce reports that significant capital investments have occurred in the memory sector due to the high ASP and profitability of HBM. Senior Vice President Avril Wu notes that by the end of 2024, the DRAM industry is expected to allocate approximately 250K wafers per month (14% of total capacity) to producing HBM TSV, with estimated annual supply bit growth of around 260%. Additionally, HBM's revenue share within the DRAM industry, around 8.4% in 2023, is projected to increase to 20.1% by the end of 2024.

HBM supply tightens with order volumes rising continuously into 2024
Wu explains that in terms of production differences between HBM and DDR5, the die size of HBM is generally 35-45% larger than that of DDR5 on the same process and at the same capacity (for example, a 24Gb HBM die versus a 24Gb DDR5 die). The yield rate (including TSV packaging) for HBM is approximately 20-30% lower than that of DDR5, and the production cycle (including TSV) is 1.5 to 2 months longer.
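Combining the midpoints of those ranges gives a feel for the wafer cost of each HBM bit (a minimal sketch; the DDR5 baseline yield is a hypothetical placeholder):

```python
# Bits-per-wafer penalty for HBM vs. DDR5 of the same process/capacity,
# using the midpoints of the ranges cited above (illustrative numbers).

die_size_penalty = 1.40    # HBM die ~35-45% larger -> midpoint 40%
yield_gap = 0.25           # HBM yield ~20-30% lower -> midpoint 25%

ddr5_yield = 0.85          # assumed DDR5 yield (hypothetical baseline)
hbm_yield = ddr5_yield * (1.0 - yield_gap)

ddr5_bits = ddr5_yield / 1.0
hbm_bits = hbm_yield / die_size_penalty

print(f"HBM delivers ~{hbm_bits / ddr5_bits:.0%} of the bits per wafer of DDR5")
# -> ~54%, i.e. nearly twice the wafer input for the same bit output
```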

HBM3 Initially Exclusively Supplied by SK Hynix, Samsung Rallies Fast After AMD Validation

TrendForce highlights the current landscape of the HBM market, which as of early 2024, is primarily focused on HBM3. NVIDIA's upcoming B100 or H200 models will incorporate advanced HBM3e, signaling the next step in memory technology. The challenge, however, is the supply bottleneck caused by both CoWoS packaging constraints and the inherently long production cycle of HBM—extending the timeline from wafer initiation to the final product beyond two quarters.

The current HBM3 supply for NVIDIA's H100 solution is primarily met by SK hynix, leading to a supply shortfall in meeting burgeoning AI market demands. Samsung's entry into NVIDIA's supply chain with its 1Znm HBM3 products in late 2023, though initially minor, signifies its breakthrough in this segment.

DRAM Industry Sees Nearly 30% Revenue Growth in 4Q23 Due to Rising Prices and Volume

TrendForce reports a 29.6% QoQ increase in DRAM industry revenue for 4Q23, reaching US$17.46 billion, propelled by revitalized stockpiling efforts and strategic production control by leading manufacturers. Looking ahead to 1Q24, the intent to further enhance profitability is evident, with a projected near-20% increase in DRAM contract prices, albeit with a slight decrease in shipment volumes due to the traditional off-season.

Samsung led the pack with the highest revenue growth among the top manufacturers in Q4, jumping 50% QoQ to hit $7.95 billion, largely due to a surge in 1alpha nm DDR5 shipments that boosted server DRAM shipments by over 60%. SK hynix saw a modest 1-3% rise in shipment volumes but benefited from the pricing advantage of HBM and DDR5, especially from high-density server DRAM modules, leading to a 17-19% increase in ASP and a 20.2% rise in revenue to $5.56 billion. Micron witnessed growth in both volume and price, with a 4-6% increase in each, but owing to its comparatively lower share of DDR5 and HBM, saw more moderate revenue growth of 8.9%, totaling $3.35 billion for the quarter.
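The revenue figures above imply the following market shares for the quarter (a quick arithmetic check):

```python
# 4Q23 DRAM market shares implied by the revenues cited above.
total = 17.46                    # industry revenue, US$ billion
vendors = {"Samsung": 7.95, "SK hynix": 5.56, "Micron": 3.35}

for name, revenue in vendors.items():
    print(f"{name}: {revenue / total:.1%}")
# Samsung: 45.5%, SK hynix: 31.8%, Micron: 19.2%; remainder: other suppliers
```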

Samsung Develops Industry-First 36GB HBM3E 12H DRAM

Samsung Electronics, a world leader in advanced memory technology, today announced that it has developed HBM3E 12H, the industry's first 12-stack HBM3E DRAM and the highest-capacity HBM product to date. Samsung's HBM3E 12H provides an all-time high bandwidth of up to 1,280 gigabytes per second (GB/s) and an industry-leading capacity of 36 gigabytes (GB). In comparison to the 8-stack HBM3 8H, both aspects have improved by more than 50%.
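The headline figures follow from the standard 1024-bit HBM stack interface; assuming a 10 Gbps per-pin rate (consistent with Samsung's HBM3E class) and 24Gb DRAM dies, the quoted numbers check out:

```python
# Deriving the quoted HBM3E 12H figures. The 1024-bit width is the
# standard HBM stack interface; the 10 Gbps per-pin rate and 24Gb die
# capacity are assumptions consistent with the HBM3E generation.
interface_bits = 1024
pin_rate_gbps = 10.0

bandwidth_gbs = interface_bits * pin_rate_gbps / 8    # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s per stack")          # -> 1280 GB/s

dies, die_gb = 12, 24 / 8                             # twelve 24Gb (3 GB) dies
print(f"{dies * die_gb:.0f} GB per stack")            # -> 36 GB
```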

"The industry's AI service providers are increasingly requiring HBM with higher capacity, and our new HBM3E 12H product has been designed to answer that need," said Yongcheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "This new memory solution forms part of our drive toward developing core technologies for high-stack HBM and providing technological leadership for the high-capacity HBM market in the AI era."

Google's Gemma Optimized to Run on NVIDIA GPUs, Gemma Coming to Chat with RTX

NVIDIA, in collaboration with Google, today launched optimizations across all NVIDIA AI platforms for Gemma—Google's state-of-the-art new lightweight 2 billion- and 7 billion-parameter open language models that can be run anywhere, reducing costs and speeding innovative work for domain-specific use cases.

Teams from the companies worked closely together to accelerate the performance of Gemma—built from the same research and technology used to create the Gemini models—with NVIDIA TensorRT-LLM, an open-source library for optimizing large language model inference, when running on NVIDIA GPUs in the data center, in the cloud and on PCs with NVIDIA RTX GPUs. This allows developers to target the installed base of over 100 million NVIDIA RTX GPUs available in high-performance AI PCs globally.
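The optimized path NVIDIA describes runs through TensorRT-LLM; as a baseline illustration only, here is a minimal sketch of running Gemma on an NVIDIA GPU through the Hugging Face transformers API (assumes the public "google/gemma-7b" model ID, an accepted model license, and a CUDA-capable GPU; this is not the TensorRT-LLM path itself):

```python
# Baseline sketch: Gemma inference via Hugging Face transformers on an
# NVIDIA GPU. NOT the TensorRT-LLM optimized path described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"   # public model ID; requires accepted license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("What is high-bandwidth memory?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```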

SK Hynix Targets HBM3E Launch This Year, HBM4 by 2026

SK Hynix has unveiled ambitious High Bandwidth Memory (HBM) roadmaps at SEMICON Korea 2024. Vice President Kim Chun-hwan announced plans to mass-produce the cutting-edge HBM3E within the first half of 2024, noting that 8-layer stack samples have already been supplied to clients. This iteration makes major strides toward fulfilling surging data bandwidth demands, offering 1.2 TB/s per stack and 7.2 TB/s in a 6-stack configuration. Kim cites the rapid emergence of generative AI, forecast to grow at a 35% CAGR, as a key driver. He warns that "fierce survival competition" lies ahead across the semiconductor industry amidst rising customer expectations. With limits approaching on conventional process node shrinks, attention is shifting to next-generation memory architectures and materials to unleash performance.

SK Hynix has already initiated HBM4 development, with sampling planned for 2025 and mass production the following year. According to Micron, HBM4 will leverage a wider 2048-bit interface compared to previous HBM generations to increase per-stack theoretical peak memory bandwidth to over 1.5 TB/s. To achieve these high bandwidths while maintaining reasonable power consumption, HBM4 is targeting a data transfer rate of around 6 GT/s. The wider interface and 6 GT/s speeds allow HBM4 to push bandwidth boundaries significantly compared to prior HBM versions, serving the needs of high-performance computing and AI workloads, while power efficiency is kept in check by avoiding impractically high transfer rates. Additionally, Samsung is aligned on a similar 2025/2026 timeline. Beyond pushing bandwidth boundaries, custom HBM solutions will become increasingly crucial. Samsung executive Jaejune Kim reveals that over half of its HBM volume already comprises specialized products. Further tailoring HBM4 to individual client needs through logic integration presents an opportunity to cement leadership. As AI workloads evolve at breakneck speed, memory innovation must keep pace. With HBM3E prepping for launch and HBM4 in the pipeline, SK Hynix and Samsung are gearing up for the challenges ahead.
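Both generations' per-stack figures follow from interface width multiplied by per-pin rate (a quick check; the 1024-bit HBM3E width is the standard stack interface, and the ~9.6 GT/s HBM3E rate is an assumption consistent with the 1.2 TB/s quoted):

```python
# Per-stack bandwidth implied by the widths and rates quoted above.
def stack_bw_tbs(interface_bits: int, rate_gtps: float) -> float:
    return interface_bits * rate_gtps / 8 / 1000   # Gb/s -> TB/s

hbm3e = stack_bw_tbs(1024, 9.6)   # ~1.23 TB/s, quoted as 1.2 TB/s per stack
hbm4 = stack_bw_tbs(2048, 6.0)    # 2048-bit interface @ ~6 GT/s

print(f"HBM3E: ~{hbm3e:.1f} TB/s per stack; 6 stacks: ~{6 * 1.2:.1f} TB/s")
print(f"HBM4:  ~{hbm4:.2f} TB/s per stack (i.e., over 1.5 TB/s)")
```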

NVIDIA Faces AI Chip Shortages, Turns to Intel for Advanced Packaging Services

NVIDIA's supply of AI chips remains tight due to insufficient advanced packaging production capacity from key partner TSMC. As per the UDN report, NVIDIA will add Intel as a provider of advanced packaging services to help ease the constraints. Intel is expected to start supplying NVIDIA with a monthly advanced packaging capacity of about 5,000 units in Q2 at the earliest. While TSMC will remain NVIDIA's primary packaging partner, Intel's participation significantly boosts NVIDIA's total production capacity by nearly 10%. Even after Intel comes online, TSMC will still account for the lion's share—about 90% of NVIDIA's advanced packaging needs. TSMC is also aggressively expanding capacity, with monthly production expected to reach nearly 50,000 units in Q1, a 25% increase over December 2023. Intel has advanced packaging facilities in the U.S. and is expanding its capacity in Penang. The company has an open model, allowing customers to leverage its packaging solutions separately.
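The percentages cited line up with the unit figures (a quick sanity check):

```python
# Sanity check on the capacity split described above (units per month).
intel = 5_000          # Intel's expected advanced packaging supply to NVIDIA
tsmc = 50_000          # TSMC's expected Q1 monthly output

print(f"Intel adds ~{intel / tsmc:.0%} on top of TSMC alone")   # ~10%
print(f"TSMC share: ~{tsmc / (tsmc + intel):.0%}")              # ~91%, roughly the 90% cited
```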

The AI chip shortages stemmed from insufficient advanced packaging capacity, tight HBM3 memory supply, and overordering by some cloud providers. These constraints are now easing faster than anticipated. The additional supply will benefit AI server providers like Quanta, Inventec and GIGABYTE. Quanta stated that the demand for AI servers remains robust, with the main limitation being chip supply. Both Inventec and GIGABYTE expect strong AI server shipment growth this year as supply issues resolve. The ramping capacity from TSMC and Intel in advanced packaging and improvements upstream suggest the AI supply crunch may be loosening. This would allow cloud service providers to continue the rapid deployment of AI workloads.

SK hynix Reports Financial Results for 2023, 4Q23

SK hynix Inc. announced today that it recorded an operating profit of 346 billion won in the fourth quarter of last year amid a recovery of the memory chip market, marking the first quarter of profit following four straight quarters of losses. The company posted revenues of 11.31 trillion won, operating profit of 346 billion won (operating profit margin at 3%), and net loss of 1.38 trillion won (net profit margin at negative 12%) for the three months ended December 31, 2023. (Based on K-IFRS)
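The quoted margins follow directly from the won figures (a quick check, amounts in trillions of KRW):

```python
# Reproducing the quoted 4Q23 margins (figures in trillions of won).
revenue = 11.31
operating_profit = 0.346
net_loss = -1.38

print(f"Operating margin: {operating_profit / revenue:.0%}")   # -> 3%
print(f"Net margin: {net_loss / revenue:.0%}")                 # -> -12%
```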

SK hynix said that overall memory market conditions improved in the last quarter of 2023, with demand for AI server and mobile applications increasing and average selling prices (ASP) rising. "We recorded the first quarterly profit in a year following efforts to focus on profitability," it said. The last quarter's results helped narrow the operating loss for the entire year to 7.73 trillion won (operating profit margin at negative 24%) and the net loss to 9.14 trillion won (net profit margin at negative 28%), on full-year revenues of 32.77 trillion won.

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs

Samsung, SK hynix, and Micron are considered to be the top manufacturing sources of High Bandwidth Memory (HBM); the HBM3 and HBM3E standards are increasingly in demand due to widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times proposes that there is an ongoing shortage of HBM components, but that this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next-generation models. The aforementioned financial news article cites research conducted by the Gartner group, which predicts that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is almost double the projected revenues (just over $2 billion) generated by the HBM market in 2023; the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK Hynix is the current HBM3E leader, with Micron and Samsung trailing behind; industry experts believe that stragglers will need to "expand HBM production capacity" in order to stay competitive. SK Hynix has partnered with NVIDIA; the GH200 Grace Hopper platform was unveiled last summer, outfitted with the South Korean firm's HBM3e parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages, as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK Hynix is believed to extend to the internal makeup of Blackwell GB100 data-center GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
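Those two data points imply a roughly 2.5x expansion over two years (a quick check):

```python
# Implied growth of the HBM market from the Gartner figures cited above.
rev_2023 = 2.0        # US$ billion ("just over $2 billion", approximated)
rev_2025 = 4.976      # US$ billion, Gartner's 2025 projection

multiple = rev_2025 / rev_2023
cagr = multiple ** (1 / 2) - 1     # two-year span: 2023 -> 2025
print(f"~{multiple:.1f}x overall, ~{cagr:.0%} per year")   # ~2.5x, ~58%/yr
```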

AI Datacenters Warming Up to Instinct CDNA Causes AMD Stock to Hit Near Record High

With NVIDIA's Ampere and Hopper GPUs enjoying dominance of the AI acceleration industry, compute companies are turning to AMD's Instinct CDNA series accelerators in search of alternatives, and it seems they've found one. This has financial market analysts excited, sending the AMD stock to near record highs. AMD recently launched the Instinct MI300X and MI300A processors based on the CDNA 3 architecture, which the company claims beat NVIDIA's H100 "Hopper" processors at competitive prices. This has encouraged analysts from major financial institutions, including Barclays, KeyBanc Capital, and Susquehanna Financial Group, to raise their price targets for the AMD stock. As of market closure at Jan 17, 7:59:56 PM UTC, the AMD stock stood at $160.17, near its November 2021 record high of $164.46.

AMD's data center business looks to ramp up Instinct CDNA accelerators through 2024. These large chiplet-based GPUs are built on the same 5 nm TSMC foundry node as NVIDIA's H100 "Hopper," and to maximize the use of its foundry allocation, it's been reported that AMD might even forgo large gaming GPUs based on its Radeon RX RDNA 4 architecture in favor of high-margin CDNA 3 chips. The Instinct MI300X features a colossal 304 compute units totaling 19,456 stream processors capable of AI-relevant math formats, and 192 GB of 8192-bit HBM3 memory, with 5.2 TB/s of memory bandwidth on tap.
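The stream processor count and memory bandwidth figures are internally consistent (a quick check; 64 stream processors per CDNA compute unit is an architectural assumption, not stated in the article):

```python
# Sanity-checking the MI300X figures quoted above.
compute_units = 304
sp_per_cu = 64                       # stream processors per CDNA CU (assumed)
print(compute_units * sp_per_cu)     # -> 19456

bus_bits = 8192                      # HBM3 interface width
bandwidth_tbs = 5.2                  # quoted memory bandwidth
implied_pin_gbps = bandwidth_tbs * 1000 * 8 / bus_bits
print(f"Implied per-pin rate: ~{implied_pin_gbps:.1f} Gbps")   # ~5.1 Gbps
```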

GIGABYTE Unveils Next-gen HPC & AI Servers with AMD Instinct MI300 Series Accelerators

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers and IT infrastructure, today announced the GIGABYTE G383-R80 for the AMD Instinct MI300A APU and two GIGABYTE G593 series servers for the AMD Instinct MI300X GPU and AMD EPYC 9004 Series processor. As a testament to the performance of the AMD Instinct MI300 Series family of products, the El Capitan supercomputer at Lawrence Livermore National Laboratory uses the MI300A APU to power exascale computing. These new GIGABYTE servers are the ideal platform to propel discoveries in HPC & AI at exascale.

Marrying a CPU & GPU: G383-R80
For incredible advancements in HPC, there is the GIGABYTE G383-R80, which houses four LGA6096 sockets for MI300A APUs. Each APU integrates twenty-four AMD Zen 4 CPU cores with a powerful GPU built from AMD CDNA 3 GPU cores, and the chiplet design shares 128 GB of unified HBM3 memory, delivering impressive performance on large AI models. The G383 server offers plenty of expansion for networking, storage, or other accelerators, with a total of twelve PCIe Gen 5 slots. At the front of the chassis, eight 2.5" Gen 5 NVMe bays handle heavy workloads such as real-time big data analytics and latency-sensitive workloads in finance and telecom.
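At the system level, the four-socket configuration adds up as follows (simple totals from the per-APU figures above):

```python
# System-level totals for the four-socket G383-R80 described above.
sockets = 4
cores_per_apu = 24         # Zen 4 CPU cores per MI300A
hbm_per_apu_gb = 128       # unified HBM3 per APU

print(f"{sockets * cores_per_apu} CPU cores, "
      f"{sockets * hbm_per_apu_gb} GB of unified HBM3 in total")
# -> 96 CPU cores, 512 GB of unified HBM3 in total
```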

AWS and NVIDIA Partner to Deliver 65 ExaFLOP AI Supercomputer, Other Solutions

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced an expansion of their strategic collaboration to deliver the most-advanced infrastructure, software and services to power customers' generative artificial intelligence (AI) innovations. The companies will bring together the best of NVIDIA and AWS technologies—from NVIDIA's newest multi-node systems featuring next-generation GPUs, CPUs and AI software, to AWS Nitro System advanced virtualization and security, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability—that are ideal for training foundation models and building generative AI applications.

The expanded collaboration builds on a longstanding relationship that has fueled the generative AI era by offering early machine learning (ML) pioneers the compute performance required to advance the state-of-the-art in these technologies.

Manufacturers Anticipate Completion of NVIDIA's HBM3e Verification by 1Q24; HBM4 Expected to Launch in 2026

TrendForce's latest research into the HBM market indicates that NVIDIA plans to diversify its HBM suppliers for more robust and efficient supply chain management. Samsung's HBM3 (24 GB) is anticipated to complete verification with NVIDIA by December this year. As for HBM3e progress, Micron provided its 8hi (24 GB) samples to NVIDIA at the end of July, SK hynix in mid-August, and Samsung in early October.

Given the intricacy of the HBM verification process—estimated to take two quarters—TrendForce expects that some manufacturers might learn preliminary HBM3e results by the end of 2023. However, it's generally anticipated that major manufacturers will have definite results by 1Q24. Notably, the outcomes will influence NVIDIA's procurement decisions for 2024, as final evaluations are still underway.

Supermicro Expands AI Solutions with the Upcoming NVIDIA HGX H200 and MGX Grace Hopper Platforms Featuring HBM3e Memory

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is expanding its AI reach with upcoming support for the new NVIDIA HGX H200 built with H200 Tensor Core GPUs. Supermicro's industry-leading AI platforms, including 8U and 4U Universal GPU Systems, are drop-in ready for the HGX H200 in 8-GPU and 4-GPU configurations, featuring HBM3e memory with nearly 2x the capacity and 1.4x higher bandwidth compared to the NVIDIA H100 Tensor Core GPU. In addition, the broadest portfolio of Supermicro NVIDIA MGX systems supports the upcoming NVIDIA Grace Hopper Superchip with HBM3e memory. With unprecedented performance, scalability, and reliability, Supermicro's rack-scale AI solutions accelerate the performance of computationally intensive generative AI, large language model (LLM) training, and HPC applications while meeting the evolving demands of growing model sizes. Using its building-block architecture, Supermicro can quickly bring new technology to market, enabling customers to become more productive sooner.
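The capacity and bandwidth multipliers line up with the publicly listed specs for the two GPUs (assumed here, not stated in the article: H200 with 141 GB of HBM3e at 4.8 TB/s versus H100 with 80 GB of HBM3 at 3.35 TB/s):

```python
# Where "nearly 2x capacity and 1.4x higher bandwidth" comes from,
# using publicly listed GPU specs (assumptions, not from the article).
h200 = {"capacity_gb": 141, "bandwidth_tbs": 4.80}   # HBM3e
h100 = {"capacity_gb": 80,  "bandwidth_tbs": 3.35}   # HBM3

print(f"Capacity:  {h200['capacity_gb'] / h100['capacity_gb']:.2f}x")     # ~1.76x
print(f"Bandwidth: {h200['bandwidth_tbs'] / h100['bandwidth_tbs']:.2f}x") # ~1.43x
```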

Supermicro is also introducing the industry's highest density server with NVIDIA HGX H100 8-GPUs systems in a liquid cooled 4U system, utilizing the latest Supermicro liquid cooling solution. The industry's most compact high performance GPU server enables data center operators to reduce footprints and energy costs while offering the highest performance AI training capacity available in a single rack. With the highest density GPU systems, organizations can reduce their TCO by leveraging cutting-edge liquid cooling solutions.

Samsung Electronics Announces Third Quarter 2023 Results

Samsung Electronics today reported financial results for the third quarter ended September 30, 2023. Total consolidated revenue was KRW 67.40 trillion, a 12% increase from the previous quarter, mainly due to new smartphone releases and higher sales of premium display products. Operating profit rose sequentially to KRW 2.43 trillion based on strong sales of flagship models in mobile and strong demand for displays, as losses at the Device Solutions (DS) Division narrowed.

The Memory Business reduced losses sequentially as sales of high value-added products and average selling prices somewhat increased. Earnings in system semiconductors were impacted by a delay in demand recovery for major applications, but the Foundry Business posted a new quarterly high for new backlog from design wins. The mobile panel business reported a significant increase in earnings on the back of new flagship model releases by major customers, while the large panel business narrowed losses in the quarter. The Device eXperience (DX) Division achieved solid results due to robust sales of premium smartphones and TVs. Revenue at the Networks Business declined in major overseas markets as mobile operators scaled back investments.

Rambus Boosts AI Performance with 9.6 Gbps HBM3 Memory Controller IP

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, today announced that the Rambus HBM3 Memory Controller IP now delivers up to 9.6 Gigabits per second (Gbps) performance supporting the continued evolution of the HBM3 standard. With a 50% increase over the HBM3 Gen 1 data rate of 6.4 Gbps, the Rambus HBM3 Memory Controller can enable a total memory throughput of over 1.2 Terabytes per second (TB/s) for training of recommender systems, generative AI and other demanding data center workloads.
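The throughput figure follows from the per-pin data rate multiplied across the standard 1024-bit HBM3 stack interface:

```python
# Deriving the >1.2 TB/s figure quoted above.
pins = 1024            # standard HBM3 stack interface width
rate_gbps = 9.6        # per-pin data rate of the Rambus controller IP

throughput_tbs = pins * rate_gbps / 8 / 1000   # Gb/s -> TB/s
print(f"{throughput_tbs:.3f} TB/s")            # -> 1.229 TB/s, over 1.2 TB/s
```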

"HBM3 is the memory of choice for AI/ML training, with large language models requiring the constant advancement of high-performance memory technologies," said Neeraj Paliwal, general manager of Silicon IP at Rambus. "Thanks to Rambus innovation and engineering excellence, we're delivering the industry's leading-edge performance of 9.6 Gbps in our HBM3 Memory Controller IP."

SK hynix Reports Third Quarter 2023 Financial Results

SK hynix Inc. today reported the financial results for the third quarter ended September 30, 2023. The company recorded revenues of 9.066 trillion won, operating losses of 1.792 trillion won and net losses of 2.185 trillion won in the three-month period. The operating and net margins were negative 20% and negative 24%, respectively. After bottoming out in the first quarter, the business has been on a steady recovery track, helped by growing demand for products such as high-performance memory chips, the company said.

"Revenues grew 24%, while operating losses narrowed 38%, compared with the previous quarter, thanks to strong demand for high-performance mobile flagship products and HBM3, a key product for AI applications, and high-capacity DDR5," the company said, adding that a turnaround of the DRAM business following two quarters of losses is particularly hopeful. SK hynix attributed the growth in sales to increased shipments of both DRAM and NAND and a rise in the average selling price.

Samsung Electronics Holds Memory Tech Day 2023 Unveiling New Innovations To Lead the Hyperscale AI Era

Samsung Electronics Co., Ltd., a world leader in advanced memory technology, today held its annual Memory Tech Day, showcasing industry-first innovations and new memory products to accelerate technological advancements across future applications—including the cloud, edge devices and automotive vehicles.

Attended by about 600 customers, partners and industry experts, the event served as a platform for Samsung executives to expand on the company's vision for "Memory Reimagined," covering long-term plans to continue its memory technology leadership, outlook on market trends and sustainability goals. The company also presented new product innovations such as the HBM3E Shinebolt, LPDDR5X CAMM2 and Detachable AutoSSD.