News Posts matching #HBM3

SK hynix Displays Next-Gen Solutions Set to Unlock AI and More at OCP Global Summit 2023

SK hynix showcased its next-generation memory semiconductor technologies and solutions at the OCP Global Summit 2023 held in San Jose, California from October 17-19. The OCP Global Summit is an annual event hosted by the world's largest data center technology community, the Open Compute Project (OCP), where industry experts gather to share various technologies and visions. This year, SK hynix and its subsidiary Solidigm showcased advanced semiconductor memory products that will lead the AI era under the slogan "United Through Technology".

SK hynix presented a broad range of its solutions at the summit, including its leading HBM (HBM3/3E), CXL, and AiM products for generative AI. The company also unveiled some of the latest additions to its product portfolio, including its DDR5 RDIMM, MCR DIMM, enterprise SSD (eSSD), and LPDDR CAMM devices. Visitors to the HBM exhibit could see HBM3, which is used in NVIDIA's H100, a high-performance GPU for AI, and also check out the next-generation HBM3E. Thanks to their low power consumption and ultra-high performance, these HBM solutions are more eco-friendly and particularly well suited to power-hungry AI server systems.

Supermicro Starts Shipments of NVIDIA GH200 Grace Hopper Superchip-Based Servers

Supermicro, Inc., a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing ultimate flexibility and expansion ability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures Plug-and-Play compatibility.

"Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads," said Charles Liang, president and CEO of Supermicro. "It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly and are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises to develop new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots."

Samsung Notes: HBM4 Memory is Coming in 2025 with New Assembly and Bonding Technology

According to an editorial blog post by SangJoon Hwang, Executive Vice President and Head of the DRAM Product & Technology Team at Samsung Electronics, High-Bandwidth Memory 4 (HBM4) is coming in 2025. In the recent timeline of HBM development, the first generation of HBM appeared in 2015 with the AMD Radeon R9 Fury X. The second-generation HBM2 arrived with the NVIDIA Tesla P100 in 2016, and the third-generation HBM3 saw the light of day with the NVIDIA Hopper GH100 GPU in 2022. Currently, Samsung has developed 9.8 Gbps HBM3E memory, which will start sampling to customers soon.

However, Samsung is more ambitious with its development timelines this time, and the company expects to announce HBM4 in 2025, possibly with commercial products in the same calendar year. Interestingly, HBM4 will adopt technologies aimed at better thermal properties, such as non-conductive film (NCF) assembly and hybrid copper bonding (HCB). NCF is a polymer layer that enhances the stability of the micro bumps and TSVs in the chip, protecting the dies and their solder bumps from shock. Hybrid copper bonding is an advanced semiconductor packaging method that creates direct copper-to-copper connections between components, enabling high-density, 3D-like packaging with high I/O density, enhanced bandwidth, and improved power efficiency. Instead of regular micro bumps, it uses a copper layer as the conductor and an oxide insulator, increasing the connection density needed for HBM-like structures.

Synopsys and TSMC Streamline Multi-Die System Complexity with Unified Exploration-to-Signoff Platform and Proven UCIe IP on TSMC N3E Process

Synopsys, Inc. today announced it is extending its collaboration with TSMC to advance multi-die system designs with a comprehensive solution supporting the latest 3Dblox 2.0 standard and TSMC's 3DFabric technologies. The Synopsys Multi-Die System solution includes 3DIC Compiler, a unified exploration-to-signoff platform that delivers the highest levels of design efficiency for capacity and performance. In addition, Synopsys has achieved first-pass silicon success of its Universal Chiplet Interconnect Express (UCIe) IP on TSMC's leading N3E process for seamless die-to-die connectivity.

"TSMC has been working closely with Synopsys to deliver differentiated solutions that address designers' most complex challenges from early architecture to manufacturing," said Dan Kochpatcharin, head of the Design Infrastructure Management Division at TSMC. "Our long history of collaboration with Synopsys benefits our mutual customers with optimized solutions for performance and power efficiency to help them address multi-die system design requirements for high-performance computing, data center, and automotive applications."

SK hynix Presents Advanced Memory Technologies at Intel Innovation 2023

SK hynix announced on September 22 that it showcased its latest memory technologies and products at Intel Innovation 2023 held September 19-20 in the western U.S. city of San Jose, California. Hosted by Intel since 2019, Intel Innovation is an annual IT exhibition which brings together the technology company's customers and partners to share the latest developments in the industry. At this year's event held at the San Jose McEnery Convention Center, SK hynix showcased its advanced semiconductor memory products which are essential in the generative AI era under the slogan "Pioneer Tomorrow With the Best."

Products that garnered the most interest were HBM3, which supports the high-speed performance of AI accelerators, and DDR5 RDIMM, a DRAM module for servers with 1bnm process technology. As one of SK hynix's core technologies, HBM3 has established the company as a trailblazer in AI memory. SK hynix plans to further strengthen its position in the market by mass-producing HBM3E (Extended) from 2024. Meanwhile, DDR5 RDIMM with 1bnm, or the 5th generation of the 10 nm process technology, also offers outstanding performance. In addition to supporting unprecedented transfer speeds of more than 6,400 megabits per second (Mbps), this low-power product helps customers simultaneously reduce costs and improve ESG performance.

Suppliers Amp Up Production, HBM Bit Supply Projected to Soar by 105% in 2024

TrendForce highlights in its latest report that memory suppliers are boosting their production capacity in response to escalating orders from NVIDIA and CSPs for their in-house designed chips. These efforts include the expansion of TSV production lines to increase HBM output. Forecasts based on current production plans from suppliers indicate a remarkable 105% annual increase in HBM bit supply by 2024. However, due to the time required for TSV expansion, which encompasses equipment delivery and testing (9 to 12 months), the majority of HBM capacity is expected to materialize by 2Q24.

TrendForce analysis indicates that 2023 to 2024 will be pivotal years for AI development, triggering substantial demand for AI Training chips and thereby boosting HBM utilization. However, as the focus pivots to Inference, the annual growth rate for AI Training chips and HBM is expected to taper off slightly. The imminent boom in HBM production has presented suppliers with a difficult situation: they will need to strike a balance between meeting customer demand to expand market share and avoiding a surplus due to overproduction. Another concern is the potential risk of overbooking, as buyers, anticipating an HBM shortage, might inflate their demand.

NVIDIA Unveils Next-Generation GH200 Grace Hopper Superchip Platform With HBM3e

NVIDIA today announced the next-generation NVIDIA GH200 Grace Hopper platform - based on a new Grace Hopper Superchip with the world's first HBM3e processor - built for the era of accelerated computing and generative AI. Created to handle the world's most complex generative AI workloads, spanning large language models, recommender systems and vector databases, the new platform will be available in a wide range of configurations. The dual configuration - which delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation offering - comprises a single server with 144 Arm Neoverse cores, eight petaflops of AI performance and 282 GB of the latest HBM3e memory technology.

"To meet surging demand for generative AI, data centers require accelerated computing platforms with specialized needs," said Jensen Huang, founder and CEO of NVIDIA. "The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center."

New AI Accelerator Chips Boost HBM3 and HBM3e to Dominate 2024 Market

TrendForce reports that the HBM (High Bandwidth Memory) market's dominant product for 2023 is HBM2e, employed by the NVIDIA A100/A800, AMD MI200, and most CSPs' (Cloud Service Providers) self-developed accelerator chips. As the demand for AI accelerator chips evolves, manufacturers plan to introduce new HBM3e products in 2024, with HBM3 and HBM3e expected to become mainstream in the market next year.

The distinctions between HBM generations primarily lie in their speed. The industry experienced a proliferation of confusing names when transitioning to the HBM3 generation. TrendForce clarifies that the so-called HBM3 in the current market should be subdivided into two categories based on speed. One category includes HBM3 running at speeds between 5.6 and 6.4 Gbps, while the other features the 8 Gbps HBM3e, which also goes by several names, including HBM3P, HBM3A, HBM3+, and HBM3 Gen2.
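
As a rough illustration of what those pin speeds mean in practice, per-stack bandwidth can be estimated from the pin speed and the 1,024-bit interface that HBM stacks to date have used. The sketch below is a back-of-the-envelope check, not a set of vendor-published figures, and the 1,024-bit bus width is the assumption it rests on.

```python
# Back-of-the-envelope HBM per-stack bandwidth from pin speed.
# Assumes the standard 1,024-bit-wide HBM interface per stack.

def stack_bandwidth_gb_per_s(pin_speed_gbps: float, bus_width_bits: int = 1024) -> float:
    """Return per-stack bandwidth in GB/s for a given per-pin data rate in Gb/s."""
    return pin_speed_gbps * bus_width_bits / 8  # bits -> bytes

for label, speed in [("HBM3 (lower bin)", 5.6), ("HBM3 (top bin)", 6.4), ("HBM3e-class", 8.0)]:
    print(f"{label}: {stack_bandwidth_gb_per_s(speed):.1f} GB/s per stack")

# HBM3 (lower bin): 716.8 GB/s per stack
# HBM3 (top bin): 819.2 GB/s per stack
# HBM3e-class: 1024.0 GB/s per stack
```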

Micron Delivers Industry's Fastest, Highest-Capacity HBM to Advance Generative AI Innovation

Micron Technology, Inc. today announced it has begun sampling the industry's first 8-high 24 GB HBM3 Gen2 memory with bandwidth greater than 1.2 TB/s and pin speed over 9.2 Gb/s, which is up to a 50% improvement over currently shipping HBM3 solutions. With a 2.5 times performance per watt improvement over previous generations, Micron's HBM3 Gen2 offering sets new records for the critical artificial intelligence (AI) data center metrics of performance, capacity and power efficiency. These Micron improvements reduce training times of large language models like GPT-4 and beyond, deliver efficient infrastructure use for AI inference and provide superior total cost of ownership (TCO).

The foundation of Micron's high-bandwidth memory (HBM) solution is Micron's industry-leading 1β (1-beta) DRAM process node, which allows a 24 Gb DRAM die to be assembled into an 8-high cube within an industry-standard package dimension. Moreover, Micron's 12-high stack with 36 GB capacity will begin sampling in the first quarter of calendar 2024. Micron provides 50% more capacity for a given stack height compared to existing competitive solutions. Micron's HBM3 Gen2 performance-to-power ratio and pin speed improvements are critical for managing the extreme power demands of today's AI data centers. The improved power efficiency is possible because of Micron advancements such as a doubling of the through-silicon vias (TSVs) over competitive HBM3 offerings, thermal impedance reduction through a fivefold increase in metal density, and an energy-efficient data path design.
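
The stated capacities follow directly from the 24 Gb die density and the stack heights; the quick sanity check below only assumes 8 bits per byte and, for the bandwidth line, the usual 1,024-bit HBM interface per stack.

```python
# Sanity check of Micron's HBM3 Gen2 capacity figures from the 24 Gb (1-beta) die density.

DIE_DENSITY_GBIT = 24  # per the announcement above

def stack_capacity_gb(die_gbit: int, stack_height: int) -> float:
    return die_gbit * stack_height / 8  # gigabits -> gigabytes

print(stack_capacity_gb(DIE_DENSITY_GBIT, 8))   # 24.0 GB for the 8-high cube
print(stack_capacity_gb(DIE_DENSITY_GBIT, 12))  # 36.0 GB for the 12-high stack

# Bandwidth per stack at the quoted >9.2 Gb/s pin speed, assuming a 1,024-bit interface:
print(9.2 * 1024 / 8)  # ~1177.6 GB/s, consistent with the >1.2 TB/s headline figure
```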

SK hynix Reports Second Quarter 2023 Financial Results

SK hynix Inc. today reported financial results for the second quarter of 2023. The company recorded revenue of 7.306 trillion won, operating loss of 2.882 trillion won (with operating margin of negative 39%), and net loss of 2.988 trillion won (with net margin of negative 41%) for the three-month period ended June 30, 2023.
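
The quoted margins follow directly from the revenue and loss figures; a quick check of the arithmetic (all figures in trillion won, as stated above) looks like this:

```python
# Checking the reported margins against the stated revenue and losses (trillion won).
revenue = 7.306
operating_loss = -2.882
net_loss = -2.988

print(f"Operating margin: {operating_loss / revenue:.1%}")  # ~ -39.4%, reported as -39%
print(f"Net margin:       {net_loss / revenue:.1%}")        # ~ -40.9%, reported as -41%
```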

"Amid an expansion in generative artificial intelligence (AI) market, which has largely been centered on ChatGPT, demand for AI server memory has increased rapidly," the company said. "As a result, sales of premium products such as HBM3 and DDR5 increased, leading to a 44% sequential increase in revenue for the second quarter, while operating loss narrowed by 15%."

NVIDIA is Looking at Samsung for HBM3 Memory and 2.5D Chip Packaging

According to news out of Korea, NVIDIA is considering Samsung as a partner not only for HBM3 memory, but also as a potential partner when it comes to 2.5D chip packaging. The latter is due to TSMC having limited capacity for handling all of its customers' advanced chip packaging needs, although Samsung is apparently not the only potential partner NVIDIA is looking at. Taiwan-based SPIL and US-based Amkor Technology are two alternative candidates for the 2.5D chip packaging, according to The Elec.

As far as HBM3 memory goes, NVIDIA doesn't have as many potential options, with SK hynix being its current partner, and NVIDIA will continue to work with SK hynix on HBM memory for its high-end AI accelerators and GPUs. It's likely that Samsung is trying to win NVIDIA back as a foundry customer by proving that it's capable of handling the chip packaging for NVIDIA. Samsung will likely use its I-Cube 2.5D packaging technology, and The Elec suggests that Samsung would still be using TSMC-made GPU wafers, which would be mated with Samsung HBM3 memory. Samsung has not yet started mass production of HBM3 memory, but it has provided customers with evaluation samples that are said to have received very positive feedback. For now, nothing has been agreed, and TSMC is, as we know, looking to expand its 2.5D packaging business by over 40 percent, but the question is how quickly TSMC can move before its customers consider other competitors.

Two-ExaFLOP El Capitan Supercomputer Starts Installation Process with AMD Instinct MI300A

When Lawrence Livermore National Laboratory (LLNL) announced the creation of a two-ExaFLOP supercomputer named El Capitan, we heard that AMD would power it with its Instinct MI300 accelerator. Today, LLNL published a tweet that states, "We've begun receiving & installing components for El Capitan, @NNSANews' first #exascale #supercomputer. While we're still a ways from deploying it for national security purposes in 2024, it's exciting to see years of work becoming reality." As published images show, HPE racks filled with AMD Instinct MI300 accelerators are now showing up at LLNL's facility, and the supercomputer is expected to go operational in 2024. This could mean that the November 2023 TOP500 list update won't feature El Capitan, as system enablement would be very hard to achieve in the four months until then.

The El Capitan supercomputer is expected to run on the AMD Instinct MI300A accelerator, which combines 24 Zen 4 cores, the CDNA 3 architecture, and 128 GB of HBM3 memory. Four of these accelerators go inside each HPE node, which also gets liquid-cooling treatment. While we don't have many further details on the memory and storage of El Capitan, we know that the system will exceed two ExaFLOPS at peak and will consume close to 40 MW of power.
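
Those two headline numbers imply a rough power-efficiency figure for the whole machine. The sketch below simply divides peak throughput by the quoted power draw; it assumes the two-ExaFLOP figure and the ~40 MW figure refer to the same system-level envelope and ignores cooling and facility overhead.

```python
# Rough system-level efficiency estimate for El Capitan from the figures above.
peak_flops = 2e18    # two ExaFLOPS at peak, per the article
power_watts = 40e6   # ~40 MW quoted power consumption

print(f"~{peak_flops / power_watts / 1e9:.0f} GFLOPS per watt")  # ~50 GFLOPS/W
```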

AI and HPC Demand Set to Boost HBM Volume by Almost 60% in 2023

High Bandwidth Memory (HBM) is emerging as the preferred solution for overcoming memory transfer speed restrictions due to the bandwidth limitations of DDR SDRAM in high-speed computation. HBM is recognized for its revolutionary transmission efficiency and plays a pivotal role in allowing core computational components to operate at their maximum capacity. Top-tier AI server GPUs have set a new industry standard by primarily using HBM. TrendForce forecasts that global demand for HBM will experience almost 60% growth annually in 2023, reaching 290 million GB, with a further 30% growth in 2024.
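
Taking TrendForce's stated figures at face value, the growth rates translate into rough absolute volumes as follows; the only assumption is that the ~60% and ~30% rates apply to total annual bit demand measured in GB.

```python
# Projecting HBM bit demand from TrendForce's stated growth rates (millions of GB).
demand_2023 = 290      # 2023 forecast
growth_into_2023 = 0.60  # ~60% annual growth in 2023
growth_2024 = 0.30       # ~30% further growth forecast for 2024

implied_2022 = demand_2023 / (1 + growth_into_2023)
projected_2024 = demand_2023 * (1 + growth_2024)

print(f"Implied 2022 demand:   ~{implied_2022:.0f} million GB")   # ~181 million GB
print(f"Projected 2024 demand: ~{projected_2024:.0f} million GB") # ~377 million GB
```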

TrendForce's forecast for 2025, which takes into account five large-scale AIGC products equivalent to ChatGPT, 25 mid-size AIGC products comparable to Midjourney, and 80 small AIGC products, estimates that the minimum computing resources required globally could range from 145,600 to 233,700 NVIDIA A100 GPUs. Emerging technologies such as supercomputers, 8K video streaming, and AR/VR, among others, are expected to simultaneously increase the workload on cloud computing systems due to escalating demands for high-speed computing.

Samsung Electronics Unveils Foundry Vision in the AI Era

Samsung Electronics, a world leader in advanced semiconductor technology, today announced its latest foundry technology innovations and business strategy at the 7th annual Samsung Foundry Forum (SFF) 2023. Under the theme "Innovation Beyond Boundaries," this year's forum delved into Samsung Foundry's mission to address customer needs in the artificial intelligence (AI) era through advanced semiconductor technology.

Over 700 guests from Samsung Foundry's customers and partners attended this year's event, and 38 companies hosted their own booths to share the latest technology trends in the foundry industry.

NVIDIA Allegedly Preparing H100 GPU with 94 and 64 GB Memory

NVIDIA's compute and AI-oriented H100 GPU is supposedly getting an upgrade. The H100 GPU is NVIDIA's most powerful offering and comes in a few different flavors: H100 PCIe, H100 SXM, and H100 NVL (a duo of two GPUs). Currently, the H100 GPU comes with 80 GB of HBM2E in both the PCIe and SXM5 versions of the card. A notable exception is the H100 NVL, which comes with 188 GB of HBM3, but that is for two cards, making it 94 GB each. However, we could see NVIDIA enable 94 and 64 GB options for the H100 accelerator soon, as the latest PCI ID Repository entries show.

According to the PCI ID Repository listing, two messages are posted: "Kindly help to add H100 SXM5 64 GB into 2337." and "Kindly help to add H100 SXM5 94 GB into 2339." These two messages indicate that NVIDIA could be preparing its H100 in more variations. In September 2022, we saw NVIDIA prepare an H100 variation with 120 GB of memory, but that still isn't official. These PCI IDs could simply come from engineering samples that NVIDIA is testing in its labs, and such cards might never appear on the market. So, we have to wait and see how it plays out.

Major CSPs Aggressively Constructing AI Servers and Boosting Demand for AI Chips and HBM, Advanced Packaging Capacity Forecasted to Surge 30~40%

TrendForce reports that explosive growth in generative AI applications like chatbots has spurred significant expansion in AI server development in 2023. Major CSPs including Microsoft, Google, AWS, as well as Chinese enterprises like Baidu and ByteDance, have invested heavily in high-end AI servers to continuously train and optimize their AI models. This reliance on high-end AI servers necessitates the use of high-end AI chips, which in turn will not only drive up demand for HBM during 2023~2024, but is also expected to boost growth in advanced packaging capacity by 30~40% in 2024.

TrendForce highlights that to augment the computational efficiency of AI servers and enhance memory transmission bandwidth, leading AI chip makers such as Nvidia, AMD, and Intel have opted to incorporate HBM. Presently, Nvidia's A100 and H100 chips each boast up to 80 GB of HBM2e and HBM3. In its latest integrated CPU and GPU, the Grace Hopper Superchip, Nvidia expanded a single chip's HBM capacity by 20%, hitting a mark of 96 GB. AMD's MI300 also uses HBM3, with the MI300A capacity remaining at 128 GB like its predecessor, while the more advanced MI300X has ramped up to 192 GB, marking a 50% increase. Google is expected to broaden its partnership with Broadcom in late 2023 to produce the ASIC AI accelerator chip TPU, which will also incorporate HBM memory, in order to extend its AI infrastructure.
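
The percentage increases quoted above follow from the absolute capacities; a quick check of that arithmetic, using only the figures stated in the paragraph:

```python
# Verifying the stated HBM capacity increases against the absolute figures above.
h100_gb = 80          # A100/H100: up to 80 GB of HBM2e/HBM3
grace_hopper_gb = 96  # Grace Hopper Superchip, single chip
mi300a_gb = 128       # MI300A, same as its predecessor
mi300x_gb = 192       # MI300X

print(f"Grace Hopper vs. H100: +{grace_hopper_gb / h100_gb - 1:.0%}")  # +20%
print(f"MI300X vs. MI300A:     +{mi300x_gb / mi300a_gb - 1:.0%}")      # +50%
```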

Insider Info Alleges SK hynix Preparing HBM3E Samples for NVIDIA

Industry insiders in South Korea have informed news publications that NVIDIA has requested that SK hynix submit samples of next-generation high bandwidth memory (HBM) for evaluation purposes—according to Business Korea's article, workers were preparing an initial batch of HBM3E prototypes for shipment this week. SK hynix has an existing relationship with NVIDIA—it fended off tough competition last year and has since produced (current gen) HBM3 DRAM for the H100 "Hopper" Tensor Core GPU.

The memory manufacturer is hoping to maintain its position as the HBM market leader with fifth-generation products in the pipeline—vice president Park Myung-soo revealed back in April: "We are preparing 8 Gbps HBM3E product samples for the second half of this year and are preparing for mass production in the first half of next year." A new partnership with NVIDIA could help SK hynix widen the gulf between it and its nearest competitor - Samsung - in the field of HBM production.

AMD Confirms that Instinct MI300X GPU Can Consume 750 W

AMD recently revealed its Instinct MI300X GPU at its Data Center and AI Technology Premiere event on Tuesday (June 13). The keynote presentation did not provide any details about the new accelerator model's power consumption, but that did not stop one tipster - Hoang Anh Phu - from obtaining this information from Team Red's post-event footnotes. A comparative observation was made: "MI300X (192 GB HBM3, OAM Module) TBP is 750 W, compared to last gen, MI250X TBP is only 500-560 W." A leaked Giga Computing roadmap from last month anticipated server-grade GPUs hitting the 700 W mark.

NVIDIA's Hopper H100 - with its demand for a maximum of 700 W - held the crown as the most power-hungry data center enterprise GPU until now. The MI300X's OCP Accelerator Module-based design now surpasses Team Green's flagship with a slightly greater rating. AMD's new "leadership generative AI accelerator" sports 304 CDNA 3 compute units, a clear upgrade over the MI250X's 220 (CDNA 2) CUs. Engineers have also introduced new 24 GB HBM3 stacks, so the MI300X can be specced with a maximum of 192 GB of memory, while the MI250X is limited to a 128 GB memory capacity with its slower HBM2E stacks. We hope to see sample units producing benchmark results very soon, with the MI300X pitted against the H100.
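
For a sense of how the 192 GB figure and the compute-unit uplift relate to the numbers above, here is a small check. The eight-stack count is an assumption on our part (the article only states the per-stack capacity and the total); the CU figures are taken directly from the paragraph.

```python
# How the MI300X's 192 GB maximum follows from 24 GB HBM3 stacks, plus the CU uplift.
stacks = 8                 # assumed stack count; not stated in the article
stack_capacity_gb = 24     # new-generation HBM3 stacks, per the note above

print(f"Total HBM3: {stacks * stack_capacity_gb} GB")       # 192 GB
print(f"CU increase vs. MI250X: +{304 / 220 - 1:.0%}")      # ~+38% (304 vs. 220 CUs)
```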

AMD Details New EPYC CPUs, Next-Generation AMD Instinct Accelerator, and Networking Portfolio for Cloud and Enterprise

Today, at the "Data Center and AI Technology Premiere," AMD announced the products, strategy and ecosystem partners that will shape the future of computing, highlighting the next phase of data center innovation. AMD was joined on stage with executives from Amazon Web Services (AWS), Citadel, Hugging Face, Meta, Microsoft Azure and PyTorch to showcase the technological partnerships with industry leaders to bring the next generation of high performance CPU and AI accelerator solutions to market.

"Today, we took another significant step forward in our data center strategy as we expanded our 4th Gen EPYC processor family with new leadership solutions for cloud and technical computing workloads and announced new public instances and internal deployments with the largest cloud providers," said AMD Chair and CEO Dr. Lisa Su. "AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD. We are laser focused on accelerating the deployment of AMD AI platforms at scale in the data center, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimized for our hardware."

SK hynix Enters Industry's First Compatibility Validation Process for 1bnm DDR5 Server DRAM

SK hynix Inc. announced today that it has completed the development of 1bnm, the industry's most advanced node and the fifth generation of the 10 nm process technology, and that the company and Intel have begun a joint evaluation and validation of 1bnm in the Intel Data Center Certified memory program for DDR5 products targeted at Intel Xeon Scalable platforms.

The move comes after SK hynix became the first in the industry to reach 1anm readiness and completed Intel's system validation of 1anm DDR5, the fourth generation of the 10 nm technology. The DDR5 products provided to Intel run at the world's fastest speed of 6.4 Gbps (Gigabits per second), representing a 33% improvement in data processing speed compared with test-run products from the early days of DDR5 development.
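
The 33% figure implies a particular baseline speed for those early test-run products; the small calculation below simply backs it out of the stated numbers, assuming the improvement is measured on per-pin data rate.

```python
# Backing out the implied baseline speed from the stated 33% improvement.
current_speed_gbps = 6.4
improvement = 0.33

baseline_gbps = current_speed_gbps / (1 + improvement)
print(f"Implied early-DDR5 test-run speed: ~{baseline_gbps:.1f} Gbps")  # ~4.8 Gbps
```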

Intel Falcon Shores is Initially a GPU, Gaudi Accelerators to Disappear

During the ISC High Performance 2023 international conference, Intel announced interesting updates to its high-performance computing (HPC) and artificial intelligence (AI) roadmap. With the scrapping of Rialto Bridge and Lancaster Sound, Intel merged these accelerator lines into the Falcon Shores processor for HPC and AI, initially planned as a CPU+GPU solution on a single package. However, during the ISC 2023 talk, the company announced a change of plans, and now Falcon Shores is a GPU-only solution destined for a 2025 launch. Originally, Intel wanted to combine x86-64 cores with an Xe GPU to form an "XPU" module that powers HPC and AI workloads. However, Intel did not see a point in forcing customers to choose between specific CPU-to-GPU core ratios that would be fixed in an XPU accelerator. Instead, a regular GPU paired with a separate CPU is Intel's choice for now. In the future, as workloads get more defined, XPU solutions are still a possibility, just delayed from what was originally intended.

Regarding Intel's Gaudi accelerators, the story is about to end. The company originally paid two billion US Dollars for Habana Labs and its Gaudi hardware. However, Intel now plans to stop the Gaudi development as a standalone accelerator and instead use the IP to integrate it into its Falcon Shores GPU. Using modular, tile-based architecture, the Falcon Shores GPU features standard ethernet switching, up to 288 GB of HBM3 running at 9.8 TB/s throughput, I/O optimized for scaling, and support for FP8 and FP16 floating point precision needed for AI and other workloads. As noted, the creation of XPU was premature, and now, the initial Falcon Shores GPU will become an accelerator for HPC, AI, and a mix of both, depending on a specific application. You can see the roadmap below for more information.

Samsung Trademark Applications Hint at Next Gen DRAM for HPC & AI Platforms

The Korea Intellectual Property Rights Information Service (KIPRIS) has been processing a number of trademark applications submitted by Samsung Electronics in recent weeks. News outlets pointed out earlier this month that the South Korean manufacturing conglomerate was attempting to secure the term "Snowbolt" as a moniker for an unreleased HBM3P DRAM-based product. Industry insiders and Samsung representatives have indicated that this high bandwidth memory (with 5 TB/s of bandwidth per stack) will be featured in upcoming cloud servers and high-performance and AI computing platforms slated for release later in 2023.

A Samsung-focused news outlet, SamMobile, reported on May 15 on further trademark applications for next-generation DRAM (Dynamic Random Access Memory) products. Samsung has filed for two additional monikers - "Shinebolt" and "Flamebolt" - and details published online show that these products share the same "designated goods" descriptors as the preceding "Snowbolt" registration: "DRAM modules with high bandwidth for use in high-performance computing equipment, artificial intelligence, and supercomputing equipment" and "DRAM with high bandwidth for use in graphic cards." Kye Hyun Kyung, CEO of Samsung Semiconductor, has been talking up his company's ambitions of competing with rival TSMC in providing cutting-edge component technology, especially in the field of AI computing. It is too early to determine whether these "-bolt" DRAM products will be part of that competitive move, but it is good to know that speedier memory is on the way - future generation GPUs are set to benefit.

India Homegrown HPC Processor Arrives to Power Nation's Exascale Supercomputer

With more countries creating initiatives to develop homegrown processors capable of powering supercomputing facilities, India has just presented its own milestone with Aum HPC. According to a report by The Next Platform, India has developed a processor to power its exascale high-performance computing (HPC) system. Called Aum HPC, the CPU was developed under the National Supercomputing Mission of the Indian government, which funded the Indian Institute of Science, the Department of Science and Technology, the Ministry of Electronics and Information Technology, and C-DAC to design and manufacture the Aum HPC processors and build strong technological independence.

The Aum HPC is based on the Armv8.4 ISA and is a chiplet processor. Each compute chiplet features 48 Arm "Zeus" cores based on the Neoverse V1 IP, so with two chiplets, the processor has 96 cores in total. Each core gets 1 MB of level-two cache and 1 MB of system cache, for 96 MB of L2 cache and 96 MB of system cache in total. For memory, the processor uses 16-channel 32-bit DDR5-5200 with a bandwidth of 332.8 GB/s. On top of that, HBM memory is present: 64 GB of HBM3 with four controllers capable of achieving a bandwidth of 2.87 TB/s. As far as connectivity goes, the Aum HPC processor has 64 PCIe Gen 5 lanes with CXL enabled. It is manufactured on a 5 nm node from TSMC. With a 3.0 GHz typical and 3.5+ GHz turbo frequency, the Aum HPC processor is rated for a TDP of 300 Watts and is capable of producing 4.6+ TeraFLOPS per socket. Below are illustrations and tables comparing Aum HPC to the Fujitsu A64FX, another Arm-based HPC-focused design.
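
The DDR5 bandwidth figure can be reproduced directly from the stated channel configuration, and the HBM3 figure can be split per stack; the short check below assumes four HBM stacks (one per controller) and standard 8-bits-per-byte conversion.

```python
# Verifying the Aum HPC memory bandwidth figures from the stated configuration.

# DDR5: 16 channels, each 32 bits wide, at 5200 MT/s.
ddr5_gb_per_s = 16 * 32 * 5200e6 / 8 / 1e9
print(f"DDR5 bandwidth: {ddr5_gb_per_s:.1f} GB/s")  # 332.8 GB/s, matching the article

# HBM3: 2.87 TB/s across four controllers, assuming one stack per controller.
per_stack_gb_per_s = 2.87e12 / 4 / 1e9
print(f"Per HBM3 stack: ~{per_stack_gb_per_s:.0f} GB/s")  # ~718 GB/s per stack
```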

Samsung Electronics Announces First Quarter 2023 Results, Profits Lowest in 14 Years

Samsung Electronics today reported financial results for the first quarter ended March 31, 2023. The Company posted KRW 63.75 trillion in consolidated revenue, a 10% decline from the previous quarter, as overall consumer spending slowed amid the uncertain global macroeconomic environment. Operating profit was KRW 0.64 trillion as the DS (Device Solutions) Division faced decreased demand, while profit in the DX (Device eXperience) Division increased.

The DS Division's profit declined from the previous quarter due to weak demand in the Memory Business, a decline in utilization rates in the Foundry Business and continued weak demand and inventory adjustments from customers. Samsung Display Corporation (SDC) saw earnings in the mobile panel business decline quarter-on-quarter amid a market contraction, while the large panel business slightly narrowed its losses. The DX Division's results improved on the back of strong sales of the premium Galaxy S23 series as well as an enhanced sales mix focusing on premium TVs.

SK hynix Develops Industry's First 12-Layer HBM3, Provides Samples To Customers

SK hynix announced today that it has become the first in the industry to develop a 12-layer HBM3 product with a 24 gigabyte (GB) memory capacity, currently the largest in the industry, and said that customers' performance evaluation of samples is underway. HBM (High Bandwidth Memory) is a high-value, high-performance memory that vertically interconnects multiple DRAM chips and dramatically increases data processing speed in comparison to traditional DRAM products; HBM3 is the fourth-generation product, succeeding the previous generations HBM, HBM2 and HBM2E.

"The company succeeded in developing the 24 GB package product that increased the memory capacity by 50% from the previous product, following the mass production of the world's first HBM3 in June last year," SK hynix said. "We will be able to supply the new products to the market from the second half of the year, in line with growing demand for premium memory products driven by the AI-powered chatbot industry." SK hynix engineers improved process efficiency and performance stability by applying Advanced Mass Reflow Molded Underfill (MR-MUF)# technology to the latest product, while Through Silicon Via (TSV)## technology reduced the thickness of a single DRAM chip by 40%, achieving the same stack height level as the 16 GB product.