News Posts matching #HPC

Bolt Graphics Announces Zeus GPU for High Performance Workloads

Bolt Graphics announces Zeus, a completely new GPU design for high-performance workloads including rendering, HPC, and gaming. Zeus addresses the performance, efficiency, and functionality limitations of legacy GPUs.

Bolt claims Zeus is orders of magnitude faster than any other GPU in key workloads: up to 10x in rendering performance, 6x in FP64 HPC workloads, and 300x in electromagnetic wave simulations. Users running these types of demanding workloads need access to large amounts of memory. Bolt brings expandable memory to GPUs for the first time, allowing users to increase capacity up to 384 GB on a PCIe card and up to 2.25 TB per Zeus device in a 2U server. A rack of Zeus 2U servers can be configured with up to 180 TB of memory, 8x more than racks of legacy GPUs.
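
As a rough sanity check on those capacity figures, the sketch below works through the arithmetic. The per-server and per-rack device counts are assumptions for illustration; Bolt's announcement does not state how many Zeus devices make up a 180 TB rack.

```python
# Back-of-the-envelope check of the Zeus memory-capacity claims above.
# ASSUMPTION (not stated by Bolt): 4 Zeus devices per 2U server and
# 20 such servers in a rack (40U of a standard 42U cabinet).

GB_PER_PCIE_CARD = 384     # claimed maximum for a single PCIe card
TB_PER_ZEUS_2U   = 2.25    # claimed maximum per Zeus device in a 2U server
ZEUS_PER_SERVER  = 4       # assumption
SERVERS_PER_RACK = 20      # assumption

rack_memory_tb = TB_PER_ZEUS_2U * ZEUS_PER_SERVER * SERVERS_PER_RACK
print(f"PCIe card: {GB_PER_PCIE_CARD} GB, rack: {rack_memory_tb:.0f} TB")  # 180 TB, matching the claim
```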

Codasip Selected to Design a High-End RISC-V Processor for the EU-Funded DARE Project

Codasip, the European RISC-V leader, announced that it has been selected to provide a general purpose, high-end processor as part of the large-scale European supercomputing project Digital Autonomy with RISC-V in Europe (DARE).

DARE is set to build a supercomputing compute stack featuring high-performance and energy-efficient RISC-V-based processors and accelerators designed and developed in Europe. The European Union has committed €240 million in funding for the first three-year phase of the program. The selected partners will leverage hardware/software co-design to achieve competitive performance and efficiency.

AMD Discusses EPYC's "No Compromise" Driving of Performance and Efficiency

One of the main pillars that vendors of Arm-based processors often cite as a competitive advantage versus x86 processors is a keen focus on energy efficiency and predictability of performance. In the quest for higher efficiency and performance, Arm vendors have largely designed out the ability to operate on multiple threads concurrently—something most enterprise-class CPUs have offered for years as simultaneous multithreading (SMT), a technology itself created to deliver performance and efficiency benefits.

Arm vendors often claim that SMT brings security risks, creates performance unpredictability through shared-resource contention, and adds cost and energy to implement. Interestingly, Arm does support multi-threading in its Neoverse E1-class processor family for embedded uses such as automotive. Given these incongruities, this blog intends to provide a bit more clarity to help customers assess which attributes of performance and efficiency really bring them value for their critical workloads.

Giga Computing, SK Telecom, and SK Enmove to Collaborate on AI Data Center Liquid Cooling Technology

Giga Computing, a subsidiary of GIGABYTE Technology, has signed a Memorandum of Understanding (MoU) with SK Telecom and SK Enmove to collaborate on advancing AI Data Center (AIDC) and high-performance computing (HPC) while accelerating the adoption of liquid cooling technology in next-generation data centers.
This strategic partnership sets the stage to nurture and develop high-performance, energy-efficient, and sustainable data center solutions.

Driving AI and Cooling Technology Innovation Together
Under the MoU, Giga Computing will provide high-performance AI servers, liquid cooling technologies, and modular AI clusters to support SK's various business units, including:
  • SK Telecom: Strengthening AIDC infrastructure to support next-generation data centers
  • SK Enmove: Advancing liquid cooling technologies to improve energy efficiency and sustainability in data centers

China Doubles Down on Semiconductor Research, Outpacing US with High-Impact Papers

When the US imposed sanctions on Chinese semiconductor makers, China began the push for sovereign chipmaking tools. According to a study conducted by the Emerging Technology Observatory (ETO), Chinese institutions have dramatically outpaced their US counterparts in next-generation chipmaking research. Between 2018 and 2023, nearly 475,000 scholarly articles on chip design and fabrication were published worldwide. Chinese research groups contributed 34% of the output—compared to just 15% from the United States and 18% from Europe. The study further emphasizes the quality of China's contributions. Focusing on the top 10% of the most-cited articles, Chinese researchers were responsible for 50% of this high-impact work, while American and European research accounted for only 22% and 17%, respectively.

This trend shows China's lead isn't about numbers only, and suggests that its work is resonating strongly within the global academic community. Key research areas include neuromorphic and optoelectronic computing and, of course, lithography tools. China is operating mainly outside the scope of US export restrictions that have, since 2022, restricted access to advanced chipmaking equipment—specifically, tools necessary for fabricating chips below the 14 nm process node. Although US sanctions were intended to limit China's access to cutting-edge manufacturing technology, the massive body of Chinese research suggests that these measures might eventually prove less effective, with Chinese institutions continuing to push forward with influential, high-citation studies. However, Chinese theoretical work is yet to be proven in the field, as only a single Chinese company, SMIC, currently manufactures on 7 nm and 5 nm-class nodes. Chinese semiconductor makers still need more advanced lithography solutions to reach high-volume manufacturing on nodes like 3 nm and 2 nm and create more powerful domestic chips for AI and HPC.

Alibaba Adds New "C930" Server-grade Chip to XuanTie RISC-V Processor Series

Damo Academy—a research and development wing of Alibaba—launched its debut "server-grade processor" design late last week, in Beijing. According to a South China Morning Post (SCMP) news article, the C930 model is a brand-new addition to the e-commerce platform's XuanTie RISC-V CPU series. Company representatives stated that their latest product is designed as a server-level and high-performance computing (HPC) solution. Going back to March 2024, TechPowerUp and other Western hardware news outlets picked up on Alibaba's teasing of the Xuantie C930 SoC, and a related Xuantie 907 matrix processing unit. Fast-forward to the present day; Damo Academy has disclosed that initial shipments—of finalized C930 units—will be sent out to customers this month.

The newly released open-source RISC-V architecture-based HPC chip is an unknown quantity in terms of technical specifications. Damo Academy reps did not provide any detailed information during last Friday's conference (February 28). SCMP's report noted the R&D division's emphasis on "its role in advancing RISC-V adoption" within various high-end fields. The XuanTie engineering team has reportedly "supported the implementation of more than thirty percent of RISC-V high-performance processors." Upcoming additions will arrive in the form of the C908X for AI acceleration, the R908A for automotive processing solutions, and an XL200 model for high-speed interconnection. These XuanTie projects are reportedly still deep in development.

Baya Systems and Semidynamics Collaborate to Accelerate RISC-V System-on-Chip Development

Baya Systems, a leader in system IP technology that empowers the acceleration of intelligent compute, and Semidynamics, a provider of fully customizable high-bandwidth and high-performance RISC-V processor IP, today announced a collaboration to boost innovation in development of hyper-efficient, next-generation platforms for artificial intelligence (AI), machine learning (ML) and high-performance computing (HPC) applications.

The collaboration integrates Semidynamics' family of 64-bit RISC-V processor IP cores, known for their exceptional memory bandwidth and configurability, with Baya Systems' innovative WeaveIP Network on Chip (NoC) system IP. WeaveIP is engineered for ultra-efficient, high-bandwidth, and low-latency data transport, crucial for the demands of modern workloads. Complementing this is Baya Systems' software-driven WeaverPro platform, which enables rapid system-level optimization, ensuring that key performance indicators (KPIs) are met based on real-world workloads while providing unparalleled design flexibility for future advancements.

MSI Announces New Server Platforms Supporting Intel Xeon 6 Family of Processors

MSI introduces new server platforms powered by the latest Intel Xeon 6 family of processors with the Performance Cores (P-Cores). Engineered for high-density performance, seamless scalability, and energy-efficient operations, these servers deliver exceptional throughput, dynamic workload flexibility, and optimized power efficiency. Optimized for AI-driven applications, modern data centers, and cloud-native workloads, MSI's new platforms help lower total cost of ownership (TCO) while maximizing infrastructure efficiency and resource optimization.

"As data-driven transformation accelerates across industries, businesses require solutions that not only deliver performance but also enable sustainable growth and operational agility," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "Our Intel Xeon 6 processor-based servers are designed to support this shift by offering high-core scalability, energy-efficient performance, and dynamic workload optimization. These capabilities empower organizations to maximize compute density, streamline their digital ecosystems, and respond to evolving market demands with greater speed and efficiency."

Intel Unveils High-Performance, Power-Efficient Ethernet Solutions

Intel today launched two new Ethernet product lines - the Intel Ethernet E830 Controllers and Network Adapters, and the Intel Ethernet E610 Controllers and Network Adapters - designed to meet the growing demands of enterprise, telecommunications, cloud, edge, high performance computing (HPC) and artificial intelligence (AI) applications. These next-generation solutions provide robust, high-performance connectivity while enhancing energy efficiency and security, and lowering total cost of ownership (TCO).

"In today's interconnected world, networking is essential to the success of business and technology transformation. With the launch of the Intel Ethernet E830 and E610 products, we are helping customers meet the growing demand for high-performance, energy-efficient solutions that optimize network infrastructures, lower operational costs and enhance TCO." -Bob Ghaffari, Intel vice president, Network and Edge Group

Intel Unveils Leadership AI and Networking Solutions with Xeon 6 Processors

As enterprises modernize infrastructure to meet the demands of next-gen workloads like AI, high-performing and efficient compute is essential across the full spectrum - from data centers to networks, edge and even the PC. To address these challenges, Intel today launched its Xeon 6 processors with Performance-cores (P-cores), providing industry-leading performance for the broadest set of data center and network infrastructure workloads and best-in-class efficiency to create an unmatched server consolidation opportunity.

"We are intensely focused on bringing cutting-edge leadership products to market that solve our customers' greatest challenges and help drive the growth of their business," said Michelle Johnston Holthaus, interim co-CEO of Intel and CEO of Intel Products. "The Xeon 6 family delivers the industry's best CPU for AI and groundbreaking features for networking, while simultaneously driving efficiency and bringing down the total cost of ownership."

MITAC Computing Announces Intel Xeon 6 CPU-powered Next-gen AI & HPC Server Series

MiTAC Computing Technology Corporation, a leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation, today announced the launch of its latest server systems and motherboards powered by the latest Intel Xeon 6 with P-core processors. These industry-leading processors are designed for compute-intensive workloads, providing up to twice the performance for the widest range of workloads including AI and HPC.

Driving Innovation in AI and High-Performance Computing
"For over a decade, MiTAC Computing has collaborated with Intel to push the boundaries of server technology, delivering cutting-edge solutions optimized for AI and high-performance computing (HPC)," said Rick Hwang, President of MiTAC Computing Technology Corporation. "With the integration of the latest Intel Xeon 6 P-core processors our servers now unlock groundbreaking AI acceleration, boost computational efficiency, and scale cloud operations to new heights. These innovations provide our customers with a competitive edge, empowering them to tackle demanding workloads with superior empower our customers with a competitive edge through superior performance and an optimized total cost of ownership."

Senao Networks Unveils AI Driven Computing at MWC Barcelona 2025

Senao Networks Inc. (SNI), a global leader in AI computing and networking solutions, will be exhibiting at 2025 Mobile World Congress (MWC) in Barcelona. At the event, SNI will showcase its latest AI-driven innovations, including AI Servers, AI Cameras, AIPCs, Cloud Solutions, and Titanium Power Supply, reinforcing its vision of "AI Everywhere."

Senao Networks continues to advance AI computing with new products designed to enhance security, efficiency, and connectivity.

Global Semiconductor Manufacturing Industry Reports Solid Q4 2024 Results

The global semiconductor manufacturing industry closed 2024 with strong fourth quarter results and solid year-on-year (YoY) growth across most of the key industry segments, SEMI announced today in its Q4 2024 publication of the Semiconductor Manufacturing Monitor (SMM) Report, prepared in partnership with TechInsights. The industry outlook is cautiously optimistic at the start of 2025 as seasonality and macroeconomic uncertainty may impede near-term growth despite momentum from strong investments related to AI applications.

After declining in the first half of 2024, electronics sales bounced back later in the year resulting in a 2% annual increase. Electronics sales grew 4% YoY in Q4 2024 and are expected to see a 1% YoY increase in Q1 2025 impacted by seasonality. Integrated circuit (IC) sales rose by 29% YoY in Q4 2024 and continued growth is expected in Q1 2025 with a 23% increase YoY as AI-fueled demand continues boosting shipments of high-performance computing (HPC) and datacenter memory chips.

Samsung Electronics Announces Fourth Quarter and FY 2024 Results

Samsung Electronics today reported financial results for the fourth quarter and the fiscal year 2024. The Company posted KRW 75.8 trillion in consolidated revenue and KRW 6.5 trillion in operating profit in the quarter ended December 31, 2024. For the full year, it reported KRW 300.9 trillion in annual revenue and KRW 32.7 trillion in operating profit.

Although fourth quarter revenue and operating profit decreased on a quarter-on-quarter (QoQ) basis, annual revenue reached the second-highest on record, surpassed only in 2022. Meanwhile, operating profit was down KRW 2.7 trillion QoQ, due to soft market conditions especially for IT products, and an increase in expenditures including R&D. In the first quarter of 2025, while overall earnings improvement may be limited due to weakness in the semiconductors business, the Company aims to pursue growth through increased sales of smartphones with differentiated AI experiences, as well as premium products in the Device eXperience (DX) Division.

Fujifilm Pumps ¥100 Billion in Semiconductor Material Expansion to Meet Chip Demand

According to Nikkei, Fujifilm Holdings is reportedly set to invest ¥100 billion ($640.5 million) by March 2027 to expand production capacities in the U.S., Japan, South Korea, and India. The main focus of the expansion is semiconductor materials, which are vital for the modern semiconductor supply chain. While the company has not officially confirmed the plan, it follows global efforts to strengthen material supply chains as chipmakers ramp up the construction of cutting-edge fabs in these regions. The investment, doubling Fujifilm's semiconductor materials spending over the past three years, targets rising demand driven by new fabs from Intel, TSMC, Samsung, and SK Hynix, particularly for AI and HPC. Fujifilm, ranked fifth globally in photosensitive materials, is one of only five companies worldwide producing ultra-pure photoresists for extreme ultraviolet (EUV) lithography.

These photoresists must meet rigorous standards due to EUV's 13.5 nm wavelength, requiring precision in sensitivity, resolution, and compatibility with mask materials. Fujifilm strategically locates facilities near major hubs to strengthen partnerships with key clients. In Japan, a ¥13 billion ($83.27 million) plant in Shizuoka Prefecture is underway, while a South Korean site in Pyeongtaek will receive upgraded equipment by autumn. A Cheonan facility, set for spring 2027, aims to boost the output of chemical mechanical planarization (CMP) agents by 30%. The company also eyes India's emerging semiconductor sector, exploring partnerships or joint ventures to establish local production post-2027.

Element Six Introduces Copper-Diamond Composite Material to Enhance Cooling of Advanced Semiconductor Devices

Element Six (E6), a pioneer in the development of synthetic diamond advanced material solutions, will launch an innovative Cu-diamond product at Photonics West 2025. Cu-Diamond is a copper-plated diamond composite material with high thermal and electrical conductivity. Designed to address the increasingly critical thermal management challenges in advanced semiconductor devices, this cost-effective solution enables greater performance and reliability for applications such as Artificial Intelligence (AI), high-performance computing (HPC), and GaN RF devices.

As semiconductor devices have grown larger and more powerful, managing heat dissipation has become a significant challenge for the industry. More than 50 percent of all electronic device failures are heat-related, and data centers, which today consume 3.7 percent of total U.S. power demand, are predicted to reach 10 percent by 2029. As a result, thermal management innovation is critical to enabling next-generation performance and energy efficiency.

EK Unveils New Fluid Works CASCADE 4U8G Barebone in Collaboration With SilverStone

EK, the leader in liquid cooling solutions, is proud to announce the EK Fluid Works CASCADE 4U8G Barebone, a revolutionary liquid-cooled rack-mount workstation solution developed in collaboration with SilverStone Technology Co., Ltd. This marks a pivotal expansion of EK's Enterprise product line, designed to meet the growing demand for compact, scalable, and high-performance computing setups for professional and enthusiast applications alike.

Revolutionizing High-Performance Computing
The EK Fluid Works CASCADE 4U8G Barebone is a compact 4U rack-mount workstation engineered to support up to 8 liquid-cooled GPUs, including the new NVIDIA GeForce RTX 50 Series GPUs, doubling the capacity of traditional air-cooled solutions. Leveraging advanced liquid cooling designed by EK, the system ensures superior thermal performance, enabling sustained peak performance and efficiency across demanding workloads such as AI, machine learning, 3D rendering, and scientific simulations.

InWin Introduces New Server & IPC Equipment at CES 2025

InWin has showcased several new server chassis models at CES—these new introductions form part of the company's efforts to expand regional IPC, server, and systems assembly operations going into 2025. New manufacturing facilities in the USA and Malaysia were brought online last year, and new products have sprung forth. TechPowerUp staffers were impressed by InWin's RG650B model—this cavernous rackmount GPU server has been designed with AI and HPC applications in mind. Its 6.5U dual-chamber design is divided into two sections with optimized and independent heat dissipation systems—GPU accelerators are destined for the 4.5U space, while the motherboard and CPUs go into the 2U chamber.

The RG650B's front section is dominated by nine pre-installed hot-swappable 80 x 30 mm (12,000 RPM max. rated) PWM fans. This array should provide plenty of cooling for any contained hardware; these components will be powered by an 80 Plus Titanium CRPS 3200 W PSU (with four 12V-2x6 pin connectors). InWin's spec sheet states that the RG650B supports 18 FHFL PCI-Express slots with four PCI-Express riser cables—granting plenty of potential for the installation of add-in boards.

Gigabyte Expands Its Accelerated Computing Portfolio with New Servers Using the NVIDIA HGX B200 Platform

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, announced new GIGABYTE G893 series servers using the NVIDIA HGX B200 platform. The launch of these flagship 8U air-cooled servers, the G893-SD1-AAX5 and G893-ZD1-AAX5, signifies a new architecture and platform change for GIGABYTE in the demanding world of high-performance computing and AI, setting new standards for speed, scalability, and versatility.

These servers join GIGABYTE's accelerated computing portfolio alongside the NVIDIA GB200 NVL72 platform, which is a rack-scale design that connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs. At CES 2025 (January 7-10), the GIGABYTE booth will display the NVIDIA GB200 NVL72, and attendees can engage in discussions about the benefits of GIGABYTE platforms with the NVIDIA Blackwell architecture.

Intel Co-CEO Dampens Expectations for First-Gen "Falcon Shores" GPU

Intel's ambitious plan to challenge AMD and NVIDIA in the AI accelerator market may still be a little questionable, according to recent comments from interim co-CEO Michelle Johnston Holthaus at the Barclays 22nd Annual Global Technology Conference. The company's "Falcon Shores" project, which aims to merge Gaudi AI capabilities with Intel's data center GPU technology for HPC workloads, received surprising commentary from Holthaus. "We really need to think about how we go from Gaudi to our first generation of Falcon Shores, which is a GPU," she stated, before acknowledging potential limitations. "And I'll tell you right now, is it going to be wonderful? No, but it is a good first step."

Intel's pragmatic approach to AI hardware development was further highlighted when Holthaus addressed the company's product strategy. Rather than completely overhauling their development pipeline, she emphasized the value of iterative progress: "If you just stop everything and you go back to doing like all new product, products take a really long time to come to market. And so, you know, you're two years to three years out from having something." The co-CEO advocated for a more agile approach, stating, "I'd rather have something that I can do in smaller volume, learn, iterate, and get better so that we can get there." She acknowledged the enduring nature of AI market opportunities, particularly noting the current focus on training while highlighting the potential in other areas: "Obviously, AI is not going away. Obviously training is, you know, the focus today, but there's inference opportunities in other places where there will be different needs from a hardware perspective."

Fujitsu Previews Monaka: 144-Core Arm CPU Made with Chiplets

Fujitsu has previewed its next-generation Monaka processor, a 144-core powerhouse for data centers. Satoshi Matsuoka of the RIKEN Center for Computational Science showcased the mechanical sample on social media platform X. The Monaka processor is developed in collaboration with Broadcom and employs an innovative 3.5D eXtreme Dimension System-in-Package architecture featuring four 36-core chiplets manufactured using TSMC's N2 process. These chiplets are stacked face-to-face with SRAM tiles through hybrid copper bonding, utilizing TSMC's N5 process for the cache layer. A distinguishing feature of the Monaka design is its approach to memory architecture: rather than incorporating HBM, Fujitsu has opted for pure cache dies below the compute logic in combination with DDR5 DRAM compatibility, potentially leveraging advanced modules like MR-DIMM and MCR-DIMM.

The processor's I/O die supports cutting-edge interfaces, including DDR5 memory, PCIe 6.0, and CXL 3.0, for seamless integration with modern data center infrastructure. Security is handled through the implementation of Armv9-A's Confidential Computing Architecture for enhanced workload isolation. Fujitsu has set ambitious goals for the Monaka processor: the company aims to achieve twice the energy efficiency of current x86 processors by 2027 while maintaining air cooling capabilities. The processor targets both AI and HPC workloads with Arm SVE2 support, which enables vector lengths of up to 2,048 bits. Scheduled for release during Fujitsu's fiscal year 2027 (April 2026 to March 2027), the Monaka processor is shaping up as a competitor to AMD's EPYC and Intel's Xeon processors.

Advantech Introduces Its GPU Server SKY-602E3 With NVIDIA H200 NVL

Advantech, a leading global provider of industrial edge AI solutions, is excited to introduce its GPU server SKY-602E3 equipped with the NVIDIA H200 NVL platform. This powerful combination is set to accelerate the offline LLM for manufacturing, providing unprecedented levels of performance and efficiency. The NVIDIA H200 NVL, requiring 600 W passive cooling, is fully supported by the compact and efficient SKY-602E3 GPU server, making it an ideal solution for demanding edge AI applications.

Core of Factory LLM Deployment: AI Vision
The SKY-602E3 GPU server excels in supporting large language models (LLMs) for AI inference and training. It features four PCIe 5.0 x16 slots, delivering high bandwidth for intensive tasks, and four PCIe 5.0 x8 slots, providing enhanced flexibility for GPU and frame grabber card expansion. The half-width design of the SKY-602E3 makes it an excellent choice for workstation environments. Additionally, the server can be equipped with the NVIDIA H200 NVL platform, which offers 1.7x more performance than the NVIDIA H100 NVL, freeing up additional PCIe slots for other expansion needs.
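
For context on the "high bandwidth" claim, here is a minimal sketch of the theoretical per-direction PCIe 5.0 bandwidth the described slot layout provides; real-world throughput will be lower once packet and protocol overhead are accounted for.

```python
# Theoretical per-direction PCIe 5.0 bandwidth for the SKY-602E3 slot layout
# described above (four x16 slots plus four x8 slots). Only 128b/130b line
# encoding is accounted for; packet and protocol overhead are ignored.

GT_PER_LANE = 32                             # PCIe 5.0 signalling rate, GT/s
GB_S_PER_LANE = GT_PER_LANE * 128 / 130 / 8  # ~3.9 GB/s per lane, per direction

x16_slot = 16 * GB_S_PER_LANE                # ~63 GB/s
x8_slot = 8 * GB_S_PER_LANE                  # ~31.5 GB/s
aggregate = 4 * x16_slot + 4 * x8_slot

print(f"x16 slot: {x16_slot:.1f} GB/s, x8 slot: {x8_slot:.1f} GB/s, total: {aggregate:.0f} GB/s")
```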

Intel 18A Yields Are Actually Okay, And The Math Checks Out

A few days ago, we published a report about Intel's 18A yields being at an abysmal 10%. This sparked quite a lot of discussion among the tech community, as well as responses from industry analysts and Intel's now ex-CEO Pat Gelsinger. Today, we are diving into known information about Intel's 18A node and checking out what the yields of possible products could be, using tools such as the Die Yield Calculator from SemiAnalysis. First, we know that the reported defect density of the 18A node is 0.4 defects per cm². This information is from August, and up-to-date figures could be much lower, especially since semiconductor nodes tend to improve even after they are production-ready. To estimate yields, manufacturers use various yield models based on the information they have, such as the aforementioned 0.4 defect density. Expressed in defects per square centimeter (def/cm²), defect density measures manufacturing process quality by quantifying the average number of defects present in each unit area of a semiconductor wafer.
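
To make that defect-density figure concrete, the sketch below applies the simple Poisson yield model, one of the models such calculators offer. Treat the outputs as illustrative only: real products use more refined models and benefit from redundancy and partial-die harvesting.

```python
import math

# Simple Poisson yield model, Y = exp(-A * D0), using the 0.4 def/cm^2 figure
# quoted above. Illustrative only: tools like the SemiAnalysis calculator also
# offer Murphy and negative-binomial models, which give somewhat different numbers.

D0 = 0.4  # defect density, defects per cm^2

def poisson_yield(die_area_mm2: float, defect_density: float = D0) -> float:
    """Expected fraction of defect-free dies for a given die area."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density)

for area in (100, 300, 600):
    print(f"{area:3d} mm^2 die -> {poisson_yield(area) * 100:4.1f}% yield")
# 100 mm^2 -> ~67%, 300 mm^2 -> ~30%, 600 mm^2 -> ~9%
```

Notably, at a 600 mm² die size the same 0.4 def/cm² figure already lands near the 10% number from the earlier report, which is exactly the die-size caveat the next paragraph expands on.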

Measuring yields is a complex task. Manufacturers design smaller chips for mobile and bigger chips for HPC, and the two yield very differently: bigger chips require more silicon area and are more likely to catch a defect, while a wafer of smaller mobile chips loses fewer usable dies to the same number of defects. Stating that a node only yields x% of usable chips is therefore only one side of the story, as the size of the test production chip is not known. For example, NVIDIA's H100 die measures 814 mm²—a size that pushes modern manufacturing to its limits. The size of a modern photomask, the actual pattern mask used to print a chip's design onto the silicon wafer, is only 858 mm² (26 x 33 mm), so that is the limit before exceeding the mask and needing a redesign. At that size, nodes yield far fewer usable chips than something like a 100 mm² mobile chip, where defects don't wreak havoc on the yield curve.
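
Continuing with the same assumptions, here is a hedged comparison of an H100-sized die against a 100 mm² mobile-class die on a 300 mm wafer, using the common dies-per-wafer approximation and the Poisson model from the previous sketch. Real yields are typically better thanks to repairable defects and partial-die harvesting.

```python
import math

# Good dies per 300 mm wafer at D0 = 0.4 def/cm^2 for a large HPC die versus a
# small mobile die. Uses the common dies-per-wafer approximation; scribe lines,
# reticle stitching, and defect tolerance/harvesting are ignored, so the absolute
# numbers are only illustrative.

D0 = 0.4              # defects per cm^2
WAFER_DIAMETER = 300  # mm

def dies_per_wafer(die_area_mm2: float) -> float:
    radius = WAFER_DIAMETER / 2
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * WAFER_DIAMETER / math.sqrt(2 * die_area_mm2))

def good_dies(die_area_mm2: float) -> float:
    yield_fraction = math.exp(-(die_area_mm2 / 100.0) * D0)
    return dies_per_wafer(die_area_mm2) * yield_fraction

for area in (814, 100):  # H100-class die vs. a small mobile-class die
    print(f"{area:3d} mm^2: ~{dies_per_wafer(area):5.1f} candidates, "
          f"~{good_dies(area):5.1f} defect-free dies per wafer")
```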

"Jaguar Shores" is Intel's Successor to "Falcon Shores" Accelerator for AI and HPC

Intel has prepared "Jaguar Shores," its "next-next" generation AI and HPC accelerator and the successor to its upcoming "Falcon Shores" GPU. The chip was revealed, apparently unintentionally, by Intel's Habana Labs division during a technical workshop at the SC2024 conference. This positions Jaguar Shores as the successor to Falcon Shores, which is scheduled to launch next year. While details about Jaguar Shores remain sparse, its designation suggests it could be a general-purpose GPU (GPGPU) aimed at AI training, inferencing, and HPC tasks. Intel's strategy aligns with its push to incorporate advanced manufacturing nodes, such as the 18A process featuring RibbonFET and backside power delivery, which promise significant efficiency gains, so we can expect upcoming AI accelerators to incorporate these technologies.

Intel's AI chip lineup has faced numerous challenges, including shifting plans for Falcon Shores, which has transitioned from a CPU-GPU hybrid to a standalone GPU, and the cancellation of Ponte Vecchio. Despite financial constraints and job cuts, Intel has maintained its focus on developing cutting-edge AI solutions. "We continuously evaluate our roadmap to ensure it aligns with the evolving needs of our customers. While we don't have any new updates to share, we are committed to providing superior enterprise AI solutions across our CPU and accelerator/GPU portfolio," an Intel spokesperson stated. The announcement of Jaguar Shores shows Intel's determination to remain competitive. However, the company faces steep competition: NVIDIA and AMD continue to set benchmarks with performant designs, while Intel has struggled to capture a significant share of the AI training market. The company's Gaudi lineup ends with the third generation, and Gaudi IP will be integrated into Falcon Shores.

Renesas Unveils Industry's First Complete Chipset for Gen-2 DDR5 Server MRDIMMs

Renesas Electronics Corporation, a premier supplier of advanced semiconductor solutions, today announced that it has delivered the industry's first complete memory interface chipset solutions for second-generation DDR5 Multiplexed Rank Dual In-Line Memory Modules (MRDIMMs).

The new DDR5 MRDIMMs are needed to keep pace with the ever-increasing memory bandwidth demands of Artificial Intelligence (AI), High-Performance Compute (HPC) and other data center applications. They deliver operating speeds up to 12,800 Mega Transfers Per Second (MT/s), a 1.35x improvement in memory bandwidth over first-generation solutions. Renesas has been instrumental in the design, development and deployment of the new MRDIMMs, collaborating with industry leaders including CPU and memory providers, along with end customers.
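
As a quick illustration of what that transfer rate means in bandwidth terms, the sketch below converts MT/s into peak per-module throughput, assuming the standard 64-bit DDR5 data path and ignoring ECC bits and protocol overhead.

```python
# Convert the quoted Gen-2 MRDIMM transfer rate into peak per-module bandwidth.
# Assumes the standard 64-bit (8-byte) DDR5 data path; ECC bits and protocol
# overhead are ignored.

TRANSFER_RATE_MT_S = 12_800   # quoted Gen-2 DDR5 MRDIMM speed
BYTES_PER_TRANSFER = 8        # 64-bit data bus

peak_gb_s = TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000
print(f"Peak bandwidth per module: {peak_gb_s:.1f} GB/s")  # 102.4 GB/s
```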