News Posts matching #AI

"Jaguar Shores" is Intel's Successor to "Falcon Shores" Accelerator for AI and HPC

Intel has prepared "Jaguar Shores," its "next-next" generation AI and HPC accelerator and the successor to its upcoming "Falcon Shores" GPU. The chip was revealed, apparently unintentionally, by Intel's Habana Labs division during a technical workshop at the SC24 conference. Falcon Shores itself is scheduled to launch next year. While details about Jaguar Shores remain sparse, its designation suggests it could be a general-purpose GPU (GPGPU) aimed at AI training, inferencing, and HPC tasks. Intel's strategy aligns with its push to incorporate advanced manufacturing nodes, such as the 18A process featuring RibbonFET and backside power delivery, which promise significant efficiency gains, so upcoming AI accelerators can be expected to adopt these technologies.

Intel's AI chip lineup has faced numerous challenges, including shifting plans for Falcon Shores, which has transitioned from a CPU-GPU hybrid to a standalone GPU, and the cancellation of Ponte Vecchio. Despite financial constraints and job cuts, Intel has maintained its focus on developing cutting-edge AI solutions. "We continuously evaluate our roadmap to ensure it aligns with the evolving needs of our customers. While we don't have any new updates to share, we are committed to providing superior enterprise AI solutions across our CPU and accelerator/GPU portfolio," an Intel spokesperson stated. The announcement of Jaguar Shores shows Intel's determination to remain competitive, but the company faces steep competition: NVIDIA and AMD continue to set performance benchmarks, while Intel has struggled to capture a significant share of the AI training market. The company's Gaudi lineup ends with its third generation, and the Gaudi IP will be integrated into Falcon Shores.

Aetina Debuts at SC24 With NVIDIA MGX Server for Enterprise Edge AI

Aetina, a subsidiary of the Innodisk Group and an expert in edge AI solutions, is pleased to announce its debut at Supercomputing (SC24) in Atlanta, Georgia, showcasing the innovative SuperEdge NVIDIA MGX short-depth edge AI server, the AEX-2UA1. By integrating an enterprise-class on-premises large language model (LLM) with the advanced retrieval-augmented generation (RAG) technique, the Aetina NVIDIA MGX short-depth server demonstrates exceptional enterprise edge AI performance, setting a new benchmark in edge AI innovation. The server is powered by the latest Intel Xeon 6 processor and dual high-end double-width NVIDIA GPUs, delivering ultimate AI computing power in a compact 2U form factor and accelerating generative AI at the edge.

The SuperEdge NVIDIA MGX server expands Aetina's product portfolio from specialized edge devices to comprehensive AI server solutions, propelling a key milestone in Innodisk Group's AI roadmap, from sensors and storage to AI software, computing platforms, and now AI edge servers.

Lenovo Launches ThinkShield Firmware Assurance for Deep Protection Above and Below the Operating System

Today, Lenovo announced the introduction of ThinkShield Firmware Assurance as part of its portfolio of enterprise-grade cybersecurity solutions. ThinkShield Firmware Assurance is one of the few computer OEM solutions to enable deep visibility and protection below the operating system (OS) by embracing Zero Trust Architecture (ZTA) component-level visibility to generate more accurate and actionable risk management insights.

As a security paradigm, ZTA explicitly identifies users and devices to grant appropriate levels of access so a business can operate with less risk and minimal friction. ZTA is a critical framework to reduce risk as organizations endeavor to complete Zero-Trust implementations.

Renesas Unveils Industry's First Complete Chipset for Gen-2 DDR5 Server MRDIMMs

Renesas Electronics Corporation, a premier supplier of advanced semiconductor solutions, today announced that it has delivered the industry's first complete memory interface chipset solution for second-generation DDR5 Multiplexed Rank Dual In-line Memory Modules (MRDIMMs).

The new DDR5 MRDIMMs are needed to keep pace with the ever-increasing memory bandwidth demands of Artificial Intelligence (AI), High-Performance Compute (HPC) and other data center applications. They deliver operating speeds up to 12,800 Mega Transfers Per Second (MT/s), a 1.35x improvement in memory bandwidth over first-generation solutions. Renesas has been instrumental in the design, development and deployment of the new MRDIMMs, collaborating with industry leaders including CPU and memory providers, along with end customers.
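
As a rough, hedged illustration of what that transfer rate means in bandwidth terms, the short Python sketch below converts 12,800 MT/s into peak theoretical bandwidth per module; the 8-byte (64-bit) DDR5 data bus width is an assumption of the sketch, not a figure from the announcement.

```python
# Minimal sketch: peak theoretical bandwidth of a DDR5 MRDIMM from its transfer
# rate. Assumes the standard 64-bit (8-byte) DDR5 data bus per module, with ECC
# lanes excluded; that bus width is an assumption, not a figure from the article.

def peak_bandwidth_gbs(transfer_rate_mts: float, bus_bytes: int = 8) -> float:
    """Peak GB/s = transfers per second x bytes moved per transfer."""
    return transfer_rate_mts * 1e6 * bus_bytes / 1e9

gen2_mrdimm_gbs = peak_bandwidth_gbs(12_800)
print(f"12,800 MT/s x 8 B/transfer = {gen2_mrdimm_gbs:.1f} GB/s per module")  # ~102.4 GB/s
```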

NVIDIA and Microsoft Showcase Blackwell Preview, Omniverse Industrial AI and RTX AI PCs at Microsoft Ignite

NVIDIA and Microsoft today unveiled product integrations designed to advance full-stack NVIDIA AI development on Microsoft platforms and applications. At Microsoft Ignite, Microsoft announced the launch of the first cloud private preview of the Azure ND GB200 V6 VM series, based on the NVIDIA Blackwell platform. The Azure ND GB200 V6 is a new AI-optimized virtual machine (VM) series that combines the NVIDIA GB200 NVL72 rack design with NVIDIA Quantum InfiniBand networking.

In addition, Microsoft revealed that Azure Container Apps now supports NVIDIA GPUs, enabling simplified and scalable AI deployment. Plus, the NVIDIA AI platform on Azure includes new reference workflows for industrial AI and an NVIDIA Omniverse Blueprint for creating immersive, AI-powered visuals. At Ignite, NVIDIA also announced multimodal small language models (SLMs) for RTX AI PCs and workstations, enhancing digital human interactions and virtual assistants with greater realism.

ASUS Republic of Gamers Announces the ROG Phone 9 Series

ASUS Republic of Gamers (ROG) today announced the new ROG Phone 9 series of stylish and powerful gaming phones, developed to offer the best possible gaming experience in a premium phone design. At launch the ROG Phone 9 series comprises the ROG Phone 9 and ROG Phone 9 Pro models, both boasting a sleek high-end design featuring the innovative AniMe Vision mini-LED auxiliary display, which gives the phones stand-out looks and personalization capabilities.

Both models offer an unrivaled gaming experience thanks to a suite of ROG-developed AI-driven gaming features. These features are amply powered by the Qualcomm Snapdragon 8 Elite Mobile Platform, combined with the advanced ROG GameCool 9 cooling system to unlock maximum performance at all times. Console-like game controls have always been a killer feature of the ROG Phone's design philosophy, and the ROG Phone 9 series continues that legacy, with its exclusive AirTrigger touch system and an extensive ecosystem of ROG gaming accessories.

MiTAC Unveils New AI/HPC-Optimized Servers With Advanced CPU and GPU Integration

MiTAC Computing Technology Corporation, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), is unveiling its new server lineup at SC24, booth #2543, in Atlanta, Georgia. MiTAC Computing's servers integrate the latest AMD EPYC 9005 Series CPUs, AMD Instinct MI325X GPU accelerators, Intel Xeon 6 processors, and professional GPUs to deliver enhanced performance optimized for HPC and AI workloads.

Leading Performance and Density for AI-Driven Data Center Workloads
MiTAC Computing's new servers, powered by AMD EPYC 9005 Series CPUs, are optimized for high-performance AI workloads. At SC24, MiTAC highlights two standout AI/HPC products: the 8U dual-socket MiTAC G8825Z5, featuring AMD Instinct MI325X GPU accelerators, up to 6 TB of DDR5-6000 memory, and eight hot-swap U.2 drive trays, ideal for large-scale AI/HPC setups; and the 2U dual-socket MiTAC TYAN TN85-B8261, designed for HPC and deep learning applications with support for up to four dual-slot GPUs, twenty-four DDR5 RDIMM slots, and eight hot-swap NVMe U.2 drives. For mainstream cloud applications, MiTAC offers the 1U single-socket MiTAC TYAN GC68C-B8056, with twenty-four DDR5 DIMM slots and twelve tool-less 2.5-inch NVMe U.2 hot-swap bays. Also featured are the 2U single-socket MiTAC TYAN TS70A-B8056, designed for high-IOPS NVMe storage, and the 2U 4-node single-socket MiTAC M2810Z5, supporting up to 3,072 GB of DDR5-6000 RDIMM memory and four easy-swap E1.S drives per node.

Corsair by d-Matrix Enables GPU-Free AI Inference

d-Matrix today unveiled Corsair, an entirely new computing paradigm designed from the ground up for the next era of AI inference in modern data centers. Corsair leverages d-Matrix's innovative Digital In-Memory Compute (DIMC) architecture, an industry first, to accelerate AI inference workloads with industry-leading real-time performance, energy efficiency, and cost savings compared to GPUs and other alternatives.

The emergence of reasoning agents and interactive video generation represents the next level of AI capabilities. These leverage more inference computing power to enable models to "think" more and produce higher quality outputs. Corsair is the ideal inference compute solution with which enterprises can unlock new levels of automation and intelligence without compromising on performance, cost or power.

Hypertec Introduces the World's Most Advanced Immersion-Born GPU Server

Hypertec proudly announces the launch of its latest breakthrough product, the TRIDENT iG series, an immersion-born GPU server line that brings extreme density, sustainability, and performance to the AI and HPC community. Purpose-built for the most demanding AI applications, this cutting-edge server is optimized for generative AI, machine learning (ML), deep learning (DL), large language model (LLM) training, inference, and beyond. With up to six of the latest NVIDIA GPUs in a 2U form factor, a staggering 8 TB of memory with enhanced RDMA capabilities, and groundbreaking density supporting up to 200 GPUs per immersion tank, the TRIDENT iG server line is a game-changer for AI infrastructure.
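
As a back-of-the-envelope illustration of those density figures, the sketch below infers how many 2U servers a fully loaded tank would hold; the servers-per-tank and rack-unit numbers are inferences from the quoted maxima (six GPUs per 2U server, 200 GPUs per tank), not configurations stated by Hypertec.

```python
# Implied density from the quoted maxima above. These are inferences, not
# stated Hypertec configurations.
import math

gpus_per_server = 6     # up to six GPUs per 2U TRIDENT iG server (from article)
gpus_per_tank   = 200   # up to 200 GPUs per immersion tank (from article)

servers_per_tank = math.ceil(gpus_per_tank / gpus_per_server)  # 34 servers
rack_units       = servers_per_tank * 2                        # ~68U of immersed hardware

print(f"~{servers_per_tank} servers (~{rack_units}U) per tank at full GPU density")
```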

Additionally, the server's innovative design features a single or dual root complex, enabling greater flexibility and efficiency for GPU usage in complex workloads.

ASUS Presents All-New Storage-Server Solutions to Unleash AI Potential at SC24

ASUS today announced its groundbreaking next-generation infrastructure solutions at SC24, featuring a comprehensive lineup powered by AMD and Intel, as well as liquid-cooling solutions designed to accelerate the future of AI. By continuously pushing the limits of innovation, ASUS simplifies the complexities of AI and high-performance computing (HPC) through adaptive server solutions paired with expert cooling and software-development services, tailored for the exascale era and beyond. As a total-solution provider with a distinguished history in pioneering AI supercomputing, ASUS is committed to delivering exceptional value to its customers.

Comprehensive Lineup for AI and HPC Success
To fuel enterprise digital transformation through HPC and AI-driven architecture, ASUS provides a full lineup of server systems powered by AMD and Intel. Startups, research institutions, large enterprises, and government organizations can all find adaptive solutions to unlock value from big data and accelerate business agility.

DDN Unveils Next-Generation Data Intelligence Platform for HPC and AI

DDN, a leading force in AI data intelligence, today set a new standard in AI and HPC data management with the launch of trailblazing upgrades to its data intelligence platform at Supercomputing 2024 in Atlanta. Built in close collaboration with NVIDIA, these advancements give organizations unmatched power to scale and optimize AI data operations—delivering efficiency, seamless scalability, and the kind of ROI that fuels business growth and innovation.

At the core of this innovation is DDN's deep integration with NVIDIA, bringing unparalleled performance enhancements to AI and HPC workloads. With the debut of DDN's next-generation A³I data platform, the AI400X3, organizations can now achieve a staggering 60 percent performance boost over previous generations. This boost translates to faster AI training, real-time insights, and smoother data processing, giving enterprises the agility to make rapid decisions and gain a competitive edge in today's data-driven landscape.

SC24: Supercomputer Fugaku Retains First Place Worldwide in HPCG and Graph500 Rankings

The supercomputer Fugaku, jointly developed by RIKEN and Fujitsu, has successfully retained the top spot for 10 consecutive terms in two major high-performance computer rankings, HPCG and Graph500 BFS (Breadth-First Search), and has also taken sixth place for the TOP500 and fourth place for the HPL-MxP rankings. The HPCG is a performance ranking for computing methods often used for real-world applications, and the Graph500 ranks systems based on graph analytic performance, an important element in data-intensive workloads. The results of the rankings were announced on November 19 at SC24, which is currently being held at Georgia World Congress Center in Atlanta, Georgia, USA.

The top ranking on Graph500 was won by a collaboration involving RIKEN, Institute of Science Tokyo, Fixstars Corporation, Nippon Telegraph and Telephone Corporation, and Fujitsu. It earned a score of 204.068 TeraTEPS with Fugaku's 152,064 nodes, an improvement of 38.038 TeraTEPS in performance from the previous measurement. This is the first time that a score of over 200 TeraTEPS has been recorded on the Graph500 benchmark.
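
The figures above also yield a couple of simple derived numbers, the previous score and the per-node throughput; the sketch below uses only values quoted in the article.

```python
# Arithmetic behind the Graph500 result quoted above: the previous score implied
# by the stated improvement, and throughput per Fugaku node. All inputs come
# from the article; nothing else is assumed.

score_teratep = 204.068   # TeraTEPS, current submission
improvement   = 38.038    # TeraTEPS gained over the previous measurement
nodes         = 152_064   # Fugaku nodes used for the run

previous_score = score_teratep - improvement      # ~166.030 TeraTEPS
per_node_gteps = score_teratep * 1_000 / nodes    # TeraTEPS -> GigaTEPS per node (~1.34)

print(f"Previous score: {previous_score:.3f} TeraTEPS")
print(f"Per-node throughput: {per_node_gteps:.2f} GTEPS")
```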

ASRock Rack Brings End-to-End AI and HPC Server Portfolio to SC24

ASRock Rack Inc., a leading innovative server company, today announces its presence at SC24, held at the Georgia World Congress Center in Atlanta from November 18-21. At booth #3609, ASRock Rack will showcase a comprehensive high-performance portfolio of server boards, systems, and rack solutions with NVIDIA accelerated computing platforms, helping address the needs of enterprises, organizations, and data centers.

Artificial intelligence (AI) and high-performance computing (HPC) continue to reshape technology. ASRock Rack is presenting a complete suite of solutions spanning edge, on-premise, and cloud environments, engineered to meet the demands of AI and HPC. The 2U short-depth MECAI, incorporating the NVIDIA GH200 Grace Hopper Superchip, is developed to supercharge accelerated computing and generative AI in space-constrained environments. The 4U10G-TURIN2 and 4UXGM-GNR2, supporting ten and eight NVIDIA H200 NVL PCIe GPUs respectively, aim to help enterprises and researchers tackle every AI and HPC challenge with enhanced performance and greater energy efficiency. The NVIDIA H200 NVL is ideal for lower-power, air-cooled enterprise rack designs that require flexible configurations, delivering acceleration for AI and HPC workloads regardless of size.

GIGABYTE Showcases a Leading AI and Enterprise Portfolio at Supercomputing 2024

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, is showing at SC24 how the GIGABYTE enterprise portfolio provides solutions for all applications, from cloud computing to AI to enterprise IT, including energy-efficient liquid-cooling technologies. This portfolio is made more complete by long-term collaborations with leading technology companies and emerging industry leaders, which will be showcased at GIGABYTE booth #3123 at SC24 (Nov. 19-21) in Atlanta. The booth is sectioned to spotlight strategic technology collaborations as well as direct liquid cooling partners.

The GIGABYTE booth will showcase an array of NVIDIA platforms built to keep up with the diversity of workloads and degrees of demand in AI and HPC applications. For a rack-scale AI solution using the NVIDIA GB200 NVL72 design, GIGABYTE shows how seventy-two GPUs can fit in one rack, with eighteen GIGABYTE servers each housing two NVIDIA Grace CPUs and four NVIDIA Blackwell GPUs. Another platform at the GIGABYTE booth is the NVIDIA HGX H200: GIGABYTE exhibits both its liquid-cooled G4L3-SD1 server and an air-cooled version, the G593-SD1.
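
The rack-scale arithmetic behind that NVL72 description is easy to check; the sketch below simply multiplies out the per-server figures quoted above.

```python
# Quick check of the rack-level math: 18 servers per rack, each with 2 Grace
# CPUs and 4 Blackwell GPUs, should yield the 72 GPUs of an NVL72 rack.
# All figures come from the article.

servers_per_rack = 18
gpus_per_server  = 4
cpus_per_server  = 2

total_gpus = servers_per_rack * gpus_per_server   # 72 Blackwell GPUs
total_cpus = servers_per_rack * cpus_per_server   # 36 Grace CPUs

assert total_gpus == 72, "should match the NVL72 designation"
print(f"{total_gpus} GPUs and {total_cpus} CPUs per rack")
```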

NVIDIA Announces Hopper H200 NVL PCIe GPU Availability at SC24, Promising 1.3x HPC Performance Over H100 NVL

Since its introduction, the NVIDIA Hopper architecture has transformed the AI and high-performance computing (HPC) landscape, helping enterprises, researchers and developers tackle the world's most complex challenges with higher performance and greater energy efficiency. During the Supercomputing 2024 conference, NVIDIA announced the availability of the NVIDIA H200 NVL PCIe GPU - the latest addition to the Hopper family. H200 NVL is ideal for organizations with data centers looking for lower-power, air-cooled enterprise rack designs with flexible configurations to deliver acceleration for every AI and HPC workload, regardless of size.

According to a recent survey, roughly 70% of enterprise racks are 20kW and below and use air cooling. This makes PCIe GPUs essential, as they provide granularity of node deployment, whether using one, two, four or eight GPUs - enabling data centers to pack more computing power into smaller spaces. Companies can then use their existing racks and select the number of GPUs that best suits their needs. Enterprises can use H200 NVL to accelerate AI and HPC applications, while also improving energy efficiency through reduced power consumption. With a 1.5x memory increase and 1.2x bandwidth increase over NVIDIA H100 NVL, companies can use H200 NVL to fine-tune LLMs within a few hours and deliver up to 1.7x faster inference performance. For HPC workloads, performance is boosted up to 1.3x over H100 NVL and 2.5x over the NVIDIA Ampere architecture generation.
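
As a hedged sanity check of the quoted ratios, the sketch below recomputes them from publicly listed capacity and bandwidth specs; the absolute figures (94 GB and roughly 3.9 TB/s for the H100 NVL, 141 GB and roughly 4.8 TB/s for the H200 NVL) are assumptions drawn from vendor spec sheets, not numbers given in this article, and only the 1.5x and 1.2x ratios come from the text.

```python
# Hedged sanity check of the memory and bandwidth ratios quoted above.
# The absolute spec values below are assumptions from public spec sheets,
# not figures from this article.

h100_nvl = {"memory_gb": 94,  "bandwidth_tbps": 3.9}   # assumed spec values
h200_nvl = {"memory_gb": 141, "bandwidth_tbps": 4.8}   # assumed spec values

mem_ratio = h200_nvl["memory_gb"] / h100_nvl["memory_gb"]            # ~1.5x
bw_ratio  = h200_nvl["bandwidth_tbps"] / h100_nvl["bandwidth_tbps"]  # ~1.2x

print(f"Memory ratio:    {mem_ratio:.2f}x")
print(f"Bandwidth ratio: {bw_ratio:.2f}x")
```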

MSI Claw 8 AI+ To Get Massive Battery and Lunar Lake CPU With Full Unveiling at CES 2025

MSI previously gave us a teaser of what to expect from its upcoming Claw 8 AI+ gaming handheld with Intel's Lunar Lake Core Ultra CPUs, calling it "the most advanced 8-inch gaming handheld in the market," but a recent CES listing has divulged more details about the upcoming challenger to the likes of the Lenovo Legion Go. For starters, the upgraded battery capacity has been revealed, along with an estimated battery life and performance figures. Supposedly, more information will be revealed about the Claw 8 AI+ at CES 2025, which starts on January 7, 2025.

According to the CES page, the new MSI Claw 8 AI+ will have an 8-inch display and an 82 Wh battery, which will supposedly be able to deliver "4+ hours of gameplay for AAA titles." However, such manufacturer claims are generally to be taken with healthy helpings of salt, especially when they are as nebulous as "AAA titles" without any proposed quality settings, specific games, or frame rates. Regarding the display, it wouldn't be surprising to see MSI use the same panel as the one found in the Lenovo Legion Go, since there is a somewhat limited selection of 8-inch displays for handheld gaming devices. The MSI Claw 8 AI+ will also use Intel's 2nd-generation Arc iGPU in conjunction with AI-enhanced graphics, which should provide a healthy uptick in both performance and efficiency, with the CES listing touting 48 TOPS of compute power.
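
Taking the listing at face value, the battery claim implies a fairly modest average system power draw; the sketch below is nothing more than arithmetic on the quoted 82 Wh capacity and the claimed four hours.

```python
# Rough estimate implied by the battery claim above: an 82 Wh pack lasting
# "4+ hours" of AAA gameplay implies an average system draw of about 20 W.
# The 4-hour figure is the manufacturer's claim; the rest is simple arithmetic.

battery_wh  = 82
claimed_hrs = 4

avg_draw_w = battery_wh / claimed_hrs   # ~20.5 W average system power
print(f"Implied average system draw: {avg_draw_w:.1f} W")
```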

IBM Expands Its AI Accelerator Offerings; Announces Collaboration With AMD

IBM and AMD have announced a collaboration to deploy AMD Instinct MI300X accelerators as a service on IBM Cloud. This offering, which is expected to be available in the first half of 2025, aims to enhance performance and power efficiency for generative AI models and high-performance computing (HPC) applications for enterprise clients. The collaboration will also enable support for AMD Instinct MI300X accelerators within IBM's watsonx AI and data platform, as well as Red Hat Enterprise Linux AI inferencing support.

"As enterprises continue adopting larger AI models and datasets, it is critical that the accelerators within the system can process compute-intensive workloads with high performance and flexibility to scale," said Philip Guido, executive vice president and chief commercial officer, AMD. "AMD Instinct accelerators combined with AMD ROCm software offer wide support including IBM watsonx AI, Red Hat Enterprise Linux AI and Red Hat OpenShift AI platforms to build leading frameworks using these powerful open ecosystem tools. Our collaboration with IBM Cloud will aim to allow customers to execute and scale Gen AI inferencing without hindering cost, performance or efficiency."

New Ultrafast Memory Boosts Intel Data Center Chips

While Intel's primary product focus is on the processors, or brains, that make computers work, system memory (that's DRAM) is a critical component for performance. This is especially true in servers, where the multiplication of processing cores has outpaced the rise in memory bandwidth (in other words, the memory bandwidth available per core has fallen). In heavy-duty computing jobs like weather modeling, computational fluid dynamics and certain types of AI, this mismatch could create a bottleneck—until now.

After several years of development with industry partners, Intel engineers have found a path to open that bottleneck, crafting a novel solution that has created the fastest system memory ever and is set to become a new open industry standard. The recently introduced Intel Xeon 6 data center processors are the first to benefit from this new memory, called MRDIMMs, for higher performance—in the most plug-and-play manner imaginable.
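
To make the bandwidth-per-core framing concrete, the sketch below computes peak per-core bandwidth for a hypothetical 128-core, 12-channel server socket; every figure in it (core count, channel count, and the 6,400 vs. 8,800 MT/s module speeds) is an illustrative assumption rather than a value from the article.

```python
# Illustrative sketch of the "bandwidth per core" squeeze described above and
# how a faster module eases it. All figures are assumptions chosen for
# illustration, not values from the article.

def per_core_bandwidth_gbs(channels: int, mts: float, cores: int,
                           bus_bytes: int = 8) -> float:
    """Peak socket bandwidth split evenly across cores, in GB/s per core."""
    socket_bw = channels * mts * 1e6 * bus_bytes / 1e9
    return socket_bw / cores

cores = 128  # hypothetical high-core-count server socket
rdimm  = per_core_bandwidth_gbs(channels=12, mts=6_400, cores=cores)   # ~4.8 GB/s per core
mrdimm = per_core_bandwidth_gbs(channels=12, mts=8_800, cores=cores)   # ~6.6 GB/s per core

print(f"Per-core bandwidth, standard RDIMM: {rdimm:.1f} GB/s")
print(f"Per-core bandwidth, MRDIMM:         {mrdimm:.1f} GB/s")
```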

Lenovo Shows 16 TB Memory Cluster with CXL in 128x 128 GB Configuration

Expanding a system's computing capability with an additional accelerator like a GPU is common. However, expanding the system's memory capacity with room for more DIMMs is something new. Thanks to ServeTheHome, we see that at the OCP Summit 2024, Lenovo showcased its ThinkSystem SR860 V3 server, leveraging CXL technology and Astera Labs Leo memory controllers to accommodate a staggering 16 TB of DDR5 memory across 128 DIMM slots. Traditional four-socket servers face limitations due to the memory channels supported by Intel Xeon processors. With each CPU supporting up to 16 DDR5 DIMMs, a four-socket configuration maxes out at 64 DIMMs, equating to 8 TB when using 128 GB RDIMMs. Lenovo's new approach expands this ceiling significantly by incorporating an additional 64 DIMM slots through CXL memory expansion.

The ThinkSystem SR860 V3 integrates Astera Labs Leo controllers to enable the CXL-connected DIMMs. These controllers manage up to four DDR5 DIMMs each, resulting in a layered memory design. The chassis base houses four Xeon processors, each linked to 16 directly connected DIMMs, while the upper section, called the "memory forest," houses the additional CXL-enabled DIMMs. Beyond memory capabilities, the server supports up to four double-width GPUs, also making it a solution for high-performance computing and AI workloads. This design caters to scale-up applications requiring vast memory resources, such as large-scale database management, and allows data to stay in memory instead of waiting on storage. CXL-based memory architectures are expected to become more common next year. Future developments may see even larger systems with shared memory pools, enabling dynamic allocation across multiple servers. For more pictures and a video walkthrough, check out ServeTheHome's post.
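
The capacity arithmetic described above works out as follows; DIMM capacity, socket count, per-socket DIMM count, and the four-DIMMs-per-controller figure are all taken from the article.

```python
# Capacity arithmetic from the article: four sockets with 16 directly attached
# 128 GB DIMMs each give 8 TB, and 64 additional CXL-attached DIMMs behind
# Astera Labs Leo controllers (up to four DIMMs each) double that to 16 TB.

dimm_gb        = 128
sockets        = 4
direct_per_cpu = 16
cxl_dimms      = 64

direct_tb = sockets * direct_per_cpu * dimm_gb / 1024   # 8 TB directly attached
cxl_tb    = cxl_dimms * dimm_gb / 1024                   # 8 TB via CXL
total_tb  = direct_tb + cxl_tb                            # 16 TB overall
leo_controllers = cxl_dimms // 4   # at least 16 controllers if each is fully populated

print(f"Direct: {direct_tb:.0f} TB, CXL: {cxl_tb:.0f} TB, total: {total_tb:.0f} TB")
print(f"Leo controllers needed (fully populated): {leo_controllers}")
```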

ARBOR Unveils IEC-3714 Industrial NUC-Sized PC Featuring Intel Core Ultra and 34 TOPS

ARBOR Technology has pushed the boundaries of compact computing with its latest NUC-sized PC, the IEC-3714, which delivers 34 TOPS of AI computing power driven by Intel Core Ultra processors and Intel Arc Graphics, offering unparalleled performance and efficiency in a compact design.

Experience AI-Accelerated Performance
Experience unparalleled performance and AI acceleration with the latest Intel Core Ultra processors. These cutting-edge chips feature a hybrid architecture that seamlessly blends CPU, GPU, and NPU capabilities, delivering up to a 14% boost in CPU productivity. Immerse yourself in stunning visuals and smooth computing experiences with integrated Intel Arc Graphics, which delivers discrete-class performance with excellent energy efficiency.

AMD Claims Ryzen AI 9 HX 370 Outperforms Intel Core Ultra 7 258V by 75% in Gaming

AMD has published a blog post about its latest AMD Ryzen AI 300 series processors, claiming they are changing the game for portable devices. To back these claims, Team Red has compared its Ryzen AI 9 HX 370 processor to Intel's latest Core Ultra 7 258V, using the following games: Assassin's Creed Mirage, Baldur's Gate 3, Borderlands 3, Call of Duty: Black Ops 6, Cyberpunk 2077, Doom Eternal, Dying Light 2 Stay Human, F1 24, Far Cry 6, Forza Horizon 5, Ghost of Tsushima, Hitman 3, Hogwarts Legacy, Shadow of the Tomb Raider, Spider-Man Remastered, and Tiny Tina's Wonderlands. The conclusion was that AMD's Ryzen AI 9 HX 370, with its integrated Radeon 890M graphics powerhouse, outperformed the Intel "Lunar Lake" Core Ultra 7 258V with Intel Arc Graphics 140V by 75% on average.
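
AMD's post does not show the per-game math behind the 75% figure. As a hedged illustration only, the sketch below shows one common way such an average is formed, a geometric mean of per-game FPS ratios; the ratios in it are purely hypothetical placeholders, not AMD's data.

```python
# Hypothetical example of aggregating per-game results into a single average
# uplift via a geometric mean of FPS ratios. The numbers below are placeholders,
# not AMD's published data.
import math

# hypothetical per-game FPS ratios (Ryzen AI 9 HX 370 / Core Ultra 7 258V)
ratios = [1.4, 1.9, 1.6, 2.1, 1.7]

geo_mean = math.prod(ratios) ** (1 / len(ratios))
print(f"Average uplift: {(geo_mean - 1) * 100:.0f}%")   # ~72% for these sample ratios
```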

To support this performance leap, AMD also relies on software technologies, including FidelityFX Super Resolution 3 (FSR 3) and HYPR-RX, to unlock additional performance and gaming efficiency. FSR 3 alone enhances visuals in over 95 games, while HYPR-RX, with features like AMD Fluid Motion Frames 2 (AFMF 2) and Radeon Anti-Lag, provides substantial performance boosts across thousands of games. The company has also compared its FSR/HYPR-RX combination with Intel's XeSS, which is available in around 130 games; AMD claims its broader suite supports 415+ games and is optimized for smoother gameplay. AFMF 2 claims support for thousands of titles, while Intel's GPU software stack lacks a comparable feature. Of course, these marketing claims are to be taken with a grain of salt, so independent testing remains the best way to compare the two.

AMD to Cut its Workforce by About Four Percent

According to CRN, AMD is looking to make some cuts to its workforce of approximately 26,000 employees. The company hasn't announced a specific number, but in a comment to the publication, AMD said that "as a part of aligning our resources with our largest growth opportunities, we are taking a number of targeted steps that will unfortunately result in reducing our global workforce by approximately 4 percent". In actual headcount, that works out to just north of a thousand people being let go. It's not clear which departments or divisions at AMD will be affected the most, but the cutback appears to be a response to AMD's mixed quarterly report.

AMD's statement also doesn't make clear exactly what the company will focus on moving forward, but CRN suggests that the embedded and gaming businesses are where AMD is struggling. That said, it's not likely that AMD will put an increased focus on those businesses; instead, the company is more likely to invest further in its server products, not least to try and catch up with NVIDIA in the AI server market. According to CRN, AMD has also seen strong demand for AI PCs, such as the Ryzen AI 300-series of mobile SoCs, so it's possible AMD will put extra effort into its mobile product range. The Ryzen 9000-series is thankfully also doing well, so it's unlikely there will be any big cutbacks there. We already know that AMD is not going after NVIDIA with a new flagship GPU to compete with NVIDIA's GeForce RTX 5000-series flagship SKU, so it's possible that the company will cut back on some people in its consumer GPU team for the time being, but this should become clear come CES in January.

Phison Unveils Pascari D-Series PCIe Gen 5 128TB Data Center SSDs

Phison Electronics, a leading innovator in NAND flash technologies, today announced the newest and highest-capacity addition to its Pascari D-Series of data center-optimized SSDs, to be showcased at SC24. The Pascari D205V drive is the first PCIe Gen 5 128 TB-class data center SSD available for preorder, addressing shifting storage demands across use cases including AI, media and entertainment (M&E), research, and beyond. In a single drive, the Pascari D205V offers 122.88 TB of storage, creating a four-to-one capacity advantage over traditional cold-storage hard drives while shrinking both physical footprint and OPEX.

While the exponential data deluge continues to strain data center infrastructure, organizations face a tipping point: they must maximize investment while remaining conscious of footprint, cost efficiency, and power consumption. The Pascari D205V read-intensive SSD combines Phison's industry-leading X2 controller with the latest 2 Tb 3D QLC NAND, engineered to deliver 14,600 MB/s sequential read and 3,000K IOPS random read performance. By doubling both the read speed of Gen 4 drives and the capacity of the 61.44 TB enterprise SSDs currently on the market, the Pascari D205V allows customers to move to larger datasets per server, top-tier capacity-per-watt utilization, and unparalleled read performance.
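
Working through the density claims above: the roughly 30 TB hard-drive capacity in the sketch below is inferred from the quoted four-to-one ratio rather than stated by Phison, while the 122.88 TB and 61.44 TB figures come from the text.

```python
# Density arithmetic behind the claims above. The implied HDD capacity is
# derived from the quoted 4:1 ratio, not stated in the article.

ssd_tb          = 122.88
quoted_ratio    = 4
implied_hdd_tb  = ssd_tb / quoted_ratio     # ~30.7 TB cold-storage HDD
prev_ssd_tb     = 61.44
capacity_uplift = ssd_tb / prev_ssd_tb      # 2.0x over current enterprise SSDs

drives_for_1pb_ssd = 1000 / ssd_tb          # ~8.1 SSDs per decimal petabyte
drives_for_1pb_hdd = 1000 / implied_hdd_tb  # ~32.6 HDDs per decimal petabyte

print(f"Implied HDD capacity: {implied_hdd_tb:.2f} TB")
print(f"Capacity uplift over 61.44 TB SSDs: {capacity_uplift:.1f}x")
print(f"Drives per PB: {drives_for_1pb_ssd:.1f} SSDs vs {drives_for_1pb_hdd:.1f} HDDs")
```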

HPE Expands Direct Liquid-Cooled Supercomputing Solutions With Two AI Systems for Service Providers and Large Enterprises

Today, Hewlett Packard Enterprise announces its new high performance computing (HPC) and artificial intelligence (AI) infrastructure portfolio that includes leadership-class HPE Cray Supercomputing EX solutions and two systems optimized for large language model (LLM) training, natural language processing (NLP) and multi-modal model training. The new supercomputing solutions are designed to help global customers fast-track scientific research and invention.

"Service providers and nations investing in sovereign AI initiatives are increasingly turning to high-performance computing as the critical backbone enabling large-scale AI training that accelerates discovery and innovation," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently by leveraging our world-leading HPC solutions and decades of experience in delivering, deploying and servicing fully-integrated systems."