News Posts matching #Xeon

Supermicro Announces New Eight- and Four-Socket 4th Gen Intel Xeon Servers

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is introducing the most powerful server in its lineup for large-scale database and enterprise applications. The Multi-Processor product line includes the 8-socket server, ideal for in-memory databases requiring up to 480 cores and 32 TB of DDR5 memory for maximum performance. In addition, the product line includes a 4-socket server, which is ideal for applications that require a single system image of up to 240 cores and 16 TB of high-speed memory.

These powerful systems all use 4th Gen Intel Xeon Scalable processors. Compared with the previous generation of 8-socket and 4-socket servers, the systems have 2X the core count, 1.33X the memory capacity, and 2X the memory bandwidth. Also, these systems deliver up to 4X the I/O bandwidth of previous-generation systems for connectivity to peripherals. The Supermicro 8-socket system has attained the highest performance ratings ever for a single system on the SPEC CPU 2017 FP Rate benchmarks, for both base and peak results. In addition, the Supermicro 8-socket and 4-socket servers demonstrate performance leadership on a wide range of SPEC benchmarks.

Nfina Technologies Releases Two New 3rd Gen Intel Xeon Scalable Processor-based Systems

Nfina announces the addition of two new server systems to its lineup, customized for small to medium businesses and virtualized environments. Featuring 3rd Gen Intel Xeon Scalable Processors, these scalable server systems fill a void in the marketplace, bringing exceptional multi-socket processing performance, easy setup, operability, and Nfina's five-year warranty.

"We are excited to add two new 3rd generation Intel systems to Nfina's lineup. Performance, scalability, and flexibility are key deciding factors when expanding our offerings," says Warren Nicholson, President and CEO of Nfina. "Both servers are optimized for high-performance computing, virtualized environments, and growing data needs." He continues, "The two servers can also be leased through our managed services division. We provide customers with choices that fit the size of their application and budget, not a one-size-fits-all approach."

Intel "Emerald Rapids" Doubles Down on On-die Caches, Divests on Chiplets

Finding itself embattled with AMD's EPYC "Genoa" processors, Intel is giving its 4th Gen Xeon Scalable "Sapphire Rapids" processor a rather quick succession in the form of the Xeon Scalable "Emerald Rapids," bound for Q4-2023 (about 8-10 months in). The new processor shares the same LGA4677 platform and infrastructure, and much of the same I/O, but brings about two key design changes that should help Intel shore up per-core performance, making it competitive with EPYC "Zen 4" processors with higher core counts. SemiAnalysis compiled a nice overview of the changes, the two broadest points being that, first, Intel is backpedaling on the chiplet approach to high core-count CPUs, and second, that it wants to give the memory sub-system and inter-core performance a massive boost using larger on-die caches.

The "Emerald Rapids" processor has just two large dies in its extreme core-count (XCC) avatar, compared to "Sapphire Rapids," which can have up to four of these. There are just three EMIB dies interconnecting these two, compared to "Sapphire Rapids," which needs as many as 10 of these to ensure direct paths among the four dies. The CPU core count itself doesn't see a notable increase. Each of the two dies on "Emerald Rapids" physically features 33 CPU cores, so a total of 66 are physically present, although one core per die is left unused for harvesting, the SemiAnalysis article notes. So the maximum core-count possible commercially is 32 cores per die, or 64 cores per socket. "Emerald Rapids" continues to be based on the Intel 7 process (10 nm Enhanced SuperFin), probably with a few architectural improvements for higher clock-speeds.

Intel Sapphire Rapids Sales Forecasted to Slow Down, Microsoft Cuts Orders

According to Ming-Chi Kuo, an industry analyst known for making accurate predictions about Apple, we have some new information regarding Intel's Sapphire Rapids Xeon processors. As Kuo notes, Intel's major Cloud Service Provider (CSP) client, Microsoft, has notified the supply chain that the company is cutting orders of Sapphire Rapids Xeons by 50-70% in the second half of 2023. Interestingly, Intel has reportedly told its own supply chain to cut chip orders by around 50% amid weak server demand. This comes just after Intel announced plans to start shipping Sapphire Rapids processors in the second quarter of 2023 and deliver the highly anticipated lineup to customers.

Additionally, Kuo has stated that Intel isn't only competing for clients with AMD but also with Arm-based CPUs. Microsoft also plans to start buying Arm-based server processors made by Ampere Computing in the first half of 2024. This will reduce Microsoft's dependence on x86 architecture and induce higher competition in the market, especially if other CSPs follow.

Gigabyte Extends Its Leading GPU Portfolio of Servers

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a lineup of powerful GPU-centric servers with the latest AMD and Intel CPUs, including NVIDIA HGX H100 servers with both 4-GPU and 8-GPU modules. With growing interest in HPC and AI applications, specifically generative AI (GAI), this breed of server relies heavily on GPU resources to tackle compute-heavy workloads that handle large amounts of data. With the advent of OpenAI's ChatGPT and other AI chatbots, large GPU clusters are being deployed with system-level optimization to train large language models (LLMs). These LLMs can be processed by GIGABYTE's new design-optimized systems that offer a high level of customization based on users' workloads and requirements.

The GIGABYTE G-series servers are built first and foremost to support dense GPU compute and the latest PCIe technology. Starting with the 2U servers, the new G293 servers can support up to 8 dual-slot GPUs or 16 single-slot GPUs, depending on the server model. For the ultimate in CPU and GPU performance, the 4U G493 servers offer plenty of networking options and storage configurations alongside support for eight (Gen 5 x16) GPUs. And for the highest level of GPU compute for HPC and AI, the G393 and G593 series support NVIDIA H100 Tensor Core GPUs. All these new dual-socket servers are designed for either 4th Gen AMD EPYC processors or 4th Gen Intel Xeon Scalable processors.

Noctua Cools Down 700 W 56-core Intel Xeon W9-3495X on Air

Noctua has showcased its NH-U14S DX-4677 air cooler in action, cooling Intel's 56-core Xeon W9-3495X at full load while drawing 700 W of power. While all-in-one (AiO) liquid coolers are popular these days, Noctua aims to show that air coolers are more than capable of handling even the most high-end CPUs, even at continuous load and without throttling.

While the video does not show the full details of the CPU settings, it is still an impressive feat, especially considering the high power draw, which suggests that the CPU was pushed well beyond its default settings for the demonstration. The setup includes the aforementioned 56-core Intel Xeon W9-3495X CPU running on an ASUS Pro WS W790E-SAGE SE motherboard with SK Hynix DDR5 EC8 RDIMMs, powered by a Seasonic PX-1600 PSU.

NVIDIA H100 AI Performance Receives up to 54% Uplift with Optimizations

On Wednesday, the MLCommons team released the MLPerf 3.0 Inference numbers, and there was an exciting submission from NVIDIA. Reportedly, NVIDIA has used software optimization to improve the already staggering performance of its latest H100 GPU by up to 54%. For reference, NVIDIA's H100 GPU first appeared on MLPerf 2.1 back in September of 2022. In just six months, NVIDIA engineers worked on AI optimizations for the MLPerf 3.0 release, finding that software optimization alone can deliver performance increases anywhere from 7% to 54%. The workloads for measuring inferencing speed included RNN-T speech recognition, 3D U-Net medical imaging, RetinaNet object detection, ResNet-50 object classification, DLRM recommendation, and BERT 99/99.9% natural language processing.

What is interesting is that NVIDIA's submission is a bit modified. There are open and closed categories that vendors compete in: a closed submission must be mathematically equivalent to the reference neural network, which aims to provide an "apples-to-apples" hardware comparison, while the open category is flexible and allows vendors to submit results based on optimizations for their hardware. Given that NVIDIA opted for the closed category, software optimizations from other vendors such as Intel and Qualcomm are not accounted for here. Still, it is interesting that optimization can lead to a performance increase of up to 54% in NVIDIA's case with its H100 GPU. Another interesting takeaway is that some comparable hardware, like Qualcomm Cloud AI 100, Intel Xeon Platinum 8480+, and NeuChips's ReccAccel N3000, failed to finish all the workloads. This is shown as "X" on the slides made by NVIDIA, stressing the need for proper ML system software support, which is NVIDIA's strength and a prominent marketing claim.

Klas Announces an Innovative Compute Module to Revolutionize Edge Deployments

Klas, a global leader in edge intelligence solutions, launched a new rugged compute module, the VoyagerVM 4.0, powered by the new Intel Xeon D embedded processor (formerly known as Ice Lake D). The innovative design integrates the newest edge compute technology and exposes an expansion interface to support more agile and open edge deployments.

The rugged design of the VoyagerVM 4.0 allows it to operate continuously in power-constrained and austere environments with wide-ranging ambient temperatures, with no impact on performance. Its compact size makes it the ideal edge computing platform for heavy industry, autonomous vehicles, smart transportation, and military applications.

Intel Presents a Refreshed Xeon CPU Roadmap for 2023-2025

All eyes - especially investors' eyes - are on Intel's data center business today. Intel's Sandra Rivera, Greg Lavender and Lisa Spelman hosted a webinar focused on the company's Data Center and Artificial Intelligence business unit. They offered a big update on Intel's latest market forecasts, hardware plans and the way Intel is empowering developers with software.

Executives dished out updates on Intel's data center business for investors. This included disclosures about future generations of Intel Xeon chips, progress updates on 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) and demos of Intel hardware tackling the competition, heavy AI workloads and more.

Xeon Roadmap Roll Call
Among Sapphire Rapids, Emerald Rapids, Sierra Forest and Granite Rapids, there is a lot going on in the server CPU business. Here are the Xeon roadmap updates in order of appearance:

Intel LGA 7529 Socket Photographed Again, Comparisons Show Gargantuan Physical Footprint

A set of detailed photos has been uploaded to a blog on the Chinese Bilibili site, and the subject matter is an engineering sample of a motherboard that features Intel's next generation LGA 7529 socket. Specifications and photos relating to this platform have cropped up in the past, but the latest leak offers many new tidbits of information. The Bilibili blogger placed a Sapphire Rapids Xeon processor on top of the new socket, and this provides an interesting point of reference - it demonstrates the expansive physical footprint that the fifth-generation platform occupies on the board.

This year's Sapphire Rapids LGA 4677 (Socket E) is already considered a sizeable prospect, measuring 61 × 82 mm. The upcoming Mountain Stream platform (LGA 7529) is absolutely huge in comparison, with eyeball estimates putting its rough dimensions (including the retention arm) at 66 × 92.5 mm. The fifth-generation platform is designed to run Intel's Granite Rapids and Sierra Forest CPUs - this family of Xeons featuring scalable microarchitecture is expected to launch in 2024. The code name "Avenue City" has been given to a reference platform that features a dual-socket configuration.

Supermicro Expands GPU Solutions Portfolio with Deskside Liquid-Cooled AI Development Platform, Powered by NVIDIA

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing the first in a line of powerful yet quiet and power-efficient NVIDIA-accelerated AI Development platforms which gives information professionals and developers the most powerful technology available today at their deskside. The new AI development platform, the SYS-751GE-TNRT-NV1, is an application-optimized system that excels when developing and running AI-based software. This innovative system gives developers and users a complete HPC and AI resource for department workloads. In addition, this powerful system can support a small team of users running training, inference, and analytics workloads simultaneously.

The self-contained liquid-cooling feature addresses the thermal design power needs of the four NVIDIA A100 Tensor Core GPUs and the two 4th Gen Intel Xeon Scalable CPUs to enable full performance while improving the overall system's efficiency and enabling quiet (approximately 30 dB) operation in an office environment. In addition, this system is designed to accommodate high-performing CPUs and GPUs, making it ideal for AI/DL/ML and HPC applications. The system can reside in an office environment or be rack-mounted when installed in a data center environment, simplifying IT management.

TYAN to Showcase Cloud Platforms for Data Centers at CloudFest 2023

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, will showcase its latest cloud server platforms powered by AMD EPYC 9004 Series processors and 4th Gen Intel Xeon Scalable processors for next-generation data centers at CloudFest 2023, Booth #H12 in Europa-Park from March 21-23.

"With the exponential advancement of technologies like AI and Machine Learning, data centers require robust hardware and infrastructure to handle complex computations while running AI workloads and processing big data," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure BU. "TYAN's cloud server platforms with storage performance and computing capability can support the ever-increasing demand for computational power and data processing."

Intel Xeon Granite Rapids and Sierra Forest to Feature up to 500 Watt TDP and 12-Channel Memory

Today, thanks to Yuuki_Ans on the Chinese Bilibili forum, we have more information about the upcoming "Avenue City" platform that powers Granite Rapids and Sierra Forest. Intel's forthcoming Granite Rapids and Sierra Forest Xeon processors will split the Xeon family into two offerings: one optimized for per-core performance, equipped with P-cores, and the other optimized for per-core power efficiency, equipped with E-cores. The reference platform Intel designs and shares with OEMs internally is a 16.7" x 20" board with 20 PCB layers, made as a dual-socket solution. Featuring two massive LGA-7529 sockets, the reference design shows the basic layout for a server powered by these new Xeons.

Capable of powering Granite Rapids / Sierra Forest-AP processors of up to 500 Watts, the platform also accommodates next-generation I/O: 24 DDR5 DIMM slots with support for 12-channel memory at speeds of up to 6400 MT/s. The PCIe selection includes six PCIe Gen 5 x16 links supporting the CXL cache-coherent protocol, plus 6x24 UPI links. Additionally, we have another piece of information that Granite Rapids will come with up to 128 cores and 256 threads in both regular and HBM-equipped Xeon Max flavors. You can see storage and reference platform configuration details on the slides below.
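As a rough sanity check on those memory figures, the peak theoretical bandwidth of a 12-channel DDR5-6400 socket can be worked out from the channel width alone (a back-of-the-envelope sketch; the 64-bit channel width is standard DDR5 rather than something stated in the leak, and sustained throughput in practice lands well below this ceiling):

```python
# Peak theoretical memory bandwidth of a 12-channel DDR5-6400 socket.
# Each DDR5 channel is 64 bits (8 bytes) wide; real-world efficiency is lower.
channels = 12
transfers_per_second = 6400 * 10**6   # 6400 MT/s
bytes_per_transfer = 8                # 64-bit channel width

bandwidth_gb_s = channels * transfers_per_second * bytes_per_transfer / 10**9
print(f"{bandwidth_gb_s:.1f} GB/s peak per socket")  # 614.4 GB/s peak per socket
```

That is roughly double the ceiling of the 8-channel DDR5-4800 configuration on Sapphire Rapids, which is consistent with the generational bandwidth jump the leak describes.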

Intel Might Have Canceled Thunder Bay Hybrid SoC

Intel has quietly canceled its Thunder Bay hybrid system-on-chip (SoC) that combines standard general-purpose CPU Cores and Movidius Vision Processing Unit (VPU) cores. Such chips were aimed at commercial and Internet-of-Things (IoT) applications that relied on computer vision acceleration and edge-computing applications. According to the latest report, it appears that Thunder Bay is canceled as Intel has released a set of patches that removed the Thunder Bay code from the Linux kernel.

Intel kept Thunder Bay details well hidden, and while earlier rumors pointed to a combination of Intel Xeon x86 CPU cores and Movidius VPU cores, the only Thunder Bay support shown in Linux patches was a combination of ARM Cortex-A53 low-power cores with the Movidius VPU. Intel acquired Movidius back in 2016, and while it is not something that Intel talks about often, there are several products based on Movidius VPUs, including the Neural Compute Stick, Intel drones, the Intel RealSense Tracking Camera, and most recently, the Intel Movidius 3700VC VPU, formerly Keem Bay. In any case, it appears that Intel is abandoning the idea of combining general-purpose x86 CPU cores and Movidius VPU cores.

Intel Xeon W9-3495X Can Pull up to 1,900 Watts in Extreme OC Scenarios

Intel's latest Xeon processors based on the Sapphire Rapids uArch have arrived in the hands of overclockers. Last week, we reported that the Intel Xeon W9-3495X is officially a world-record holder for achieving the best scores in Cinebench R23 and R20, Y-Cruncher, the 3DMark CPU test, and Geekbench 3. However, today we have another extreme overclocking attempt to beat the world record, with a few more details about power consumption and what the new SKU is capable of. Elmor, an overclocker working with ASUS, tried to break the world record and pushed the Intel Xeon W9-3495X to 5.5 GHz on all 56 cores. What is even more impressive is the amount of power the processor can consume.

With a system powered by two Super Flower Leadex 1,600 Watt power supply units, the CPU consumed almost 1,900 Watts of power from the wall. To cool this heat output, liquid nitrogen was used, and the CPU stayed at a cool negative 95 degrees Celsius. The motherboard of choice for this attempt was an ASUS Pro WS W790E-SAGE SE, paired with eight G.SKILL Zeta R5 DDR5 R-DIMM modules. The results were incredible, as the CPU achieved 132,220 points in Cinebench R23. However, the previous week's world record remained intact, as Elmor's result is slightly behind last week's score of 132,484 points. Check the video below for more info.

Supermicro Expands Storage Solutions Portfolio for Intensive I/O Workloads with Industry Standard Based All-Flash Servers Utilizing EDSFF E3.S, and E1

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing the latest addition to its revolutionary ultra-high-performance, high-density petascale-class all-flash NVMe server family. Supermicro systems in this high-performance storage product family will support the next-generation EDSFF form factors, including E3.S and E1.S devices, in chassis that accommodate 16 or 32 high-performance PCIe Gen 5 NVMe drive bays.

The initial offering of the updated product line will support up to one-half of a petabyte of storage space in a 1U 16-bay rackmount system, followed by a full petabyte of storage space in a 2U 32-bay rackmount system, for both Intel and AMD PCIe Gen 5 platforms. All of the Supermicro systems that support either the E1.S or E3.S form factors enable customers to realize their benefits in various application-optimized servers.

ASRock W790 WS Achieves Multiple Overclocking Records on HWBOT

ASRock, with the help of legendary overclocker Splave, has used its new W790 WS to take Intel's Sapphire Rapids-based Xeon w7-2495X as high as 5.3 GHz and grab a pile of first-place global performance scores on HWBOT. In total they've secured 14 global records in the 24-core category, and 18 more records in the w7-2495X hardware category. What is perhaps slightly more impressive is that they managed to exceed the officially supported DDR5-4800 by 2000 MT/s, achieving a frequency of 3400 MHz (6800 MT/s) with a CAS latency of 32. The results were all produced with a custom water-cooled configuration, with many benchmarks needing sub-5 GHz clocks to remain stable enough for validation, so there is certainly room for higher scores under more exotic cooling solutions.
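A note on the "3400 MHz (6800 MT/s)" figure above: DDR memory transfers data on both edges of the I/O clock, so the quoted transfer rate is exactly twice the clock frequency. A minimal sketch of that conversion (the helper name is ours; the headroom figure is from the article):

```python
# DDR ("double data rate") transfers data on both clock edges,
# so the transfer rate in MT/s is twice the I/O clock in MHz.
def ddr_mt_s(clock_mhz: float) -> float:
    return clock_mhz * 2

achieved = ddr_mt_s(3400)   # the HWBOT run: 3400 MHz I/O clock -> 6800 MT/s
official = 4800             # DDR5-4800, the officially supported data rate
headroom = achieved - official

print(achieved, headroom)   # 6800.0 2000.0
```

This is why memory tools often report "half" the marketed number: they show the real clock, while product names quote transfers per second.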

One of the more interesting details presented in the screenshots of certain benchmark results, such as Y-Cruncher and Cinebench R15, is the gargantuan power draw exhibited by the w7-2495X. With a rated Vcore of 1.22 V, the 24-core part sucks down over 500 W. We have already seen the 56-core w9-3495X pull a full kilowatt in early overclocking sessions, so the power draw here is not entirely without merit. Needless to say, while Intel is finally offering a compelling HEDT lineup of unlocked processors, you may not necessarily be able to squeeze much out of them before tripping your breakers. As for ASRock's records, it's still early days for the new Xeon W lineup, and there will be a revolving door of world-record holders before the final ounce of performance is squeezed from these chips.

4th Gen Intel Xeon Scalable Sprints into the Market

On Jan. 10, 2023, Intel officially launched its 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) for data center customers around the globe. Fueled by years of hard work and focus, Intel delivered its highest quality Xeon product ever, and is already seeing a rapid ramp and the greatest-ever level of customer and partner support.

Launch day brought to market a new Intel leadership product bolstered by significant advancements in performance with industry-leading accelerators, increased core count and improved performance per watt. With a strong new product in the market, customer adoption of these new technologies has been swift. In just eight weeks since launch, Intel has recorded more design wins than for any previous Xeon family, with more platforms available and shipping in this short time-to-market window than ever before.

Microsoft Azure Announces New Scalable Generative AI VMs Featuring NVIDIA H100

Microsoft Azure announced their new ND H100 v5 virtual machine, which packs Intel's Sapphire Rapids Xeon Scalable processors with NVIDIA's Hopper H100 GPUs, as well as NVIDIA's Quantum-2 CX7 interconnect. Inside each physical machine sit eight H100s—presumably the SXM5 variant packing a whopping 132 SMs and 528 4th-generation Tensor Cores—interconnected by NVLink 4.0, which ties them all together with 3.6 TB/s of bisection bandwidth. Outside each local machine is a network of thousands more H100s connected with 400 Gb/s Quantum-2 CX7 InfiniBand, which Microsoft says allows 3.2 Tb/s per VM for on-demand scaling to accelerate the largest AI training workloads.
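The 3.2 Tb/s per-VM figure is consistent with one 400 Gb/s NDR InfiniBand link per GPU across the eight GPUs in a VM (the one-link-per-GPU mapping is our inference from the numbers, not something Microsoft states outright):

```python
# Aggregate per-VM InfiniBand bandwidth in Azure's ND H100 v5.
# Assumption: one 400 Gb/s Quantum-2 CX7 (NDR) link per GPU.
gpus_per_vm = 8
link_gbit_s = 400

aggregate_tbit_s = gpus_per_vm * link_gbit_s / 1000
print(f"{aggregate_tbit_s} Tb/s per VM")  # 3.2 Tb/s per VM
```

Note the unit split: intra-node NVLink bandwidth is quoted in terabytes per second, while the InfiniBand fabric is quoted in terabits per second, an 8x difference that makes the two numbers look closer than they are.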

Generative AI solutions like ChatGPT have accelerated demand for multi-ExaOP cloud services that can handle the large training sets and utilize the latest development tools. Azure's new ND H100 v5 VMs offer that capability to organizations of any size, whether you're a smaller startup or a larger company looking to implement large-scale AI training deployments. While Microsoft is not making any direct claims for performance, NVIDIA has advertised H100 as running up to 30x faster than the preceding Ampere architecture that is currently offered with the ND A100 v4 VMs.

Giga Computing Releases First Workstation Motherboards to Support DDR5 and PCIe Gen5 Technologies

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced two new workstation motherboards, GIGABYTE MW83-RP0 and MW53-HP0, built to support the Intel Xeon W-3400 or Intel Xeon W-2400 desktop workstation processors. The new CPU platform, developed on the Intel W790 chipset, is the first workstation platform in the market that supports both DDR5 and PCIe 5.0 technology, and this platform excels at demanding applications such as complex 3D CAD, AI development, simulations, 3D rendering, and more.

The new generation of Intel "Sapphire Rapids" Xeon W-3400 & W-2400 series processors adds some significant benefits compared to the prior-gen "Ice Lake" Xeon W-3300 processors. Like its predecessor, the new Xeon processors support up to 4 TB of 8-channel memory; however, the new Xeon CPUs have moved to DDR5, which is incredibly advantageous because of the big jump in memory bandwidth. Second, the new processors deliver higher CPU performance across most workloads, partly due to higher core counts and higher clock speeds. As mentioned before, the new Xeon processors support PCIe Gen 5 devices and speeds for higher throughput between the CPU and devices such as GPUs.

Intel Xeon W9-3495X Sets World Record, Dethrones AMD Threadripper

When Intel introduced its 4th generation Xeon W processors, the company announced that the clock multiplier was left unlocked, available for overclockers to try and push these chips even harder. It was only a matter of time before we saw the top-end Xeon W SKU take a shot at the world record in Cinebench R23. The Intel Xeon W9-3495X is officially the world-record score holder with 132,484 points in Cinebench R23. The overclocker OGS from Greece managed to push all 56 cores and 112 threads of the CPU to a 5.4 GHz clock frequency using a liquid nitrogen (LN2) cooling setup. Using an ASUS Pro WS W790E-SAGE SE motherboard and a G.SKILL Zeta R5 RAM kit, the OC record was set on March 8th.

The previous record holder was AMD's Threadripper Pro 5995WX, with 64 cores and 128 threads clocked at 5.4 GHz. Not only did the Xeon W9-3495X set the Cinebench R23 record, but the SKU also set new records in Cinebench R20, Y-Cruncher, the 3DMark CPU test, and Geekbench 3.

Team Group Announces DDR5-6800 ECC RDIMM with XMP 3.0

Leading memory provider Team Group today announced a breakthrough in specs for its newest DDR5 ECC R-DIMM memory module, which runs at an increased data rate of 5,600 MT/s, meeting the JEDEC standard for high-performance specifications. In addition, the company has collaborated with well-known motherboard manufacturer ASRock to complete compatibility testing on HEDT platforms equipped with Intel 4th Gen Xeon processors, codenamed Sapphire Rapids, and W790 motherboards. The memory module not only fully supports XMP 3.0; it is also the fastest overclocking DDR5 ECC R-DIMM memory on the market today, with a data rate of up to 6,800 MT/s.

Sapphire Rapids is Intel's first server processor to support DDR5 ECC R-DIMM memory. When paired with the next-generation W790 workstation motherboard, users can adjust the CPU's overclocking settings in the BIOS and enable the clock-speed adjustment feature of DDR5 ECC R-DIMM memory. Having undergone strict compatibility and stability testing, the JEDEC-compliant, high-frequency memory comes in both 16 GB and 32 GB capacity variants to meet the demand for workstation upgrades. The memory is also available in 6,400 MT/s and 6,800 MT/s models with XMP 3.0 support, providing next-gen HEDT platforms with cutting-edge performance.

Supermicro Accelerates A Wide Range of IT Workloads with Powerful New Products Featuring 4th Gen Intel Xeon Scalable Processors

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, will be showcasing its latest generation of systems that accelerate workloads for the entire Telco industry, specifically at the edge of the network. These systems are part of the newly introduced Supermicro Intel-based product line: better, faster, and greener systems based on the brand-new 4th Gen Intel Xeon Scalable processors (formerly codenamed Sapphire Rapids) that deliver up to 60% better workload-optimized performance. From a performance standpoint, these new systems demonstrate up to 30X faster AI inference on large models for AI and edge workloads with NVIDIA H100 GPUs. In addition, Supermicro systems support the new Intel Data Center GPU Max Series (formerly codenamed Ponte Vecchio) across a wide range of servers. The Intel Data Center GPU Max Series contains up to 128 Xe-HPC cores and will accelerate a range of AI, HPC, and visualization workloads. Supermicro X13 AI systems will support next-generation built-in accelerators and GPUs of up to 700 W from Intel, NVIDIA, and others.

Supermicro's wide range of product families is deployed in a broad range of industries to speed up workloads and allow faster and more accurate decisions. With the addition of purpose-built servers tuned for networking workloads, such as Open RAN deployments and private 5G, the 4th Gen Intel Xeon Scalable processor vRAN Boost technology reduces power consumption while improving performance. Supermicro continues to offer a wide range of environmentally friendly servers for workloads from the edge to the data center.

Intel Accelerates 5G Leadership with New Products

For more than a decade, Intel and its partners have been on a mission to virtualize the world's networks, from the core to the RAN (radio access network) and out to the edge, moving them from fixed-function hardware onto programmable, software-defined platforms, making networks more agile while driving down their complexity and cost.

Now operators are looking to cross the next chasm in delivering cloud-native functionality for automating, managing and responding to an increasingly diverse mix of data and services, providing organizations with the intelligence needed at the edge of their operations. Today, Intel announced a range of products and solutions driving this transition and broad industry support from leading operators, original equipment manufacturers (OEMs) and independent software vendors (ISVs).