News Posts matching #Xeon


Intel Xeon Platinum "Emerald Rapids" 8558P and 8551C 48-Core CPU SKUs Leak

The Geekbench database of benchmark submissions is yielding more leaks about Intel's upcoming 5th generation Xeon Scalable processors, codenamed Emerald Rapids. Previously, we covered the leak of the possibly top-end 64-core Xeon 8592+ Platinum and a 48-core Xeon 8558U processor. Today, however, we are seeing information about lower-stack SKUs carrying up to 48 cores each. The first in line is the Xeon Platinum 8558P, a 48-core, 96-thread CPU that runs at a 2.7 GHz base frequency and a 4.0 GHz boost frequency. It is listed with 16 MB of L3 cache in addition to 192 MB of L2, for a reported total of 260 MB of cache. The integrated memory controller (IMC) of the Xeon Platinum 8558P supports eight-channel DDR5 running at 4800 MT/s, and the CPU has a TDP of 350 Watts.

The other SKU listed is the Xeon Platinum 8551C, also a 48-core, 96-thread model with the same 260 MB cache configuration. However, this SKU has a higher base frequency of 2.9 GHz, with an unknown boost speed and an unknown IMC configuration. Interestingly, these 48C/96T SKUs feature less cache than the previously leaked 48-core Xeon 8558U, which had 96 MB of L2 cache and 260 MB of L3 cache, for a combined 356 MB. Intel is evidently segmenting its Xeon lineup not only by core count, frequency, and TDP, but also by cache size.
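The cache totals quoted in these leaks follow a simple sum of per-core L2 plus shared L3. A quick sketch using the 8558U figures from the leak (the 2 MB-per-core L2 split is an inference from the reported 96 MB total, not a confirmed spec):

```python
cores = 48
l2_per_core_mb = 2      # inferred: 96 MB of L2 spread across 48 cores
l3_shared_mb = 260      # shared L3 reported for the Xeon 8558U
total_mb = cores * l2_per_core_mb + l3_shared_mb
print(total_mb)  # 356, matching the leaked total for the 8558U
```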

TYAN Unveils its Robust Immersion Cooling Solution Delivering Significant PUE Enhancement at SC23

TYAN, an industry leader in server platform design and a subsidiary of MiTAC Computing Technology Corporation, unveils an immersion cooling solution that delivers significant PUE (Power Usage Effectiveness) enhancement, and showcases its latest server platforms powered by 4th Gen Intel Xeon Scalable Processors targeting HPC, AI, and Cloud Computing applications at SC23, Booth #1917.

Significant PUE Enhancement in an Immersion Cooling Tank vs. a Conventional Air-Cooled Cabinet
The immersion cooling system demonstrated live at the TYAN booth during SC23 is a 4U hybrid single-phase tank enclosure equipped with four TYAN GC68A-B7136 cloud computing servers. Compared to a conventional air-cooled cabinet, this hybrid immersion cooling system offers a substantial PUE improvement, making it an ideal mission-critical solution for users focused on energy savings and green operation.
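PUE, the metric TYAN cites, is simply total facility power divided by the power drawn by the IT equipment itself; a value of 1.0 would mean zero cooling and distribution overhead. A minimal sketch with illustrative figures (typical industry numbers, not TYAN's measured results):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: facility power / IT power (ideal = 1.0)."""
    return total_facility_kw / it_equipment_kw

# Illustrative comparison, assuming 100 kW of IT load in both cases:
air_cooled = pue(total_facility_kw=160.0, it_equipment_kw=100.0)  # ~1.60
immersion = pue(total_facility_kw=108.0, it_equipment_kw=100.0)   # ~1.08
print(air_cooled, immersion)
```

The closer the tank's overhead gets PUE to 1.0, the larger the facility-level energy saving for the same IT workload.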

ASRock Rack Announces Support of NVIDIA H200 GPUs and GH200 Superchips and Highlights HPC and AI Server Platforms at SC23

ASRock Rack Inc., the leading innovative server company, is set to showcase a comprehensive range of servers for diverse AI workloads, catering to scenarios from the edge and on-premises to the cloud, at booth #1737 at SC23, held at the Colorado Convention Center in Denver, USA. The event runs from November 13th to 16th, and ASRock Rack will feature the following significant highlights:

At SC23, ASRock Rack will demonstrate the NVIDIA-Qualified 2U4G-GENOA/M3 and 4U8G series GPU server solutions along with the NVIDIA H100 PCIe. The ASRock Rack 4U8G and 4U10G series GPU servers are able to accommodate eight to ten 400 W dual-slot GPU cards and 24 hot-swappable 2.5" drives, designed to deliver exceptional performance for demanding AI workloads deployed in the cloud environment. The 2U4G-GENOA/M3, tailored for lighter workloads, is powered by a single AMD EPYC 9004 series processor and is able to support four 400 W dual-slot GPUs while having additional PCIe and OCP NIC 3.0 slots for expansion.

MSI Introduces New AI Server Platforms with Liquid Cooling Feature at SC23

MSI, a leading global server provider, is showcasing its latest GPU and CXL memory expansion servers powered by AMD EPYC processors and 4th Gen Intel Xeon Scalable processors, which are optimized for enterprises, organizations and data centers, at SC23, booth #1592 in the Colorado Convention Center in Denver from November 13 to 16.

"The exponential growth of human- and machine-generated data demands increased data center compute performance. To address this demand, liquid cooling has emerged as a key trend," said Danny Hsu, General Manager of Enterprise Platform Solutions. "MSI's server platforms offer a well-balanced hardware foundation for modern data centers. These platforms can be tailored to specific workloads, optimizing performance and aligning with the liquid cooling trend."

Intel "Emerald Rapids" 8592+ and 8558U Xeon CPUs with 64C and 48C Configurations Spotted

Intel's next-generation Emerald Rapids Xeon lineup is just around the corner, and more leaks are surfacing as the launch nears. Today, we get to see leaks of two models: the 64-core Xeon 8592+ Platinum and the 48-core Xeon 8558U. First is the Xeon 8592+ Platinum, possibly Intel's top-end design with 64 cores and 128 threads. Running at a base frequency of 1.9 GHz, the CPU can boost up to 3.9 GHz. This SKU carries 488 MB of total cache, of which 120 MB is dedicated to L2 and 320 MB to L3. The CPU has a TDP of 350 Watts, which can be adjusted up to 420 Watts.

Next up is the Xeon 8558U, which has been spotted in Geekbench. The Xeon 8558U is a 48-core, 96-thread CPU with a 2.0 GHz base clock whose boost frequency has yet to be shown or enabled, likely because it is an engineering sample. It carries 96 MB of L2 cache and 260 MB of L3 cache, for a combined 356 MB. Both SKUs should launch with the remaining models in the Emerald Rapids family, dubbed 5th generation Xeon Scalable, on December 14 this year.

Intel Readies Xeon W-2500 Series with 4-channel Memory to Square Off Against Threadripper 7000

The HEDT/workstation segment is heating up, with Intel preparing to launch a new line of lower core-count processor models with I/O features competitive with those of the AMD Ryzen Threadripper 7000 series for the AMD TRX50 platform. The new W-2500 series is designed for the same Intel W790 chipset Socket LGA4677 motherboards as the W-2400 series, but with increased CPU core counts across the board. The top W-2500 series processor comes in a 26-core/52-thread configuration, with 2 MB of dedicated L2 cache per core and 48.75 MB of shared L3 cache.

Where the Intel Xeon W-2500 series has the edge over the AMD Ryzen Threadripper 7000 (TRX50) is platform I/O. While both processors offer a 4-channel DDR5 interface, the Intel chip offers a 64-lane PCI-Express Gen 5 root complex, compared to the 48-lane PCIe Gen 5 root complex of the AMD processor. The TRX50 platform offers up to 88 PCIe lanes in total, but only 48 of these are Gen 5. The W-2500 series includes seven processor models, ranging from 8-core/16-thread at the bottom to 26-core/52-thread at the top. Here the Threadripper 7000 TRX50 has a distinct advantage, as it offers core counts of up to 64-core/128-thread.

Intel Itanium Reaches End of the Road with Linux Kernel Stopping Updates

Today marks the end of support for Itanium's IA-64 architecture in the Linux kernel, with the 6.7 update—a significant milestone in the winding-down saga of Intel Itanium. Itanium, initially Intel's ambitious venture into 64-bit computing, struggled throughout its existence. It was jointly developed by Intel and HP, but encountered delays and lacked compatibility with x86 software, a significant obstacle to its adoption. When AMD introduced x86-64 (AMD64) for its Opteron CPUs, which could run x86 software natively, Intel was compelled to adopt x86-64 for Xeon, leaving Itanium to fade into the background.

Despite ongoing efforts to sustain Itanium, it no longer received annual CPU product updates, the last of which came in 2017. The removal of IA-64 support from the Linux kernel will have a substantial impact, since Linux is an essential operating system for Itanium CPUs. Without ongoing updates, the usability of Itanium servers will inevitably decline, pushing the few remaining Itanium users—most of whom are likely looking to modernize their product stacks—to migrate to alternative solutions.

Intel, Dell Technologies and University of Cambridge Announce Deployment of Dawn Supercomputer

Dell Technologies, Intel and the University of Cambridge announce the deployment of the co-designed Dawn Phase 1 supercomputer. Leading technical teams built the U.K.'s fastest AI supercomputer that harnesses the power of both artificial intelligence (AI) and high performance computing (HPC) to solve some of the world's most pressing challenges. This sets a clear way forward for future U.K. technology leadership and inward investment into the U.K. technology sector. Dawn kickstarts the recently launched U.K. AI Research Resource (AIRR), which will explore the viability of associated systems and architectures. Dawn brings the U.K. closer to reaching the compute threshold of a quintillion (10^18) floating point operations per second - one exaflop, better known as exascale. For perspective: Every person on earth would have to make calculations 24 hours a day for more than four years to equal a second's worth of processing power in an exascale system.
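The person-years comparison above checks out arithmetically, assuming roughly eight billion people each performing one calculation per second (the population figure is our assumption, not from the announcement):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.156e7 seconds
population = 8.0e9                          # assumption: ~8 billion people
years = 4.0
operations = population * years * SECONDS_PER_YEAR  # one calc per person per second
print(f"{operations:.2e}")  # ~1.01e+18, about one exaflop-second
```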

"Dawn considerably strengthens the scientific and AI compute capability available in the U.K., and it's on the ground, operational today at the Cambridge Open Zettascale Lab. Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI. I'm very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel and the University of Cambridge, and further broaden that to the U.K. scientific and AI community," said Adam Roe, EMEA HPC technical director at Intel.

Velocity Micro Announces ProMagix G480a and G480i, Two GPU Server Solutions for AI and HPC

Velocity Micro, the premier builder of award-winning enthusiast desktops, laptops, high performance computing solutions, and professional workstations, announces the immediate availability of the ProMagix G480a and G480i - two GPU servers optimized for High Performance Computing and Artificial Intelligence. Powered by either dual 4th Gen AMD EPYC or dual 4th Gen Intel Xeon Scalable processors, these 4U form factor servers support up to eight dual-slot PCIe Gen 5 GPUs to create incredible compute power designed specifically for the highest demand workflows including simulation, rendering, analytics, deep learning, AI, and more. Shipments begin immediately.

"By putting emphasis on scalability, functionality, and performance, we've created a line of server solutions that tie in the legacy of our high-end brand while also offering businesses alternative options for more specialized solutions for the highest demand workflows," said Randy Copeland, President and CEO of Velocity Micro. "We're excited to introduce a whole new market to what we can do."

Supermicro Starts Shipments of NVIDIA GH200 Grace Hopper Superchip-Based Servers

Supermicro, Inc., a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing ultimate flexibility and expansion ability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures Plug-and-Play compatibility.

"Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads," said Charles Liang, president and CEO of Supermicro. "It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly and are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises to develop new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots."

Intel Partners with Submer to Cool 1,000+ Watt Processors using Immersion Cooling

Intel and Submer, a company specializing in immersion cooling, are set to unveil a creative immersion cooling system at the OCP Global Summit. This system can efficiently dissipate 1,000 W of power in a single-phase liquid cooling setup designed for deployment in data centers. Unlike traditional water cooling, immersion cooling systems offer higher efficiency and reliability. The solution developed by Submer and Intel is based on a Forced Convection Heat Sink (FCHS) and leverages a heat exchanger for heat transfer with a second liquid. The primary advantage of immersion cooling is its lack of active components on the cooling element, making it possible for immersed systems to operate without them for extended periods.

In this new system, a copper heat sink is paired with two fans at one end to enhance liquid flow through the heat sink using forced convection. This active cooling component, however, contradicts the traditional passive concept of immersion cooling based on natural convection. In the initial phase, Submer and Intel utilized Xeon processors with an 800 W TDP, with plans to increase that figure to 1,000 W in the next step. The Forced Convection Heat Sink (FCHS) is easy to manufacture and cost-effective to use while dissipating up to 1,000 W of waste heat, making it a compelling option for immersion cooling. According to Submer, it could even be 3D printed, and the plan is to achieve cooling of 1 kW+ chips. We expect to hear more about the system during the OCP Global Summit, running from October 17 to 19; for now, we only have a lower-resolution image from Submer's press release.
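The coolant flow needed to carry away a kilowatt is modest; a back-of-envelope sketch using Q = ṁ·c_p·ΔT, with illustrative fluid properties (our assumptions, not Submer's published specifications):

```python
heat_w = 1000.0           # target heat load from the article
cp_j_per_kg_k = 1700.0    # assumed specific heat of a single-phase dielectric fluid
delta_t_k = 10.0          # assumed coolant temperature rise across the heat sink
mass_flow_kg_s = heat_w / (cp_j_per_kg_k * delta_t_k)  # from Q = m_dot * cp * dT
print(f"{mass_flow_kg_s:.3f} kg/s")  # ~0.059 kg/s to carry away 1 kW
```

This is why a pair of small fans driving forced convection through the immersed heat sink is enough to handle loads that would saturate natural convection alone.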

GIGABYTE Announces Immersion GPU Servers and Nodes Following Open Rack V3 Specifications at the OCP Global Summit

Giga Computing, a subsidiary of GIGABYTE Technology and an industry leader in high-performance servers, server motherboards, and workstations, today announced new OCP ORv3-based solutions ahead of exhibiting at the OCP Global Summit, including ones designed for GPU-centric workloads in immersion cooling. The new GIGABYTE products include a 2OU node tray, the TO25-BT0, and compute nodes: TO25-S10, TO25-S11, TO25-S12, TO25-Z10, TO25-Z11, and TO25-Z12. For immersion cooling server deployments following ORv3 specifications, there are the new GPU servers: TO15-S40, TO15-S41, and TO15-Z40.

"Our commitment to support the Open Compute Project's designs dates back almost a decade, starting with our OCP v1.0 products, and now we are releasing some well-designed, in-demand ORv3 products. Through discussions with several customers, we decided to move forward with the development of these configurations. These designs check the boxes for the specifications that are required," said Vincent Wang, Sales VP at Giga Computing. "We will continue supporting ORv3 specifications, and we have more products on the horizon in Q1 2024."

Intel 5th Gen Xeon Platinum 8580 CPU Details Leaked

YuuKi_AnS has brought renewed attention to an already leaked Intel "Emerald Rapids" processor—the 5th Gen Xeon Platinum 8580 CPU was identified with 60 cores and 120 threads in a previous post, but a follow-up has appeared in the form of an engineering prototype (ES2-Q2SP-A0). Yuuki noted: "samples are for reference only, and the actual performance is subject to the official version." Team Blue has revealed a launch date—December 14, 2023—for its 5th Gen Xeon Scalable processor lineup, so it is not surprising to see pre-release examples appear online a couple of months beforehand. This particular ES2 SKU (on A0 silicon) fields an all P-Core configuration consisting of Raptor Cove units, with a dual-chiplet design (30 cores per die). There is a significant bump up in cache sizes when compared to the current "Sapphire Rapids" generation—Wccftech outlines these allocations: "Each core comes with 2 MB of L2 cache for up to 120 MB of L2 cache. The whole chip also features 300 MB of L3 cache which combines to offer a total cache pool of 420 MB."

They bring in some of the competition for comparison: "That's a 2.6x increase in cache versus the existing Sapphire Rapids CPU lineup and while it still doesn't match the 480 MB L3 cache of standard (AMD) Genoa or the 1.5 GB cache pool of Genoa-X, it is a good start for Intel to catch up." Team Blue appears ready to take on AMD on many levels—this week's Innovation Event produced some intriguing announcements including "Sierra Forest vs. Bergamo" and plans to embrace 3D Stacked Cache technology. Yuuki's small batch of screenshots show the Xeon Platinum 8580 CPU's captured clock speeds are far from the finished article—just a touch over 2.0 GHz, so very likely limited to safe margins. An unnamed mainboard utilizing Intel's Eagle Stream platform was logged sporting a dual-socket setup—the test system was running a grand total of 120 cores and 240 threads!

Intel 288 E-core Xeon "Sierra Forest" Out to Eat AMD EPYC Bergamo's Lunch

Intel at its Innovation 2023 event unveiled a 288-core extreme core-count variant of the Xeon "Sierra Forest" processor for high-density servers in scale-out, cloud-native environments. It succeeds the current 144-core model. "Sierra Forest" is a server processor built entirely on efficiency cores (E-cores) using the "Sierra Glen" microarchitecture, a server-grade derivative of "Crestmont," Intel's second-generation E-core that is making its client debut with "Meteor Lake."

Xeon "Sierra Forest" is a chiplet-based processor, much like "Meteor Lake" and the upcoming "Emerald Rapids" server processor. It features a total of five tiles—two Compute tiles, two I/O tiles, and a base tile (interposer). Each of the two Compute tiles is built on the Intel 3 foundry node—more advanced than Intel 4, with higher-density libraries and an undisclosed performance-per-Watt increase. Each tile has 36 "Sierra Glen" E-core clusters, 108 MB of shared L3 cache, 6-channel (12 sub-channel) DDR5 memory controllers, and Foveros tile-to-tile interfaces.
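The 288-core figure is consistent with the tile description above, assuming Intel's usual grouping of four E-cores per cluster (the four-per-cluster figure is an inference from Crestmont's quad-core clusters, not stated in the announcement):

```python
cores_per_cluster = 4    # assumed E-core quad clusters
clusters_per_tile = 36   # per the announcement
compute_tiles = 2        # per the announcement
total_cores = cores_per_cluster * clusters_per_tile * compute_tiles
print(total_cores)  # 288
```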

Pat Gelsinger Says 3D Stacked Cache Tech Coming to Intel

Intel CEO Pat Gelsinger, in the Q&A session of Innovation 2023 Day 1, confirmed that the company is developing 3D-stacked cache technology for its processors. The technology involves expanding the on-die last-level cache (L3 cache) of a processor with an additional SRAM die physically stacked on top, and bonded with the cache's high-bandwidth data fabric. The stacked cache operates at the same speed as the on-die cache, and so the combined cache size is visible to software as a single contiguous addressable block of cache memory.
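That single combined cache is what software would see through the operating system's usual topology interfaces. A small sketch reading the L3 size Linux reports via sysfs (the path exists only on systems exposing cache topology, hence the guard):

```python
from pathlib import Path

def reported_l3_size(cpu=0):
    """Return the L3 size string Linux reports for a CPU, e.g. '36864K', or None."""
    node = Path(f"/sys/devices/system/cpu/cpu{cpu}/cache/index3/size")
    return node.read_text().strip() if node.exists() else None

print(reported_l3_size())
```

On a processor with stacked cache, this value would simply be larger; nothing in the software stack has to distinguish on-die from stacked SRAM.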

AMD has used 3D-stacked cache to good effect on its processors. On client processors such as the Ryzen X3D series, the cache provides significant gaming performance uplifts as the larger L3 cache makes more of the game's rendering data immediately accessible to the CPU cores; while on server processors such as EPYC "Milan-X" and "Genoa-X," the added cache provides significant uplifts to memory intensive compute workloads. Intel's approach to 3D-stacked cache will be different at the hardware level compared to AMD's, Gelsinger stated in his response. AMD's tech has been collaboratively developed with TSMC, and hinges on a TSMC-made SoIC packaging tech that facilitates high-density die-to-die wiring between the CCD and cache chiplet. Intel uses its own fabs for processor dies, and will have to use its own IP.

Intel Innovation 2023: Bringing AI Everywhere

As the world experiences a generational shift to artificial intelligence, each of us is participating in a new era of global expansion enabled by silicon. It's the "Siliconomy," where systems powered by AI are imbued with autonomy and agency, assisting us across both knowledge-based and physical-based tasks as part of our everyday environments.

At Intel Innovation, the company unveiled technologies to bring AI everywhere and to make it more accessible across all workloads - from client and edge to network and cloud. These include easy access to AI solutions in the cloud, better price performance for Intel data center AI accelerators than the competition offers, tens of millions of new AI-enabled Intel PCs shipping in 2024 and tools for securely powering AI deployments at the edge.

Supermicro Announces Future Support and Upcoming Early Access for 5th Gen Intel Xeon Processors on the Complete Family of X13 Servers

Intel Innovation 2023 -- Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing future support for the upcoming 5th Gen Intel Xeon processors. In addition, Supermicro will soon offer early shipping and free remote early access testing of the new Systems via its JumpStart Program for qualified customers. To learn more, go to www.supermicro.com/x13 for details. The Supermicro 8x GPU optimized servers, the SuperBlade servers, and the Hyper Series will soon be ready for customers to test their workloads on the new CPU.

"Supermicro's range of Generative High-Performance AI systems, including recently launched GPU systems, continues to lead the industry in AI offerings with its broad range of X13 family of servers designed for various workloads, from the edge to the cloud," said Charles Liang, president and CEO of Supermicro. "Our support for the upcoming 5th Gen Intel Xeon processors, with more cores, an increased performance per watt, and the latest DDR5-5600 MHz memory, will allow our customers to realize even greater application performance and power efficiency for AI, Cloud, 5G Edge, and Enterprise workloads. These new features will help customers accelerate their business and maximize their competitive advantage."

MiTAC to Showcase Cloud and Datacenter Solutions, Empowering AI at Intel Innovation 2023

Intel Innovation 2023 - September 13, 2023 - MiTAC Computing Technology, a professional IT solution provider and a subsidiary of MiTAC Holdings Corporation, will showcase its DSG (Datacenter Solutions Group) product lineup powered by 4th Gen Intel Xeon Scalable processors for enterprise, cloud and AI workloads at Intel Innovation 2023, booth #H216 in the San Jose McEnery Convention Center, USA, from September 19-20.

"MiTAC has seamlessly and successfully managed the Intel DSG business since July. The datacenter solution product lineup enhances MiTAC's product portfolio and service offerings. Our customers can now enjoy a comprehensive one-stop service, ranging from motherboards and barebones servers to Intel Data Center blocks and complete rack integration for their datacenter infrastructure needs," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology.

Intel Shows Strong AI Inference Performance

Today, MLCommons published results of its MLPerf Inference v3.1 performance benchmark for GPT-J, the 6 billion parameter large language model, as well as computer vision and natural language processing models. Intel submitted results for Habana Gaudi 2 accelerators, 4th Gen Intel Xeon Scalable processors, and Intel Xeon CPU Max Series. The results show Intel's competitive performance for AI inference and reinforce the company's commitment to making artificial intelligence more accessible at scale across the continuum of AI workloads - from client and edge to the network and cloud.

"As demonstrated through the recent MLCommons results, we have a strong, competitive AI product portfolio, designed to meet our customers' needs for high-performance, high-efficiency deep learning inference and training, for the complete spectrum of AI models - from the smallest to the largest - with leading price/performance." -Sandra Rivera, Intel executive vice president and general manager of the Data Center and AI Group

Intel Demos 6th Gen Xeon Scalable CPUs, Core Counts Leaked

Intel demonstrated its advanced packaging prowess this week—attendees were able to get an early look at Team Blue's sixth generation Xeon Scalable "Granite Rapids" processors. This multi-tile, datacenter-oriented CPU family is projected to hit the market within the first half of 2024, but reports suggest that key enterprise clients have recently received evaluation samples. Coincidentally, renowned hardware leaker YuuKi_AnS has managed to source more information from industry insiders. This follows their complete blowout of the more mainstream Raptor Lake Refresh desktop SKUs.

The leaked slide presents a bunch of evaluation sample "Granite Rapids-SP" XCC and "Sierra Forest" HCC SKUs. Intel has not officially published core counts for these upcoming "Avenue City" platform product lines. According to their official marketing blurb: "Intel Xeon processors with P-cores (Granite Rapids) are optimized to deliver the lowest total cost of ownership (TCO) for high-core performance-sensitive workloads and general-purpose compute workloads. Today, Xeon enables better AI performance than any other CPU, and Granite Rapids will further enhance AI performance. Built-in accelerators give an additional boost to targeted workloads for even greater performance and efficiency."

Supermicro Launches Industry Leading vSAN HCI Solution

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, today announced a new VMware vSAN solution optimized to run enterprise-class hyperconverged virtualized workloads. As virtualized workloads become more advanced, processing power and storage performance requirements increase, requiring greater capacity to meet application SLAs and maximize virtual machine density. This solution also utilizes the latest Intel AMX accelerator for AI workloads.

In benchmark testing conducted by Supermicro, the new solution showed up to 4.7X higher IO throughput and 8.2X lower latency on the HCIBench benchmark, up to 4.9X faster image classification inference throughput on the ResNet50 model, and up to 4X faster natural language processing throughput on the BERT-Large model when compared to the Supermicro X11 BigTwin. In addition, the superior power efficiency of the Supermicro X13 BigTwin architecture can deliver up to a 3X cost and performance improvement within the same node footprint compared to a similar deployment based on older-generation Supermicro systems, creating a compelling case for organizations to upgrade their aging infrastructure.

IBASE Announces INA8505 Enterprise 1U Edge Server for 5G Open vRAN & MEC

IBASE Technology Inc. (TPEx: 8050), a global leader in network appliances and embedded solutions, proudly announces the release of the INA8505 enterprise 1U edge server. Powered by the Intel Xeon D-2700 processor and offering versatile connectivity options, this state-of-the-art appliance is specifically designed to excel in demanding 5G Open vRAN & MEC applications such as real-time data analytics, autonomous vehicles, and smart city deployments. It enables full control over resource allocation in the RAN and MEC, and has the potential to seamlessly integrate AI capabilities to dynamically optimize network performance in real time at the edge of the 5G network infrastructure.

The INA8505 delivers unmatched performance, scalability, and efficiency with flexible storage, offering two SATA/NVMe 2.5" HDD/SSD slots, 2x M.2 (M-key) SATA/PCI-E storage slots, and one 16 GB/32 GB/64 GB eMMC. With an FHFL PCI-E (x16) Gen 4 (supports 75 W) and an FHFL PCI-E (x8) Gen 4 (supports 75 W) configurable as PCI-E (x16) Gen 4 double FHFL (supports 120 W), the INA8505 adapts effortlessly to different network environments and meets future demands for increased scalability. It boasts a rich array of I/O connectivity options, including a VGA port from BMC (Aspeed 2600, IPMI 2.0 support), two USB 2.0 Type-A ports, an RJ45 console port, and four 25 GbE SFP28 ports, ensuring enhanced adaptability to various connectivity needs.

"Downfall" Intel CPU Vulnerability Can Impact Performance By 50%

Intel has recently revealed a security vulnerability named Downfall (CVE-2022-40982) that impacts multiple generations of Intel processors. The vulnerability is linked to Intel's memory optimization features, exploiting the Gather instruction, a function that accelerates data fetching from scattered memory locations. It inadvertently exposes internal hardware registers, allowing malicious software to access data held by other programs. The flaw affects Intel mainstream and server processors ranging from the Skylake to Rocket Lake microarchitectures, and Intel has published the entire list of affected CPUs. Intel has responded by releasing an updated microcode fix delivered through software. However, there is concern over the performance impact of the fix, which can affect AVX2 and AVX-512 workloads involving the Gather instruction by up to 50%.

Phoronix tested the Downfall mitigations and reported varying performance decreases on different processors. For instance, two Xeon Platinum 8380 processors were around 6% slower in certain tests, while the Core i7-1165G7 faced performance degradation ranging from 11% to 39% in specific benchmarks. While these reductions were less than Intel's forecasted 50% overhead, they remain significant, especially in High-Performance Computing (HPC) workloads. The ramifications of Downfall are not restricted to specialized tasks like AI or HPC but may extend to more common applications such as video encoding. Though the microcode update is not mandatory and Intel provides an opt-out mechanism, users are left with a challenging decision between security and performance. Executing a Downfall attack might seem complex, but the final choice between implementing the mitigation or retaining performance will likely vary depending on individual needs and risk assessments.
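Conceptually, the gather instruction at the heart of Downfall performs many scattered memory loads in a single vector operation; in scalar form it is just an indexed-load loop (a simplified illustration of the instruction's semantics, not the exploit itself):

```python
# What an AVX2/AVX-512 gather does, written as plain scalar Python:
table = [10, 20, 30, 40, 50, 60, 70, 80]   # data scattered in memory
indices = [6, 0, 3, 3]                     # arbitrary, non-contiguous positions
gathered = [table[i] for i in indices]     # the vectorized form fetches all at once
print(gathered)  # [70, 10, 40, 40]
```

The mitigation's cost comes from slowing exactly this scattered-load fast path, which is why AVX2/AVX-512 workloads that lean on gather see the largest regressions.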

Supermicro Announces High Volume Production of E3.S All-Flash Storage Portfolio with CXL Memory Expansion

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is delivering high-throughput, low-latency E3.S storage solutions supporting the industry's first PCIe Gen 5 drives and CXL modules to meet the demands of large AI training and HPC clusters, where massive amounts of unstructured data must be delivered to the GPUs and CPUs to achieve faster results.

Supermicro's Petascale systems are a new class of storage servers supporting the latest industry standard E3.S (7.5 mm) Gen 5 NVMe drives from leading storage vendors for up to 256 TB of high throughput, low latency storage in 1U, or up to half a petabyte in 2U. Inside, Supermicro's innovative symmetrical architecture reduces latency by ensuring the shortest signal paths for data and maximizes airflow over critical components, allowing them to run at optimal speeds. With these new systems, a standard rack can now hold over 20 Petabytes of capacity for high throughput NVMe-oF (NVMe over Fabrics) configurations, ensuring that GPUs remain saturated with data. Systems are available with either 4th Gen Intel Xeon Scalable processors or 4th Gen AMD EPYC processors.

Intel 4th Gen Xeon Powers New Amazon EC2 M7i-flex and M7i Instances

Today, Amazon Web Services (AWS) announced the general availability of new Amazon Elastic Compute Cloud (Amazon EC2) instances powered by custom 4th Gen Intel Xeon Scalable processors. This launch is the latest on a growing list of 4th Gen Xeon-powered instances that deliver leading total cost of ownership (TCO) and the most built-in accelerators of any CPU to fuel key workloads like AI, database, networking and enterprise applications.

"Intel worked closely with AWS to bring our feature-rich 4th Gen Xeon processors to its cloud customers, many of which have benefited from its performance and value for months in private and public preview. Today, we're happy to bring that same real-world value to cloud customers around the globe," said Lisa Spelman, Intel corporate vice president and general manager of the Xeon Products and Solutions Group.