News Posts matching #Xeon

ORNL's Exaflop Machine Frontier Keeps Top Spot, New Competitor Leonardo Breaks the Top10 List

The 60th edition of the TOP500 reveals that the Frontier system is still the only true exascale machine on the list.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier's near-tripling of the HPL score of the second-place finisher is still a major victory for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures performance in mixed-precision calculations. Frontier is based on the HPE Cray EX235a architecture and relies on AMD EPYC 64C 2 GHz processors. The system has 8,730,112 cores, a power efficiency rating of 52.23 gigaflops/watt, and uses HPE's Slingshot-11 interconnect for data transfer.
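As a quick plausibility check, the HPL score and the efficiency rating together imply Frontier's power draw. A back-of-the-envelope sketch, using only the two figures quoted above:

```python
# Sanity-checking the TOP500 figures quoted above (both inputs come from the text).
hpl_flops = 1.102e18      # HPL score: 1.102 EFlop/s
gflops_per_watt = 52.23   # reported power efficiency

# Implied system power = total flops/s divided by flops/s-per-watt
power_mw = hpl_flops / (gflops_per_watt * 1e9) / 1e6
print(f"Implied power draw: {power_mw:.1f} MW")  # about 21.1 MW
```

The result, roughly 21 MW, is consistent with the power budgets publicly discussed for exascale-class systems.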

TYAN Showcases Upcoming 4th Gen Intel Xeon Scalable Processor Powered HPC Platforms at SC22

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, is showcasing its upcoming server platforms powered by 4th Gen Intel Xeon Scalable processors, optimized for the HPC and storage markets, at SC22, November 14-17, at Booth #2000 in the Kay Bailey Hutchison Convention Center Dallas.

"Greater availability of new technology like 4th Gen Intel Xeon Scalable processors continues driving the changes in the HPC landscape," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure Business Unit. "The advances in chip technology, coupled with the rise in cloud computing, have brought high levels of compute power within reach of smaller organizations. HPC is now affordable and accessible to a new generation of users."

AMD 4th Generation EPYC "Genoa" Processors Benchmarked

Yesterday, AMD announced the latest addition to its data center family of processors: EPYC "Genoa." Named the 4th generation of EPYC processors, they feature the Zen 4 design and bring additional I/O connectivity like PCIe 5.0, DDR5, and CXL support. To disrupt cloud, enterprise, and HPC offerings, AMD decided to manufacture SKUs with up to 96 cores and 192 threads, an increase from the previous generation's 64C/128T designs. Today, we are learning more about the performance and power characteristics of the 4th generation AMD EPYC Genoa 9654, 9554, and 9374F SKUs from third-party sources rather than the official AMD presentation. Tom's Hardware published a heap of benchmarks covering rendering, compilation, encoding, parallel computing, molecular dynamics, and much more.

The comparison tests include the AMD EPYC "Milan" 7763 and 75F3, and the Intel Xeon Platinum 8380, Intel's current top-end offering until Sapphire Rapids arrives. Comparing 3rd-gen 64C/128T EPYC SKUs with their 4th-gen 64C/128T counterparts, the new generation brings about a 30% performance increase in compression and parallel-compute benchmarks. When scaling to the 96C/192T SKU, the gap widens, and AMD has a clear performance leader in the server marketplace; Tom's Hardware's article has the full benchmark results. As for the comparison with Intel's offerings, AMD leads the pack with a more performant single- and multi-threaded design. Beating Sapphire Rapids to market is a significant win for team red, and we are still waiting to see how the 4th generation Xeon stacks up against Genoa.

Intel Delivers Leading AI Performance Results on MLPerf v2.1 Industry Benchmark for DL Training

Today, MLCommons published results of its industry AI performance benchmark in which both the 4th Generation Intel Xeon Scalable processor (code-named Sapphire Rapids) and Habana Gaudi 2 dedicated deep learning accelerator logged impressive training results.


"I'm proud of our team's continued progress since we last submitted leadership results on MLPerf in June. Intel's 4th Gen Xeon Scalable processor and Gaudi 2 AI accelerator support a wide array of AI functions and deliver leadership performance for customers who require deep learning training and large-scale workloads." — Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group

Hewlett Packard Enterprise Brings HPE Cray EX and HPE Cray XD Supercomputers to Enterprise Customers

Hewlett Packard Enterprise (NYSE: HPE) today announced it is making supercomputing accessible for more enterprises to harness insights, solve problems and innovate faster by delivering its world-leading, energy-efficient supercomputers in a smaller form factor and at a lower price point.

The expanded portfolio includes new HPE Cray EX and HPE Cray XD supercomputers, which are based on HPE's exascale innovation that delivers end-to-end, purpose-built technologies in compute, accelerated compute, interconnect, storage, software, and flexible power and cooling options. The supercomputers provide significant performance and AI-at-scale capabilities to tackle demanding, data-intensive workloads, speed up AI and machine learning initiatives, and accelerate innovation to deliver products and services to market faster.

Intel 4th Gen Xeon Scalable "Sapphire Rapids" Server Processors Launch in January

Intel just finalized the launch date of its 4th Gen Xeon Scalable "Sapphire Rapids" server processors. The company plans to launch them on January 10, 2023. The new processors will be launched at a special event dedicated to the company's various new Data Center (group) innovations, which cover server processors, new networking innovations, possible launches from Intel's ecosystem partners, and more.

A lot is riding on the success of "Sapphire Rapids," as it marks the introduction of Intel's new high-performance CPU core to the enterprise segment, at core counts of up to 60 cores/120 threads per socket, along with cutting-edge new I/O that includes DDR5 memory, PCI-Express Gen 5, next-gen CXL, and on-package HBM on certain variants.

Intel Reports Third-Quarter 2022 Financial Results

Intel Corporation today reported third-quarter 2022 financial results. "Despite the worsening economic conditions, we delivered solid results and made significant progress with our product and process execution during the quarter," said Pat Gelsinger, Intel CEO. "To position ourselves for this business cycle, we are aggressively addressing costs and driving efficiencies across the business to accelerate our IDM 2.0 flywheel for the digital future."

"As we usher in the next phase of IDM 2.0, we are focused on embracing an internal foundry model to allow our manufacturing group and business units to be more agile, make better decisions and establish a leadership cost structure," said David Zinsner, Intel CFO. "We remain committed to the strategy and long-term financial model communicated at our Investor Meeting."

48-Core Russian Baikal-S Processor Die Shots Appear

In December of 2021, we covered the appearance of Russia's home-grown Baikal-S processor, which packs 48 Arm Cortex-A75 cores. Today, thanks to the famous chip photographer Fritzchens Fritz, we have the first die shots that show us exactly how the Baikal-S SoC is structured internally and what it is made up of. Manufactured on TSMC's 16 nm process, the Baikal-S BE-S1000 design features 48 Arm Cortex-A75 cores running at a 2.0 GHz base and a 2.5 GHz boost frequency. With a TDP of 120 Watts, the design seems efficient, and the Russian company promises performance comparable to Intel's Skylake Xeons or Zen 1-based AMD EPYC processors. It also uses a home-grown RISC-V core for management and for controlling secure boot sequences.

Below, you can see the die shots taken by Fritzchens Fritz, with annotations by Twitter user Locuza marking out the entire SoC. Besides the core clusters, we see that a large slab of cache connects everything, surrounded by six 72-bit DDR4-3200 PHYs and memory controllers. This model features a pretty good selection of I/O for a server CPU, as there are five PCIe 4.0 x16 (4x4) interfaces, with three supporting CCIX 1.0. You can check out more pictures below and see the annotations for yourself.

NEC Selects Supermicro GPU Systems for One of Japan's Largest Supercomputers for Advanced AI Research

Supermicro, a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing that NEC Corporation has selected over 116 Supermicro GPU servers, each with dual-socket 3rd Gen Intel Xeon Scalable processors and eight NVIDIA A100 80 GB GPUs. The Supermicro GPU server line can thus pair the latest and most powerful Intel Xeon Scalable processors with the most advanced AI GPUs from NVIDIA.

"Supermicro is thrilled to deliver an additional 580 PFLOPS of AI training power to its worldwide AI installations," said Charles Liang, president and CEO of Supermicro. "Supermicro GPU servers have been installed at NEC Corporation and are used to conduct state-of-the-art AI research. Our servers are designed for the most demanding AI workloads using the highest-performing CPUs and GPUs. We continue to work with leading customers worldwide to achieve their business objectives faster and more efficiently with our advanced rack-scale server solutions."

SK Hynix Shows Off Odd-sized 48GB and 96GB DDR5 RDIMMs at InnovatiON

At the 2022 Intel InnovatiON event, SK Hynix showed off some unconventional server memory capacities. The company presented DDR5 RDIMMs in 48 GB and 96 GB densities, besides the usual 32 GB, 64 GB, and 128 GB ones. These are offered in data-rates of DDR5-5600 and DDR5-6400, which indicates that DDR5-5600 (JEDEC-standard) could be the standard memory speed supported by Xeon Scalable "Sapphire Rapids" processors, with some (or all) models also supporting DDR5-6400. These are not XMP or overclocking SPDs, but JEDEC-standard ones that the processors can automatically train to. The flagship product at SK Hynix's booth would have to be a mammoth 256 GB DDR5-5600 RDIMM, which should enable servers with up to 4 TB of memory per socket (at 2 RDIMMs per channel).
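The 4 TB-per-socket figure follows from straightforward capacity math. A sketch, assuming the eight DDR5 channels per socket generally attributed to "Sapphire Rapids" and two RDIMMs per channel:

```python
# Capacity math (assumptions: 8 DDR5 channels per socket, 2 RDIMMs per channel).
channels_per_socket = 8
rdimms_per_channel = 2
rdimm_capacity_gb = 256   # the flagship module shown at the booth

per_socket_tb = channels_per_socket * rdimms_per_channel * rdimm_capacity_gb / 1024
print(f"{per_socket_tb:.0f} TB per socket")  # 4 TB
```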

Intel Outs First Xeon Scalable "Sapphire Rapids" Benchmarks, On-package Accelerators Help Catch Up with AMD EPYC

On the second day of its InnovatiON event, Intel turned attention to its next-generation Xeon Scalable "Sapphire Rapids" server processors and demonstrated their on-package accelerators. These are fixed-function hardware components that accelerate specific kinds of popular server workloads (i.e., run them faster than a CPU core can). With these, Intel hopes to close the CPU core-count gap it has with AMD EPYC, as the upcoming "Zen 4" EPYC chips are expected to launch with up to 96 cores per socket in the conventional variant, and up to 128 cores per socket in the cloud-optimized variant.

Intel's on-package accelerators include AMX (advanced matrix extensions), which accelerates recommendation engines, natural language processing (NLP), image recognition, etc.; DLB (dynamic load balancing), which accelerates security gateways and load balancing; DSA (data streaming accelerator), which speeds up the network stack, guest OSes, and migration; IAA (in-memory analytics accelerator), which speeds up big-data (Apache Hadoop), IMDB, and warehousing applications; a feature-rich implementation of the AVX-512 instruction set for a plethora of content-creation and scientific applications; and lastly QAT (QuickAssist Technology), with speed-ups for data compression, OpenSSL, nginx, IPsec, etc. Unlike on "Ice Lake-SP," QAT is now implemented on the processor package instead of the PCH.
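To make the QAT item concrete: the compression work it offloads is the same deflate-style path that zlib performs in software on a CPU core. A minimal software baseline sketch (the payload and compression level here are illustrative, not from Intel's materials):

```python
import zlib

# Software deflate baseline; QAT moves this class of work to fixed-function hardware,
# freeing CPU cores for application logic.
payload = b"example log line: GET /index.html 200\n" * 1024
compressed = zlib.compress(payload, level=6)
ratio = len(payload) / len(compressed)
print(f"{len(payload)} -> {len(compressed)} bytes (ratio {ratio:.0f}x)")
```

On repetitive data like logs, deflate achieves large ratios, which is why compression offload matters for gateway and proxy workloads.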

Inventec's Rhyperior Is the Powerhouse GPU Accelerator System Every Business in the AI And ML World Needs

Taiwan-based leading server manufacturer Inventec's powerhouse GPU accelerator system, Rhyperior, is everything a modern-day business needs in the digital era, especially one relying heavily on Artificial Intelligence (AI) and Machine Learning (ML). A unique and optimal combination of GPUs and CPUs, this 4U GPU accelerator system is based on the NVIDIA A100 Tensor Core GPU and 3rd Gen Intel Xeon Scalable processors (Whitley platform). Rhyperior also includes NVIDIA NVSwitch to dramatically enhance performance, and its power can be an effective tool for modern workloads.

In a world where technology is disrupting our lives as we know them, GPU acceleration is critical: it essentially speeds up processes that would otherwise take much longer. Acceleration boosts execution of complex computational problems that can be broken down into similar, parallel operations. In other words, an excellent accelerator can be a game changer for industries like gaming and healthcare, which increasingly rely on the latest technologies like AI and ML to deliver better, more robust solutions to consumers.

Axiomtek Launches Edge Computer with Dual GPU Expansion for AI Accelerated Processing

Axiomtek - a world-renowned leader relentlessly devoted to the research, development, and manufacturing of innovative and reliable industrial computer products of high efficiency - is proud to introduce the IPC972, its new industrial edge AI system with dual GPU support. The highly expandable edge computer supports Intel Xeon or 10th Gen Intel Core i7/i5/i3 processors (codename: Comet Lake-S) with the Intel W480E chipset. With the ability to support two NVIDIA GeForce RTX 3090 GPU cards, the IPC972 facilitates image processing, real-time control, data analysis, deep learning, AOI, data acquisition, and more automation tasks.

Axiomtek's IPC972 continues the IPC970 series design, offering flexible expansion options with one I/O module slot and four PCIe slots. In addition, it has one M.2 Key B 3042/3050 slot with SIM slot for 5G wireless connection, one M.2 Key E 2234 slot for Wi-Fi/Bluetooth modules, and one full-size PCIe Mini Card slot with SIM slot for Wi-Fi/Bluetooth/LTE modules. With its compact, front-facing I/O design, the IPC972 provides the advantages of fast set-up and easy access and deployment. For stable operation in mission-critical environments, the IPC972 has a wide operating temperature range of -10°C to +60°C and a power input of 24 V DC (uMin=19 V/uMax=30 V) with a power-on delay function, over-voltage protection, overcurrent protection, and reverse-voltage protection.

Intel Expects to Lose More Market Share, to Reconsider Exiting Other Businesses

During the Evercore ISI TMT conference, Intel indicated that the company would continue to lose market share, with a possible bounce-back in the coming years. According to the latest report, Intel CEO Pat Gelsinger said he expects the company to continue losing market share to AMD, as the competition has "too much momentum" going for it. AMD's Ryzen and EPYC processors continue to deliver strong performance and efficiency figures, which drives customers toward the company. On the other hand, Intel expects to have a competing product, especially in the data center business, with Sapphire Rapids Xeon processors set to arrive in 2023. Pat Gelsinger noted, "Competition just has too much momentum, and we haven't executed well enough. So we expect that bottoming. The business will be growing, but we do expect that there continues to be some share losses. We're not keeping up with the overall TAM growth until we get later into '25 and '26 when we start regaining share, material share gains."

The only down years expected to show the toll of solid competition are 2022 and 2023; for the bounce-back, Intel targets 2025 and 2026. "Now, obviously, in 2024, we think we're competitive. 2025, we think we're back to unquestioned leadership with our transistors and process technology," noted CEO Gelsinger. Additionally, he had a word about emerging Arm CPUs competing for the same server market share as Intel and AMD, stating: "Well, when we deliver the Forest product line, we deliver power performance leadership versus all Arm alternatives, as well. So now you go to a cloud service provider, and you say, 'Well, why would I go through that butt ugly, heavy software lift to an ARM architecture versus continuing on the x86 family?'"

DFI Unveils ATX Motherboard ICX610-C621A

DFI, the global leading provider of high-performance computing technology across multiple embedded industries, unveils a server-grade ATX motherboard designed for the Intel Ice Lake platform, powered by 3rd Generation Intel Xeon Scalable processors, and supporting CPUs of up to 205 W. The ICX610-C621A also comes with built-in Intel Speed Select Technology (Intel SST), which provides excellent load balancing between CPUs and multiple accelerator cards to effectively distribute CPU resources, stabilize computation loads, and maximize computing power. As a result, it improves performance by 1.46 times compared to the previous generation.

Featuring powerful performance, the board offers three PCIe x16 slots, two PCIe x8 slots, and one M.2 key, enabling ultra-performance computing, AI workloads, and deep learning, specifically for high-end inspection equipment such as AOI, CT, and MRI applications. The ICX610 also supports up to 512 GB of 3200 MHz ECC RDIMMs, enhancing high-end performance for advanced inspection equipment and improving efficiency.

NVIDIA Grace CPU Specs Remind Us Why Intel Never Shared x86 with the Green Team

NVIDIA designed the Grace CPU, a processor in the classical sense, to replace the Intel Xeon or AMD EPYC processors it was having to cram into its pre-built HPC compute servers for serial-processing roles, mainly because those half-a-dozen GPU HPC processors need to be interconnected by a CPU. The company studied the CPU-level limitations and bottlenecks, not just with I/O but also with the machine architecture, and realized its compute servers need a CPU purpose-built for the role, with an architecture heavily optimized for NVIDIA's APIs. Thus, the NVIDIA Grace CPU was born.

This is NVIDIA's first outing with a CPU whose processing footprint rivals server processors from Intel and AMD. Built on the TSMC N4 (4 nm EUV) silicon fabrication process, it is a monolithic chip deployed standalone or with an H100 HPC processor on a single board that NVIDIA calls a "Superchip." A board with a Grace and an H100 makes up a "Grace Hopper" Superchip, while a board with two Grace CPUs makes a Grace CPU Superchip. Each Grace CPU contains a 900 GB/s coherent switching fabric, which has seven times the bandwidth of PCI-Express 5.0 x16. This is key to connecting the companion H100 processor, or neighboring Superchips on the node, with coherent memory access.
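The "seven times the bandwidth of PCIe 5.0 x16" claim checks out against standard link math. A sketch, assuming 32 GT/s per lane and 128b/130b line coding (the PCIe 5.0 defaults), and counting both directions since the 900 GB/s fabric figure is bidirectional:

```python
# PCIe 5.0 x16 throughput vs. the 900 GB/s fabric figure quoted above.
gt_per_lane = 32            # PCIe 5.0 raw signaling rate per lane (GT/s)
lanes = 16
encoding = 128 / 130        # 128b/130b line-coding overhead

per_direction = gt_per_lane * lanes * encoding / 8   # GB/s, one direction
bidirectional = 2 * per_direction

print(f"PCIe 5.0 x16: {bidirectional:.0f} GB/s bidirectional")
print(f"Fabric advantage: {900 / bidirectional:.1f}x")
```

With roughly 126 GB/s bidirectional for PCIe 5.0 x16, the 900 GB/s fabric works out to just over 7x, matching the claim.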

EK Introduces Fluid Works Compute Series X7000-RM GPU Server

EK Fluid Works, a high-performance workstation manufacturer, is expanding its Compute Series with a rackmount liquid-cooled GPU server, the X7000-RM. The EK Fluid Works Compute Series X7000-RM is tailor-made for high-compute density applications such as machine learning, artificial intelligence, rendering farms, and scientific compute simulations.

What separates the X7000-RM from similar GPU server solutions is EK's renowned liquid cooling and high compute density. It offers 175% more GPU computational power than air-cooled servers of similar size while maintaining 100% of its performance output no matter the intensity or duration of the task. The standard X7000-RM 5U chassis can be equipped with an AMD EPYC Milan-X 64 Core CPU, up to 2 TB of DDR4 RAM, and up to seven NVIDIA A100 80 GB GPUs for the ultimate heavy-duty GPU computational power. Intel Xeon Scalable single and dual socket solutions are also possible, but such configurations are limited to a maximum of five GPUs.

Tachyum Submits Bid for 20-Exaflop Supercomputer to U.S. Department of Energy Advanced Computing Ecosystems

Tachyum today announced that it has responded to a U.S. Department of Energy Request for Information soliciting Advanced Computing Ecosystems for DOE national laboratories engaged in scientific and national security research. Tachyum has submitted a proposal to create a 20-exaflop supercomputer based on Tachyum's Prodigy, the world's first universal processor.

The DOE's request calls for computing systems that are five to 10 times faster than those currently available and/or that can perform more complex applications in "data science, artificial intelligence, edge deployments at facilities, and science ecosystem problems, in addition to the traditional modeling and simulation applications."

AAEON Announces the EPIC-TGH7 4-inch SBC

AAEON, a global leader in industrial computing, has introduced the world to the next generation of single board computers with the release of the EPIC-TGH7, which holds the distinction of being the first board of its kind to host Intel 11th Generation Xeon/Core processors. With such an advanced processor package, the EPIC-TGH7 offers 8 cores and 16 threads to increase processing speed and power for intensive, high-end computing. This advancement has not sacrificed power efficiency, with the EPIC-TGH7 delivering Xeon-level performance at up to 45 W.

Hosting up to 8 USB ports, dual LAN ports, and a PCIe x8 slot, the EPIC-TGH7 enables PCIe 4.0 speeds of up to 16 GT/s, despite retaining the same EPIC board form factor measuring just 4.53" x 6.50" (115 mm x 165 mm). AAEON believes this combination of I/O density and high-speed expansion will be particularly applicable to healthcare imaging and military defense applications, with the board able to accommodate the advanced graphics required for such uses.

Infortrend EonStor GS All Flash U.2 Storage with 100Gb Ethernet Connectivity Tackles Extreme Workloads

Infortrend Technology, Inc., the industry-leading enterprise storage provider, released its flagship EonStor GS all-flash unified storage systems. Featuring the latest Intel Xeon D CPU, PCIe Gen 4, and 100 GbE, the solutions are perfect for applications requiring low latency and high performance, such as databases, virtualization, HPC, and multimedia and entertainment (M&E).

The EonStor GS series is designed for enterprises to flexibly deploy and utilize in a variety of applications. It has been chosen and deployed by several global enterprises and organizations, including world-renowned car-makers, the Czech Republic's Municipal Library, and Turkish media conglomerate Ciner Media Group.

Shuttle Launches Xeon-Compatible XPC Barebone SW580R8 Mini-PC

To satisfy more than just the requirements of typical desktop applications, the Taiwanese Mini-PC pioneer Shuttle is, with the XPC Barebone SW580R8, for the first time marketing a cube-format model with features previously found only in traditional server and workstation products. Based on the Intel W580 chipset and supporting Intel Core processors of the 10th and 11th generation, the SW580R8 also supports Intel Xeon W-series processors, whose strengths lie in VFX, 3D rendering, complex 3D CAD, and AI development & edge management.

For the first time in a Shuttle Mini-PC, you can choose error-correcting ECC RAM; spread over four slots, a maximum of 128 gigabytes is possible. Another first is the total of four network ports, two with a bandwidth of 2.5 Gbit/s and two with 1 Gbit/s, which support separate networks and offer failover and load balancing. One of the ports is vPro- and AMT-compatible and, in conjunction with a suitable processor, enables convenient remote management, even when the PC is switched off.

AMD Confirms Ryzen 7000 Launch Within Q3, Radeon RX 7000 Series Within 2022

AMD, in its Q2-2022 financial results call with analysts, confirmed that the company's next-generation Ryzen 7000 desktop processors based on the "Zen 4" microarchitecture will debut this quarter (i.e. Q3-2022, or before October 2022). CEO Dr Lisa Su stated: "Looking ahead, we're on track to launch our all-new 5 nm Ryzen 7000 desktop processors and AM5 platforms later this quarter with leadership performance in gaming and content creation."

The company also stated that its next-generation Radeon 7000 series GPUs based on the RDNA3 graphics architecture are on track for launch "later this year," without specifying the quarter, which could mean any time before January 2023. AMD is also on course to beat Intel to the next generation of server processors with DDR5 and PCIe Gen 5 support, with its 96-core EPYC "Genoa" processor slated for later this year, as Intel struggles with a Q1-2023 general-availability timeline for its Xeon Scalable "Sapphire Rapids" processors.

Intel Moves Xeon Scalable "Sapphire Rapids" General Availability to February-March 2023

Intel is reportedly moving the general availability of its 4th Gen Xeon Scalable processors, codenamed "Sapphire Rapids," to the region of early February to early March 2023. The enterprise processors were expected to debut toward the end of 2022, and some of the oldest company roadmaps referencing the processor put its launch back in Q1-2021. Igor's Lab reports that there are as many as 12 steppings of the processor, with the latest discovered being E5 (the others being A0, A1, B0, C0, C1, C2, D0, E0, E2, E3, and E4), although these could be validation samples handed out to various large Intel customers to test the chips with their applications. Built on the Intel 7 node, the processor features up to 60 "Golden Cove" CPU cores, a DDR5 memory interface, PCI-Express Gen 5, and various on-die accelerators. Certain variants even feature up to 32 GB of on-package HBM.

AIC Introduces a Mainstream Dual Socket Server Product Family Powered by 3rd Gen. Intel Xeon Scalable Processors

AIC Inc. (hereafter "AIC"), a leading provider of enterprise storage and server solutions, today announced its new product family of mainstream dual-socket storage servers. Powered by 3rd Gen Intel Xeon Scalable processors, these rackmount storage servers are high-performance and designed with high flexibility to support various storage devices. The new product family provides options ranging from cost-efficient servers to high-performance all-flash NVMe platforms, and can fulfill a wide range of data storage applications, from storage tiering and virtualization to cloud datacenters and high-performance computing (HPC).

This new product family includes three models: SB101-A6, SB202-A6, and SB201-A6. These dual-socket server systems are built around the AIC server board (codename: A6), which is based on 3rd Gen Intel Xeon Scalable processors. By leveraging the processors' great features, including CPU TDP support of up to 270 W, DDR4 memory, PCIe Gen 4 readiness, built-in AI, and enhanced security, the new server systems provide excellent performance and low latency while remaining cost-effective. Customers can utilize the enhanced Intel CPU performance, memory capabilities, and doubled PCIe Gen 4 I/O bandwidth to tackle the challenges of data storage workloads. In addition, the new server systems are designed with universal (tri-mode) backplanes able to support SAS, SATA, and NVMe, providing great flexibility for customers to load either 3.5" or 2.5" HDDs/SSDs. With tool-less features, the new server systems can save operators a significant amount of maintenance resources, which is crucial for hyperscaler and cloud environments.