News Posts matching #Xeon


Intel to Host 4th Gen Xeon Scalable and Max Series Launch on the 10th of January

On Jan. 10, Intel will officially welcome to market the 4th Gen Intel Xeon Scalable processors and the Intel Xeon CPU Max Series, as well as the Intel Data Center GPU Max Series for high performance computing (HPC) and AI. Hosted by Sandra Rivera, executive vice president and general manager of the Datacenter and AI Group at Intel, and Lisa Spelman, corporate vice president and general manager of Intel Xeon Products, the event will highlight the value of 4th Gen Intel Xeon Scalable processors and the Intel Max Series product family, while showcasing customer, partner and ecosystem support.

The event will demonstrate how Intel is addressing critical needs in the marketplace with a focus on a workload-first approach, performance leadership in key areas such as AI, networking and HPC, the benefits of security and sustainability, and how the company is delivering significant outcomes for its customers and the industry.

New Intel oneAPI 2023 Tools Maximize Value of Upcoming Intel Hardware

Today, Intel announced the 2023 release of the Intel oneAPI tools - available in the Intel Developer Cloud and rolling out through regular distribution channels. The new oneAPI 2023 tools support the upcoming 4th Gen Intel Xeon Scalable processors, Intel Xeon CPU Max Series and Intel Data Center GPUs, including Flex Series and the new Max Series. The tools deliver performance and productivity enhancements, and also add support for new Codeplay plug-ins that make it easier than ever for developers to write SYCL code for non-Intel GPU architectures. These standards-based tools deliver choice in hardware and ease in developing high-performance applications that run on multiarchitecture systems.

"We're seeing encouraging early application performance results on our development systems using Intel Max Series GPU accelerators - applications built with Intel's oneAPI compilers and libraries. For leadership-class computational science, we value the benefits of code portability from multivendor, multiarchitecture programming standards such as SYCL and Python AI frameworks such as PyTorch, accelerated by Intel libraries. We look forward to the first exascale scientific discoveries from these technologies on the Aurora system next year."
-Timothy Williams, deputy director, Argonne Computational Science Division

Intel Plans "Raptor Lake Refresh" Core Desktop Processors for Q3-2023, "Sapphire Rapids 64L" HEDT in Q1

Intel is planning to refresh its desktop processor product stack with new "Raptor Lake Refresh" SKUs in Q3-2023, according to a leaked roadmap. At this point it's unclear if these are just new SKUs within the 13th Gen Core desktop product stack, or if they'll form the 14th Gen Core family, much in the same way "Coffee Lake Refresh" formed the 9th Gen Core, replacing the 8th Gen Core "Coffee Lake." We don't yet know what constitutes "Raptor Lake Refresh," but it gives Intel's product managers the opportunity to increase CPU core counts across the product stack without needing new silicon (the "Raptor Lake" die has 8 P-cores and 16 E-cores), along with slightly higher clock speeds and other improvements. We also don't know whether this will herald a new CPU socket or platform.

The most interesting item in this leaked roadmap slide has to be the reference to the "mainstream workstation" segment, with products in the 250 W TDP bracket. The so-called "Sapphire Rapids 64L" could be a cut-down version of the "Sapphire Rapids" enterprise processor on a new socket, backed by the Intel W790 chipset. The "64L" part of the codename could refer to its PCIe Gen 5 lane count of 64, which is less than the 112 available to the full "Sapphire Rapids" silicon in its W-3400 product stack. It's unclear whether these processors will carry Core X branding like their predecessors from the "Cascade Lake-X" family, or Xeon W branding. Besides fewer PCIe lanes, Intel could also segment these chips with fewer DDR5 memory channels, though both the PCIe and DDR5 connectivity will be much wider than those of the "Raptor Lake-S" mainstream desktop processors.

Nfina Technologies Releases 3rd Gen Intel Xeon Scalable Processor-based Systems

Nfina announces the addition of three new server systems to its lineup, customized for hybrid/multi-cloud, hyperconverged HA infrastructure, HPC, backup/disaster recovery, and business storage solutions. Featuring 3rd Gen Intel Xeon Scalable Processors, Nfina-Store, and Nfina-View software, these scalable server systems fill a void in the marketplace, bringing exceptional multi-socket processing performance, easy-to-use management tools, built-in backup, and rapid disaster recovery.

"We know we must build systems for the business IT needs of today while planning for unknown future demands. Flexible infrastructure is key, optimized for hybrid/multi-cloud, backup/disaster recovery, HPC, and growing storage needs," says Warren Nicholson, President and CEO of Nfina. He continues, "Flexible infrastructure also means offering managed services like IaaS, DRaaS, etc., that provide customers with choices that fit the size of their application and budget - not a one-size-fits-all approach like many of our competitors. Our goal is to serve many different business IT applications, any size, anywhere, at any time."

AWS Updates Custom CPU Offerings with Graviton3E for HPC Workloads

Amazon Web Services' (AWS) cloud division is extensively developing custom Arm-based CPU solutions to suit its enterprise clients and is releasing new iterations of the Graviton series. Today, during the company's re:Invent week, we are getting a new CPU custom-tailored to high-performance computing (HPC) workloads, called Graviton3E. Given that HPC workloads require higher bandwidth, wider datapaths, and data types spanning multiple dimensions, AWS redesigned the Graviton3 processor and enhanced it with new vector processing capabilities under a new name: Graviton3E. This CPU promises up to 35% higher performance in workloads that depend on heavy vector processing.

With the rising popularity of HPC in the cloud, AWS sees a significant market opportunity and is trying to capture it. This chip will be offered with up to 64 vCPUs and 128 GiB of memory. The EC2 tiers carrying the enhanced chip are the C7gn and Hpc7g instances, which provide 200 Gbps of dedicated network bandwidth optimized for traffic between instances in the same VPC. In addition, Intel-based R7iz instances are available for HPC users in the cloud, now powered by 4th Generation Xeon Scalable processors codenamed Sapphire Rapids.

Intel Finally Reveals its Software Defined Silicon as Intel On Demand

Back in September 2021, reports about Intel working on something called SDSi, or Software Defined Silicon, started to appear. Now, over a year later, the company has finally launched its SDSi products under the Intel On Demand branding. Back then, we speculated about which features Intel would put behind a paywall, and although we were somewhat off track, Intel has put some specific accelerators and security features behind the paywall on supported Xeon processors. Specifically, some CPUs will have QuickAssist, Dynamic Load Balancer and Data Streaming Accelerator available as On Demand features. Additionally, Intel is also putting its Software Guard Extensions and In-Memory Analytics Accelerator behind the same paywall.

It appears that these features will be offered as a service through some of Intel's partners, but there's also a "one-time activation of select CPU accelerators and security features," according to the Intel On Demand website. It's unclear which Xeon SKUs will get Intel On Demand, but according to The Register, the upcoming "Sapphire Rapids"-based Xeon processors should be the first parts affected. Intel has listed partners like HP, Lenovo and Supermicro, among others, that are involved with the Intel On Demand program. It will still be possible to buy next-gen Xeon CPUs that are fully feature-enabled, as today, but it's unclear whether the Intel On Demand Xeon SKUs will offer some kind of cost benefit to companies that don't need the additional features behind the paywall.

ORNL's Exaflop Machine Frontier Keeps Top Spot, New Competitor Leonardo Breaks the Top10 List

The 60th edition of the TOP500 reveals that the Frontier system is still the only true exascale machine on the list.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier's near-tripling of the HPL score of the second-place system is still a major victory for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculation. Frontier is based on the HPE Cray EX235a architecture and relies on AMD EPYC 64C 2 GHz processors. The system has 8,730,112 cores, a power-efficiency rating of 52.23 gigaflops/watt, and uses HPE's Ethernet-based Slingshot-11 interconnect for data transfer.
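The listed figures can be cross-checked against each other: dividing the HPL score by the power-efficiency rating gives the system's approximate power draw during the benchmark run. A quick sketch in Python (the ~21 MW result is derived from the two published numbers, not taken from the list entry itself):

```python
# Derive Frontier's approximate power draw from its TOP500 figures:
# HPL score of 1.102 EFlop/s at an efficiency of 52.23 gigaflops/watt.
hpl_flops = 1.102e18    # 1.102 EFlop/s in flop/s
efficiency = 52.23e9    # 52.23 GFlop/s per watt

power_watts = hpl_flops / efficiency
power_mw = power_watts / 1e6

print(f"Approximate power draw: {power_mw:.1f} MW")  # ~21.1 MW
```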

TYAN Showcases Upcoming 4th Gen Intel Xeon Scalable Processor Powered HPC Platforms at SC22

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, is showcasing its upcoming server platforms powered by 4th Gen Intel Xeon Scalable processors, optimized for the HPC and storage markets, at SC22, November 14-17, at Booth #2000 in the Kay Bailey Hutchison Convention Center Dallas.

"Greater availability of new technology like 4th Gen Intel Xeon Scalable processors continues to drive changes in the HPC landscape," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure Business Unit. "The advances in chip technology, coupled with the rise in cloud computing, have brought high levels of compute power within reach for smaller organizations. HPC is now affordable and accessible to a new generation of users."

AMD 4th Generation EPYC "Genoa" Processors Benchmarked

Yesterday, AMD announced the latest addition to its data center family of processors, EPYC "Genoa." Named the 4th Generation EPYC processors, they feature the "Zen 4" design and bring additional I/O connectivity like PCIe 5.0, DDR5, and CXL support. To disrupt cloud, enterprise, and HPC offerings, AMD is manufacturing SKUs with up to 96 cores and 192 threads, an increase over the previous generation's 64C/128T designs. Today, we are learning more about the performance and power characteristics of the 4th Generation AMD EPYC Genoa 9654, 9554, and 9374F SKUs from third-party sources rather than official AMD presentations. Tom's Hardware published a heap of benchmarks covering rendering, compilation, encoding, parallel computing, molecular dynamics, and much more.

The comparison tests include the AMD EPYC "Milan" 7763 and 75F3, and the Intel Xeon Platinum 8380, Intel's current top-end offering until Sapphire Rapids arrives. Comparing 3rd-gen 64C/128T EPYC SKUs with their 4th-gen 64C/128T counterparts, the new generation brings about a 30% increase in compression and parallel-compute benchmark performance. When scaling up to the 96C/192T SKU, the gap widens, and AMD has a clear performance leader in the server marketplace. As for the comparison with Intel's offerings, AMD leads the pack with a more performant single- and multi-threaded design. Beating Sapphire Rapids to market is a significant win for team red, and we are still waiting to see how the 4th Generation Xeon stacks up against Genoa.

Intel Delivers Leading AI Performance Results on MLPerf v2.1 Industry Benchmark for DL Training

Today, MLCommons published results of its industry AI performance benchmark in which both the 4th Generation Intel Xeon Scalable processor (code-named Sapphire Rapids) and Habana Gaudi 2 dedicated deep learning accelerator logged impressive training results.

"I'm proud of our team's continued progress since we last submitted leadership results on MLPerf in June. Intel's 4th gen Xeon Scalable processor and Gaudi 2 AI accelerator support a wide array of AI functions and deliver leadership performance for customers who require deep learning training and large-scale workloads."
-Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group

Hewlett Packard Enterprise Brings HPE Cray EX and HPE Cray XD Supercomputers to Enterprise Customers

Hewlett Packard Enterprise (NYSE: HPE) today announced it is making supercomputing accessible for more enterprises to harness insights, solve problems and innovate faster by delivering its world-leading, energy-efficient supercomputers in a smaller form factor and at a lower price point.

The expanded portfolio includes new HPE Cray EX and HPE Cray XD supercomputers, which are based on HPE's exascale innovation that delivers end-to-end, purpose-built technologies in compute, accelerated compute, interconnect, storage, software, and flexible power and cooling options. The supercomputers provide significant performance and AI-at-scale capabilities to tackle demanding, data-intensive workloads, speed up AI and machine learning initiatives, and accelerate innovation to deliver products and services to market faster.

Intel 4th Gen Xeon Scalable "Sapphire Rapids" Server Processors Launch in January

Intel just finalized the launch date of its 4th Gen Xeon Scalable "Sapphire Rapids" server processors. The company plans to launch them on January 10, 2023. The new processors will be launched at a special event dedicated to the company's various new Data Center (group) innovations, which cover server processors, new networking innovations, possible launches from Intel's ecosystem partners, and more.

A lot is riding on the success of "Sapphire Rapids," as it sees the introduction of Intel's new high-performance CPU core in the enterprise segment, at core counts of up to 60 cores/120 threads per socket; along with cutting-edge new I/O that includes DDR5 memory, PCI-Express Gen 5, next-gen CXL, and on-package HBM memory on certain variants.

Intel Reports Third-Quarter 2022 Financial Results

Intel Corporation today reported third-quarter 2022 financial results. "Despite the worsening economic conditions, we delivered solid results and made significant progress with our product and process execution during the quarter," said Pat Gelsinger, Intel CEO. "To position ourselves for this business cycle, we are aggressively addressing costs and driving efficiencies across the business to accelerate our IDM 2.0 flywheel for the digital future."

"As we usher in the next phase of IDM 2.0, we are focused on embracing an internal foundry model to allow our manufacturing group and business units to be more agile, make better decisions and establish a leadership cost structure," said David Zinsner, Intel CFO. "We remain committed to the strategy and long-term financial model communicated at our Investor Meeting."

48-Core Russian Baikal-S Processor Die Shots Appear

In December 2021, we covered the appearance of Russia's home-grown Baikal-S processor, which packs 48 cores based on the Arm Cortex-A75. Today, thanks to the famous chip photographer Fritzchens Fritz, we have the first die shots that show us exactly how the Baikal-S SoC is structured internally and what it is made up of. Manufactured on TSMC's 16 nm process, the Baikal-S BE-S1000 design features 48 Arm Cortex-A75 cores running at a 2.0 GHz base and a 2.5 GHz boost frequency. With a TDP of 120 Watts, the design seems efficient, and the Russian company promises performance comparable to Intel "Skylake" Xeons or "Zen 1"-based AMD EPYC processors. It also uses a home-grown RISC-V core for management and for controlling secure-boot sequences.

Below, you can see the die shots taken by Fritzchens Fritz, annotated by Twitter user Locuza, who marked up the entire SoC. Besides the core clusters, we see a slab of cache connecting everything, with six 72-bit DDR4-3200 PHYs and memory controllers surrounding it. This model features a pretty good selection of I/O for a server CPU, as there are five PCIe 4.0 x16 (4x4) interfaces, with three supporting CCIX 1.0. You can check out more pictures below and see the annotations for yourself.

NEC Selects Supermicro GPU Systems for One of Japan's Largest Supercomputers for Advanced AI Research

Supermicro, a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing that NEC Corporation has selected over 116 Supermicro GPU servers, each containing dual-socket 3rd Gen Intel Xeon Scalable processors and eight NVIDIA A100 80 GB GPUs. As a result, the Supermicro GPU server line can include the latest and most powerful Intel Xeon Scalable processors and the most advanced AI GPUs from NVIDIA.

"Supermicro is thrilled to deliver an additional 580 PFLOPS of AI training power to its worldwide AI installations," said Charles Liang, president and CEO of Supermicro. "Supermicro GPU servers have been installed at NEC Corporation and are used to conduct state-of-the-art AI research. Our servers are designed for the most demanding AI workloads using the highest-performing CPUs and GPUs. We continue to work with leading customers worldwide to achieve their business objectives faster and more efficiently with our advanced rack-scale server solutions."

SK Hynix Shows Off Odd-sized 48GB and 96GB DDR5 RDIMMs at InnovatiON

SK Hynix, at the 2022 Intel InnovatiON event, showed off some unconventional server memory capacities. The company presented DDR5 RDIMMs in 48 GB and 96 GB densities, besides the usual 32 GB, 64 GB, and 128 GB ones. These are being offered at data rates of DDR5-5600 and DDR5-6400, which indicates that DDR5-5600 (JEDEC-standard) could be the standard memory speed supported by Xeon Scalable "Sapphire Rapids" processors, with some (or all) models also supporting DDR5-6400. These are not XMP or overclocking SPDs, but JEDEC-standard ones that the processors can automatically train to. The flagship product at SK Hynix's booth would have to be a mammoth 256 GB DDR5-5600 RDIMM, which should enable servers with up to 4 TB of memory per socket (at 2 RDIMMs per channel).
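The 4 TB-per-socket figure follows directly from the DIMM density and channel count. A quick sketch in Python (the eight-channel count for "Sapphire Rapids" is an assumption based on the platform's published memory configuration, not stated in the post above):

```python
# Per-socket memory capacity with 256 GB RDIMMs at 2 DIMMs per channel.
# Assumes 8 DDR5 channels per socket for "Sapphire Rapids" platforms.
dimm_gb = 256
dimms_per_channel = 2
channels = 8

per_socket_gb = dimm_gb * dimms_per_channel * channels
print(per_socket_gb, "GB =", per_socket_gb / 1024, "TB")  # 4096 GB = 4.0 TB
```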

Intel Outs First Xeon Scalable "Sapphire Rapids" Benchmarks, On-package Accelerators Help Catch Up with AMD EPYC

On the second day of its InnovatiON event, Intel turned attention to its next-generation Xeon Scalable "Sapphire Rapids" server processors and demonstrated on-package accelerators. These are fixed-function hardware components that accelerate specific kinds of popular server workloads (i.e., run them faster than a CPU core can). With these, Intel hopes to close the CPU core-count gap with AMD EPYC, as the upcoming "Zen 4" EPYC chips are expected to launch with up to 96 cores per socket in the conventional variant, and up to 128 cores per socket in the cloud-optimized variant.

Intel's on-package accelerators include AMX (Advanced Matrix Extensions), which accelerates recommendation engines, natural language processing (NLP), image recognition, etc.; DLB (Dynamic Load Balancing), which accelerates security gateways and load balancing; DSA (Data Streaming Accelerator), which speeds up the network stack, guest OSes, and migration; IAA (In-memory Analytics Accelerator), which speeds up big-data (Apache Hadoop), IMDB, and warehousing applications; a feature-rich implementation of the AVX-512 instruction set for a plethora of content-creation and scientific applications; and lastly, QAT (QuickAssist Technology), with speed-ups for data compression, OpenSSL, nginx, IPsec, etc. Unlike with "Ice Lake-SP," QAT is now implemented on the processor package instead of the PCH.
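Of those, AMX is the most novel: it adds tile registers and a TMUL unit that performs matrix multiply-accumulate on small low-precision tiles with wider-precision accumulation. A pure-Python model of that pattern (illustrative only, not Intel's implementation; the 2x2 tiles here are arbitrary, whereas real AMX tiles hold up to 16 rows of 64 bytes):

```python
# Illustrative sketch of the AMX-style tile multiply-accumulate pattern:
# int8 input tiles, int32 accumulator tile (as in the TDPBSSD instruction).
def tile_matmul_acc(C, A, B):
    """C += A @ B, computed with plain nested loops."""
    rows, inner, cols = len(A), len(B), len(B[0])
    for i in range(rows):
        for k in range(inner):
            a = A[i][k]
            for j in range(cols):
                C[i][j] += a * B[k][j]  # accumulate in wider precision
    return C

A = [[1, 2], [3, 4]]   # "int8" input tile
B = [[5, 6], [7, 8]]   # "int8" input tile
C = [[0, 0], [0, 0]]   # "int32" accumulator tile
tile_matmul_acc(C, A, B)
print(C)  # [[19, 22], [43, 50]]
```

The hardware's advantage is doing an entire tile of these multiply-accumulates per instruction instead of looping element by element.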

Inventec's Rhyperior Is the Powerhouse GPU Accelerator System Every Business in the AI And ML World Needs

Taiwan-based leading server manufacturer Inventec's powerhouse GPU accelerator system, Rhyperior, is everything a modern-day business needs in the digital era, especially those relying heavily on Artificial Intelligence (AI) and Machine Learning (ML). A unique and optimal combination of GPUs and CPUs, this 4U GPU accelerator system is based on the NVIDIA A100 Tensor Core GPU and 3rd Gen Intel Xeon (Whitley platform). Rhyperior also includes NVIDIA NVSwitch to dramatically enhance performance, making it an effective tool for modern workloads.

In a world where technology is disrupting our lives as we know it, GPU acceleration is critical: essentially speeding up processes that would otherwise take much longer. Acceleration boosts execution for complex computational problems that can be broken down into similar, parallel operations. In other words, an excellent accelerator can be a game changer for industries like gaming and healthcare, increasingly relying on the latest technologies like AI and ML for better, more robust solutions for consumers.

Axiomtek Launches Edge Computer with Dual GPU Expansion for AI Accelerated Processing

Axiomtek, a world-renowned leader relentlessly devoted to the research, development, and manufacturing of innovative, reliable, and highly efficient industrial computer products, is proud to introduce the IPC972, its new industrial edge AI system with dual-GPU support. The highly expandable edge computer supports Intel Xeon or 10th Gen Intel Core i7/i5/i3 processors (code name: Comet Lake S) with the Intel W480E chipset. With the ability to support two NVIDIA GeForce RTX 3090 GPU cards, the IPC972 facilitates image processing, real-time control, data analysis, deep learning, AOI, data acquisition, and other automation tasks.

Axiomtek's IPC972 continues the IPC970 series design, offering flexible expansion options with one I/O module slot and four PCIe slots. In addition, it has one M.2 Key B 3042/3050 slot with a SIM slot for 5G wireless connectivity, one M.2 Key E 2234 slot for Wi-Fi/Bluetooth modules, and one full-size PCIe Mini Card slot with a SIM slot for Wi-Fi/Bluetooth/LTE modules. With its compact and front-facing I/O design, the IPC972 provides the advantages of fast set-up and easy access and deployment. For stable operation in mission-critical environments, the IPC972 has a wide operating temperature range of -10°C to +60°C and a 24 V DC power input (uMin=19 V/uMax=30 V) with a power-on delay function, over-voltage protection, overcurrent protection, and reverse-voltage protection.

Intel Expects to Lose More Market Share, to Reconsider Exiting Other Businesses

During the Evercore ISI TMT conference, Intel said that it expects to continue losing market share, with a possible bounce-back in the coming years. According to the latest report, Intel CEO Pat Gelsinger expects the company to keep ceding share to AMD, as the competition has "too much momentum" going for it. AMD's Ryzen and EPYC processors continue to deliver strong performance and efficiency figures, which drives customers toward the company. On the other hand, Intel expects to have a competitive product, especially in the data center business, with Sapphire Rapids Xeon processors set to arrive in 2023. Pat Gelsinger noted: "Competition just has too much momentum, and we haven't executed well enough. So we expect that bottoming. The business will be growing, but we do expect that there continues to be some share losses. We're not keeping up with the overall TAM growth until we get later into '25 and '26 when we start regaining share, material share gains."

The only down years expected to show the toll of solid competition are 2022 and 2023; for the bounce-back, Intel targets 2025 and 2026. "Now, obviously, in 2024, we think we're competitive. 2025, we think we're back to unquestioned leadership with our transistors and process technology," noted CEO Gelsinger. Additionally, he commented on emerging Arm CPUs competing for the same server market share as Intel and AMD, stating: "Well, when we deliver the Forest product line, we deliver power performance leadership versus all Arm alternatives, as well. So now you go to a cloud service provider, and you say, 'Well, why would I go through that butt ugly, heavy software lift to an ARM architecture versus continuing on the x86 family?'"

DFI Unveils ATX Motherboard ICX610-C621A

DFI, a global leading provider of high-performance computing technology across multiple embedded industries, unveils a server-grade ATX motherboard designed for the Intel Ice Lake platform, powered by 3rd Generation Intel Xeon Scalable processors and supporting ultra-high-speed computing at up to 205 W. The ICX610-C621A also comes with built-in Intel Speed Select Technology (Intel SST), which provides excellent load balancing between CPUs and multiple accelerator cards to effectively distribute CPU resources, stabilize computation loads and maximize computing power. As a result, it improves performance by 1.46 times compared to the previous generation.

Featuring powerful performance, the board offers three PCIe x16 slots, two PCIe x8 slots and one M.2 slot, enabling ultra-performance computing, AI workloads and deep learning, specifically for high-end inspection equipment such as AOI, CT, and MRI applications. The ICX610 also supports up to 512 GB of 3200 MHz ECC RDIMM, enhancing performance for advanced inspection equipment and improving efficiency.

NVIDIA Grace CPU Specs Remind Us Why Intel Never Shared x86 with the Green Team

NVIDIA designed the Grace CPU, a processor in the classical sense, to replace the Intel Xeon or AMD EPYC processors it was having to cram into its pre-built HPC compute servers for serial-processing roles, mainly because the half-a-dozen GPU HPC processors in each server need to be interconnected by a CPU. The company studied the CPU-level limitations and bottlenecks, not just with I/O but also with the machine architecture, and realized its compute servers need a CPU purpose-built for the role, with an architecture heavily optimized for NVIDIA's APIs. Thus, the NVIDIA Grace CPU was born.

This is NVIDIA's first outing with a CPU whose processing footprint rivals server processors from Intel and AMD. Built on the TSMC N4 (4 nm EUV) silicon fabrication process, it is a monolithic chip that's deployed standalone or with an H100 HPC processor on a single board that NVIDIA calls a "Superchip." A board with a Grace and an H100 makes up a "Grace Hopper" Superchip, while a board with two Grace CPUs makes a Grace CPU Superchip. Each Grace CPU contains a 900 GB/s coherent switching fabric with seven times the bandwidth of a PCI-Express 5.0 x16 link. This is key to connecting the companion H100 processor, or neighboring Superchips on the node, with coherent memory access.
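The "seven times PCIe 5.0 x16" claim checks out against nominal link rates. A quick sketch, using raw signaling rates and counting both directions, while ignoring encoding and protocol overhead:

```python
# PCIe 5.0 signals at 32 GT/s per lane; nominally 4 GB/s per lane per direction
# (encoding overhead ignored for this back-of-the-envelope check).
lane_gbs = 32 / 8                # GB/s per lane, per direction
x16_bidir = lane_gbs * 16 * 2    # x16 link, both directions

print(x16_bidir, "GB/s")         # 128.0 GB/s for a PCIe 5.0 x16 link
print(7 * x16_bidir, "GB/s")     # 896.0 GB/s, roughly the 900 GB/s fabric
```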

EK Introduces Fluid Works Compute Series X7000-RM GPU Server

EK Fluid Works, a high-performance workstation manufacturer, is expanding its Compute Series with a rackmount liquid-cooled GPU server, the X7000-RM. The EK Fluid Works Compute Series X7000-RM is tailor-made for high-compute density applications such as machine learning, artificial intelligence, rendering farms, and scientific compute simulations.

What separates the X7000-RM from similar GPU server solutions is EK's renowned liquid cooling and high compute density. It offers 175% more GPU computational power than air-cooled servers of similar size while maintaining 100% of its performance output no matter the intensity or duration of the task. The standard X7000-RM 5U chassis can be equipped with an AMD EPYC Milan-X 64 Core CPU, up to 2 TB of DDR4 RAM, and up to seven NVIDIA A100 80 GB GPUs for the ultimate heavy-duty GPU computational power. Intel Xeon Scalable single and dual socket solutions are also possible, but such configurations are limited to a maximum of five GPUs.

Tachyum Submits Bid for 20-Exaflop Supercomputer to U.S. Department of Energy Advanced Computing Ecosystems

Tachyum today announced that it has responded to a U.S. Department of Energy Request for Information soliciting Advanced Computing Ecosystems for DOE national laboratories engaged in scientific and national security research. Tachyum has submitted a proposal to create a 20-exaflop supercomputer based on Tachyum's Prodigy, the world's first universal processor.

The DOE's request calls for computing systems that are five to 10 times faster than those currently available and/or that can perform more complex applications in "data science, artificial intelligence, edge deployments at facilities, and science ecosystem problems, in addition to the traditional modeling and simulation applications."