News Posts matching #EPYC


Linux Performance of AMD Rome vs Intel Cascade Lake, 1 Year On

Michael Larabel over at Phoronix has posted an extremely comprehensive analysis of the performance differential between AMD's Rome-based EPYC and Intel's Cascade Lake Xeons, one year after release. The battery of more than 116 benchmark results pits a dual-socket Xeon Platinum 8280 system against a dual-socket EPYC 7742 one. Both systems were benchmarked under Ubuntu 19.04, chosen as the "one year ago" baseline, and again under a current Linux software stack (Ubuntu 20.10 daily + GCC 10 + Linux 5.8).

The benchmark conclusions are interesting. For one, Intel gained more ground than AMD over the course of the year: the Xeon platform picked up 6% performance across releases, while AMD's EPYC gained just 4% over the same period. Even so, AMD's system remains an average of 14% faster than the Intel platform across all tests, which speaks to AMD's silicon superiority. Check some benchmark results below, but follow the source link for the full rundown.
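The percentage relationship reported above can be sanity-checked with some quick arithmetic; a minimal sketch (using the article's rounded 6%/4%/14% figures, not Phoronix's exact geometric means):

```python
# Illustrative arithmetic only: if both platforms gain performance over a
# year, the relative gap shifts by the ratio of their gains.
xeon_gain = 1.06   # Intel Cascade Lake: +6% from the newer software stack
epyc_gain = 1.04   # AMD Rome: +4% from the newer software stack

def remaining_lead(initial_lead: float) -> float:
    """EPYC's lead over Xeon after both platforms' software gains."""
    return (1 + initial_lead) * epyc_gain / xeon_gain - 1

# Working backwards from the article's "still 14% faster" figure,
# the year-ago lead would have been about 16%.
initial = (1 + 0.14) * xeon_gain / epyc_gain - 1
print(f"implied year-ago lead: {initial:.1%}")
print(f"lead after one year:   {remaining_lead(initial):.1%}")
```

In other words, Intel's larger software-side gain narrowed the gap by only about two percentage points over the year.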

Advanced Security Features of AMD EPYC Processors Enable New Google Cloud Confidential Computing Portfolio

AMD and Google Cloud today announced the beta availability of Confidential Virtual Machines (VMs) for Google Compute Engine powered by 2nd Gen AMD EPYC processors, taking advantage of the processors' advanced security features. Confidential VMs, the first product in the Google Cloud Confidential Computing portfolio, enable customers for the first time to encrypt data in use - while it is being processed - and not just at rest and in transit. Based on the N2D family of VMs for Google Compute Engine, Confidential VMs provide customers high-performance processing for the most demanding computational tasks while enabling encryption of even the most sensitive data in the cloud as it is being processed.

"At Google Cloud, we believe the future of cloud computing will increasingly shift to private, encrypted services where users can be confident that the confidentiality of their data is always under their control. To help customers in making that transition, we've created Confidential VMs, the first product in our Google Cloud Confidential Computing portfolio," said Vint Cerf, vice president and chief internet evangelist, Google. "By using advanced security technology in the AMD EPYC processors, we've created a breakthrough technology that allows customers to encrypt their data in the cloud while it's being processed and unlock computing scenarios that had previously not been possible."

AMD 64-core EPYC "Milan" Based on "Zen 3" Could Ship with 3.00 GHz Clocks

AMD's 3rd generation EPYC line of enterprise processors, which leverages the "Zen 3" microarchitecture, could innovate in two directions: increasing performance by doing away with the CCX (compute complex) multi-core topology, and taking advantage of a newer, refined 7 nm-class node to increase clock speeds. Igor's Lab decoded as many as three OPNs of the upcoming 3rd gen EPYC series, including a 64-core/128-thread part that ships with a base frequency of 3.00 GHz. The top 2nd gen EPYC 64-core part, the 7662, ships with a 2.00 GHz base frequency, 3.30 GHz boost, and 225 W TDP. AMD is expected to unveil its "Zen 3" microarchitecture within 2020.

AMD Ryzen Threadripper PRO 3995WX Processor Pictured: 8-channel DDR4

Here is the first picture of the Ryzen Threadripper PRO 3995WX processor, designed to be part of AMD's HEDT/workstation processor launch for this year. The picture surfaced briefly on the ChipHell forums, before being picked up by HXL (@9550pro). This processor is designed to compete with Intel's Xeon W series processors, such as the W-3175X, and hence sits a segment above even the "normal" Threadripper series led by the 64-core/128-thread Threadripper 3990X. Besides certain features exclusive to Ryzen PRO series processors, the killer feature of the 3995WX is a menacing 8-channel DDR4 memory interface that can handle up to 2 TB of ECC memory.

The Threadripper PRO 3995WX is expected to have mostly identical I/O to the top EPYC 7662 processor. As a Ryzen-branded chip, it could feature higher clock speeds than its EPYC counterpart. To enable its 8-channel memory, the processor could come with a new socket, likely sWRX8, and the AMD WRX80 chipset, although it wouldn't surprise us if these processors had some form of inter-compatibility with sTRX4 and TRX40 (at reduced memory bandwidth and PCIe capability, of course). Sources tell VideoCardz that AMD could announce the Ryzen Threadripper PRO series as early as July 14, 2020.

As CERN Plans LHC Expansion, AMD Powers Latest Science Feats

AMD has entered a strategic partnership with the European Organization for Nuclear Research (CERN) in which the company seems poised to see its EPYC processors powering the latest and greatest when it comes to man-made incursions into the secrets of the universe. AMD's 2nd Gen EPYC 7742 processors are already being deployed in CERN's current Large Hadron Collider (LHC), the world's most powerful particle accelerator. The LHC has already given us discoveries as important as the Higgs boson - a fundamental particle whose discovery gave profound insight into the workings of the universe according to the Standard Model, and garnered the 2013 Nobel Prize in Physics.

The current LHC is a 17-mile-long (27 km) underground ring of superconducting magnets housed in a pipe-like structure, or cryostat, which is cooled to temperatures just above absolute zero. Particle collisions in the LHC collectively generate some 40 TB of data per second, which has to be stored, analyzed, and stripped of its irrelevant components to yield usable data (all in the name of science). Even as AMD's 2nd Gen EPYC lineup is already being used for this purpose in the current LHC, CERN has recently announced plans to back a €20bn investment in a next-generation collider. The Future Circular Collider (FCC), as it is tentatively called, will be roughly four times the size (over 100 km long) and six times more powerful than the LHC. And you can rest assured that all that data will still need to be processed, at a rate likely to increase in proportion to the power of the FCC. Whether AMD will be the chosen partner for the hardware needed for this task remains unclear, but the fact that AMD's products are already being used in the current LHC could spell a very relevant outcome for AMD's financials in the future. Not to mention the earned bragging rights on account of their hardware being used for science's most extraordinary feats.
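The need for aggressive filtering follows directly from the quoted raw rate; a quick back-of-the-envelope estimate (assuming the ~40 TB/s figure above were sustained around the clock):

```python
# Why the filtering step is unavoidable: at ~40 TB/s of raw collision data,
# storing everything would be infeasible even for a state-level budget.
RAW_RATE_TB_S = 40
SECONDS_PER_DAY = 86_400

raw_tb_per_day = RAW_RATE_TB_S * SECONDS_PER_DAY
print(f"raw data per day: {raw_tb_per_day:,} TB "
      f"(~{raw_tb_per_day / 1e6:.1f} exabytes)")
```

That is on the order of exabytes per day, which is why the compute deployed at CERN is spent largely on discarding uninteresting events in real time rather than archiving them.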

AMD EPYC Scores New Supercomputing and High-Performance Cloud Computing System Wins

AMD today announced multiple new high-performance computing wins for AMD EPYC processors, including the seventh-fastest supercomputer in the world and four of the 50 highest-performance systems on the bi-annual TOP500 list, all now powered by AMD. Momentum for AMD EPYC processors in advanced science and health research continues to grow, with new installations at Indiana University, Purdue University and CERN as well as high-performance computing (HPC) cloud instances from Amazon Web Services, Google, and Oracle Cloud.

"The leading HPC institutions are increasingly leveraging the power of 2nd Gen AMD EPYC processors to enable cutting-edge research that addresses the world's greatest challenges," said Forrest Norrod, senior vice president and general manager, data center and embedded systems group, AMD. "Our AMD EPYC CPUs, Radeon Instinct accelerators and open software programming environment are helping to advance the industry towards exascale-class computing, and we are proud to strengthen the global HPC ecosystem through our support of the top supercomputing clusters and cloud computing environments."

GIGABYTE Introduces a Broad Portfolio of G-series Servers Powered by NVIDIA A100 PCIe

GIGABYTE, an industry leader in high-performance servers and workstations, announced the validation plan for its G-series servers. Following the NVIDIA A100 PCIe GPU announcement today, GIGABYTE has completed compatibility validation of the G481-HA0 / G292-Z40 and added the NVIDIA A100 to the support list for these two servers. The remaining G-series servers will complete their respective compatibility tests in two waves soon. At the same time, GIGABYTE also launched a new G492 series server based on the AMD EPYC 7002 processor family, which provides PCIe Gen4 support for up to 10 NVIDIA A100 PCIe GPUs. The G492 is the server with the highest computing power for AI model training on the market today. GIGABYTE will offer two SKUs for the G492: the G492-Z50 will be at a more approachable price point, whereas the G492-Z51 will be geared towards higher performance.

The G492 is GIGABYTE's second-generation 4U G-series server. Based on the first-generation G481 (Intel architecture) / G482 (AMD architecture) servers, its user-friendly design and scalability have been further optimized. In addition to supporting two 280 W 2nd Gen AMD EPYC 7002 processors, its 32 DDR4 memory slots support up to 8 TB of memory at 3200 MHz. The G492 has built-in PCIe Gen4 switches, which provide more PCIe Gen4 lanes. PCIe Gen4 has twice the I/O performance of PCIe Gen3 and fully enables the computing power of the NVIDIA A100 Tensor Core GPU; it can also be applied to PCIe storage, providing a native storage upgrade path for the G492.

TYAN Brings the Latest Server Advancements to its 2020 Server Solutions Online Exhibition

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, is showcasing its latest lineup of HPC, storage, cloud and embedded platforms powered by 2nd Gen AMD EPYC 7002 series processors and 2nd Gen Intel Xeon Scalable Processors at TYAN server solutions online exhibition.

"With over 30 years of experience offering state-of-the-art server platforms and server motherboards, TYAN has been recognized by large-scale data center customers and server channels," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. "Combining the latest innovations from our partners like Intel and AMD, TYAN's server building block offerings enable our customers to capture market opportunities precisely."

ASUS Announces ESC4000A-E10 GPGPU Server with NVIDIA A100 Tensor Core GPUs

ASUSTeK, a leading IT company in server systems, server motherboards and workstations, today announced the new NVIDIA A100-powered ESC4000A-E10 server, built to accelerate and optimize data centers for high utilization and low total cost of ownership with PCIe 4.0 expansion, OCP 3.0 networking, faster compute, and better GPU performance. ASUS continues building a strong partnership with NVIDIA to deliver unprecedented acceleration and flexibility to power the world's highest-performing elastic data centers for AI, data analytics, and HPC applications.

The ASUS ESC4000A-E10 is a 2U server powered by AMD EPYC 7002 series processors, which deliver up to 2x the performance and 4x the floating-point capability in a single socket versus the previous 7001 generation. Targeted at AI, HPC and VDI applications in data center or enterprise environments that require powerful CPU cores, support for more GPUs, and faster transmission speeds, the ESC4000A-E10 focuses on delivering GPU-optimized performance with support for up to four double-slot high-performance GPUs or eight single-slot GPUs, including the latest NVIDIA Ampere-architecture A100 as well as Tesla and Quadro cards. This also benefits virtualization, consolidating GPU resources into a shared pool that users can draw on more efficiently.

AMD EPYC Processors Ecosystem Continues to Grow with Integration into New NVIDIA DGX A100

AMD today announced that the NVIDIA DGX A100, the third generation of the world's most advanced AI system, is the latest high-performance computing system featuring 2nd Gen AMD EPYC processors. Delivering 5 petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.

The NVIDIA DGX A100 leverages the high-performance capabilities - 128 cores, DDR4-3200 memory, and PCIe 4.0 support - of two AMD EPYC 7742 processors running at speeds of up to 3.4 GHz. The 2nd Gen AMD EPYC processor is the first and only current x86-architecture server processor that supports PCIe 4.0, providing leadership high-bandwidth I/O that's critical for high-performance computing and for connections between the CPU and other devices like GPUs.

2nd Gen AMD EPYC Processors Now Delivering More Computing Power to Amazon Web Services Customers

AMD today announced that 2nd Gen AMD EPYC processor-powered Amazon Elastic Compute Cloud (EC2) C5a instances are now generally available in the AWS U.S. East, AWS U.S. West, AWS Europe and AWS Asia Pacific regions.

Powered by 2nd Gen AMD EPYC processors running at frequencies of up to 3.3 GHz, the Amazon EC2 C5a instances are the sixth AWS instance family powered by AMD EPYC processors. By using the 2nd Gen AMD EPYC processor, the C5a instance delivers leadership x86 price-performance for a broad set of compute-intensive workloads including batch processing, distributed analytics, data transformations, log analytics and web applications.

AMD CEO Lisa Su Tops Earnings as Highest Paid CEO in The S&P 500

Lisa Su of Advanced Micro Devices has become the highest-paid CEO in the S&P 500, according to a recent survey on CEO compensation from The Associated Press. Lisa Su's pay package was valued at $58.5 million after some extremely impressive company performance over her last five years as CEO, on the back of the wild success of EPYC, Ryzen, and Radeon. The pay package comprised a base salary of $1 million, a performance bonus of $1.2 million, and $56 million in stock awards. This makes Lisa Su the first woman to become the highest-paid CEO, and one of only 20 women on the list, versus 309 men.

AMD COVID-19 HPC Fund Donates 7 Petaflops of Compute Power to Researchers

AMD and technology partner Penguin Computing Inc., a division of SMART Global Holdings, Inc., today announced that New York University (NYU), Massachusetts Institute of Technology (MIT) and Rice University are the first universities named to receive complete AMD-powered, high-performance computing systems from the AMD HPC Fund for COVID-19 research. AMD also announced it will contribute a cloud-based system powered by AMD EPYC processors and AMD Radeon Instinct accelerators, located on-site at Penguin Computing, providing remote supercomputing capabilities for selected researchers around the world. Combined, the donated systems will provide researchers with more than seven petaflops of compute power that can be applied to fight COVID-19.

"High performance computing technology plays a critical role in modern viral research, deepening our understanding of how specific viruses work and ultimately accelerating the development of potential therapeutics and vaccines," said Lisa Su, president and CEO, AMD. "AMD and our technology partners are proud to provide researchers around the world with these new systems that will increase the computing capability available to fight COVID-19 and support future medical research."

Distant Blips on the AMD Roadmap Surface: Rembrandt and Raphael

Several future AMD processor codenames across various computing segments surfaced courtesy of an Expreview leak that largely aligns with information from Komachi Ensaka. It does not account for "Matisse Refresh," allegedly coming out in June-July as three gaming-focused Ryzen socket AM4 desktop processors; but the roadmap from 2H-2020 through 2022 sees many codenames surface. To begin with, the second half of 2020 promises to be as action-packed as last year's 7/7 mega-launch. In the graphics business, the company is expected to debut its DirectX 12 Ultimate-compliant RDNA2 client graphics and its first CDNA-architecture compute accelerators. Much of the processor launch cycle is built around the new "Zen 3" microarchitecture.

The server platform debuting in the second half of 2020 is codenamed "Genesis SP3." This will be the final processor architecture for the SP3-class enterprise sockets, as it has DDR4 and PCI-Express gen 4.0 I/O. The EPYC server processor is codenamed "Milan," and combines "Zen 3" chiplets along with an sIOD. EPYC Embedded (FP6 package) processors are codenamed "Grey Hawk."

GIGABYTE Announces HPC Systems Powered by NVIDIA A100 Tensor Core GPUs

GIGABYTE, a supplier of high-performance computing (HPC) systems, today disclosed four NVIDIA HGX A100 platforms under development, all of which will be available with NVIDIA A100 Tensor Core GPUs. The NVIDIA A100 is the first elastic, multi-instance GPU, unifying training, inference, HPC, and analytics. The four products include G262 series servers, which hold four NVIDIA A100 GPUs, and G492 series servers, which provide eight. Each series also comes in two models, supporting 3rd generation Intel Xeon Scalable processors and 2nd generation AMD EPYC processors, respectively. The NVIDIA HGX A100 platform is a key element of the NVIDIA accelerated data center concept, bringing huge parallel computing power to customers and thereby helping them accelerate their digital transformation.

With GPU acceleration becoming the mainstream technology in today's data centers, scientists, researchers and engineers are committed to using GPU-accelerated HPC and artificial intelligence (AI) to meet the most important challenges of the modern world. The NVIDIA accelerated data center concept, including GIGABYTE high-performance servers with NVIDIA NVSwitch, NVIDIA NVLink, and NVIDIA A100 GPUs, will provide the GPU computing power required at different computing scales. The NVIDIA accelerated data center also features NVIDIA Mellanox HDR InfiniBand high-speed networking and NVIDIA Magnum IO software, which supports GPUDirect RDMA and GPUDirect Storage.

AMD 2nd Gen EPYC Processors Set to Power Oracle Cloud Infrastructure Compute E3 Platform

Today, AMD announced that 2nd Gen AMD EPYC processors are powering the Oracle Cloud Infrastructure Compute E3 platform, bringing a new level of high-performance computing to Oracle Cloud. Using the AMD EPYC 7742 processor, the Oracle Cloud "E3 standard" and bare metal compute instances are available today and leverage key features of the 2nd Gen AMD EPYC processors, including class-leading memory bandwidth and the highest core count for an x86 data center processor. These features make the Oracle Cloud E3 platform well suited to both general-purpose and high-bandwidth workloads such as big data analytics, memory-intensive workloads and Oracle business applications.

AMD Reports First Quarter 2020 Financial Results

AMD today announced revenue for the first quarter of 2020 of $1.79 billion, operating income of $177 million, net income of $162 million and diluted earnings per share of $0.14. On a non-GAAP* basis, operating income was $236 million, net income was $222 million and diluted earnings per share was $0.18.

"We executed well in the first quarter, navigating the challenging environment to deliver 40 percent year-over-year revenue growth and significant gross margin expansion driven by our Ryzen and EPYC processors," said Dr. Lisa Su, AMD president and CEO. "While we expect some uncertainty in the near-term demand environment, our financial foundation is solid and our strong product portfolio positions us well across a diverse set of resilient end markets. We remain focused on strong business execution while ensuring the safety of our employees and supporting our customers, partners and communities. Our strategy and long-term growth plans are unchanged."

AMD "Matisse" and "Rome" IO Controller Dies Mapped Out

Here are the first detailed die maps of the I/O controller dies of AMD's "Matisse" and "Rome" multi-chip modules, which make up the company's 3rd generation Ryzen and 2nd generation EPYC processor families, respectively, by PC enthusiast and VLSI engineer "Nemez" aka @GPUsAreMagic on Twitter, with underlying die shots by Fritzchens Fritz. The die maps of the "Matisse" cIOD in particular give us fascinating insights into how AMD designed the die to serve both as a cIOD and as an external FCH (the AMD X570 and TRX40 chipsets). At the heart of both chips' design is a set of highly configurable SerDes (serializer/deserializer) blocks that can work as PCIe, SATA, USB 3, or other high-bandwidth serial interfaces through a network of fabric switches and PHYs. This is how motherboard designers are able to configure the chipsets for the I/O they want for their specific board designs.

The "Matisse" cIOD has two x16 SerDes controllers and an I/O root hub, along with two configurable x16 SerDes PHYs, while the "Rome" sIOD has four times as many SerDes controllers, along with eight times as many PHYs. The "Castle Peak" cIOD (3rd gen Ryzen Threadripper) disables half the SerDes resources on the "Rome" sIOD, along with half as many memory controllers and PHYs, limiting it to 4-channel DDR4. The "Matisse" cIOD features two IFOP (Infinity Fabric over Package) links, wiring out to the two "Zen 2" CCDs (chiplets) on the MCM, while the "Rome" sIOD features eight such IFOP interfaces for up to eight CCDs, along with IFIS (Infinity Fabric Inter-Socket) links for 2P motherboards. Infinity Fabric internally connects all components on both IOD dies. Both dies are built on the 12 nm FinFET (12LP) silicon fabrication node at GlobalFoundries.
Pictured: "Matisse" cIOD and "Rome" sIOD die maps.

TYAN Updates Transport HX Barebones with New AMD EPYC 7002 Series Processors

TYAN, an industry-leading server platform design manufacturer and MiTAC Computing Technology Corporation subsidiary, today announced support for high frequency AMD EPYC 7F32, AMD EPYC 7F52 and AMD EPYC 7F72 processor-based server motherboards and server systems to the market. TYAN's HPC and storage server platforms continue to offer exceptional performance to datacenter customers.

"Leveraging AMD's innovation in 7 nm process technology, PCIe 4.0 I/O, and an embedded security architecture, TYAN's 2nd Gen AMD EPYC processor-based platforms are designed to address the most demanding challenges facing the datacenter," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. "Adding the new AMD EPYC 7002 Series processors to TYAN server platforms enables us to provide new capabilities to our customers and partners."

x86 Lacks Innovation, Arm is Catching up. Enough to Replace the Giant?

Intel's x86 processor architecture has been the dominant CPU instruction set for many decades, ever since IBM decided to put the Intel 8086 microprocessor into its first Personal Computer. Later, in 2006, Apple decided to replace the PowerPC-based processors in its Macintosh computers with Intel chips, too. This was the time when x86 became the only option for the masses to use and develop all their software on. While mobile phones and embedded devices mostly run Arm today, x86 clearly remains the dominant ISA (Instruction Set Architecture) for desktop computers, with both Intel and AMD producing processors for it. Those processors go inside millions of PCs that are used every day. Today I would like to share my thoughts on the demise of the x86 platform and how it might vanish in favor of the RISC-based Arm architecture.

Both AMD and Intel as producers, and millions of companies as consumers, have invested heavily in the x86 architecture - so why would x86 ever go extinct if "it just works"? The answer is that it doesn't just work.

AMD Financial Analyst Day 2020 Live Blog

AMD Financial Analyst Day presents an opportunity for AMD to talk straight with the finance industry about the company's current financial health, and a taste of what's to come. Guidance and product teasers made during this time are usually very accurate due to the nature of the audience. In this live blog, we will post information from the Financial Analyst Day 2020 as it unfolds.
20:59 UTC: The event has started as of 1 PM PST. CEO Dr. Lisa Su takes the stage.

AMD Scores Another EPYC Win in Exascale Computing With DOE's "El Capitan" Two-Exaflop Supercomputer

AMD has been on a roll in consumer, professional, and exascale computing environments alike, and it has just snagged itself another hugely important contract. The US Department of Energy (DOE) has just announced the winners for its next-gen exascale supercomputer, which aims to be the world's fastest. Dubbed "El Capitan," the new supercomputer will be powered by AMD's next-gen EPYC "Genoa" processors ("Zen 4" architecture) and Radeon GPUs. This is the first exascale contract in which AMD is the sole purveyor of both CPUs and GPUs; AMD's other EPYC design win, in the Cray Shasta, pairs its CPUs with NVIDIA graphics cards.

El Capitan represents a $600 million investment, to be deployed in late 2022 and operational in 2023. Undoubtedly, next-gen proposals from AMD, Intel and NVIDIA were all presented, with AMD winning the shootout in a big way. While the DOE initially projected El Capitan to provide some 1.5 exaflops of computing power, it has now revised its performance goal to a full 2-exaflop machine. El Capitan will thus be ten times faster than the current leader of the supercomputing world, Summit.

Cloudflare Deploys AMD EPYC Processors Across its Latest Gen X Servers

The ubiquitous DDoS-mitigation and CDN provider Cloudflare announced that its latest Gen X servers implement AMD EPYC processors, ditching the Intel Xeons of its older Gen 9 servers. Cloudflare uses multi-functional servers (much like Google), in which each server is capable of handling any of the company's workloads (DDoS mitigation, content delivery, DNS, web security, etc.). The company minimizes the number of server hardware configurations so they are easier to maintain and lower TCO. The hardware specs of its servers are periodically updated and classified by "generations."

Cloudflare's Gen X server is configured with a single-socket 2nd gen AMD EPYC 7642 processor (48-core/96-thread, 256 MB L3 cache), and 256 GB of octa-channel DDR4-2933 memory, along with NVMe flash-based primary storage. "We selected the AMD EPYC 7642 processor in a single-socket configuration for Gen X. This CPU has 48-cores (96 threads), a base clock speed of 2.4 GHz, and an L3 cache of 256 MB. While the rated power (225 W) may seem high, it is lower than the combined TDP in our Gen 9 servers and we preferred the performance of this CPU over lower power variants. Despite AMD offering a higher core count option with 64-cores, the performance gains for our software stack and usage weren't compelling enough," Cloudflare writes in its blog post announcing Gen X. The new servers will go online in the coming weeks.
Many Thanks to biffzinker for the tip.

KIOXIA First to Deliver Enterprise and Data Center PCIe 4.0 U.3 SSDs

The PCI Express 4.0 specification was designed to double the performance of server and storage systems, pushing speeds up to 16.0 gigatransfers per second (GT/s), or roughly 2 gigabytes per second (GB/s) of throughput per lane, and driving new performance levels for cloud and enterprise applications. Today, KIOXIA America, Inc. (formerly Toshiba Memory America, Inc.) announced that its lineup of CM6 and CD6 Series PCIe 4.0 NVM Express (NVMe) enterprise and data center solid state drives (SSDs) is now shipping to customers.
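The per-lane figure quoted above can be derived from the signaling rate and line encoding; a minimal sketch (assuming the standard 128b/130b encoding used by PCIe 3.0 and 4.0):

```python
# Per-lane PCIe throughput: transfer rate x encoding efficiency / 8 bits.
# PCIe 3.0/4.0 use 128b/130b encoding, so 128 payload bits ride on every
# 130 transferred bits.
def pcie_lane_gbps(gt_per_s: float, payload_bits: int = 128,
                   line_bits: int = 130) -> float:
    """Usable per-lane throughput in gigabytes per second."""
    return gt_per_s * payload_bits / line_bits / 8

gen3 = pcie_lane_gbps(8.0)    # PCIe 3.0: 8 GT/s per lane
gen4 = pcie_lane_gbps(16.0)   # PCIe 4.0: 16 GT/s per lane
print(f"PCIe 3.0: {gen3:.3f} GB/s/lane, PCIe 4.0: {gen4:.3f} GB/s/lane")
print(f"x4 NVMe SSD ceiling on PCIe 4.0: {4 * gen4:.1f} GB/s")
```

Since both generations share the same encoding, doubling the transfer rate exactly doubles usable bandwidth, which is where the "twice the performance" claim comes from.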

An established leader in developing PCIe and NVMe SSDs, KIOXIA delivers never-before-seen performance. KIOXIA was the first company to publicly demonstrate PCIe 4.0 SSDs and is now the first to ship these next-generation drives. The CM6 and CD6 Series SSDs are compliant with the latest NVMe specification and include key features such as in-band NVMe-MI, persistent event log, namespace granularity, and shared stream writes. Additionally, both drives are SFF-TA-1001 (also known as U.3) conformant, which allows them to be used in tri-mode backplanes that accept SAS, SATA or NVMe SSDs.

AMD Gets Design Win in Cray Shasta Supercomputer for US Navy DSRC With 290,304 EPYC Cores

AMD has scored yet another design win for its high-performance EPYC processors, this time in a Cray Shasta supercomputer. The Cray Shasta will be deployed in the US Navy's Department of Defense Supercomputing Resource Center (DSRC) as part of the High Performance Computing Modernization Program. The supercomputer, with a peak theoretical computing capability of 12.8 petaFLOPS (12.8 quadrillion floating-point operations per second), will be built with 290,304 AMD EPYC (Rome) processor cores and 112 NVIDIA Volta V100 General-Purpose Graphics Processing Units (GPGPUs). The system will also feature 590 total terabytes (TB) of memory and 14 petabytes (PB) of usable storage, including 1 PB of NVMe-based solid state storage. Cray's Slingshot network will make sure all those components talk to each other at a rate of 200 gigabits per second.
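The 12.8 petaFLOPS figure is roughly consistent with the stated core and GPU counts; a hedged back-of-the-envelope check (the per-core FLOP rate, sustained clock, and per-V100 FP64 throughput below are assumptions for illustration, not disclosed system parameters):

```python
# Rough peak-FLOPS estimate for the Navy DSRC Cray Shasta system.
EPYC_CORES = 290_304          # from the announcement
FLOPS_PER_CYCLE = 16          # assumed: two 256-bit FMA pipes per "Zen 2" core
ASSUMED_CLOCK_HZ = 2.5e9      # assumed sustained clock; SKU not disclosed
V100_COUNT = 112              # from the announcement
V100_FP64_FLOPS = 7.0e12      # approximate FP64 peak per V100

cpu_flops = EPYC_CORES * FLOPS_PER_CYCLE * ASSUMED_CLOCK_HZ
gpu_flops = V100_COUNT * V100_FP64_FLOPS
total_pflops = (cpu_flops + gpu_flops) / 1e15
print(f"estimated peak: {total_pflops:.1f} PFLOPS")
```

Under these assumptions the estimate lands within a few percent of the quoted 12.8 petaFLOPS, with the vast majority of the peak coming from the EPYC cores rather than the 112 GPUs.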

Navy DSRC supercomputers support climate, weather, and ocean modeling by the Naval Meteorology and Oceanography Command (NMOC), which assists U.S. Navy meteorologists and oceanographers in predicting environmental conditions that may affect the Navy fleet. Among other scientific endeavors, the new supercomputer will be used to enhance weather forecasting models, ultimately improving the accuracy of hurricane intensity and track forecasts. The system is expected to be online by early fiscal year 2021.