News Posts matching #EPYC

EK Introduces Fluid Works Compute Series X7000-RM GPU Server

EK Fluid Works, a high-performance workstation manufacturer, is expanding its Compute Series with a rackmount liquid-cooled GPU server, the X7000-RM. The EK Fluid Works Compute Series X7000-RM is tailor-made for high-compute density applications such as machine learning, artificial intelligence, rendering farms, and scientific compute simulations.

What separates the X7000-RM from similar GPU server solutions is EK's renowned liquid cooling and high compute density. It offers 175% more GPU computational power than air-cooled servers of similar size, while maintaining 100% of its performance output no matter the intensity or duration of the task. The standard X7000-RM 5U chassis can be equipped with an AMD EPYC Milan-X 64-core CPU, up to 2 TB of DDR4 RAM, and up to seven NVIDIA A100 80 GB GPUs for the ultimate heavy-duty GPU computational power. Intel Xeon Scalable single- and dual-socket solutions are also possible, but such configurations are limited to a maximum of five GPUs.
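
To put a seven-GPU configuration into perspective, here is a rough aggregate-throughput estimate. The per-GPU peak rates below are NVIDIA's published A100 figures and are an assumption on our part; they are not part of EK's announcement, and real-world throughput depends on the workload.

```python
# Rough aggregate compute of a fully populated X7000-RM (seven A100 80 GB GPUs).
# The per-GPU peak rates are NVIDIA's published figures, used here as assumptions;
# EK's announcement does not quote FLOPS numbers.
GPUS = 7
FP64_TFLOPS = 9.7          # A100 peak FP64
FP64_TENSOR_TFLOPS = 19.5  # A100 peak FP64 via Tensor Cores

print(f"Aggregate FP64:             ~{GPUS * FP64_TFLOPS:.1f} TFLOPS")
print(f"Aggregate FP64 Tensor Core: ~{GPUS * FP64_TENSOR_TFLOPS:.1f} TFLOPS")
# -> roughly 68 and 137 TFLOPS respectively, before clock or thermal limits
```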

AMD Confirms Ryzen 7000 Launch Within Q3, Radeon RX 7000 Series Within 2022

AMD, in its Q2-2022 financial results call with analysts, confirmed that the company's next-generation Ryzen 7000 desktop processors based on the "Zen 4" microarchitecture will debut this quarter (i.e., Q3-2022, or before October 2022). CEO Dr. Lisa Su stated: "Looking ahead, we're on track to launch our all-new 5 nm Ryzen 7000 desktop processors and AM5 platforms later this quarter with leadership performance in gaming and content creation."

The company also stated that its next-generation Radeon RX 7000 series GPUs, based on the RDNA3 graphics architecture, are on track for launch "later this year," without specifying a quarter, which could mean any time before January 2023. AMD is also on course to beat Intel to the next generation of server processors with DDR5 and PCIe Gen 5 support, with its 96-core EPYC "Genoa" processor slated for later this year, while Intel struggles with a Q1-2023 general-availability timeline for its Xeon Scalable "Sapphire Rapids" processors.

QNAP Launches TS-h1090FU High-Density 10-Bay U.2 NVMe All-flash NAS

QNAP Systems today introduced the TS-h1090FU, a high-density NVMe all-flash NAS featuring a U.2 NVMe PCIe Gen 4 x4 SSD architecture for high-throughput, low-latency workloads. With built-in dual-port 25 GbE connectivity, PCIe Gen 4 expandability, ZFS reliability, and QNAP's SSD optimization technologies, the TS-h1090FU provides exceptional performance for I/O-intensive file servers, virtualization, collaborative 4K/8K video editing, and efficient data backup/recovery applications. Businesses can retire hybrid arrays and opt for the TS-h1090FU at a comparable cost, making it ideal for Tier 2 storage applications with a low cost-per-GB.

"QNAP's all-flash storage lineup offers a great combination of performance, reliability, and capacity, providing businesses with the highest system performance at the lowest storage cost," said Jason Hsu, Product Manager of QNAP, adding "Running on a ZFS-based OS that is optimized for enterprise storage, the 25 GbE-ready TS-h1090FU all-flash NAS demonstrates superior performance with future-proof growth to empower mission critical workloads."

QNAP Launches the TS-h1290FX, the First Tower U.2 NVMe/SATA All-Flash NAS Powered by AMD EPYC, for 25 GbE Collaborative Workflow Environments

QNAP Systems, Inc. today launched the TS-h1290FX NAS. As QNAP's first PCIe 4.0, U.2 NVMe/SATA all-flash NAS in a tower form factor, the TS-h1290FX excels in the most demanding work environments, such as collaborative high-resolution video workflows. Featuring AMD EPYC 8/16-core processors, built-in 25 GbE and 2.5 GbE connectivity, PCIe Gen 4 expansion, and up to petabyte-scale storage capacity, the TS-h1290FX tackles data-intensive and latency-sensitive applications, such as large media file transfer, real-time editing of 4K/8K high-resolution media, online collaborative workflows, and virtualization applications.

"Modern businesses and studios shouldn't need to dedicate entire rooms to accommodate hot and loud servers, and that's where the TS-h1290FX comes in. Contained within a unique tower form factor and utilizing quiet cooling is exceptional performance driven by a server-grade processor, all-flash U.2 NVMe SSD storage, and QNAP's enterprise-grade QuTS hero operating system," said Jason Hsu, Product Manager of QNAP.

QNAP Launches the TS-h1290FX Tower NAS Powered by AMD EPYC, with 25 GbE

QNAP Systems, Inc., a leading computing, networking, and storage solution innovator, today launched the TS-h1290FX NAS. As QNAP's first PCIe 4.0, U.2 NVMe/SATA all-flash NAS in a tower form factor, the TS-h1290FX excels in the most demanding work environments, such as collaborative high-resolution video workflows. Featuring AMD EPYC 8/16-core processors, built-in 25 GbE and 10 GbE connectivity, PCIe Gen 4 expansion, and up to petabyte-scale storage capacity, the TS-h1290FX provides up to 816K/318K iSCSI 4K random read/write IOPS for tackling data-intensive and latency-sensitive applications, such as large media file transfer, real-time editing of 4K/8K media, and virtualization applications.
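
For context on those IOPS figures, a 4K random workload converts to roughly the following throughput. The arithmetic below is our own illustration, not a QNAP-published number.

```python
# Rough conversion of the quoted 4K random IOPS into throughput.
# Our own arithmetic for context, not a QNAP-published figure.
BLOCK_KB = 4
read_iops, write_iops = 816_000, 318_000

read_gb_s = read_iops * BLOCK_KB / 1_000_000
write_gb_s = write_iops * BLOCK_KB / 1_000_000
print(f"4K random read:  ~{read_gb_s:.2f} GB/s")
print(f"4K random write: ~{write_gb_s:.2f} GB/s")
# -> roughly 3.3 GB/s read and 1.3 GB/s write at a 4K block size
```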

"Modern businesses and studios shouldn't need to dedicate entire rooms to accommodate hot and loud servers, and that's where the TS-h1290FX comes in. Contained within a unique tower form factor and utilizing quiet cooling is exceptional performance driven by a server-grade processor, all-flash U.2 NVMe SSD storage, and QNAP's enterprise-grade QuTS hero operating system," said Jason Hsu, Product Manager of QNAP, adding "the TS-h1290FX is a remarkable storage solution that provides exceptional power to deal with the modern demands of businesses and studios - including high-resolution media editing and online collaborative workflows."

AMD Announces the "Zen 5" Microarchitecture and EPYC "Turin" Processor on 4 nm

AMD in its Financial Analyst Day 2022 presentation, unveiled its next-generation "Zen 5" CPU microarchitecture. The company's latest CPU microarchitecture roadmap also confirms that variants of its "Zen 4" CCDs with 3D Vertical Cache (3DV Cache) are very much in the works, and there will be variants of the EPYC "Genoa" processors with 3DV Cache, besides standard ones.

AMD stated that it has completed the design goals of the current "Zen 3" architecture by building it on both the 7 nm and 6 nm nodes (the latter being the client "Rembrandt" processor). The new "Zen 4" architecture will debut on the 5 nm node (TSMC N5) and could see a similar optical shrink to the newer 4 nm node somewhere down the line, although AMD wouldn't specify whether that would happen on the enterprise or the client segment. The next-gen "Zen 5" architecture will debut on 4 nm and see an optical shrink to 3 nm in some future product.

ORNL Frontier Supercomputer Officially Becomes the First Exascale Machine

The supercomputing world has been chasing performance barriers for years, moving through MegaFLOP, GigaFLOP, TeraFLOP, PetaFLOP, and now ExaFLOP computing. Today, we are witnessing the introduction of the first Exascale-class machine, housed at Oak Ridge National Laboratory. Called Frontier, the system itself is not really new; we have known about its upcoming features for months. What is new is that it is now complete and successfully running at ORNL's facilities. Based on the HPE Cray EX235a architecture, the system uses 3rd Gen AMD EPYC 64-core processors running at a 2 GHz frequency. In total, the system has 8,730,112 cores that work in conjunction with AMD Instinct MI250X GPUs.

As of today's TOP500 list, the system overtakes Fugaku to become the fastest supercomputer on the planet. Delivering a sustained HPL (High-Performance Linpack) score of 1.102 ExaFLOP/s, it achieves a power-efficiency rating of 52.23 GigaFLOPS/watt. In the HPL-AI metric, which measures a system's AI capabilities, the Frontier machine can output 6.86 ExaFLOPs at reduced precision. That alone does not qualify a machine as Exascale, since HPL-AI works with INT8/FP16/FP32 formats, while the official result is measured in FP64 double precision. Fugaku, the previous number one, scores about 2 ExaFLOPs in HPL-AI while delivering "only" 442 PetaFLOP/s in the FP64 HPL benchmark.
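
As a quick sanity check, the sustained HPL score and the efficiency rating imply the system's approximate power draw during the run. The division below is our own arithmetic based on the TOP500 figures quoted above.

```python
# Estimate Frontier's power draw during the HPL run from the figures above.
# This is our arithmetic, not an ORNL-published number.
hpl_score_gflops = 1.102e9          # 1.102 ExaFLOP/s expressed in GigaFLOP/s
efficiency_gflops_per_watt = 52.23  # Green500-style efficiency rating

power_megawatts = hpl_score_gflops / efficiency_gflops_per_watt / 1e6
print(f"Estimated power during HPL: {power_megawatts:.1f} MW")
# -> roughly 21 MW
```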

AMD Expands Confidential Computing Presence on Google Cloud

AMD today announced new Confidential Virtual Machines (VMs) on the existing N2D and C2D VM families on Google Cloud, all powered by AMD EPYC processors. These VMs extend the AMD EPYC processor-based Confidential Computing portfolio on Google Cloud, bringing the performance of 3rd Gen EPYC processors to compute-optimized VMs.

A key Confidential Computing component provided by AMD EPYC processors is AMD Secure Encrypted Virtualization (SEV), part of AMD Infinity Guard. This advanced hardware-based security feature encrypts full system memory and individual virtual machine memory, and isolates VM memory from the hypervisor, without dramatically impacting performance. With the expansion of Confidential Computing to N2D and C2D VMs, Google Cloud customers now have access to advanced hardware-enabled security features powered by 3rd Gen AMD EPYC processors that help protect a wide variety of sensitive workloads.
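
For readers who want to check whether SEV is actually in play on a given Linux host or guest, here is a minimal, hedged sketch. It assumes the host exposes the usual kvm_amd module parameter and that the guest kernel logs a "Memory Encryption Features active" line at boot; paths and message wording vary between kernel versions, so treat it as a hint rather than a definitive check.

```python
# Minimal sketch for checking AMD SEV status on Linux.
# Assumptions: kvm_amd is loaded on the host, and the guest kernel prints a
# "Memory Encryption Features active" line when SEV is in use. Paths and
# wording differ between kernel versions.
import pathlib
import subprocess


def host_sev_enabled() -> bool:
    """On a hypervisor host: read the kvm_amd 'sev' module parameter."""
    param = pathlib.Path("/sys/module/kvm_amd/parameters/sev")
    if not param.exists():
        return False
    return param.read_text().strip() in {"1", "Y", "y"}


def guest_sev_active() -> bool:
    """Inside a VM: look for the kernel's memory-encryption boot message."""
    dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return any("Memory Encryption Features active" in line and "SEV" in line
               for line in dmesg.splitlines())


if __name__ == "__main__":
    print("Host SEV enabled:", host_sev_enabled())
    print("Guest SEV active:", guest_sev_active())
```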

AMD EPYC "Bergamo" 128-core Processor Based on Same SP5 Socket as "Genoa"

AMD is launching two distinct classes of next-generation enterprise processors, the 4th Generation EPYC "Genoa" with CPU core-counts up to 96-core/192-thread; and the new EPYC "Bergamo" with a massive 128-core/256-thread compute density. Pictures of the "Genoa" MCM are already out in the wild, revealing twelve "Zen 4" CCDs built on 5 nm, and a new-generation sIOD (I/O die) that's very likely built on 6 nm. The fiberglass substrate of "Genoa" already looks crowded with twelve chiplets, making us wonder if AMD needed a larger package for "Bergamo." Turns out, it doesn't.

In its latest Corporate presentation, AMD reiterated that "Bergamo" will be based on the same SP5 (LGA-6096) package as "Genoa." This would mean that the company either made room for more CCDs, or the CCDs themselves are larger in size. AMD states that "Bergamo" CCDs are based on the "Zen 4c" microarchitecture. Details about "Zen 4c" are scarce, but from what we gather, it is a cloud-optimized variant of "Zen 4" probably with the entire ISA of "Zen 4," and power characteristics suited for high-density cloud environments. These chiplets are built on the same TSMC N5 (5 nm EUV) process as the regular "Zen 4" CCDs.

AMD Selects Google Cloud to Provide Additional Scale for Chip Design Workloads

Google Cloud and AMD today announced a technology partnership in which AMD will run electronic design automation (EDA) for its chip-design workloads on Google Cloud, further extending the on-premises capabilities of AMD data centers. AMD will also leverage Google Cloud's global networking, storage, artificial intelligence, and machine learning capabilities to further improve upon its hybrid and multicloud strategy for these EDA workloads.

Scale, elasticity, and efficient utilization of resources play critical roles in chip design, particularly given that the demand for compute processing grows with each node advancement. To remain flexible and scale easily, AMD will add Google Cloud's newest compute-optimized C2D VM instance, powered by 3rd Gen AMD EPYC processors, to its suite of resources focused on EDA workloads. By leveraging Google Cloud, AMD anticipates being able to run more designs in parallel, giving the team more flexibility to manage short-term compute demands, without reducing allocation on long-term projects.

AMD EPYC Processors Power New Oracle Cloud Infrastructure Compute Instances and Enable Hybrid Cloud Environment

AMD (NASDAQ: AMD) today announced the expansion of the AMD EPYC processor footprint within the cloud ecosystem, powering the new Oracle Cloud Infrastructure (OCI) E4 Dense instances. These new instances are part of the Oracle Cloud VMware Solution offerings, enabling customers to build and run a hybrid-cloud environment for their VMware-based workloads.

Based on 3rd Gen AMD EPYC processors, the new E4 Dense instances expand the AMD EPYC presence at OCI and are designed to support memory- and storage-intensive VMware workloads. The E4 Dense instances utilize the core density and performance capabilities of EPYC processors to give customers a fast path to the cloud, delivering performance similar to their on-premises VMware workloads along with advanced security features, enabled through AMD Secure Encrypted Virtualization (SEV).

AMD EPYC Processors Power Mercedes-AMG Petronas Formula One Racing Team

AMD and the Mercedes-AMG Petronas Formula One (F1) Team today showcased how AMD EPYC processors improved aerodynamics testing capacity, contributing to the Mercedes-AMG Petronas team winning its eighth Constructors' Championship in the 2021 racing season. By using AMD EPYC processors, the team was able to achieve a 20 percent performance improvement for computational fluid dynamics (CFD) workloads that were used to model and test aerodynamic flow of their F1 car.

"We are proud to partner with the reigning Constructors' Champions, the Mercedes-AMG Petronas Formula One Team, operating at the cutting edge of racing and technology," said Dan McNamara, senior vice president and general manager, Server Business Unit, AMD. "For F1 teams, having the most effective computational analysis of aerodynamics can mean the difference between winning and losing a race. With AMD EPYC processors, the Mercedes-AMG F1 team can iterate on vehicle design faster and more efficiently than their previous system."

AMD Ryzen 7000 "Zen 4" Processors Have DDR5 Memory Overclocking Design-Focus

AMD's first desktop processor with DDR5 memory support, the Ryzen 7000 series "Raphael" based on the "Zen 4" microarchitecture, will come with a design focus on DDR5 memory overclocking. According to Joseph Tao, a Memory Enabling Manager at AMD, the processors will be capable of handling DDR5 memory clock speeds "you maybe thought couldn't be possible."

Tao stated: "Our first DDR5 platform for gaming is our Raphael platform and one of the awesome things about Raphael is that we are really gonna try to make a big splash with overclocking and I'll just kinda leave it there but speeds that you maybe thought couldn't be possible, may be possible with this overclocking spec." We are also hearing reports that AMD is developing a new overclocking standard for DDR5 memory, called RAMP (Ryzen Accelerated Memory Profile), which it is positioning as a competitor to Intel's XMP 3.0 spec.

"Navi 31" RDNA3 Sees AMD Double Down on Chiplets: As Many as 7

Way back in January 2021, we heard a spectacular rumor about "Navi 31," the next-generation big GPU by AMD, being the company's first logic-MCM GPU (a GPU with more than one logic die). The company has a legacy of MCM GPUs, but those have been a single logic die surrounded by memory stacks. The RDNA3 graphics architecture that "Navi 31" is based on sees AMD fragment the logic die into smaller chiplets, with the goal of ensuring that only the components that benefit from the TSMC N5 node (5 nm), such as the number-crunching machinery, are built on it, while ancillary components, such as memory controllers, display controllers, or even media accelerators, are confined to chiplets built on an older node, such as TSMC N6 (6 nm). AMD has taken this approach with its EPYC and Ryzen processors, where the chiplets with the CPU cores get the better node, and the other logic components get an older one.

Greymon55 predicts an interesting division of labor on the "Navi 31" MCM. Apparently, the number-crunching machinery is spread across two GCDs (Graphics Complex Dies?). These dies pack the Shader Engines with their RDNA3 compute units (CU), Command Processor, Geometry Processor, Asynchronous Compute Engines (ACEs), Render Backends, etc. These are components that benefit from the advanced 5 nm node, enabling AMD to run the CUs at higher engine clocks. There is also sound logic behind building a big GPU with two such GCDs instead of a single large GCD, as smaller GPUs can be made with a single GCD (exactly why two 8-core chiplets make up a 16-core Ryzen processor, while one such chiplet is used to create 8-core and 6-core SKUs). The smaller GCD results in better yields per wafer and minimizes the need for separate wafer orders for a larger die (as was the case with the Navi 21).
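
To illustrate why the smaller GCD helps yields, here is a back-of-the-envelope sketch using a simple Poisson defect model. The die areas and defect density are illustrative assumptions, not AMD or TSMC figures.

```python
# Illustrative yield comparison between one large GPU die and a GCD of half
# the area, using a simple Poisson defect model. Die areas and defect density
# are assumptions for the example, not AMD/TSMC figures.
import math


def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Expected fraction of defect-free dies."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)


DEFECT_DENSITY = 0.1    # defects per cm^2 (assumed)
LARGE_DIE_MM2 = 520.0   # hypothetical monolithic GPU
SMALL_DIE_MM2 = 260.0   # hypothetical GCD at half the area

print(f"Monolithic {LARGE_DIE_MM2:.0f} mm^2 yield: "
      f"{poisson_yield(LARGE_DIE_MM2, DEFECT_DENSITY):.1%}")
print(f"Chiplet {SMALL_DIE_MM2:.0f} mm^2 yield:    "
      f"{poisson_yield(SMALL_DIE_MM2, DEFECT_DENSITY):.1%}")
# -> roughly 59% vs. 77% defect-free dies under these assumptions
```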

AMD EPYC "Genoa" Zen 4 Processor Multi-Chip Module Pictured

Here is the first picture of a next-generation AMD EPYC "Genoa" processor with its integrated heatspreader (IHS) removed. This is also possibly the first picture of a "Zen 4" CPU Complex Die (CCD). The picture reveals as many as twelve CCDs, and a large sIOD silicon. The "Zen 4" CCDs, built on the TSMC N5 (5 nm EUV) process, look visibly similar in size to the "Zen 3" CCDs built on the N7 (7 nm) process, which means the CCD's transistor count could be significantly higher, given the transistor-density gained from the 5 nm node. Besides more number-crunching machinery on the CPU core, we're hearing that AMD will increase cache sizes, particularly the dedicated L2 cache size, which is expected to be 1 MB per core, doubling from the previous generations of the "Zen" microarchitecture.

Each "Zen 4" CCD is reported to be about 8 mm² smaller in die-area than the "Zen 3" CCD, or about 10% smaller. What's interesting, though, is that the sIOD (server I/O die) is smaller in size, too, estimated to measure 397 mm², compared to the 416 mm² of the "Rome" and "Milan" sIOD. This is good reason to believe that AMD has switched over to a newer foundry process, such as the TSMC N7 (7 nm), to build the sIOD. The current-gen sIOD is built on Global Foundries 12LPP (12 nm). Supporting this theory is the fact that the "Genoa" sIOD has a 50% wider memory I/O (12-channel DDR5), 50% more IFOP ports (Infinity Fabric over package) to interconnect with the CCDs, and the mere fact that PCI-Express 5.0 and DDR5 switching fabric and SerDes (serializer/deserializers), may have higher TDP; which together compel AMD to use a smaller node such as 7 nm, for the sIOD. AMD is expected to debut the EPYC "Genoa" enterprise processors in the second half of 2022.

AMD SP5 EPYC "Genoa" Zen 4 Processor Socket Pictured in the Flesh

Here's the first picture of AMD Socket SP5, the huge new CPU socket the company is building its next-generation EPYC "Genoa" enterprise processors around. "Genoa" will be AMD's first server products to implement the new "Zen 4" CPU cores, and next-gen I/O, including DDR5 memory and PCI-Express Gen 5. SP5, much like its predecessor SP3, is a land-grid array (LGA) socket, and has 6,096 pins.

The vast pin-count enables the power delivery and I/O needed to support CPU core-counts of up to 96 on the EPYC "Genoa," and up to 128 on the EPYC "Bergamo" cloud processor; a 12-channel DDR5 memory interface (24 sub-channels); and up to 128 PCI-Express 5.0 lanes. The socket's retention mechanism and processor installation procedure appear similar to those of SP3, although the thermal requirements of SP5 will be entirely new, with processors expected to ship with TDPs as high as 400 W, compared to 280 W on the current-generation EPYC "Milan." AMD is expected to debut EPYC "Genoa" in the second half of 2022.
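
As a rough illustration of what a 12-channel DDR5 interface means for bandwidth, here is a simple calculation. The DDR5-4800 data rate is an assumption for the example; the article does not state the supported memory speed.

```python
# Theoretical peak memory bandwidth of a 12-channel DDR5 interface.
# The DDR5-4800 data rate is assumed for illustration; the supported speed
# is not stated in the article.
CHANNELS = 12
BYTES_PER_TRANSFER = 8   # 64-bit channel
DATA_RATE_MT_S = 4800    # assumed DDR5-4800

bandwidth_gb_s = CHANNELS * BYTES_PER_TRANSFER * DATA_RATE_MT_S / 1000
print(f"Peak theoretical bandwidth: {bandwidth_gb_s:.1f} GB/s per socket")
# -> 460.8 GB/s
```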

AMD Expands Data Center Solutions Capabilities with Acquisition of Pensando

AMD today announced a definitive agreement to acquire Pensando for approximately $1.9 billion before working capital and other adjustments. Pensando's distributed services platform includes a high-performance, fully programmable packet processor and comprehensive software stack that accelerate networking, security, storage and other services for cloud, enterprise and edge applications.

"To build a leading-edge data center with the best performance, security, flexibility and lowest total cost of ownership requires a wide range of compute engines," said Dr. Lisa Su, AMD chair and CEO. "All major cloud and OEM customers have adopted EPYC processors to power their data center offerings. Today, with our acquisition of Pensando, we add a leading distributed services platform to our high-performance CPU, GPU, FPGA and adaptive SoC portfolio. The Pensando team brings world-class expertise and a proven track record of innovation at the chip, software and platform level which expands our ability to offer leadership solutions for our cloud, enterprise and edge customers."

AMD's Upcoming Zen 4 Based Genoa CPUs Confirmed to Have 1 MB L2 Cache per Core

As unreliable as Geekbench can be as a comparative benchmark, it is also an excellent source of upcoming hardware leaks, and in this case more details about AMD's upcoming Zen 4 based Genoa server and workstation processors have leaked. Someone with access to a 32-core engineering sample thought it was a good idea to run Geekbench on it and upload the results. As the engineering-sample CPU is locked at 1.2 GHz, the actual benchmark numbers aren't particularly interesting, but the one interesting tidbit we get is that AMD has increased the L2 cache to 1 MB per core, twice as much as its predecessor.

What seems to be missing from this engineering sample is any kind of 3D V-Cache, as it only has a total of 128 MB of L3 cache. Despite the limited clock speed, the Genoa CPU is close to an EPYC 7513 in the single-core tests, a CPU with a 2.6 GHz base clock and a 3.65 GHz boost clock, with both systems running Ubuntu 20.04 LTS. It manages to beat the EPYC 7513 in a couple of the sub-tests, such as Navigation, SQLite, HTML5, Gaussian blur, and face detection, and it is within a few points in tests like speech recognition and rigid-body physics. This is quite impressive considering the Genoa engineering sample is operating at less than half, possibly even a third, of the clock speed of the EPYC 7513. AMD is said to be launching its Zen 4 based Genoa CPUs later this year, and models with up to 96 cores and 192 threads, 12-channel DDR5 memory, and PCIe 5.0 support are expected.

TYAN Drives Innovation in the Data Center with 3rd Gen AMD EPYC Processors with AMD 3D V-Cache Technology

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today announced the availability of high-performance server platforms supporting the new 3rd Gen AMD EPYC Processors with AMD 3D V-Cache technology for the modern data center. "The modern data center requires a powerful foundation to balance compute, storage, memory and IO that can efficiently manage growing volumes in the digital transformation trend," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure Business Unit. "TYAN's industry-leading server platforms powered by 3rd Gen AMD EPYC processors with AMD 3D V-Cache technology give our customers better energy efficiency and increased performance for current and future highly complex workloads."

"3rd Gen AMD EPYC processors with AMD 3D V-Cache technology continue to drive a new standard for the modern data center with breakthrough performance for technical computing workloads due to 768 MB of L3 cache, enabling faster time-to-results on targeted workloads. Fully socket compatible with our 3rd Gen AMD EPYC platforms, customers can adopt these processors to transform their data center operations to achieve faster product development along with exceptional energy savings," said Ram Peddibhotla, corporate vice president, EPYC product management, AMD.

Supermicro's SuperBlade, Twin and Ultra Server Families Powered by 3rd Gen AMD EPYC Processors with 3D V-Cache Technology

Super Micro Computer, Inc. (SMCI), a global leader in high-performance computing, storage, networking solutions, and green computing technology, announces breakthrough performance with the 3rd Gen AMD EPYC Processors with AMD 3D V-Cache Technology in Supermicro advanced servers. The high-density, performance-optimized, and environmentally friendly SuperBlade, the multi-node optimized TwinPro, and the dual-processor optimized Ultra systems will show significant performance improvements when using the new AMD EPYC 7003 Processors with AMD 3D V-Cache for technical computing applications.

"Supermicro servers, leveraging new AMD CPUs, will deliver the increased performance gains our manufacturing customers are looking for to run higher-resolution simulations to design better and more optimized products using the latest CAE applications," said Vik Malyala, President, EMEA, senior vice president, WW FAE, solutions and business. "Our high-performance server platforms will solve more complex problems for engineers and researchers with the new 3rd Gen AMD EPYC Processors with AMD 3D V-Cache."

Supermicro Breakthrough Universal GPU System - Supports All Major CPU, GPU, and Fabric Architectures

Super Micro Computer, Inc. (SMCI), a global leader in enterprise computing, storage, networking solutions, and green computing technology, has announced a revolutionary technology that simplifies large-scale GPU deployments and offers a future-proof design that supports yet-to-be-announced technologies. The Universal GPU server provides the ultimate flexibility in a resource-saving server.

The Universal GPU system architecture combines the latest technologies supporting multiple GPU form factors, CPU choices, storage, and networking options optimized together to deliver uniquely-configured and highly scalable systems. Systems can be optimized for each customer's specific Artificial Intelligence (AI), Machine Learning (ML), and High-Performance Computing (HPC) applications. Organizations worldwide are demanding new options for their next generation of computing environments, which have the thermal headroom for the next generation of CPUs and GPUs.

AMD Threadripper PRO 5000 and EPYC "Milan-X" Join Ryzen 5800X3D for March Availability

It will be an unexpectedly busy March for AMD, with the company launching three distinct products across its processor lines. The first one, which we reported earlier this morning, speaks of a late-March availability of the Ryzen 7 5800X3D 8-core/16-thread Socket AM4 processor, which AMD claims offers gaming performance on par with the Core i9-12900K "Alder Lake." It turns out, there are two more surprises.

Apparently, the company is ready with the Ryzen Threadripper PRO 5000 series workstation processors. Designed for Socket sWRX8 motherboards based on the only chipset option available, the AMD WRX80, these are the first Threadripper products based on the "Zen 3" microarchitecture, and feature 8-channel DDR4 memory and up to 128 PCI-Express Gen 4 lanes for workstation connectivity. Unfortunately, you can't buy one of these in the retail channel, as AMD is making them OEM-only. The first pre-built workstations will arrive as early as next week (March 8). At this point we still don't know if these chips use the newer "Zen 3" CCD with 3D Vertical Cache, or the conventional "Zen 3" CCD with 32 MB of planar L3 cache.

AMD EPYC Powers Amazon EC2 C6a HPC Instance

AMD announced the expansion of Amazon Web Services Inc. (AWS) AMD EPYC processor-based offerings with the general availability of compute-optimized Amazon EC2 C6a instances. According to AWS, the C6a instances offer up to 15% better compute price-performance than previous-generation C5a instances for a variety of compute-focused workloads.

The C6a instances support compute-intensive workloads such as batch processing, distributed analytics, ad serving, highly scalable multiplayer gaming, and video encoding. This is the second AWS EC2 instance type powered by 3rd Gen AMD EPYC processors, following the release of the M6a instances. These instances continue the collaboration between AWS and AMD providing AWS EC2 customers access to the performance and price performance capabilities of the latest generation of AMD EPYC processors.

Google Cloud Chooses 3rd Gen AMD EPYC Processors to Power New Compute Focused Instance

AMD (NASDAQ: AMD) today announced that AMD EPYC processors will power the new C2D virtual machine offering from Google Cloud, bringing customers strong performance and compute power for high-performance computing (HPC) and memory-bound workloads in areas like electronic design automation (EDA) and computational fluid dynamics (CFD). This announcement continues the momentum for AMD EPYC processors, marking the third family of instances powered by 3rd Gen EPYC processors at Google Cloud, joining the T2D and N2D instances.

With the help of AMD EPYC processors and their high core density, the C2D VMs will provide the largest VM sizes within the compute-optimized family at Google Cloud. As well, thanks to the EPYC processor's performance in compute-focused workloads, Google Cloud showcased that the C2D VMs can provide up to 30 percent better performance for targeted workloads compared to previous-generation EPYC based VMs of a comparable size.

AMD EPYC Milan-X 7773X 64-Core CPU Benchmarked & Overclocked

The AMD EPYC 7773X "Milan-X" with 3D V-Cache is a 64-core, 128-thread server processor with 804 MB of total cache that is currently shipping to global data centers. These processors are not yet officially available in retail channels, but Chinese content creator kenaide has managed to acquire and test two qualification-sample chips on a Supermicro dual-socket motherboard. The AMD EPYC 7773X is detected as a "100-000000504-04" CPU by CPU-Z, confirming that it is an engineering sample, with clock speeds 100 MHz below the 2.2 GHz base and 3.5 GHz boost speeds of the official processor.

Each processor features 32 MB of L2, 256 MB of L3, and 512 MB of 3D V-Cache, for a total of 1608 MB of cache in the dual-socket configuration that was benchmarked with Cinebench R23 and 3DMark. The processors were also "overclocked" to 4.8 GHz using the EPYC Milan/Rome ES/QS overclocking tool, by raising their power limit from 280 W to 1500 W and boosting the voltage to 1.55 V. This 4.8 GHz clock speed is only a target; the actual speed reached was not reported, and no benchmark results for the overclocked processors were shared.
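
The per-socket numbers quoted above add up to 800 MB, so the 1608 MB dual-socket total only works out once L1 cache is included. The L1 sizes below (32 KB data + 32 KB instruction per core) are our assumption for the sake of the check, not part of the report.

```python
# Quick check of the quoted 1608 MB dual-socket cache total.
# L2/L3/3D V-Cache figures are from the report above; the per-core L1 sizes
# (32 KB data + 32 KB instruction) are assumed for this check.
CORES = 64
L1_PER_CORE_KB = 32 + 32          # assumed L1d + L1i
L2_MB, L3_MB, VCACHE_MB = 32, 256, 512

per_socket_mb = CORES * L1_PER_CORE_KB / 1024 + L2_MB + L3_MB + VCACHE_MB
print(f"Per socket: {per_socket_mb:.0f} MB, dual socket: {2 * per_socket_mb:.0f} MB")
# -> 804 MB per socket, 1608 MB for the dual-socket system
```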