News Posts matching #EPYC


ASRock Rack Brings AMD EPYC CPUs to "Deep" Mini-ITX Form Factor

ASRock Rack, the server-focused arm of ASRock, has launched a new motherboard that accommodates AMD EPYC CPUs with up to 64 cores. Built on a proprietary form factor called "Deep Mini-ITX," the ROMED4ID-2T is just a bit bigger than a standard Mini-ITX board: where standard Mini-ITX measures 170 x 170 mm, the Deep Mini-ITX format extends the depth to 170 x 208.28 mm, or 6.7" x 8.2" for the American readers. ASRock Rack specifies that the board supports AMD's second-generation EPYC "Rome" 7002 series processors, which drop into the 4,094-pin SP3 (LGA 4094) socket.

The motherboard comes with four DDR4 DIMM slots supporting R-DIMM, LR-DIMM, and NV-DIMM modules; with LR-DIMMs, memory capacity tops out at 256 GB. For expansion there is a single PCIe 4.0 x16 slot, while an M.2 2280 slot accommodates high-speed PCIe 4.0 x4 SSDs. Connectivity to the outside world is handled by an Intel X550-AT2 controller driving two RJ45 10 GbE ports. Storage connectivity is rounded out by two Slimline connectors (each configurable as PCIe 4.0 x8 or eight SATA 6 Gb/s ports) and four Slimline connectors (PCIe 4.0 x8) for U.2 drives.
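For a rough sense of how those interfaces compare, the short Python sketch below estimates per-direction bandwidth for the 10 GbE ports and the PCIe 4.0 links on the board. The PCIe figures assume the standard 16 GT/s per-lane rate with 128b/130b encoding; these are theoretical maxima, not figures ASRock Rack quotes.

```python
# Back-of-the-envelope link bandwidth estimates (not vendor-quoted figures).

def pcie4_bandwidth_gbs(lanes):
    """Theoretical per-direction PCIe 4.0 bandwidth in GB/s.

    Assumes 16 GT/s per lane and 128b/130b line encoding.
    """
    raw_gbits = 16 * lanes                 # GT/s across all lanes
    payload_gbits = raw_gbits * 128 / 130  # strip encoding overhead
    return payload_gbits / 8               # bits -> bytes

if __name__ == "__main__":
    print(f"10 GbE port:            {10 / 8:.2f} GB/s")
    print(f"PCIe 4.0 x4 (M.2):      {pcie4_bandwidth_gbs(4):.2f} GB/s")
    print(f"PCIe 4.0 x8 (Slimline): {pcie4_bandwidth_gbs(8):.2f} GB/s")
    print(f"PCIe 4.0 x16 (slot):    {pcie4_bandwidth_gbs(16):.2f} GB/s")
```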

TOP500 Expands Exaflops Capacity Amidst Low Turnover

The 56th edition of the TOP500 saw the Japanese Fugaku supercomputer solidify its number one status in a list that reflects a flattening performance growth curve. Although two new systems managed to make it into the top 10, the full list recorded the smallest number of new entries since the project began in 1993.

The entry level to the list moved up to 1.32 petaflops on the High Performance Linpack (HPL) benchmark, a small increase from 1.23 petaflops recorded in the June 2020 rankings. In a similar vein, the aggregate performance of all 500 systems grew from 2.22 exaflops in June to just 2.43 exaflops on the latest list. Likewise, average concurrency per system barely increased at all, growing from 145,363 cores six months ago to 145,465 cores in the current list.
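To put the flattening curve into numbers, here is a quick calculation of the six-month growth rates implied by the figures quoted above (values taken directly from the list summary):

```python
# Six-month growth implied by the TOP500 figures quoted above.
figures = {
    "entry-level HPL (Pflop/s)": (1.23, 1.32),
    "aggregate HPL (Eflop/s)":   (2.22, 2.43),
    "average cores per system":  (145_363, 145_465),
}

for name, (june_2020, nov_2020) in figures.items():
    growth = (nov_2020 / june_2020 - 1) * 100
    print(f"{name:28s}: {june_2020:,} -> {nov_2020:,}  (+{growth:.2f}%)")
```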

AMD Announces CDNA Architecture. AMD Instinct MI100 is the World's Fastest HPC Accelerator

AMD today announced the new AMD Instinct MI100 accelerator - the world's fastest HPC GPU and the first x86 server GPU to surpass the 10 teraflops (FP64) performance barrier. Supported by new accelerated compute platforms from Dell, Gigabyte, HPE, and Supermicro, the MI100, combined with AMD EPYC CPUs and the ROCm 4.0 open software platform, is designed to propel new discoveries ahead of the exascale era.

Built on the new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors. The MI100 offers up to 11.5 TFLOPS of peak FP64 performance for HPC and up to 46.1 TFLOPS peak FP32 Matrix performance for AI and machine learning workloads. With new AMD Matrix Core technology, the MI100 also delivers a nearly 7x boost in FP16 theoretical peak floating point performance for AI training workloads compared to AMD's prior generation accelerators.
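As a rough cross-check of the headline throughput numbers, the sketch below reconstructs the vector peak figures. The compute-unit count (120 CUs x 64 shaders) and the ~1.5 GHz peak engine clock are assumptions taken from publicly listed MI100 specifications; they are not stated in the article.

```python
# Rough reconstruction of the MI100 peak-throughput figures quoted above.
# Shader count and clock are assumed from public MI100 specs, not this article.

shaders   = 120 * 64      # 120 CUs x 64 stream processors (assumed)
clock_ghz = 1.502         # assumed peak engine clock

fp32_vector = shaders * 2 * clock_ghz / 1000   # TFLOPS; 2 FLOPs per FMA
fp64_vector = fp32_vector / 2                  # FP64 runs at half the FP32 rate

print(f"Peak FP32 (vector): {fp32_vector:.1f} TFLOPS")
print(f"Peak FP64 (vector): {fp64_vector:.1f} TFLOPS  # the ~11.5 TFLOPS figure above")
# The 46.1 TFLOPS FP32 Matrix and the FP16 figures come from the separate
# Matrix Core (MFMA) pipelines, which run at higher rates than the vector ALUs.
```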

AMD Wins Contract for European LUMI Supercomputer: 552 petaflop/s Powered by EPYC, AMD Instinct

AMD has won a contract to power the LUMI supercomputer, designed for the EuroHPC Joint Undertaking (EuroHPC JU) in conjunction with 10 European countries. The contract will see AMD provide both the CPU and GPU innards of LUMI, which is set to be populated with next-generation AMD EPYC CPUs and AMD Instinct GPUs. The supercomputer, scheduled to enter operation next year, will deliver an estimated 552 petaflop/s - higher than the 513 petaflop/s peak of the world's current fastest supercomputer, Japan's Arm-powered Fugaku.

The contract for LUMI's construction has been won by Hewlett Packard Enterprise (HPE), which will provide an HPE Cray EX supercomputer powered by the aforementioned AMD hardware. LUMI's investment cost is set at 200 million euros, covering hardware, installation, and operation over its foreseeable lifetime. This design win marks another big contract for AMD, which was all but absent from the supercomputing space until the launch, and subsequent iterations, of its Zen architecture and the latest generations of its Instinct HPC accelerators.

QNAP Launches 24-bay U.2 NVMe NAS Featuring 2nd Gen AMD EPYC

QNAP Systems, Inc., a leading computing, networking, and storage solution innovator, today introduced the lightning-fast NVMe all-flash TS-h2490FU. With 24 drive bays for U.2 NVMe Gen 3 x4 SSDs, the TS-h2490FU provides up to 472K/205K iSCSI 4K random read/write IOPS with ultra-low latency. Equipped with four 25 GbE SFP28 and two 2.5 GbE RJ45 LAN ports, five PCIe Gen 4 slots, and 1100 W redundant power supplies, the TS-h2490FU provides exceptional hardware and connectivity. The ZFS-based QuTS hero operating system also includes powerful applications for data reduction and SSD optimization, ensuring that SSD performance and lifespan are maximized for mission-critical virtualized workloads and data centers with all-flash investments.
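For a sense of scale, the quoted 4K random IOPS figures translate into raw throughput as follows (a simple conversion, not a QNAP-published number), which also illustrates why the unit pairs its NVMe bays with four 25 GbE ports:

```python
# Convert the quoted 4K random IOPS figures into approximate throughput.
KIB = 1024

read_iops, write_iops = 472_000, 205_000
read_gbs  = read_iops  * 4 * KIB / 1e9   # GB/s
write_gbs = write_iops * 4 * KIB / 1e9

network_gbs = 4 * 25 / 8                 # four 25 GbE ports at line rate

print(f"4K random read:  ~{read_gbs:.2f} GB/s")
print(f"4K random write: ~{write_gbs:.2f} GB/s")
print(f"4x 25 GbE:       ~{network_gbs:.2f} GB/s aggregate line rate")
```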

Los Alamos National Laboratory Deploys HPE Cray EX 'Chicoma' Supercomputer Powered by AMD EPYC Processors

Los Alamos National Laboratory has completed the installation of a next-generation high-performance computing platform, with the aim of enhancing its ongoing R&D efforts in support of the nation's response to COVID-19. Named Chicoma, the new platform is poised to demonstrate Hewlett Packard Enterprise's new HPE Cray EX supercomputer architecture for solving complex scientific problems.

"As extensive social and economic impacts from COVID-19 continue to grip the nation, Los Alamos scientists are actively engaged in a number of critical research efforts ranging from therapeutics design to epidemiological modeling," said Irene Qualters, Associate Laboratory Director for Simulation and Computing at Los Alamos. "High Performance Computing is playing a critical role by allowing scientists to model the complex phenomena involved in viral evolution and propagation."

OIST Deploys AMD EPYC Processors with Over 2 PFLOPs of Computing Power Dedicated to Scientific Research

Today, AMD and the Okinawa Institute of Science and Technology Graduate University (OIST) announced the deployment of AMD EPYC 7702 processors in a new high-performance computing system. The EPYC-based supercomputer will deliver 2.36 petaflops of computing power that OIST plans to use for scientific research at the university. The Scientific Computing & Data Analysis Section (SCDA) of OIST plans to use the new supercomputer to support computationally intensive research ranging from bioinformatics and computational neuroscience to physics. SCDA adopted AMD EPYC after significant growth, including a 2X increase in users.

"2020 is a milestone year for OIST with new research units expanding the number of research areas. This growth is driving a significant increase in our computational needs," said Eddy Taillefer, Ph.D., Section Leader, Scientific Computing & Data Analysis Section. "Under the common resource model for which the computing system is shared by all OIST users we needed a significant increase in core-count capacity to both absorb these demands and cope with the significant growth of OIST. The latest AMD EPYC processor was the only technology that could match this core-count need in a cost-performance effective way."

AMD EPYC Processors Optimized for VMware vSphere 7.0U1

AMD today highlighted the latest expansion of the AMD EPYC processor ecosystem for virtualized and hyperconverged infrastructure (HCI) environments with VMware adding support for AMD Secure Encrypted Virtualization-Encrypted State (SEV-ES) in its newest vSphere release, 7.0U1. With the latest release, VMware vSphere now enables AMD SEV-ES, which is part of AMD Infinity Guard, a robust set of modern, hardware enabled features found in all 2nd Gen AMD EPYC processors. In addition to VM memory encryption, SEV-ES also provides encryption of CPU registers and provides VMware customers with easy-to-implement and enhanced security for their environments.
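vSphere exposes SEV-ES through VMware's own VM configuration workflow; purely as an illustration of the underlying hardware capability, the sketch below (assuming a generic Linux host with the kvm_amd module loaded, entirely outside VMware's tooling) reads the CPU feature flags and KVM module parameters that indicate SEV and SEV-ES support. Paths and parameter availability vary by kernel version.

```python
# Quick check for SEV / SEV-ES support on a generic Linux host (not VMware tooling).
# Assumes a Linux kernel with the kvm_amd module loaded; paths may differ by distro.
from pathlib import Path

def cpu_flags():
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def kvm_amd_param(name):
    p = Path(f"/sys/module/kvm_amd/parameters/{name}")
    return p.read_text().strip() if p.exists() else "parameter not present"

flags = cpu_flags()
print("CPU advertises SEV:    ", "sev" in flags)
print("CPU advertises SEV-ES: ", "sev_es" in flags)
print("kvm_amd sev enabled:   ", kvm_amd_param("sev"))
print("kvm_amd sev_es enabled:", kvm_amd_param("sev_es"))
```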

"As the modern data center continues to evolve into a virtualized, hybrid cloud environment, AMD and VMware are working together to make sure customers have access to systems that provide high levels of performance on virtualization workloads, while enabling advanced security features that are simple to implement for better protection of data," said Dan McNamara, senior vice president and general manager, Server Business Unit, AMD. "A virtualized data center with AMD EPYC processors and VMware enables customers to modernize the data center and have access to high-performance and leading-edge security features, across a wide variety of OEM platforms."

GIGABYTE, Northern Data AG and AMD Join Forces to Drive HPC Mega-Project

GIGABYTE Technology, an industry leader in high-performance servers and workstations, today announced a partnership with Northern Data AG to create an HPC mega-project with computing power of around 3.1 exaflops. GIGABYTE will supply GPU-based server systems equipped with proven AMD EPYC processors and AMD Radeon Instinct accelerators from technology partner AMD, a leading provider of high-performance computing and graphics technologies, to Northern Data.

Northern Data is developing a distributed computing cluster based on the hardware at locations in Norway, Sweden, and Canada, which in its final stage of deployment will provide FP32 computing power of around 3.1 exaflops (3.1 million teraflops; 274.54 petaflops FP64). For comparison, the world's fastest supercomputer, Japan's "Fugaku" (Fujitsu), has a calculation power of 1.07 exaflops FP32 and 415.3 petaflops FP64, while the second fastest, the US supercomputer "Summit" (IBM), delivers 0.414 exaflops FP32 and 148.0 petaflops FP64.

AMD Zen 3-based EPYC Milan CPUs to Usher in 20% Performance Increase Compared to Rome

According to a report from Hardwareluxx, whose contributor Andreas Schilling reportedly gained access to OEM documentation, AMD's upcoming EPYC Milan CPUs are set to offer up to 20% performance improvements over the previous EPYC generation. The report claims a 15% IPC improvement, paired with an extra 5% gained via operating frequency optimization. The report also claims that AMD's 64-core designs will feature a lower-clocked all-core operating mode, plus an alternate 32-core mode for lightly threaded workloads in which extra frequency is added to the active cores.
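Note that the 15% IPC uplift and the 5% frequency uplift compound rather than simply add, which is roughly where the ~20% headline figure comes from (a quick check of our own, not a calculation from the report):

```python
# The quoted gains compound multiplicatively.
ipc_gain  = 1.15   # ~15% IPC uplift claimed for Zen 3
freq_gain = 1.05   # ~5% from frequency optimization

combined = ipc_gain * freq_gain
print(f"Combined uplift: ~{(combined - 1) * 100:.1f}%")   # ~20.8%
```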

Apparently, AMD's approach with the Zen 3 architecture does away with L3 subdivisions along CCX lines; instead, the full 32 MB of L3 cache is available to every core in each 8-core Core Compute Die (CCD). AMD has also apparently achieved new levels of frequency optimization with Zen 3, with higher upward frequency limits than before. This will benefit lower core-count designs the most, as the heat generated is necessarily lower than in more core-dense designs. Milan keeps the same 7 nm manufacturing tech, DDR4, PCIe 4.0, and 120-225 W TDPs as the previous-gen Rome. It remains to be seen how these changes translate to the consumer versions of Zen 3, Vermeer, later this year.

GIGABYTE Announces G242-Z11 HPC Node with PCIe 4.0

GIGABYTE Technology, an industry leader in high-performance servers and workstations, today announced the launch of the GIGABYTE G242-Z11 with PCIe 4.0, which adds to an already extensive line of G242 series servers designed for AI, deep learning, data analytics, and scientific computing. High-speed interfaces such as Ethernet, InfiniBand, and PCI Express rely on fast data transfer, and PCIe 3.0 can pose a bottleneck in some servers. The 2nd Gen AMD EPYC 7002 processors add PCIe Gen 4.0, which keeps high-bandwidth applications from being bottlenecked, and GIGABYTE has built an ever-evolving line of servers to accommodate the latest technology.

The G242-Z11 caters to the capabilities of 2nd Gen AMD EPYC 7002 series processors. It is built around a single AMD EPYC processor, up to and including the new 280 W 64-core (128-thread) AMD EPYC 7H12. Besides a high core count, the 7002 series has 128 PCIe lanes and natively supports PCIe Gen 4.0, which offers double the speed and bandwidth of PCIe 3.0: 16 GT/s per lane and roughly 64 GB/s of aggregate bandwidth on an x16 link. As for memory, the G242-Z11 supports 8-channel DDR4 with room for up to 8 DIMMs. In this one-DIMM-per-channel configuration, it can support up to 2 TB of memory at speeds up to 3200 MHz.
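A quick sketch of where those headline numbers come from (simple arithmetic on the quoted specs; the 256 GB-per-DIMM module size behind the 2 TB ceiling is our assumption, not GIGABYTE's):

```python
# Where the G242-Z11's headline bandwidth/capacity numbers come from.

# PCIe 4.0 x16: 16 GT/s per lane, 128b/130b encoding, full duplex.
per_direction = 16 * 16 * (128 / 130) / 8       # ~31.5 GB/s each way
print(f"PCIe 4.0 x16: ~{per_direction:.1f} GB/s per direction, "
      f"~{2 * per_direction:.0f} GB/s aggregate")

# 8-channel DDR4-3200, one DIMM per channel, 8 bytes per transfer per channel.
channels, mts, bytes_per_transfer = 8, 3200, 8
print(f"Peak memory bandwidth: {channels * mts * bytes_per_transfer / 1000:.1f} GB/s")

# The 2 TB ceiling implies 256 GB modules in each of the 8 slots (our assumption).
print(f"Capacity with 256 GB DIMMs: {8 * 256} GB")
```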

Intel Ice Lake-SP Processors Get Benchmarked Against AMD EPYC Rome

Intel is preparing to launch its next generation of server processors, and next in line is the 10 nm Ice Lake-SP CPU. Featuring Sunny Cove cores and up to 28 cores per package, the CPU is set to bring big improvements over the previous generation of server products, Cascade Lake. Today, thanks to the sharp eye of TUM_APISAK, we have a new benchmark of the Ice Lake-SP platform compared to AMD's EPYC Rome offerings. In the latest GeekBench 4 listing, an engineering sample of an unknown Ice Lake-SP model appeared with 28 cores, 56 threads, a base frequency of 1.5 GHz, and a boost of 3.19 GHz.

This engineering sample was tested in a dual-socket configuration totaling 56 cores and 112 threads, against a single-socket AMD EPYC 7442 "Rome" CPU. The Intel configuration scored 3,424 points in the single-threaded test, where the AMD configuration scored a notably higher 4,398 points. The lower score on Intel's part is possibly due to lower clocks, which should improve in the final product, as this is only an engineering sample. In the multi-threaded test, the Intel configuration scored 38,079 points, while the AMD EPYC system trailed at 35,492 points, which shows that Ice Lake-SP has some potential.

AMD Confirms "Zen 4" on 5nm, Other Interesting Tidbits from Q2-2020 Earnings Call

AMD late Tuesday released its Q2-2020 financial results, which saw the company rake in revenue of $1.93 billion for the quarter and clock a 26 percent YoY revenue growth. In both its corporate presentation targeted at financial analysts and its post-results conference call, AMD revealed a handful of interesting bits looking into the near future. Much of the focus of AMD's presentation was on reassuring investors that [unlike Intel] it is promising a stable and predictable roadmap, that nothing on its roadmap has changed, and that it intends to execute everything on time. "Over the past couple of quarters what we've seen is that they see our performance/capability. You can count on us for a consistent roadmap. Milan point important for us, will ensure it ships later this year. Already started engaging people on Zen4/5nm. We feel customers are very open. We feel well positioned," said president and CEO Dr. Lisa Su.

For starters, there was yet another confirmation from the CEO that the company will launch the "Zen 3" CPU microarchitecture across both the consumer and data-center segments before year-end, meaning both Ryzen and EPYC "Milan" products based on "Zen 3." Also confirmed were the introduction of the RDNA2 graphics architecture across consumer graphics segments and the debut of the CDNA scalar compute architecture. The company has started shipping semi-custom SoCs to both Microsoft and Sony so they can manufacture their next-generation Xbox Series X and PlayStation 5 game consoles in volume for the Holiday shopping season; semi-custom shipments could contribute significantly to the company's Q3-2020 earnings. CDNA won't play a big role in 2020 for AMD, but there will be more opportunities for the datacenter GPU lineup in 2021, according to the company. CDNA2 debuts next year.

AMD Reports Second Quarter 2020 Financial Results

AMD (NASDAQ:AMD) today announced revenue for the second quarter of 2020 of $1.93 billion, operating income of $173 million, net income of $157 million and diluted earnings per share of $0.13. On a non-GAAP basis, operating income was $233 million, net income was $216 million and diluted earnings per share was $0.18. "We delivered strong second quarter results, led by record notebook and server processor sales as Ryzen and EPYC revenue more than doubled from a year ago," said Dr. Lisa Su, AMD president and CEO. "Despite some macroeconomic uncertainty, we are raising our full-year revenue outlook as we enter our next phase of growth driven by the acceleration of our business in multiple markets."

Linux Performance of AMD Rome vs Intel Cascade Lake, 1 Year On

Michael Larabel over at Phoronix has posted an extremely comprehensive analysis of the performance differential between AMD's Rome-based EPYC and Intel's Cascade Lake Xeons, one year after release. The battery of tests, comprising more than 116 benchmark results, pits a Xeon Platinum 8280 2P system against an EPYC 7742 2P one. The tests compare each system's performance under Ubuntu 19.04, chosen as the "one year ago" baseline, against the newer Linux software stack (Ubuntu 20.10 daily + GCC 10 + Linux 5.8).

The benchmark conclusions are interesting. For one, Intel gained more ground than AMD over the course of the year, with the Xeon platform gaining 6% performance across releases while AMD's EPYC gained just 4% over the same period. Even so, AMD's system remains an average of 14% faster than the Intel platform across all tests, which speaks to AMD's silicon advantage. Follow the source link for the full rundown and benchmark charts.
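As a rough sanity check on how those averaged percentages interact (our arithmetic, not additional Phoronix data): if the EPYC system is about 14% faster today after gaining 4% over the year while the Xeon gained 6%, the implied gap a year ago was slightly wider.

```python
# Rough sanity check on the averaged figures quoted above (not Phoronix data).
amd_gain, intel_gain = 1.04, 1.06
gap_now = 1.14                      # EPYC ~14% faster on the new software stack

gap_then = gap_now * intel_gain / amd_gain
print(f"Implied gap a year ago: ~{(gap_then - 1) * 100:.0f}%")   # ~16%
```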

Advanced Security Features of AMD EPYC Processors Enable New Google Cloud Confidential Computing Portfolio

AMD and Google Cloud today announced the beta availability of Confidential Virtual Machines (VMs) for Google Compute Engine powered by 2nd Gen AMD EPYC processors, taking advantage of the processors' advanced security features. The first product in the Google Cloud Confidential Computing portfolio, Confidential VMs enable customers for the first time to encrypt data in use, while it is being processed, and not just at rest and in transit. Based on the N2D family of VMs for Google Compute Engine, Confidential VMs provide customers with high-performance processing for the most demanding computational tasks and enable encryption for even the most sensitive data in the cloud while it is being processed.

"At Google Cloud, we believe the future of cloud computing will increasingly shift to private, encrypted services where users can be confident that the confidentiality of their data is always under their control. To help customers in making that transition, we've created Confidential VMs, the first product in our Google Cloud Confidential Computing portfolio," said Vint Cerf, vice president and chief internet evangelist, Google. "By using advanced security technology in the AMD EPYC processors, we've created a breakthrough technology that allows customers to encrypt their data in the cloud while it's being processed and unlock computing scenarios that had previously not been possible."

AMD 64-core EPYC "Milan" Based on "Zen 3" Could Ship with 3.00 GHz Clocks

AMD's 3rd generation EPYC line of enterprise processors, which leverages the "Zen 3" microarchitecture, could innovate in two directions: increasing performance by doing away with the CCX (compute complex) multi-core topology, and taking advantage of a newer, refined 7 nm-class node to increase clock speeds. Igor's Lab decoded as many as three OPNs of the upcoming 3rd gen EPYC series, including a 64-core/128-thread part that ships with a frequency of 3.00 GHz. The top 2nd gen EPYC 64-core part, the 7662, ships with a 2.00 GHz base frequency, a 3.30 GHz boost, and a 225 W TDP. AMD is expected to unveil its "Zen 3" microarchitecture within 2020.

AMD Ryzen Threadripper PRO 3995WX Processor Pictured: 8-channel DDR4

Here is the first picture of the Ryzen Threadripper PRO 3995WX processor, designed to be part of AMD's HEDT/workstation processor launch for this year. The picture surfaced briefly on the ChipHell forums before being picked up by HXL (@9550pro). This processor is designed to compete with Intel Xeon W series processors such as the W-3175X, and is hence positioned a segment above even the "normal" Threadripper series led by the 64-core/128-thread Threadripper 3990X. Besides certain features exclusive to Ryzen PRO series processors, the killer feature of the 3995WX is a menacing 8-channel DDR4 memory interface that can handle up to 2 TB of ECC memory.

The Threadripper PRO 3995WX is expected to have mostly identical I/O to the most expensive EPYC 7662 processor. As a Ryzen-branded chip, it could feature higher clock speeds than its EPYC counterpart. To enable its 8-channel memory, the processor could come with a new socket, likely sWRX8, and the AMD WRX80 chipset, although it wouldn't surprise us if these processors have some form of inter-compatibility with sTRX4 and TRX40 (at limited memory bandwidth and PCIe capabilities, of course). Sources tell VideoCardz that AMD could announce the Ryzen Threadripper PRO series as early as July 14, 2020.

As CERN Plans LHC Expansion, AMD Powers Latest Science Feats

AMD has entered a strategic partnership with the European Organization for Nuclear Research (CERN), which sees the company's EPYC processors powering the latest and greatest when it comes to man-made incursions into the secrets of the universe. AMD's 2nd Gen EPYC 7742 processors are already being deployed in CERN's current Large Hadron Collider (LHC), the world's largest particle accelerator. The LHC has already given us discoveries as important as the Higgs boson - a fundamental particle that has given profound insight into the workings of the universe according to the Standard Model, and whose discovery garnered the 2013 Nobel Prize in Physics.

The current LHC is a 17-mile-long (27 km) underground ring of superconducting magnets housed in a pipe-like structure, or cryostat, cooled to temperatures just above absolute zero. Particle collisions in the LHC generate some 40 TB of data per second, which has to be stored and analyzed, with irrelevant components discarded so as to produce usable data (all in the name of science). Even as AMD's 2nd Gen EPYC lineup is already being used to this effect in the current LHC, CERN has recently announced plans to back a €20bn investment in a second-generation hadron collider. The Future Circular Collider (FCC), as it is tentatively called, will be four times the size (over 100 km long) and six times more powerful than the LHC. And you can rest assured that all that data will still need to be processed, at a rate likely to increase proportionally with the power of the Future Circular Collider. Whether AMD will be the chosen partner for the hardware needed for this task remains unclear, but the fact that AMD's products are already being used in the current LHC could spell a very relevant outcome for AMD's financials in the future. Not to mention the earned bragging rights on account of their hardware being used for science's most extraordinary feats.

AMD EPYC Scores New Supercomputing and High-Performance Cloud Computing System Wins

AMD today announced multiple new high-performance computing wins for AMD EPYC processors, including that the seventh fastest supercomputer in the world and four of the 50 highest-performance systems on the bi-annual TOP500 list are now powered by AMD. Momentum for AMD EPYC processors in advanced science and health research continues to grow with new installations at Indiana University, Purdue University and CERN as well as high-performance computing (HPC) cloud instances from Amazon Web Services, Google, and Oracle Cloud.

"The leading HPC institutions are increasingly leveraging the power of 2nd Gen AMD EPYC processors to enable cutting-edge research that addresses the world's greatest challenges," said Forrest Norrod, senior vice president and general manager, data center and embedded systems group, AMD. "Our AMD EPYC CPUs, Radeon Instinct accelerators and open software programming environment are helping to advance the industry towards exascale-class computing, and we are proud to strengthen the global HPC ecosystem through our support of the top supercomputing clusters and cloud computing environments."

GIGABYTE Introduces a Broad Portfolio of G-series Servers Powered by NVIDIA A100 PCIe

GIGABYTE, an industry leader in high-performance servers and workstations, announced its G-series servers' validation plan. Following today's NVIDIA A100 PCIe GPU announcement, GIGABYTE has completed compatibility validation of the G481-HA0 and G292-Z40 and added the NVIDIA A100 to the support list for these two servers. The remaining G-series servers will be divided into two waves to complete their respective compatibility tests soon. At the same time, GIGABYTE also launched a new G492 series server based on the AMD EPYC 7002 processor family, which provides PCIe Gen 4 support for up to 10 NVIDIA A100 PCIe GPUs. The G492 is the server with the highest computing power for AI model training on the market today. GIGABYTE will offer two SKUs for the G492: the G492-Z50 will be at a more approachable price point, whereas the G492-Z51 will be geared towards higher performance.

The G492 is GIGABYTE's second-generation 4U G-series server. Compared to the first-generation G481 (Intel architecture) and G482 (AMD architecture) servers, its user-friendly design and scalability have been further optimized. In addition to supporting two 280 W 2nd Gen AMD EPYC 7002 processors, the G492 offers 32 DDR4 memory slots supporting up to 8 TB of memory with data transmission at 3200 MHz. The G492 has built-in PCIe Gen 4 switches, which provide additional PCIe Gen 4 lanes. PCIe Gen 4 has twice the I/O performance of PCIe Gen 3 and fully enables the computing power of the NVIDIA A100 Tensor Core GPU; it can also be applied to PCIe storage, providing a native storage upgrade path for the G492.
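The role of the built-in PCIe Gen 4 switches is easier to see with a quick lane count (our arithmetic, based on the quoted configuration): ten x16 GPUs need more lanes than even a two-socket EPYC system exposes, so switches fan the available root-port lanes out to all of the devices.

```python
# Why the G492 needs PCIe switches: simple lane arithmetic (our estimate).
gpus, lanes_per_gpu = 10, 16
cpu_lanes = 128            # usable PCIe 4.0 lanes in a 2P EPYC 7002 system

needed = gpus * lanes_per_gpu
print(f"Lanes needed by GPUs alone: {needed}")
print(f"Lanes exposed by the CPUs:  {cpu_lanes}")
print("Switches required to fan out to all GPUs" if needed > cpu_lanes
      else "No switch required")

# The 8 TB memory ceiling implies 256 GB modules (our assumption) in all 32 slots.
print(f"Memory ceiling with 256 GB DIMMs: {32 * 256} GB")
```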

TYAN Brings the Latest Server Advancements at its 2020 Server Solutions Online Exhibition

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, is showcasing its latest lineup of HPC, storage, cloud, and embedded platforms powered by 2nd Gen AMD EPYC 7002 series processors and 2nd Gen Intel Xeon Scalable processors at its 2020 Server Solutions online exhibition.

"With over 30 years of experience offering state-of-the-art server platforms and server motherboards, TYAN has been recognized by large scale data center customers and server channels," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit. "Combining the latest innovation from our partners, like Intel and AMD, TYAN customers enable to win the market opportunities precisely with TYAN's server building block offerings."

ASUS Announces ESC4000A-E10 GPGPU Server with NVIDIA A100 Tensor Core GPUs

ASUSTeK, the leading IT company in server systems, server motherboards, and workstations, today announced its new NVIDIA A100-powered server, the ESC4000A-E10, built to accelerate and optimize data centers for high utilization and low total cost of ownership with PCIe Gen 4 expansion, OCP 3.0 networking, faster compute, and better GPU performance. ASUS continues building a strong partnership with NVIDIA to deliver unprecedented acceleration and flexibility to power the world's highest-performing elastic data centers for AI, data analytics, and HPC applications.

The ASUS ESC4000A-E10 is a 2U server powered by the AMD EPYC 7002 series processors, which deliver up to 2x the performance and 4x the floating-point capability in a single socket versus the previous 7001 generation. Targeted at AI, HPC, and VDI applications in data center or enterprise environments that require powerful CPU cores, support for more GPUs, and faster transmission speeds, the ESC4000A-E10 focuses on delivering GPU-optimized performance with support for up to four double-deck high-performance GPUs or eight single-deck GPUs, including the latest NVIDIA Ampere-architecture A100 as well as V100, Tesla, and Quadro cards. This also benefits virtualization, consolidating GPU resources into a shared pool so users can utilize them more efficiently.

AMD EPYC Processors Ecosystem Continues to Grow with Integration into New NVIDIA DGX A100

AMD today announced that the NVIDIA DGX A100, the third generation of the world's most advanced AI system, is the latest high-performance computing system featuring 2nd Gen AMD EPYC processors. Delivering 5 petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.

The NVIDIA DGX A100 leverages the high-performance capabilities, 128 cores, DDR4-3200 memory, and PCIe 4.0 support of two AMD EPYC 7742 processors running at speeds of up to 3.4 GHz. The 2nd Gen AMD EPYC processor is the first and only current x86-architecture server processor that supports PCIe 4.0, providing leadership high-bandwidth I/O that is critical for high-performance computing and for connections between the CPU and other devices like GPUs.

2nd Gen AMD EPYC Processors Now Delivering More Computing Power to Amazon Web Services Customers

AMD today announced that 2nd Gen AMD EPYC processor powered Amazon Elastic Compute Cloud (EC2) C5a instances are now generally available in the AWS U.S. East, AWS U.S. West, AWS Europe and AWS Asia Pacific regions.

Powered by 2nd Gen AMD EPYC processors running at frequencies up to 3.3 GHz, the Amazon EC2 C5a instances are the sixth instance family at AWS powered by AMD EPYC processors. By using the 2nd Gen AMD EPYC processor, the C5a instances deliver leadership x86 price-performance for a broad set of compute-intensive workloads, including batch processing, distributed analytics, data transformations, log analytics, and web applications.
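For readers who want to try a C5a instance, a minimal boto3 sketch is shown below. The AMI ID and key pair name are placeholders you would substitute with your own, the region is one of those listed above, and the instance size is just an example.

```python
# Minimal sketch: launch an AMD EPYC-powered C5a instance with boto3.
# The AMI ID and key name below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # one of the regions listed above

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",    # substitute a valid AMI for your region
    InstanceType="c5a.xlarge",          # 2nd Gen EPYC-backed size (example only)
    KeyName="my-key-pair",              # substitute your own key pair
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```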