News Posts matching #EPYC


Latest TOP500 List Highlights World's Fastest and Most Energy Efficient Supercomputers are Powered by AMD

Today, AMD (NASDAQ: AMD) showcased its high performance computing (HPC) leadership at ISC High Performance 2023 and celebrated, along with key partners, its first year of breaking the exascale barrier. AMD EPYC processors and AMD Instinct accelerators continue to be the solutions of choice behind many of the most innovative, green and powerful supercomputers in the world, powering 121 supercomputers on the latest TOP500 list.

"AMD's mission in high-performance computing is to enable our customers to tackle the world's most important challenges," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "Our industry partners and the global HPC community continue to leverage the performance and efficiency of AMD EPYC processors and Instinct accelerators to advance their groundbreaking work and scientific discoveries."

Supermicro Launches Industry's First NVIDIA HGX H100 8-GPU and 4-GPU Servers with Liquid Cooling

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to expand its data center offerings with liquid-cooled NVIDIA HGX H100 rack-scale solutions. Advanced liquid cooling technologies developed entirely by Supermicro reduce the lead time for a complete installation, increase performance, and lower operating expenses while significantly reducing the PUE of data centers. Power savings for a data center using Supermicro liquid cooling solutions are estimated at 40% compared to an air-cooled data center. In addition, a reduction of up to 86% in direct cooling costs compared to existing data centers may be realized.
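PUE (power usage effectiveness) is the ratio of total facility power to IT equipment power, so cooling improvements show up in it directly. A minimal sketch of that arithmetic, using illustrative PUE values of our own choosing rather than Supermicro's figures:

```python
# PUE = total facility power / IT equipment power.
# The PUE values below are illustrative assumptions, not Supermicro data.
def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total power drawn by the facility for a given IT load."""
    return it_load_kw * pue

it_load = 1000.0                           # 1 MW of servers
air = facility_power_kw(it_load, 1.6)      # assumed air-cooled PUE
liquid = facility_power_kw(it_load, 1.1)   # assumed liquid-cooled PUE

# Fraction of the non-IT (cooling/overhead) power eliminated:
overhead_cut = 1 - (liquid - it_load) / (air - it_load)
```

With these assumed PUEs the cooling overhead drops by about 83%, which illustrates how a very large "direct cooling cost" reduction can coexist with a smaller total-facility-power saving.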

"Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centers worldwide," said Charles Liang, president and CEO of Supermicro. "Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centers. With the amount of power required to enable today's rapidly evolving large-scale AI models, optimizing TCO and the Total Cost to Environment (TCE) is crucial to data center operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack-scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployment, allowing us to meet our customers' requirements with a short lead time."

AMD EPYC 8004 Data Center "Siena" CPUs Certified for General SATA and PCI Support

Keen-eyed hardware tipster momomo_us this week spotted that an upcoming AMD data center "Siena Dense" CPU has received general verification for SATA and PCI support, courtesy of the Serial ATA International Organization (SATA-IO). The information was uploaded to SATA-IO's online database on April 6 of this year under the heading "AMD EPYC 8004 Series Processors." As covered by TPU midway through this month, the family of enterprise-grade processors, bearing the codename Siena, is expected to be an entry-level alternative to the EPYC Genoa-X range, set for launch later in 2023.

The EPYC Siena series is reported to arrive with a new socket type - SP6 (LGA 4844) - which is said to be similar in size to the older Socket SP3. The upcoming large "Genoa-X" and "Bergamo" processors will sit in the already existing Socket SP5 (LGA 6096) - 2022's EPYC Genoa lineup makes use of it already. AMD has not made its SP6 socket official to the public, but industry figures have been informed that it can run up to 64 "Zen 4" cores. This new standard has been designed with more power efficient tasks in mind - targeting intelligent edge and telecommunication sectors. The smaller SP6 socket will play host to CPUs optimized for as low as 70 W operation, with hungrier variants accommodated up to 225 W. This single platform solution is said to offer 6-channel memory, 96 PCIe Gen 5.0 lanes, 48 lanes for CXL V1.1+, and 8 PCIe Gen 3.0 lanes.

Ericsson strikes Cloud RAN agreement with AMD

Ericsson is boosting its Open RAN and Cloud RAN ecosystem commitment through an agreement with US-based global ICT industry leader AMD. The agreement - intended to strengthen the Open RAN ecosystem and vendor-agnostic Cloud RAN environment - aims to offer communications service providers (CSPs) a combination of high performance and additional flexibility for open architecture offerings.

The Ericsson-AMD collaboration will see additional processing technologies in the Ericsson Cloud RAN offering. The expanded offering aims to enhance the performance of Cloud RAN and secure high-capacity solutions. The collaboration will enable joint exploration of AMD EPYC processors and T2 Telco accelerator for utilization in Cloud RAN solutions, while also investigating future platform generations of these technologies.

Gigabyte Extends Its Leading GPU Portfolio of Servers

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a lineup of powerful GPU-centric servers with the latest AMD and Intel CPUs, including NVIDIA HGX H100 servers with both 4-GPU and 8-GPU modules. With growing interest in HPC and AI applications, specifically generative AI (GAI), this breed of server relies heavily on GPU resources to tackle compute-heavy workloads that handle large amounts of data. With the advent of OpenAI's ChatGPT and other AI chatbots, large GPU clusters are being deployed with system-level optimization to train large language models (LLMs). These LLMs can be processed by GIGABYTE's new design-optimized systems that offer a high level of customization based on users' workloads and requirements.

The GIGABYTE G-series servers are built first and foremost to support dense GPU compute and the latest PCIe technology. Starting with the 2U servers, the new G293 servers can support up to 8 dual-slot GPUs or 16 single-slot GPUs, depending on the server model. For the ultimate in CPU and GPU performance, the 4U G493 servers offer plenty of networking options and storage configurations alongside support for eight (Gen 5 x16) GPUs. And for the highest level of GPU compute for HPC and AI, the G393 & G593 series support NVIDIA H100 Tensor Core GPUs. All these new dual-socket servers are designed for either 4th Gen AMD EPYC processors or 4th Gen Intel Xeon Scalable processors.

AMD Joins AWS ISV Accelerate Program

AMD announced it has joined the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program, a co-sell program for AWS Partners - like AMD - who provide integrated solutions on AWS. The program helps AWS Partners drive new business by directly connecting participating ISVs with the AWS Sales organization.

Through the AWS ISV Accelerate Program, AMD will receive focused co-selling support from AWS, including access to further sales enablement resources, reduced AWS Marketplace listing fees, and incentives for AWS Sales teams. The program also gives participating ISVs access to millions of active AWS customers globally.

AMD 96-Core EPYC 9684X Zen 4 Genoa-X CPU Shows Up for Sale in China

The second-hand market in China is always full of gems, but we never expected to see an unreleased 5 nm 96-core EPYC 9684X Genoa-X CPU with 1152 MB of L3 cache. According to the seller, the CPU is "almost new" and in working condition.

In case you missed it earlier, AMD is working on 5 nm Genoa-X EPYC CPUs which will feature up to 96 Zen 4 cores in 5 nm with over 1 GB of L3 cache per socket. These are scheduled to release this year, optimized for technical computing and databases. AMD is also working on Siena CPUs, which should also come this year, featuring up to 64 Zen 4 cores with optimized performance-per-watt, meant for intelligent edge and telco markets.

MediaWorkstation Packs 192 Cores and 3TB DDR5 Into Their Updated a-X2P Luggable

MediaWorkstation first announced the a-X2P mobile workstation back in 2020, turning heads with its impressive dual AMD EPYC "Rome" processors that packed 128 "Zen 2" cores into a transportable package nearly small enough to take as carry-on luggage on a flight. The a-X2P chassis hosts an "EATX" server motherboard, seven full-height expansion slots, five 5.25-inch bays, and a slot-load optical drive within a 24-inch chassis that features an integrated LCD display and fold-out mechanical keyboard. The a-X2P also supports connecting up to six 24-inch displays in total, all mounted to the chassis and each at up to 4K resolution, for a configuration which the company says is "ideal for production, live broadcast, and monitoring." This incredible amount of expansion brings the weight of the a-X2P up to 55 lbs (~25 kg).

This most recent update to the a-X2P changes nothing about the exterior feature set, but instead brings the internals to the modern computing age with AMD's EPYC 9004 "Genoa" processors, with support for up to 3 TB of DDR5-4800 in a 12-channel configuration. MediaWorkstation does not specify which EPYC SKUs it employs, but the highest configuration of 192 total cores leaves little guesswork, as only the EPYC 9654(P) at the top of AMD's lineup provides 96 cores, at a whopping $11,805 each. An interesting note on the EPYC 9654 variants is that their configurable TDP officially bottoms out at 320 W, above the 300 W maximum cTDP the a-X2P advertises. MediaWorkstation also does not specify what kind of power supply it has sourced for this behemoth, but it's a safe bet that it exceeds 1.5 kW. Don't expect to power this monster from batteries for any usable amount of time.
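The headline numbers fall out of a dual-socket EPYC 9654 configuration; a quick sketch (the 128 GB-per-DIMM figure is our assumption for how 3 TB is reached across 24 channels):

```python
# Sketch of the a-X2P's headline figures from a dual EPYC 9654 build.
sockets = 2
cores_per_cpu = 96          # EPYC 9654(P)
channels_per_cpu = 12       # 12-channel DDR5 per socket
dimm_gb = 128               # assumed: one 128 GB DDR5-4800 RDIMM per channel
list_price_usd = 11_805     # per-CPU price quoted above

total_cores = sockets * cores_per_cpu                          # 192 cores
total_mem_tb = sockets * channels_per_cpu * dimm_gb / 1024     # 3.0 TB
cpu_cost_usd = sockets * list_price_usd                        # CPUs alone
```

The CPUs alone would account for over $23,000 of the system's price at list.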

AMD Speeds Up Development of "Zen 5" to Thwart Intel Xeon "Emerald Rapids"?

In no mood to cede its market-share growth to Intel, AMD has reportedly decided to accelerate the development of its next-generation "Zen 5" microarchitecture for debut within 2023. In its mid-2022 presentations, AMD had publicly given "Zen 5" a 2024 release date. This comes from reading between the lines of a recent GIGABYTE press release announcing server platforms powered by relatively low-cost Ryzen desktop processors. The specific sentence from that release reads: "The next generation of AMD Ryzen desktop processors that will come out later this year will also be supported on this AM5 platform, so customers who purchase these servers today have the opportunity to upgrade to the Ryzen 7000 series successor."

While the GIGABYTE press release speaks of a next-generation Ryzen desktop processor, it stands to reason that it is referencing an early release of "Zen 5," and since AMD shares CPU complex dies (CCDs) between its Ryzen client and EPYC server processors, the company is looking at a two-pronged upgrade to its processor lineup: its next-generation EPYC "Turin" processors competing with Xeon Scalable "Emerald Rapids," and Ryzen "Granite Ridge" desktop processors taking on Intel's Core "Raptor Lake Refresh" and "Meteor Lake-S" desktop processors. It is rumored that "Zen 5" is being designed for the TSMC 3 nm node, and could see an increase in CPU core count per CCD, up from the present 8. The TSMC 3 nm node enters commercial mass production in the first half of 2023 as N3, with a refined N3E node slated for the second half of the year.

AMD Hybrid Phoenix APU Comes With Performance and Efficiency Cores

According to the latest leak, AMD's upcoming Phoenix accelerated processing units (APUs) could feature a hybrid design, featuring Performance and Efficiency cores. While there are no precise details, the latest AMD processor programming guide, leaked online, clearly marks these as two types of cores, most likely standard Zen 4 and energy-efficient Zen 4c cores.

These two sets of cores will have different feature sets, and the latest document gives software designers guidelines for handling them. Such a hybrid CPU design, similar to Arm's big.LITTLE architecture, will allow AMD to be more competitive with Intel's similar P-core and E-core design, achieving certain performance levels while maintaining power efficiency.

TYAN to Showcase Cloud Platforms for Data Centers at CloudFest 2023

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, will showcase its latest cloud server platforms powered by AMD EPYC 9004 Series processors and 4th Gen Intel Xeon Scalable processors for next-generation data centers at CloudFest 2023, Booth #H12 in Europa-Park from March 21-23.

"With the exponential advancement of technologies like AI and Machine Learning, data centers require robust hardware and infrastructure to handle complex computations while running AI workloads and processing big data," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure BU. "TYAN's cloud server platforms with storage performance and computing capability can support the ever-increasing demand for computational power and data processing."

AMD EPYC Genoa-X Processor Spotted with 1248 MB of Total Cache

AMD's EPYC lineup already features the new Zen 4 core, designed for better performance and efficiency. However, since the release of EPYC Milan-X processors brought 3D V-Cache to the server lineup, we have wondered whether AMD would continue to make such SKUs in upcoming generations. According to a report from Wccftech, a leaked table of specifications shows what some seemingly top-end Genoa-X SKUs will look like. The two SKUs listed are the "100-000000892-04" coded engineering sample and the "100-000000892-06" coded retail sample. With support for the same SP5 platform, these CPUs should integrate easily with existing OEM offerings.

As far as specifications go, this processor features 384 MB of L3 cache from the CCDs, 768 MB of L3 cache from the 3D V-Cache stacks, and 96 MB of L2 cache, for a total of 1248 MB of usable cache. Another 3 MB of L1 cache is dedicated to instructions and data. This is roughly 2.6 times the cache of the regular Genoa design, and 56% more than Milan-X. With a TDP of up to 400 W, configurable down to 320 W, this CPU can boost up to 3.7 GHz. AMD EPYC Genoa-X CPUs are expected to hit the shelves in the middle of 2023.
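A short sketch checks the quoted totals; the baseline Genoa and Milan-X cache figures are our assumptions from publicly listed specs, and the "260%" comparison reads as 2.6× the total cache:

```python
# Verify the Genoa-X cache totals and the generational ratios quoted above.
l3_ccd_mb = 384       # native L3 across the CCDs
l3_vcache_mb = 768    # stacked 3D V-Cache
l2_mb = 96            # 1 MB L2 per core x 96 cores
genoa_x_mb = l3_ccd_mb + l3_vcache_mb + l2_mb   # total usable cache

# Assumed baselines (from public specs, not from this article):
genoa_mb = 384 + 96       # regular Genoa: same L3 + L2, no V-Cache
milan_x_mb = 768 + 32     # Milan-X: 768 MB L3 incl. V-Cache + 32 MB L2

ratio_vs_genoa = genoa_x_mb / genoa_mb            # ~2.6x regular Genoa
extra_vs_milan_x = genoa_x_mb / milan_x_mb - 1    # ~56% more than Milan-X
```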

Supermicro Expands Storage Solutions Portfolio for Intensive I/O Workloads with Industry Standard Based All-Flash Servers Utilizing EDSFF E3.S and E1.S

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing the latest addition to its revolutionary ultra-high-performance, high-density petascale-class all-flash NVMe server family. Supermicro systems in this high-performance storage product family will support the next-generation EDSFF form factors, including E3.S and E1.S devices, in configurations that accommodate 16 and 32 high-performance PCIe Gen 5 NVMe drive bays.

The initial offering of the updated product line will support up to one-half of a petabyte of storage space in a 1U 16-bay rackmount system, followed by a full petabyte of storage space in a 2U 32-bay rackmount system, for both Intel and AMD PCIe Gen 5 platforms. All of the Supermicro systems that support either the E1.S or E3.S form factors enable customers to realize these benefits in various application-optimized servers.

AIC Collaborates with AMD to Introduce Its New Edge Server Powered By 4th Gen AMD EPYC Embedded Processors

AIC today announced that its EB202-CP is ready to support the newly launched 4th Gen AMD EPYC Embedded 9004 processors. By leveraging the five-year product longevity of AMD EPYC Embedded processors, the EB202-CP provides customers with stable and long-term support. AIC and AMD will join forces to showcase the EB202-CP at Embedded World, AMD stand No. 2-411, from March 14-16, 2023 in Nuremberg, Germany.

The AIC EB202-CP, a 2U rackmount server designed for AI and edge appliances, is powered by the newly released 4th Gen AMD EPYC Embedded processors. Featuring the world's highest-performing x86 processor cores and PCIe 5.0 readiness, the 4th Gen AMD EPYC Embedded processors enable low TCO and deliver leadership energy efficiency as well as state-of-the-art security, optimized for workloads across enterprise and edge. The EB202-CP, with a depth of 22 inches, supports eight front-serviceable, hot-swappable E1.S/E3.S SSDs and four U.2 SSDs. By leveraging the features of the 4th Gen AMD EPYC Embedded processors, the EB202-CP is well suited for broadcasting, edge, and AI applications that require greater processing performance in an efficient, space-saving format.

AMD Brings 4th Gen AMD EPYC Processors to Embedded Systems

AMD today announced it is bringing world-class performance and energy efficiency to embedded systems with AMD EPYC Embedded 9004 Series processors. The new 4th generation EPYC Embedded processors powered by "Zen 4" architecture provide technology and features for embedded networking, security/firewall and storage systems in cloud and enterprise computing as well as industrial edge servers for the factory floor.

Built on the "Zen 4" 5 nm core, the processors combine speed and performance while helping reduce both overall system energy costs and TCO. The series comprises 10 processor models with options ranging from 16 to 96 cores and thermal design power (TDP) profiles ranging from 200 W to 400 W. The performance and power scalability afforded by AMD EPYC Embedded 9004 Series processors make them an ideal fit for embedded system OEMs expanding their product portfolios across a range of performance and pricing options. The AMD EPYC Embedded 9004 Series processors also include enhanced security features to help minimize threats and maintain a secure compute environment from power-on to run time, making them well suited for applications with enterprise-class performance and security needs.


Data Center CPU Landscape Allows Ampere Computing to Gain Traction

Once upon a time, the data center market was a duopoly of x86-64 makers AMD and Intel. In recent years, however, companies have started developing custom Arm-based processors that handle equally complex workloads within smaller power envelopes, and do it more efficiently. The latest data from research firm Counterpoint highlights a significant new player in the data center world: Ampere Computing. The data center revenue share report breaks out Intel and AMD x86-64 revenue alongside AWS and Ampere Arm CPU revenue. For the first time, a third-party company, Ampere Computing, managed to capture as much as 1.54% of the entire data center market's revenue in 2022. Because its CPUs ship in off-the-shelf servers from OEMs, enterprises and cloud providers can easily integrate Ampere Altra processors.

Intel, still the most significant player, held a 70.77% share of overall revenue; however, that is a drop from 2021, when it commanded an 80.71% revenue share of the data center market, and represents a 16% year-over-year revenue decline. This reduction is not due to low demand for server processors, as the global data center CPU market's revenue registered only a 4.4% YoY decline in 2022, but rather to high demand for AMD EPYC solutions: team red grabbed 19.84% of 2022 revenue, a 62% YoY revenue growth from its 11.74% share in 2021. Slowly but surely, AMD is eating Intel's lunch. Another revenue source is Amazon Web Services (AWS), which the company fills with its Arm-based Graviton CPU offerings. AWS Graviton CPUs accounted for 3.16% of market revenue, up 74% from 1.82% in 2021.
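The revenue-growth figures follow from the share shifts once the whole market's 4.4% decline is factored in; a minimal sketch of that relationship:

```python
# Revenue YoY change implied by a revenue-share change, given the
# total market's own YoY change (-4.4% for data center CPUs in 2022).
def revenue_yoy(share_new: float, share_old: float,
                market_yoy: float = -0.044) -> float:
    """Relative revenue change when share moves and the market itself moves."""
    return (share_new / share_old) * (1 + market_yoy) - 1

intel_yoy = revenue_yoy(0.7077, 0.8071)   # share 80.71% -> 70.77%
amd_yoy = revenue_yoy(0.1984, 0.1174)     # share 11.74% -> 19.84%
```

Plugging in the quoted shares reproduces the roughly -16% figure for Intel and +62% figure for AMD.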

AMD Expected to Occupy Over 20% of Server CPU Market and Arm 8% in 2023

AMD and Arm have been gaining on Intel in the server CPU market in the past few years, and AMD's share gains were especially large in 2022, as data center operators and server brands began finding that solutions from the number-two maker were growing superior to those of the long-time leader. According to Frank Kung, a DIGITIMES Research analyst focusing primarily on the server industry, AMD's share will stand well above 20% in 2023, while Arm will reach 8%.

Price is one of the three major drivers that led data center operators and server brands to switch to AMD. Comparing server CPUs from AMD and Intel with similar core counts, clock speeds, and hardware specifications, most of the former's products are priced at least 30% lower than the latter's, and the difference can exceed 40%, Kung said.

Atos to Build Max Planck Society's new BullSequana XH3000-based Supercomputer, Powered by AMD MI300 APU

Atos today announces a contract to build and install a new high-performance computer for the Max Planck Society, a world-leading science and technology research organization. The new system will be based on Atos' latest BullSequana XH3000 platform, which is powered by AMD EPYC CPUs and Instinct accelerators. In its final configuration, the application performance will be three times higher than the current "Cobra" system, which is also based on Atos technologies.

The new supercomputer, with a total order value of over 20 million euros, will be operated by the Max Planck Computing and Data Facility (MPCDF) in Garching near Munich and will provide high-performance computing (HPC) capacity for many institutes of the Max Planck Society. Particularly demanding scientific projects, such as those in astrophysics, life science research, materials research, plasma physics, and AI will benefit from the high-performance capabilities of the new system.

Intel LGA-7529 Socket for "Sierra Forest" Xeon Processors Pictured

Intel's upcoming LGA-7529 socket, designed for next-generation Xeon processors, has been pictured, thanks to Yuuki_Ans and Hassan Mujtaba. The latest photos show the massive socket with an astonishing 7,529 pins in a single package. Made for Intel's upcoming "Birch Stream" platform, this socket will host Intel's next-generation "Sierra Forest" Xeon processors. With Sierra Forest representing a new way of thinking about Xeon processors, it also requires a special socket. Built on the Intel 3 manufacturing process, these Xeon processors use only E-cores in their design, as a response to AMD's EPYC Bergamo with Zen 4c.

The Intel Xeon roadmap will split in 2024, where Sierra Forest will populate dense and efficient cloud computing with E-cores, while its Granite Rapids sibling will power high-performance computing using P-cores. This interesting split will be followed by the new LGA-7529 socket pictured below, which is a step up from Intel's current LGA-4677 socket with 4677 pins used for Sapphire Rapids. With higher core densities and performance targets, the additional pins are likely to be mostly power/ground pins, while the smaller portion is picking up the additional I/O of the processor.

20:20 UTC: Updated with motherboard picture of dual-socket LGA-7529 system, thanks to findings of @9550pro lurking in the Chinese forums.

ORNL's Exaflop Machine Frontier Keeps Top Spot, New Competitor Leonardo Breaks the Top10 List

The 60th edition of the TOP500 reveals that the Frontier system is still the only true exascale machine on the list.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier's near-tripling of the HPL score of the second-place finisher is still a major victory for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures performance in mixed-precision calculation. Frontier is based on the HPE Cray EX235a architecture and relies on AMD EPYC 64C 2 GHz processors. The system has 8,730,112 cores and a power efficiency rating of 52.23 gigaflops/watt, and it uses HPE's Slingshot interconnect for data transfer.
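The quoted efficiency figure implies the machine's power draw during the HPL run; a quick derivation (the division is ours, the inputs are the TOP500 numbers above):

```python
# Frontier's implied power draw and per-core throughput during HPL.
hpl_flops = 1.102e18       # 1.102 EFlop/s
gflops_per_watt = 52.23    # quoted power efficiency
cores = 8_730_112

power_mw = hpl_flops / (gflops_per_watt * 1e9) / 1e6   # megawatts
gflops_per_core = hpl_flops / cores / 1e9              # GFLOP/s per core
```

That works out to roughly 21 MW of power during the benchmark and on the order of 126 GFLOP/s per core (the core count includes GPU compute units, which is why the per-core figure is so high).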

Cerebras Unveils Andromeda, a 13.5 Million Core AI Supercomputer that Delivers Near-Perfect Linear Scaling for Large Language Models

Cerebras Systems, the pioneer in accelerating artificial intelligence (AI) compute, today unveiled Andromeda, a 13.5 million core AI supercomputer, now available and being used for commercial and academic work. Built with a cluster of 16 Cerebras CS-2 systems and leveraging Cerebras MemoryX and SwarmX technologies, Andromeda delivers more than 1 Exaflop of AI compute and 120 Petaflops of dense compute at 16-bit half precision. It is the only AI supercomputer to ever demonstrate near-perfect linear scaling on large language model workloads relying on simple data parallelism alone.

With more than 13.5 million AI-optimized compute cores fed by 18,176 3rd Gen AMD EPYC processors, Andromeda features more cores than 1,953 NVIDIA A100 GPUs, and 1.6 times as many cores as the largest supercomputer in the world, Frontier, which has 8.7 million cores. Unlike any known GPU-based cluster, Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J, and GPT-NeoX.
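The core-count comparisons check out with simple division; a sketch (the ~850,000-core figure for a Wafer-Scale Engine 2 is an outside reference, not from this post):

```python
# Cross-check Andromeda's core-count claims.
andromeda_cores = 13_500_000
frontier_cores = 8_700_000
cs2_count = 16

ratio_vs_frontier = andromeda_cores / frontier_cores   # ~1.55, quoted as 1.6x
cores_per_cs2 = andromeda_cores // cs2_count           # cores per CS-2 system
```

The per-system count lands near 844,000, consistent with each CS-2's WSE-2 chip exposing on the order of 850,000 usable cores.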

AMD Explains the Economics Behind Chiplets for GPUs

AMD, in its technical presentation for the new Radeon RX 7900 series "Navi 31" GPU, gave us an elaborate explanation on why it had to take the chiplets route for high-end GPUs, devices that are far more complex than CPUs. The company also enlightened us on what sets chiplet-based packages apart from classic multi-chip modules (MCMs). An MCM is a package that consists of multiple independent devices sharing a fiberglass substrate.

An example of an MCM would be a mobile Intel Core processor, in which the CPU die and the PCH die share a substrate. Here, the CPU and the PCH are independent pieces of silicon that could otherwise exist in their own packages (as they do on the desktop platform), but have been paired on a single substrate to minimize PCB footprint, which is precious on a mobile platform. A chiplet-based device is one whose substrate carries multiple dies that cannot independently exist in their own packages without an impact on inter-die bandwidth or latency. They are essentially what would have been components of a monolithic die, disaggregated into separate dies built on different semiconductor foundry nodes, with a purely cost-driven motive.

AMD 4th Generation EPYC "Genoa" Processors Benchmarked

Yesterday, AMD announced the latest addition to its data center family of processors, EPYC Genoa. Named the 4th generation EPYC processors, they feature a Zen 4 design and bring additional I/O connectivity like PCIe 5.0, DDR5, and CXL support. To disrupt cloud, enterprise, and HPC offerings, AMD is manufacturing SKUs with up to 96 cores and 192 threads, up from the previous generation's 64C/128T designs. Today, we are learning more about the performance and power characteristics of the 4th generation AMD EPYC Genoa 9654, 9554, and 9374F SKUs from third-party sources rather than official AMD presentations. Tom's Hardware published a heap of benchmarks spanning rendering, compilation, encoding, parallel computing, molecular dynamics, and much more.

The comparison tests include the AMD EPYC Milan 7763 and 75F3 and the Intel Xeon Platinum 8380, Intel's current top-end offering until Sapphire Rapids arrives. Comparing 3rd-gen 64C/128T EPYC SKUs with their 4th-gen counterparts, the new generation brings about a 30% performance increase in compression and parallel compute benchmarks. Scaling up to the 96C/192T SKU widens the gap, giving AMD a clear performance lead in the server marketplace; Tom's Hardware's article has the full set of results. As for the Intel comparison, AMD leads the pack with a more performant single- and multi-threaded design. Beating Sapphire Rapids to market is a significant win for team red, though we are still waiting to see how the 4th generation Xeon stacks up against Genoa.

SK hynix DDR5 & CXL Solutions Validated with AMD EPYC 9004 Series Processors

SK hynix announced that its DRAM and CXL solutions have been validated with the new AMD EPYC 9004 Series processors, which were unveiled during the company's "together we advance_data centers" event on November 10. SK hynix has worked closely with AMD to provide fully compatible memory solutions for the 4th Gen AMD EPYC processors.

4th Gen AMD EPYC processors are built on the all-new SP5 socket and offer innovative technologies and features, including support for advanced DDR5 and CXL 1.1+ memory expansion. SK hynix's 1y nm and 1a nm 16 Gb DDR5 and 1a nm 24 Gb DDR5 DRAM support 4800 Mbps on 4th Gen AMD EPYC processors, delivering up to 50% more memory bandwidth than DDR4 products. SK hynix also provides a CXL memory device, a 96 GB product composed of 24 Gb DDR5 DRAMs built on the 1a nm node. The company expects high customer satisfaction with this product, as it allows bandwidth and capacity to be expanded flexibly and cost-efficiently.
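The 50% bandwidth claim follows from the transfer rates alone; a sketch (the DDR4-3200 baseline is our assumption for the comparison):

```python
# Per-channel memory bandwidth from transfer rate, assuming the
# standard 64-bit (8-byte) data bus per channel.
def channel_gb_s(mt_per_s: int, bus_bytes: int = 8) -> float:
    """Peak channel bandwidth in GB/s for a given transfer rate in MT/s."""
    return mt_per_s * bus_bytes / 1000

ddr5 = channel_gb_s(4800)   # DDR5-4800: 38.4 GB/s per channel
ddr4 = channel_gb_s(3200)   # assumed DDR4-3200 baseline: 25.6 GB/s
uplift = ddr5 / ddr4 - 1    # fractional bandwidth gain
```

Against a DDR4-3200 baseline the uplift is exactly 50%, matching the quoted figure.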