News Posts matching #EPYC


Data Center CPU Landscape Allows Ampere Computing to Gain Traction

Once upon a time, the data center market was a duopoly of x86-64 makers AMD and Intel. In recent years, however, companies have started developing custom Arm-based processors that handle equally complex workloads within smaller power envelopes and do so more efficiently. According to research firm Counterpoint, the latest data highlights a significant new player in the data center world: Ampere Computing. The latest data center revenue share report breaks out Intel and AMD x86-64 revenue alongside AWS and Ampere Arm CPU revenue. For the first time, a third-party company, Ampere Computing, managed to capture as much as 1.54% of the entire data center market's revenue in 2022. Thanks to having CPUs in off-the-shelf servers from OEMs, enterprises and cloud providers are able to easily integrate Ampere Altra processors.

Intel, still the most significant player, held a 70.77% share of the overall revenue; however, that is a drop from 2021, when it commanded an 80.71% revenue share of the data center market, and it represents a 16% year-over-year revenue decline. This reduction is not due to low demand for server processors, as the global data center CPU market's revenue registered only a 4.4% YoY decline in 2022, but to the high demand for AMD EPYC solutions: team red grabbed 19.84% of 2022 revenue, a 62% YoY revenue gain from its 11.74% share in 2021. Slowly but surely, AMD is eating Intel's lunch. Another revenue source comes from Amazon Web Services (AWS), which the company fills with its Arm-based Graviton CPU offerings. AWS Graviton CPUs accounted for 3.16% of the market revenue, up 74% from 1.82% in 2021.
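The growth rates above follow from the share figures once the overall market's 4.4% decline is factored in: a vendor's revenue change is its share ratio scaled by the whole market's change. A quick sanity check on the numbers in this paragraph:

```python
# Revenue change implied by data-center CPU market-share figures.
# A vendor's YoY revenue growth = (new_share / old_share) * (1 + market_growth) - 1,
# where market_growth is the whole market's change (-4.4% in 2022).

def implied_revenue_growth(old_share, new_share, market_growth=-0.044):
    """Return the vendor's YoY revenue change as a fraction."""
    return (new_share / old_share) * (1 + market_growth) - 1

# Intel: 80.71% -> 70.77% share, in a market that shrank 4.4%
intel = implied_revenue_growth(80.71, 70.77)   # roughly -16%
# AMD: 11.74% -> 19.84% share
amd = implied_revenue_growth(11.74, 19.84)     # roughly +62%

print(f"Intel: {intel:+.0%}, AMD: {amd:+.0%}")
```

This reconciles the seemingly odd pairing of a roughly 10-point share drop with a 16% revenue decline: the share loss and the market contraction compound.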

AMD Expected to Occupy Over 20% of Server CPU Market and Arm 8% in 2023

AMD and Arm have been gaining ground on Intel in the server CPU market over the past few years. The share AMD won over was especially large in 2022, as datacenter operators and server brands began finding the number-two maker's solutions superior to those of the long-time leader, according to Frank Kung, a DIGITIMES Research analyst focusing primarily on the server industry. He anticipates that AMD's share will stand well above 20% in 2023, while Arm will reach 8%.

Price is one of the three major drivers that led datacenter operators and server brands to switch to AMD. Comparing server CPUs from AMD and Intel with similar core counts, clock speeds, and hardware specifications, most of the former's products carry price tags at least 30% lower than the latter's, and the difference can exceed 40%, Kung said.

Atos to Build Max Planck Society's new BullSequana XH3000-based Supercomputer, Powered by AMD MI300 APU

Atos today announces a contract to build and install a new high-performance computer for the Max Planck Society, a world-leading science and technology research organization. The new system will be based on Atos' latest BullSequana XH3000 platform, which is powered by AMD EPYC CPUs and Instinct accelerators. In its final configuration, the application performance will be three times higher than the current "Cobra" system, which is also based on Atos technologies.

The new supercomputer, with a total order value of over 20 million euros, will be operated by the Max Planck Computing and Data Facility (MPCDF) in Garching near Munich and will provide high-performance computing (HPC) capacity for many institutes of the Max Planck Society. Particularly demanding scientific projects, such as those in astrophysics, life science research, materials research, plasma physics, and AI will benefit from the high-performance capabilities of the new system.

Intel LGA-7529 Socket for "Sierra Forest" Xeon Processors Pictured

Intel's upcoming LGA-7529 socket, designed for next-generation Xeon processors, has been pictured, thanks to Yuuki_Ans and Hassan Mujtaba. The latest photos show the massive socket with an astonishing 7,529 pins. Made for Intel's upcoming "Birch Stream" platform, this socket will power Intel's next-generation "Sierra Forest" Xeon processors. With Sierra Forest representing a new way of thinking about Xeon processors, it also requires a special socket. Built on the Intel 3 manufacturing process, these Xeon processors use only E-cores in their design, responding to AMD's "Zen 4c"-based EPYC Bergamo.

The Intel Xeon roadmap splits in 2024: Sierra Forest will serve dense, efficient cloud computing with E-cores, while its sibling Granite Rapids will power high-performance computing with P-cores. This interesting split will be supported by the new LGA-7529 socket pictured below, a step up from Intel's current 4,677-pin LGA-4677 socket used for Sapphire Rapids. With higher core densities and performance targets, the additional pins are likely to be mostly power/ground pins, with a smaller portion picking up the processor's additional I/O.

20:20 UTC: Updated with motherboard picture of dual-socket LGA-7529 system, thanks to findings of @9550pro lurking in the Chinese forums.

ORNL's Exaflop Machine Frontier Keeps Top Spot, New Competitor Leonardo Breaks the Top10 List

The 60th edition of the TOP500 reveals that the Frontier system is still the only true exascale machine on the list.

With an HPL score of 1.102 EFlop/s, the Frontier machine at Oak Ridge National Laboratory (ORNL) did not improve upon the score it reached on the June 2022 list. That said, Frontier's near-tripling of the second-place system's HPL score is still a major victory for computer science. On top of that, Frontier demonstrated a score of 7.94 EFlop/s on the HPL-MxP benchmark, which measures mixed-precision performance. Frontier is based on the HPE Cray EX235a architecture and relies on 2 GHz 64-core AMD EPYC processors. The system has 8,730,112 cores and a power efficiency rating of 52.23 gigaflops/watt, and it uses HPE's Slingshot interconnect for data transfer.

Cerebras Unveils Andromeda, a 13.5 Million Core AI Supercomputer that Delivers Near-Perfect Linear Scaling for Large Language Models

Cerebras Systems, the pioneer in accelerating artificial intelligence (AI) compute, today unveiled Andromeda, a 13.5 million core AI supercomputer, now available and being used for commercial and academic work. Built with a cluster of 16 Cerebras CS-2 systems and leveraging Cerebras MemoryX and SwarmX technologies, Andromeda delivers more than 1 Exaflop of AI compute and 120 Petaflops of dense compute at 16-bit half precision. It is the only AI supercomputer to ever demonstrate near-perfect linear scaling on large language model workloads relying on simple data parallelism alone.

With more than 13.5 million AI-optimized compute cores and fed by 18,176 3rd Gen AMD EPYC processors, Andromeda features more cores than 1,953 Nvidia A100 GPUs and 1.6 times as many cores as the largest supercomputer in the world, Frontier, which has 8.7 million cores. Unlike any known GPU-based cluster, Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J and GPT-NeoX.

AMD Explains the Economics Behind Chiplets for GPUs

AMD, in its technical presentation for the new Radeon RX 7900 series "Navi 31" GPU, gave an elaborate explanation of why it had to take the chiplet route for high-end GPUs, devices that are far more complex than CPUs. The company also explained what sets chiplet-based packages apart from classic multi-chip modules (MCMs). An MCM is a package consisting of multiple independent devices sharing a fiberglass substrate.

An example of an MCM would be a mobile Intel Core processor, in which the CPU die and the PCH die share a substrate. Here, the CPU and the PCH are independent pieces of silicon that could otherwise exist on their own packages (as they do on the desktop platform), but have been paired on a single substrate to minimize PCB footprint, which is precious on a mobile platform. A chiplet-based device, by contrast, is a package whose dies cannot independently exist on their own packages without an impact on inter-die bandwidth or latency. They are essentially what would have been components of a monolithic die, disaggregated into separate dies built on different semiconductor foundry nodes, with a purely cost-driven motive.
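The cost motive can be illustrated with the textbook Poisson die-yield model: yield falls exponentially with die area, so several small dies cost less per working die than one large one. All numbers below (wafer cost, defect density, die areas) are illustrative assumptions, not AMD's actual figures.

```python
import math

# Toy die-cost sketch using the standard Poisson yield model.
# Every constant here is an assumption chosen for illustration only.

def good_die_cost(area_mm2, wafer_cost=10_000.0, defect_density=0.001,
                  wafer_area_mm2=70_685.0):
    """Cost per *working* die; yield drops exponentially with die area."""
    dies_per_wafer = wafer_area_mm2 / area_mm2          # ignores edge loss
    die_yield = math.exp(-defect_density * area_mm2)    # Poisson yield model
    return wafer_cost / (dies_per_wafer * die_yield)

monolithic = good_die_cost(600)       # one hypothetical 600 mm^2 die
chiplets = 8 * good_die_cost(75)      # the same silicon as eight 75 mm^2 dies
print(f"monolithic: ${monolithic:.0f}, chiplets: ${chiplets:.0f}")
```

In this toy model the chiplet total comes out noticeably cheaper for the same silicon area. Real designs also pay for packaging and an extra I/O die, which is part of why the approach pays off mainly on large, high-end products.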

AMD 4th Generation EPYC "Genoa" Processors Benchmarked

Yesterday, AMD announced the latest addition to its data center family of processors, EPYC "Genoa." Named the 4th generation of EPYC processors, they feature the "Zen 4" design and bring additional I/O connectivity such as PCIe 5.0, DDR5, and CXL support. To disrupt cloud, enterprise, and HPC offerings, AMD is manufacturing SKUs with up to 96 cores and 192 threads, up from the previous generation's 64C/128T designs. Today, we are learning more about the performance and power characteristics of the 4th generation AMD EPYC Genoa 9654, 9554, and 9374F SKUs from third-party sources rather than official AMD presentations. Tom's Hardware published a heap of benchmarks spanning rendering, compilation, encoding, parallel computing, molecular dynamics, and much more.

The comparison tests include the AMD EPYC Milan 7763 and 75F3 and the Intel Xeon Platinum 8380, Intel's current top-end offering until Sapphire Rapids arrives. Comparing 3rd-gen 64C/128T EPYC SKUs with their 4th-gen 64C/128T counterparts, the new generation brings about a 30% performance increase in compression and parallel compute benchmarks. Scaling to the 96C/192T SKU widens the gap, making AMD the clear performance leader in the server marketplace; Tom's Hardware's article has the full benchmark results. Against Intel's offerings, AMD leads the pack with a more performant single- and multi-threaded design. Of course, beating Sapphire Rapids to market is a significant win for team red, and we are still waiting to see how the 4th generation Xeon stacks up against Genoa.

SK hynix DDR5 & CXL Solutions Validated with AMD EPYC 9004 Series Processors

SK hynix announced that its DRAM and CXL solutions have been validated with the new AMD EPYC 9004 Series processors, which were unveiled during the company's "together we advance_data centers" event on November 10. SK hynix has worked closely with AMD to provide fully compatible memory solutions for the 4th Gen AMD EPYC processors.

4th Gen AMD EPYC processors are built on the all-new SP5 socket and offer innovative technologies and features, including support for advanced DDR5 and CXL 1.1+ memory expansion. SK hynix's 16 Gb DDR5 (on the 1y nm and 1a nm nodes) and 1a nm 24 Gb DDR5 DRAM support 4800 Mbps on 4th Gen AMD EPYC processors, delivering up to 50% more memory bandwidth than DDR4 products. SK hynix also provides a CXL memory device, a 96 GB product composed of 1a nm 24 Gb DDR5 DRAMs. The company expects strong customer satisfaction with this product, as it allows bandwidth and capacity to be expanded flexibly and cost-efficiently.
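The quoted "up to 50% more memory bandwidth" follows directly from the transfer rates: a standard 64-bit (8-byte) DIMM channel at DDR5-4800 versus DDR4-3200. A quick check:

```python
# Theoretical peak bandwidth of a standard 64-bit (8-byte) DIMM channel.
def channel_bandwidth_gbs(mtps, bus_bytes=8):
    """Peak bandwidth in GB/s from the transfer rate in MT/s."""
    return mtps * bus_bytes / 1000

ddr5 = channel_bandwidth_gbs(4800)  # 38.4 GB/s
ddr4 = channel_bandwidth_gbs(3200)  # 25.6 GB/s
print(ddr5, ddr4, ddr5 / ddr4)      # the ratio is 1.5, i.e. 50% more
```

This is the per-channel theoretical peak only; the platform total also scales with the number of memory channels per socket.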

AMD Launches 4th Gen EPYC "Genoa" Zen 4 Server Processors: 100% Performance Uplift for 50% More Cores

AMD, at a special media event titled "together we advance_data centers," formally launched its 4th generation EPYC "Genoa" server processors based on the "Zen 4" microarchitecture. These processors debut an all-new platform with modern I/O connectivity that includes PCI-Express Gen 5, CXL, and DDR5 memory. The processors come in CPU core counts of up to 96-core/192-thread. There are as many as 18 processor SKUs, differentiated not just by CPU core count, but also by how the cores are spread across the up to twelve "Zen 4" chiplets (CCDs). Each chiplet features up to 8 "Zen 4" CPU cores (depending on the model) and up to 32 MB of L3 cache, and is built on TSMC's 5 nm EUV process. The CCDs talk to a centralized server I/O die (sIOD), which is built on the 6 nm process.
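The headline figures multiply out directly from the chiplet layout described above (all values taken from this paragraph):

```python
# Top-end EPYC "Genoa" configuration, from AMD's launch figures.
CCDS = 12              # "Zen 4" chiplets per package
CORES_PER_CCD = 8
L3_PER_CCD_MB = 32
THREADS_PER_CORE = 2   # SMT

cores = CCDS * CORES_PER_CCD          # 96 cores
threads = cores * THREADS_PER_CORE    # 192 threads
l3_total_mb = CCDS * L3_PER_CCD_MB    # 384 MB of L3 across the package
print(cores, threads, l3_total_mb)
```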

The processors AMD is launching today are the EPYC "Genoa" series, targeting general-purpose servers, although they can be deployed in large cloud data centers, too. For large-scale cloud providers such as AWS, Azure, and Google Cloud, AMD is readying a different class of processor, codenamed "Bergamo," which it plans to launch later. In 2023, the company will launch the "Genoa-X" line of processors for technical-compute and HPC applications that benefit from large on-die caches, as these chips feature the 3D Vertical Cache technology. There will also be "Siena," a class of EPYC processors targeting the telecom and edge-computing markets, which could integrate more Xilinx IP.

ASUS Announces AMD EPYC 9004-Powered Rack Servers and Liquid-Cooling Solutions

ASUS, a leading provider of server systems, server motherboards and workstations, today announced new best-in-class server solutions powered by the latest AMD EPYC 9004 Series processors. ASUS also launched superior liquid-cooling solutions that dramatically improve the data-center power-usage effectiveness (PUE).

The breakthrough thermal design in this new generation delivers superior power and thermal capabilities to support class-leading features, including up to 400-watt CPUs, up to 350-watt GPUs, and 400 Gbps networking. All ASUS liquid-cooling solutions will be demonstrated in the ASUS booth (number 3816) at SC22 from November 14-17, 2022, at Kay Bailey Hutchison Convention Center in Dallas, Texas.

Hewlett Packard Enterprise Brings HPE Cray EX and HPE Cray XD Supercomputers to Enterprise Customers

Hewlett Packard Enterprise (NYSE: HPE) today announced it is making supercomputing accessible for more enterprises to harness insights, solve problems and innovate faster by delivering its world-leading, energy-efficient supercomputers in a smaller form factor and at a lower price point.

The expanded portfolio includes new HPE Cray EX and HPE Cray XD supercomputers, which are based on HPE's exascale innovation that delivers end-to-end, purpose-built technologies in compute, accelerated compute, interconnect, storage, software, and flexible power and cooling options. The supercomputers provide significant performance and AI-at-scale capabilities to tackle demanding, data-intensive workloads, speed up AI and machine learning initiatives, and accelerate innovation to deliver products and services to market faster.

Supermicro Opens Remote Online Access Program, JumpStart for the H13 Portfolio of Systems Based on the All-New 4th Gen AMD EPYC Processors

Supermicro, a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, announces its remote access program, Supermicro H13 JumpStart, for workload testing and application tuning on 4th Gen AMD EPYC processor-based systems. Developers and IT administrators in AI, Deep Learning, Manufacturing, Telco, Storage, and other industries will get immediate access to leading-edge technologies to accelerate their solution deployment on Supermicro's extensive portfolio of upcoming H13 systems. Customers will be able to test compatibility with previous generations of AMD EPYC processor-based systems and optimize their applications to take advantage of the DDR5 memory, PCI-E 5.0 storage, networking, accelerators, and CXL 1.1+ peripherals of the 4th Gen AMD EPYC processors.

"Supermicro's 4th Gen AMD EPYC processor JumpStart program is poised to give customers a market advantage over their competitors through quick validation of workload performance to accelerate data center deployments," said Charles Liang, president and CEO, Supermicro. "Supermicro's application-optimized servers incorporate the latest technologies to improve performance per watt, energy efficiency, and reduced TCO."

AMD Reports Third Quarter 2022 Financial Results

AMD (NASDAQ:AMD) today announced revenue for the third quarter of 2022 of $5.6 billion, gross margin of 42%, operating loss of $64 million, net income of $66 million and diluted earnings per share of $0.04. On a non-GAAP(*) basis, gross margin was 50%, operating income was $1.3 billion, net income was $1.1 billion and diluted earnings per share was $0.67.

"Third quarter results came in below our expectations due to the softening PC market and substantial inventory reduction actions across the PC supply chain," said AMD Chair and CEO Dr. Lisa Su. "Despite the challenging macro environment, we grew revenue 29% year-over-year driven by increased sales of our data center, embedded and game console products. We are confident that our leadership product portfolio, strong balance sheet, and ongoing growth opportunities in our data center and embedded businesses position us well to navigate the current market dynamics."

AMD Set to Unveil its Next Generation Server Processors on the 10th of November

AMD appears to like playing coy when it comes to new product announcements, or at least the reveal of upcoming product announcements. Just as with its November 3rd event, the company has put out a minuscule teaser on Twitter for its November 10th announcement of what it is simply calling "the unveiling of our next-gen server processors." The event will kick off at 10 am Pacific time, and there appears to be a live stream, as AMD is inviting people to watch the event online. It's highly likely that we're talking about new EPYC parts here, as the event is called "together we advance_data centers."

AMD Rolls Out GCC Enablement for "Zen 4" Processors with Zenver4 Target, Enables AVX-512 Instructions

AMD earlier this week released basic enablement patches for the GNU Compiler Collection (GCC), extending "Zen 4" microarchitecture awareness. The basic enablement patch for the new znver4 target is essentially similar to znver3, but adds support for the new AVX-512 instructions, namely AVX512F, AVX512DQ, AVX512IFMA, AVX512CD, AVX512BW, AVX512VL, AVX512BF16, AVX512VBMI, AVX512VBMI2, GFNI, AVX512VNNI, AVX512BITALG, and AVX512VPOPCNTDQ. Besides AVX-512, "Zen 4" is architecturally largely identical to its predecessor, so the enablement is rather basic. This should come just in time for software vendors to prepare for next-generation EPYC "Genoa" server processors, or even small/medium businesses building servers with Ryzen 7000-series processors.

48-Core Russian Baikal-S Processor Die Shots Appear

In December 2021, we covered the appearance of Russia's home-grown Baikal-S processor, which packs 48 cores based on the Arm Cortex-A75. Today, thanks to the famous chip photographer Fritzchens Fritz, we have the first die shots showing exactly how the Baikal-S SoC is structured internally and what it is made up of. Manufactured on TSMC's 16 nm process, the Baikal-S BE-S1000 features 48 Arm Cortex-A75 cores running at a 2.0 GHz base and a 2.5 GHz boost frequency. With a TDP of 120 Watts, the design seems efficient, and the Russian company promises performance comparable to Intel Skylake Xeons or Zen 1-based AMD EPYC processors. It also uses a home-grown RISC-V core for management and for controlling the secure boot sequence.

Below, you can see the die shots taken by Fritzchens Fritz, annotated by Twitter user Locuza, who marked up the entire SoC. Besides the core clusters, a slab of cache ties everything together, with six 72-bit DDR4-3200 PHYs and memory controllers surrounding it. This model features a pretty good selection of I/O for a server CPU: five PCIe 4.0 x16 (4x4) interfaces, three of which support CCIX 1.0. You can check out more pictures below and see the annotations for yourself.

Comino's High-quality Liquid Cooler Meets the Latest AMD Ryzen Threadripper PRO 5000 WX-Series

Comino, a leading manufacturer of multi-GPU workstations and professional liquid-cooling solutions, has confirmed that its Comino CPU WCB (AMD EPYC, Ryzen Threadripper, Ryzen Threadripper PRO) for Socket SP3 / TR4, Cu-Steel, is fully compatible with AMD Ryzen Threadripper PRO 5000WX-Series processors. Comino's thermal performance comparison between the 3995WX and 5995WX shows a better delta-T between the chip and the coolant.

On top of that, the waterblock can be used with different motherboards thanks to its unique design with interchangeable VRM cold-plates. See the compatibility list for more information. The Comino liquid-cooling solution can ensure smooth operation of power-hungry hardware, even when overclocked. The waterblocks are made purely of non-corrosive materials: copper, stainless steel, and plastic, ensuring low hardware temperatures even during 24/7 operation. Learn more at Comino's website.

AMD Collaborates with The Energy Sciences Network on Launch of its Next-Generation, High-Performance Network to Enhance Data-Intensive Science

Today AMD (NASDAQ: AMD) announced its collaboration with the Energy Sciences Network (ESnet) on the launch of ESnet6, the newest generation of the U.S. Department of Energy's (DOE's) high-performance network dedicated to science. AMD has worked closely with ESnet since 2018 to integrate powerful adaptive computing for the smart and programmable network nodes of ESnet6. ESnet6's extreme-scale packet monitoring system uses AMD Alveo U280 FPGA-based network-attached accelerator cards at the core network switching nodes. This will enable high-touch packet processing and help improve the accuracy of network monitoring and management to enhance performance. The programmable hardware allows new capabilities to be added for continuous innovation in the network.

In order to customize AMD Alveo U280 2x100 Gb/s accelerators as network interface cards (NICs) for ESnet6, the OpenNIC overlay, developed by AMD, was used to provide standard network-interface and host-attachment hardware, allowing novel and experimental networking functions to be implemented easily on the Alveo card. OpenNIC has since been open-sourced after being successfully used and evolved by ESnet, as well as various leading academic research groups. Also important for rapid innovation by non-hardware experts was the use of the AMD VitisNetP4 development tools for compiling the P4 packet processing language to FPGA hardware.

AMD Alveo U280 cards, using OpenNIC and VitisNetP4, are being deployed on every node of the ESnet6 network. The high-touch approach based on FPGA-accelerated processing allows every packet in the ESnet6 network to be monitored at extremely high transfer rates to enable deep insights into the behavior of the network, as well as helping to rapidly detect and correct problems and hot spots in the network. The Alveo U280 card with OpenNIC platform also supplies the adaptability to allow the continuous roll-out of new capabilities to the end user community as needs evolve over the lifetime of ESnet6.

AMD-Powered Frontier Supercomputer Faces Difficulties, Can't Operate a Day without Issues

When AMD announced that it would deliver the world's fastest supercomputer, Frontier, the company also took on the massive task of providing a machine capable of one ExaFLOP of sustained compute. While the system is finally up and running, making a machine of that size run properly is challenging. In the world of High-Performance Computing, getting the hardware is only a portion of running an HPC center. In an interview with InsideHPC, Justin Whitt, program director for the Oak Ridge Leadership Computing Facility (OLCF), provided insight into what it is like to run the world's fastest supercomputer and the kinds of issues it is facing.

The Frontier system is powered by AMD EPYC 7A53 "Trento" 64-core 2.0 GHz CPUs and Instinct MI250X GPUs. Interconnecting everything is the HPE (Cray) Slingshot 64-port switch, responsible for moving data in and out of the compute blades. The interview points out a rather interesting finding: the AMD Instinct MI250X GPUs and the Slingshot interconnect are among the sources of Frontier's hardware troubles, but scale is the bigger issue. "It's mostly issues of scale coupled with the breadth of applications, so the issues we're encountering mostly relate to running very, very large jobs using the entire system … and getting all the hardware to work in concert to do that," says Justin Whitt. Beyond the limits of scale: "The issues span lots of different categories, the GPUs are just one. A lot of challenges are focused around those, but that's not the majority of the challenges that we're seeing," he said. "It's a pretty good spread among common culprits of parts failures that have been a big part of it. I don't think that at this point that we have a lot of concern over the AMD products. We're dealing with a lot of the early-life kind of things we've seen with other machines that we've deployed, so it's nothing too out of the ordinary."

Intel Outs First Xeon Scalable "Sapphire Rapids" Benchmarks, On-package Accelerators Help Catch Up with AMD EPYC

Intel, on the second day of its Innovation event, turned attention to its next-generation Xeon Scalable "Sapphire Rapids" server processors and demonstrated on-package accelerators. These are fixed-function hardware components that accelerate specific kinds of popular server workloads (i.e., run them faster than a CPU core can). With these, Intel hopes to close the CPU core-count gap it has with AMD EPYC, as the upcoming "Zen 4" EPYC chips are expected to launch with up to 96 cores per socket in the conventional variant, and up to 128 cores per socket in the cloud-optimized variant.

Intel's on-package accelerators include AMX (advanced matrix extensions), which accelerate recommendation-engines, natural language processing (NLP), image-recognition, etc; DLB (dynamic load-balancing), which accelerates security-gateway and load-balancing; DSA (data-streaming accelerator), which speeds up the network stack, guest OS, and migration; IAA (in-memory analysis accelerator), which speeds up big-data (Apache Hadoop), IMDB, and warehousing applications; a feature-rich implementation of the AVX-512 instruction-set for a plethora of content-creation and scientific applications; and lastly, the QAT (QuickAssist Technology), with speed-ups for data compression, OpenSSL, nginx, IPsec, etc. Unlike "Ice Lake-SP," QAT is now implemented on the processor package instead of the PCH.

Intel Expects to Lose More Market Share, to Reconsider Exiting Other Businesses

During the Evercore ISI TMT conference, Intel announced that it would continue to lose market share, with a possible bounce-back in the coming years. According to the latest report, Intel CEO Pat Gelsinger expects the company to continue losing market share to AMD, as the competition has "too much momentum" going for it. AMD's Ryzen and EPYC processors continue to deliver strong performance and efficiency figures, which drives customers toward the company. On the other hand, Intel expects to have a competing product, especially in the data center business, with Sapphire Rapids Xeon processors set to arrive in 2023. Pat Gelsinger noted, "Competition just has too much momentum, and we haven't executed well enough. So we expect that bottoming. The business will be growing, but we do expect that there continues to be some share losses. We're not keeping up with the overall TAM growth until we get later into '25 and '26 when we start regaining share, material share gains."

The years expected to show the toll of solid competition are 2022 and 2023; for a bounce-back, Intel targets 2025 and 2026. "Now, obviously, in 2024, we think we're competitive. 2025, we think we're back to unquestioned leadership with our transistors and process technology," noted CEO Gelsinger. He also commented on emerging Arm CPUs competing for the same server market share as Intel and AMD, stating: "Well, when we deliver the Forest product line, we deliver power performance leadership versus all Arm alternatives, as well. So now you go to a cloud service provider, and you say, 'Well, why would I go through that butt ugly, heavy software lift to an ARM architecture versus continuing on the x86 family?'"

AMD EPYC "Genoa" Zen 4 Product Stack Leaked

With its recent announcement of the Ryzen 7000 desktop processors, the action now shifts to the server, with AMD preparing a wide launch of its EPYC "Genoa" and "Bergamo" processors this year. Powered by the "Zen 4" microarchitecture, and contemporary I/O that includes PCI-Express Gen 5, CXL, and DDR5, these processors dial the CPU core-counts per socket up to 96 in case of "Genoa," and up to 128 in case of "Bergamo." The EPYC "Genoa" series represents the main trunk of the company's server processor lineup, with various internal configurations targeting specific use-cases.

The 96 cores are spread across twelve 5 nm 8-core CCDs, each with a high-bandwidth Infinity Fabric path to the sIOD (server I/O die), which is very likely built on the 6 nm node. Lower core-count models can be built either by lowering the CCD count (keeping more cores per CCD) or by reducing the number of cores per CCD while keeping the CCD count constant, yielding more bandwidth per core. The leaked product-stack table below shows several of these sub-classes of "Genoa" and "Bergamo," classified by use case. The leaked slide also details the nomenclature AMD is using with its new processors. The leaked roadmap also mentions the upcoming "Genoa-X" processor for HPC and cloud-compute uses, which features the 3D Vertical Cache technology.

AMD Pensando Distributed Services Card to Support VMware vSphere 8

AMD announced that the AMD Pensando Distributed Services Card, powered by the industry's most advanced data processing unit (DPU), will be one of the first DPU solutions to support VMware vSphere 8, available from leading server vendors including Dell Technologies, HPE, and Lenovo.

As data center applications grow in scale and sophistication, the resulting workloads increase the demand on infrastructure services as well as crucial CPU resources. VMware vSphere 8 aims to reimagine IT infrastructure as a composable architecture with a goal of offloading infrastructure workloads such as networking, storage, and security from the CPU by leveraging the new vSphere Distributed Services Engine, freeing up valuable CPU cycles to be used for business functions and revenue generating applications.

AMD Confirms Optical-Shrink of Zen 4 to the 4nm Node in its Latest Roadmap

AMD, in its Ryzen 7000 series launch event, shared its near-future CPU architecture roadmap, confirming that the "Zen 4" microarchitecture, currently on the 5 nm foundry node, will see an optical shrink to the 4 nm process in the near future. This doesn't necessarily indicate a new-generation CCD (CPU complex die) on 4 nm; it could be a monolithic mobile SoC on 4 nm, or perhaps "Zen 4c" (high core count, low clock speed, for cloud compute). But it doesn't rule out the possibility of a 4 nm CCD that the company can use across both its enterprise and client processors.

The last time AMD spanned two foundry nodes within a single generation of the "Zen" architecture was with the original (first-generation) "Zen," which debuted on the 14 nm node but was optically shrunk and refined on the 12 nm node, an evolution the company designated "Zen+." The Ryzen 7000-series desktop processors, as well as the upcoming EPYC "Genoa" server processors, ship with 5 nm CCDs, which AMD has ticked off in its roadmap. Chronologically next come "Zen 4" with 3D Vertical Cache (3DV Cache) and "Zen 4c." The company is planning "Zen 4" with 3DV Cache for both its server and desktop segments. Further down the roadmap, approaching 2024, the company debuts the future "Zen 5" architecture on the same 4 nm node, evolving to 3 nm on certain variants.