News Posts matching #Emerald Rapids


Intel's Flagship 128-Core Xeon 6980P Processor Sets Record $17,800 Price

The title has no typo; what you are reading is correct. Intel's flagship 128-core, 256-thread Xeon 6980P processor carries a substantial $17,800 price tag. Intel's Xeon 6 "Granite Rapids" family appears to be its most expensive yet, with the flagship SKU carrying more than a 50% price increase over the previous "Emerald Rapids" generation. However, the economics of computing are more nuanced than a simple list-price comparison. While the last-generation Emerald Rapids Xeon 8592+ (64 cores, 128 threads) cost about $181 per core, the new Granite Rapids Xeon 6980P comes in at approximately $139 per core, offering faster cores at a lower per-core cost.
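To put those list prices on a common footing, here is a minimal per-core cost calculation; the roughly $11,600 figure used for the Xeon 8592+ is inferred from the ~$181-per-core number above rather than taken from an official price list.

# Rough cost-per-core comparison using the figures quoted above.
# The 8592+ price is an inferred value (64 cores x ~$181/core), not an official quote.
skus = {
    "Xeon 6980P (Granite Rapids)": (17_800, 128),
    "Xeon 8592+ (Emerald Rapids)": (11_600, 64),
}
for name, (price_usd, cores) in skus.items():
    print(f"{name}: ${price_usd / cores:,.0f} per core")
# Prints roughly $139 per core for the 6980P versus $181 per core for the 8592+.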

The economics of data centers aren't always tied to the cost of a single product. When building total cost of ownership models, factors such as power consumption, compute density, and performance impact the final assessment. Even with the higher price of this flagship Granite Rapids Xeon processor, the economics of data center deployment may work in its favor. Customers get more cores in a single package, increasing density and driving down cost-per-core per system. This also improves operational efficiency, which is crucial considering that operating expenses account for about 10% of data center costs.

Linux Patch Boosts Intel 5th Generation Xeon "Emerald Rapids" Performance by up to 38% with up to 18% Less Power

Intel's 5th generation Xeon Scalable processors, codenamed Emerald Rapids, have been shipping since late 2023 and are installed in numerous servers today. However, Emerald Rapids appears to have more performance and efficiency tricks up its sleeve than it revealed at launch. According to a Phoronix report covering a Linux kernel patch sent to the Linux Kernel Mailing List (LKML), there is a chance of up to a 38% performance increase while using up to 18% less power on all Intel 5th generation Xeon machines. Thanks to Canonical (maker of Ubuntu Linux) engineer Pedro Henrique Kopper, who explained the patch on the LKML, we know that changing a single line of code yielded this massive improvement.

Ubuntu Linux, like many other distributions, ships with the Energy Performance Preference (EPP) "balance_performance" value for Emerald Rapids set to 128. Changing that value to 32 yields a large performance improvement while also drawing less power. EPP "balance_performance" is the default out-of-the-box setting for many Linux distributions, so users who manually set the "performance" mode in EPP should not expect any gains from this patch; the problem was that the old "balance_performance" value struck a poor balance between power and performance. The new setting yields more performance for machines running at default settings, which is especially important for data centers, where demand for lower power and higher performance keeps growing. Hyperscalers like Amazon, Google, and Meta, which may run tens of thousands of these CPUs at default settings to keep them stable and well-cooled, can now enjoy a sizable performance increase with less power consumed.
Below, you can see the patch quote as well as more performance/power measurements.
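For readers who want to check what their own machines are doing, here is a minimal sketch (an illustration, not part of the patch itself) that reads the current Energy Performance Preference through the standard cpufreq sysfs interface on an intel_pstate system. Writing a new value requires root, and whether a raw number such as 32 is accepted in addition to the named preferences depends on the kernel version, so treat the write portion as an assumption to verify on your own kernel.

# Minimal sketch: inspect the current EPP setting per CPU via sysfs (intel_pstate/HWP assumed).
import glob

paths = sorted(glob.glob(
    "/sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference"))
for path in paths:
    with open(path) as f:
        print(path, "->", f.read().strip())

# Changing the preference (as root) would look like this; named values such as
# "balance_performance" are the portable option, while raw numeric strings (e.g. "32")
# are only accepted on some kernels:
# for path in paths:
#     with open(path, "w") as f:
#         f.write("balance_performance")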

ASRock Rack Offers World's Smallest NVIDIA Grace Hopper Superchip Server, Other Innovations at MWC 2024

ASRock Rack is a subsidiary of ASRock that deals with servers, workstations, and other data-center hardware, and it enjoys not just ASRock's brand trust but also the firm hand of parent company and OEM giant Pegatron. At the 2024 Mobile World Congress, ASRock Rack introduced several new server innovations relevant to the AI edge and 5G cellular carrier industries. A star attraction is the new ASRock Rack MECAI-GH200, claimed to be the world's smallest server powered by the NVIDIA GH200 Grace Hopper Superchip.

The ASRock Rack MECAI-GH200 delivers on one of NVIDIA's original design goals for the Grace Hopper Superchip: AI deployments in edge environments. The GH200 module combines an NVIDIA Grace CPU with a Hopper AI GPU and a performance-optimized NVLink interconnect between them. The CPU features 72 Arm Neoverse V2 cores and 480 GB of LPDDR5X memory, while the Hopper GPU has 132 SMs with 528 Tensor cores and 96 GB of HBM3 memory across a 6144-bit memory interface. Given that the newer HBM3e version of the GH200 won't arrive before Q2 2024, this has to be the HBM3 version. What makes the MECAI-GH200 the world's smallest GH200 server is its compact 2U form factor; competing solutions tend to be 3U or larger.

6th Gen Intel Xeon "Granite Rapids" CPU L3 Cache Totals 480 MB

Intel has recently updated its Software Development Emulator (now version 9.33.0)—InstLatX64 noted some intriguing cache designations for Fifth Generation Xeon Scalable Processors. The "Emerald Rapids" family was introduced at last December's "AI Everywhere" event—with sample units released soon after for review. Tom's Hardware was impressed by the Platinum 8592+ CPU's tripled L3 Cache (over the previous generation): "(it) contributed significantly to gains in Artificial Intelligence inference, data center, video encoding, and general compute workloads. While AMD EPYC generally remains the player to beat in the enterprise CPU space, Emerald Rapids marks a significant improvement from Intel's side of that battlefield, especially as it pertains to Artificial Intelligence workloads and multi-core performance in general."

Intel's SDE 9.33.0 update confirms 320 MB of L3 cache for "Emerald Rapids," but the next line down provides a major "Granite Rapids" insight: 480 MB of L3 cache, a 50% increase over the previous generation. Team Blue's 6th Gen (all P-core) Xeon processor series is expected to launch in the latter half of 2024. The American multinational is evidently keen to take on AMD in the enterprise CPU market segment, although Team Red is already well ahead with its current crop of L3 cache allocations. EPYC CPUs in Genoa and Genoa-X guises offer maximum totals of 384 MB and 1152 MB, respectively. Intel's recently launched "Emerald Rapids" server chips are regarded as a good match against Team Red EPYC "Bergamo" options.
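As a quick sanity check on those totals, the snippet below recomputes the AMD figures from the per-CCD numbers (up to 12 CCDs with 32 MB of L3 each on Genoa, and 96 MB per CCD on Genoa-X) and the Granite Rapids-to-Emerald Rapids ratio; the per-CCD values are published specifications rather than anything new from the SDE update.

# L3 cache totals referenced above, recomputed from per-die figures.
genoa_l3   = 12 * 32    # 12 CCDs x 32 MB L3 each  -> 384 MB
genoa_x_l3 = 12 * 96    # 12 CCDs x 96 MB (3D V-Cache) -> 1152 MB
print(genoa_l3, genoa_x_l3)                         # 384 1152
print(f"Granite Rapids vs Emerald Rapids L3: {480 / 320:.1f}x")   # 1.5x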

Chinese Firm Montage Repackages Intel's 5th Generation Emerald Rapids Xeon Processor into Domestic Product Lineup

Chinese chipmaker Montage Technology has unveiled new data center processors under its Jintide brand, based on Intel's latest Emerald Rapids Xeon architecture. The 5th generation Jintide lineup offers anywhere from 16 to 48 cores for enterprise customers needing advanced security specific to China's government and enterprise requirements. Leveraging a long-running joint venture with Intel, Jintide combines the standard high-performance Xeon microarchitecture with added on-die monitoring and encryption blocks, PrC (Pre-Check) and DSC (Dynamic Security Check), which are hardened for sensitive Chinese use cases. The processors retain all the core performance attributes of Intel's vanilla offerings thanks to IP access, only with extra protections mandated by national security interests. While missing the very highest core counts, the new Jintide chips otherwise deliver familiar Emerald Rapids features such as 8-channel DDR5-5600 memory, 80 lanes of PCIe 5.0, and peak clock speeds above 4.0 GHz. The Jintide processors support 2S scaling, which allows for dual-socket systems with up to 96 cores and 192 threads.

Pricing remains unpublished but likely carries a premium over Intel list prices, given the localized security customization required. However, with Jintide uniquely meeting strict Chinese government and data regulations, cost becomes secondary for target customers needing compliant data center hardware. After keeping lockstep with Intel's last several leading Xeon generations, Jintide's continued iteration highlights its strategic value in enabling high-performance domestic infrastructure as China eyes IT supply chain autonomy. Intel gets expanded access to the growing Chinese server market, while Chinese partners utilize Intel IP to strengthen localized offerings without foreign dependency. This manifests the delicate balance of advanced chip joint ventures between global tech giants and rising challengers. More details about the SKUs are listed in the table below.

Intel "Emerald Rapids" Xeon Platinum 8592+ Tested, Shows 20%+ Improvement over Sapphire Rapids

Yesterday, Intel unveiled its latest Xeon data center processors, codenamed Emerald Rapids, with the new Xeon Platinum 8592+ flagship SKU offering 64 cores and 128 threads. Intel promises boosted performance and reduced power consumption from the fresh silicon, and the comprehensive benchmarking website Phoronix essentially confirms Intel's pitch. Testing production servers running the new 8592+ showed solid gains over prior Intel models, not to mention the older generations still commonplace in data centers. On average, upgrading to the 8592+ increased single-socket server performance by around 23.5% compared to the previous-generation "Sapphire Rapids" Xeon Platinum 8490H, while the dual-socket configuration recorded a 17% boost.

However, Intel is not alone in the data center market. The 64-core AMD offering that the Xeon Platinum 8592+ competes with is the EPYC 9554, and the Emerald Rapids chip is faster by about 2.3%. AMD's lineup doesn't stop at 64 cores, though: Genoa and Genoa-X with 3D V-Cache top out at 96 cores, while Bergamo goes up to 128 cores. On the power consumption front, the Xeon Platinum 8592+ pulled about 289 Watts on average compared to the Xeon Platinum 8490H's 306 Watts; at peak, the 8592+ hit 434 Watts versus the 8490H's 469 Watts. This aligns with Intel's claims of enhanced efficiency. The 64-core counterpart from AMD was more frugal still: the EPYC 9554 averaged 227 Watts with a recorded peak of 369 Watts.
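Taken together, the quoted averages imply a healthy perf-per-watt gain. The back-of-the-envelope calculation below uses only the numbers cited above (the ~23.5% single-socket uplift and the average power draws) and should be read as a rough illustration, not a measured result.

# Rough perf-per-watt estimate from the figures quoted above (illustrative only).
perf_uplift = 1.235            # 8592+ vs 8490H, single-socket average uplift
power_8592p = 289              # average Watts, Xeon Platinum 8592+
power_8490h = 306              # average Watts, Xeon Platinum 8490H
perf_per_watt_gain = perf_uplift / (power_8592p / power_8490h)
print(f"~{(perf_per_watt_gain - 1) * 100:.0f}% better perf/W")   # ~31%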

TYAN Upgrades HPC, AI and Data Center Solutions with the Power of 5th Gen Intel Xeon Scalable Processors

TYAN, a leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced upgraded server platforms and motherboards based on the brand-new 5th Gen Intel Xeon Scalable Processors, formerly codenamed Emerald Rapids.

The 5th Gen Intel Xeon processor's core count has increased to 64, and it features a larger shared cache, higher UPI and DDR5 memory speeds, as well as PCIe 5.0 with 80 lanes. Growing and excelling with workload-optimized performance, the 5th Gen Intel Xeon delivers more compute power and faster memory within the same power envelope as the previous generation. "5th Gen Intel Xeon is the second processor offering inside the 2023 Intel Xeon Scalable platform, offering improved performance and power efficiency to accelerate TCO and operational efficiency," said Eric Kuo, Vice President of the Server Infrastructure Business Unit, MiTAC Computing Technology Corporation. "By harnessing the capabilities of Intel's new Xeon CPUs, TYAN's 5th Gen Intel Xeon-supported solutions are designed to handle the intense demands of HPC, data centers, and AI workloads."

Intel's New 5th Gen "Emerald Rapids" Xeon Processors are Built with AI Acceleration in Every Core

Today at the "AI Everywhere" event, Intel launched its 5th Gen Intel Xeon processors (code-named Emerald Rapids) that deliver increased performance per watt and lower total cost of ownership (TCO) across critical workloads for artificial intelligence, high performance computing (HPC), networking, storage, database and security. This launch marks the second Xeon family upgrade in less than a year, offering customers more compute and faster memory at the same power envelope as the previous generation. The processors are software- and platform-compatible with 4th Gen Intel Xeon processors, allowing customers to upgrade and maximize the longevity of infrastructure investments while reducing costs and carbon emissions.

"Designed for AI, our 5th Gen Intel Xeon processors provide greater performance to customers deploying AI capabilities across cloud, network and edge use cases. As a result of our long-standing work with customers, partners and the developer ecosystem, we're launching 5th Gen Intel Xeon on a proven foundation that will enable rapid adoption and scale at lower TCO." -Sandra Rivera, Intel executive vice president and general manager of Data Center and AI Group.

Intel "Emerald Rapids" Die Configuration Leaks, More Details Appear

Thanks to leaked slides obtained by @InstLatX64, we have more details and some performance estimates for Intel's upcoming 5th Generation Xeon "Emerald Rapids" CPUs, which boast a significant performance leap over their predecessors. Leading the Emerald Rapids family is the top-end SKU, the Xeon 8592+, which features 64 cores and 128 threads backed by a massive 320 MB L3 cache pool. The upcoming lineup shifts from a 4-tile to a 2-tile design to minimize latency and improve performance. The design uses P-cores based on the Raptor Cove microarchitecture and promises up to 40% faster performance than the current 4th Generation "Sapphire Rapids" CPUs in AI applications that utilize the Intel AMX engine. Each chiplet has 35 cores, three of which are disabled, and each tile has two DDR5-5600 MT/s memory controllers, each operating two memory channels, which translates into an eight-channel design. There are three PCIe controllers per die, making six in total.
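For clarity, the per-tile figures above add up as follows; this is simply the arithmetic behind the leaked configuration, not additional leaked data.

# Per-socket totals implied by the leaked per-tile configuration above.
tiles = 2
cores_per_tile   = 35 - 3                  # 35 physical cores per chiplet, 3 disabled
usable_cores     = tiles * cores_per_tile  # 64 cores per socket
ddr5_channels    = tiles * 2 * 2           # 2 memory controllers per tile, 2 channels each
pcie_controllers = tiles * 3               # 3 PCIe controllers per die
print(usable_cores, ddr5_channels, pcie_controllers)   # 64 8 6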

The upcoming lineup is also backed by newer protocols and AI accelerators. The Emerald Rapids family supports Compute Express Link (CXL) Types 1/2/3 in addition to up to 80 PCIe Gen 5 lanes and enhanced Intel Ultra Path Interconnect (UPI), with four UPI controllers spread over the two dies. Moreover, features like the four on-die Intel Accelerator Engines, an optimized power mode, and up to 17% improvement in general-purpose workloads make it seem like a big step up from the current generation. Much of this technology is found on existing Sapphire Rapids SKUs, with the new generation further enhancing AI processing capability. You can see the die configuration below. The 5th Generation Emerald Rapids lineup is set to become official on December 14, just a few days away.

Intel Xeon Platinum "Emerald Rapids" 8558P and 8551C 48-Core CPU SKUs Leak

The Geekbench database of benchmark submissions is yielding more leaks about Intel's upcoming 5th generation Xeon Scalable processors, codenamed Emerald Rapids. Previously, we covered the leak of the possibly top-end 64-core Xeon 8592+ Platinum and a 48-core Xeon 8558U processor. Today, however, we are seeing information about lower-stack SKUs carrying up to 48 cores each. First in line is the Xeon Platinum 8558P, a 48-core, 96-thread CPU that runs at a 2.7 GHz base frequency and a 4.0 GHz boost frequency. The listing shows 16 MB of L3 cache in addition to 192 MB of L2, with a reported cache total of 260 MB. The integrated memory controller (IMC) of the Xeon Platinum 8558P supports eight-channel DDR5 running at 4800 MT/s, and the CPU has a TDP of 350 Watts.

The other SKU listed was the Xeon Platinum 8551C, also a 48-core, 96-thread model with the same 260 MB cache configuration. This SKU, however, has a higher base frequency of 2.9 GHz, with an unknown boost speed and unknown IMC configuration. An interesting thing to note about these 48C/96T SKUs is that they feature less cache than the previously leaked 48-core Xeon 8558U processor, which had 96 MB of L2 cache and 260 MB of L3 cache for a total of 356 MB (including L1D and L1I). Intel's segmentation of its Xeon processors will evidently be based not only on core count, frequency, and TDP, but also on cache size.

Intel "Emerald Rapids" 8592+ and 8558U Xeon CPUs with 64C and 48C Configurations Spotted

Intel's next-generation Emerald Rapids Xeon lineup is just around the corner, and more leaks are arriving as the launch nears. Today, we get to see two models: a 64-core Xeon 8592+ Platinum and a 48-core Xeon 8558U processor. First is the Xeon 8592+ Platinum, possibly Intel's top-end design, with 64 cores and 128 threads. Running at a base frequency of 1.9 GHz, the CPU can boost up to 3.9 GHz. This SKU carries 488 MB of total cache, with 120 MB dedicated to L2 and 320 MB to L3. It has a TDP of 350 Watts, which can be adjusted up to 420 Watts.

Next up is the Xeon 8558U processor, which has been spotted in Geekbench. The Xeon 8558U is a 48-core, 96-thread CPU with a 2.0 GHz base clock; its boost frequency has yet to be shown or enabled, likely because it is an engineering sample. It carries 96 MB of L2 cache and 260 MB of L3 cache, for a total of 356 MB of cache (including L1D and L1I). Both of these SKUs should launch alongside the remaining models in the Emerald Rapids family, dubbed 5th generation Xeon Scalable, on December 14 this year.

Intel 5th Gen Xeon Platinum 8580 CPU Details Leaked

YuuKi_AnS has brought renewed attention to an already-leaked Intel "Emerald Rapids" processor: the 5th Gen Xeon Platinum 8580 CPU was identified with 60 cores and 120 threads in a previous post, and a follow-up has now appeared in the form of an engineering prototype (ES2-Q2SP-A0). Yuuki noted: "samples are for reference only, and the actual performance is subject to the official version." Team Blue has revealed a launch date of December 14, 2023, for its 5th Gen Xeon Scalable processor lineup, so it is not surprising to see pre-release examples appear online a couple of months beforehand. This particular ES2 SKU (on A0 silicon) fields an all P-core configuration consisting of Raptor Cove units in a dual-chiplet design (30 cores per die). There is a significant bump in cache sizes compared to the current "Sapphire Rapids" generation; Wccftech outlines the allocations: "Each core comes with 2 MB of L2 cache for up to 120 MB of L2 cache. The whole chip also features 300 MB of L3 cache which combines to offer a total cache pool of 420 MB."

They bring in some of the competition for comparison: "That's a 2.6x increase in cache versus the existing Sapphire Rapids CPU lineup and while it still doesn't match the 480 MB L3 cache of standard (AMD) Genoa or the 1.5 GB cache pool of Genoa-X, it is a good start for Intel to catch up." Team Blue appears ready to take on AMD on many levels—this week's Innovation Event produced some intriguing announcements including "Sierra Forest vs. Bergamo" and plans to embrace 3D Stacked Cache technology. Yuuki's small batch of screenshots shows that the Xeon Platinum 8580 CPU's captured clock speeds are far from the finished article—just a touch over 2.0 GHz, so very likely limited to safe margins. An unnamed mainboard utilizing Intel's Eagle Stream platform was logged sporting a dual-socket setup—the test system was running a grand total of 120 cores and 240 threads!
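The cache math in the Wccftech quote is easy to reproduce; the Sapphire Rapids comparison values (2 MB of L2 per core and 112.5 MB of L3 on the 60-core Xeon Platinum 8490H) are published specifications and not part of this leak.

# Recomputing the cache totals quoted above for the 60-core Xeon Platinum 8580 sample.
cores = 60
l2_total = cores * 2      # 2 MB of L2 per Raptor Cove core -> 120 MB
l3_total = 300            # leaked L3 pool, in MB
print(l2_total + l3_total)             # 420 MB combined L2 + L3
print(round(l3_total / 112.5, 1))      # ~2.7x the 8490H's 112.5 MB of L3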

Intel Unveils Future-Generation Xeon with Robust Performance and Efficiency Architectures

At this year's Hot Chips event, Intel provided the first in-depth look at its next-generation Intel Xeon product lineup, built on a new, innovative platform architecture. The platform marks an important evolution for Intel Xeon by introducing processors with a new Efficient-core (E-core) architecture alongside its well-established Performance-core (P-core) architecture. Code-named Sierra Forest and Granite Rapids, respectively, these new products will bring simplicity and flexibility to customers, offering a compatible hardware architecture and shared software stack to tackle critical workloads such as artificial intelligence.

"It is an exciting time for Intel and its Xeon roadmap. We recently shipped our millionth 4th Gen Xeon, our 5th Gen Xeon (code-named Emerald Rapids) will launch in Q4 2023 and our 2024 portfolio of data center products will prove to be a force in the industry," said Lisa Spelman, Intel corporate vice president and general manager of Xeon Products and Solutions.

Intel "Emerald Rapids" Doubles Down on On-die Caches, Divests on Chiplets

Finding itself embattled with AMD's EPYC "Genoa" processors, Intel is giving its 4th Gen Xeon Scalable "Sapphire Rapids" processor a rather quick succession in the form of the Xeon Scalable "Emerald Rapids," bound for Q4 2023 (about 8-10 months in). The new processor shares the same LGA4677 platform and infrastructure and much of the same I/O, but brings two key design changes that should help Intel shore up per-core performance, making it competitive with EPYC "Zen 4" processors that have higher core counts. SemiAnalysis compiled a nice overview of the changes, the two broadest points being: first, Intel is pedaling back on the chiplet approach to high core-count CPUs; and second, it wants to give the memory sub-system and inter-core performance a massive boost using larger on-die caches.

The "Emerald Rapids" processor has just two large dies in its extreme core-count (XCC) avatar, compared to "Sapphire Rapids," which can have up to four of these. There are just three EMIB dies interconnecting these two, compared to "Sapphire Rapids," which needs as many as 10 of these to ensure direct paths among the four dies. The CPU core count itself doesn't see a notable increase. Each of the two dies on "Emerald Rapids" physically features 33 CPU cores, so a total of 66 are physically present, although one core per die is left unused for harvesting, the SemiAnalysis article notes. So the maximum core-count possible commercially is 32 cores per die, or 64 cores per socket. "Emerald Rapids" continues to be based on the Intel 7 process (10 nm Enhanced SuperFin), probably with a few architectural improvements for higher clock-speeds.

Intel Reports First-Quarter 2023 Financial Results: Client and Server Businesses Down 38-39% Each

Intel Corporation today reported first-quarter 2023 financial results. "We delivered solid first-quarter results, representing steady progress with our transformation," said Pat Gelsinger, Intel CEO. "We hit key execution milestones in our data center roadmap and demonstrated the health of the process technology underpinning it. While we remain cautious on the macroeconomic outlook, we are focused on what we can control as we deliver on IDM 2.0: driving consistent execution across process and product roadmaps and advancing our foundry business to best position us to capitalize on the $1 trillion market opportunity ahead."

David Zinsner, Intel CFO, said, "We exceeded our first-quarter expectations on the top and bottom line, and continued to be disciplined on expense management as part of our commitment to drive efficiencies and cost savings. At the same time, we are prioritizing the investments needed to advance our strategy and establish an internal foundry model, one of the most consequential steps we are taking to deliver on IDM 2.0."

AMD Speeds Up Development of "Zen 5" to Thwart Intel Xeon "Emerald Rapids"?

In no mood to cede its market-share growth to Intel, AMD has reportedly decided to accelerate the development of its next-generation "Zen 5" microarchitecture for a debut within 2023. In its mid-2022 presentations, AMD had publicly given "Zen 5" a 2024 release date. This reading between the lines comes from a recent GIGABYTE press release announcing server platforms powered by relatively low-cost Ryzen desktop processors. The specific sentence from that release reads: "The next generation of AMD Ryzen desktop processors that will come out later this year will also be supported on this AM5 platform, so customers who purchase these servers today have the opportunity to upgrade to the Ryzen 7000 series successor."

While the GIGABYTE press release speaks of a next-generation Ryzen desktop processor, it stands to reason that it is referencing an early release of "Zen 5." Since AMD shares CPU complex dies (CCDs) between its Ryzen client and EPYC server processors, the company is looking at a two-pronged upgrade to its processor lineup, with its next-generation EPYC "Turin" processor competing with Xeon Scalable "Emerald Rapids," and Ryzen "Granite Ridge" desktop processors taking on Intel's Core "Raptor Lake Refresh" and "Meteor Lake-S" desktop processors. It is rumored that "Zen 5" is being designed for the TSMC 3 nm node and could see an increase in CPU core count per CCD, up from the present eight. The TSMC 3 nm node goes into commercial mass production in the first half of 2023 as the N3 node, with a refined N3E node slated for the second half of the year.

Intel Presents a Refreshed Xeon CPU Roadmap for 2023-2025

All eyes - especially investors' eyes - are on Intel's data center business today. Intel's Sandra Rivera, Greg Lavender and Lisa Spelman hosted a webinar focused on the company's Data Center and Artificial Intelligence business unit. They offered a big update on Intel's latest market forecasts, hardware plans and the way Intel is empowering developers with software.

Executives dished out updates on Intel's data center business for investors. This included disclosures about future generations of Intel Xeon chips, progress updates on 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) and demos of Intel hardware tackling the competition, heavy AI workloads and more.

Xeon Roadmap Roll Call
Among Sapphire Rapids, Emerald Rapids, Sierra Forest and Granite Rapids, there is a lot going on in the server CPU business. Here are your Xeon roadmap updates, in order of appearance:

Intel Xeon "Sapphire Rapids" to be Quickly Joined by "Emerald Rapids," "Granite Rapids," and "Sierra Forest" in the Next Two Years

Intel's server processor lineup, led by the 4th Gen Xeon Scalable "Sapphire Rapids" processors, faces stiff competition from AMD's 4th Gen EPYC "Genoa" processors, which offer significantly higher multi-threaded performance per Watt on account of a higher CPU core count. The gap is only set to widen as AMD prepares to launch the "Bergamo" processor for cloud data centers, with core counts of up to 128 cores and 256 threads per socket. A technologically embattled Intel is preparing quick counters: as many as three new server microarchitecture launches over the next 23 months, according to the company's Q4-2022 financial results presentation.

The 4th Gen Xeon Scalable "Sapphire Rapids," with up to 60 cores and 120 threads and various application-specific accelerators, saw a quiet launch earlier this month and is shipping to Intel customers. The company says it will be joined by the Xeon Scalable "Emerald Rapids" architecture in the second half of 2023, followed by "Granite Rapids" and "Sierra Forest" in 2024. Built on the same LGA4677 package as "Sapphire Rapids," the new "Emerald Rapids" MCM packs up to 64 "Raptor Cove" CPU cores, which support higher clock speeds and higher memory speeds, and introduce the new Intel Trust Domain Extensions (TDX) instruction set. The processor retains the 8-channel DDR5 memory interface, but with higher native memory speeds, and its main serial interface is a PCI-Express Gen 5 root complex with 80 lanes. The processor will be built on the last foundry-level refinement of the Intel 7 node (10 nm Enhanced SuperFin); many of these refinements were introduced with the company's 13th Gen Core "Raptor Lake" client processors.