Wednesday, November 9th 2022

Intel Introduces the Max Series Product Family: Ponte Vecchio and Sapphire Rapids

In advance of Supercomputing '22 in Dallas, Intel Corporation has introduced the Intel Max Series product family with two leading-edge products for high performance computing (HPC) and artificial intelligence (AI): Intel Xeon CPU Max Series (code-named Sapphire Rapids HBM) and Intel Data Center GPU Max Series (code-named Ponte Vecchio). The new products will power the upcoming Aurora supercomputer at Argonne National Laboratory, with updates on its deployment shared today.

The Xeon Max CPU is the first and only x86-based processor with high bandwidth memory, accelerating many HPC workloads without the need for code changes. The Max Series GPU is Intel's highest density processor, packing over 100 billion transistors into a 47-tile package with up to 128 gigabytes (GB) of high bandwidth memory. The oneAPI open software ecosystem provides a single programming environment for both new processors. Intel's 2023 oneAPI and AI tools will deliver capabilities to enable the Intel Max Series products' advanced features.
"To ensure no HPC workload is left behind, we need a solution that maximizes bandwidth, maximizes compute, maximizes developer productivity and ultimately maximizes impact. The Intel Max Series product family brings high bandwidth memory to the broader market, along with oneAPI, making it easy to share code between CPUs and GPUs and solve the world's biggest challenges faster." -Jeff McVeigh, Corporate Vice President and General Manager, Super Compute Group at Intel

Why It Matters
High performance computing (HPC) represents the vanguard of technology, employing the most advanced innovations at scale to solve science and society's biggest challenges, from mitigating the impacts of climate change to curing the world's deadliest diseases.

The Max Series products meet the needs of this community with scalable, balanced CPUs and GPUs, incorporating memory bandwidth breakthroughs and united by oneAPI, an open, standards-based, cross-architecture programming framework. Researchers and businesses will solve problems faster and more sustainably using Max Series products.

When It's Arriving
The Max Series products are slated to launch in January 2023. Executing on its commitments to customers, Intel is shipping blades with Max Series GPUs to Argonne National Laboratory to power the Aurora supercomputer and will deliver Xeon Max CPUs to Los Alamos National Laboratory, Kyoto University and other supercomputing sites.

What the Intel Xeon Max CPU Delivers
The Xeon Max CPU offers up to 56 performance cores, built from four tiles connected using Intel's embedded multi-die interconnect bridge (EMIB) technology, in a 350-watt envelope. Xeon Max CPUs contain 64 GB of high bandwidth in-package memory, as well as PCI Express 5.0 and CXL 1.1 I/O. With 64 GB shared among up to 56 cores, Xeon Max CPUs provide more than 1 GB of high bandwidth memory (HBM) capacity per core, enough to fit most common HPC workloads. The Max Series CPU provides up to 4.8x better performance than the competition on real-world HPC workloads.
  • 68% less power usage than an AMD Milan-X cluster for the same HPCG performance.
  • AMX extensions boost AI performance and deliver 8x peak throughput over AVX-512 for INT8 with INT32 accumulation operations (see the sketch after this list).
  • Provides flexibility to run in different HBM and DDR memory configurations.
  • Workload benchmarks:
    • Climate modeling: 2.4x faster than AMD Milan-X on MPAS-A using only HBM.
    • Molecular dynamics: 2.8x performance improvement on DeePMD over competing products with DDR memory.
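For a concrete, hedged picture of the INT8-with-INT32-accumulation operation the AMX bullet above refers to, here is a minimal single-tile sketch using the AMX intrinsics from immintrin.h; the tile shapes, the Linux permission request, and the test values are illustrative assumptions (a real kernel must also pack the B matrix into the VNNI layout), not Intel sample code:

```cpp
// Hedged sketch: one INT8 tile multiply with INT32 accumulation via AMX.
// Build (assumption): g++ -O2 -mamx-tile -mamx-int8 amx_sketch.cpp
// Needs a CPU with AMX (e.g. Sapphire Rapids) and Linux 5.16+.
#include <immintrin.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>

constexpr int ARCH_REQ_XCOMP_PERM = 0x1023; // Linux opt-in for AMX tile state
constexpr int XFEATURE_XTILEDATA  = 18;

// 64-byte tile configuration consumed by _tile_loadconfig().
struct TileConfig {
    uint8_t  palette_id = 1;
    uint8_t  start_row  = 0;
    uint8_t  reserved[14] = {};
    uint16_t colsb[16] = {}; // bytes per row, per tile register
    uint8_t  rows[16]  = {}; // rows, per tile register
};
static_assert(sizeof(TileConfig) == 64, "layout must match the hardware");

int main() {
    // A process must ask the OS for permission to use tile state.
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) return 1;

    TileConfig cfg;
    cfg.rows[0] = 16; cfg.colsb[0] = 64; // tmm0: 16x16 INT32 accumulator
    cfg.rows[1] = 16; cfg.colsb[1] = 64; // tmm1: 16x64 INT8 matrix A
    cfg.rows[2] = 16; cfg.colsb[2] = 64; // tmm2: 16x64 INT8 matrix B
    _tile_loadconfig(&cfg);

    // Uniform test data; constant fills let us gloss over VNNI packing here.
    alignas(64) int8_t  a[16 * 64], b[16 * 64];
    alignas(64) int32_t c[16 * 16] = {};
    memset(a, 1, sizeof a);
    memset(b, 2, sizeof b);

    _tile_loadd(1, a, 64);  // load A with a 64-byte row stride
    _tile_loadd(2, b, 64);  // load B
    _tile_zero(0);          // clear the INT32 accumulator
    _tile_dpbssd(0, 1, 2);  // c[i][j] += 4-wide INT8 dot products -> INT32
    _tile_stored(0, c, 64); // write the 16x16 INT32 result back

    _tile_release();
    return c[0] == 64 * 1 * 2 ? 0 : 1; // each element should be 128
}
```

On the memory-configuration bullet: Intel has described HBM-only, flat, and caching modes. In flat mode the HBM is exposed as its own NUMA nodes, so an unmodified application can be steered into it with standard tools such as numactl.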

What the Intel Max Series GPU Delivers
Max Series GPUs deliver up to 128 Xe cores based on Xe-HPC, the new foundational GPU architecture targeted at the most demanding computing workloads. Additionally, the Max Series GPU features:
  • 408 MB of L2 cache - the highest in the industry - and 64 MB of L1 cache to increase throughput and performance.
  • The only HPC/AI GPU with native ray tracing acceleration, designed to speed scientific visualization and animation.
  • Workload benchmarks:
    • Finance: 2.4x performance gain over NVIDIA's A100 on Riskfuel credit option pricing.
    • Physics: 1.5x improvement over A100 for NekRS virtual reactor simulations.
Max Series GPUs will be available in several form factors to address different customer needs:
  • Max Series 1100 GPU: A 300-watt double-wide PCIe card with 56 Xe cores and 48 GB of HBM2e memory. Multiple cards can be connected via Intel Xe Link bridges.
  • Max Series 1350 GPU: A 450-watt OAM module with 112 Xe cores and 96 GB of HBM.
  • Max Series 1550 GPU: Intel's maximum performance 600-watt OAM module with 128 Xe cores and 128 GB of HBM.
Beyond individual cards and modules, Intel will offer the Intel Data Center GPU Max Series subsystem, with an x4 GPU OAM carrier board and Intel Xe Link, to enable high performance multi-GPU communication within the subsystem.
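As a rough sketch of how software might see such a four-GPU subsystem, the snippet below enumerates every GPU the oneAPI runtime exposes and creates one SYCL queue per device; the device count and work split are illustrative assumptions, and Xe Link itself is driven by the runtime underneath rather than by this code:

```cpp
// Hedged sketch: one SYCL queue per GPU in a multi-GPU node.
// Build (assumption): icpx -fsycl multi_gpu_sketch.cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // Enumerate every GPU the runtime exposes, e.g. the four OAM
    // modules on a Max Series subsystem carrier board.
    std::vector<sycl::queue> queues;
    for (const auto &dev :
         sycl::device::get_devices(sycl::info::device_type::gpu))
        queues.emplace_back(dev);
    std::cout << "GPUs found: " << queues.size() << "\n";

    // Give each device its own chunk of work and device memory.
    constexpr size_t kChunk = 1 << 20;
    std::vector<float *> blocks;
    for (auto &q : queues) {
        float *p = sycl::malloc_device<float>(kChunk, q);
        q.parallel_for(sycl::range<1>{kChunk},
                       [=](sycl::id<1> i) { p[i] = 1.0f; });
        blocks.push_back(p);
    }
    for (size_t n = 0; n < queues.size(); ++n) {
        queues[n].wait(); // drain each queue before freeing its memory
        sycl::free(blocks[n], queues[n]);
    }
    return 0;
}
```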

What Max Series Products Enable
In 2023, the Aurora supercomputer, currently under construction at Argonne National Laboratory, is expected to become the first supercomputer to exceed 2 exaflops of peak double-precision compute performance. Aurora will also be the first to showcase the power of pairing Max Series GPUs and CPUs in a single system, with more than 10,000 blades, each containing six Max Series GPUs and two Xeon Max CPUs.

In advance of SC22, Argonne and Intel unveiled Sunspot, Aurora's Test Development System consisting of 128 production blades. Researchers from the Aurora Early Science Program will have access to the system beginning in late 2022.

The Max Series products will power several other HPC systems critical for national security and basic research, including Crossroads at Los Alamos National Laboratory, CTS-2 systems at Lawrence Livermore National Laboratory and Sandia National Laboratories, and Camphor3 at Kyoto University.

What's Next
At Supercomputing '22, Intel and its customers will showcase more than 40 upcoming system designs from 12 original equipment manufacturers using Max Series products. Attendees can explore demos showcasing the performance and capability of Max Series products for a range of AI and HPC applications, as well as hear from Intel architects, customers, and end-users about the power of Intel's platform solutions at the Intel booth, #2428. More information on Intel's activities at SC22 is available.

The successor to the Max Series GPU, an Intel Data Center GPU code-named Rialto Bridge, is intended to arrive in 2024 with improved performance and a seamless upgrade path. Intel then plans to release its next major architecture innovation to enable the future of HPC: the company's upcoming XPU, code-named Falcon Shores, will combine Xe and x86 cores on a single package. This groundbreaking new architecture will also have the flexibility to integrate new IPs from Intel and customers, manufactured using Intel's IDM 2.0 model.

Source: Intel

31 Comments on Intel Introduces the Max Series Product Family: Ponte Vecchio and Sapphire Rapids

#1
Daven
Apple leaves Intel so Intel decides to steal Apple’s crappy model names. That’ll show them.
#4
TheoneandonlyMrK
Noice, get it out there before Genoa gets a preview tomorrow.

Going to be interesting to see how next year pans out.
#5
AnotherReader
Seems like a good entry for HPC. How're the 128 Xe cores distributed; are there 4 different GPU dies? It looks like Intel is going to beat AMD's MI300 to the market.
#6
TheLostSwede
News Editor
AnotherReader: Seems like a good entry for HPC. How're the 128 Xe cores distributed; are there 4 different GPU dies? It looks like Intel is going to beat AMD's MI300 to the market.
Intel didn't exactly go into a lot of detail, but this might help.
Phoronix has some more details.
www.phoronix.com/review/intel-xeon-max-dc

#7
catulitechup
Daven: Apple leaves Intel so Intel decides to steal Apple’s crappy model names. That’ll show them.
They think they can sell (cough, scam) more with Apple-type presentations :)
#8
lemonadesoda
Data Centre GPU. What an oxymoron. They should have rethought their strategy and product branding. Data Processing Unit, Data Centre DPU, would be OK. But this is a device designed as a GPU, failing at GPU performance, that can do some data centre compute workloads. So rebrand, fgs.
#10
Dirt Chip
They have 300, 450 and 600 watt models, so is Intel also using the notorious 12VHPWR connector?
It will be interesting to see what kind of adapter they use.
#11
Valantar
Dirt Chip: They have 300, 450 and 600 watt models, so is Intel also using the notorious 12VHPWR connector?
It will be interesting to see what kind of adapter they use.
Do you think an accelerator card for servers and HPC will come with a PSU adapter? :wtf: These will be installed in built-for-purpose computers with suitable native power connections.
#13
Wirko
lemonadesoda: Data Centre GPU. What an oxymoron. They should have rethought their strategy and product branding. Data Processing Unit, Data Centre DPU, would be OK. But this is a device designed as a GPU, failing at GPU performance, that can do some data centre compute workloads. So rebrand, fgs.
Then what should the ray tracing units be "rebranded" to?
Valantar: Do you think an accelerator card for servers and HPC will come with a PSU adapter? :wtf: These will be installed in built-for-purpose computers with suitable native power connections.
The 300W model will indeed take the shape of a PCIe card as they say, so that may be the amount Intel dares to push through the "high power" connector.
#14
ZoneDymo
Dirt Chip: Glue jokes in 3..2..1...
They deserve it though
#15
Dirt Chip
Valantar: Do you think an accelerator card for servers and HPC will come with a PSU adapter? :wtf: These will be installed in built-for-purpose computers with suitable native power connections.
I hope not.
But the Max Series 1100 is a PCIe card at 300 W.
The NV A100 comes in a PCIe flavor, which Intel is after.
Why not put the other Max models on PCIe as well?
You already have "1 power connector to rule them all"...
#16
Valantar
Wirko: The 300W model will indeed take the shape of a PCIe card as they say, so that may be the amount Intel dares to push through the "high power" connector.
Dirt Chip: I hope not.
But the Max Series 1100 is a PCIe card at 300 W.
The NV A100 comes in a PCIe flavor, which Intel is after.
Why not put the other Max models on PCIe as well?
You already have "1 power connector to rule them all"...
These being PCIe cards doesn't mean they'll use adapters. There is literally no direct relation between the two - PCIe cards are the standard for servers and HPC as well (the newer mezzanine standards are gaining adoption, but it'll still be a long time for them to be dominant). These will still be natively wired for 12VHPWR. I mean, both AMD and Nvidia make tons of PCIe accelerators for server and HPC as well, you still don't see Radeon Instinct or Nvidia A100s in regular PCs outside of a few very very specialized workstations.
#17
Richards
The most advanced GPU ever created... a work of art.
#18
dragontamer5788
TheLostSwede: 408 MB of L2 cache - the highest in the industry - and 64 MB of L1 cache to increase throughput and performance.
Okay, I'm listening. That sounds pretty absurd, but useful!

GPUs traditionally have had very little cache. But AMD's infinity cache shows that it works for video games, and I think Intel's move here to have a ton of cache will also help the supercomputer world.
#19
Wirko
Valantar: These being PCIe cards doesn't mean they'll use adapters. There is literally no direct relation between the two - PCIe cards are the standard for servers and HPC as well (the newer mezzanine standards are gaining adoption, but it'll still be a long time for them to be dominant). These will still be natively wired for 12VHPWR. I mean, both AMD and Nvidia make tons of PCIe accelerators for server and HPC as well, you still don't see Radeon Instinct or Nvidia A100s in regular PCs outside of a few very very specialized workstations.
Sure, such adapters have no business in servers and workstations.
#20
Patriot
Given they are sticking to 300 W, they will continue to use the server industry standard 1x 8-pin 12V EPS power connector: 4 power, 4 ground. It has been used since the K80 on Nvidia, and they continue to use it on the H100 PCIe. AMD finally changed to 1x 8-pin 12V EPS with the MI210, but used 6+8-pin or twin 8-pin PCIe power before that.
I don't expect to see the PCIe 12VHPWR connector in servers. For anything over 300 W they will use OAM or SXM. Intel's OAM is 450/600 W, AMD's is 560 W for the MI250X, Intel Gaudi 2 is 600 W, Nvidia SXM5 is 700 W, and Intel is planning 800 W OAM for next gen.

I hope the Xe cores can keep up with the Gaudi 2 cores from their Habana Labs acquisition.

Overall an interesting space... Looking forward to the Genoa reveal in 7.5 hrs. Hopefully the MI300 gets some time.
#21
Valantar
Patriot: Given they are sticking to 300 W, they will continue to use the server industry standard 1x 8-pin 12V EPS power connector: 4 power, 4 ground. It has been used since the K80 on Nvidia, and they continue to use it on the H100 PCIe. AMD finally changed to 1x 8-pin 12V EPS with the MI210, but used 6+8-pin or twin 8-pin PCIe power before that.
I don't expect to see the PCIe 12VHPWR connector in servers. For anything over 300 W they will use OAM or SXM. Intel's OAM is 450/600 W, AMD's is 560 W for the MI250X, Intel Gaudi 2 is 600 W, Nvidia SXM5 is 700 W, and Intel is planning 800 W OAM for next gen.
I hope the Xe cores can keep up with the Gaudi 2 cores from their Habana Labs acquisition.
Overall an interesting space... Looking forward to the Genoa reveal in 7.5 hrs. Hopefully the MI300 gets some time.
My impression is that server/HPC is the main driving force behind the 12VHPWR connector, precisely to allow for PCIe AICs to exceed the 336W rating of an 8-pin EPS connector without going dual connector (which would both make cable routing a mess and obstruct airflow through the passive coolers used on these cards). The various mezzanine card standards are definitely the main focus for super-high power implementations, but there's also a lot of push for compatibility with existing infrastructure without having to move to an entirely new server layout, which is where PCIe shines.
#22
TheLostSwede
News Editor
Patriot: Given they are sticking to 300 W, they will continue to use the server industry standard 1x 8-pin 12V EPS power connector: 4 power, 4 ground. It has been used since the K80 on Nvidia, and they continue to use it on the H100 PCIe. AMD finally changed to 1x 8-pin 12V EPS with the MI210, but used 6+8-pin or twin 8-pin PCIe power before that.
I don't expect to see the PCIe 12VHPWR connector in servers. For anything over 300 W they will use OAM or SXM. Intel's OAM is 450/600 W, AMD's is 560 W for the MI250X, Intel Gaudi 2 is 600 W, Nvidia SXM5 is 700 W, and Intel is planning 800 W OAM for next gen.
I hope the Xe cores can keep up with the Gaudi 2 cores from their Habana Labs acquisition.
Overall an interesting space... Looking forward to the Genoa reveal in 7.5 hrs. Hopefully the MI300 gets some time.
It seems like Intel went with the new connector.
videocardz.com/newz/intel-adopts-12vhpwr-connector-for-its-data-center-gpu-max-1100-pcie-series
#24
Valantar
Dirt Chip: That's what I meant.
And if so, spaghetti adapters will follow :)
No they won't. These accelerators will go into built-for-purpose servers (and a very few workstations), all of which will have built-for-purpose PSUs with native 12VHPWR cabling. If you're buying a $5,000-10,000 accelerator card, you're also buying a PSU that natively supports what that system needs.

Please stop acting as if servers, compute clusters and ultra-high-end workstations are built in ways that resemble consumer PCs whatsoever.
#25
Wirko
Valantar: Please stop acting as if servers, compute clusters and ultra-high-end workstations are built in ways that resemble consumer PCs whatsoever.
... also those are built by people who tend to read manuals *before* anything goes the way of Krakatoa.