
Intel Introduces the Max Series Product Family: Ponte Vecchio and Sapphire Rapids

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,570 (2.40/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
In advance of Supercomputing '22 in Dallas, Intel Corporation has introduced the Intel Max Series product family with two leading-edge products for high performance computing (HPC) and artificial intelligence (AI): Intel Xeon CPU Max Series (code-named Sapphire Rapids HBM) and Intel Data Center GPU Max Series (code-named Ponte Vecchio). The new products will power the upcoming Aurora supercomputer at Argonne National Laboratory, with updates on its deployment shared today.

The Xeon Max CPU is the first and only x86-based processor with high bandwidth memory, accelerating many HPC workloads without the need for code changes. The Max Series GPU is Intel's highest density processor, packing over 100 billion transistors into a 47-tile package with up to 128 gigabytes (GB) of high bandwidth memory. The oneAPI open software ecosystem provides a single programming environment for both new processors. Intel's 2023 oneAPI and AI tools will deliver capabilities to enable the Intel Max Series products' advanced features.



"To ensure no HPC workload is left behind, we need a solution that maximizes bandwidth, maximizes compute, maximizes developer productivity and ultimately maximizes impact. The Intel Max Series product family brings high bandwidth memory to the broader market, along with oneAPI, making it easy to share code between CPUs and GPUs and solve the world's biggest challenges faster." -Jeff McVeigh, Corporate Vice President and General Manager, Super Compute Group at Intel

Why It Matters
High performance computing (HPC) represents the vanguard of technology, employing the most advanced innovations at scale to solve science and society's biggest challenges, from mitigating the impacts of climate change to curing the world's deadliest diseases.

The Max Series products meet the needs of this community with scalable, balanced CPUs and GPUs, incorporating memory bandwidth breakthroughs and united by oneAPI, an open, standards-based, cross-architecture programming framework. Researchers and businesses will solve problems faster and more sustainably using Max Series products.

When It's Arriving
The Max Series products are slated to launch in January 2023. Executing on its commitments to customers, Intel is shipping blades with Max Series GPUs to Argonne National Laboratory to power the Aurora supercomputer and will deliver Xeon Max CPUs to Los Alamos National Laboratory, Kyoto University and other supercomputing sites.

What the Intel Xeon Max CPU Delivers
The Xeon Max CPU offers up to 56 performance cores, constructed of four tiles connected using Intel's embedded multi-die interconnect bridge (EMIB) technology, in a 350-watt envelope. Xeon Max CPUs contain 64 GB of high bandwidth in-package memory, as well as PCI Express 5.0 and CXL 1.1 I/O. Xeon Max CPUs will provide more than 1 GB of high bandwidth memory (HBM) capacity per core, enough to fit most common HPC workloads. The Max Series CPU provides up to 4.8x better performance than the competition on real-world HPC workloads.

  • 68% less power usage than an AMD Milan-X cluster for the same HPCG performance.
  • AMX extensions boost AI performance and deliver 8x peak throughput over AVX-512 for INT8 with INT32 accumulation operations.
  • Provides flexibility to run in different HBM and DDR memory configurations.
  • Workload benchmarks:
    • Climate modeling: 2.4x faster than AMD Milan-X on MPAS-A using only HBM.
    • Molecular dynamics: On DeePMD, 2.8x performance improvement against competing products with DDR memory.
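The ">1 GB of HBM per core" figure follows directly from the published numbers; a quick sanity check in Python (the 64 GB and 56-core figures are from the announcement above):

```python
# Published Xeon Max figures: 64 GB in-package HBM, up to 56 performance cores.
hbm_gb = 64
cores = 56

hbm_per_core_gb = hbm_gb / cores
print(f"{hbm_per_core_gb:.2f} GB of HBM per core")  # ~1.14 GB, matching the ">1 GB/core" claim
```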

What the Intel Max Series GPU Delivers
Max Series GPUs deliver up to 128 Xe-HPC cores, the new foundational architecture targeted at the most demanding computing workloads. Additionally, the Max Series GPU features:

  • 408 MB of L2 cache - the highest in the industry - and 64 MB of L1 cache to increase throughput and performance.
  • The only HPC/AI GPU with native ray tracing acceleration, designed to speed scientific visualization and animation.
  • Workload benchmarks:
    • Finance: 2.4x performance gain over NVIDIA's A100 on Riskfuel credit option pricing.
    • Physics: 1.5x improvement over A100 for NekRS virtual reactor simulations.

Max Series GPUs will be available in several form factors to address different customer needs:
  • Max Series 1100 GPU: A 300-watt double-wide PCIe card with 56 Xe cores and 48 GB of HBM2e memory. Multiple cards can be connected via Intel Xe Link bridges.
  • Max Series 1350 GPU: A 450-watt OAM module with 112 Xe cores and 96 GB of HBM.
  • Max Series 1550 GPU: Intel's maximum performance 600-watt OAM module with 128 Xe cores and 128 GB of HBM.
Beyond individual cards and modules, Intel will offer the Intel Data Center GPU Max Series subsystem with x4 GPU OAM carrier board and Intel Xe Link to enable high performance multi-GPU communication within the subsystem.
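Taking the SKU list above at face value, the aggregate numbers for the x4 OAM subsystem are easy to total up; a rough sketch assuming four 1550-class modules (the announcement does not say which SKU the subsystem ships with):

```python
# Max Series 1550 per the list above: 600 W OAM module, 128 Xe cores, 128 GB HBM.
modules = 4
watts, xe_cores, hbm_gb = 600, 128, 128

total_watts = modules * watts      # GPU power budget of one x4 subsystem
total_cores = modules * xe_cores   # aggregate Xe cores
total_hbm = modules * hbm_gb       # aggregate HBM capacity
print(total_watts, total_cores, total_hbm)  # 2400 W, 512 cores, 512 GB
```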

What Max Series Products Enable
In 2023, the Aurora supercomputer, currently under construction at Argonne National Laboratory, is expected to become the first supercomputer to exceed 2 exaflops of peak double-precision compute performance. Aurora will also be the first to showcase the power of pairing Max Series GPUs and CPUs in a single system, with more than 10,000 blades, each containing six Max Series GPUs and two Xeon Max CPUs.
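Those Aurora figures also give a rough feel for the per-blade math. A back-of-the-envelope estimate, using 10,000 blades as the stated lower bound and crediting all of the 2 exaflops to the GPUs (so the per-GPU number is an upper-bound sketch, not an Intel spec):

```python
peak_exaflops = 2.0
blades = 10_000          # "more than 10,000", so treat as a lower bound
gpus_per_blade = 6

flops_per_blade_tf = peak_exaflops * 1e6 / blades   # exaFLOPS -> teraFLOPS
flops_per_gpu_tf = flops_per_blade_tf / gpus_per_blade
print(f"~{flops_per_blade_tf:.0f} TF/blade, ~{flops_per_gpu_tf:.0f} TF/GPU (upper bound)")
```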

In advance of SC22, Argonne and Intel unveiled Sunspot, Aurora's Test Development System consisting of 128 production blades. Researchers from the Aurora Early Science Program will have access to the system beginning in late 2022.

The Max Series products will power several other HPC systems critical for national security and basic research, including Crossroads at Los Alamos National Laboratory, CTS-2 systems at Lawrence Livermore National Laboratory and Sandia National Laboratories, and Camphor3 at Kyoto University.

What's Next
At Supercomputing '22, Intel and its customers will showcase more than 40 upcoming system designs from 12 original equipment manufacturers using Max Series products. Attendees can explore demos showcasing the performance and capability of Max Series products for a range of AI and HPC applications, as well as hear from Intel architects, customers, and end-users about the power of Intel's platform solutions at the Intel booth, #2428. More information on Intel's activities at SC22 is available.

The Intel Data Center Max Series GPU, code-named Rialto Bridge, is the successor to the Max Series GPU and is intended to arrive in 2024 with improved performance and a seamless path to upgrade. Intel is then planning to release the next major architecture innovation to enable the future of HPC. The company's upcoming XPU, code-named Falcon Shores, will combine Xe and x86 cores on a single package. This groundbreaking new architecture will also have the flexibility to integrate new IPs from Intel and customers, manufactured using our IDM 2.0 model.


View at TechPowerUp Main Site | Source
 
Joined
Jul 15, 2020
Messages
1,021 (0.64/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.02mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F29e Intel baseline
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 50%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
Glue jokes in 3..2..1...
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Noice, get it out there before Genoa gets a preview tomorrow.

Going to be interesting to see how next year pans out.
 
Joined
Nov 26, 2021
Messages
1,641 (1.51/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Seems like a good entry for HPC. How are the 128 Xe cores distributed; are there 4 different GPU dies? It looks like Intel is going to beat AMD's MI300 to the market.
 

TheLostSwede

News Editor
Seems like a good entry for HPC. How are the 128 Xe cores distributed; are there 4 different GPU dies? It looks like Intel is going to beat AMD's MI300 to the market.
Intel didn't exactly go into a lot of details, but this might help.
Phoronix has some more details.

 
Joined
May 21, 2009
Messages
269 (0.05/day)
Processor AMD Ryzen 5 4600G @4300mhz
Motherboard MSI B550-Pro VC
Cooling Scythe Mugen 5 Black Edition
Memory 16GB DDR4 4133Mhz Dual Channel
Video Card(s) IGP AMD Vega 7 Renoir @2300mhz (8GB Shared memory)
Storage 256GB NVMe PCI-E 3.0 - 6TB HDD - 4TB HDD
Display(s) Samsung SyncMaster T22B350
Software Xubuntu 24.04 LTS x64 + Windows 10 x64
Apple leaves Intel so Intel decides to steal Apple’s crappy model names. That’ll show them.

They think they can sell (cough, scam) more with Apple-style presentations



:)
 
Joined
Aug 30, 2006
Messages
7,221 (1.08/day)
System Name ICE-QUAD // ICE-CRUNCH
Processor Q6600 // 2x Xeon 5472
Memory 2GB DDR // 8GB FB-DIMM
Video Card(s) HD3850-AGP // FireGL 3400
Display(s) 2 x Samsung 204Ts = 3200x1200
Audio Device(s) Audigy 2
Software Windows Server 2003 R2 as a Workstation now migrated to W10 with regrets.
Data Centre GPU. What an oxymoron. They should have rethought their strategy and product branding. Data Processing Unit, or Data Centre DPU, would be ok. But this is a device designed as a GPU, falling short on GPU performance, that can still do some data centre compute workloads. So rebrand it, fgs
 
Joined
May 2, 2017
Messages
7,762 (2.82/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
They have 300, 450 and 600 watt models, so is Intel also using the notorious 12VHPWR connector?
It will be interesting to see their kind of adapter.
 
They have 300, 450 and 600 watt models, so is Intel also using the notorious 12VHPWR connector?
It will be interesting to see their kind of adapter.
Do you think an accelerator card for servers and HPC will come with a PSU adapter? :wtf: These will be installed in built-for-purpose computers with suitable native power connections.
 
Joined
Jan 3, 2021
Messages
3,475 (2.46/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
Data Centre GPU. What an oxymoron. They should have rethought their strategy and product branding. Data Processing Unit, or Data Centre DPU, would be ok. But this is a device designed as a GPU, falling short on GPU performance, that can still do some data centre compute workloads. So rebrand it, fgs
Then what should the ray tracing units be "rebranded" to?

Do you think an accelerator card for servers and HPC will come with a PSU adapter? :wtf: These will be installed in built-for-purpose computers with suitable native power connections.
The 300W model will indeed take the shape of a PCIe card as they say, so that may be the amount Intel dares to push through the "high power" connector.
 
Joined
Feb 11, 2009
Messages
5,545 (0.96/day)
System Name Cyberline
Processor Intel Core i7 2600k -> 12600k
Motherboard Asus P8P67 LE Rev 3.0 -> Gigabyte Z690 Auros Elite DDR4
Cooling Tuniq Tower 120 -> Custom Watercoolingloop
Memory Corsair (4x2) 8gb 1600mhz -> Crucial (8x2) 16gb 3600mhz
Video Card(s) AMD RX480 -> RX7800XT
Storage Samsung 750 Evo 250gb SSD + WD 1tb x 2 + WD 2tb -> 2tb MVMe SSD
Display(s) Philips 32inch LPF5605H (television) -> Dell S3220DGF
Case antec 600 -> Thermaltake Tenor HTCP case
Audio Device(s) Focusrite 2i4 (USB)
Power Supply Seasonic 620watt 80+ Platinum
Mouse Elecom EX-G
Keyboard Rapoo V700
Software Windows 10 Pro 64bit
Do you think an accelerator card for servers and HPC will come with a PSU adapter? :wtf: These will be installed in built-for-purpose computers with suitable native power connections.
I hope not.
But the Max Series 1100 is a PCIe card at 300 W.
The NV A100 comes in a PCIe flavor, which is what Intel is after.
Why not put the other Max models on PCIe as well?
You already have "1 power connector to rule them all"...
 
The 300W model will indeed take the shape of a PCIe card as they say, so that may be the amount Intel dares to push through the "high power" connector.
I hope not.
But the Max Series 1100 is a PCIe card at 300 W.
The NV A100 comes in a PCIe flavor, which is what Intel is after.
Why not put the other Max models on PCIe as well?
You already have "1 power connector to rule them all"...
These being PCIe cards doesn't mean they'll use adapters. There is literally no direct relation between the two - PCIe cards are the standard for servers and HPC as well (the newer mezzanine standards are gaining adoption, but it'll still be a long time for them to be dominant). These will still be natively wired for 12VHPWR. I mean, both AMD and Nvidia make tons of PCIe accelerators for server and HPC as well, and you still don't see Radeon Instinct or Nvidia A100s in regular PCs outside of a few very very specialized workstations.
 
Joined
Apr 24, 2020
Messages
2,705 (1.62/day)
  • 408 MB of L2 cache - the highest in the industry - and 64 MB of L1 cache to increase throughput and performance.

Okay, I'm listening. That sounds pretty absurd, but useful!

GPUs traditionally have had very little cache. But AMD's Infinity Cache shows that it works for video games, and I think Intel's move here to have a ton of cache will also help the supercomputer world.
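For a sense of scale, a rough sketch of how large a double-precision working set 408 MB of L2 can hold (the 408 MB figure is from the announcement; the square-matrix sizing is just illustrative):

```python
l2_bytes = 408 * 1024**2   # 408 MB of L2, per the announcement
fp64_bytes = 8             # size of one double-precision value

elements = l2_bytes // fp64_bytes
n = int(elements ** 0.5)   # side length of the largest square FP64 matrix that fits
print(f"{elements/1e6:.0f}M doubles, e.g. a ~{n}x{n} FP64 matrix resident in L2")
```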
 
These being PCIe cards doesn't mean they'll use adapters. There is literally no direct relation between the two - PCIe cards are the standard for servers and HPC as well (the newer mezzanine standards are gaining adoption, but it'll still be a long time for them to be dominant). These will still be natively wired for 12VHPWR. I mean, both AMD and Nvidia make tons of PCIe accelerators for server and HPC as well, you still don't see Radeon Instinct or Nvidia A100s in regular PCs outside of a few very very specialized workstations.
Sure, such adapters have no business in servers and workstations.
 
Joined
Oct 27, 2009
Messages
1,179 (0.21/day)
Location
Republic of Texas
System Name [H]arbringer
Processor 4x 61XX ES @3.5Ghz (48cores)
Motherboard SM GL
Cooling 3x xspc rx360, rx240, 4x DT G34 snipers, D5 pump.
Memory 16x gskill DDR3 1600 cas6 2gb
Video Card(s) blah bigadv folder no gfx needed
Storage 32GB Sammy SSD
Display(s) headless
Case Xigmatek Elysium (whats left of it)
Audio Device(s) yawn
Power Supply Antec 1200w HCP
Software Ubuntu 10.10
Benchmark Scores http://valid.canardpc.com/show_oc.php?id=1780855 http://www.hwbot.org/submission/2158678 http://ww
Given they are sticking to 300 W, they will continue to use the server industry standard 8-pin 12 V EPS power connector (4 power, 4 ground). Nvidia has used it since the K80 and continues to use it on the PCIe H100. AMD finally changed to a single 8-pin 12 V EPS with the MI210, but used 6/8-pin or twin 8-pin PCIe power before that.
I don't expect to see the PCIe 12VHPWR connector in servers. For anything over 300 W they will use OAM or SXM. Intel's OAM is 450/600 W, AMD's is 560 W for the MI250X, Intel's Gaudi 2 is 600 W, and Nvidia's SXM5 is 700 W.
And Intel is planning an 800 W OAM for next gen.



I hope the Xe cores can keep up with the Gaudi 2 cores from their Habana Labs acquisition.

Overall an interesting space... Looking forward to Genoa reveal in 7.5hrs. Hopefully the MI300 gets some time.
 
Given they are sticking to 300 W, they will continue to use the server industry standard 8-pin 12 V EPS power connector (4 power, 4 ground). Nvidia has used it since the K80 and continues to use it on the PCIe H100. AMD finally changed to a single 8-pin 12 V EPS with the MI210, but used 6/8-pin or twin 8-pin PCIe power before that.
I don't expect to see the PCIe 12VHPWR connector in servers. For anything over 300 W they will use OAM or SXM. Intel's OAM is 450/600 W, AMD's is 560 W for the MI250X, Intel's Gaudi 2 is 600 W, and Nvidia's SXM5 is 700 W.
And Intel is planning an 800 W OAM for next gen.

View attachment 269379

I hope the Xe cores can keep up with the Gaudi 2 cores from their Habana Labs acquisition.
View attachment 269380
Overall an interesting space... Looking forward to Genoa reveal in 7.5hrs. Hopefully the MI300 gets some time.
My impression is that server/HPC is the main driving force behind the 12VHPWR connector, precisely to allow for PCIe AICs to exceed the 336W rating of an 8-pin EPS connector without going dual connector (which would both make cable routing a mess and obstruct airflow through the passive coolers used on these cards). The various mezzanine card standards are definitely the main focus for super-high power implementations, but there's also a lot of push for compatibility with existing infrastructure without having to move to an entirely new server layout, which is where PCIe shines.
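The headroom argument is easy to put in numbers. A quick sketch comparing the two connectors, using the 336 W 8-pin EPS rating mentioned above and the 600 W ceiling from the 12VHPWR spec (4 and 6 +12 V power pins respectively):

```python
# Per-connector ceilings: 8-pin EPS (4 power pins) vs 12VHPWR (6 power pins).
connectors = [
    ("EPS 8-pin", 336, 4),   # rating cited above
    ("12VHPWR",   600, 6),   # spec maximum
]

for name, watts, pins in connectors:
    amps = watts / 12                      # total current at 12 V
    print(f"{name}: {watts} W, {amps:.1f} A total, {amps/pins:.2f} A per power pin")
```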
 

TheLostSwede

News Editor
Given they are sticking to 300 W, they will continue to use the server industry standard 8-pin 12 V EPS power connector (4 power, 4 ground). Nvidia has used it since the K80 and continues to use it on the PCIe H100. AMD finally changed to a single 8-pin 12 V EPS with the MI210, but used 6/8-pin or twin 8-pin PCIe power before that.
I don't expect to see the PCIe 12VHPWR connector in servers. For anything over 300 W they will use OAM or SXM. Intel's OAM is 450/600 W, AMD's is 560 W for the MI250X, Intel's Gaudi 2 is 600 W, and Nvidia's SXM5 is 700 W.
And Intel is planning an 800 W OAM for next gen.

View attachment 269379

I hope the Xe cores can keep up with the Gaudi 2 cores from their Habana Labs acquisition.
View attachment 269380
Overall an interesting space... Looking forward to Genoa reveal in 7.5hrs. Hopefully the MI300 gets some time.
It seems like Intel went with the new connector.
 
It seems like Intel went with the new connector.
That's what I meant.
And if so, spaghetti adapters will follow :)
 
That's what I meant.
And if so, spaghetti adapters will follow :)
No they won't. These accelerators will go into built-for-purpose servers (and a very few workstations), all of which will have built-for-purpose PSUs with native 12VHPWR cabling. If you're buying a $5,000-10,000 accelerator card, you're also buying a PSU that natively supports what that system needs.

Please stop acting as if servers, compute clusters and ultra-high-end workstations are built in ways that resemble consumer PCs whatsoever.
 