
AVX-512 Doubles Intel 5th Gen "Emerald Rapids" Xeon Processor Performance, Up to 10x Improvement in AI Workloads

I'm just not convinced they explicitly wanted to segment AVX-512 out after investing so much time and money into mainstream hardware and software enablement in their awesome libraries.
So why doesn't Meteor Lake support AVX-512? I don't think it even supports AVX10. It has a new E-core, so this difference could've been solved. To follow your thinking and answer my own question, probably because Intel felt AVX-512 required too much silicon for an E-core and AVX10 didn't exist before Meteor Lake was finalized.
Most Android phones nowadays have SoCs containing mixed architectures...
This was addressed by ncrs but I'll add this: the instruction set architecture (ISA) is the language software and hardware use to communicate with one another. A software developer will tell a piece of software to check once whether the hardware can speak AVX-512, and it probably won't check again. The OS constantly moves that software between cores, and no matter how smart the OS is, the software will crash if it tries to speak AVX-512 after moving to a core that doesn't speak it. The software is blind to the microarchitecture, which is the underlying logic that actually does the work requested through the ISA. (Evidently the logic required to execute an AVX-512 instruction is pretty complex.)
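To make the "check once" pattern concrete, here is a minimal sketch (my own illustration, not from the thread) in C using GCC/Clang's __builtin_cpu_supports. It queries the CPU a single time for AVX-512F and picks a code path; on a hybrid chip where the scheduler can later migrate the thread to an E-core, that one-time answer is exactly what goes stale.

/* Minimal sketch: the one-time AVX-512 capability check (GCC/Clang on x86). */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();  /* populates the feature cache; harmless to call here */
    if (__builtin_cpu_supports("avx512f"))
        puts("AVX-512F reported: taking the 512-bit code path");
    else
        puts("No AVX-512F: falling back to an AVX2/SSE path");
    return 0;
}

The check is ultimately a cached CPUID query; nothing re-validates it if the OS later schedules the thread onto a core that lacks the instructions, which is the crash scenario described above.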
 
I don't think Intel had any reason to take away AVX-512 when E-cores are disabled other than product segmentation, unless it was literally removed from the die design to make for a smaller die.
Then that means the 12600 non-K and below could have AVX-512 but the ones above could not?

So in certain workloads a 90-dollar i3 could wipe the floor with a nearly 800-dollar i9? You really think Intel would let that fly? :D
 
So why doesn't Meteor Lake support AVX-512?
Easy
1. Product positioning.
2. The way Intel implements, or used to implement, AVX-512 drew a lot of power when in use. Unlike Intel's, AMD's implementation has already been shown in tests to increase power consumption only negligibly.
3. Meteor Lake is a mobile series with which Intel bets on maximum performance from the graphics chiplet while staying within the power budget, in order to gain an advantage in battery runtime. For this purpose, it has even reduced the IPC slightly…
 
So why doesn't Meteor Lake support AVX-512? I don't think it even supports AVX10. It has a new E-core, so this difference could've been solved. To follow your thinking and answer my own question, probably because Intel felt AVX-512 required too much silicon for an E-core and AVX10 didn't exist before Meteor Lake was finalized.
From GCC source code we know (P_PROC_AVX2 instead of P_PROC_AVX512F) that Arrow Lake, Lunar Lake and Panther Lake all won't have AVX-512. At least for now - AVX10 is a complicated issue in its own right, despite being meant to disentangle the AVX mess. I don't think it's even fully upstreamed and wired up in GCC.
Another reason is that Meteor Lake implements another level of E-cores called L(ow) P(ower) E-cores.
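As an aside on how that GCC plumbing ends up being used: below is a hedged sketch (my own example, not from the thread) of GCC function multiversioning via the target_clones attribute, which builds AVX-512F, AVX2 and baseline versions of one function and lets a runtime resolver pick whichever the CPU reports; as far as I understand, priorities like P_PROC_AVX512F are part of the machinery GCC uses to rank such versions. The function scale_array is purely illustrative.

/* Hedged sketch: GCC function multiversioning across AVX-512F / AVX2 / baseline.
   Assumes GCC on a Linux/glibc target with ifunc support. */
#include <stdio.h>
#include <stddef.h>

__attribute__((target_clones("avx512f", "avx2", "default")))
void scale_array(float *x, size_t n, float factor)
{
    /* Each clone is auto-vectorized for its ISA; a resolver picks one at load time. */
    for (size_t i = 0; i < n; i++)
        x[i] *= factor;
}

int main(void)
{
    float data[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    scale_array(data, 8, 0.5f);
    printf("%f\n", data[0]);  /* 0.500000 regardless of which clone ran */
    return 0;
}

On an AVX-512 CPU the resolver picks the 512-bit clone; on a chip without it (as the GCC definitions above suggest for Arrow Lake and Lunar Lake) it falls back to the AVX2 clone.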
 
Easy
1. Product positioning.
2. The way Intel implements, or used to implement, AVX-512 drew a lot of power when in use. Unlike Intel's, AMD's implementation has already been shown in tests to increase power consumption only negligibly.
3. Meteor Lake is a mobile series with which Intel bets on maximum performance from the graphics chiplet while staying within the power budget, in order to gain an advantage in battery runtime. For this purpose, it has even reduced the IPC slightly…
Hi,
Yep, a lot of power and a lot of heat.
 
2. The way Intel implements, or used to implement, AVX-512 drew a lot of power when in use. Unlike Intel's, AMD's implementation has already been shown in tests to increase power consumption only negligibly.
The Phoronix article linked by this article shows that the power consumption of AVX-512 in Emerald Rapids isn't bad. Emerald Rapids uses Raptor Cove cores, which are just Golden Cove cores (released two years ago) with more cache, and the Redwood Cove architecture in Meteor Lake is just a die shrink of Raptor Cove, so Intel would've known that AVX-512 wouldn't be an efficiency problem in Meteor Lake.
3. Meteor Lake is a mobile series with which Intel bets on maximum performance from the graphics chiplet while staying within the power budget, in order to gain an advantage in battery runtime. For this purpose, it has even reduced the IPC slightly…
Meteor Lake wasn't designed to be mobile-only. As recently as last summer Intel was updating Linux code to support Meteor Lake-S, the desktop version. It seems the decision to cut the desktop line came very late; perhaps it couldn't reach high enough clock speeds to compete with Raptor Lake-S. I've never seen evidence of an IPC decrease in Meteor Lake, nor any evidence of an architectural difference between Raptor Cove and Redwood Cove. Surely if an architecture update had been made to improve efficiency, Intel would've brought it up? Actually isolating a reduction in instructions per clock cycle would require comparing multiple Raptor Lake laptops to multiple Meteor Lake laptops across multiple benchmarks while monitoring frequency, looking for a pattern of similar frequency but lower performance. No one has done this test.
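For what it's worth, collecting IPC on a single machine is straightforward; the missing piece is doing it systematically across comparable Raptor Lake and Meteor Lake laptops. Below is a minimal Linux-only sketch (my own illustration, not a test anyone here has run) that counts retired instructions and core cycles around a placeholder workload via perf_event_open; perf stat -e instructions,cycles gives the same numbers without any code.

/* Minimal sketch: measure instructions, cycles and IPC for a workload on Linux. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_counter(uint64_t config, int group_fd)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = config;
    attr.disabled = (group_fd == -1);   /* only the group leader starts disabled */
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;
    return (int)syscall(SYS_perf_event_open, &attr, 0, -1, group_fd, 0);
}

int main(void)
{
    int cycles = open_counter(PERF_COUNT_HW_CPU_CYCLES, -1);
    int instrs = open_counter(PERF_COUNT_HW_INSTRUCTIONS, cycles);
    if (cycles < 0 || instrs < 0) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(cycles, PERF_EVENT_IOC_RESET, PERF_IOC_FLAG_GROUP);
    ioctl(cycles, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);

    volatile uint64_t sum = 0;               /* placeholder workload */
    for (uint64_t i = 0; i < 100000000ULL; i++)
        sum += i;

    ioctl(cycles, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

    uint64_t c = 0, n = 0;
    if (read(cycles, &c, sizeof(c)) != (ssize_t)sizeof(c) ||
        read(instrs, &n, sizeof(n)) != (ssize_t)sizeof(n)) {
        perror("read");
        return 1;
    }
    printf("instructions=%llu cycles=%llu IPC=%.2f\n",
           (unsigned long long)n, (unsigned long long)c,
           c ? (double)n / (double)c : 0.0);
    return 0;
}

Running the same real benchmarks this way on both platforms, and recording the average frequency alongside, is essentially the comparison described above: similar frequency but lower IPC on Meteor Lake would be the tell.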
 