Monday, January 3rd 2022

Intel to Disable Rudimentary AVX-512 Support on Alder Lake Processors

Intel is reportedly disabling the rudimentary AVX-512 instruction-set support on its 12th Gen Core "Alder Lake" processors using a firmware/ME update, reports Igor's Lab. Intel does not advertise AVX-512 for Alder Lake, even though the instruction set was much publicized for a couple of its past-generation client-segment chips, namely the 11th Gen "Rocket Lake" desktop processors and the 10th Gen "Cascade Lake-X" HEDT processors. The company will likely make AVX-512 a feature that sets apart its next-gen HEDT processors, which are derived from Sapphire Rapids, its upcoming enterprise microarchitecture.

AVX-512 is technically not advertised for Alder Lake, but software that calls for these instructions can utilize them on certain 12th Gen Core processors when paired with older versions of the Intel ME firmware. The ME version Intel releases to OEMs and motherboard vendors alongside its upcoming 65 W Core desktop processors and the Alder Lake-P mobile processors will prevent AVX-512 from being exposed to software. Intel's reason for deprecating what little client-relevant AVX-512 support it had on Core processors could have to do with energy efficiency as much as with the lukewarm reception from client software developers. The instruction set is more relevant to the HPC and cloud-computing markets.
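The mechanics matter here: well-behaved software only issues AVX-512 instructions after a runtime CPUID check, so firmware that stops reporting the feature effectively hides it even though the silicon is present. A minimal sketch of such a gate, assuming a GCC or Clang toolchain (__builtin_cpu_supports is their compiler built-in, not a portable API):

    #include <cstdio>

    // Minimal runtime gate for AVX-512, the check most vectorized software
    // performs before dispatching to an AVX-512 code path. When the newer
    // ME firmware hides the feature from CPUID, this test simply fails and
    // the program silently takes its AVX2/SSE fallback.
    int main() {
        __builtin_cpu_init();  // populate the compiler's CPU-feature cache
        if (__builtin_cpu_supports("avx512f")) {
            std::puts("AVX-512F reported: fast path available");
        } else {
            std::puts("AVX-512F not reported: using the AVX2/SSE fallback");
        }
        return 0;
    }
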

Many Thanks to TheoneandonlyMrK for the tip.
Source: Igor's Lab

49 Comments on Intel to Disable Rudimentary AVX-512 Support on Alder Lake Processors

#1
Crackong
No "Real world benchmark" running AVX-512 ?
#2
AlwaysHope
Yep, there is something fundamentally weird when a modern desktop CPU has to downclock in order to run an instruction set.
#3
Ferrum Master
"I Hope AVX512 Dies A Painful Death"
-Linus Torvalds

And it does.

I remember the useless discussion around that PS3 emulator news; it isn't the first time Intel has omitted an extension for one reason or another. Those extensions are a waste of silicon in a consumer environment; if a coder decides to rely on them, it really shows that he doesn't grasp who his real user base is, and hides his inability to code properly using alternative aids, be it CUDA or OpenCL.
#4
ViperXTR
Ferrum Master"I Hope AVX512 Dies A Painful Death"
-Linus Torvalds

And it does.

I remember the useless discussion with that PS3 emulator news, it ain't the first time intel does omit extension due to some reasons. Those extensions are waste of silicon for consumer environment, if a coder decided to rely on it really shows how he doesn't grasp what his real user base is and hides his inability to code normally using alternative aids, let it be CUDA or openCL.
lol I remember that, I was there trying to see who's into emulation, and it just turned out "this and that is bad," etc. heh
#5
Jism
Ferrum Master"I Hope AVX512 Dies A Painful Death"
-Linus Torvalds

And it does.

I remember the useless discussion with that PS3 emulator news, it ain't the first time intel does omit extension due to some reasons. Those extensions are waste of silicon for consumer environment, if a coder decided to rely on it really shows how he doesn't grasp what his real user base is and hides his inability to code normally using alternative aids, let it be CUDA or openCL.
God, the PS3 is a RISC-based platform with an undocumented GPU. You can't just use OpenCL or CUDA to "emulate" a specific console and its hardware.
#6
AusWolf
So then 11th gen supports an instruction set that 12th gen does not? Weird. :wtf:
#7
efikkan
btarunr: Intel's reason for deprecating what little client-relevant AVX-512 support it had on Core processors could have to do with energy efficiency as much as with the lukewarm reception from client software developers.
Most certainly not.
Disabling support for an instruction does not improve energy efficiency.

Intel screwed up by having different ISA support on the slow and fast cores of Alder Lake, which I warned would be a headache.

BTW, AVX-512 support can be enabled on certain motherboards, with slow cores disabled, at least with early firmware.
AlwaysHope: Yep, there is something fundamentally weird when a modern desktop CPU has to downclock in order to run an instruction set.
So what if the core drops a few hundred MHz? It's still more than twice as fast as AVX2. You need to comprehend that performance is what matters.
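efikkan's arithmetic is easy to make concrete: an AVX-512 loop retires twice as many lanes per instruction as AVX2, so even a ~10% clock penalty leaves a large net win on vector-bound code. A hedged sketch (compile with -mavx512f; the function names are illustrative, and executing the 512-bit path on a chip that hides AVX-512 raises SIGILL):

    #include <immintrin.h>
    #include <cstddef>
    #include <cstdio>

    // Scale an array by a constant: 8 float lanes per AVX2 instruction
    // versus 16 per AVX-512 instruction. Half the iterations at, say,
    // 0.9x the clock is still roughly a 1.8x win on loops like this.
    void scale_avx2(float* x, float a, std::size_t n) {
        const __m256 va = _mm256_set1_ps(a);
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8)
            _mm256_storeu_ps(x + i, _mm256_mul_ps(va, _mm256_loadu_ps(x + i)));
        for (; i < n; ++i) x[i] *= a;  // scalar tail
    }

    void scale_avx512(float* x, float a, std::size_t n) {
        const __m512 va = _mm512_set1_ps(a);
        std::size_t i = 0;
        for (; i + 16 <= n; i += 16)
            _mm512_storeu_ps(x + i, _mm512_mul_ps(va, _mm512_loadu_ps(x + i)));
        for (; i < n; ++i) x[i] *= a;  // scalar tail
    }

    int main() {
        float buf[32];
        for (int i = 0; i < 32; ++i) buf[i] = float(i);
        scale_avx512(buf, 2.0f, 32);  // faults where AVX-512 is hidden
        std::printf("%g\n", buf[31]); // 62
        return 0;
    }
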
#8
Unregistered
It is reversible; there are "apparently" BIOS versions that have been modified to enable AVX-512. This is a very interesting read if you're interested.

www.igorslab.de/en/intel-deactivated-avx-512-on-alder-lake-but-fully-questionable-interpretation-of-efficiency-news-editorial/
This too
www.igorslab.de/en/efficiency-secret-tip-avx-512-on-alder-lake-the-returned-command-set-in-practice-test/

I reckon newer ADL CPUs will have AVX-512 fused off/disabled.
#9
bug
It's OK that they settled the matter rather quickly. It's not so OK that they didn't settle it before wasting die area on AVX512 circuitry. Intel would have got more CPUs per wafer, and we would have got lower prices...
#10
Unregistered
Intel lied and said it was fused off, when it isn't. I am betting newer ADL dies will have it fused off, so they don't have to force motherboard makers to release a BIOS to disable it. Raft of BIOS updates coming, methinks.

Dr. Ian Cutress:
Just to clarify on Roman's video here: Intel's engineers (Ari) specifically stated at Architecture Day (Aug) that AVX512 was fused off. I explicitly asked whether AVX512 would work in any instance, and they said no.
#11
bug
Tigger: Intel lied and said it was fused off, when it isn't. I am betting newer ADL dies will have it fused off, so they don't have to force motherboard makers to release a BIOS to disable it. Raft of BIOS updates coming, methinks.

Dr. Ian Cutress:
Just to clarify on Roman's video here: Intel's engineers (Ari) specifically stated at Architecture Day (Aug) that AVX512 was fused off. I explicitly asked whether AVX512 would work in any instance, and they said no.
It's entirely possible it was actually fused off in some engineering samples and later on someone else decided it wasn't worth the effort. Or used the wrong blueprint.
#12
Ikaruga
Ferrum Master"I Hope AVX512 Dies A Painful Death"
-Linus Torvalds

And it does.

I remember the useless discussion with that PS3 emulator news, it ain't the first time intel does omit extension due to some reasons. Those extensions are waste of silicon for consumer environment, if a coder decided to rely on it really shows how he doesn't grasp what his real user base is and hides his inability to code normally using alternative aids, let it be CUDA or openCL.
First of all, Happy New Year to all of you; I hope everybody has a good one.

To the topic: with all due respect, Linus Torvalds had no idea what he was talking about, and his followers just repeat him without thinking. In my humble opinion, AVX512 is an instruction-set development we should all thank Intel for, as they are trying to improve things. It might not be the best first try, and it could surely be improved, but it is definitely not something that should "die a painful death".
It has register orthogonality, so code using the new instructions can use them on 128- or 256-bit registers too, with no down-clocking needed, plus awesome enhanced vector extensions, embedded broadcasting, mask registers, and the list could go on for quite a while.
AVX512 is not only about power-hungry, CPU-downclocking floating-point calculations; it is so much more.

People cry for innovation day and night, and when they get some, they wish for its death... idontwanttoliveonthisplanetanymore.jpg
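Ikaruga's point about mask registers and the 128/256-bit encodings is concrete: with the AVX-512VL extension, the new instruction forms work on 256-bit registers, sidestepping the classic 512-bit downclock. A minimal sketch, assuming a GCC/Clang toolchain and compiled with -mavx512f -mavx512vl (the function name is made up for illustration):

    #include <immintrin.h>
    #include <cstdio>

    // AVX-512VL in action on a 256-bit register: the comparison writes a
    // k-mask, and the masked move zeroes only the flagged lanes -- no blend
    // vector needed, and no 512-bit units lit up.
    __m256 clamp_negatives_to_zero(__m256 v) {
        __mmask8 keep = _mm256_cmp_ps_mask(v, _mm256_setzero_ps(), _CMP_GE_OQ);
        return _mm256_maskz_mov_ps(keep, v);  // masked-off lanes become 0.0f
    }

    int main() {
        alignas(32) float in[8] = {-3, 1, -2, 4, -5, 6, -7, 8}, out[8];
        _mm256_store_ps(out, clamp_negatives_to_zero(_mm256_load_ps(in)));
        for (float f : out) std::printf("%g ", f);  // 0 1 0 4 0 6 0 8
        std::printf("\n");
        return 0;
    }
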
#13
TheoneandonlyMrK
Ikaruga: First of all, Happy New Year to all of you; I hope everybody has a good one.

To the topic: with all due respect, Linus Torvalds had no idea what he was talking about, and his followers just repeat him without thinking. In my humble opinion, AVX512 is an instruction-set development we should all thank Intel for, as they are trying to improve things. It might not be the best first try, and it could surely be improved, but it is definitely not something that should "die a painful death".
It has register orthogonality, so code using the new instructions can use them on 128- or 256-bit registers too, with no down-clocking needed, plus awesome enhanced vector extensions, embedded broadcasting, mask registers, and the list could go on for quite a while.
AVX512 is not only about power-hungry, CPU-downclocking floating-point calculations; it is so much more.

People cry for innovation day and night, and when they get some, they wish for its death... idontwanttoliveonthisplanetanymore.jpg
Dude, it makes CPUs hot when people actually use it; we can't have that :p. How can it be good? Massive sarcasm and jest; I agree with your points.

I do like Intel leaning on this RUDIMENTARY (ahahaa, my arse) statement; the information gleaned from the web makes them seem disingenuous about this. The E cores had none, and the P cores have third-gen AVX512, no?! And that's "rudimentary"? Whatever, Intel.

They're just after segregation again, the gits.
#14
bug
Ikaruga: First of all, Happy New Year to all of you; I hope everybody has a good one.

To the topic: with all due respect, Linus Torvalds had no idea what he was talking about, and his followers just repeat him without thinking. In my humble opinion, AVX512 is an instruction-set development we should all thank Intel for, as they are trying to improve things. It might not be the best first try, and it could surely be improved, but it is definitely not something that should "die a painful death".
It has register orthogonality, so code using the new instructions can use them on 128- or 256-bit registers too, with no down-clocking needed, plus awesome enhanced vector extensions, embedded broadcasting, mask registers, and the list could go on for quite a while.
AVX512 is not only about power-hungry, CPU-downclocking floating-point calculations; it is so much more.

People cry for innovation day and night, and when they get some, they wish for its death... idontwanttoliveonthisplanetanymore.jpg
Especially since, in the same post, Linus actually said that instead of AVX512, Intel should have spent more resources on bringing ECC DRAM to the masses. ECC DRAM is a non-issue for the average consumer device; someone did the numbers, and it turns out your typical desktop (i.e. one not equipped with 128 GB+ RAM) is hit by about one bit flip per year.
Not saying a power-hungry instruction set is what the masses needed either. Just that I disagree with Linus on this one.
#15
AusWolf
TheoneandonlyMrK: Dude, it makes CPUs hot when people actually use it; we can't have that :p. How can it be good? Massive sarcasm and jest; I agree with your points.
Yeah! CPUs that need cooling? What kind of nonsense is that? :roll:

On a more serious note, bonkers power limits cook CPUs, not AVX-512. Instead of thinking about disabling an instruction set, I'd much rather recommend enforcing a power limit that actually makes sense.
#16
TheoneandonlyMrK
AusWolf: Yeah! CPUs that need cooling? What kind of nonsense is that? :roll:
See the joke emoji; note I said it was sarcasm?!

I'm a believer in "it's hot, or you're wasting it." I don't mind it hot, in use, doing something useful; did that not come across?
I'm OK with innovative new tech like AVX512.

Not OK with fusing off working features, or labelling them rudimentary to lessen the blow, though.
#17
Ferrum Master
Ikaruga: AVX512 is not only about power-hungry, CPU-downclocking floating-point calculations; it is so much more.
The first thing: it is actually a marketing tool, and an attempt to fragment the CPU instruction zoo even further.

This particular case illustrates the circus going on at Intel, where some engineering head doesn't know what their marketing arse is doing.

It is a bad show. AVX512 is not meant to be run all the time, but in reality some APIs try to hammer it all the time, and the concept fails at its core; thus Intel has to do something about it, i.e. disable it, meanwhile creating more fragmentation and making their Xeon offerings more appealing for the rare peeps who actually need the instruction set. And by no means will they allow them to save money on cheaper, mere-mortal desktop offerings that could do the same.

As for the others... AMD still doesn't bother, and neither does Apple with its productivity suites on its own silicon; they ditched Intel, with all its fancy AVX512, for a reason.

Torvalds is right most of the time. He speaks honestly about the induced corporate nonsense, be it Intel or NVIDIA.
#18
AusWolf
TheoneandonlyMrK: See the joke emoji; note I said it was sarcasm?!
I know. I just tried to join in on the laugh. :ohwell:
TheoneandonlyMrK: I'm a believer in "it's hot, or you're wasting it." I don't mind it hot, in use, doing something useful; did that not come across?
I believe in running it as hot as your cooling allows. A locked 65 W CPU with stock cooling is just as good in my eyes as an unlocked one at 200+ W with decent liquid cooling, although I always try to aim for the latter. That's why I think Intel's PL values and AMD's PPT are the most useful inventions of recent CPU/motherboard evolution. And that's also why I'm saying it's the incorrectly configured power targets that make CPUs hot, not AVX-512.
TheoneandonlyMrK: I'm OK with innovative new tech like AVX512.

Not OK with fusing off working features, or labelling them rudimentary to lessen the blow, though.
I totally agree.
#19
TheinsanegamerN
AusWolf: Yeah! CPUs that need cooling? What kind of nonsense is that? :roll:

On a more serious note, bonkers power limits cook CPUs, not AVX-512. Instead of thinking about disabling an instruction set, I'd much rather recommend enforcing a power limit that actually makes sense.
What, Intel make a TDP system that actually makes sense and is enforced out of the box? What kind of insanity is this? Every board maker should be free to use as much power as they want to get dem benchmark scores!
#21
bug
TheinsanegamerN: What, Intel make a TDP system that actually makes sense and is enforced out of the box? What kind of insanity is this? Every board maker should be free to use as much power as they want to get dem benchmark scores!
The moment Intel published one figure for TDP and adhered to that, you'd be complaining about Intel artificially limiting the performance of their CPUs.
TDP became complicated the moment CPUs learned to adjust frequency (and voltage) on the fly. It just cannot be reduced to a single number anymore.
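bug's point can be put in numbers with the usual first-order model, dynamic power roughly C * V^2 * f: the same die legitimately spans a several-fold power range as voltage and frequency move, which is why no single TDP figure can describe it. A sketch where the capacitance constant and V/f points are made-up placeholders, not measurements of any real part:

    #include <cstdio>

    // First-order model of dynamic CPU power, P ~ C * V^2 * f. All the
    // constants here are hypothetical; the point is the spread, not the
    // absolute numbers.
    int main() {
        const double c_eff = 2.0e-8;  // hypothetical effective capacitance (F)
        const struct { double volts, ghz; } pts[] = {
            {0.80, 2.0},  // efficiency-oriented operating point
            {1.10, 4.0},  // nominal boost
            {1.35, 5.0},  // bleeding-edge boost
        };
        for (const auto& p : pts) {
            double watts = c_eff * p.volts * p.volts * (p.ghz * 1e9);
            std::printf("%.2f V @ %.1f GHz -> ~%.0f W dynamic\n",
                        p.volts, p.ghz, watts);
        }
        return 0;
    }
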
#22
TheinsanegamerN
bug: The moment Intel published one figure for TDP and adhered to that, you'd be complaining about Intel artificially limiting the performance of their CPUs.
TDP became complicated the moment CPUs learned to adjust frequency (and voltage) on the fly. It just cannot be reduced to a single number anymore.
Apparently you've missed my criticism of Intel, AMD, AND NVIDIA for pushing their parts out of the efficiency sweet spot (and eliminating OC headroom for us) for years now.

Power use can easily be limited to one number. Mobile parts, T-series parts, and even normal desktop parts have their power draw limited. In fact, such limits are a thing according to Intel. Intel, however, is very mushy on the actual PL2/PL3 power-draw and time limits, things that should be enforced by default and then turned off for OC, not the other way around. Most importantly, they need to be consistent; right now all these board makers can be "in spec" yet have wildly different power draws and time limits.

This wasn't an issue before the boost wars; boost timing and power-draw limits were pretty clear in the Nehalem/Sandy Bridge era. AMD today is still more stringent on how much juice Ryzen can pull to boost. Intel has been playing fast and loose for years, and it's a headache to keep track of.
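For reference, the PL1/PL2/Tau scheme being argued about reduces to a very small model in its simplified form ("PL2 for Tau seconds, then PL1"; real silicon uses a moving average). The sketch below contrasts hypothetical reference-style settings with "unlimited" board-vendor defaults, which is exactly the inconsistency TheinsanegamerN describes:

    #include <cstdio>
    #include <initializer_list>

    // Simplified PL1/PL2/Tau model: the package may draw PL2 watts until
    // the Tau budget expires, then falls back to PL1. All settings below
    // are illustrative, not any vendor's actual firmware defaults.
    double allowed_watts(double t, double pl1, double pl2, double tau) {
        return (t < tau) ? pl2 : pl1;
    }

    int main() {
        for (double t : {1.0, 30.0, 120.0}) {
            std::printf("t=%4.0fs  reference-style: %5.1f W   'unlimited' board: %5.1f W\n",
                        t,
                        allowed_watts(t, 125.0, 241.0, 56.0),  // PL1, PL2, Tau
                        allowed_watts(t, 241.0, 241.0, 1e9));  // PL1 raised to PL2
        }
        return 0;
    }

Same CPU, same workload: one configuration falls back to 125 W after 56 seconds, the other holds 241 W indefinitely, yet both can claim the same "TDP."
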
#23
bug
TheinsanegamerN: Apparently you've missed my criticism of Intel, AMD, AND NVIDIA for pushing their parts out of the efficiency sweet spot (and eliminating OC headroom for us) for years now.

Power use can easily be limited to one number. Mobile parts, T-series parts, and even normal desktop parts have their power draw limited. In fact, such limits are a thing according to Intel. Intel, however, is very mushy on the actual PL2/PL3 power-draw and time limits, things that should be enforced by default and then turned off for OC, not the other way around. Most importantly, they need to be consistent; right now all these board makers can be "in spec" yet have wildly different power draws and time limits.

This wasn't an issue before the boost wars; boost timing and power-draw limits were pretty clear in the Nehalem/Sandy Bridge era. AMD today is still more stringent on how much juice Ryzen can pull to boost. Intel has been playing fast and loose for years, and it's a headache to keep track of.
It all comes back to what I said: if a beefy heatsink will dissipate 200 W+ and the CPU can cope with that, why not let it? It's wasted HP otherwise.
But you can't just publish the highest figure; the vast majority of users don't run high-end heatsinks, so they'll never see that.
#24
Ikaruga
Ferrum Master: The first thing: it is actually a marketing tool, and an attempt to fragment the CPU instruction zoo even further.

This particular case illustrates the circus going on at Intel, where some engineering head doesn't know what their marketing arse is doing.

It is a bad show. AVX512 is not meant to be run all the time, but in reality some APIs try to hammer it all the time, and the concept fails at its core; thus Intel has to do something about it, i.e. disable it, meanwhile creating more fragmentation and making their Xeon offerings more appealing for the rare peeps who actually need the instruction set. And by no means will they allow them to save money on cheaper, mere-mortal desktop offerings that could do the same.

As for the others... AMD still doesn't bother, and neither does Apple with its productivity suites on its own silicon; they ditched Intel, with all its fancy AVX512, for a reason.

Torvalds is right most of the time. He speaks honestly about the induced corporate nonsense, be it Intel or NVIDIA.
These companies are not charity organizations. Yes, their practices are mostly disgusting, and yes, they do nasty stuff to prosper and beat the competition, but water is still wet. If we want a discussion about the world of giant corporations and their global business practices, we will end up talking about capitalism and the economy; it is a subject I like to discuss, but perhaps not in this thread.

To give you an example, let's forget how the pandemic and crypto mining affected GPU prices and just focus on NVIDIA's product line at its intended MSRP (for the sake of the argument, let's also ignore that the MSRP itself might have been a lie too). They made the 3080 and put a 700-ish price tag on it; the card was a beast when it came out (perhaps still is), and there can be very little argument about that. They gave us a ton of proprietary features too, pushing the limits of computer graphics to new heights. Things like RT cores (I don't care about ray tracing much, but I do believe most people still don't realize how big a step real-time full path-traced graphics really is, how it will change computer graphics, and how we will perceive shadow maps and all the other ancient, terrible fakery ten years from now), and DLSS, which allows me to play games at ~30% more fps in resolutions my card couldn't even hit 60 in without it, and all of that with more detail(!) than native resolution, etc. They are in the business of making graphics cards, the best they can, so they also made the 3090: they took their current tech to the absolute limit, gave it all the cores, RAM, and whatever else they could find on the shelves, and put a stupidly high price tag on it. Nobody was forced to buy the 3090, to have 24 GB of VRAM, to use ray tracing, to use DLSS, etc., but we had the option, and I'm glad we did. We only live once, and I want all the cool tech, and I want it now, thank you very much.
But what did people do? They whined that it was overpriced, whined that only five games supported those new features, etc., and the tech sites agreed. Yeah, let's not have any of those because they are segregating... Well, sorry, but I disagree.

I'm really tired of the new trend of bashing companies that are giving us new things. I'm sad AVX512 is going away now. Who cares if AMD doesn't have it, or if it segregates Intel's lineup, if it is a good thing? Who cares if it eats lots of power (most of AVX512 doesn't, btw) if it makes some stuff better?
Intel already eats a lot of power because their CPUs haven't been power efficient in a long time; the AVX512 logic just builds on top of that bad design, so it eats even more. Is that bad? Yes! Shall we get our pitchforks and "hope" that it will "die a painful death"? I think not.
I'm a computer enthusiast, and I welcome every new feature they give us. I'm grateful, and I'm willing to pay for a feature if it is a good one, just as I'm willing to pay for - and use - more electricity for faster processors (and I will try to lower my carbon footprint in other areas of my life to make up for it, of course).

If Torvalds wants better products than Intel and AVX512, he shouldn't "hope" for the death of new instructions; he should hope for competition like the Apple M1 instead, which shows Intel (and NVIDIA) how inefficient their stuff really is. True competition is our only hope against these monsters, with their prices and segregation techniques, not death wishes on instructions.

P.S.: i3 processors had ECC support until 9th gen, but nobody bothered (motherboard makers dropped it because there was zero market for it). I personally think it would be good to have, but apparently most users think otherwise (they are probably enthusiasts like me who want faster RAM, which is a lot harder to do with ECC). :)
#25
Vya Domus
I bet they want it disabled just so that people can't run AVX-512 benchmarks that would expose even more laughable power-consumption figures. Other than that, it speeds up the validation process, and practically no consumer software needs AVX-512, so it's completely irrelevant whether it's there or not.
Jism: God, the PS3 is a RISC-based platform with an undocumented GPU. You can't just use OpenCL or CUDA to "emulate" a specific console and its hardware.
That doesn't really mean anything from an emulation standpoint; at the end of the day, you still need to emulate more or less the same thing irrespective of the ISA. The reason you couldn't use CUDA or OpenCL is not that the CPU is RISC, but that the software running on those SPEs needs complex thread-synchronization logic that simply can't be done on a GPU. And the PS3 GPU is documented; it's just a run-of-the-mill GeForce 7-series (7800 GTX-class) Nvidia architecture, nothing special, so there is no point in trying to use anything other than OpenGL or any other graphics API.
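Vya Domus's synchronization argument is the crux: an emulated SPE is naturally a host thread that blocks on a mailbox and is woken by the PPU, a pattern GPU programming models can't express, since GPU threads can't sleep on a condition and be woken by another kernel. A conceptual sketch of that shape (an illustration only, not RPCS3's actual design):

    #include <condition_variable>
    #include <cstdint>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    // One emulated SPE as a host thread blocking on a mailbox. The
    // wait/notify handshake below is trivial for CPU threads and has
    // no GPU equivalent.
    struct Mailbox {
        std::mutex m;
        std::condition_variable cv;
        std::queue<std::uint32_t> msgs;

        void post(std::uint32_t v) {  // "PPU" side: drop mail, wake the SPE
            { std::lock_guard<std::mutex> lk(m); msgs.push(v); }
            cv.notify_one();
        }
        std::uint32_t wait() {        // "SPE" side: block until mail arrives
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !msgs.empty(); });
            std::uint32_t v = msgs.front();
            msgs.pop();
            return v;
        }
    };

    int main() {
        Mailbox mbox;
        std::thread spe([&] { std::printf("SPE got job %u\n", mbox.wait()); });
        mbox.post(42);  // the "PPU" dispatches work to the emulated SPE
        spe.join();
        return 0;
    }
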