Monday, August 7th 2023
Intel to Reveal Meteor Lake Details at Intel Innovation 2023
Intel Innovation is Intel's yearly tech conference, and the company has revealed some of what it'll share at the event, which kicks off on the 19th of September. One of the sessions is called "Intel Client Hardware Roadmap and the Rise of AI", and during it Intel will be sharing its latest "client hardware platforms", which according to the session blurb will include the upcoming Intel Core Ultra processors, currently known under the codename Meteor Lake.
It's unclear how much detail Intel will go into and, based on the subject of the session, the focus will most likely be on the desktop platform, though it could also cover the mobile parts. According to VideoCardz, we should expect Intel to detail the integrated VPU, which is said to be based on hardware from Movidius, a machine learning hardware company that Intel acquired a few years ago. The VPU should be a low-power accelerator that handles AI inference tasks and will be part of at least some future Intel processors, but for now we don't really know what Intel's plans are for these types of features in its CPUs, apart from offering something competitive with AMD's Xilinx-derived AI Engine.
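If the VPU is exposed the way Intel's earlier Movidius hardware was, developers would most likely target it through an inference runtime such as OpenVINO rather than programming it directly. The following is only a rough sketch under that assumption: the device name and the model file are placeholders (Movidius sticks appeared as "MYRIAD" in OpenVINO, and later Intel runtimes expose an "NPU" device).

```python
# Hypothetical sketch: offloading inference to a low-power accelerator
# via OpenVINO. The device name is an assumption ("MYRIAD" was used for
# Movidius hardware; newer Intel runtimes expose "NPU").
import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] if a VPU/NPU is present

model = core.read_model("model.xml")         # placeholder IR model file
compiled = core.compile_model(model, "NPU")  # fall back to "CPU" if unavailable

# Run a single inference request; the input shape is an assumed example.
request = compiled.create_infer_request()
result = request.infer({0: np.zeros((1, 3, 224, 224), dtype=np.float32)})
```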
Sources:
Intel Innovation 2023, via VideoCardz
18 Comments on Intel to Reveal Meteor Lake Details at Intel Innovation 2023
I don't want E-cores; they can stick them in their server line to save power. Nothing to look forward to; even Tech Yes City went back to 10th gen for snappiness.
And the constant introduction of and tinkering with new sockets renders each platform obsolete.
Do you have a crystal ball to know that Intel won't be making an F version for Arrow Lake? (Which should be even easier since it's going to be on a different die.) MTL being a laptop-only CPU, they do need to have an iGPU.
So it will be very interesting to see whether Meteor Lake and future designs will continue this trend, or if new design considerations or even settings will let future CPUs perform better and more responsively for real-world workloads. It is worth mentioning that the Skylake family (6th-10th gen, Skylake -> Comet Lake) were remarkable real-world performers, even back when facing Zen 2 and Zen 3; at the time it was well known that Coffee Lake/Comet Lake was much more responsive in e.g. image editing vs. Zen 2/3, while Zen outperformed it in larger batch jobs.
Still, there is something I believe is far more important for latency: good software.
While I've been mostly in the Linux space for the past 15 years, in the past couple of years I've had to use Windows primarily at work for development again, and it's dreadfully slow compared to what I'm used to. A decently specced machine (i7-10700K, Win 10) is a laggy mess compared to all my machines at home. My trusty old i7-3930K on Ubuntu is far more responsive than the i7-10700K with Windows, even though it pales in comparison when it comes to brute force. (My newer desktop is of course even better.) While I don't think everyone notices such differences, I'm at least super sensitive to latency; when I'm in the zone and focused, minimal jerkiness and stutter is enough to distract me. Even text input on Windows is laggy, not to mention all the missing key-presses etc.
But even the mighty Linux can be slowed to a crawl with modern web browsers, and with Chrome spawning loads of threads for every single tab, it will eventually overload the OS scheduler, creating a lot of lag. So I'm seriously considering doing all my web browsing on a separate computer when working at home.
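That thread explosion is easy to observe on Linux, by the way. A quick sketch that totals the threads across all Chrome processes (Linux-only, reading /proc; matching on the process name "chrome" is a simplification):

```python
import os

def thread_count(pid):
    # Each thread of a process appears as an entry under /proc/<pid>/task.
    try:
        return len(os.listdir(f"/proc/{pid}/task"))
    except FileNotFoundError:
        return 0  # process exited while we were iterating

def is_chrome(pid):
    try:
        with open(f"/proc/{pid}/comm") as f:
            return "chrome" in f.read()
    except (FileNotFoundError, PermissionError):
        return False

total = sum(thread_count(pid) for pid in os.listdir("/proc")
            if pid.isdigit() and is_chrome(pid))
print(f"Chrome threads: {total}")
```

Run it with a few dozen tabs open and the number the scheduler has to juggle gets large quickly.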
While my expertise is primarily low-level programming, I've worked quite a bit with web technologies (mostly NodeJS/React, PHP), Java, Android, etc. I do remember, some years ago while working with many others using React, that many of them used (horrible) editors like Atom or VS Code. I couldn't handle those laggy and buggy (JavaScript-based) editors, but many of my former colleagues didn't seem to notice any latency, so there is most likely a big individual difference.
I've heard good things about Pop!_OS, but haven't had time to try it. It is worth mentioning that I do often tweak my Ubuntu installs a bit, like installing GNOME Classic, etc.
My current home dev machine (Ryzen 9 5900X) was actually a temporary purchase, since my "trusty" old i7-3930K was getting less trusty (it crashes sometimes). The only reason for getting the 12-core 5900X was a discount; I was actually deciding between the 5600X and 5800X, but at ~$100 off, it was an easy choice. So far it has only run Linux, so I have no perfect comparison, but soon it will probably be replaced with a new dev machine and be relegated to being an all-round PC with dual boot. But I've got to say it's been remarkably stable on Linux, which hasn't always been the case for AMD hardware. It was a clean install.
-----
Getting back on topic: my point was that while I'm concerned about increasing latency in CPUs (and memory too), it seems like software and its ever-increasing bloat matter much more. There is no question that the Alder Lake family is a computational powerhouse compared to the Skylake family (and even the short-lived Rocket Lake), and the peak computational power is actually higher than the proclaimed IPC gains suggest. I don't know the architectural improvements of Meteor Lake yet, but if it's not just more "AI gimmicks" but actual improvements in execution units, front-end changes, etc., it will most certainly result in real-world improvements. Hopefully regressions in latency will be minimal or non-existent. But the ever-increasing E-core count should be some grounds for concern when it comes to latency. I wish they would redesign to prioritize latency for the P-cores, or release more P-core-only models, etc.
It's not yet confirmed whether Meteor Lake will support the new AVX10 or not, which would be significant considering Intel's screwup of failing to add AVX-512 support on the E-cores. AMD has shown great performance gains with their double-pumped 256-bit AVX-512 implementation, so Intel lost out on a huge potential advantage here.
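Whatever Meteor Lake ends up supporting, software has to probe for these extensions at runtime anyway. On Linux the kernel-reported CPU flags give a quick sanity check; a small sketch (the AVX-512 subset names below are the commonly reported ones, and AVX10 would advertise its own flags once kernels expose them):

```python
# Report which vector extensions the kernel advertises for this CPU (Linux).
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx2", "avx512f", "avx512vl", "avx512bw"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```

On Alder Lake with the E-cores enabled, the avx512 entries come back "no", which is exactly the fragmentation problem AVX10 is supposed to clean up.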
I haven't had time to follow all the Meteor Lake rumors in the past months, but weren't they talking about it only having 6 P-cores? (Any confirmation on that?) If this is accurate, then I would expect those cores to perform at least 20-30% higher, and it may end up being an improvement for latency too (so the best of both worlds). It will obviously lead to lots of complaints from typical forum users that 6 cores must be worse than 8, but as I've been saying for years: fewer, faster cores will remain better for (user-)interactive workloads, and such workloads will never scale very well across very high core counts anyway (Amdahl's law, etc.).
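To put rough numbers on that: Amdahl's law says the speedup from n cores is 1 / ((1 - p) + p/n), where p is the parallelizable fraction. A quick illustration (the 40% parallel fraction is just an assumed figure for a typical interactive workload):

```python
def amdahl_speedup(p, n):
    # Amdahl's law: the serial fraction (1 - p) caps the achievable speedup.
    return 1.0 / ((1.0 - p) + p / n)

# Assumed: an interactive workload where only ~40% of the work parallelizes.
for cores in (6, 8, 16, 32):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.4, cores):.2f}x")
```

With p = 0.4 you get about 1.50x at 6 cores and only 1.63x at 32, so piling on cores barely moves the needle, while making each core faster speeds up the whole thing, serial part included.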
I'm much more interested to see when Intel gets to implement their research into "threadlets" (basically "sub-threads" at the micro-op level, software-agnostic); when this happens I would expect IPC gains of >50% (possibly over several iterations), and P-cores that are vastly more advanced than today's. I have no idea if any of this will be present in Meteor Lake, or if it's just more of "deeper and wider", following AMD's example… ;)
It is disappointing that software always seems to get slower. A few years ago most of my computers moved over to SSDs and everything could boot and launch quickly. Games used to take a minute or two to launch, and on SSDs the same games take seconds. But new games take half a minute, and it seems to be getting worse the newer the game. I got an SSD for fast start times, not so that my games could pile on ever more assets.
I too am amazed at Intel's failure to bring AVX-512 to the E-cores. First they failed with the E-core trial-run processor, Lakefield (which is understandable), but then with Alder Lake and Raptor Lake too. It'd be pretty embarrassing after all these years to release Meteor Lake without AVX-512. As for Alder Lake, Intel was treating AVX-512 like a server exclusive, so maybe Intel's plan is to continue to ship it in consumer processors and continue to disable it, and therefore continue to pay for it while all the customers who want it buy from AMD.
Overengineering and abstraction are major issues for the entire field of software development, and the industry is still moving in the wrong direction. It is fairly rare to see anyone maintain a lean, clean codebase which the developers can work on efficiently.
For the gaming industry there are extra issues due to unrealistic deadlines, non-technical staff in management positions, scope creep, and a focus on mass-produced "junk" rather than solid products. It's fairly uncommon to see a game "work well" at launch these days, and even after months of sloppy patchwork, the quality is often lacking. Alder Lake did initially have AVX-512 enabled, but I don't know what their intention was for handling the lack of support on the E-cores.
The problem here is that major architectures are in development for 5+ years, and if there are larger oversights or mistakes, even succeeding refreshes may not fix the problems; sometimes even the next major architecture may not fix them, because it was already past that development stage when the issue was discovered. They have something like three architectures in development at any given time. (A good parallel here is the Spectre "flaw", an oversight which every major modern CPU design had; it has taken many iterations to iron out, and there are probably still some weaknesses present.)