
Arrow Lake Retested with Latest 24H2 Updates and 0x114 Microcode

negfuz

New Member
Joined
Dec 21, 2024
Messages
6 (0.33/day)
In response to your first question - you'll have to wait for those who are into the finer technicalities of Intel firmware to answer, as I'm more familiar with the AMD side of things.
To your second question - because you play at very high resolutions and need the rig for tasks apart from gaming, my suggestion is to go with the Ryzen 9 9950X with Precision Boost Overdrive maxed out to whatever level proves stable, combined with a negative offset on the voltage curve to reduce heat and power consumption. Of course that will depend on the silicon lottery and what your sample is capable of, but no one knows that until actual testing is done. Performance can be further improved by raising the FCLK beyond stock and pairing it with a good RAM configuration. Other variables come into play once those two factors are in the mix, though, because of the various combos of motherboard, RAM kits and AGESA releases.
I mean I can undervolt/tune both - it's not really fair to suggest tuning one when the same refinements could perhaps be had on the 285K too. I'm not really asking for that though; I'm looking for more of a general out-of-the-box (with updates, obviously!) overall experience. I'm not looking to spend hours and hours on stability testing to accomplish this, nor am I looking to run a mini heater (my 13900K right now runs while respecting Intel limits, i.e. not exceeding 253 W).

The 285K also beats the 9950X in some workflows - and is neck and neck in others - it's just unclear what the power draw is during those specific workflows, or maybe even the energy used over the duration of the test. Don't think anyone measures that though - it's a useful metric for me, but apparently not to the benchmarkers.
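Since reviewers rarely publish it, energy over the duration of a run is easy to derive yourself if you log power samples (e.g. exported from a monitoring tool). A minimal sketch - all the power and FPS numbers below are made-up placeholders, not measurements:

```python
# Sketch: turn logged power samples into total energy and an efficiency
# metric. The power values and FPS below are hypothetical placeholders.

def energy_joules(samples_w, interval_s):
    """Trapezoidal integration of power (W) over time -> energy (J)."""
    return sum((samples_w[i] + samples_w[i + 1]) / 2 * interval_s
               for i in range(len(samples_w) - 1))

power_w = [120, 150, 160, 155, 140]   # one sample per second (hypothetical)
duration_s = len(power_w) - 1          # 4 seconds between first/last sample
energy = energy_joules(power_w, 1.0)   # 595.0 J
avg_w = energy / duration_s            # 148.75 W average
avg_fps = 200                          # hypothetical benchmark result
print(f"{energy} J, {avg_w} W avg, {avg_fps / avg_w:.2f} fps/W")
```

Averaging over the whole test, rather than quoting a peak, is what makes two CPUs comparable on "FPS per watt".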

This video seems to literally be the only benchmark that directly compares the results while showing power draw. But obviously it's a month-old video, so none of the fixes are taken into account here... so I imagine it can only get better? At least as far as gaming is concerned...

Thoughts on the above video? Anything I'm overlooking here? Even if I look at this video in isolation vs. the non-gaming benchmarks of the 285K and 9950X, the 285K wins it... right?

To answer your first question, you will know for sure when it's live by ASRock mentioning it on their download page or in some other update - I expect once the BIOS leaves beta.
It would also be best to wait for Intel's update, or "Field Update 2 of 2", to be completely certain.
Thanks for the lengthy explanation and the picture.
And yeah, I'm planning on waiting for the field update 2 of 2 - but coming from an Asus board to this ASRock Z890 Aqua, their BIOS pages don't actually show the ME version for each BIOS. It just states: "Update Intel ME version." with no details. Is this somewhere that I'm not seeing? I even downloaded that BIOS just now and opened the zip file; there's no changelog or .txt file to peek in lol

I'm just a bit confused, as I'm not sure if you watched the full part 2 of the video, but the Intel guy acknowledged that 0x114 is out - it's just useless without the secret sauce which, as we know, is coming in January and is being tested by their partners right now. So are you suggesting it's possible that the ASRock BIOS currently out, marked as 0x114 with no mention of "v2.2" in the ME firmware, was actually based off the 2.2 toolkit? So everything may already be as good as it gets? I know the proper answer is to just wait at this juncture, just wondering about the possibility. Thanks again!
 
Joined
Jan 14, 2021
Messages
25 (0.02/day)
Location
Australia
Processor 14900KF
Motherboard Z790I AORUS ULTRA (BE200 WiFi7 Upgrade)
Memory F5-7200J3646F24GX2-TZ5RK
Video Card(s) B580
Storage 2X 2TB T500
Case NR 200P Max V2
Audio Device(s) Razer Barracuda X (2022)
Mouse Razer DeathAdder V2 X HyperSpeed
Keyboard Razer DeathStalker V2 Pro
Software Windows 11
Thanks for the lengthy explanation and the picture.
And yeah, I'm planning on waiting for the field update 2 of 2 - but coming from an Asus board to this ASRock Z890 Aqua, their BIOS pages don't actually show the ME version for each BIOS. It just states: "Update Intel ME version." with no details. Is this somewhere that I'm not seeing? I even downloaded that BIOS just now and opened the zip file; there's no changelog or .txt file to peek in lol

I'm just a bit confused, as I'm not sure if you watched the full part 2 of the video, but the Intel guy acknowledged that 0x114 is out - it's just useless without the secret sauce which, as we know, is coming in January and is being tested by their partners right now. So are you suggesting it's possible that the ASRock BIOS currently out, marked as 0x114 with no mention of "v2.2" in the ME firmware, was actually based off the 2.2 toolkit? So everything may already be as good as it gets? I know the proper answer is to just wait at this juncture, just wondering about the possibility. Thanks again!
I watched the entire video; it repeated a lot of the same information over and over.

At this stage it is not possible to tell which firmware kit the ME is based on. You will need to wait for ASRock to release the full BIOS, or to specifically mention firmware kit 19.0.0.1854v2.2 or something similar like "microcode and ME update as per Intel Field Update" or "January ME update".
Knowing ASRock, they may just release the full January BIOS with the same notes as the current one.

There are no notes as far as I can see on the ASRock website or in the BIOS zip. I pulled the version directly out of the BIOS file with UEFITool. You could always flash the BIOS if you haven't already and check in Device Manager, HWINFO, or a version-checking tool etc. - it will just say "19.0.0.1854". *edit* I believe HWINFO can show some sub-firmware versions and other ME info.

There is no information on what firmware kit 19.0.0.1854v2.2 contains, so it's impossible to tell if the current firmware out in the wild is older or newer. ME Analyzer, the ME dump/update version-checking tool I normally use, does not work for CSME 19, and I am not familiar with the layout, so I would rather not try to guess what sub-firmware versions a dump/update contains.
To be safe, just assume it's older and that there will be a revised 19.0.0.1854 in January.
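To illustrate why the build string alone can't settle it: comparing ME versions numerically only works when the build number actually changes, which is exactly what a re-issued "v2.2" kit with the same build number doesn't do. A quick hypothetical sketch:

```python
# Sketch: numeric comparison of ME build strings like "19.0.0.1854".
# A re-issued kit ("v2.2") that keeps the same build number is invisible
# to this kind of check -- the ambiguity discussed above.

def parse_me_build(s):
    """'19.0.0.1854' -> (19, 0, 0, 1854) for tuple comparison."""
    return tuple(int(part) for part in s.split("."))

installed = parse_me_build("19.0.0.1854")
update = parse_me_build("19.0.0.1854")   # hypothetical January re-release
print(update > installed)   # False: same build number, could still differ
```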

Will that make any difference to performance? You will need to wait for W1zzard and other reviews.
 
Last edited:

DareDevil01

New Member
Joined
Dec 21, 2024
Messages
1 (0.06/day)
Honestly, I feel like Intel is getting too much flak for Arrow Lake. While they obviously need to improve things like thread assignment with the new architecture, I feel like most of the blame needs to be assigned to Microsoft. 24H2 is absolutely broken in every single way. Every day a new article comes out about how 24H2 broke a certain feature, or gaming performance, or some piece of software. I work in IT, and we've had to roll back 24H2 on multiple computers because it breaks multiple unrelated pieces of software that are absolutely business-critical for us, so it's not just an issue with a single feature or part of Windows.

I'm very bullish on Intel overall, even with Arrow Lake. The big-little architecture was absolutely the right move for consumers (which AMD immediately copied), they've adopted the chiplet design after seeing how well it worked for AMD (and let AMD handle much of the "teething issues"), and they're moving to drop legacy x86 support, which I think will pair very nicely with moving to single-threaded cores and allow them to simplify core design immensely.
There's a bit of bias here.
The word selection for Intel vs AMD
~"The big-little which AMD *immediately copied*"
vs
~"they've *adopted* the chiplet design after seeing how well it worked for AMD"

It takes many years to develop a new chip, particularly a new arch with different tiling etc.
I highly doubt either "copied".
 
Joined
Jul 4, 2023
Messages
40 (0.07/day)
Location
You wish
Ok, I guess we can reduce this to a single word: turd. Can't find anything else that shares that many properties with it.

I mean seriously, can you take a product seriously when its release includes "Situation Reports", as if serious-sounding words would turn it into something that's not a turd someone just put in the same room while pretending it's fine?
 
Joined
Jun 10, 2014
Messages
3,006 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
That, or software emulation of non-existent machine code instructions. The Linux community (that Intel is an important member of) could implement that. MS ... not so sure.
If you mean actual software emulation, then the whole program must be emulated, as the CPU core executes at least tens of thousands of instructions for every kernel tick.

At least on Linux, when executing a program with an unsupported instruction you'll get a kernel trap - "Illegal instruction (core dumped)" (e.g. trying AVX2 on my i7-3930K) - but I'm not deeply familiar enough with this mechanism to determine whether it could successfully capture the entire CPU state well enough to emulate a few clocks and then hand it back without causing any program error.
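That trap is easy to demonstrate without needing an AVX-capable or -incapable chip: the x86 `ud2` opcode is architecturally defined to raise #UD, which the kernel delivers as SIGILL. A sketch (assumes Linux on x86; a child process takes the crash for us):

```python
# Sketch (Linux/x86 only): demonstrate the "Illegal instruction" kernel
# trap by executing the ud2 opcode (0f 0b) in a sacrificial child process.
import signal
import subprocess
import sys

CHILD = r"""
import ctypes, mmap
# Map one executable page and drop the ud2 opcode into it.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(b"\x0f\x0b")
fn = ctypes.CFUNCTYPE(None)(ctypes.addressof(ctypes.c_char.from_buffer(buf)))
fn()  # kernel raises the #UD trap here and delivers SIGILL
"""

proc = subprocess.run([sys.executable, "-c", CHILD])
# A negative return code means the child was killed by that signal number.
print(proc.returncode == -signal.SIGILL)
```

Catching SIGILL in-process and resuming (what an emulation scheme would need) is the hard part the post is describing; this only shows the trap being delivered.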

Or quad-pumped or anything that's cheapest to make. If the goal is compatibility, not performance, then any kind of performance above zero is acceptable.
I suggested dual-pumped because the E-cores already support 256-bit vectors, and this is what VIA and AMD successfully did with their first implementations. It would come at a relatively marginal cost, only requiring extra complexity for instructions that operate across the full 512-bit vector or for the new types of features that AVX2 lacked (there are a few).
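For anyone unfamiliar with the term: "dual-pumped" means the 512-bit operation issues once but the 256-bit datapath processes it in two passes. A toy model of the scheme (not how any real decoder works, purely illustrative):

```python
# Toy model of double-pumping: one 512-bit packed 64-bit add executed as
# two passes over a 256-bit-wide datapath.
LANES = 8                 # 8 x 64-bit lanes = 512 bits
HALF = LANES // 2         # 4 lanes = 256 bits per pass
MASK64 = (1 << 64) - 1    # model 64-bit lane wraparound

def vpaddq_zmm_double_pumped(a, b):
    out = []
    for p in range(2):                    # two "pumps" through the datapath
        for i in range(p * HALF, (p + 1) * HALF):
            out.append((a[i] + b[i]) & MASK64)
    return out

print(vpaddq_zmm_double_pumped(list(range(8)), [10] * 8))
# [10, 11, 12, 13, 14, 15, 16, 17]
```

Lane-wise ops like this split cleanly; it's the cross-lane instructions (permutes, reductions) that need the extra hardware the post mentions.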

No, these are two different things.
X86S = how can we make the 32-bit parts a little bit leaner (Intel only)
Advisory group = how can we still extract money from our IP when our most important patents expire (Intel and AMD, and I'm not sure what others are doing here)
No silly, the advisory group is about evolving the x86 family, not extracting money. :p
I'm not convinced all the X86S efforts were a good idea, as it might sacrifice too much compatibility. Running old games (like we all like to do) or running MS-DOS for fun isn't the main concern here, but rather the vast amount of "enterprise" software running everywhere from servers to workstations to embedded systems (incl. medical, military uses etc.).

Secondly, the great level of compatibility of x86 over the decades is the main reason for the thriving enthusiast community and the wealth of software that can run on almost anything without a fruity logo. We may take this for granted, but it wasn't a given in the late 70s and early 80s, and the more obvious outcome would have been 4-5 vendors like Apple, each with their own platforms constantly evolving and breaking compatibility, and each with their niche selection of software. Having a stable "standard" is immensely valuable, far more important than having the perfect standard, and we've gotten a massive number of small software companies and individuals that we wouldn't have gotten otherwise. If anything, compatibility should have been even better, but that isn't a shortcoming of x86 - it's mainly MS's terrible API compatibility, plus some games and applications relying on undefined behavior (e.g. copy protection).

Nothing specific, but 32-bit boot mode/protected mode/ring-0 is an attack surface that X86S would remove. These capabilities also have to be integrated into every new architecture, which is a process that can create new security cracks.
That shouldn't be a concern for anyone except those consciously choosing to run an antiquated OS, as those who do will be doing so for a specific purpose.
 
Joined
Jul 17, 2011
Messages
87 (0.02/day)
System Name Custom build, AMD/ATi powered.
Processor AMD FX™ 8350 [8x4.6 GHz]
Motherboard AsRock 970 Extreme3 R2.0
Cooling be quiet! Dark Rock Advanced C1
Memory Crucial, Ballistix Tactical, 16 GByte, 1866, CL9
Video Card(s) AMD Radeon HD 7850 Black Edition, 2 GByte GDDR5
Storage 250/500/1500/2000 GByte, SSD: 60 GByte
Display(s) Samsung SyncMaster 950p
Case CoolerMaster HAF 912 Pro
Audio Device(s) 7.1 Digital High Definition Surround
Power Supply be quiet! Straight Power E9 CM 580W
Software Windows 7 Ultimate x64, SP 1
Thank you @W1zzard for your tireless testing, it must have been a really arduous job! What a show…
So all in all, pretty much the Bulldozer patches 2.0, which just seals the deal on Arrow Lake. Arrow Lake is truly Intel's Bulldozer!

The worst part is, not even Bulldozer came with such a profound node advancement as ARL had - Intel 7 (10nm Enhanced SuperFin, aka 10nm++) jumping to the world's single best process to date (TSMC's N3), which is the only reason the design is any better in that particular metric… Intel really effed that up royally.
 
Joined
Jun 10, 2014
Messages
3,006 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Arrow Lake is truly Intel's Bulldozer!
Not by a long shot.
Bulldozer was a poor performer across many real-world workloads, while Arrow Lake falls a little short of Raptor Lake in a few and excels greatly in others. It's more of a mixed bag, but it would be very unfair to call it a bad product.

Prospective buyers unimpressed with the performance in their workload may look to the other team and see if that's more satisfactory. But if they end up considering 12- and 16-core parts, they might as well throw Threadripper and Xeon W into the mix, as workloads that scale that well could benefit from such platforms.
One tip would be to watch out for CES 2025, as the next Threadripper might be launching then, and it will likely be a powerhouse.
 
Joined
Jun 14, 2020
Messages
3,667 (2.20/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
In an era where 3D cache and memory latency are dominating, Intel's big move seems to have been to put the memory controller on the other side of the processor away from the die.
Yeap, but I have to give it to them: considering the increase in latency (and the reduction in power draw), gaming performance is legitimately impressive. Not impressive enough to overcome the latency penalty of the memory controller moving to the moon, but it's still extremely good.

For perspective: if you took Raptor Lake, moved the IMC off the die and then dropped gaming power draw by up to half, would it be faster than Arrow Lake? I doubt it.
 
Joined
Nov 13, 2007
Messages
10,877 (1.74/day)
Location
Austin Texas
System Name stress-less
Processor 9800X3D @ 5.42GHZ
Motherboard MSI PRO B650M-A Wifi
Cooling Thermalright Phantom Spirit EVO
Memory 64GB DDR5 6400 1:1 CL30-36-36-76 FCLK 2200
Video Card(s) RTX 4090 FE
Storage 2TB WD SN850, 4TB WD SN850X
Display(s) Alienware 32" 4k 240hz OLED
Case Jonsbo Z20
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse DeathadderV2 X Hyperspeed
Keyboard 65% HE Keyboard
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
Yeap, but I have to give it to them: considering the increase in latency (and the reduction in power draw), gaming performance is legitimately impressive. Not impressive enough to overcome the latency penalty of the memory controller moving to the moon, but it's still extremely good.

For perspective: if you took Raptor Lake, moved the IMC off the die and then dropped gaming power draw by up to half, would it be faster than Arrow Lake? I doubt it.
It would be a total turd... But if you did it in reverse and took the Raptor Lake ring bus, added the newer power-efficient cores, the new E-cores and the RAM controller, it would have been way faster than Arrow Lake.

The disaggregated design is awesome if it can net you more cores or better tiles or something... but they still gave us only 8 P-cores, 16 E-cores, and no more cache - in which case the old design is actually still much more performant.

It would have been fine had they kept the Adamantine cache; that would have alleviated a lot of the memory latency hit, but somewhere in the overbloated management structure of Intel some middle-layer executive killed it.
 
Joined
Dec 31, 2020
Messages
1,022 (0.70/day)
Processor E5-4627 v4
Motherboard VEINEDA X99
Memory 32 GB
Video Card(s) 2080 Ti
Storage NE-512
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
The disaggregated design is awesome if it can net you more cores or better tiles or something... but they still gave us only 8 P-cores, 16 E-cores, and no more cache - in which case the old design is actually still much more performant.
Ring bus and L3 cache are probably the same. The cores are updated with one more ALU, and Hyper-Threading is removed. The tile-based approach lets them shrink the cores first and get better yields the smaller the die. Moving the IMC to Mars is a big mistake, but the AI cores are there too, along with the GPU outputs and media codecs - maybe those have a greater need for low latency. Deciding that a wrong step like this is acceptable in this day and age has to come from a well-calculated reason, which is a flaw in itself. So you can become a chip architect and change Intel from the inside; until then it will never make sense.
It would be best if all the GPU-related stuff were moved to one tile, which would not be physically present in the KF variant - the one most people using a discrete GPU will get. All the CPU-related stuff would also go on one tile for low latency: all E-cores in one cluster that ping each other at 25-35 ns, all in the green; one big P-core with a super long pipeline and 10 ALUs to win the single-threaded benchmarks; the PCIe root on another tile.
 

truerock

New Member
Joined
Dec 13, 2024
Messages
3 (0.12/day)
Honestly, I feel like Intel is getting too much flak for Arrow Lake. While they obviously need to improve things like thread assignment with the new architecture, I feel like most of the blame needs to be assigned to Microsoft. 24H2 is absolutely broken in every single way. Every day a new article comes out about how 24H2 broke a certain feature, or gaming performance, or some piece of software. I work in IT, and we've had to roll back 24H2 on multiple computers because it breaks multiple unrelated pieces of software that are absolutely business-critical for us, so it's not just an issue with a single feature or part of Windows.

I'm very bullish on Intel overall, even with Arrow Lake. The big-little architecture was absolutely the right move for consumers (which AMD immediately copied), they've adopted the chiplet design after seeing how well it worked for AMD (and let AMD handle much of the "teething issues"), and they're moving to drop legacy x86 support, which I think will pair very nicely with moving to single-threaded cores and allow them to simplify core design immensely.
The concept of big.LITTLE cores was first implemented by ARM Holdings. ARM introduced this heterogeneous computing architecture in October 2011 with the announcement of the Cortex-A7 and Cortex-A15 processors.
 
Joined
Jan 14, 2021
Messages
25 (0.02/day)
Location
Australia
Processor 14900KF
Motherboard Z790I AORUS ULTRA (BE200 WiFi7 Upgrade)
Memory F5-7200J3646F24GX2-TZ5RK
Video Card(s) B580
Storage 2X 2TB T500
Case NR 200P Max V2
Audio Device(s) Razer Barracuda X (2022)
Mouse Razer DeathAdder V2 X HyperSpeed
Keyboard Razer DeathStalker V2 Pro
Software Windows 11
Merry Christmas (Christmas morning in Australia right now)

Saw this post on Reddit; it just looks like Asus marketing.
Poor timing IMO - while it's not specified that it is, I could see how people would assume this is the promised January performance fix.

 
Last edited:

negfuz

New Member
Joined
Dec 21, 2024
Messages
6 (0.33/day)
Interesting:

Asus just posted this:
Asus's Reddit Post

Meanwhile - I can't make this up - ASRock, on the other hand, REMOVED the 0x114 BIOS they had listed since the 19th...???

The "latest" ASRock version is now back to 0x113. Ugh, wish I'd grabbed it. No communication as to why it was pulled.
 
Joined
Jul 4, 2023
Messages
40 (0.07/day)
Location
You wish
Next thing that rolls out will be a universal ME patch for all chipset families, say from the Coffee/Kaby Lake era - making all the other platforms slower :) Problem solved!

I've learned that in most cases you're better off sticking with the factory microcode and just ignoring updates once Intel starts rolling out "mitigations" or other performance taxes. On Linux that's quite simple; on Windows it takes some effort to keep it from enabling any kind of magic mitigations by default (which makes your Windows about as secure as flying into space on a Boeing-made spacecraft). I don't actually know if Windows can/does load microcode on boot or if it's supplied by some Intel driver; on Linux you can just deny ucode loading.
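On Linux you can at least verify which microcode revision is actually loaded, since the kernel reports it per core in /proc/cpuinfo. A small sketch (Linux-only; on non-x86 machines the field simply isn't there):

```python
# Sketch (Linux-only): read the loaded microcode revision(s) straight from
# /proc/cpuinfo, where the kernel exposes one "microcode" line per CPU.

def microcode_revisions(path="/proc/cpuinfo"):
    revs = set()
    with open(path) as f:
        for line in f:
            if line.startswith("microcode"):
                revs.add(line.split(":", 1)[1].strip())
    return revs

# Typically one entry, e.g. {'0x114'}; more than one would mean cores are
# running mismatched microcode.
print(microcode_revisions())
```

Comparing this against the BIOS changelog is the quickest way to confirm whether the board (or the OS's early loader) actually applied the update.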
 
Last edited:

njshah

New Member
Joined
Dec 27, 2024
Messages
5 (0.45/day)
What's with the Arrow Lake hate? Do people seriously parrot sensationalized channels like HUB or GN that much?

So what if you get 200 fps instead of 230 fps in a game vs. a 9800X3D? The gains in efficiency and productivity workloads are insane. Remember just how bad Zen 1 was in gaming? It regularly got beaten by overclocked Sandy Bridge processors lol. Arrow Lake is essentially the Zen 1 stage for Intel; it'll only go up from here.

The fact these processors smash their predecessors in productivity despite lacking HT is a clear sign this is a very strong architecture held back by teething issues for Intel in the chiplet approach; there is a good chance a better implementation of HT might make a return in a gen or two.

Personally I'm excited for Panther Lake and what it could do, given AMD has stagnated with Zen as well; X3D is a non-factor outside of gaming.
 
Joined
Jul 4, 2023
Messages
40 (0.07/day)
Location
You wish
What's with the Arrow Lake hate? Do people seriously parrot sensationalized channels like HUB or GN that much?

So what if you get 200 fps instead of 230 fps in a game vs. a 9800X3D? The gains in efficiency and productivity workloads are insane. Remember just how bad Zen 1 was in gaming? It regularly got beaten by overclocked Sandy Bridge processors lol. Arrow Lake is essentially the Zen 1 stage for Intel; it'll only go up from here.

The fact these processors smash their predecessors in productivity despite lacking HT is a clear sign this is a very strong architecture held back by teething issues for Intel in the chiplet approach; there is a good chance a better implementation of HT might make a return in a gen or two.

Personally I'm excited for Panther Lake and what it could do, given AMD has stagnated with Zen as well; X3D is a non-factor outside of gaming.
If you call performance regression part of a "strong architecture", that's nice, but do you think a consumer wants to pay for it? It's a product that's being sold - or did you miss something?
 
Joined
Feb 22, 2021
Messages
31 (0.02/day)
Location
Austria
Processor AMD Ryzen 9 5950X
Motherboard MSI MAG B550 Tomahawk
Cooling Noctua NH-15D
Memory Crucial Ballistix 4x 16GB, DDR4-3600, CL16 (tuned)
Video Card(s) Sapphire Nitro+ Radeon RX 7900 XT Vapor-X
Storage Samsung SSD 970 EVO Plus 1TB, 2x
Display(s) Gaming: LG OLED48CX9LB 4K@120Hz, Office: Samsung M7 M70A 4K@60Hz
Case be quiet! Pure Base 500DX
Power Supply be quiet! Straight Power 11 Platinum 750W
Software Windows 10 Home, 64-Bit
Intel was absolutely right to bring the P/E-core design to x86. ARM introduced it in 2011 and it's obviously been successful. Even Apple has it in their M chips. AMD has also adopted it in EPYC and their mobile chips. It just makes sense when you're trying to maximize both power efficiency and performance.

As for AMD, it's not an issue with the 3D cache specifically; it's been an issue ever since they introduced the multi-chiplet design. They've always had issues with thread assignment and trying to keep related threads (gaming or not) on the same CCD. That was part of the reason for Ryzen 9000's underwhelming launch - there were thread assignment issues on the 9900X and 9950X. 3D cache just made it more obvious in gaming workloads.

There's always going to be shared responsibility between Microsoft and Intel/AMD to make a new architecture work. But innovation has to happen, and my point is that Microsoft bears much of the blame for both Arrow Lake and Zen 5 performance issues, specifically because 24H2 is an obvious disaster. But the underlying philosophies of what AMD is doing (chiplets, 3D cache) and Intel is doing (single-threading, P/E cores) are solid.
I mostly agree. However, there are two aspects to this story.
The first is resource sharing - threads on a core, cores on a chip, chips in a host, hosts in a cluster. Here the scheduler needs to make optimal use of the resources, and certainly has to consider the memory and cache hierarchies.
The second is core-specific task assignment. Here we're talking about P/E cores that are not equivalent: some tasks that can execute on a P-core cannot execute on an E-core. AMD's approach is different in the sense that although the cores are equivalent from the execution point of view, they do not perform equally - e.g. games should use the CCD with the 3D V-Cache.
Microsoft messed up the latter aspect greatly, the P/E-core task assignment. Regarding AMD, I see a lack of a proper software solution to orchestrate gaming.
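The orchestration being asked for can be crudely approximated by hand today: pin the game (or any latency-sensitive task) onto the cores you want. A Linux-only sketch; the core numbers are an assumption about the topology (e.g. P-cores, or the V-Cache CCD), not something this code discovers:

```python
import os

# Sketch (Linux-only): manually steer the current process onto a preferred
# core set -- the decision a hybrid-aware scheduler should make for you.
# PREFERRED is a pure assumption; check your actual topology
# (lscpu, /sys/devices/system/cpu) before using real values.
PREFERRED = {0, 1, 2, 3}

available = os.sched_getaffinity(0)            # CPUs we may run on now
target = (PREFERRED & available) or available  # fall back if assumption fails
os.sched_setaffinity(0, target)

print(os.sched_getaffinity(0) == target)       # True
```

On Windows the equivalent is Task Manager's affinity dialog or `start /affinity`; the point is that this is exactly the per-workload decision users shouldn't have to make manually.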
 

negfuz

New Member
Joined
Dec 21, 2024
Messages
6 (0.33/day)
If you call performance regression part of a "strong architecture", that's nice, but do you think a consumer wants to pay for it? It's a product that's being sold - or did you miss something?
A "product" in this case has multiple variables, performance is obviously one of them, but its not the only thing. Performance is not a blanket term, it varies in games, and productivity. The guy you responded to already pointed out the performance is only "lacking" in gaming, not so in productivity apps.

Reliability is another.
Efficiency is another.
After Sale-Customer Service is another.
Obviously Pricing is another.

I too agree with the other guy: all these clickbait reviewers hating on the architecture run their benchmarks at 1080p, but the reality is, that's not real-world. People with a 285K aren't gaming at 1080p.

I made a post a page or two ago, linking a YouTube video and asking a specific question to a guy in terms of "FPS per Power Draw" and the guy never responded.

We all have different things that matter to us. For me it's primarily the heat output of the processor, so if I have to "regress" in performance by 3-7% but get 50% better efficiency, that's a win for my niche needs - but I realize I'm in the minority.

Some people might have just sworn off AMD for their terrible CS, who knows.

Just waiting here for AsRock's 0x114 patch to drop...again before building!
 
Joined
Aug 25, 2011
Messages
249 (0.05/day)
Location
Poznan, Poland
Just waiting here for AsRock's 0x114 patch to drop...again before building!

ASRock released 3 beta BIOS versions last week, all with microcode 0x114. However, I see no difference between them, and the patch notes are the same. I didn't bother to ask ASRock what was changed. On the other hand, I have no performance or stability issues on my rigs. The performance could be better looking at the benchmarks, but I'm not complaining. I'm using the 265K/8800 CUDIMM in my daily/gaming ITX PC right now; it has run 24/7 since I built it. The efficiency improvement over the last gen is huge. In most games at 1440p, I see around 150-170 W max (265K).
 

negfuz

New Member
Joined
Dec 21, 2024
Messages
6 (0.33/day)
By the "patch notes" - you mean Description?

I don't see any "patch notes" anywhere except their Description column in the table?

I see it just re-posted for the first time though, this time (first time as far as I know?) they specifically mention the CSME v2.2 kit, so obviously that's good!

Version: 2.26.AS03 [Beta]
Date: 2024/12/26
Size: 11.33 MB
Update method: Instant Flash / Flashback
Description:
1. Update CPU Microcode to 0x114
2. Update Intel ME version. (19.0.0.1854V2.2)

Will hopefully get to building it soon now
 
Joined
Jun 10, 2014
Messages
3,006 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
So what if you get 200 fps instead of 230 fps in a game vs. a 9800X3D? The gains in efficiency and productivity workloads are insane. Remember just how bad Zen 1 was in gaming? It regularly got beaten by overclocked Sandy Bridge processors lol. Arrow Lake is essentially the Zen 1 stage for Intel; it'll only go up from here.
The comparison to Zen 1 is only relevant to the extent that Zen 1 and Arrow Lake are laying the groundwork for what's to come, not in terms of performance. While Arrow Lake only falls slightly short of Raptor Lake and Zen 5 in gaming (at 1440p/4K, which is the only thing that matters), Zen 1 was very much slower than its Skylake-based counterparts - the Skylake architecture was objectively faster per core vs. Zen 1 (and even its successors), and Zen 1 used better efficiency and a better node to compensate with more cores.
Meanwhile, Arrow Lake is overall faster than Raptor Lake, and in many (not all) ways faster than Zen 5. So Intel isn't using an inferior architecture here; most of the bad press is due to unmet expectations and bias (of course). (And don't forget most benchmarks aren't stock vs. stock either…)

The fact these processors smash their predecessors in productivity despite lacking HT is a clear sign this is a very strong architecture held back by teething issues for Intel in the chiplet approach; there is a good chance a better implementation of HT might make a return in a gen or two.
Personally I'm excited for Panther Lake and what it could do, given AMD has stagnated with Zen as well; X3D is a non-factor outside of gaming.
Gains from HT greatly depends on the workload, as it effectively is other threads using the core when the core is stalled. As architectures get more advanced, the relative gains from HT has dissipated, and in many workloads become negligible (if not a disadvantage).

With Arrow Lake (and Meteor Lake) improving the front-end further, incl. reducing misprediction recovery, the gains from HT would be dwindling. This combined with the ever-growing complexity and security concerns of HT, means that at some point we are just better off without it, and the freed up die space and constraints can (eventually) be used for other improvements. Hopefully HT will not make a return, and eventually also be removed from server and high-end workstation chips, but it will probably cling on there for a while for certain workloads.
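The point that HT only pays off when cores stall can be probed empirically. Here is a rough sketch of such a measurement (my own illustration, not anything from this thread; note that Python's standard library cannot report the physical-core count, so it is guessed as half the logical count, which is wrong on chips without SMT):

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def spin(n: int) -> int:
    # Compute-bound busy work with little memory traffic,
    # i.e. the kind of code where SMT gains tend to be small.
    acc = 0
    for i in range(n):
        acc = (acc * 31 + i) % 1_000_003
    return acc

def throughput(workers: int, tasks: int = 16, n: int = 500_000) -> float:
    # Tasks completed per second with a given worker count.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(spin, [n] * tasks))
    return tasks / (time.perf_counter() - start)

if __name__ == "__main__":
    logical = os.cpu_count() or 2
    physical = max(1, logical // 2)  # rough guess: SMT doubles the logical count
    gain = throughput(logical) / throughput(physical)
    print(f"Throughput at {logical} vs {physical} workers: {gain:.2f}x")
```

Swapping `spin` for a memory-bound task (e.g. chasing pointers through a large array) typically shows a much larger gain with the same harness, which is exactly the workload dependence described above.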

What is much more important is the aforementioned "groundwork"; as it now will be much easier for Intel to go even wider with their execution, with better front-end and scheduling, and they have also uncoupled their vector and integer pipelines. Not to mention this would enable them to make much bigger cores for their server/workstation chips (which I hope they will do), based on the same "base" architecture.

I don't know what Intel will bring with Panther Lake, and I surely hope it's not just another boring mobile-only generation or a refresh with 200 MHz more. Any meaningful improvement will be welcome, and true computational performance is always more relevant than cache gimmicks. :)
 

njshah

New Member
Joined
Dec 27, 2024
Messages
5 (0.45/day)
If you call performance regression part of a 'strong architecture', that's nice, but do you think a consumer wants to pay for it? It's a product that's being sold, or did you miss something?

Regression in gaming is a tiny piece of the picture; gaming isn't the be-all and end-all of computer use. The only thing wrong with Arrow Lake is pricing.


I personally do hope HT variants make a return with a more efficient implementation, because I literally see a 33% jump in performance for a tiny increase in thermals while doing my 3D render work on my 5700X. Intel's implementation hasn't changed much since they first introduced it; SMT from AMD is actually a technically superior implementation of the concept.
 
Joined
Jun 10, 2014
Messages
3,006 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
SMT (HT) is a way for multiple threads to saturate a single core, and the combined throughput of the threads sharing a core will never exceed that of a single thread fully saturating the core. As microarchitectures advance, the incentive to have SMT will only decrease, as the relative gain shrinks for performant code. The only motivation to keep it would be very poorly written code with lots of stalls (mostly cache misses), but some would argue that E-cores take care of that. So sooner or later AMD will in all likelihood follow suit, especially as microarchitectures find ways to avoid most pipeline stalls.

I can assure you that SMT implementations have evolved, as they are the most deeply entangled concept across the front-end and the entire pipeline.
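The claim that SMT throughput can never exceed one thread fully saturating the core can be put in numbers with a toy utilization model (my own illustration, not a microarchitectural simulation): if a single thread keeps the core busy a fraction (1 − s) of the time, a sibling thread can fill the stalled cycles, but the core can never be more than 100% busy.

```python
def smt_throughput_gain(stall_fraction: float) -> float:
    """Toy model: relative throughput of 2 SMT threads vs 1 thread.

    A single thread does useful work (1 - stall_fraction) of the time;
    a sibling SMT thread can soak up the stalled cycles, capped at a
    fully busy core.
    """
    busy = 1.0 - stall_fraction
    return min(1.0, 2 * busy) / busy

print(smt_throughput_gain(0.0))   # fully saturating code: no gain at all
print(smt_throughput_gain(0.25))  # 25% stalls: ~1.33x, like the render example
print(smt_throughput_gain(0.5))   # heavily stalled code: 2x
```

Note how a 25% stall fraction reproduces roughly the 33% render-workload gain mentioned above, while perfectly saturating code gets nothing from SMT, which is the whole argument.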

Regarding your 5700X of the Zen 3 family: its counterpart at the time was Comet Lake, the late member of the Skylake family. The architectural differences between these two are very interesting, despite their being so "comparable" in performance. While Zen 3 featured 4 integer units + 2 vector units (FMA pairs), Skylake had a more complex setup with 4 integer units and 3 vector units combined (4 execution ports in total). So peak throughput was theoretically higher for AMD's design in many cases, but Intel had a much more capable front-end to keep its execution ports fed. In essence, AMD's approach was more brute force vs. Intel's more advanced one, and one side-effect of this was that Intel's performance advantage in difficult workloads came at a high energy cost. The weaker front-end for AMD also means potentially higher relative gains from SMT.
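The "peak throughput" comparison can be made concrete with back-of-the-envelope arithmetic (the unit counts and SIMD widths here are my assumptions for illustration, not vendor-verified specs):

```python
def peak_fp32_flops_per_cycle(fma_units: int, simd_bits: int) -> int:
    # Each FMA counts as two FLOPs (multiply + add) per 32-bit lane.
    lanes = simd_bits // 32
    return fma_units * lanes * 2

# Assuming 2 x 256-bit FMA pipes on both Zen 3 and Skylake-family cores:
print(peak_fp32_flops_per_cycle(2, 256))  # 32 FP32 FLOPs/cycle either way
```

With similar theoretical peaks, sustained performance comes down to how well the front-end keeps those units fed, which is exactly the asymmetry described above.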

One interesting bit of history is how the two companies have been evolving since then; AMD have improved their front-end a lot, but Intel has interestingly split up their execution ports with Arrow Lake, which is very likely a large contributor to why Arrow Lake has gained so much in energy efficiency. (It also goes to show how some entangled design features have to come to an end as scaling becomes too difficult.)
 
Joined
Aug 25, 2011
Messages
249 (0.05/day)
Location
Poznan, Poland
By the "patch notes" - you mean the Description?

I don't see any "patch notes" anywhere except the Description column in the table.

I see it has just been re-posted though, and this time (for the first time, as far as I know?) they specifically mention the CSME v2.2 kit, so obviously that's good!

Version: 2.26.AS03 [Beta]
Date: 2024/12/26
Size: 11.33 MB
Update method: Instant Flash / Flashback
Description:
1. Update CPU Microcode to 0x114
2. Update Intel ME version (19.0.0.1854, V2.2)

Will hopefully get to building it soon now

Yes, AS01->AS03 had the same description for the Z890 OCF and Z890I Nova (the OCF has 2.26, the Nova has 2.23, but updates are the same across the whole line of Z890 mobos, and all went from .AS01 to .AS03 in a week). I literally see no difference on the Z890I Nova. I haven't checked 2.26.AS03 for the OCF yet, but AS02 was the same as the two versions before it. RAM OC profiles and other things are also the same.

One of the differences I noticed is that on the Z890 OCF, you can choose any microcode, including 0x114, from a list in the BIOS. On the Z890I Nova, there is no 0x114 on the list, but the auto option uses the 0x114 microcode (CPU-Z shows it, for example).
The second thing is that my worse 265K, which I'm using with the ITX Nova, has a score of 78 on the 0x113 or earlier microcode but 82 on 0x114. On the ASRock scale, that moves it from below average to average. The better 265K runs with the OCF and scores 84 on all microcodes. It doesn't matter much to me, but it's interesting that the score changed.
 
Joined
Feb 1, 2019
Messages
3,684 (1.70/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
A "product" in this case has multiple variables; performance is obviously one of them, but it's not the only thing. Performance is not a blanket term: it varies between games and productivity. The guy you responded to already pointed out the performance is only "lacking" in gaming, not in productivity apps.

Reliability is another.
Efficiency is another.
After Sale-Customer Service is another.
Obviously Pricing is another.

I too agree with the other guy: all these clickbait reviewers hating on the architecture run their benchmarks at 1080p, but the reality is, that's not real world. People with a 285K aren't gaming at 1080p.

I made a post a page or two ago, linking a YouTube video and asking a specific question to a guy in terms of "FPS per Power Draw" and the guy never responded.

We all have different things that matter to us. Primary for me is the heat generation of the processor, so if I have to "regress" in performance by 3-7% but get 50% better efficiency, that's a win for my niche needs - but I realize I'm in the minority.
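The "FPS per Power Draw" metric asked about earlier in the thread is trivial to compute if a reviewer published both averages over the same run. A minimal sketch, with purely hypothetical figures (NOT measurements from any review) just to show the shape of the comparison:

```python
def fps_per_watt(avg_fps: float, avg_package_watts: float) -> float:
    # Efficiency over one benchmark run: (frames/s) / (J/s) = frames per joule.
    return avg_fps / avg_package_watts

# Hypothetical figures, NOT measurements:
tuned = fps_per_watt(200.0, 80.0)          # slightly slower, power-limited setup
unconstrained = fps_per_watt(230.0, 160.0)  # faster, unconstrained setup
print(f"{tuned:.2f} vs {unconstrained:.2f} fps/W")
```

The catch, as noted above, is that both numbers must be averaged over the same benchmark duration, which is exactly the data most reviews don't publish.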

Some people might have just sworn off AMD for their terrible CS, who knows.

Just waiting here for ASRock's 0x114 patch to drop (again) before building!
I wish we would stop having sheep reviewers; they all test the same configurations. We need them to differ from each other and test different things.

Why is the best GPU always used in CPU tests? We are told it's to maximise the effect of CPU power on games. While that's true, showing the maximum effect is not necessarily what they should be testing: test a variety of specs and go for more real-world type builds. Vice versa as well.

I think Zen 1 got an easy ride primarily for three reasons: (a) there is a lot of AMD bias out there due to past Intel actions, which people won't let go, so it affects judgement; (b) I remember Zen 1 being pretty cheap; and (c) it was clear progress from their disaster of a previous gen.
 