
Alder Lake CPUs common discussion

AIDA's memory stress test sucks - use Karhu RAM Test or TestMem5

I don't have Karhu but I've got TestMem5. I still need to try the preset with the 20-pass loop on this test drive. Even so, AIDA's memory stress test does and will fail if you're having big memory stability issues. I've been running 3+ hrs of WCG load and so far no WHEAs... no BSOD.
 

I used to use AIDA exclusively... but then I passed 24-hour AIDA runs and kept getting a WHEA once EVERY 3 DAYS and randomly failing to wake from suspend... just weird stuff... I thought "must be something else, the RAM is stable" -- nope, I had a bad stick of RAM (it failed in Karhu at less than 150% coverage at stock; I narrowed it down to the stick and all is stable now... that took ~3 mins vs MONTHS of testing with AIDA).

If you look at OC communities online, AIDA is specifically on the 'do not use' list: you have to run it FOREVER, and multiple times, to find instability. It can fail unstable RAM in 5 mins, or pass it for 3 hours... It just ends up wasting your time as you work toward your max OC.
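To put rough numbers on why short passes are unreliable: if a marginal OC only throws an error occasionally, the odds of catching it grow slowly with test time. Here's a toy Poisson sketch (the once-per-8-hours error rate is a made-up example, not a measured figure):

```python
import math

def detect_probability(errors_per_hour: float, test_hours: float) -> float:
    """Chance a stress test sees at least one error, assuming errors
    arrive independently at a constant average rate (Poisson model)."""
    return 1.0 - math.exp(-errors_per_hour * test_hours)

# Hypothetical marginal OC that errors on average once every 8 hours:
rate = 1 / 8
print(f"{detect_probability(rate, 3):.0%}")   # a 3-hour run: 31%
print(f"{detect_probability(rate, 24):.0%}")  # a 24-hour run: 95%
```

So even a full day of a test that only stresses memory indirectly can pass a config that errors every few days, which matches the experience above.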

This was not doable with my AIDA results:
1636253233427.png

I only reboot for updates with the current RAM OC, and the proper testing utilities let you confidently overclock tertiaries and tune RTL/IOL without having to sacrifice anything to the RAM stability gods.
 

Not sure about the 1usmus v3 preset, but just one run of anta777 Extreme1 ensures near-total stability for mild OCs. 1.5 hours on a 2x8GB kit and 2 hours on a 2x16GB kit. It's still recommended to run it a few times (I do 3) and verify with another test, like an overnight run of HCI.

Untested memory is a bad call on a daily rig. You will never see reboots if you're not running memory-intensive loads, but sfc /scannow will always find the damage. Windows is extremely enthusiastic about self-destructing on unstable memory.
 
Maybe I'm stupid or inattentive today but I don't see a single Tiger Lake CPU in the video. I certainly didn't mean Rocket Lake (loosely based on Ice Lake) which is a desktop CPU.
Nope, that was my bad, I should have explained. Tiger Lake is effectively the mobile (10nm) version of Rocket Lake. Tiger Lake performance numbers will be lower than Rocket Lake's, so you can derive Tiger Lake figures from there. Right now there is no way to compare directly; an apples-to-apples comparison just isn't viable at the moment. As Alder Lake leaps ahead of Rocket Lake by a big margin in most cases, we can safely conclude the difference will be bigger against Tiger Lake, hence Steve's video in which he compares the desktop 11xxx series to the new 12xxx series. It's desktop vs desktop, but if you look at those numbers and the ones W1zzard has produced, then compare to the tests done previously between Rocket Lake and Tiger Lake, you can conclude what the direct comparison would be.

If today I get 196 (which I did) and yesterday 183, which is correct?
Likely both, as, again, it's margin-of-error variance. And the variances are in favor of faster performance. What you likely have is a Windows service running, shutting down or stopping during your testing and restarting as testing continues. It happens; it's a Windows thing. As such, everyone will have a similar experience, so your testing results are still valid. So include all results and average between them. Alternatively, you can find the service causing the variance and disable it during testing.

To throw out results that do not meet your satisfaction is to effectively cheat the testing (not accusing you of cheating, only saying that is the effective result), and that will not give a clear picture of the actual performance.
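As a quick sketch of "include all results and average between them", Python's statistics module does the bookkeeping; the run numbers below are made up around the 183-196 figures from this exchange:

```python
from statistics import mean, stdev

# Hypothetical benchmark runs across two days, keeping every result:
runs = [183, 187, 196, 191, 189]

avg = mean(runs)
spread = stdev(runs)  # sample standard deviation = run-to-run variance
print(f"average: {avg:.1f} fps, spread: +/-{spread:.1f} fps")
```

Reporting the average together with the spread shows readers both the performance and how noisy the test is, instead of silently picking the run you like.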
 
Well put.
 
Please excuse my previous post if any of you thought I was trying to make claims of some kind of true tested stability. This is a new setup/platform and I'm just starting to get a feel for it. I was attempting to show a quick memory stress using Gear 1, and then I left it running for a few hours on WCG and it was still going when I came back.

Here are two more showing AIDA memory benchmarks for both Gear 1 and Gear 2... I'm not sure if I can get my 12600K sample to run Gear 1 @2000MHz??

4000C17 Gear 2:

i5-12600K 4000C17 Gear 2.PNG


3900C17 Gear 1:

i5-12600K 3900C17 Gear 1.PNG
 

No man, not at all... we love your results, just trying to save you stress (lol)
 
FWIW, only memtest86 can test your entire memory. Its free version is enough for the vast majority of people, including OC/tech enthusiasts. Highly recommended. The application is signed, so there's no need to disable EFI Secure Boot.

memtest_mainmenu.png


Windows' built-in memory test can do it too, but it's quite simplistic and not as thorough as memtest86, which can find errors the Windows memory checker can't detect.
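The core idea behind these testers is simple: write known patterns, read them back, and flag mismatches. Here's a toy Python sketch of a walking-ones pass, with a simulated stuck bit so the failure path is visible (the stuck offset and mask are invented for the demo; real testers like memtest86 exercise physical RAM with many patterns, which Python can't do):

```python
def walking_ones_test(buf: bytearray) -> list[int]:
    """Return offsets that failed to hold a walking-ones pattern."""
    bad = []
    for bit in range(8):
        pattern = 1 << bit
        for i in range(len(buf)):   # write the pattern everywhere
            buf[i] = pattern
        for i in range(len(buf)):   # read it back and compare
            if buf[i] != pattern:
                bad.append(i)
    return sorted(set(bad))

class FaultyRAM(bytearray):
    """Simulated RAM with one cell whose bit 3 is stuck at zero."""
    STUCK_OFFSET, STUCK_MASK = 42, 0b00001000

    def __setitem__(self, i, value):
        if i == self.STUCK_OFFSET:
            value &= ~self.STUCK_MASK & 0xFF  # write silently loses bit 3
        super().__setitem__(i, value)

print(walking_ones_test(bytearray(256)))  # [] -- healthy buffer
print(walking_ones_test(FaultyRAM(256)))  # [42] -- the bad cell is caught
```

The walking-ones pattern catches stuck bits; other patterns (random data, address-in-address) catch different fault types, which is why the real tools run a whole battery of tests.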
 
See PM.
 
Steve did. He was very vocal about it.
DDR4 vs DDR4 right?

Most of the reviewers seemed to only do DDR5.

Having thought about this product release some more, I think I over-praised it.

Some of the performance gain is from DDR4 to DDR5, masked by the launch reviews being DDR5-only.
Any OS other than Windows 11 is likely to yield problems; the solution is probably to disable e-cores. Curious how long it will take Linux and BSD to get scheduler updates. It took them a while to resolve the Ryzen issues, and those were less complicated to fix.
DDR5 is expensive right now; coupled with motherboard prices, it makes for a very expensive generation.
Power consumption for me is still too high; Intel needs to do a lot of work in this area.

DDR5 will get cheaper in later generations as production ramps up, but by the time that happens we will have a newer Intel chip.

It's a shame we've seen no boards with dual DDR4/DDR5 slots. Users are having to choose between gimped performance or elevated RAM prices.
 
I get where you are coming from, but it is not a Windows service. It is directly related to the e-cores and how Windows is scheduling them. It may be a "valid" result for a user, but it is not consistent across 8 different memory frequencies, which makes it worthless for data collection. I can only explain this anomaly as an e-core scheduling problem.

So either I include 2 tests, one with it enabled and one without, or I remove it completely, which I am doing. It is garbage data that is unrelated to system memory and the review. The article is not about Windows 11 gaming or GPU performance. It is a memory review.

Edit: Skewing the results or removing data to show favoritism is a real thing, yes. In this case I am removing it because the benchmark data for this game is unrelated to the memory and only muddies the overall data set. So I'll ask: if DDR5-5200 and DDR5-6400 both get anywhere from 183 to 196 avg per run, how does that contribute to the overall conclusion of anything that isn't a Windows 11 gaming performance review? In my opinion it does not add to the conclusions relevant to the review.

I only posted originally to let people know that this could be a problem in other games as well. Not to turn it into a debate about review ethics.
 
The 5600X gets trounced by the 12600K using DDR4, and in some cases even the 5800X gets creamed; no, it's not thanks to DDR5, as you can see.

And if you have the right cooler, power consumption really is a non-issue after all.
 
I wouldn't recommend it at all. In my experience it never detects unstable RAM. I have used Prime95 and some other memory testing software, but memtest86 just never does what it claims.
 
I always go Memtest86 > Memtest64 > AIDA64 > Prime95. It's the best way to eliminate problems. Memtest86 is great for narrowing down voltage problems and avoiding corrupting Windows. A failure in tests 5-7 is 99% a memory controller problem, tests 1-2 point to low memory voltage, and tests 3-4 to timings. Follow these and you can save yourself a lot of headaches.
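That rule of thumb can be written down as a tiny lookup. To be clear, the mapping below just restates the heuristic from this post; it is not from any official memtest86 documentation:

```python
def likely_cause(test_number: int) -> str:
    """Map a failing memtest86 test number to the poster's heuristic."""
    if 1 <= test_number <= 2:
        return "low memory voltage"
    if 3 <= test_number <= 4:
        return "timings"
    if 5 <= test_number <= 7:
        return "memory controller (IMC) problem"
    return "no specific heuristic for this test"

print(likely_cause(6))  # memory controller (IMC) problem
```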
 
It seems like this thread has outlived itself.
 
Well, this is interesting.

1636321001501.png


 

You know what's the funny part about this? It shouldn't take that much effort for AMD to do something about this. I've brought this up a few times before.

The 2-CCD CPUs function very differently than the 5600X and 5800X. If Windows is doing some low-priority task in the background, it can use a low-quality core on the 2nd CCD (which happens pretty often, actually) to keep it off the important cores on the 1st CCD, which are always better binned as a whole. This CCD2 core can regularly see some pretty high clocks/usage/power draw, almost as high as the preferred cores.

In that sense, it achieved some semblance of big.little, amongst homogeneous cores, before Alder Lake came around. Unfortunately, CPPC and Windows only understand how to use that 1 core on CCD2 for this. Any higher priority and the low priority load won't keep expanding into CCD2 - it'll just be treated as another task for the preferred cores, then the rest of CCD1 if it needs more threads.

But something tells me AMD won't, because it's not wise to place your trust anywhere near AGESA if the last two years have taught me anything, especially for already-released products. Perhaps changes will be made for Zen 4.
 
not far away from a 5900x
View attachment 224071
You finally made it work.:toast:

I haven't seen any reviews that directly answer the question: why did Intel create and use E-cores?

The answer is simple: E-cores improve MT performance without blowing up your power budget; they are a lot more effective than P-cores in terms of performance per watt in MT tasks.
Actually, E-cores allow Intel to clock the P-cores much higher. So Intel dominates when it comes to single- or low-thread-count performance and stays competitive with the 5950X in MT (also by increasing consumption). A hypothetical 12 P-core CPU could maybe achieve the same by allowing high clocks for up to 3-4 cores but drastically reducing frequencies under MT load. Obviously Intel's engineers know better than I do which plan is better for now and the years to come.
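To illustrate the trade-off with a toy calculation (every throughput and wattage figure below is invented for the example, not a measured Alder Lake number):

```python
# Assumed per-core figures, purely illustrative:
P_CORE = {"throughput": 100, "watts": 25}
E_CORE = {"throughput": 55, "watts": 8}

def perf_per_watt(core: dict) -> float:
    return core["throughput"] / core["watts"]

print(f"P-core: {perf_per_watt(P_CORE):.1f} perf/W")  # 4.0
print(f"E-core: {perf_per_watt(E_CORE):.1f} perf/W")  # 6.9

# Spending a fixed 50W budget on extra MT cores:
budget = 50
print(budget // P_CORE["watts"] * P_CORE["throughput"])  # 200 from 2 P-cores
print(budget // E_CORE["watts"] * E_CORE["throughput"])  # 330 from 6 E-cores
```

Under these made-up numbers, the same power budget buys substantially more MT throughput from E-cores, which frees headroom to clock the few P-cores higher, exactly the shape of the argument in the posts above.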
 
Not sure whether you're aware, but the difference between a so called high or low quality core on Ryzen is dictated by manufacturing differences. It's probably something like 3%.
 

Okay, cool story? :confused: CCD2 will still always be binned worse as a whole, it's how 5900X and 5950X work.

That's completely unrelated to what I was saying. The point is that AMD understands the importance of heterogeneous compute but only sees fit to dedicate a single core to background tasks, while the remaining 5 or 7 cores on CCD2 (which may even surpass the CCD1 cores in quality) never do anything beyond participating in all-core loads.

They have the cores to make it happen. Those cores aren't doing anything 90% of the time.
 
With my 12600K, even though I've been able to load Windows with memory at Gear 1 @1933/1950MHz, I've been having stability issues. I'm thinking that Gear 1 @1900MHz may end up being a good daily-use setting for this particular sample.

HyperPi 32M with 12 threads... (E-cores disabled) 3800C16, DRAM ~1.380v and VCCSA ~1.170v:

i5-12600K 3800C16 Gear 1 HyperPi 32M 12T dram 1.38v vccsa 1.17v.PNG



A little bump up on the core overclock for this sample... P-cores set to 50x and E-cores set to 38x (AVX offset -2, so 48x), Vcore set to 1.235v. Temps are probably getting close to the limits of this old Noctua NH-U12P cooler.

i5-12600K 50x38x 3733C16 Gear 1 AIDA CPU FPU.PNG
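For anyone following along, the clock arithmetic behind those multiplier and AVX-offset numbers is just multiplier times BCLK, with the offset lowering the multiplier whenever AVX code runs. A quick sketch:

```python
BCLK_MHZ = 100  # Alder Lake base clock

def effective_mhz(multiplier: int, avx_offset: int = 0) -> int:
    """Core frequency in MHz after applying an AVX multiplier offset."""
    return (multiplier - avx_offset) * BCLK_MHZ

print(effective_mhz(50))     # 5000 MHz for non-AVX load
print(effective_mhz(50, 2))  # 4800 MHz once the -2 AVX offset kicks in
```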
 
yeah :) and the RAM even runs at 3600 Gear 1 now.
I was used to Intel's "oh no, the RAM runs above 2666! Let's run VCCIO/SA at 1.6V!"
But that's not the case anymore. It seems to just stay at the stock 1.05V (SA) and only changes when you actually set it manually.

At 1.1V everything is fine (passed 1000% in memtest).
 
Low-powered tasks can run on a single core just fine. The more you distribute those tasks, the more you prevent cores from sleeping/power gating. Never assume you've thought of something that a whole team of engineers who earn their living from it haven't thought of before. I know I don't.
 
Got to DDR4-4200 at a 1:1 ratio. DDR4-4400 boots, but it will BSOD without 1.4v on the SA.
Curious how much VCCSA you were needing for Gear 1 @2000MHz?


Also, on a side note... my last/previous AIDA screenshot proved unstable at 5GHz all-core. I think the -2 AVX offset and the FPU stress component were dropping too many cores down to 48x. I took the same settings, disabled the E-cores (for the WCG scheduling bug), and the OC quickly triggered "Watchdog Timeout" errors while running WCG load with 12 threads.
Currently running WCG @49x (12T) with ~1.25v under load. I think I'm going to need better cooling to stabilize and run @50x all-core.
 