
Alder Lake CPUs common discussion

Wanna elaborate on the mathematical impossibility of it?
 
Sure, I'll take a stab at it. With DDR4, going from 3200 MHz to 4400 MHz is roughly a 27% difference in raw clock speed.
However, going from 3200 to 4400 would REQUIRE more relaxed timings, thus diminishing the actual effect of the speed increase.
In the real world, you might have gained as much as a 10% boost in overall RAM performance, maybe even 15% if you bought some 4400 with incredible timings.

However, there is no effing way you got a 40% to 50% increase in performance from that memory speed jump!
So either you are wildly exaggerating or you are full of what comes out of the south end of a north-bound moose...
 
Why would you need more relaxed timings? Sorry, but it's obvious to me you've never tinkered with DDR overclocking. A 3200c16 XMP kit gets around 45k bandwidth and 55 ns latency. A tuned 4400c16 gets 70k bandwidth and 35 to 37 ns latency.

I don't have a 10900K anymore, but a few days ago I tested an 11600K. With the 3200c16 kit I was getting 65 fps in CP2077 in front of Vi's apartment. With 3333c12 manually tuned I was getting over 90.

A 30%+ performance gain in gaming is pretty common with memory tuning, especially if you go down all the way to RTLs; I've seen Ryzens get an 80% increase in minimum fps.
 
Sorry, but it's obvious to me you've never tinkered with DDR overclocking.
Irony.
A 3200c16 XMP kit gets around 45k bandwidth and 55 ns latency. A tuned 4400c16 gets 70k bandwidth and 35 to 37 ns latency.
That's total nonsense. But even if it were true, you're still not getting anywhere near 40% performance improvement. Period, end of story, full stop.
 
I haven't a clue who's correct, but as a rule I go with assertions supported by numbers over data-free certainty.
 
I haven't a clue who's correct, but as a rule I go with assertions supported by numbers over data-free certainty.
Ok. Let's do the simple math. What is the difference between 3200 and 4400? 1200. And 4400 / 1200 ≈ 3.7, so 1200 is about 27% of 4400. Ok, let's remember that number.
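For what it's worth, that 1200 gap gives a different percentage depending on which speed you divide by; a quick Python check of both framings:

```python
old, new = 3200, 4400
gap = new - old                       # 1200

# The gap as a share of the higher speed -- the figure used above:
share_of_new = gap / new * 100        # ~27.3%
# The conventional "percent increase" over the lower speed:
increase_over_old = gap / old * 100   # 37.5%

print(round(share_of_new, 1), increase_over_old)
```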

Now let's talk about timings.
Here is the best example of 4400 I could find;

With 3200, we'll stay with the same brand and product line;

Now let's look closely at those timings.
4400 MHz
16-19-19-39

3200 MHz
16-18-18-38

Doing the math, those differences are minor until you factor in the speed difference. In the case of these two kits, the approximate differences would amount to about a 21% performance difference.

However, if we go with a better-performing 3200 MHz kit, the result changes;

Now let's compare again.
4400 MHz
16-19-19-39

3200 MHz
14-14-14-34

Hmmm... The math shows that performance difference shrinking greatly, because the 3200 MHz kit now performs better thanks to its tighter timings.

What have we learned here? Simple: the maximum increase from 3200 MHz to 4400 MHz, assuming similar timings, is about 21%, not the 40% to 50% claimed by @fevgatos. And that's assuming one would intentionally buy a lower-performing 3200 MHz RAM kit. The smart buy would be the kit with the tighter timings, and at that point the performance difference becomes far less rosy.

So once again, the claim of 40% to 50% is hogwash. Total and complete nonsense.
 
That's better, but fevgatos gave (possibly erroneous) measurements vs. your perhaps well-founded conjecture; I don't know enough about where the bottlenecks etc. are to judge for myself.
 
That's the thing, the percentage math doesn't lie. The laws of physics require that two similar things perform similarly. When you increase the speed of one, its performance will increase by a proportional degree over the other.
 
I overclocked my RAM with my 10850K @ 5.1 - and I think the SOTR trial would go from a 124-125 minimum CPU result with XMP 3200 14-14-14-31 to something like 152 min with 4133 C17 with tuned subs -- so about a ~22% difference between the two kits in min FPS for about ~29% more bandwidth. That was probably the most sensitive bench that I could reliably reproduce.

If you OC RAM, though, you're probably also yeeting the cache and core, so you can get 40-50% more FPS, but yeah, probably not from RAM alone.
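A quick sanity check on those two percentages, using the numbers quoted above (124.5 is the midpoint of the 124-125 range):

```python
min_fps_xmp, min_fps_tuned = 124.5, 152.0   # min FPS: XMP 3200 vs tuned 4133
rate_xmp, rate_tuned = 3200, 4133           # DDR4 transfer rates

fps_gain_pct = (min_fps_tuned / min_fps_xmp - 1) * 100   # ~22%
bandwidth_gain_pct = (rate_tuned / rate_xmp - 1) * 100   # ~29%

print(round(fps_gain_pct), round(bandwidth_gain_pct))
```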
 
I specifically compared a 3200c16 kit running XMP with a B-die kit being manually tuned. OBVIOUSLY, if you compare a B-die kit to another B-die kit (in your case, the 3200c14 you just put up there) the difference shrinks. Also, as I've said before, you have no clue how DDR OC works. Judging by the first 4 numbers is just... wrong. Tuning a DDR4 kit involves around 30+ different timings, and all of them are going to be lower on a tuned 4000+ kit than on a 3200c16. Again, these are things that anyone who has tried memory OCing would know.

Also, comparing percentages between frequency and CLs is just wrong. The CL timing has to be translated into actual nanoseconds, by dividing it by the frequency. The formula is CL / frequency x 2000. Basically, a 4000c16 kit, besides the bandwidth, has a lot lower latency than a 3200c16 kit. That's without touching any other timings. As for the secondaries and tertiaries themselves, a 3200c16 for example runs tRFC at 560 on XMP, while you can tighten it down all the way to ~280-320 on a B-die kit. tREFI runs at 8192 on a 3200c16 kit; you can run it at 65535 on a B-die kit. How do you compare that with a calculator?
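The conversion described here can be sketched in a couple of lines; the 2000 factor exists because the transfer rate is double the actual memory clock. The kit numbers are just the ones quoted in the thread:

```python
def cas_latency_ns(cl, rate_mt_s):
    """True CAS latency in nanoseconds for a DDR kit.

    rate_mt_s is the DDR transfer rate (e.g. 3200); the actual memory
    clock is half of that, so one cycle lasts 2000 / rate_mt_s ns.
    """
    return cl / rate_mt_s * 2000

print(cas_latency_ns(16, 3200))   # 10.0 ns (3200c16 XMP)
print(cas_latency_ns(16, 4000))   # 8.0 ns  (4000c16, before touching sub-timings)
print(cas_latency_ns(16, 4400))   # ~7.3 ns (4400c16)
```

Same CL, higher rate: lower real latency, which is the point being made.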

So, question: have you ever tuned your RAM, or are you just theorycrafting?

That's a GPU bottleneck. The 10850K can get over 220 in SOTR. Check the game's CPU numbers under the average FPS. That's how many FPS your CPU can do if there is no GPU bottleneck. I was getting around 240-250 with tuned RAM.
 
So this was the freeware version that I did that RAM difference bench on -- I have the full version of the game, and it was 238 on 1080p lowest, so your numbers definitely line up... Unfortunately, I didn't write down RAM testing settings or anything on the later builds because I am lazy af.

[screenshot of the benchmark result]
 
I was talking about the highest preset settings, basically the exact same settings Guru3D is testing. Here is one of my runs; I was testing the 3090 in this one, but it doesn't really matter, you can see the CPU game graph for CPU results.

[screenshot]



And here it is at 1080p highest

[screenshot]
 
I specifically compared a 3200c16 kit running XMP with a B-die kit being manually tuned. OBVIOUSLY, if you compare a B-die kit to another B-die kit (in your case, the 3200c14 you just put up there) the difference shrinks. Also, as I've said before, you have no clue how DDR OC works. Judging by the first 4 numbers is just... wrong. Tuning a DDR4 kit involves around 30+ different timings, and all of them are going to be lower on a tuned 4000+ kit than on a 3200c16. Again, these are things that anyone who has tried memory OCing would know.

Also, comparing percentages between frequency and CLs is just wrong. The CL timing has to be translated into actual nanoseconds, by dividing it by the frequency. The formula is CL / frequency x 2000. Basically, a 4000c16 kit, besides the bandwidth, has a lot lower latency than a 3200c16 kit. That's without touching any other timings. As for the secondaries and tertiaries themselves, a 3200c16 for example runs tRFC at 560 on XMP, while you can tighten it down all the way to ~280-320 on a B-die kit. tREFI runs at 8192 on a 3200c16 kit; you can run it at 65535 on a B-die kit.
Folks, this is a textbook example of someone trying to move the goalposts rather than admitting they were wrong or exaggerated. It's also an example of someone letting their pride override logic.
How do you compare that with a calculator?
Very easily.
So, question: have you ever tuned your RAM, or are you just theorycrafting?
No, I've never done anything of that sort... never... not at all... /s

Alder Lake doesn't have a magical IMC, and the laws of physics prevail. There is no possible way to get a 40% to 50% RAM performance boost going from 3200 MHz RAM to 4400 MHz RAM. End of story, full stop. And now it's time to use a button...
 
I was talking about Comet Lake actually, not Alder Lake. But it doesn't matter, you are wrong either way. There is no moving of goalposts; from my very first post I was talking about a 3200c16, I even wrote it, lol. Using your argument against you: maybe because you don't want to admit you're wrong, you pretend that I was talking about a B-die 3200c14.

If I install my 11600K back and run Cyberpunk in front of Vi's apartment with 2 different RAM configurations and prove you wrong, do I get something?
 
Validated, because at the minute I think it's you stretching reality. I have OC'd memory, though I haven't messed with tertiary timings. But still, show us the way; or do you think we wouldn't want an extra 50% FPS?
 
What RAM and CPU do you have?

What timings have you tinkered with already?

Generally speaking, the "easy" way of doing it is setting IO / SA to the max voltages you are comfortable with running 24/7 (that's up to you, honestly), setting VDIMM to 1.5 V, enabling XMP and starting to raise the frequency. Then you need multiple TM5 runs (10 cycles, with the 1usmus or anta777 extreme config) every time you change any timing, to validate stability. It's not something you can do over a weekend, unless you already know the rough capabilities of your DIMMs and IMC. If you don't want to spend an eternity, the primaries, tREFI and tRFC are the most important timings. You can use AIDA64 to test your latency and see your progress, but make sure to boot in safe mode for consistent results. Every app running in the background makes it inconsistent (Steam by itself, for example, adds 2-3 ns to your latency).

I can post some old gaming benches where I ran 3200c16 and 4400 tuned, but would you actually believe it if I told you it's just the RAM making the difference? If yes, I can try to find them, no problem.

*The 50% difference obviously applies to CPU-bound runs, I mean it's self-evident, right? Don't expect to be running 1440p ultra or 4K and see any meaningful difference.
 
I'm sorry, but initially it seemed you were talking about gaming FPS gains. I have run B-die on tight 1usmus settings, all of them too, and didn't see 50% gains in anything; different platform though.
So like I said, you need to validate your statement. Surely someone has demonstrated your point in a provable way; it shouldn't take you rebuilding anything, since it's a big world and I know others OC memory.
I'll have a look round too.
 
Well, for example, this is my 10900K running AOTS DX12 stock with tuned RAM. A 10900K stock with XMP gets around 130 in the normal batch. You can validate that with CapFrameX; they did some runs back when 5800X performance was leaked. It's not the most memory-sensitive game, but it's up there. AOTS runs the CPU logic completely independently of the GPU, so the numbers are consistent across GPUs (I had a 1080 Ti in this one). The biggest gains are in Far Cry 6 and SOTR.

[screenshot]


I'm sorry, but initially it seemed you were talking about gaming FPS gains.
I do, but that only applies to me and my specific use case. I'm playing Warzone and COD at 5120x1440, but with most settings on low and DLSS balanced. With these settings I'm mostly CPU-bound, so for me RAM makes a difference even at that resolution. Other people that prefer to run high or ultra will be GPU-bound, so they don't need to tinker with their RAM as much. Also, some games just don't care about RAM at all, so even if you are playing at 720p, RAM won't make a difference (the whole Crysis series, for example). RAM makes a difference when there are lots of cache misses, so the cache has to get data from memory.
 
I was talking about Comet Lake actually, not Alder Lake. But it doesn't matter, you are wrong either way. There is no moving of goalposts; from my very first post I was talking about a 3200c16, I even wrote it, lol. Using your argument against you: maybe because you don't want to admit you're wrong, you pretend that I was talking about a B-die 3200c14.

If I install my 11600K back and run Cyberpunk in front of Vi's apartment with 2 different RAM configurations and prove you wrong, do I get something?
You have proven nothing. You are only making claims that are unsupported by reality. We all know you're full of moose droppings, but hey, do carry on.
 
I don't care about proving anything. Everyone that tinkers with RAM knows this. You specifically have no idea how memory OC works; that much you have proven. You don't know how timings scale with frequency, and instead you are comparing flat timings across DIMMs with different frequencies. I'm sorry, but it's obvious you don't know much about the topic, so why are you so invested in your opinion when it's obviously uninformed?
 
Have you guys never seen this table someone made? The combination of CAS latency and RAM clock is what makes speed, it's that simple. This is why I have 3600 C14 and not something else; it's basically as fast as 4600 C18. That 4000 C15 is nice stuff, lexluthermiester

[image: RAM speed chart]
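That table's equivalences are easy to reproduce: first-word latency is CL / rate x 2000 in nanoseconds, so kits with the same ratio land in the same place. A minimal check, using the kits named above:

```python
def first_word_ns(cl, rate_mt_s):
    # CAS latency in nanoseconds: cycles * (2000 / transfer rate)
    return cl / rate_mt_s * 2000

print(first_word_ns(14, 3600))   # ~7.78 ns
print(first_word_ns(18, 4600))   # ~7.83 ns -- effectively the same as 3600 C14
print(first_word_ns(15, 4000))   # 7.5 ns   -- the "nice stuff"
```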
 