
Ryzen 7600, 64GB RAM: 2 vs 4 sticks/channels vs higher speed

8000MHz on Ryzen o_O

How is the Infinity Fabric? 1:1?

Still wouldn't try more than 6400MHz memory.

I ran the super cheap Ripjaws 4000MHz CL18 kit at 3733MHz CL16 on an MSI board with a Ryzen 5000 chip and it ran stable; with other board/CPU combos it didn't.

There is a reason people run 6000-6400 on AMD..
 
I guess you can't have it all.

4x16GB 6000MHz CL30 with better subtimings than I have now is the best I can hope for (2x32GB was a little more expensive).

Before I forget, current memory:

[screenshot]

[screenshot: AIDA64 Extreme v7.50.7200]
 
y-cruncher and AIDA64 are proper benchmarks and tests. Not that thing.
 
Time to test the new RAM in less than 30 min; going to go get them.

[screenshot: Team Group T-Create Expert DDR5-6000, 32GB, CL30, dual-channel kit (2 pcs)]


First test (look at subtimings):

[screenshot: PerformanceTest 11.0 Evaluation Version, first run]


Second test (look at subtimings)

[screenshot: PerformanceTest 11.0 Evaluation Version, second run]



4x16GB 6000MHz CL30 works fine and the memory I bought, knock on wood, is stable.
 
That's arguing against something I never stated though. I said that more DIMMs is harder to run, not that it is impossible to work at all, nor that it isn't ever worth doing.
I am afraid you missed my point - my fault for failing to explain clearly. First, please note my very first words in my reply were "while perhaps true". Then I said "it is misleading". I did not say it was wrong. And perhaps I should have said "confusing" instead of "misleading". To "mislead" suggests intention and in no way do I believe you intended to mislead.

I acknowledge you said (and agree with it),
If it's not preventing you from reaching that point, then the "harder to run" point is moot.
But you also said (my bold underline added),
More DIMMs is harder to run. That's basically a universal truth.
Which is it? A universal truth? Or moot? It can't be both.

So part of my concern is your claim that it is a "universal truth" that more DIMMs is harder to run. I feel that's "confusing". And more to the point, it might put off, dissuade, or intimidate the less experienced from even trying it.

So what if it is harder to run? And how much harder? It is harder to run with 2 drives instead of 1. It is harder to literally run 200 meters than 100 meters. It is harder to run with 2 monitors instead of 1.

My concern was that "some" might interpret your comment to suggest because it is harder, that it would be best to not even try. I am NOT suggesting that is what you meant, I am saying some might interpret your words that way. That is why I went on to explain for most, it is not harder at all.

So my message is, if the board has 4 slots for 4 sticks, and the user inserts 4 sticks the motherboard maker says are compatible, there is no reason to suspect they will not work as expected right out of the box (i.e. with the default settings). The only "harder" part is the extra physical effort required to insert 4 sticks instead of 2.

And, it should be noted, when it comes to tweaking settings (changing the defaults) to increase frequency, doing so with just 2 sticks while still maintaining stability (and managing heat) is not always a walk in the park either.

Sorry again if my inadequate explanation caused confusion.
 
@gasolin
unless something has changed, buying two separate kits of the same dual-channel RAM isn't a good idea on AMD, unless the sticks (always) have the same die underneath.
while it might work, it's always better to buy the total amount of RAM you want in a single kit (no matter how many sticks).

and stable based on what? testing with TM5/HCI (and the like)?
otherwise...
 
Where the reputation for "harder to run" comes in is with users who wish to tweak the clocks and operating voltages to overclock or underclock their systems. In some cases, it takes a little longer to fine-tune for stability with 4 sticks instead of 2.

For the vast majority of users who simply wish to increase their RAM and intend to stick with the default settings, as the OP stated he intended, simply adding 2 more sticks is not harder to do and not harder to run.
One thing you seem to be missing is that, unlike DDR3 or DDR4, current DDR5 consumer offerings are way more finicky with 4 DIMMs, so slapping 4 sticks into a DDR5 platform has a way higher chance of making XMP/EXPO not work anymore, especially at higher capacities.
As an example, getting 128/192GB (4x32/48GB) to 6000MHz is almost impossible, and getting it stable at anything faster than 5000MHz is hard.
One DIMM of DDR5 is single-channel over a single 64-bit bus split into two 32-bit subchannels. You'd need a Threadripper or Xeon to do quad-channel. However, Intel's new LGA 1851 CPUs run the 2x32-bit subchannels per DIMM independently, so that is pretty close to quad-channel.
You still have a bus size of 128-bit, be it dual-channel DDR4, or that config with DDR5.
Intel's 2x32-bit controllers are no different from a single 64-bit controller that's able to handle 2x32-bit channels at once. Bandwidth is not increased, thus it's not really close in any sense to what we usually call "quad-channel".
Quad-channel would be a 256-bit bus, similar to what Strix Halo will have.
"Channel" is a vague term that depends on the controller and memory module config.
Your GDDR GPU usually has 32-bit controllers, and uses multiple of those to increase the bus size (and no one calls those dual/quad/octa-channel).
Apple uses 16-bit controllers for their M-series lineup, with bus sizes ranging from 128-bit (your regular "dual-channel" desktop config) up to 512-bit (what we would call "octa-channel", found in servers).

When talking about desktops, it's pretty much a convention that a "channel" equates to a 64-bit bus, since that is the bus size of a single DIMM. 2 of those? 128-bit, or the so-called "dual-channel".
DDR5's smaller channels don't change the above convention much: a single stick is still 64-bit, two of those make for 128-bit, and the relationship with bandwidth is unchanged.
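To make the convention concrete, here's a minimal sketch (figures are illustrative, matching the configs mentioned above) showing how total bus width falls out of per-controller width times controller count:

```python
# Total bus width = per-controller width (bits) x number of controllers.
# Illustrative configs, matching the examples discussed above.
configs = {
    "desktop 'dual-channel' (2x 64-bit DIMM channels)": (64, 2),  # 128-bit
    "HEDT 'quad-channel' (4x 64-bit)":                  (64, 4),  # 256-bit
    "GPU with 8x 32-bit GDDR controllers":              (32, 8),  # 256-bit
    "Apple M-series base (8x 16-bit LPDDR)":            (16, 8),  # 128-bit
}

for name, (width_bits, count) in configs.items():
    print(f"{name}: {width_bits * count}-bit bus")
```

Whether you call 8x 16-bit "octa-channel" or something else changes nothing; only the total width matters for bandwidth.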

So my message is, if the board has 4 slots for 4 sticks, and the user inserts 4 sticks the motherboard maker says are compatible, there is no reason to suspect they will not work as expected right out of the box (i.e. with the default settings). The only "harder" part is the extra physical effort required to insert 4 sticks instead of 2.
If you take a look at the QVL of most mobos, high-density configs are not listed at all, so there's a high chance 4x32GB or 4x48GB won't go past 4000MHz and will likely get stuck at 3600MHz (which is the "stock" value for 2DPC 2R on Intel's and AMD's consumer DDR5 platforms), whereas two sticks usually happily work at 6000MHz+ out of the box with XMP/EXPO.
 
I have 4x16GB of the same memory: https://valid.x86.fr/laffxs

It takes a super long time with IBT at max; not even half an hour is enough to finish 1 run (there are 10).

It's stable at high and very high, and max temps reach the low 40s, which is super.
 
My point is that by channels, I mean performance levels.

If you were to take DDR4 results in quad channel, and then look at dual channel for DDR5 (2 sticks), the performance is in line with the updated tech. Trickery, terminology, whatever, the performance is there, and I call it as I see it.

Funny thing is, the market seems to agree with me, as the best-selling DDR5 product is a single stick, since it offers "dual channel" performance levels without any of the issues of multiple sticks.
 
@igormp
don't waste your breath.
i always love when some ppl here chime in while not using the hw involved themselves / having zero experience with it, and base their guidance on past stuff that's completely different in how it works/settings etc.

@gasolin
not sure what you mean.
testing for stability involves running tests for hours, in most cases with 16GB+ even a whole day, to be sure you don't have any intermittent errors.
testing for less is not enough.

@sneekypeet
that would only be true for a (single) dual-rank stick, not SR.
 
My point is that by channels, I mean performance levels.

If you were to take DDR4 results in quad channel, and then look at dual channel for DDR5 (2 sticks), the performance is in line with the updated tech. Trickery, terminology, whatever, the performance is there, and I call it as I see it.

Funny thing is, the market seems to agree with me, as the best-selling DDR5 product is a single stick, since it offers "dual channel" performance levels without any of the issues of multiple sticks.
No, it's not. A single stick of DDR4 is 64-bit, and so is DDR5. The increased performance comes from the higher clocks. The base formula for theoretical bandwidth for ANY memory config is:
bus_size (in bits) / 8 (to get bytes) x frequency x data_rate (2 for DDR, 4 for GDDR)
or
bus_size (in bits) / 8 (to get bytes) x effective_frequency (the "MHz" your DIMMs are sold at)
For dual-channel DDR4 (128-bit) at 3600MHz you'd have: 128/8 * 3600 = 57600MB/s, or 57.6GB/s. Grabbing a random screenshot from AIDA64 with 3600MHz memory:
[screenshot: AIDA64 memory benchmark, dual-channel DDR4-3600]


Pretty close, if I do say so.

For DDR5 at 6000MHz you'd have: 128/8 * 6000 = 96000MB/s, or 96GB/s. Grabbing another pic:
[screenshot: AIDA64 cache & memory benchmark, dual-channel DDR5-6000]


As you saw, the so-called "quad-channel" of DDR5 makes no difference there; what matters is the bus size, which is unchanged. Wanna know why "quad-channel" DDR4 matches your findings for DDR5? It's because it has twice the bus size (4x64-bit = 256-bit bus).
Using a quad-channel 1950X with DDR4 at 3466MHz (weird number because that was my first result from Google Images):
[screenshot: AIDA64 memory benchmark, quad-channel DDR4-3466 on a 1950X]

256 (remember, our bus size is doubled now!) / 8 * 3466 = 110912MB/s, or ~111GB/s.

Extra bonus, showing the same math for a GPU; let's use a 5090 as an example: it has a 512-bit bus with 28Gbps modules, so we have 512 / 8 * 28 = 1792GB/s, which perfectly matches its announced bandwidth.
Why not do an Apple chip as well? My M3 Max has 24x 16-bit controllers, so a 384-bit bus (same bus size as a 3090/4090), using LPDDR5-6400, giving us 384/8*6400 = 307GB/s.
Strix Halo, since I mentioned it before, has a 256-bit bus (what we would call "quad-channel") with LPDDR5X-8000, giving us 256/8*8000 = 256GB/s of bandwidth.

Honestly, I really despise the "channel" terminology for desktops since it leaves way too much room for ambiguity; I'd rather just use the raw bus size. And I'm not the only one who thinks that:

https://x.com/IanCutress/status/1848283700317831308
For what it's worth, in that screenshot of ours with a recent CPU-Z version, it now shows "4x32-bit", which is the number of sub-channels x the bus size for each controller. A DDR4 platform should show up as "2x64-bit" (not sure, since I don't use Windows to try it out), which in both cases leads to a total of 128-bit.
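If it helps, here's the same formula as a tiny Python sketch, re-running the example configs from this post (the names and figures are just the ones quoted above):

```python
def bandwidth_gbs(bus_bits: int, effective_mts: int) -> float:
    """Theoretical peak bandwidth in GB/s:
    bus width in bytes x effective transfer rate (the 'MHz' sticks are sold at)."""
    return bus_bits / 8 * effective_mts / 1000

print(bandwidth_gbs(128, 3600))   # dual-channel DDR4-3600         -> 57.6
print(bandwidth_gbs(128, 6000))   # dual-channel DDR5-6000         -> 96.0
print(bandwidth_gbs(256, 3466))   # quad-channel DDR4-3466 (1950X) -> ~110.9
print(bandwidth_gbs(512, 28000))  # 5090, 28Gbps GDDR7             -> 1792.0
print(bandwidth_gbs(384, 6400))   # M3 Max, LPDDR5-6400            -> ~307.2
print(bandwidth_gbs(256, 8000))   # Strix Halo, LPDDR5X-8000       -> 256.0
```

Swap in the DDR5 sub-channel view (4x 32-bit instead of 2x 64-bit) and nothing changes, because the total bus width is identical.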
 
I don't use my PC for hours with 50% or more of the RAM in use and the CPU at 50-100% (or CPU at 75-100% and RAM at 50%) just to see if it's stable.

In normal life I don't push my PC that hard, to where it might be unstable at 75-100% CPU usage and 50-75% RAM usage after I have used it for 6-8 hours.

I don't test my car (I don't have one) for hours at its top speed to see if it's unstable at max power in a worst-case scenario.

What I do is use IBT (Intel Burn Test), OCCT, AIDA64, and PassMark PerformanceTest to check stability; IBT and OCCT are very good and fast at showing if my CPU or RAM is unstable.

What I do is use my PC: watch TV and videos, listen to music, Facebook, Instagram, Twitter, play games... you know, use it normally like many people do.

I have tested it; I know my CPU is stable. I don't stress test or run my RAM and CPU at maximum every day for 8 hours.

If I had a car I would test it before I bought it and then use it for normal tasks; if I first notice something after half a year, I'll deal with it at that time.

It's not a server that runs 24/7 and has to be 1000% stable; it's a PC with a little more memory than most PCs and 2 monitors.
 
No, it's not. A single stick of DDR4 is 64-bit, and so is DDR5. The increased performance comes from the higher clocks. [...]
Hey!!!

Can you give the screenshots using Intel Memory Latency Checker instead of AIDA64, please?

 
DDR5's smaller channels don't change the above convention much: a single stick is still 64-bit, two of those make for 128-bit, and the relationship with bandwidth is unchanged.
It does change how the memory controller accesses those two separate channels on a single stick, of course. Power-down management of the stick plays into that as well.
 
testing for stability involves running tests for hours, in most cases with 16GB+ even a whole day, to be sure you don't have any intermittent errors.
testing for less is not enough.

Might as well just use the computer then, IMO. In my experience memory problems/instability either show themselves right away, or after months as the file system slowly gets corrupted, and stress testing might not catch the latter kind of instability.
 
Hey!!!

Can you give the screenshots using Intel Memory Latency Checker instead of AIDA64, please?

Those screenshots aren't mine, I just grabbed those from google images haha
mlc had some funky stuff with hugepages from what I remember, so I never got it to run properly on my desktop.
It does change how the memory controller accesses those two separate channels for the single stick of course. Power down management of stick plays into that as well.
That may improve latencies and may allow for some extra interleaving, but those won't have any impact on the bandwidth anyway.
 
You could be running a stress/stability test or IBT for hours: fine, nothing.

As soon as you start gaming, boom, instability.

You don't just run your car at its rated top speed for half an hour to an hour.

Use the brakes, go fast through a corner, let the engine use its power when your car is close to its max weight.

Use it the way it's meant to be used and the way you wanna use it; if it's not stable, deal with it. Don't test your car like it's a race car, or your PC as if it has to be used 24/7 close to its limit for many days before you reboot or turn it off.
 
@gasolin
how you use the pc has ZERO to do with it being stable or not. ram testing doesn't use or need 100% cpu usage (which neither TM5 nor HCI do), and proper testing only needs to be done ONCE.
and it doesn't have to be a server for errors to be able to affect it.

just because it's stable for 6h doesn't mean it's stable for 7h, and that's what proper mem testing tools are doing: looking for errors that don't show up right away.
ignoring your car doesn't change a 0 to a 1 because it's running at full speed, so not sure how that's a decent comparison.

calling it stable without a proper (read: longer-running) prog doesn't make it so.
but if testing ram properly (not cpu, not max perf, not maximum load) is too much for you, that's fine.

(and short of your car having only 50HP, i have yet to see anyone able to do full throttle for a longer period, even in germany with many roads with no speed limit,
as you'll either hit traffic/temporary speed limits, or areas where you're forced to lift your foot, and never really be able to put 100% load on the engine)

@Frick
there are enough times where normal use will not show anything, especially when ppl aren't leaving it running continuously, but that doesn't mean you don't have any issues.
and sure, you might not find every single possible error, but it's about excluding the possibility to a very high degree.
but when i see ppl spending money and time to tweak stuff (past stock settings), and then don't care to spend the time to test it properly, why bother with anything then.

the easiest example:
i could only tell my x570 GB board was the cause of issues after testing (ram) with jedec settings and getting errors above 800% of hci (past 6h of testing),
because i knew the ram (bdie) was fine doing 3600c14 on another board, passing 1600% of testing.
so if i had stopped at 6-8h, i would have never found the problem, nor had a baseline to compare.
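for anyone curious what tools like TM5/HCI are doing conceptually, here's a toy write-then-verify sketch in Python; purely illustrative (real testers rotate far nastier patterns, cover much more of the address space, and run for hours):

```python
import array
import random

def toy_ram_test(mib: int = 64, passes: int = 4) -> int:
    """Toy illustration of a write-then-verify memory test pass.
    Real testers (TM5, HCI, memtest) use targeted stress patterns and
    long runtimes; this only sketches the basic idea."""
    words = mib * 1024 * 1024 // 8
    buf = array.array("Q", bytes(words * 8))  # zero-filled 64-bit words
    errors = 0
    for p in range(passes):
        rng = random.Random(p)  # reproducible pattern per pass
        pattern = [rng.getrandbits(64) for _ in range(words)]
        for i, v in enumerate(pattern):
            buf[i] = v          # write the pattern out
        for i, v in enumerate(pattern):
            if buf[i] != v:     # read back and compare
                errors += 1
    return errors

print("errors:", toy_ram_test())
```

note it barely loads the cpu; it's the repeated write/read/compare over a long time that catches intermittent errors.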
 
No, it's not. A single stick of DDR4 is 64-bit, and so is DDR5. The increased performance comes from the higher clocks. [...]

I guess you can only lead the horse to water.

IR should know better (he used a link in 2022), but if you want, talk to the RAM manufacturers about what I am saying. IDGAF what Intel and AMD say about it.

From RAMBUS!
[attachment: Rambus DDR5 sub-channel diagram]
 
Those screenshots aren't mine, I just grabbed those from google images haha
mlc had some funky stuff with hugepages from what I remember, so I never got it to run properly on my desktop.

That may improve latencies and may allow for some extra interleaving, but those won't have any impact on the bandwidth anyway.
OK, thanks for that. Wanted to confirm. While AIDA64 would say 70ns latency, IMLC would say closer to 90ns. AIDA64 seems to test with a really small data set. Do you know if you can increase it?
 
IBT at maximum also stress/stability tests the CPU.
 
One thing you seem to be missing
If you take a look at the QVL of most mobos, high density configs are not listed at all
:( No I am not missing anything.

What you seem to be missing is all the things I keep saying that you leave out when you say I'm missing something. :(

If "high density configs" are not listed on the QVL, then don't buy it!!!! Buy listed RAM (or at least RAM with the same specs as listed RAM). I repeatedly have said to buy RAM the motherboard makers say is compatible.

As I also repeatedly said, leave the defaults alone if you want the greatest chance of no problems.

It seems you and others think that if one buys unlisted RAM, it should automatically still work with no problems. Why would you assume that? And beyond that, you seem to feel they should be able to change default settings in the BIOS to settings the RAM makers do NOT list as supported, and once again it should automatically still work with no problems. Why would you assume that?

And the sad part there is, when unsupported parts are used in an unsupported operating environment and it fails to work like the user wanted, you and others seem to think it's not the user's fault! :kookoo:

In what industry are unauthorized user-modifications that are NOT sanctioned by the manufacturers, modifications that don't work as the user wants, not the user's fault? Why would you expect computer hardware to be any different?

Did the motherboard makers promise any RAM that physically fits will work? No.

Did the RAM maker promise if it fits the slot it will work? No.

Did either the RAM maker or the motherboard maker promise any tweaks changing the defaults in the BIOS will still work? No.

Yet it seems to some users here, none of that matters. It should still work, and it's the hardware maker's fault if it doesn't. :kookoo:

I will say it again - please take my entire comment as a whole and not portions out of context. If the motherboard supports 4 sticks, and you buy 4 sticks listed on the QVL, and you don't dink with the default settings, there is no reason to suspect the 4 sticks will not work perfectly, right out of the box, starting with the first boot.
 
I guess you can only lead the horse to water.

IR should know better (he used a link in 2022), but if you want, talk to the ram manufacturers about what I am saying. IDGAF about what Intel and AMD say about it.

From RAMBUS!
View attachment 380252
I mean, they're saying the exact same thing I said:
While the data width is the same (64-bits total) having two smaller independent channels improves memory access efficiency. So not only do you get the benefit of the speed bump with DDR5, the benefit of that higher MT/s is amplified by greater efficiency.
Bus size is the same, and the bandwidth increase only comes from the speed bump, it has no relation with the idea of sub-channels.
The increased efficiency improves on things like latency, interleaving and power management (like @biffzinker mentioned), but has no correlation with actual bandwidth.

If "high density configs" are not listed on the QVL, then don't buy it!!!! Buy listed RAM (or at least RAM with the same specs as listed RAM). I repeatedly have said to buy RAM the motherboard makers say is compatible.
You did not mention that at all. You just said "compatible RAM", which has tons of different meanings. Which also leads us to...
As I also repeatedly said, leave the defaults alone if you want the greatest chance of no problems.

It seems you and others seem to think if one buys unlisted RAM, it should automatically still work with no problems. Why would you assume that? And then beyond that, you seem to feel they should be able to change default settings in the BIOS to settings the RAM makers do NOT list as supported, and once again it should automatically still work with no problems. Why would you assume that?
You made no mention of what the "defaults" are. If by defaults you mean no XMP/EXPO, then sure, run your DDR5 at 3600MHz or your DDR4 sticks at 2133MHz; it's going to work flawlessly!
Otherwise, if you do mean stuff like XMP/EXPO, that already counts as overclocking (even if it's a 1-button press), but it is required to achieve the 6000MHz or whatever the advertised frequency of your sticks is.
So, which of those cases are you talking about?
I will say it again - please take my entire comment as a whole and not portions out of context. If the motherboard supports 4 sticks, and you buy 4 sticks listed on the QVL, and you don't dink with the default settings, there is no reason to suspect the 4 sticks will not work perfectly, right out of the box, starting with the first boot.
You had not mentioned the QVL before, and you still don't make the above ambiguity clear :)
 
I mean, they're saying the exact same thing I said: [...]
And you read right past where they say dual channel on a single stick, but wtf do I know... lol
 
You did not mention that at all. You just said "compatible RAM", which has tons of different meanings.
Oh, bullfeathers! Come on, dude. How is a user supposed to determine compatibility otherwise? And do you not realize that not all motherboard makers use the term "QVL"? Do you see "QVL" on this Gigabyte page? How about this MSI page?

And what is the QVL but a list of "compatible" devices? :kookoo:

It still does not change the fact you took only a portion of my comment out of context which then changed the meaning of what I said.
 