Latency is just one factor. Specifically, the CL is how long (in clock cycles, not time) it takes for the first word of a read to be available on the memory's output pins. The first number (3200, 4400, 4800, etc.) is the transfer rate, i.e. how fast the data moves after that.
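As a rough back-of-the-envelope sketch (CL only; real access latency also involves tRCD, command rate and controller overhead), you can convert CL from cycles into actual time like this:

```python
# Sketch: CL-only first-word latency in nanoseconds.
# DDR clock (MHz) = MT/s / 2, and one cycle at f MHz lasts 1000/f ns,
# so latency_ns = CL * 2000 / (MT/s). Ignores tRCD, command rate, controller overhead.
def cas_latency_ns(transfer_rate_mts: float, cl_cycles: int) -> float:
    return cl_cycles * 2000 / transfer_rate_mts

print(cas_latency_ns(3200, 16))  # a typical DDR4-3200 c16 kit -> 10.0 ns
```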
I think that in general, for 'normal' applications, high MT/s (like 5200) is better, while for games lower latency is better. There are plenty of exceptions, especially once you get into the 'scientific' side of applications, but for typical user apps I think high MT/s helps.
So just to note, here at TPU they used DDR5-6000 C36 in Gear 2 (1:2 ratio). That is some freaky fast DDR5 for now, probably more reflective of what will be widely available in 1H 2022. The DDR4 used on the older platforms is quite good too, though: DDR4-3600 C16-20-20-34 1T in Gear 1, with a 1:1 IF ratio for AMD, is no slouch. I think these settings put the older platforms pretty close to their best footing, or at least the best that 90% of folks can actually get to run properly.
Uhm ... what, exactly, in my post gave you the impression that you needed to (rather poorly, IMO) explain the difference between RAM transfer rates and timings to me? And even if this was necessary (which it really wasn't), how does this change anything I said?
Your assumption is also wrong: most consumer applications are more sensitive to memory latency than to bandwidth, though there are obviously exceptions. That's why something like 3200c12 can perform as well as much higher clocked memory with worse latencies. Games are more latency sensitive than most applications, but there are very few realistic consumer applications where memory bandwidth matters more than latency. (iGPU gaming is the one key use case where bandwidth is king outside of server applications, which generally love bandwidth; hence the bandwidth focus of DDR5, which is largely designed to align with what server and datacenter owners want.)
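To put illustrative numbers on the latency point (these are example speed/CL pairings, not specific kits):

```python
# Illustrative only: CL-only first-word latency in ns = CL * 2000 / MT/s
print(12 * 2000 / 3200)  # DDR4-3200 c12 -> 7.5 ns
print(18 * 2000 / 4000)  # made-up higher-clocked DDR4-4000 c18 -> 9.0 ns
```

Despite the extra bandwidth, the faster-clocked kit with looser timings ends up with a higher first-word latency in this made-up pairing.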
And while DDR5-6000 C36 might be fast for now (it's 6 clock cycles faster than the JEDEC 6000A spec, though "freaky fast" is hardly suitable IMO), it is slow compared to the expected speeds of DDR5 in the coming years. That's why I was talking about mature vs. immature tech.
DDR5 JEDEC specifications currently go up to DDR5-6400, with standards for 8400 in the works. For reference, the absolute highest DDR4 JEDEC specification is 3200. That means we haven't even seen the tip of the iceberg of DDR5 speeds yet. So, again, even DDR5-6000c36 is a poor comparison to something like DDR4-3600c16: one is below even the highest current JEDEC spec (let alone future ones), while the other is faster than the highest JEDEC spec several years into its life cycle.
The comment you responded to was mainly pointing out that the Computerbase.de comparison you were talking about is deeply flawed, as it pits a highly tuned DDR4 kit against a near-base-spec DDR5 kit. The DDR4 equivalent of DDR5-4400 would be something like DDR4-2133 or 2400. Also, the Computerbase DDR5-4400 timings are JEDEC 4400A timings, at c32. That is a theoretical minimum first-word latency of 14.55 ns, compared to 7.37 ns for DDR4-3800c14. You see how that comparison is extremely skewed? Expecting anything but the DDR4 kits to win in those scenarios would be crazy. So, as I said, mature, low latency, high speed DDR4 will obviously be faster, especially in (mostly) latency-sensitive consumer workloads. What more nuanced reviews show, such as Anandtech's more equal comparison (both at JEDEC speeds), is that the expected latency disadvantage of DDR5 is much smaller than has been speculated.
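For reference, here's a quick sketch of how those first-word latency figures work out for the kits mentioned in this thread (CL-only approximation, i.e. CL * 2000 / MT/s, so it's a lower bound on access latency rather than the whole story):

```python
# CL-only first-word latency in ns: CL * 2000 / MT/s (DDR clock is half the transfer rate).
kits = {
    "DDR5-4400 c32 (Computerbase, JEDEC 4400A)": (4400, 32),
    "DDR4-3800 c14 (Computerbase, tuned)": (3800, 14),
    "DDR5-6000 c36 (TPU test setup)": (6000, 36),
    "DDR4-3600 c16 (TPU test setup)": (3600, 16),
}
for name, (mts, cl) in kits.items():
    print(f"{name}: {cl * 2000 / mts:.2f} ns")
# -> 14.55, 7.37, 12.00, 8.89 ns respectively
```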