You still seem to be missing the important point: the differences in worst-case latency are measured in single-digit nanoseconds.
Using the graphs you posted from the site above:
View attachment 374945
Worst-case scenario for the 285K? 85.15 ns
View attachment 374946
Worst-case scenario for the 9900X? 80.65 ns
That is a difference of
a whopping, earth-shattering 4.5 ns!...
Good grief, whatever is Intel going to do?!?
That's 4.5 billionths of a second, for an event that happens infrequently.
You, @trparky, and the site you linked to are making a big deal out of what is effectively nothing-sauce. Do you want to continue with this silly shtick?
I thought we were going to leave it at that, but since you insist on spreading misinformation and moving the goalposts...
Yes, your cherry-picking of results makes it look like a nothing-burger. But now let's get down to some facts:
Ryzen has been widely plagued by this cross-CCD issue, especially in games, in both the chiplet designs with multiple CCDs and the monolithic designs with different CCXes (which are still a thing in AMD's latest hybrid-core parts). This is a fact; otherwise, workarounds like core parking, special chipset drivers to paper over scheduler issues, and so on wouldn't exist. Their whole point is to keep tasks pinned to a single set of cores to avoid that latency hit, so those tasks consistently see a communication latency of 25~50 ns instead of 75~80 ns (see the pinning sketch below).
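For illustration, here's a minimal sketch of that kind of pinning using the Win32 affinity API. The topology is an assumption: a hypothetical two-CCD part with logical CPUs 0-11 (six cores plus SMT) on the first CCD; adjust the mask to your actual core layout.

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Assumed layout: logical CPUs 0-11 (6 cores + SMT) live on CCD0.
     * Restricting the process to that mask keeps all cross-core traffic
     * on one die, i.e. in the ~25-50 ns intra-CCD latency range. */
    DWORD_PTR ccd0_mask = 0x0FFF;

    if (!SetProcessAffinityMask(GetCurrentProcess(), ccd0_mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    puts("Pinned to CCD0; run the latency-sensitive work from here.");
    return 0;
}
```

This is the same effect core parking and those chipset drivers try to achieve automatically, just done by hand.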
Now for Intel's case: as you can see in the image you posted, ALL the P-cores have high latency when talking to one another, with the best case being the last two P-cores (7 & 8) talking to each other (a 57 ns penalty).
So an application that requires high performance and gets pinned to the 8 P-cores will see latencies of 57~87 ns across all of them, whereas on an AMD system you'd pin that same application to 8 cores of a single CCD, keeping the latency down to 25~50 ns. (Those numbers come from ping-pong probes like the one sketched below.)
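For context, figures like these come from core-to-core "ping-pong" measurements. Below is a hedged sketch of such a probe (plain C, Win32); the core indices are placeholders, and a real tool would add warmup, error handling, and statistics over every core pair.

```c
#include <stdint.h>
#include <stdio.h>
#include <windows.h>

#define ITERS 1000000L

static volatile LONG flag = 0; /* shared cache line the two cores bounce around */

static DWORD WINAPI ponger(LPVOID arg)
{
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << (int)(intptr_t)arg);
    for (long i = 0; i < ITERS; i++)
        while (InterlockedCompareExchange(&flag, 2, 1) != 1)
            ; /* wait for ping (1), answer with pong (2) */
    return 0;
}

int main(void)
{
    int core_a = 0, core_b = 8; /* placeholder pair; try P<->P vs. P<->E */
    LARGE_INTEGER freq, t0, t1;

    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << core_a);
    HANDLE t = CreateThread(NULL, 0, ponger, (LPVOID)(intptr_t)core_b, 0, NULL);

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    for (long i = 0; i < ITERS; i++) {
        InterlockedExchange(&flag, 1);                        /* ping */
        while (InterlockedCompareExchange(&flag, 0, 2) != 2)
            ;                                                 /* await pong */
    }
    QueryPerformanceCounter(&t1);
    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);

    double total_ns = (t1.QuadPart - t0.QuadPart) * 1e9 / (double)freq.QuadPart;
    printf("cores %d<->%d: ~%.1f ns per one-way hop\n",
           core_a, core_b, total_ns / ITERS / 2.0);
    return 0;
}
```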
Of course, for applications that are not sensitive to cross-core communication this is irrelevant, and those tasks are also often the ones that scale to many cores without issues (so the multi-CCD latency or that ring bus limitation stops being a problem). But other applications (like games, as said before) are really sensitive to it, and it does lead to worse performance.
Couple that with Windows' shitty scheduler, and it doesn't do any good.
One simple example of that was some applications performing better when pinned solely to the E-cores - which have lower latency when communicating among themselves, and also lower performance, because duh - instead of the P-cores. The launcher sketch below shows one way to reproduce that experiment.
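A hedged sketch, if anyone wants to try it: launch the target suspended, restrict it to an assumed E-core mask, then let it run. Both "game.exe" and the mask (bits 8-23 here) are placeholders; check your CPU's actual core numbering first.

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    char cmd[] = "game.exe"; /* placeholder target */

    /* Start suspended so the affinity applies before any thread runs. */
    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* Assumed E-core cluster mask (bits 8-23); adjust to your CPU. */
    SetProcessAffinityMask(pi.hProcess, (DWORD_PTR)0x00FFFF00);
    ResumeThread(pi.hThread);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```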
I don't get why you're fanboying so hard for a product. It has strong points, and also weak ones. The latency is an issue for some applications, like it or not.
And no, depending on your application, context switches and cross-core comms happen very often; you're just making wrong assumptions and misusing data to try to make a point.