There's nothing to "believe". I literally just tested it:
1. Picked a game at random, Mark of the Ninja (and it's the GOG offline installer version, so there's no client running in the background doing DRM checks),
2. Started it from a 5,400rpm HDD, then a SATA SSD, then a RAMDisk (with a cold boot in-between each to clear the Windows cache), and
3. Recorded "time from start click until reaching the main menu" (crude stopwatch sketch below):
A. (5,400rpm HDD) = 12.1s
B. (MX500 SSD) = 3.9s
C. (RAMDisk) = 3.8s
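(If anyone wants to reproduce this, the "stopwatch" can be as dumb as the Python sketch below - the exe path is hypothetical, point it at your own install, and you press Enter the moment the main menu appears. That's all the precision this needs.)

import subprocess
import time

GAME_EXE = r"C:\Games\Mark of the Ninja\ninja.exe"  # hypothetical path - use your own install

start = time.perf_counter()                 # timer starts at the "start click"
subprocess.Popen([GAME_EXE])                # fire-and-forget launch
input("Press Enter when the main menu is visible... ")
print(f"Load time: {time.perf_counter() - start:.1f}s")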
^ And that's with the entire game pre-cached (a 100% perfect cache-prediction hit rate for a game that fits entirely into RAM) vs a slow (by modern standards) SATA SSD. For larger games that won't fit into a RAMDisk, or when you're playing something the algorithm hasn't cached, that already tiny 3.9s vs 3.8s gap shrinks even further, since it'll be loading directly from the SSD anyway. As I said, regardless of the
Quattuordecillion-Yottabytes-per-Femtosecond CrystalDiskMark sequential marketing screenshots, real-world game load times = diminishing returns...
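Put the same numbers through some quick back-of-the-envelope maths (Python just for convenience):

hdd, ssd, ram = 12.1, 3.9, 3.8   # seconds, from the test above
print(f"HDD -> SSD saves {hdd - ssd:.1f}s ({(hdd - ssd) / hdd:.0%} faster)")   # 8.2s, 68% faster
print(f"SSD -> RAM saves {ssd - ram:.1f}s ({(ssd - ram) / ssd:.0%} faster)")   # 0.1s, 3% faster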
The point being, unless you "play" CrystalDiskMark all day long, you're not actually testing anything of real-world substance beyond how CrystalDiskMark specifically can saturate multi-GB/s loads in a way normal games / applications don't. (Hint: if you're looking at this software to reduce game load times, have you tried testing some actual games?)...