How are next gen consoles supposed to utilize their SSDs anyway? I've seen a lot of talk from Sony and MS and nothing yet that showcases their claims. Hitman 3 was just released, a game that used to take quite a while to load a level, and on a SATA3 SSD on PC it loads about as fast as it does on PS5/Xbox Series X.
I'm starting to believe all of those were just empty promises. What really matters with SSDs is latency, which remains roughly the same whether the drive runs at 1 GB/s or 10 GB/s.
It's both the access latency and how well the drive's controller holds up as capacity fills versus when it's empty or near empty, along with how good it is at random I/O. Disk performance impacts micro stutter and FPS averages; it's easier to pinpoint comparing a HDD to the fastest NVMe than comparing HDD to SSD or SSD to NVMe, because the disparity gap widens the most with access latency, which is the biggest performance culprit, but also for other reasons like the controller's random I/O and overall sequential bandwidth, the latter of which matters the least for streaming in-game data into memory. The bulk of micro stutter in games that don't preload data is actually from low-queue-depth random I/O tied to access latency, from what I've encountered over the years. Comparing NVMe 3.0 to NVMe 4.0 on an x4 device is really difficult to pinpoint for in-game performance, because the performance disparity between the two isn't that large, especially in the areas that matter more, which are mostly random I/O and access latency.
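If you want to eyeball that yourself, here's a rough Python sketch that times 4 KiB sequential vs random reads at queue depth 1 against a scratch file (the path is hypothetical, and the OS page cache will mask the drive's true latency unless you drop caches or use direct I/O, which is why real tools like fio exist; treat it as illustrative only):

```python
import os
import time
import random

PATH = "scratch.bin"         # hypothetical scratch file on the drive under test
SIZE = 64 * 1024 * 1024      # 64 MiB test file
BLOCK = 4096                 # 4 KiB blocks, queue depth 1

# Create the test file once
if not os.path.exists(PATH) or os.path.getsize(PATH) != SIZE:
    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))

def timed_reads(offsets):
    """Read one BLOCK at each offset, unbuffered, and return elapsed seconds."""
    with open(PATH, "rb", buffering=0) as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        return time.perf_counter() - start

n = 2048
seq = [i * BLOCK for i in range(n)]
rnd = [i * BLOCK for i in random.sample(range(SIZE // BLOCK), n)]

t_seq = timed_reads(seq)
t_rnd = timed_reads(rnd)
print(f"sequential 4K: {t_seq:.4f}s, random 4K: {t_rnd:.4f}s")
```

On a HDD the random pass is dramatically slower; on a good NVMe drive the two converge, which is exactly the access-latency story above.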
Having the right hardware and software helps as well if testing and comparison is the objective. If you want to see the difference in a more glaring way, conventional VSYNC rather than adaptive VSYNC, G-SYNC, or FreeSync is most ideal, at the highest possible refresh rate and frame rate, in a game that is micro-stutter sensitive. Due to the amount of streamed data combined with the higher hardware demands, you'll notice the visual impact the most in titles that stream lots of data from storage into memory rather than preloading most of it ahead of time, which minimizes the access-latency and random-I/O frame-render issues that are queue-depth sensitive based on scene complexity and what's already in memory versus what needs to be transferred into it.
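For spotting those streaming hiccups in a capture rather than by eye, a simple approach is to flag frames that take far longer than the typical frame. A minimal sketch, assuming a hypothetical frame-time log in milliseconds (e.g. exported from PresentMon or CapFrameX) and an arbitrary 2x-median spike threshold:

```python
from statistics import median

# Hypothetical frame-time capture in milliseconds (mostly ~60 FPS with two spikes)
frame_times_ms = [16.7, 16.5, 16.8, 41.2, 16.6, 16.7, 35.9, 16.4, 16.7, 16.6]

med = median(frame_times_ms)
# Flag any frame taking more than 2x the median as a stutter spike
spikes = [(i, t) for i, t in enumerate(frame_times_ms) if t > 2 * med]
print(f"median {med:.1f} ms, spikes: {spikes}")
# → median 16.7 ms, spikes: [(3, 41.2), (6, 35.9)]
```

Spikes that cluster around level streaming or fast traversal, and shrink when you move the game to faster storage, are the ones worth attributing to the drive.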
Testing methodologies and margin of error make testing and comparing solid state devices against each other for in-game performance extremely difficult in practice. The smaller the disparity gap between the devices, the harder the comparison, whereas something wide like HDD versus NVMe is going to be readily obvious. It's not simply the storage adding to the complexity of pinpointing transparent differences, though. Lower the refresh rate and FPS averages and you'll have a harder time spotting things than you would otherwise; likewise, if you obscure and minimize readily obvious slowdowns with forms of adaptive sync rather than the more jarring traditional VSYNC, you'll be more hard pressed to spot them visually. Margin of error makes it a challenge as well, especially since the other factors that contribute to micro stutter contend and overlap with storage performance pretty heavily. As storage, display tech, and other hardware improve, it might become easier to spot and pinpoint, though. All things considered, my conclusion is that it might be best to slow down the rest of the system outside of the storage to get a better picture, with less margin-of-error fluctuation coming into play and impacting results. In the big picture, storage doesn't have an enormous impact on FPS averages, 1% lows, and 0.1% lows.
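And those 1%/0.1% lows are easy to compute yourself from a frame-time capture if you want to quantify it instead of eyeballing. A sketch using one common definition (average of the worst N% of frames, converted to FPS; the frame-time data here is made up to mimic a mostly smooth run with a handful of storage-related spikes):

```python
def percentile_low(frame_times_ms, pct):
    """FPS equivalent of the average of the worst pct% of frames."""
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, int(len(worst) * pct / 100))
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# Hypothetical capture: 990 frames at 16.7 ms plus 10 spikes at 40 ms
times = [16.7] * 990 + [40.0] * 10

avg_fps = 1000.0 * len(times) / sum(times)
print(f"avg FPS:  {avg_fps:.1f}")                      # → avg FPS:  59.1
print(f"1% low:   {percentile_low(times, 1):.1f}")     # → 1% low:   25.0
print(f"0.1% low: {percentile_low(times, 0.1):.1f}")   # → 0.1% low: 25.0
```

Note how ten bad frames out of a thousand barely dent the average but crater the lows, which is why the lows are where storage-induced stutter shows up, when it shows up at all.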