On SSD's the benefit is that valuable space is not wasted. There is also a wear-leveling benefit as Windows is forced to use system RAM instead of swapping out data to the pagefile as frequently. On HDD's, the main benefit is fragmentation. By forcing a fixed size, the pagefile doesn't fragment. While HDD's don't suffer from wear-leveling, forcing Windows to keep data in RAM often saves on pagefile access times to and from the drive.
None of this is accurate.
Once again, the PF is NOT a set-and-forget setting. It is dynamic! Why? Because people use their computers for different tasks. A single-purpose computer (like an ATM or a point-of-sale register) is a suitable candidate for a fixed size PF. Computers that are used for a variety of tasks are best served by a dynamic PF.
Wouldn't it make sense for Microsoft to just use a set-and-forget setting if that were best? It would mean much less programming for them. That would cost them less, and since so many think they are nothing but greedy money lovers, that would be best for MS too. But no! MS really does want our systems to run optimally, and through YEARS of experience and data analysis, they learned a dynamic PF works best for the vast majority of users. That's why they made it that way in Windows 7 and, more importantly, that's why it is still that way in W10/11.
No, it's not - not unless the SSD (or hard drive) is already critically low on disk space. This is just another example of someone trying to use an exception to render the norm and the main point moot!
Note that SSD wear-leveling (along with TRIM) is another reason "SSDs are ideal for Page Files" (see below).
If free disk space is that low (regardless of the drive type), the USER has failed to make sure the system has all it needs with plenty to spare.
The excuse about fragmentation is simply nonsense! Come on, Lex! 25 years ago that might have mattered. But today? No. Why? Because hard drives today are typically HUGE, but more importantly, Windows keeps our hard drives defragmented automatically! That's been the default since Windows 7 came out in 2009!! Therefore, as long as the user has maintained plenty of free disk space, fragmentation will never get to the point where it becomes a problem.
Unless, of course, someone once again thinks they are smarter than all the computer scientists and PhDs on Microsoft's development teams, with their exabytes and decades of empirical data, and has foolishly disabled automatic defragging.
SSDs are ideally suited for Page Files! See this post.
Lex suggests that forcing Windows to use system RAM instead of swapping out to the PF is a good thing. It is NOT. That simply forces Windows to keep "low-priority" data in RAM, and that is NOT the optimal use of system resources. You want your highest-priority data to go into the fastest memory - that's system RAM, not the PF.
forcing Windows to keep data in RAM often saves on pagefile access times to and from the drive.
Huh? This makes no sense at all. Forcing Windows to keep data in RAM does cut the PF out of the equation. But it makes no sense to do that, because more "low-priority" data is now sitting in RAM. That forces Windows to read and write more "high-priority" data back to its normal storage locations on the drive, instead of temporarily stashing it in the PF. That adds even more to data access times, as it forces the R/W heads to run back and forth even more, and it puts even more wear on the drive.
Hell, if someone bothered to change the pagefile settings, I'd be willing to bet that same person knows if their computer has enough RAM/pagefile to handle a piece of software or not, and it's likely they won't bother to run that software because they know it won't run well or at all.
It would be so nice if this were true, but it is totally not - for several reasons! And sadly, most often the person changing those settings is doing so simply because someone on a forum, who is NOT a virtual memory expert, told them to - NOT because they actually did a proper analysis to determine what is best.
For example, you mention the "commit limit" above, but not the "commit charge" or, specifically, the "peak commit charge", or how to determine it. Yet that is a critical bit of information needed to properly set the PF size. No one here has mentioned that.
Nor has there been any mention of which performance counters to watch, or how - another key factor in determining PF size.
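For anyone who actually wants to look at those numbers, here is a minimal sketch (mine, not from Microsoft's article or anyone in this thread) that reads the current commit charge, the peak commit charge, and the commit limit through the documented Win32 GetPerformanceInfo call. It only shows where the numbers live - the real analysis is watching them, along with Performance Monitor counters like "\Memory\Committed Bytes" and "\Paging File(_Total)\% Usage", over days of your heaviest workloads.

```python
# Minimal sketch: read commit charge / peak commit / commit limit on Windows
# via psapi's GetPerformanceInfo. Values come back in pages, so multiply by
# the reported page size to get bytes. Windows-only, illustration only.
import ctypes
from ctypes import wintypes

class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("CommitTotal", ctypes.c_size_t),     # current commit charge, in pages
        ("CommitLimit", ctypes.c_size_t),     # RAM + page file, in pages
        ("CommitPeak", ctypes.c_size_t),      # peak commit charge since boot, in pages
        ("PhysicalTotal", ctypes.c_size_t),
        ("PhysicalAvailable", ctypes.c_size_t),
        ("SystemCache", ctypes.c_size_t),
        ("KernelTotal", ctypes.c_size_t),
        ("KernelPaged", ctypes.c_size_t),
        ("KernelNonpaged", ctypes.c_size_t),
        ("PageSize", ctypes.c_size_t),
        ("HandleCount", wintypes.DWORD),
        ("ProcessCount", wintypes.DWORD),
        ("ThreadCount", wintypes.DWORD),
    ]

pi = PERFORMANCE_INFORMATION()
pi.cb = ctypes.sizeof(pi)
if not ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb):
    raise ctypes.WinError()

to_gib = lambda pages: pages * pi.PageSize / 2**30
print(f"Commit charge:      {to_gib(pi.CommitTotal):6.2f} GiB")
print(f"Peak commit charge: {to_gib(pi.CommitPeak):6.2f} GiB")
print(f"Commit limit:       {to_gib(pi.CommitLimit):6.2f} GiB")
```

A single snapshot like this proves nothing by itself - the peak commit value only means something after the system has actually been pushed through its most demanding work.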
Did you notice the first paragraph of your second "nice article" from Microsoft? It clearly says, "This means that page file sizing is also unique to each system and cannot be generalized."
Yet so often we see those pretending to be experts just throw out arbitrary numbers, or "1.5 x RAM", or whatever. They never show readers how to properly analyze their resource utilization so they can determine the correct PF size for their unique system.
The page file size is NOT a set-and-forget setting. If you want a fixed size PF, fine. But do it right! You must analyze the utilization of your resources - commit charge, page faults, performance counters and more - while you perform your most demanding tasks. Then determine the ideal maximum and minimum settings. If you don't know how to do that, don't dink with the defaults!
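Just to show the kind of arithmetic involved - and ONLY as an illustration, with made-up numbers and a made-up headroom factor, not a formula to copy - here is roughly what turning a measured peak commit charge into a candidate fixed PF size might look like:

```python
# Purely illustrative sizing sketch - NOT a recommendation. The 1.25 headroom
# factor and the 1 GiB floor are arbitrary assumptions for the example, and it
# ignores crash-dump requirements, page-fault behavior, and workload changes.
def candidate_pagefile_gib(ram_gib: float, peak_commit_gib: float,
                           headroom: float = 1.25) -> float:
    """Candidate fixed page file size in GiB, based on measured peak commit."""
    shortfall = max(peak_commit_gib * headroom - ram_gib, 0.0)
    return max(shortfall, 1.0)   # keep at least a small page file for the OS

# Example: 16 GiB of RAM, measured peak commit of 20 GiB during the heaviest workload
print(f"{candidate_pagefile_gib(16, 20):.1f} GiB")   # -> 9.0 GiB
```

Even that toy sketch leaves out half of what matters, which is exactly why the measuring has to be done on your own machine, under your own workloads.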
Every computer is different. So you cannot go by the settings someone else uses. And then remember, any time you upgrade your hardware, make major changes to the OS or other major software, or change your computing habits, you need to start all over and analyze your resource utilization again to determine if you need to change those settings.
Or, you can just let Windows manage it. Contrary to what some here want everyone else to believe, Windows, where they employ teams of genuine virtual memory experts, does this very well.