
Windows 11 General Discussion

Unless Microsoft has changed the policy since the XP days, Windows will increase the page file size regardless of whether you set a fixed size for it or leave it fully automatic.
Never seen this happen. I'm inclined to think it doesn't.
And if you don't have a pagefile, prepare for potential data loss or Windows automatically closing stuff.
Also never seen this happen, ever. Just doesn't happen.
 
I am here to learn, not criticize; what is the advantage of a fixed size page file when most hard drives these days have a lot of free space?
 
Never seen this happen. I'm inclined to think it doesn't.

Also never seen this happen, ever. Just doesn't happen.
To be frank, it's... rare. I mean, it happens only when you run into a situation where your computer's RAM and pagefile aren't big enough. How often does that happen to begin with?

Hell, if someone bothered to change the pagefile settings, I'd be willing to bet that same person knows if their computer has enough RAM/pagefile to handle a piece of software or not, and it's likely they won't bother to run that software because they know it won't run well or at all.

However, I myself have stumbled upon this, first due to my old computers' lack of enough RAM (XP with 256 MB of RAM in early 2010s) and then because at the time I had no better idea than to disable the pagefile entirely when I also ran software that wouldn't fit in the system's RAM alone. So, I've seen both types of system notifications.

Granted, I didn't think opening twenty PDFs would be that much of a hassle in that system. Alas, I was proven wrong.

EDIT:
So I decided to look it up straight from the source, to ensure my memory ( :laugh: ) wasn't faulty.

I went to Microsoft Docs, and pulled these nice articles:

Introduction to page files
How to determine the appropriate page file size for 64-bit versions of Windows

Basically, a page file is *needed* by certain server applications because it increases reliability. It's especially important when running Hyper-V. I'm not sure if those server applications will refuse to run at all if there's no pagefile, though.

But for client Windows users, this is what matters:

When large physical memory is installed, a page file might not be required to support the system commit charge during peak usage.

However, the reason to configure the page file size has not changed. It has always been about supporting a system crash dump, if it is necessary, or extending the system commit limit, if it is necessary. For example, when a lot of physical memory is installed, a page file might not be required to back the system commit charge during peak usage. The available physical memory alone might be large enough to do this. However, a page file or a dedicated dump file might still be required to back a system crash dump.
For those not aware, the system commit limit is the total virtual memory the system can support (virtual memory being the sum of physical memory and page files).
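
If you want to see those two numbers on your own machine, here's a minimal sketch (my own, not from the Microsoft articles; Windows-only, obviously) that reads them through the Win32 GlobalMemoryStatusEx call. Despite the field name, ullTotalPageFile is the commit limit (RAM plus all page files), not the page file size alone:

Code:
import ctypes

# MEMORYSTATUSEX, as documented for GlobalMemoryStatusEx.
class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_uint32),
        ("dwMemoryLoad", ctypes.c_uint32),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),   # commit limit
        ("ullAvailPageFile", ctypes.c_ulonglong),   # limit minus current charge
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

gib = 1024 ** 3
limit = status.ullTotalPageFile
charge = limit - status.ullAvailPageFile
print(f"Commit limit:  {limit / gib:.1f} GiB")
print(f"Commit charge: {charge / gib:.1f} GiB ({charge / limit:.0%} of the limit)")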

If you somehow get very close to the limit, this happens:
[attachment: screenshot of the Windows low-memory warning shown when the commit limit is nearly exhausted]


Also, regarding how Windows handles page file sizes:
[attachment: screenshot of the Microsoft documentation on how Windows sizes the page file]

There's no mention of whether Windows will change the page file size if it had been set to a fixed size, but the documentation seems to imply Windows won't grow the PF in such cases.
 
I am here to learn, not criticize; what is the advantage of a fixed size page file when most hard drives these days have a lot of free space?
On SSDs, the benefit is that valuable space is not wasted. There is also a wear-leveling benefit, as Windows is forced to use system RAM instead of swapping data out to the pagefile as frequently. On HDDs, the main benefit is avoiding fragmentation: by forcing a fixed size, the pagefile doesn't fragment. And while HDDs don't suffer from flash wear, forcing Windows to keep data in RAM often saves on pagefile access times to and from the drive.
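
If you want to check how your own system is currently configured, the settings live in the registry. A quick sketch reading the usual PagingFiles value; the interpretation in the comments is my understanding of the common conventions, not official documentation:

Code:
import winreg

# Page file configuration is stored under this key as a REG_MULTI_SZ value.
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")

# Typical entries: "C:\pagefile.sys 16384 16384" is a fixed 16 GB file
# (path, initial size in MB, maximum size in MB), while "?:\pagefile.sys"
# or a "0 0" size pair generally means Windows manages it automatically.
for entry in paging_files:
    print(entry)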

To be frank, it's... rare. I mean, it happens only when you run into a situation where your computer's RAM and pagefile aren't big enough. How often does that happen to begin with?
As I've said, I've never seen an instance of it.
Hell, if someone bothered to change the pagefile settings, I'd be willing to bet that same person knows if their computer has enough RAM/pagefile to handle a piece of software or not, and it's likely they won't bother to run that software because they know it won't run well or at all.
Agreed.
However, I myself have stumbled upon this, first due to my old computers' lack of enough RAM (XP with 256 MB of RAM in early 2010s) and then because at the time I had no better idea than to disable the pagefile entirely when I also ran software that wouldn't fit in the system's RAM alone. So, I've seen both types of system notifications.
Ok, back then it could have happened more easily. I used to limit the swapfile to 3 or 4x the system RAM, which worked out well the vast majority of the time, but once system RAM sizes got above 3 GB, the chances of programs or the OS running out of memory space were very, very small.

Now, for the scope of working with Windows 11, setting the pagefile to a static size provides the aforementioned wear-leveling benefit, and many people will be installing 11 on an SSD. However, even if an HDD is used, the static pagefile benefits still apply, even with the improvements that are rumored to have been made to Windows' memory usage and management routines.
 
Good point: SSD
 
Did a restart and I'm back at 3.7 GB; maybe there is a memory leak some place?

Well, I checked my own system and it seems about right? I had a physical memory usage of around 3.5 GB with the system close to stock install conditions (only things running in the background being the Radeon driver software and the AnyDesk service).
[attachment: Task Manager screenshot of memory usage]
 
On SSDs, the benefit is that valuable space is not wasted. There is also a wear-leveling benefit, as Windows is forced to use system RAM instead of swapping data out to the pagefile as frequently. On HDDs, the main benefit is avoiding fragmentation: by forcing a fixed size, the pagefile doesn't fragment. And while HDDs don't suffer from flash wear, forcing Windows to keep data in RAM often saves on pagefile access times to and from the drive.
:( None of this is accurate.

Once again, the PF is NOT a set-and-forget setting. It is dynamic! Why? Because people use their computers for different tasks. A single-purpose computer (like an ATM or a point-of-sale register) is a suitable candidate for a fixed-size PF. Computers that are used for a variety of tasks are best suited by a dynamic PF.

Wouldn't it just make sense for Microsoft to use a set-and-forget setting if that was best? It would mean much less programming for them. That would mean it would cost them less, and since so many think they are nothing but greedy money lovers, that would be best for MS too. But no! MS really does want our systems to run optimally, and through YEARS of experience and data analysis, they learned a dynamic PF works best for the vast majority of users. That's why they made it that way in Windows 7 and, more importantly, that's why it is still that way in W10/11.

Good point: SSD
No it's not - not unless the SSD (or hard drive) is already critically low on disk space. This is just another example of someone trying to use an exception to render the norm and the main point moot! :( Note that SSD wear-leveling (along with TRIM) is another reason "SSDs are ideal for Page Files" (see below).

If free disk space is that low (regardless of the drive type), the USER has failed to make sure the system has all it needs with plenty to spare. The excuse about fragmentation is simply nonsense! Come on, Lex! 25 years ago that might have mattered. But today? Nonsense. Why? Because hard drives today are typically HUGE, but more importantly, Windows keeps our hard drives defragmented automatically! That's been the default since Windows 7 came out in 2009!! Therefore, as long as the user has maintained plenty of free disk space, fragmentation will never get to the point where it becomes a problem.

Unless, of course, someone once again thinks they are smarter than all the computer scientists and PhDs on the development teams and their exabytes and decades of empirical data at Microsoft, and they foolishly disabled automatic defragging. :rolleyes: :kookoo:

SSDs are ideally suited for Page Files! See this post.

Lex suggests forcing Windows to use system RAM instead of swapping out to the PF is a good thing. It is NOT. That simply forces Windows to keep "low priority" data in RAM. That is NOT the optimal use of system resources. You want your highest priority data to go into the fastest memory - that's system RAM, not the PF.

forcing Windows to keep data in RAM often saves on pagefile access times to and from the drive.
Huh? This makes no sense at all. Forcing Windows to keep data in RAM does cut the PF out of the equation. But it makes no sense to do that, because more "low-priority" data is now in RAM. That forces Windows to read and write more "high-priority" data back to its normal storage locations on the drive, instead of temporarily stuffing it into the PF. That adds even more to data access times, as it forces the R/W heads to run back and forth even more, and puts even more wear on the drive.

Hell, if someone bothered to change the pagefile settings, I'd be willing to bet that same person knows if their computer has enough RAM/pagefile to handle a piece of software or not, and it's likely they won't bother to run that software because they know it won't run well or at all.
It would be so nice if this were true, but it is totally not - for several reasons! And sadly, most often the person changing those settings is doing so simply because someone on a forum, who is NOT a virtual memory expert, told them to. NOT because they actually did a proper analysis to determine what is best.

For example, you mention the "commit limit" above, but not the "commit charge" or, more specifically, the "peak commit charge", or how to determine it. Yet that is a critical bit of information needed to properly set the PF size. No one here has mentioned that. :( Nor has there been any mention of how to read the relevant performance counters, another key factor in determining PF size.

Did you notice the first paragraph of your second "nice article" from Microsoft where it clearly says, "This means that page file sizing is also unique to each system and cannot be generalized."

Yet so often we see those pretending to be experts just throw out arbitrary numbers or 1.5 x RAM or whatever. They never show the readers how to properly analyze their resource utilizations so they can properly determine the correct PF size for that unique system.

The page file size is NOT a set-and-forget setting. If you want a fixed-size PF, fine. But do it right! You must analyze your resource utilization (commit charge, page faults, performance counters and more) while you perform your most demanding tasks. Then determine the ideal maximum and minimum settings. If you don't know how to do that, don't dink with the defaults!
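
To put "do it right" in concrete terms: the analysis can start as simply as sampling the commit charge while you run your heaviest workload and noting the peak. A rough sketch of the idea (mine, not anything Microsoft prescribes; the 1-second interval is arbitrary):

Code:
import ctypes
import time

# Same MEMORYSTATUSEX layout used with GlobalMemoryStatusEx earlier in the thread.
class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_uint32),
        ("dwMemoryLoad", ctypes.c_uint32),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

def commit_charge():
    s = MEMORYSTATUSEX()
    s.dwLength = ctypes.sizeof(s)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(s))
    return s.ullTotalPageFile - s.ullAvailPageFile  # limit minus headroom

peak = 0
print("Sampling commit charge; run your heaviest workload, then Ctrl+C to stop...")
try:
    while True:
        peak = max(peak, commit_charge())
        time.sleep(1)
except KeyboardInterrupt:
    print(f"\nPeak commit charge: {peak / 1024**3:.1f} GiB")

The peak you observe (plus a healthy margin) is the number that actually matters when sizing a fixed PF for YOUR workload, not some 1.5 x RAM rule of thumb.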

Every computer is different. So you cannot go by the settings someone else uses. And then remember, any time you upgrade your hardware, make major changes to the OS or other major software, or change your computing habits, you need to start all over and analyze your resource utilization again to determine if you need to change those settings.

Or, you can just let Windows manage it. Contrary to what some here want everyone else to believe, Windows, where they employ teams of genuine virtual memory experts, does this very well.
 
The whole Chia thing is that solid state drives don't like too many writes, so I thought Lex had a good point about paging on solid state drives.

I upped the RAM on my daughter's laptop to reduce paging on the solid state drive, which I replaced recently after it failed on me; I know, 'statistic of one'.
 
When I was on W7 I had no PF, and never had a single issue.
 
Or, you can just let Windows manage it. Contrary to what some here want everyone else to believe, Windows, where they employ teams of genuine virtual memory experts, does this very well.
Indeed. I personally stopped bothering about the page file and let Windows handle everything in that regard. At most I move it to some other drive, but that's about it.
 
Hi,
I don't do anything, I just use SSDs.
Warranty rules.
 
The whole Chia thing is that solid state drives don't like too many writes
You are right, Andy. But as noted a couple times now, too many writes was only a problem years ago with first generation SSDs. It is no longer a problem. Not unless, and even then only maybe, the SSD is being used in a very busy data-center.

Please read the link I provided above about SSDs being ideal for Page Files.
 
You are right, Andy. But as noted a couple times now, too many writes was only a problem years ago with first generation SSDs. It is no longer a problem. Not unless, and even then only maybe, the SSD is being used in a very busy data-center.

Please read the link I provided above about SSDs being ideal for Page Files.

Chia is not a typical critter, Bill. I am near certain, and I don't know much about Chia, but it was killing drives within a month to a few months due to its abuse of them. If I recall properly, manufacturers changed warranty policies because of Chia.
 
You are right, Andy. But as noted a couple times now, too many writes was only a problem years ago with first generation SSDs. It is no longer a problem. Not unless, and even then only maybe, the SSD is being used in a very busy data-center.

Please read the link I provided above about SSDs being ideal for Page Files.
Good to know... I just learned something useful.
 
I think security is a big issue in the modern world with countries preparing for cyber attacks in time of conflict.
 
I am wondering how much he got paid to write that article
The article is basically the same thing as the Insider blogpost, if you ask me. So I don't see anything noteworthy, other than reaching other audiences.
 
WiNPass11 Update! LINK

[attachment: WiNPass11 screenshot]
 
I am wondering how much he got paid to write that article
Hi,
Per click so keep giving lol

I still have very old Crucial MX100 SSDs working just fine; I didn't do much of anything besides use them and disable hibernation.
Linux killed one (it didn't like the firmware), but Crucial replaced it, so no more issue.
 
Silly question time :laugh:

I want to migrate W11 from an 850 to a 980 Pro, but the W11 bootloader is on my 970, where I have W10. So I'd like to know if it is possible to migrate W11 even though the bootloader is on another SSD.
 
Silly question time :laugh:

I want to migrate W11 from an 850 to a 980 Pro, but the W11 bootloader is on my 970, where I have W10. So I'd like to know if it is possible to migrate W11 even though the bootloader is on another SSD.

Might depend on how Windows identifies volumes. The UEFI bootloader loads systems based on the GUID of the volume where the OS is located. So, I think if you simply clone the drive volumes from one SSD to the other, they should keep their original GUIDs. Hence, the bootloader should keep working fine.

Otherwise, you can try your hand at editing the boot entries with bcdedit or some other tool (EasyBCD comes to mind).
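
If you want to sanity-check the boot entries before and after the clone, bcdedit can show them (standard switches; run from an elevated prompt):

Code:
bcdedit /enum
bcdedit /v

/enum lists the boot entries, and /v prints the full GUIDs instead of the friendly aliases; what you'd want to verify is that the Windows 11 entry's device/osdevice values still resolve after the migration.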

Just have some bootable USB drive or something at hand if things don't work out that way.
 
Silly question time :laugh:

I want to migrate W11 from an 850 to a 980 Pro, but the W11 bootloader is on my 970, where I have W10. So I'd like to know if it is possible to migrate W11 even though the bootloader is on another SSD.
Just image the drive over. AOMEI has a drive copy utility that does the job perfectly.
 