# Pagefile "anomalies"?



## Tibor Hazafi (Oct 16, 2019)

Hi all,

Until now I thought I understood how the pagefile works. I thought that when physical RAM runs out, data gets written to a part of the hard drive called the pagefile. Since I have 16GB of RAM I thought I could turn the pagefile off completely, but I set a fixed 4GB for it just to be safe.

Yesterday, out of curiosity, I monitored pagefile usage via MSI Afterburner & RivaTuner Statistics Server while gaming. Looking at the log file, it moved between 2147MB and 9331MB, while peak physical RAM usage was 5872MB. Wtf? Three questions immediately came to mind:

1. If the physical RAM didn't run out, why is there any pagefile usage at all?
2. If I set the maximum pagefile size to 4GB, how can it reach 9331MB without an error?
3. Which fixed amount should I set then?

I don't understand how the pagefile works anymore...


----------



## INSTG8R (Oct 16, 2019)

With Win 10 you just let it do its own thing; clearly you've seen it's doing that anyway. Best to just use the "Let Windows Manage" default setting and not think/worry about it.


----------



## Vya Domus (Oct 16, 2019)

The pagefile size should not be messed with; I don't know why MS still lets people try to change it. Paging is a critical component of virtual memory. Contrary to the popular belief that you don't need it if you have enough RAM, any OS *will always use virtual memory* and therefore will always need paging.

When the OS allocates memory to a program, it does not hand it a physical address; instead it works through these "memory pages", which live in a separate address space much larger than the physical one.

In other words, software always interacts with memory through these pages and never directly with physical locations, no matter how much memory is available. You can see why there is no point in trying to mess with it.
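The translation idea above can be sketched as a toy model. This is purely illustrative (real page tables live in hardware and the kernel, and all the names here are invented), but it shows the point: a program only ever addresses virtual pages, and the OS backs each page with either a RAM frame or the pagefile.

```python
# Toy model of virtual-to-physical translation with demand paging.
# Hypothetical sketch only; not how Windows actually implements it.
PAGE_SIZE = 4096

class ToyVM:
    def __init__(self):
        self.page_table = {}  # virtual page number -> physical frame number
        self.pagefile = {}    # virtual page number -> contents swapped to "disk"
        self.frames = {}      # physical frame number -> page contents in "RAM"
        self.next_frame = 0

    def read(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in self.page_table:              # page fault
            # Bring the page in from the pagefile (or start a fresh zero page).
            data = self.pagefile.pop(vpn, bytearray(PAGE_SIZE))
            self.frames[self.next_frame] = data
            self.page_table[vpn] = self.next_frame  # map it into "RAM"
            self.next_frame += 1
        return self.frames[self.page_table[vpn]][offset]
```

Every access goes through the page table; the program never sees a physical address, which is exactly why paging exists regardless of how much RAM is installed.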

So it doesn't matter what it does, just let it do its thing.


----------



## bonehead123 (Oct 16, 2019)

Windows 10 has pretty much made this discussion a moot point, since its memory management, while not 100% perfect, is light years beyond what it used to be in earlier versions of Windows (XP, Vista, 7, 8).

A while back, just out of curiosity, I set up 3 identical rigs (same OS, fully updated with all background shitzu turned off; same components, apps & drivers): one with a system-managed page file, one with my own settings, and the third with NO page file..... and then ran them all through a series of tasks that I normally do every day, for over a week.

The results: there was no noticeable difference in performance regardless of the tasks performed, either individually or in multi-app, heavy-workload situations....


----------



## kapone32 (Oct 16, 2019)

I agree with Vya Domus. I have 32GB of RAM and I still see the pagefile using about 5GB of data when I'm gaming. The best thing to do is set it on your fastest drive and be happy.


----------



## Jetster (Oct 16, 2019)

The page file is also used for memory dumps. Windows will allocate enough space to do this. Don't mess with it; it's part of your system recovery.


----------



## eidairaman1 (Oct 16, 2019)

INSTG8R said:


> With Win 10 you just let it do its own thing; clearly you've seen it's doing that anyway. Best to just use the "Let Windows Manage" default setting and not think/worry about it.



I set the minimum and max size the same so it doesn't stretch or compress. However, I know that in the days of the HDD you could put it on a separate drive for better performance.


----------



## Tibor Hazafi (Oct 16, 2019)

Jetster said:


> Don't mess with it; it's part of your system recovery



I don't use system recovery either. One of the first things I do after a Windows install is turn it off, along with restore points. I prefer a fresh, clean Windows install over recovery, especially since I've only had to reinstall the OS after bigger hardware changes, every 3-4 years.


----------



## Jetster (Oct 16, 2019)

Tibor Hazafi said:


> I don't use system recovery either. One of the first things I do after a Windows install is turn it off, along with restore points. I prefer a fresh, clean Windows install over recovery, especially since I've only had to reinstall the OS after bigger hardware changes, every 3-4 years.


It's not that kind of recovery, it's a memory dump. You know, when you reboot and it goes right back to the page you were on because your system got an error.


----------



## INSTG8R (Oct 16, 2019)

eidairaman1 said:


> I set the minimum and max size the same so it doesn't stretch or compress. However, I know that in the days of the HDD you could put it on a separate drive for better performance.


I used to as well, but it really isn't necessary anymore. The other-drive thing is still valid though; I'm probably thrashing my SSD RAID array by not moving it, but ¯\_(ツ)_/¯ it's my fastest drive so....


----------



## Bill_Bright (Oct 16, 2019)

Tibor Hazafi said:


> I thought that when physical RAM runs out, data gets written to a part of the hard drive called the pagefile.


Sorry, but no. That is not right at all. And it never has been. That's been a falsehood told and retold since the beginning of page files (as far back as when they were called disk caches, swap files and other names). Operating systems will use the PF even if you have 128GB of RAM installed and only 4GB is being used. And that's a good thing! 

What happens is the OS will use the main system RAM for the higher priority data, then stuff the lower priority data into the page file. The more system RAM you have, the more higher priority data can go in it. But it will still use the PF for lower priority data - and again, that's a good thing. 

You will often hear that if you have lots of RAM, disabling the PF is good because it forces the OS to put everything into RAM. That is illogical and BAD ADVICE! Operating systems use "virtual memory" to process data. Virtual memory is the system RAM plus the PF. The PF allows the OS to use the system RAM more efficiently. It does NOT improve performance when you disable the PF.
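That placement idea can be sketched in a few lines, assuming a made-up priority score per page (Windows' real policy is far more sophisticated than this): with a fixed number of RAM "frames", the highest-priority pages stay in RAM and everything else spills to the pagefile.

```python
import heapq

# Toy sketch of priority-based placement; purely illustrative.
def place_pages(pages, ram_frames):
    """pages: list of (priority, name) tuples; higher priority stays in RAM.
    Returns (in_ram, in_pagefile)."""
    in_ram = heapq.nlargest(ram_frames, pages)        # keep the hottest pages
    in_pagefile = [p for p in pages if p not in in_ram]  # the rest spill over
    return in_ram, in_pagefile

ram, pf = place_pages([(9, "game"), (7, "browser"), (2, "idle service")], ram_frames=2)
# ram -> [(9, 'game'), (7, 'browser')], pf -> [(2, 'idle service')]
```

Note that the low-priority page goes to the pagefile even though nothing "ran out"; that is the behaviour the OP observed while gaming.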

Even if the PF is rarely touched, there is no benefit to disabling it. So why do it? If the excuse is to save disk space, that's BS! For one, if you let Windows manage the space, it will give it up if space is critically low. But more importantly, if you are that low on free disk space, then YOU HAVE FAILED to give your OS the necessary space it needs. You need to uninstall some programs, delete or move some files or buy a bigger drive. Those are user responsibilities, not the operating system's.

There really is no need to set a fixed size either. It does no harm when the PF compresses or stretches back out. In fact, being "dynamic" is one of its virtues. And unless you are a true expert at virtual memory management, how do you know what the optimal size should be? The old rule of thumb (page file size = RAM × 1.5 or RAM × 2) makes no sense on modern systems.
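For concreteness, the arithmetic of that old rule, shown only to illustrate how badly it scales with modern RAM amounts, not as a sizing recommendation:

```python
# The legacy "pagefile = RAM x 1.5 (or x 2)" sizing heuristic.
def old_rule_of_thumb(ram_gb, factor=1.5):
    return ram_gb * factor

# On the OP's 16GB machine the rule demands 24-32GB of disk; on a 64GB
# workstation it balloons to 96-128GB, which is why it no longer makes sense.
print(old_rule_of_thumb(16), old_rule_of_thumb(16, 2))   # 24.0 32
print(old_rule_of_thumb(64), old_rule_of_thumb(64, 2))   # 96.0 128
```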

If the worry of using a fixed size is to minimize fragmentation due to the PF expanding and compressing, then I say that is an unnecessary worry. With XP and before when drives (and RAM amounts) were much smaller, it might have done some good but no longer. For one, if fragmentation is a real concern, you need a bigger drive. Second, fragmentation is not a problem with SSDs. And third, unless the user changes the defaults (not recommended) Windows automatically defrags HDs anyway.

It's important for all of us to remember that modern versions of Windows (7/8/10) are not XP. There's no need to treat them the same; in fact, doing so can actually be detrimental. With XP and before, all the way back to DOS days, I always set a fixed size. Since 7 and above, I always let Windows manage it.

So I agree with those recommending leaving the defaults alone (all the defaults) and to just let Windows manage your page file(s). Contrary to what some seem to think, the army of PhDs, computer scientists and developers at Microsoft are not stupid. Those developers definitely want our systems to run optimally. And even the oft misguided marketing weenies and execs at Microsoft truly want our computers to run optimally too - if for no other reason than it would be bad publicity (thus bad for business) if they didn't.

Microsoft has decades of experience and exabytes of empirical data to draw from. And they have super computers to analyze that data to ensure Windows (7/8/10) uses the PF in the most efficient manner. I highly doubt any of us here are more knowledgeable in virtual memory management than the development team at MS. 

***


INSTG8R said:


> I’m probably thrashing my SSD RAID...


Actually SSDs are ideally suited for Page Files. See Support and Q&A for Solid-State Drives and scroll down to, "_Frequently Asked Questions, Should the pagefile be placed on SSDs?_" Note it says, "_there are few files better than the pagefile to place on an SSD._" While the article is getting old, it actually applies even more so today, since the wear problems of early-generation SSDs are no longer a problem and each new generation of SSD just keeps getting better and better. So IMO, you were wise to put your PF on your SSD RAID.


----------



## INSTG8R (Oct 16, 2019)

Bill_Bright said:


> Actually SSDs are ideally suited for Page Files. See Support and Q&A for Solid-State Drives and scroll down to, "_Frequently Asked Questions, Should the pagefile be placed on SSDs?_" Note it says, "_there are few files better than the pagefile to place on an SSD._" While the article is getting old, it actually applies even more so today, since the wear problems of early-generation SSDs are no longer a problem and each new generation of SSD just keeps getting better and better. So IMO, you were wise to put your PF on your SSD RAID.


No doubt, but it's also my system drive. Then again, it's my fastest, so I suppose it's not a huge deal.


----------



## Bill_Bright (Oct 16, 2019)

INSTG8R said:


> No doubt, but it's also my system drive. Then again, it's my fastest, so I suppose it's not a huge deal.


Again, that sounds ideal.  I always put my OS and my PF on my fastest drives. If the OS itself is allowed to operate most efficiently, all the running programs benefit too.


----------



## INSTG8R (Oct 16, 2019)

Bill_Bright said:


> Again, that sounds ideal.  I always put my OS and my PF on my fastest drives. If the OS itself is allowed to operate most efficiently, all the running programs benefit too.


Well there’s the whole “Move it to another drive” theory I mean I have a 1TB 960 QVO I suppose I could put it there but I was always sceptical it made any noticeable difference.


----------



## Bill_Bright (Oct 16, 2019)

Sure, you can move it to another drive. But if you have a fast SSD as your boot/system drive, it would make no sense to move it to a slower drive (like a HD) - unless you were desperately low on disk space. 

And yes, for sure, if you move it to another SSD, even a slow one, it is highly unlikely you (as a human) would "notice" any difference in particular because the highest priority data would still be going into system RAM. 

But even if you move the PF to a secondary drive, it is still recommended you leave a PF on the system drive for dumps. Windows knows how to use and manage multiple PFs just fine.


----------



## INSTG8R (Oct 16, 2019)

Bill_Bright said:


> Sure, you can move it to another drive. But if you have a fast SSD as your boot/system drive, it would make no sense to move it to a slower drive (like a HD) - unless you were desperately low on disk space.
> 
> And yes, for sure, if you move it to another SSD, even a slow one, it is highly unlikely you (as a human) would "notice" any difference in particular because the highest priority data would still be going into system RAM.
> 
> But even if you move the PF to a secondary drive, it is still recommended you leave a PF on the system drive for dumps. Windows knows how to use and manage multiple PFs just fine.


Yeah I never really subscribed to it anyway. I only ever used the fixed size back in the day.


----------



## HD64G (Oct 16, 2019)

The page file is critical to a modern OS keeping its performance optimal. And it only works well for the system when put on an SSD or a RAMdisk. I have a dual-boot system atm, with Win7's page file on the OS's SSD and Win10's pagefile on a RAMdisk. It was the only way to keep performance constant. Whenever I put a pagefile on an HDD, performance degraded after a few days to the point of the system being very laggy.


----------



## Bill_Bright (Oct 16, 2019)

HD64G said:


> And it only works well for the system when put on an SSD or a RAMdisk.


Only? That's not true at all. While it's certainly better on an SSD, if you only have a hard drive, putting it there is fine too.

As far as your performance degrading "after a few days", that's caused by something else - like low disk space - and not the fact you put the PF on the HD.

It's not like page files fill up and stay filled with old data.


----------



## DeathtoGnomes (Oct 16, 2019)

Vya Domus said:


> The pagefile size should not be messed with; I don't know why MS still lets people try to change it. Paging is a critical component of virtual memory. Contrary to the popular belief that you don't need it if you have enough RAM, any OS *will always use virtual memory* and therefore will always need paging.
> 
> When the OS allocates memory to a program, it does not hand it a physical address; instead it works through these "memory pages", which live in a separate address space much larger than the physical one.
> 
> ...


The problem with previous Windows versions managing the page file is that they would claim and allocate more drive space than is really necessary. I set a specific size; 10GB is good for most everything. I have no problem adjusting it, as it doesn't really hurt anything. Setting a size works well for smaller drives.


----------



## Bill_Bright (Oct 16, 2019)

DeathtoGnomes said:


> The problem with previous Windows versions managing the page file is that they would claim and allocate more drive space than is really necessary.


But so what? Again, if space becomes critical Windows will adjust it back down again. And it will do it dynamically as and when needed. That's the beauty of letting Windows manage it. When set manually, it is done only when and if the user thinks about it.


DeathtoGnomes said:


> I have no problem adjusting it, as it doesn't really hurt anything.


And that's fine but letting Windows manage it does not hurt anything either. That's why just letting Windows manage it is the preferred and most often recommended setting by the experts. I say again, it is not likely any of us here are true experts at virtual memory management.


DeathtoGnomes said:


> I set a specific size


And yet your system specs say W10, but your complaint was about previous Windows versions. ???

I am not suggesting you are hurting your system. But I am saying setting a fixed size does not help it.

For the record, if W7 was kept fully updated, its virtual memory management was updated from when W7 was first introduced and was almost as efficient as that in W10. W8's is pretty much the same as W10's. While the Windows-managed page file was the default in Vista, and it did work, it was not as mature as the later versions.

As I noted earlier, with XP and before, I set my own sizes too. I was very hands-on back then. But again, W10 is not XP. We really need to stop treating it like it is. Microsoft has not been sitting on their thumbs these last 20 plus years. They have taken the lessons learned to make Windows manage virtual memory very efficiently.


----------



## HD64G (Oct 16, 2019)

Bill_Bright said:


> Only? That's not true at all. While it's certainly better on an SSD, if you only have a hard drive, putting it there is fine too.
> 
> As far as your performance degrading "after a few days", that's caused by something else - like low disk space - and not the fact you put the PF on the HD.
> 
> It's not like page files fill up and stay filled with old data.


Degradation of performance comes from the page file getting fragmented by the constant writes, not from its size. SSDs and RAMdisks aren't affected by that at all.


----------



## natr0n (Oct 16, 2019)

You always need a pagefile.

Photo editing, games, everything uses the pagefile.


----------



## DeathtoGnomes (Oct 16, 2019)

Bill_Bright said:


> but your complaint was about previous windows versions. ???


You misread and misinterpret as usual. As for the rest of your unwanted feedback (on my system), I'll refrain from telling you what to do with your PC as well.


----------



## Voluman (Oct 16, 2019)

Bill_Bright said:


> The old rule of thumb (page file size = RAM × 1.5 or RAM × 2) makes no sense on modern systems.


Define "modern system". Or do you mean Win 10 build xx?



Tibor Hazafi said:


> 3. Which fixed amount should I set then?



If you stick with the above, you should not experience any disturbing effects. Or at least set the same amount as your RAM. But if you set it manually to 4-8GB, that should be okay in most cases.

Windows needs the pagefile, so turning it off is not a solution; you can experience freezes, lags and program errors (for me, for example, SW Battlefront 2 is very picky about the pagefile).
If you have enough storage, let the system handle it. If not, set it to the same as your RAM, or test which of your programs work well with a smaller pagefile, like 4-8GB.


----------



## Bill_Bright (Oct 16, 2019)

HD64G said:


> Degredation of performance comes from the page file getting fragmented by the constant writesa and not from the size of it.


While this is certainly possible, it is not very common. And the problem would be minimal if the user kept plenty of free disk space available, as that would allow the PF to be maintained in large contiguous sections, or even a single one.


Voluman said:


> Define modern system, or you mean win 10 build xx?


Not just modern operating systems, but systems with decent amounts of RAM, fast CPUs etc. Even W7 on hardware built for W7 could be considered modern. XP-era hardware that was upgraded to W7 might not be.


----------



## Tibor Hazafi (Oct 17, 2019)

Bill_Bright said:


> That's the beauty of letting Windows manage it.



Okay, so after reading all your advice, it became perfectly clear to me that the optimal setting for the PF is to let Windows handle it. But in which way?
A. check "Automatically manage paging file size for all drives"
OR 
B. uncheck "Automatically manage paging file size for all drives" and set "System managed size" for C and "No paging file" for D
OR
C. Whatever


----------



## INSTG8R (Oct 17, 2019)

A and never think about it again. Automatic and forget where the setting is


----------



## Tibor Hazafi (Oct 17, 2019)

INSTG8R said:


> A and never think about it again. Automatic and forget where the setting is



What setting? 
(Thanks by the way  )


----------



## Bill_Bright (Oct 17, 2019)

Tibor Hazafi said:


> What setting?


Umm, he answered using your own designation for the first option, "A". 

Just check the box for "Automatically manage paging file size for all drives" then never worry about it again.


----------



## Tibor Hazafi (Oct 17, 2019)

Bill_Bright said:


> Umm, he answered using your own designation for the first option, "A".



It was meant to be a joke:
INSTG8R : "Automatic and *forget* where the setting is"
Me: "What setting?"


----------



## Bill_Bright (Oct 17, 2019)

Ah! Sorry.   My bad! I clearly should have picked up on that. Obviously I am not fully caffeinated yet today so I'm not good at reading facial expressions and body language, or hearing tone of voice via the forums!


----------



## Vayra86 (Oct 17, 2019)

Personally never understood the rationale behind 'put all data in RAM so you can access it more often, it improves performance'.

Like, wut? Using more resources for the same task is good?


----------



## Voluman (Oct 17, 2019)

I think set it on the system partition only and let Windows manage it then 

(I always set it on the system partition only; on other partitions or drives there is nothing for MS or the pagefile to do, those are my stuff  )



Vayra86 said:


> put all data in RAM so you can access it more often, it improves performance


RAM usually has lower access times and higher bandwidth than any storage drive.


----------



## Athlonite (Oct 18, 2019)

Funny thing is, I have 16GB of RAM and a 16GB pagefile (on a separate SSD), but I've yet to see Windows make any meaningful use of it, even when gaming. Meh, it doesn't really worry me, I just think it's a little weird is all.

Also, what is the difference between Pagefile.sys and Swapfile.sys (other than size, that is)?


----------



## TheMadDutchDude (Oct 18, 2019)

Disabling your PF can also wreak havoc on multiple aspects of your system. Some games in particular (though they escape my memory, as I refuse to mess with the PF these days; there's no reason to!) would crash on launch without a PF. You can't just get rid of something that is there for the system to use.

If memory serves me right, swapfile.sys is there to store programs (in their current state) and resume them as needed.


----------



## biffzinker (Oct 18, 2019)

Athlonite said:


> Also, what is the difference between Pagefile.sys and Swapfile.sys (other than size, that is)?


Swapfile.sys pertains to the newer Windows Universal Apps.

It first showed up in Windows 8.


			
The Windows Club said:


> The *Swapfile.sys in Windows 8* is a special type of pagefile used internally by the system to make certain types of paging operations more efficient. It is used to *Suspend or Resume Metro or Modern Windows 8 apps*.

Source: "Hiberfil.sys, Pagefile.sys & the New Swapfile.sys file in Windows 11/10" on www.thewindowsclub.com

----------



## oobymach (Oct 18, 2019)

You can turn the pagefile off completely and your computer will still work fine if you have enough RAM (I did this in Windows 7 for a good while). Games may not run though; GTA V especially needs a big pagefile to run. If you use an SSD to run Windows, turning off your pagefile can extend the life of your drive, but it may affect the performance of some programs.


----------



## Bill_Bright (Oct 18, 2019)

Vayra86 said:


> Personally never understood the rationale behind 'put all data in RAM so you can access it more often, it improves performance'.


That's because that is not rational or true. 



oobymach said:


> You can turn off pagefile completely and your computer will still work fine


But why? That just makes no sense. If it made sense (or improved performance) to turn it off, don't you think Microsoft would code Windows to disable it by default? If the computer ran better with the PF disabled when gobs of RAM was installed, don't you think MS would code Windows to disable it?

Microsoft wants Windows to perform optimally. Why? Because if it didn't, they know all the MS haters and bashers would constantly and relentlessly trash them over it! So Windows is coded to enable the PF by default, even when gobs and gobs of RAM is installed, or when the boot drive is an SSD.



oobymach said:


> If you use an ssd to run windows turning off your pagefile can extend the life of your drive


That's nonsense too. Many years ago, with first-generation SSDs, that might have been true. But with modern SSDs, even on a busy computer it would take so many years to reach that limit that all the other hardware would be obsolete and superseded many times over first.

Please go back and see my post #11. Read my last paragraph and follow the link that explains why SSDs and Page Files are ideally suited for each other.


----------



## Athlonite (Oct 18, 2019)

biffzinker said:


> Swapfile.sys pertains to the newer Windows Universal Apps.
> 
> It first showed up in Windows 8.
> 
> ...



Ah, I thought as much. So now they're double dipping; not good enough to have just a pagefile anymore. Lucky for me it's easily turned off in the registry.


----------



## Splinterdog (Oct 18, 2019)

On a system with a small amount of RAM, say 1, 2 or 4GB, what would the ideal pagefile sizes be? Or leave it at Windows managed?


----------



## oobymach (Oct 18, 2019)

Bill_Bright said:


> But why? That just makes no sense. If it made sense (or improved performance) to turn it off, don't you think Microsoft would code Windows to disable it by default? If the computer ran better with the PF disabled when gobs of RAM was installed, don't you think MS would code Windows to disable it?
> 
> Microsoft wants Windows to perform optimally. Why? Because if it didn't, they know all the MS haters and bashers would constantly and relentlessly trash them over it! So Windows is coded to enable the PF by default, even when gobs and gobs of RAM is installed, or when the boot drive is an SSD.


Think of it like a temp folder: you don't really need it for most programs to run, but you may run into issues if you disable it.

Turning it off doesn't improve performance, but it doesn't negatively impact performance either.



Splinterdog said:


> On a system with a small amount of RAM, say 1, 2 or 4Gb, what would the ideal pagefile sizes be? Or leave it at Windows managed?


Windows managed is usually best. I think you're supposed to set a 24GB max for 16GB of RAM; I use 8192MB for the initial size and 16384MB for the max and haven't run into any issues (I have 16GB of RAM). I don't think it matters how much RAM you have vs the pagefile size.


----------



## Ryzen_7 (Oct 19, 2019)

Tibor Hazafi said:


> Hi all,
> 
> Until now I thought I understood how the pagefile works. I thought that when physical RAM runs out, data gets written to a part of the hard drive called the pagefile. Since I have 16GB of RAM I thought I could turn the pagefile off completely, but I set a fixed 4GB for it just to be safe.
> 
> Yesterday, out of curiosity, I monitored pagefile usage via MSI Afterburner & RivaTuner Statistics Server while gaming. Looking at the log file, it moved between 2147MB and 9331MB, while peak physical RAM usage was 5872MB. Wtf?



Yap, my experience exactly.

On Windows you need a pagefile, because that's how Windows has worked since the dawn of time. If you make a custom-size pagefile like me (e.g. initial 4096, maximum 8192 MB), you can experience weird behaviour from Windows, even crashes, black screens etc. Opera keeps crashing tabs even though my RAM is only 50% full, so I need to close some tabs and reload them again.

Moral of the story: have a large enough SSD and let Windows automatically manage the pagefile, so you won't experience weird behaviour from Windows and applications. And never put the pagefile on a mechanical disk, because it is terribly slow.



Vya Domus said:


> The pagefile size should not be messed with; I don't know why MS still lets people try to change it. Paging is a critical component of virtual memory. Contrary to the popular belief that you don't need it if you have enough RAM, any OS *will always use virtual memory* and therefore will always need paging.



Every OS (some from the same family) behaves more or less differently despite similar logic. On Linux you can be perfectly fine without a dedicated swap partition or swap file (the analogue of the Windows pagefile), assuming of course a large amount of RAM, like 8 GB or more.

But on Linux you do need swap for hibernation, or when you don't have enough RAM; and performance is at its worst when swap is on a mechanical drive.



			
Bill_Bright said:

> It's not like page files fill up and stay filled with old data.



It does, in my case. If I let Windows automatically manage the pagefile, it grows like mad, with no shrinking or flushing of garbage from the pagefile. This is maybe because I use the S3 state (suspend to RAM), so with long uptimes Windows doesn't flush the pagefile without a restart.
I have 16 GB of RAM and it is not enough. I have many tabs open in browsers, with browsers using about 2 GB of memory or more, and the pagefile keeps growing.



			
Bill_Bright said:

> So I agree with those recommending leaving the defaults alone (all the defaults) and to just let Windows manage your page file(s). Contrary to what some seem to think, the army of PhDs, computer scientists and developers at Microsoft are not stupid. Those developers definitely want our systems to run optimally. And even the oft misguided marketing weenies and execs at Microsoft truly want our computers to run optimally too - if for no other reason than it would be bad publicity (thus bad for business) if they didn't.
> 
> Microsoft has decades of experience and exabytes of empirical data to draw from. And they have super computers to analyze that data to ensure Windows (7/8/10) uses the PF in the most efficient manner. I highly doubt any of us here are more knowledgeable in virtual memory management than the development team at MS.



Argument from authority. There are different approaches to managing memory, partitions, files etc.

Anyway, there is still the old Microsoft/Windows tradition of installing everything, or at least their own stuff, on C:; even if you have the option to install on a different drive or partition, some of it installs on C: no matter what.
I know there are reasons (pros and cons) behind it, but it is not as flexible as Linux, especially if you have a small SSD system drive, let's say 60 GB.


----------



## Bill_Bright (Oct 19, 2019)

oobymach said:


> Think of it like a temp folder, you don't really need it for most programs to run,


No!  That's not the right way to think about it. Regardless what your running programs need or do, the OS will use it to its advantage. And that's a good thing.


> but you may run into issues if you disable it.


Right! You may - or may not. So since you "may", there's no reason to disable it.

And certainly, disabling it because "I didn't notice any difference" doesn't make sense either. I say use that same logic to leave it alone, enabled and system managed.


Ryzen_7 said:


> It does, in my case. If I let Windows automatically manage the pagefile, it grows like mad, with no shrinking or flushing of garbage from the pagefile. This is maybe because I use the S3 state (suspend to RAM), so with long uptimes Windows doesn't flush the pagefile without a restart.


Define, "grows like mad". Surely you are not suggesting if you don't reboot, the PF will eventually consume all your free disk space? If that is the case, then you are already critically low on free disk space, or there is a fault with your system somewhere that needs to be corrected. Either way, the solution is not circumvention.

You say the PF keeps growing. I say that's what it is supposed to do! But as seen here with Windows 10, it should not get larger than 4GB unless your system is experiencing a bunch of crash dumps - in which case you have other issues to deal with.

FTR, I also use S3 and I only reboot when some Windows or security program update forces me to. That means I could easily go several weeks without an actual reboot.



Ryzen_7 said:


> Argument from authority. There are different approaches to manage memory, partitions, files etc.


I agree. But what I am saying is, unless one is a true professional with advanced training in virtual memory management, it is highly unlikely one would know a better approach than the default - which is already totally capable, no matter how unique any particular scenario may be. In fact, the system-managed PF assumes each and every computer (even with identical hardware running identical programs) will have unique PF requirements, and then deals with them accordingly.


----------



## Ryzen_7 (Oct 20, 2019)

Bill_Bright said:


> Define, "grows like mad". Surely you are not suggesting if you don't reboot, the PF will eventually consume all your free disk space? If that is the case, then you are already critically low on free disk space, or there is a fault with your system somewhere that needs to be corrected. Either way, the solution is not circumvention.
> 
> You say the PF keeps growing. I say that's what it is supposed to do! But as seen here with Windows 10, it should not get larger than 4GB unless your system is experiencing a bunch of crash dumps - in which case you have other issues to deal with.
> 
> FTR, I also use S3 and I only reboot when some Windows or security program update forces me to. That means I could easily go several weeks without an actual reboot.



Now that I've changed the PF to automatic, it is currently 12800 MB (and I expect it to keep growing with no obvious ceiling) and RAM is 64% full (10.3/16 GB); browsers are eating memory like mad, by the way. It seems like Windows follows the old Linux rule of thumb, 2*RAM = swap/pagefile. So a 32 GB pagefile would be the safe bet.
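As a toy illustration of the sizing policies being argued about here - the function names and the flat 4 GB cap are my own invented assumptions, not anything Windows documents - a minimal Python sketch of the old 2*RAM rule of thumb versus a capped policy:

```python
# Toy sketch of two pagefile/swap sizing policies. The names and the
# 4 GB cap are illustrative assumptions, not Windows' actual algorithm.

def swap_old_rule(ram_gb: float) -> float:
    """Old Linux-era rule of thumb: swap = 2 * RAM."""
    return 2 * ram_gb

def swap_capped(ram_gb: float, cap_gb: float = 4) -> float:
    """Flat-cap policy: never reserve more than cap_gb."""
    return min(ram_gb, cap_gb)

for ram in (8, 16, 64):
    print(f"{ram} GB RAM -> old rule: {swap_old_rule(ram)} GB, "
          f"capped: {swap_capped(ram)} GB")
```

The point of the contrast: under the old rule the reservation scales with RAM (64 GB of RAM would mean 128 GB of swap), while a capped policy stays flat no matter how much RAM you add.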

True, there could be issues I am unaware of. Either way, just like the OP, I expected different behaviour from Windows managing the PF, partly because on GNU/Linux I never experienced this kind of weirdness despite keeping only a small swap partition, mostly out of tradition.



Bill_Bright said:


> I agree. But what I am saying is, unless a true professional with advanced training in virtual memory management, it is highly unlikely one would know a better approach than the default - which is already totally capable regardless how unique any particular scenario may be. In fact, the system-managed PF assumes each and every computer (even with identical hardware running identical programs) will have unique PF requirements, and then deals with them accordingly.



Fortunately I have 60GB of free space. The problem is that the way Windows manages memory is not flexible enough; you are forced to have a big system drive (C:). In Linux, the system (/) root partition can be about 20 GB tops, while /tmp and swap can live on another drive, or you can use soft/hard links if you don't have enough space on the main drive. I know you can use soft and hard links on Windows too, but I haven't tried how well they work for system stuff. Windows is very picky and sensitive about system stuff.


----------



## Bill_Bright (Oct 20, 2019)

Ryzen_7 said:


> It seems like Windows follows old Linux rule of thumb, 2*RAM = swap/pagefile.


No, it does not follow any rule of thumb. If it did, my page file would be a whopping 64GB. But it is currently set to 4186MB.


Ryzen_7 said:


> Problem is, the way how Windows manage memory is not enough flexible, you are forced to have big system drive


Ummm, not true. First, the size needed is not determined by the way the OS "manages" memory. The size needed is determined simply by the size of the files that make up the OS. 

Technically, you only need 20GB for 64-bit Windows 10 (though a minimum of 32GB is recommended). However, I would never recommend anything less than a 128GB drive for the OS as that gives Windows enough room for drivers, temporary files and the PF. But I would recommend a secondary drive for all installed applications if the boot drive is less than 250GB.

That said you can easily move your temp files location, Documents folder, and the PF to a different drive in Windows too. So your point there is invalid.

And for the record, a quick bit of homework with Google shows the minimum system requirements for Ubuntu Linux calls for 25GB of disk space - 5 more than W10!


Ryzen_7 said:


> Windows is very picky and sensitive about system stuff.


No it's not! A little bit of homework and setting aside of biases is needed here.

Consider this. There are over 1.6 billion (with a "b") Windows computers out there. Virtually each and every one became a unique system within the first few minutes of being booted up the very first time! Users set up their accounts, personalization, networking, security apps, personal apps, and peripherals. And Windows supports them all.

With Windows, you can buy an ASUS motherboard, AMD processor, MSI graphics card, Western Digital hard drive, 8GB of Kingston RAM, put them in a Corsair case, power them with a Seasonic power supply, connect it to a 27" Acer monitor and Epson laser printer, connect to your network via Ethernet, install AVAST security, and install Microsoft Office on it to create your resume/CV, and it will work.

Or you can buy a Gigabyte motherboard, Intel processor, XFX graphics card, Samsung SSD, 16GB Crucial RAM, and put them in a Fractal Design case, power them with an EVGA power supply, connect them to two 24" LG monitors and HP ink jet AiO, connect to your network via wifi, use Windows Defender and install LibreOffice on it to create your resume/CV, and it will still work. 

If you are a Toyota mechanic working in a Toyota dealer's service center and you see nothing but broken-down Toyotas all day, then unless you keep an open mind and set aside any preconceived notions, you could easily start to believe Toyota makes lousy cars.


----------



## Ryzen_7 (Oct 20, 2019)

Bill_Bright said:


> No, it does not follow any rule of thumb. If it did, my page file would be a whopping 64GB. But it is currently set to 4186MB
> Ummm, not true. First, the size needed is not determined by the way the OS "manages" memory. The size needed is determined simply by the size of the files that make up the OS.



It was a joke on my part - that's why I added the smiley, to avoid confusion.

To recap: after my last reply, my pagefile grew to 18003 MB and RAM is 71% full. I don't have anything running but browsers, with more or less tabs open in the background. I should check for extensions or add-ons that hibernate or suspend background tabs, like Vivaldi does out of the box, I think. I would expect Windows to flush unused garbage from the pagefile, but no, it keeps growing.

Besides crash dumps and hibernation, there is no reason to fill swap or a pagefile if there is enough RAM, because RAM is the fastest memory available. In the old days, when there were only mechanical disks, it was terrible to use an HDD as a RAM substitute - a crutch for RAM. If you have enough RAM, I don't see why memory management wouldn't keep everything in RAM instead of on a much slower mechanical drive or SSD.

AmigaOS and Atari TOS could use a RAM disk for a reason, and I remember that, at least on the Atari ST, you could press the hardware reset button and a program (a Macintosh emulator, if memory serves me well) would stay resident in memory; you needed to completely power off the computer to clear the program from memory.

Speaking of which, Gigabyte released the i-RAM a long time ago: a PCI card with DDR RAM slots serving as a RAM disk, way ahead of its time considering the low density/capacity of DDR back then.

I thought that 16 GB of (DDR4) RAM would be enough for browsing and casual computer use, after seeing how 8GB of (DDR3) RAM on my other computer was not enough. I tend to have many tabs open in browsers, and probably many casual users have never hit my scenario.

A desktop PC is not a server; one size does not fit all. For servers, swap or a pagefile has a clear purpose; on desktops (or workstations), I dare say, not as much. I don't know how many desktop users and professionals actually check logs and crash dumps for debugging.



Bill_Bright said:


> Technically, you only need 20GB for 64-bit Windows 10 (though a minimum of 32GB is recommended). However, I would never recommend anything less than a 128GB drive for the OS as that gives Windows enough room for drivers, temporary files and the PF. But I would recommend a secondary drive for all installed applications if the boot drive is less than 250GB.



You said it yourself. You recommend at least a 128GB drive for Windows, and I say a 40-60GB SSD or even smaller is enough for any GNU/Linux or FreeBSD distribution. I was forced to buy a 256GB SSD because anything less was a pain in the ass to use with Windows 10, and now even 256 GB is too small. If you need Visual Studio and you have a 256 GB SSD, you are screwed - 500 GB (NVMe) SSD here I come.



Bill_Bright said:


> That said you can easily move your temp files location, Documents folder, and the PF to a different drive in Windows too. So your point there is invalid.



I moved the default \Documents, \Downloads etc. folders to another drive (D:). Having permission issues with some of those folders, or the files inside them, after a fresh install is another problem, despite using the same account. Maybe I did something wrong - I can't remember, and it's another story.



Bill_Bright said:


> And for the record, a quick bit of homework with Google shows the minimum system requirements for Ubuntu Linux calls for 25GB of disk space - 5 more than W10!
> No its not! A little bit of homework and setting aside of biases is needed here.



Firstly, it was not my intention to start a GNU/Linux/Unix/FreeBSD vs Windows flame war. It was just an example of how things work, or could work, as an alternative point of view.
GNU/Linux has its own fair share of stupidity and complications, such as SysV vs systemd, and forks made for the sake of it and for ego trips.

As for the Ubuntu recommendation, it is probably a safe bet for noobs, because I know my / (root) partition never grows bigger than 20GB or so no matter what distribution I use, not to mention FreeBSD.

And you can be sure those 25GB will never be a problem for a GNU/Linux user, while a Windows user will need more than 20GB of space just for Windows to run normally with everything at defaults after installation.



Bill_Bright said:


> Consider this. There are over 1.6 billion (with a "b") Windows computers out there. Virtually each and every one became a unique system within the first few minutes of being booted up the very first time! Users setup their accounts, personalization, networking, security apps, personal apps, and peripherals. And Windows supports them all.



Argumentum ad populum.

If we count smartphones, supercomputers and servers, Linux or Unix-like OSs serve even more users, but this is irrelevant to the discussion.

But I get your point, it is complex to make one size fits all solution.

By the way, Microsoft had an opportunity with their WP (I own a Lumia 640) to be a good alternative to iOS and Android, but they blew that opportunity through their own idiocy. This is a problem for big companies: they become slow to change and lose their focus. Just like IBM management did not know what to do with the PC, and Atari and later Commodore blew up the Amiga project.



Bill_Bright said:


> With Windows, you can buy an ASUS motherboard, AMD processor, MSI graphics card, Western Digital hard drive, 8GB of Kingston RAM, put them in a Corsair case, power them with a Seasonic power supply, connect it to a 27" Acer monitor and Epson laser printer, connect to your network via Ethernet, install AVAST security, and install Microsoft Office on it to create your resume/CV , and it will work.
> 
> Or you can buy a Gigabyte motherboard, Intel processor, XFX graphics card, Samsung SSD, 16GB Crucial RAM, and put them in a Fractal Design case, power them with an EVGA power supply, connect them to two 24" LG monitors and HP ink jet AiO, connect to your network via wifi, use Windows Defender and install LibreOffice on it to create your resume/CV, and it will still work.



You mentioned apples and oranges. Some of this stuff is about standards and some of it is about compatibility - TCP/IP, CUPS, .DOC, you know.
I haven't used any antivirus program besides the default Windows Defender since it became an option. I used Avast and Avira (on XP and Win 98) a long time ago; AVG I never liked, and the other better-known ones (NOD32, Kaspersky, BitDefender, you name it) were shareware or something with fewer options available.

By the way, before Steve Jobs came back in charge, Apple tried this approach by allowing Macintosh clones. You had Radius (specialized in Macintosh peripherals and accessories) Macintosh clones, among others.

The same applies to GNU/Linux, FreeBSD and some other more "exotic" OSs like HaikuOS: you need to comply with certain standards, and then you don't have a problem with compatibility.

And you could buy AMD (there were also Cyrix and NexGen in the old days) instead of Intel to run x86 instructions. Fortunately, thanks to advances in process technology and software development, there is a rise in OpenRISC and similar architectures, so people will have another alternative.

There is a reason the PC is/was called IBM PC compatible: all those manufacturers follow the standards required to build IBM PC-compatible computers.

Cases are irrelevant here (you can run the hardware without one, but it is not practical or safe); they just need to follow certain form factors - ATX, EATX, ITX etc. They are just boxes where you put your hardware, no pun intended.



Bill_Bright said:


> If you are a Toyota mechanic working in a Toyota dealer's service center, and you see nothing but broken down Toyota's all day, if you don't keep an open mind and set aside any preconceived notions, you could easily start to believe Toyota makes lousy cars.



Automotive analogies for computers are usually awkward and lousy at best, but they are useful and simple for describing things to technologically inept people.

Cars don't change like hardware and software in the IT industry, and especially not per Moore's law - otherwise we would have flying cars. You have the same working principle in engines, be it petrol, diesel or electric. There are small variations, but there isn't as much room to advance as in the IT industry.

Sure, if you are a Toyota mechanic repairing only Toyota cars every day you could get the wrong impression, but you could share experience with other mechanics repairing other cars and make some kind of comparison.

It is the same with laptops. Some laptops may have more returns than others, but you need to check whether that is the result of more usage (more units sold and consequently more returns) or whether they really are more prone to failure.


----------



## LAN_deRf_HA (Oct 20, 2019)

Vya Domus said:


> The pagefile size should not be messed with, I don't know why MS still lets people try and change it. Paging is a critical component for virtual memory. Contrary to popular belief that if you have enough RAM you don't need it, this is wrong, any OS *will always use virtual memory* and therefore it will always need paging.
> 
> When the OS needs to allocate memory to a program it does not provide a physical address to it but rather it does this through these "memory pages" which have a different address space much larger than the physical one.
> 
> ...



A lot of us learn this the hard way. There will be some random thing that seems to benefit - I recall turning off the pagefile greatly improved one specific aspect of Skyrim performance - but then you slowly discover all these little things that depend on it. I also used to think you could get away with shrinking it down in the days of limited SSD sizes, but ultimately it's best not to touch it at all.


----------



## Vayra86 (Oct 21, 2019)

Bill_Bright said:


> You say the PF keeps growing. I say that's what it is supposed to do! But as seen here with Windows 10, it should not get larger than 4GB unless your system is experiencing a bunch of crash dumps - in which case you have other issues to deal with.



You're always right but this is plain wrong.

I see much bigger page files on a daily basis in my own rig. And MS also agrees. I'm frequently looking at 11-13 GB page files. My OS disk is a 1TB SSD.


----------



## Ryzen_7 (Oct 21, 2019)

Just a little update: when my RAM usage was about 11 GB, the pagefile hit a whopping 20 GB. I closed some tabs and when RAM was at 8 GB, the pagefile shrank to 12800 MB, so I will correct myself: Windows does dynamically change the size of the pagefile within a session, just not the way I expected or would like, considering my experience on GNU/Linux.

I really don't like how the pagefile is managed, because I am all for efficiency and against wasting resources - in this case, valuable free space on storage drives, no matter how big they are.



LAN_deRf_HA said:


> A lot of us learn this the hard way. There will be some random thing that seems to benefit, like I recall turning off the pagefile greatly improved one specific aspect of Skyrim performance, but then you slowly discover all these little things that are dependent on it. I also use to think you could get away with shrinking it down in the days of limited SSD size, but ultimately it's best to not touch it at all.



Correct. With Windows it is best practice not to mess with default system settings; you could make your system unstable or cause weird behaviour.


----------



## Bill_Bright (Oct 21, 2019)

Vayra86 said:


> You're always right but this is plain wrong.


Yes, I am always right - except when I'm wrong. And that happens. But I tend to practice what I preach in this regard and do my homework before posting, so it doesn't happen often.

So please note I specifically said, it "_*should* not_ get larger than... ." And I was citing that Microsoft reference I linked to. It was not a claim I personally was making. 

Still, to your example, I see nothing wrong with a 13GB PF on a 1TB SSD - except it does suggest, as your quote so notes, that you have been having some error issues and Windows is preparing for crash dumps. I suggest you keep an eye on your Event Viewer. 

I also note if you manually set your PF using the old (and totally obsolete) rule of thumb, then according to your system specs, your PF would be 24GB in size. 

I keep getting the impression some feel page files are evil. They're not. They are good things. 



Ryzen_7 said:


> but not as I expected or would like considering experience on GNU/Linux.


 Why would you expect (or like) the Page File on Windows to be sized the same as the swap file on GNU/Linux? They are totally different operating systems and surely you were running totally different programs. Frankly, I would be surprised if they were the same size.


----------



## Vayra86 (Oct 21, 2019)

Bill_Bright said:


> Yes, I am always right - except when I'm wrong. And that happens. But I tend to practice what I preach in this regard and do my homework before posting so it doesn't happen often.
> 
> So please note I specifically said, it "_*should* not_ get larger than... ." And I was citing that Microsoft reference I linked to. It was not a claim I personally was making.
> 
> ...



No errors since this PC was turned on - well, yes, Event Viewer has the odd thingy here and there, but this pagefile usage is entirely normal for me. More than 4GB happens all the time... And no, it's set automatically - I know better. The last time I fixed the size was in Windows XP.

I'm also not implying "it's bad" at all. Just saying 4GB is by no means current anymore. Sounds like something out of the Windows 7 age.


----------



## biffzinker (Oct 21, 2019)

I'm curious if Windows 10 memory management is making use of in-memory compression for the two of you? @Vayra86 @Ryzen_7 

For me, I haven't run into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB and the maximum size set to 8 GB.
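Windows 10's memory manager does keep a compressed store of pages in RAM before resorting to the pagefile. As a rough, hypothetical illustration of why that helps - zlib standing in for whatever algorithm the OS actually uses - here is a synthetic 4 KB "page" being compressed:

```python
import zlib

PAGE_SIZE = 4096  # typical x86 page size in bytes

# A synthetic page with repetitive content, the kind of data that
# compresses well; real page contents vary, and poorly compressible
# pages gain little or nothing from in-memory compression.
page = (b"A" * 64 + b"\x00" * 192) * 16
assert len(page) == PAGE_SIZE

compressed = zlib.compress(page)
print(f"{len(page)} bytes -> {len(compressed)} bytes")
```

When a compressible page shrinks like this, the OS can keep several evicted pages in the space one used to occupy, deferring or avoiding writes to the pagefile entirely.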


----------



## Vayra86 (Oct 21, 2019)

biffzinker said:


> I'm curious if Windows 10 memory management is making use of in-memory compression for the two of you? @Vayra86 @Ryzen_7
> 
> For me I haven't ran into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB, and maximum size set to 8 GB.



Here you go. Been idling mostly since startup. All is well in the world...

Now I'm in-game. Is that pagefile monitoring broken, or how do you explain this?

The plot thickens...


----------



## biffzinker (Oct 21, 2019)

What does the compressed-pages figure in memory look like for you?


----------



## Bill_Bright (Oct 21, 2019)

biffzinker said:


> For me I haven't ran into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB, and maximum size set to 8 GB.


So "_haven't ran into any oddball issues_" is the rationale you use to justify changing settings? 

What were the "oddball issues" you experienced before you changed the default settings? What problems do you encounter with the default PF settings that make you feel Microsoft doesn't know their a$$es from a hole in the ground? 

"_Because it didn't break when I changed it_" is not a valid reason to change anything. 

That just makes no sense to me. 

If you were having problems and switching to a manual setting fixed those problems, then that makes sense. 
If switching to a manual setting made a noticeable improvement in performance, then that would make sense too

But if switching to a manual setting didn't fix anything or made no noticeable difference in performance, then what makes sense is switching it back! 

What methodology did you use to analyze your virtual memory requirements in order to determine the ideal settings for your computer and your computing habits? Surely you didn't just pick 16MB and 8GB out of thin air? Why is the nearly 3GB recommended by the system not enough for you?

And by the way - one of the primary reasons Microsoft decided to make this a "dynamic" feature (so it will expand and contract as needed) is because the demands are dynamic. This means it is NOT a set-and-forget setting.

This means that for every major change to the OS, for every new program or major upgrade to your programs, or any other major change to your computer, to use a manual setting correctly you need to re-analyze your virtual memory requirements and, if necessary, manually change your settings. That might mean doing this multiple times a week - or even more often. That's what a system-managed PF does for you. Are you doing that too? If not, why not?
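To make the "expand and contract as needed" idea concrete, here is a toy model of a dynamically sized pagefile. This is purely my own sketch under invented thresholds, not Windows' actual algorithm: grow immediately when commit demand exceeds the current size, shrink only when demand falls well below it.

```python
# Toy model (NOT Windows' real algorithm) of a dynamically sized pagefile:
# grow when commit demand exceeds the current size, shrink only when
# demand drops well below it, to avoid resizing churn.

def resize_pagefile(current_mb: int, demand_mb: int,
                    min_mb: int = 1024, headroom: float = 1.25) -> int:
    """Return a new pagefile size given current size and commit demand."""
    target = max(min_mb, int(demand_mb * headroom))
    if demand_mb > current_mb:      # grow immediately to avoid commit failures
        return target
    if target < current_mb // 2:    # shrink only once demand is well below size
        return target
    return current_mb               # otherwise leave it alone

size = 4096
for demand in (2000, 9000, 12000, 3000, 1000):
    size = resize_pagefile(size, demand)
    print(f"demand {demand} MB -> pagefile {size} MB")
```

Note the asymmetry: growth is eager (running out of commit space is fatal to allocations), while shrinking is lazy - which matches the observation in this thread that the file balloons quickly but only comes back down after sustained low demand.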


----------



## biffzinker (Oct 21, 2019)

Bill_Bright said:


> What were the "oddball issues" you experienced before you changed the default settings?


I've never had anything happen, going all the way back to Windows 8, related to changing the default settings for the page file. How about this: I'll switch it back to system managed, even though it's not going to make any difference.


----------



## Bill_Bright (Oct 21, 2019)

I asked, what were the "oddball issues" you experienced *before you changed the default* settings?


----------



## biffzinker (Oct 21, 2019)

Bill_Bright said:


> I asked, what were the "oddball issues" you experienced *before you changed the default* settings?


Nothing, before or after.


----------



## Bill_Bright (Oct 21, 2019)

Exactly. So why change it? 

If it wasn't broke, why fix it? If there is a reason to change from the defaults, then it makes sense to do so. But so far, there's been no reason.

And I ask again: what methodology did you use to analyze your virtual memory requirements in order to determine the ideal settings for your computer and your computing habits? Were 16MB and 8GB just arbitrary numbers? Why is the nearly 3GB recommended by the system not enough for you?


----------



## ShrimpBrime (Oct 21, 2019)

I recently turned my page file back on. The system runs fine. Let Windows manage it, and I carry on about my business.
However, it didn't like that my OS drive was nearly full (I've since freed up some space) and put most of the page file on my HDD.
Meh, I can't tell the difference. It seems to work as it should when Windows configures it for itself.
Thanks for all the information regarding the pagefile, Bill_Bright. I've learnt a lot.


----------



## oobymach (Oct 21, 2019)

Just because it isn't broken doesn't mean I'm not going to play with it. But yeah, I use system managed, or set it manually to 16GB. I used to tinker with it more in previous OS versions, but lately games just need so damn much that it's best to leave it on auto.

Does anyone use a drive other than C: for their pagefile?


----------



## Bill_Bright (Oct 22, 2019)

oobymach said:


> Just because it isn't broken doesn't mean I'm not going to play with it


I have no issue with experimenting. I do that all the time. But if my testing doesn't show changes I made brought any improvement, I change it back.

My issue lies in two areas that really makes no sense. The first involves changing it just because people used to do it with earlier versions of Windows, in particular XP and earlier.  Modern versions of Windows are not XP.

The second involves changing, or rather leaving it changed because they didn't notice any difference when they disabled the PF, or set it manually. Did they do an analysis of their virtual memory requirements before and after? Do they even know how? What about a month later? What about after installing a service pack or other major upgrade?  It is not a set and forget setting. Would they make such changes to their car's emissions control computer? To their HVAC system? To any other high-tech device and then leave them because they noticed no difference? Why would their Windows computer be any different?

Do they really believe they have more expertise with virtual memory management than the teams of PhDs, computer scientists and professional developers at Microsoft who have decades of experience and exabytes of accumulated data to draw from? I mean I've got decades of experience with swap files and virtual memory management going back to DOS days. I've got multiple IT degrees and certs with Windows and computer hardware and no way do I think I am smarter than the developers at MS. I think I'm smarter than some of the marketing weenies and even some of the execs based on some of the misguided marketing and business decisions they've made. But smarter than the developers? No way.


oobymach said:


> Does anyone use a drive other than c for their pagefile?


Lots of people do. I do on a couple of my systems here that have small SSDs for boot drives. So I moved the PFs to larger secondary SSDs. I would never move the PF to a hard drive unless free disk space on a tiny SSD boot drive was critically low. And that would be a temporary move until I put in a larger SSD boot drive.


----------



## oobymach (Oct 22, 2019)

Where I grew up, Home Improvement was always offering great advice: overkill the upgrade, play with it till it explodes, then ease up a bit. My reasoning behind setting a manual size is that I am reserving the space for use, whereas Windows just writes any place it wants when it increases the size.

I was wondering about running it on another drive because I have the option to do so, but my C drive is a hefty 2TB, so I'm not going to run out of space anytime soon. Whereas if you run your C drive full to the brim, your pagefile might not have the space it requires, and when that happens Windows gives errors.

I haven't borked a system in a long time (I didn't invent water cooling, but I did it on an Athlon way back in the day and things didn't go well), but I'm the kind of user who pokes my computer in the eye with a stick.


----------



## Bill_Bright (Oct 22, 2019)

oobymach said:


> My reasoning behind setting a manual size is that I am reserving the space for use where windows just writes any place it wants when it increases in size.


Huh? That's not how locations on drives are selected. The OS does not tell the drive where to put files; when you set the size manually, you certainly did not tell the drive where to put the file - the drive's controller decides that. And for that matter, if the PF is on an SSD (where it should be if you have one), TRIM and wear leveling will move the PF around anyway. So sorry, but your reasoning makes no sense.


oobymach said:


> Was wondering about running it on another drive because I have the option to do so but my c drive is a hefty 2tb so not going to run out of space anytime soon where if you run your c drive full to the brim your pagefile might not have the space required and when that happens windows gives errors.


That's a totally different scenario. But even so, if you run your C drive full to the brim, the best solution is to clean the clutter off it, uninstall unused programs, move space-hogging programs and files to your secondary drives, and/or buy a bigger C drive.


----------



## Ryzen_7 (Oct 22, 2019)

Bill_Bright said:


> Why would you expect (or like) the Page File on Windows to be sized the same as the swap file on GNU/Linux? They are totally different operating systems and surely you were running totally different programs. Frankly, I would be surprised if they were the same size.



Because I could use smaller SSD and have more available space for programs.



> Frankly, I would be surprised if they were the same size.



Besides servers and hibernation (if you fill up all of your RAM; for hibernation you can usually get away with a smaller reserved space on disk), I don't think that on a desktop computer with, say, 64 GB of RAM I would need 128 GB of pagefile or swap on GNU/Linux. That would be half of my 256 GB NVMe SSD (not to mention that if I could expand RAM to 128 GB, my whole drive could be used as pagefile), and judging by my experience with only 16 GB of RAM, my SSD would easily be filled up by a big pagefile.


----------



## Bill_Bright (Oct 22, 2019)

Ryzen_7 said:


> Because I could use smaller SSD and have more available space for programs.


Okay. That makes sense as far as you wanting or "liking" it that way. That's fine. What I was really questioning was you "expecting" it to act a certain way based on how GNU/Linux acted.


----------



## Ryzen_7 (Oct 22, 2019)

Bill_Bright said:


> The OS does not tell the drive where to put files. When you set the size manually you sure did not tell the drive where to put the file. The controller does that.



On mechanical drives, files end up written in what looks like random order, because of how a mechanical drive works, for the most part.

But there are differences in the performance of various file systems. NTFS is different from ext4, for example. Some file systems handle large files better, like XFS; some are known to handle small files better, like ReiserFS.



Bill_Bright said:


> What I was really questioning was you "expecting" it to act a certain way based on how GNU/Linus acted.



To some extent and I know it sounds stupid because I am aware both of those systems are designed differently. Even some GNU/Linux distros don't follow traditional Unix principles, like GoboLinux.



biffzinker said:


> I'm curious if Windows 10 memory management is making use of in-memory compression for the two of you? @Vayra86 @Ryzen_7
> 
> For me I haven't ran into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB, and maximum size set to 8 GB.



I don't know.







> For me I haven't ran into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB, and maximum size set to 8 GB.



It depends on what you do and what programs you have running.

I noticed weird behaviour in Windows with the fixed custom size of 8192 MB that I mentioned in previous posts.


----------



## Bill_Bright (Oct 22, 2019)

Ryzen_7 said:


> On mechanical drives, files are written in random order because of the way how mechanical drive works for the most part.


It may seem random to you and me, but there are protocols and algorithms used. Much depends on how fragmented the drive is and how much free space there is.


----------



## MazeFrame (Oct 23, 2019)

Bill_Bright said:


> It may seem random to you and me, but there are protocols and algorithums used. Much depends on how fragmented the drive is and how much free space there is.


The only "known" on where a Kernel puts stuff on mechanical drives is in the outer parts (larger radius = higher linear speed = lower access time).

Regarding swap and Windows:
The Windows kernel is a patchwork, old and not really up to post-2016 architectures (NUMA is a big issue, for example).
I have yet to see Win10 run into "thrashing" scenarios (quickly swapping memory pages to and from storage), but I have seen Win10 load zombies (processes that no longer belong to any program) into RAM. So basically this nice "continue where you left off" fills 32 GB of precious memory with crap! A trip to regedit fixed that, so if you see high memory use without any program open: _HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management "ClearPageFileAtShutDown" = 1_
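For reference, the same tweak can be applied from an elevated command prompt instead of regedit. This is just a sketch: the value Microsoft documents is ClearPageFileAtShutdown, and what it is documented to do is zero out the pagefile at shutdown (which also makes shutdown slower); whether it helps with the zombie-memory symptom is this poster's observation, not documented behavior.

```shell
:: Zero out pagefile.sys at every shutdown (elevated prompt required; takes effect after a restart)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v ClearPageFileAtShutdown /t REG_DWORD /d 1 /f
```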

I like to tinker with things, I reinstall Manjaro every time it bricks itself, I lost bash to terrible accidents. So why not tinker with Windows? Worst case, you are going to reinstall it.

Edit: If Windows spreads its bits and pieces over all drives introduced to the system, enable SATA hotplug/hotswap in BIOS. Windows will then reconsider and keep itself to C:


----------



## oobymach (Oct 23, 2019)

Bill_Bright said:


> Huh? That's not how locations on drives are selected. The OS does not tell the drive where to put files. When you set the size manually you sure did not tell the drive where to put the file. The controller does that. And for that matter, if the PF is on an SSD (where it should be if you have an SSD) TRIM and wear leveling will move the PF about anyway. So sorry, but your reasoning makes no sense.



You may be right, but when you specify the file size, pagefile.sys stays the same size, and when a file has constraints, its space is reserved on the drive. It's like when you get the metadata for a file before it is downloaded: the computer gets a blueprint of the file it is receiving, including its size, and reserves the space for the file to be written. Does the pagefile not work the same way? Once a size has been specified, you're reserving a space for that file to be written.
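That reservation idea is easy to model in code. Here is a minimal, hypothetical Python sketch (the filename is made up) that preallocates a file to an exact size up front, the way a fixed-size pagefile reserves its space instead of growing on demand:

```python
import os

def preallocate(path: str, size_bytes: int) -> None:
    """Reserve a file of exactly size_bytes by writing its final byte."""
    with open(path, "wb") as f:
        f.seek(size_bytes - 1)  # jump to where the last byte belongs
        f.write(b"\0")          # writing it fixes the file's length

preallocate("reserved.bin", 4 * 1024 * 1024)   # reserve 4 MiB up front
print(os.path.getsize("reserved.bin"))         # 4194304
```

Depending on the file system, the skipped range may be stored sparsely, but the logical size is reserved in the file table either way; Windows itself fully allocates pagefile.sys.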


----------



## Bill_Bright (Oct 23, 2019)

oobymach said:


> You may be right but, when you specify the file size pagefile.sys stays the same size and when a file has constraints its space is reserved on the drive.


Okay. But this, in no way, suggests setting a manual size is better.


oobymach said:


> Does the pagefile not work the same way? Once a size has been specified you're reserving a space for that file to be written.


Yes. But it works that way regardless of whether it is manually set or system set. So again, this does not suggest a manually set size is better. But it does show how setting too big a size could waste space, or how setting too small a size could mean too small of a PF.

Again remember - manually setting a PF size is NOT a "set and forget" setting. The demands are "dynamic" or constantly changing. If it was "set and forget", it would have been much easier for Microsoft to simply code Windows to set a fixed size (or fixed range) once during installation and leave it forever. But they wisely chose to make system managed PFs dynamic too. 


MazeFrame said:


> The only "known" on where a Kernel puts stuff on mechanical drives is in the outer parts (larger radius = higher linear speed = lower access time).


Umm nope. Not how it works. When the drive is formatted, the locations for the file allocation tables and partition tables are established in those outer tracks by the file system doing the formatting - not the "kernel". The kernel is part of the OS, remember. But you can take a hard drive formatted by Windows and use it with Linux. FAT32 is supported by Windows, most Linux distributions, OpenBSD and Mac OS.

All storage locations, whether available or used, are "known" and the files tables keep track of that information.

When the computer needs to store data on the drive, it doesn't just throw that data at the drive to have it saved in "random" locations. No! When new data is to be stored, the file table "map" is accessed to see which sectors are free. The controller is then instructed to move the R/W head to that specific (not random, not unknown) location and in an orderly pattern, write that data. Then the file table maps are updated to show which locations are now unavailable. 

"Random" would suggest it would be like throwing a dart at a dartboard after being spun around 3 times while blindfolded and then stuffing the data wherever the dart landed. No. The map is analyzed and then the storage location is selected based on those "known" available locations. 

Now reading the various segments of the stored data may be done in a somewhat "random" order, then all the bits reassembled into the correct order in memory. This allows the read/write head to more quickly gather up the fragments based on the proximity of their locations on the disk, rather than sequentially (for example, the next word in a sentence). This means the various "fragments" may be read into memory in no "apparent" order (which is where "random _access_" comes from). But "accessing" the already saved file segments is not the same thing as "writing" new ones to disk.

So again, while it may seem file segments are written randomly, they are not. And they are not stored in previously "unknown" locations either. The file allocation tables are actively and constantly keeping track of and mapping each and every available and used location.
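As an illustration only (a toy model, not how NTFS is actually implemented), the "consult the map, then write" idea can be sketched as an allocator where the file table is a bitmap of used blocks:

```python
def allocate(bitmap, n_blocks):
    """Return the first n free block indices and mark them used in the map."""
    free = [i for i, used in enumerate(bitmap) if not used]
    if len(free) < n_blocks:
        raise OSError("disk full")
    chosen = free[:n_blocks]
    for i in chosen:
        bitmap[i] = True  # the map now records these blocks as used
    return chosen

# Toy 8-block "disk" with blocks 1 and 2 already occupied.
disk_map = [False, True, True, False, False, False, False, False]
print(allocate(disk_map, 3))  # [0, 3, 4]: chosen from the map, not at random
```

The point mirrors the post above: the next write location is always selected from known-free entries in the map, and the map is updated afterwards.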


----------



## oobymach (Oct 23, 2019)

Bill_Bright said:


> Okay. But this, in no way, suggests setting a manual size is better.
> Yes. But it works that way regardless if manually set or system set. So again, this does not suggest a manually set size is better.


I agree that manual isn't better; in fact it's probably worse, because it potentially limits the pagefile to one spot, where auto can pick and choose where to put it. That was what I wanted to confirm. I killed 2 SSDs in 3 years, and I think part of the blame is on me for setting a manual pagefile in Windows 7.


----------



## Bill_Bright (Oct 23, 2019)

oobymach said:


> I killed 2 SSDs in 3 years, and I think part of the blame is on me for setting a manual pagefile in Windows 7.


Ummm, I don't see how setting a manual PF on an SSD could cause actual damage to an SSD. I suspect something else killed them - perhaps some power anomaly or some really bad luck.


----------



## oobymach (Oct 23, 2019)

Bill_Bright said:


> Ummm, I don't see how setting a manual PF on a SSD could cause actual damage to a SSD. I suspect something else killed them - perhaps some power anomaly or some really bad luck.


All SSDs have a block write life, which I suspect I exceeded, but these were consumer-level SSDs and my usage could be considered heavy. The first was an OCZ ARC, which I guess was a crap SSD anyway, but I had a similar failure with the second drive, and it was a different brand. I respect your opinion as you seem to know what you're talking about.


----------



## Bill_Bright (Oct 23, 2019)

oobymach said:


> All SSDs have a block write life, which I suspect I exceeded


Not hardly.


oobymach said:


> and my usage could be considered heavy.


Also not hardly - not unless you ran a very busy file server that was constantly written to day in and day out. Reads don't count towards wear. Only writes.

And while it is true that SSDs are limited in the number of writes they can support, that number is so high that it is highly unlikely those limits would ever be reached on a consumer computer before the computer itself was long retired due to obsolescence. Perhaps, maybe, if this was years ago with first-generation SSDs, but not with later generations. I note more and more data centers are using SSDs as caches for their most commonly accessed data.

Remember, with SSDs, PF locations are not fixed - so even if you set a fixed size, TRIM and wear leveling will still move those locations around to evenly distribute the wear. And besides, as noted way back in post #11, SSDs and PFs are ideal for each other.


----------



## birdie (Oct 23, 2019)

I've never seen so much *complete and utter crap* about memory management and the pagefile/swap use on a single forum.

Let's go through all the falsehoods in this thread:

1) _"Windows cannot work without pagefile as such a configuration causes BSODs and other bad things"_.

I've been running Windows/Linux without swap since I got a gig of RAM back around 2000 and I've never had a single issue because of that. None. Ever. Maybe my own example is not enough? OK, over the past 20 years I've managed over 200 workstations (including the ones used for 3D modelling/CAD/rendering/authoring) and over two dozen Windows servers most of which ran without a pagefile. Zero issues.

2) _"Pagefile presence will always make your computer work/run faster"_.

Windows and other OSes may page out the applications which you currently use. Imagine you've put some of them in background and once you switch back to them, Windows will have to read their code back from the pagefile - as a result you get delays and sometimes mild stuttering.

3) "The pagefile grows because ... reasons even when you have gigabytes of RAM still free".

No, no, no! The reason it grows is because Windows and other OSes may prioritize disk cache over running applications which means if you run a game which reads gigabytes of data from the disk (textures, levels, animations, sounds, etc), Windows often decides to ... page out other running applications and your pagefile use will grow.

Probably I haven't covered everything in this thread but the bottom line is, *you can perfectly run your PC without pagefile if you have enough RAM*.

In Linux you need swap if you use hibernation. Other than that again there's no need to have it if you have enough RAM.



biffzinker said:


> For me I haven't ran into any oddball issues with Windows 10 with the initial size for the pagefile set to 16 MB, and maximum size set to 8 GB.



4. If you're hellbent on having a pagefile, you *must* set the minimum and maximum sizes to the same value to avoid rampant fragmentation, which causes slowdowns and reduces the chances of restoring data successfully.

5. Also, you *must* create the pagefile *right after* Windows installation, because in this case Windows will most likely allocate one continuous chunk of disk space for it.
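For what it's worth, a fixed-size pagefile can also be set from a script rather than the GUI. A sketch using wmic from an elevated prompt (sizes in MB; wmic is deprecated in recent Windows releases, so treat this as illustrative rather than authoritative):

```shell
:: Disable automatic management, then pin the pagefile to a fixed 8192 MB
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=8192,MaximumSize=8192
```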


----------



## Bill_Bright (Oct 23, 2019)

Sorry but the complete and utter crap is most of what you just said! 

1. No one said Windows cannot work without a PF. 
2. No one said the PF will make the computer work/run faster. 
3. I don't know your reason for 3 but again, no one said you cannot run without a PF.
4. Well, there we agree - you don't need to set a "fixed" size. 
5. For one, Windows will create it by default. And for another, a contiguous chunk only matters with hard drives.


----------



## Ryzen_7 (Oct 23, 2019)

Bill_Bright said:


> It may seem random to you and me, but there are protocols and algorithums used. Much depends on how fragmented the drive is and how much free space there is.



You are correct. Anyway, I don't remember the last time I defragmented a drive (considering the size of the drives we all have, it would be ludicrous, and I don't even partition my drives anymore; maybe I would if I had one large drive). NTFS and Unix-based or -inspired file systems are more resistant to fragmentation, or they work in a way where you don't need to defragment the drive, as was recommended for example for FAT32.



http://www.tldp.org/LDP/sag/html/filesystems.html
		




> *5.10.11. Fighting fragmentation?*
> When a file is written to disk, it can't always be written in consecutive blocks. A file that is not stored in consecutive blocks is _fragmented_. It takes longer to read a fragmented file, since the disk's read-write head will have to move more. It is desirable to avoid fragmentation, although it is less of a problem in a system with a good buffer cache with read-ahead.
> 
> Modern Linux filesystem keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system.
> ...





MazeFrame said:


> The only "known" on where a Kernel puts stuff on mechanical drives is in the outer parts (larger radius = higher linear speed = lower access time).



That's why GNU/Linux distributions recommend placing /boot and the swap partition at the beginning of the disk drive.

It is different for CD-ROMs, if memory serves me well: reading starts at the inner radius, and there were many approaches in the technology to optimizing this, CAV vs. CLV, Kenwood's TrueX.



MazeFrame said:


> So why not tinker with Windows? Worst case, you are going to reinstall it.



I don't want to bother with phone calls for Windows and keys for various software, and don't want to bother with images. But I could play with Windows on VBox, VMware and similar. I like Windows 10 reset option though.



MazeFrame said:


> I like to tinker with things, I reinstall Manjaro every time it bricks itself



For a rolling-release distro, I expect it to brick itself.



MazeFrame said:


> Edit: If Windows spreads its bits and pieces over all drives introduced to the system, enable SATA hotplug/hotswap in BIOS. Windows will then reconsider and keep itself to C:



I enabled hotplug for safety reasons, in case I ever need to change a drive while the computer is on, or whatever.



Bill_Bright said:


> Umm nope. Not how it works. When the drive is formatted, the location for the file allocation tables and partition tables are established in those outer tracks by the file system doing the formatting - not the "kernel". The kernel is part of the OS, remember.
> 
> Kernel is the brain of the OS. The most critical and important part of the OS as you know it.
> 
> ...





Bill_Bright said:


> The OS does not tell the drive where to put files. When you set the size manually you sure did not tell the drive where to put the file. The controller does that.











I/O scheduling - Wikipedia (en.wikipedia.org)
				






> Input/output (I/O) scheduling is the method that computer operating systems use to decide in which order the block I/O operations will be submitted to storage volumes. I/O scheduling is sometimes called disk scheduling.


----------



## biffzinker (Oct 23, 2019)

birdie said:


> 4. If you're hellbent on having a pagefile, you *must* set minimum and maximum sizes to the same value to avoid rampant fragmentation which causes slow downs and reduces the chances of restoring data successfully.


On a hard disk drive you're correct; on a solid-state disk the page file can get away with expanding and contracting without any slowdown caused by fragmentation. That was the main reason I set it to 16 MB, although _someone_ wanted to argue with me about why I was messing with the default system-managed settings.


----------



## birdie (Oct 23, 2019)

Bill_Bright said:


> Sorry but the complete and utter crap is most of what you just said!
> 
> 1. No one said Windows cannot work without a PF.
> 2. No one said the PF will make the computer work/run faster.
> ...



Ah, the guy who perpetuates falsehoods and generally talks complete nonsense has replied.

Everything that I contradicted is here in this topic in one way or another. Can you even read?

1. https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4135172 https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4134233
2. https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4134469
3. Again, no one has said anything remotely correct.
4/5 https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4137004

Again, what's your education? How many OSes have you ever run? I bet you started with Windows 10, right? Can you at least explain the meaning of all the columns in Detailed View in the Task Manager? Have you written a single line of code? Does _malloc()_ say anything to you? Or _new_?

I'll just laugh and leave this idiotic thread which is rife with "insightful" comments and excitement.



biffzinker said:


> On a hard disk drive you're correct, on a Solid State Disk the page file can get away with expanding, and contracting without any slow down caused by fragmentation. That was the main reason I set it to 16 MB although _someone_ wanted to argue with me about why I was messing with the default system managed settings.



Which part of _"reduces the chances of restoring data successfully"_ didn't you understand? Should I paraphrase it?

When your files are scattered all over the disk (NTFS stores them using 4 KB clusters, i.e. a 4 MB file can have up to 1024 (!) fragments), you'll have a next-to-zero chance of restoring them once your MFT goes kaput, the disk itself dies, or you (or some malware) accidentally delete something, and you don't particularly care about backups.

Also, with SSDs Windows disables defragmentation completely, and for fun I'd recommend that you run defrag in a console (e.g. _defrag C: /A /V_) and assess the level of fragmentation. You're in for a very big, very unpleasant surprise.
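The fragment arithmetic behind that worst case is simple: with NTFS's default 4 KB allocation unit, every cluster of a file could in principle be its own fragment.

```python
CLUSTER = 4 * 1024              # default NTFS cluster (allocation unit): 4 KB
FILE_SIZE = 4 * 1024 * 1024     # the 4 MB file from the example above
max_fragments = FILE_SIZE // CLUSTER
print(max_fragments)  # 1024 fragments in the worst case
```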


Enough with this circus. Keep on inventing wild theories and saying absolutely ridiculous things.


----------



## Ryzen_7 (Oct 23, 2019)

birdie said:


> I've never seen so much *complete and utter crap* about memory management and the pagefile/swap use on a single forum.



Bill_Bright already answered, but I will respond to some of it.

1.) I could believe you, up to the workstation part.

I have 16 GB of RAM, which is not too much for this day and age, but not small either, and I experienced serious stability issues just from running browsers with multiple tabs open. Some of the tabs were multimedia-based, which I presume was the main reason for using so much RAM/pagefile. Opera, Brave, Firefox, you name it, not that it matters much which browsers I run, although some of them "eat" more RAM than others. I did not even run any resource-hungry game, or any video game for that matter. Some DAWs with VSTs here and there, but nothing serious.

As I already said, Windows and Linux behave differently on the memory-management front, and on everything else for that matter. It is apples and oranges. The pagefile (Windows) and swap (Linux) behave differently no matter how similar they are.
By the way, I believe you could use GNU/Linux sans swap without issues given enough RAM, but not Windows without a pagefile for any serious or demanding stuff, even with enough RAM.



birdie said:


> No, no, no! The reason it grows is because Windows and other OSes may prioritize disk cache over running applications which means if you run a game which reads gigabytes of data from the disk (textures, levels, animations, sounds, etc), Windows often decides to ... page out other running applications and your pagefile use will grow.



You could be right, and this is what I observe with Windows and the pagefile. Currently, RAM is 74% full, 11.9 GB, and the pagefile is 21988 MB.
And this is my pet peeve about Windows and the pagefile.



birdie said:


> *you can perfectly run your PC without pagefile if you have enough RAM*.



How much is enough, 640 kB or 64-128 GB? I tell ya, even if you have 128 GB of RAM, you need a pagefile; it is how Windows works. At least this is my case.



birdie said:


> In Linux you need swap if you use hibernation. Other than that again there's no need to have it if you have enough RAM.



I agree, if you have 8-128 GB of RAM and are not running a server or memory-intensive programs that exceed your RAM. Even then an SSD is the safe bet, because using a mechanical drive would be the worst option as backup RAM.

As I said, a mechanical drive is the worst RAM backup (swap) or hibernation option. I tried it, and it is terrible no matter how fast a mechanical drive you have. Maybe some RAID option would help in this case, but that is way beyond this issue.


----------



## birdie (Oct 23, 2019)

_"I experienced serious stability issues"_

This sounds like crap, sorry. Either your system works or it doesn't. If you don't have enough virtual RAM (RAM + pagefile), Windows will show you a yellow alert in the notification area and it will close the applications which use too much RAM in case you don't have any free virtual RAM left.

Again, I see that many participants of this thread have a severe form of ADHD, so I will repeat: I've been running pagefile-less/swapless for over 15 years now, and I started with just 1 gig of RAM. And no, I don't have 128 GB of RAM. My laptop has 16, and my desktop up to two months ago also had just 16. I've now upgraded my desktop to 32, but _not_ because I ran out of memory, rather because I want to have a larger RAM disk.

On laptop my memory use rarely goes above 6GB since I only use it for light web browsing. Again, I have 16 installed because I love having my temporary files (including browser cache) on a RAM disk. Also RAM disk allows you to compile applications a lot faster.


----------



## Ryzen_7 (Oct 23, 2019)

birdie said:


> _"I experienced serious stability issues"_
> 
> This sounds like crap, sorry. Either your system works or it doesn't. If you don't have enough virtual RAM (RAM + pagefile), Windows will show you a yellow alert in the notification area and it will close the applications which use too much RAM in case you don't have any free virtual RAM left.



I get black screens, or my apps shut down unexpectedly. Maybe I have some issues besides Windows: compatibility, bad coding, whatever. By the way, I never experienced problems like this with any of my Intel-based rigs previously, nor with an s939 Athlon64 + DFI LANParty nF4 SLI-DR, as I have with my Ryzen 7 + (Asus) B350 / (MSI) B450 chipset/UEFI boards. With the highly praised MSI B450 GAMING PRO CARBON AC I got S3 issues with the official v16 UEFI (not to mention the fiasco, powered by AMD AGESA, of removing the Click BIOS 5 bling), and with the Asus TUF B350M-PLUS I had problems running my G.Skill Ripjaws V at 3200 MHz, while my friend runs his Kingston DDR4 at 3200 MHz on that board without problems.

By the way, the pagefile is not virtual RAM like swap on Linux, because if it were, then with enough RAM nothing would be written to the pagefile, and that is not the case with Windows. And even if you had 128 GB of RAM, if you had a 1 MB pagefile, it would be full.

You are right; when I had a custom-sized pagefile, some of my browsers would sometimes crash all of a sudden.



birdie said:


> I've now upgraded my desktop to 32 but _not_ because I run out of memory but because I want to have a larger RAM disk.



I am a big fan of RAM disks, because of the speed. And I would like to see some form of PCI-E card with RAM expansion slots, including a backup battery in case of a sudden loss of power.



birdie said:


> On laptop my memory use rarely goes above 6GB since I only use it for light web browsing.



No doubt about it.



birdie said:


> Again, I have 16 installed because I love having my temporary files (including browser cache) on a RAM disk. Also RAM disk allows you to compile applications a lot faster.


----------



## Bill_Bright (Oct 23, 2019)

Ryzen_7 said:


> Anyway, I don't remember last time I defragmented drive


Well, unless you changed the defaults - and there's no reason to - you don't need to remember because Windows automatically defrags hard drives regularly anyway.


----------



## oobymach (Oct 23, 2019)

birdie said:


> Stuff


GTAV will hard crash if you don't use a pagefile.


----------



## birdie (Oct 23, 2019)

oobymach said:


> GTAV will hard crash if you don't use a pagefile.



I've yet to see a single crash but this thread is full of people making things up, so I'm not surprised.



Bill_Bright said:


> Well, unless you changed the defaults - and there's no reason to - you don't need to remember because Windows automatically defrags hard drives regularly anyway.



*Windows never defrags SSD drives automatically.*

Does this thread attract the people who know nothing about OSes/computing and have completely cocked up systems?



Ryzen_7 said:


> I get black screens or my apps shut down unexpectedly.



Well, here we go. You have issues either with your RAM modules (running memtest86 for a few hours is always a good idea) or GPU, or GPU drivers or some broken Windows drivers. Your issues have _nothing_ to do with the pagefile or its size.

I have never in my entire life seen any "black screens" in Windows.

I wonder how many people in this thread mix up RAM/pagefile issues with something else entirely. Probably most if not all.


----------



## Bill_Bright (Oct 23, 2019)

birdie said:


> Ah, the guy who perpetuates falsehoods and generally talks complete nonsense has replied.



Now you have accused me of perpetuating falsehoods - show us where I made any of those claims you made in your #80 or where I perpetuated them.

Your links in post #80 just illustrate the point. It is your comments that are nonsense. 

1. Again, no one said Windows cannot work without a PF - including the people you accused of saying so in your links.
2. HD64G did NOT say the PF will make the computer work/run faster, as you accused him of doing.
3. You could not back up your claims, so you didn't link to anything.
4/5. Again, nobody said you must have a fixed size. And nobody said you must create the PF after installing Windows.

So talk about falsehoods and nonsense. You made up everything in your post #76 because no one made those claims - then in post #80 you linked to several posters' posts and claimed they said something they didn't. All the while accusing me of perpetuating falsehoods when you clearly make them up!

I'm done here.


----------



## biffzinker (Oct 23, 2019)

birdie said:


> Windows never defrags SSD drives automatically.


Windows _does_ defrag the metadata files for the file system on an SSD after a fragmentation threshold is reached.


----------



## birdie (Oct 23, 2019)

Bill_Bright said:


> 1. Again no one said Windows cannot work without a PF - including the people you accused of saying so  in your links



From https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4135172

_Disabling your PF can also *wreak havoc on multiple aspects* of your system. Some games in particular, though they escape my memory as I refuse to mess with the PF these days (there's no reason to!) would crash upon launch without a PF. You can't just get rid of something that is there for the system to use._​
From https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4134233

_The page file is also used for memory dumps. Windows will allocate enough space to do this. *Dont mess with it, its part of your system recovery*_​
From https://www.techpowerup.com/forums/threads/pagefile-anomalies.260180/post-4138132

_GTAV *will hard crash* if you don't use a pagefile._​
I will not reply to any of your complete and utter crap in this thread. Your knowledge of OS internals is minimal if any and your comprehension skills are missing altogether.

You're continuously embarrassing yourself, but again, given the utter computer-internals illiteracy in this thread (_GTAV crashes without pagefile, Windows automatically defrags volumes_), I'm not surprised that no one is trying to contradict you. I will ignore notifications about this thread from now on, because I love this saying by Mark Twain: "Never argue with an idiot. They will drag you down to their level and beat you with experience". This thread is a perfect example of that saying.



biffzinker said:


> Windows _does_ defrag the meta-data files for the file system on a SSD after a fragmentation threshold is reached.



There's no such thing as "meta-data defragmentation". Windows has MFT and again it's not defragmented for SSD disks ever.


----------



## biffzinker (Oct 23, 2019)

birdie said:


> There's no such thing as "meta-data defragmentation". Windows has MFT and again it's not defragmented for SSD disks ever.


That's not what this implies:


> This kind of fragmentation still happens on SSDs, even though their performance characteristics are very different. The file systems metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful, but absolutely needed.


&


> Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.











The real and complete story - Does Windows defragment your SSD? (www.hanselman.com)


----------



## MazeFrame (Oct 24, 2019)

Bill_Bright said:


> file system doing the formatting - not the "kernel"


Wrong.
Directly from Microsoft:



The only thing that talks to the hardware is the kernel or kernel modules (like drivers).
The file system only "rarely" gets involved, for latency reasons. All the virtual-to-physical address abstraction happens in the kernel. Otherwise every storage call would be 4 context switches (Application->Kernel->FileSystem->Kernel->Application) instead of 2 (Application->Kernel->Application).

"Random" means the closest known free physical address on the drive at the time of writing. As the outer parts are under the R/W head more often, they are "preferred".


----------



## neatfeatguy (Oct 24, 2019)

Use the pagefile... don't use it... who the hell gives a rip? I personally find that having it on means no issues, like the ones I've run into before. You may never use the pagefile and your system never runs into problems - good for you. But systems are all different; the same thing can happen with video drivers. You could have the same build as someone else, but they have constant driver issues using the same GPU driver you have installed, and you don't run into any issues.

When I re-installed Windows 7 after picking up an SSD, Windows did not default to a pagefile; it had it turned off. This is the first time I've ever come across this in my experience. I wasn't looking to see what the pagefile was set to after installing Windows 7, but I eventually looked because of an issue I was running into.

I had been doing fine, without issues: streaming, gaming, web browsing and so on for a while.

Fast forward about 6-8 months. I ended up picking up a copy of Shadow of Mordor for cheap, about a year after it came out. I ran the game and it played fine for 30 minutes. The game looked good and ran well with all settings maxed out at 5760x1080 on my (still fairly new at the time) 980Ti. I didn't have any good amount of time to put into the game, just wanted to try it out.

The next evening, I start playing. I played for about 2 hours, went to bed.
The next night I started playing, played for about 30 minutes and the game minimized to desktop - low memory bubble on the task bar. I turn the game off and turn on OSD for memory and RAM use.
Started the game again and noticed the memory for the GPU was flirting with the max amount of 6GB, but the RAM wasn't going much over 4GB and I have 16GB installed. I played for an hour and no issue.
Played the next day. About an hour in, minimize to desktop with bubble notification about low memory.....said screw it and went to bed.
The following night, same memory bubble notification after about maybe an hour of playing.....
Pulled up the pagefile and it was turned off.
I turned it on for Windows to manage and my memory issues went away, I had no more problems playing the game.

Long story short, just leave it on. You most likely won't run into any issues if you do, but there's a chance you could if you have it turned off.


----------



## oobymach (Oct 24, 2019)

birdie said:


> I've yet to see a single crash but this thread is full of people making things up, so I'm not surprised.


Psychological projection would have us believe you are the liar in this case; since none of your claims can be verified, you're just blowing smoke.

Here, on the other hand, is actual proof of my claim: GTAV will hard crash if you don't use a pagefile. This is 2 seconds after loading a single player game with no pagefile.


----------



## birdie (Oct 24, 2019)

Crap continues unabated.

I'm too lazy to search today, so I'll just repost this:

From https://www.tenforums.com/performance-maintenance/80085-why-windows-defragging-my-ssd.html

> I've read a lot of articles and threads on this, and they all reference one blog post by a guy who works/worked at Microsoft - he's not in the storage division and doesn't really know anything about storage, and it was posted over two years ago. Not one other person, article or employee at Microsoft has ever stated something similar (to my knowledge); they have always stated that Windows doesn't defragment SSDs, full stop. I now have first-hand knowledge that it does. This is the one guy I've seen that actually has a clue: Why Windows 10, 8.1 and 8 defragment your SSD and how you can avoid this – Вадим Стеркин. Notice the tweet near the bottom that states "I just talked to that team. Bad message but no actual defragging happens." Yup, the same guy that everybody is referencing to explain that defrag does happen on SSDs ended up changing his tune. It's my belief that Windows defrags SSDs just like HDDs and this "intelligent defragging" he posted is just pure BS.

I've been running Windows 10 on two PCs, both with SSDs, for over a year now and none of their volumes have ever been defragmented.

Meanwhile, can anyone here find a confirmation on Microsoft.com that Windows 10 indeed defrags SSDs, or will you keep citing random dudes who ostensibly worked for the company in the past?



oobymach said:


> Psychological projection would have us believe you are the liar in this case since none of your claims can be verified you're just blowing smoke.
> 
> Here on the other hand is actual proof of my claim. GTAV will hard crash if you don't use a pagefile. This is 2 seconds after loading single player game with no pagefile.



Amazing! Anecdotal evidence from a single person in the entire world serves as confirmation that the pagefile is always necessary. I've LOLed.

Meanwhile, the fact that I've had over 200 PCs under my command (and over two dozen servers), most of which never had a pagefile enabled, and everything worked perfectly, is nothing to write home about.


----------



## oobymach (Oct 24, 2019)

birdie said:


> Crap continues unabated.


My thoughts exactly - can you not shut up? People have been calling you out on your nonsense and you just keep coming back. I'm done here, since you're blind as well as stupid.


----------



## birdie (Oct 24, 2019)

People with just 8 gigs of RAM say GTA5 runs better without a pagefile, but what do I know?


https://www.reddit.com/r/GrandTheftAutoV_PC/comments/3ecwmb

Again, it's funny how people who have no IT knowledge whatsoever, who've never coded, who've never run high-load servers, and who don't quite understand how virtual memory works try to argue with me. Never laughed so much, keep it up)))

@oobymach 

So, instead of arguing you've resorted to calling names? How fitting for this discussion. Lol.

Meanwhile, no relevant links have been posted to counter any of my arguments. What a lovely, sorry, inane discussion.


----------



## ShrimpBrime (Oct 24, 2019)

From what I gathered from researching and asking some questions (all the while with the page file off), I did quite a bit of learning. Mr. Bright made some very valid points, actually. He's not perfect, but the way it's described made me wonder.

So really it was the answer "why turn it off if the system runs as intended with it on" that caught my attention and got me to actually research this a bit. Perhaps not all of it is on the money, but let's see if I did learn something!!!

Firstly, the page file is reserved for when programs use up all the system RAM - it gives the OS a place to put additional data, which, by the way, isn't going to belong to normal- or real-time-priority work.
So while your game uses up a bunch of RAM, the operating system will use the page file to hold lower-than-normal-priority programs instead of keeping them in system memory.

After all, the page file is referred to as virtual memory, and is treated as such. Because of the slow access times of HDDs and the like, it's the lower-priority programs that get moved to the page file.
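The idea of parking lower-priority pages on disk can be sketched as a toy LRU model. This is my own deliberate simplification, not how the Windows memory manager actually works (real systems use working sets, page priorities and modified-page writers, none of which are modeled here):

```python
from collections import OrderedDict

class ToyVM:
    """Toy demand-paging model: fixed RAM frames plus a simulated pagefile."""

    def __init__(self, ram_frames):
        self.ram = OrderedDict()   # page_id -> data, ordered by recency of use
        self.pagefile = {}         # evicted ("paged out") pages live here
        self.ram_frames = ram_frames

    def touch(self, page_id, data=None):
        if page_id in self.ram:
            self.ram.move_to_end(page_id)          # mark as recently used
        else:
            if page_id in self.pagefile:           # page it back in on demand
                data = self.pagefile.pop(page_id)
            if len(self.ram) >= self.ram_frames:   # RAM full: evict the LRU page
                victim, vdata = self.ram.popitem(last=False)
                self.pagefile[victim] = vdata
            self.ram[page_id] = data

vm = ToyVM(ram_frames=2)
vm.touch("game", "hot data")
vm.touch("idle_app", "idle tab")
vm.touch("game")                   # the game stays "hot", so it stays in RAM
vm.touch("new_alloc", "more data") # RAM full -> idle_app is paged out
print(sorted(vm.pagefile))         # ['idle_app']
```

Even in this toy, the point from the post holds: the busy program never leaves RAM; only the idle one gets spilled to the backing store.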

The interesting part.....

While doing some research, I found that if you have enough RAM installed, the page file may not really be used much at all. The system never wants to use virtual memory on disk unless it really, really needs to.

Either way, it seems that with the ability to have 64GB of memory on a gaming rig - way more than enough - you could turn off the page file and it won't make any difference, solely because you won't need that extra virtual memory space.

However, I did a little testing myself and found some interesting facts and issues, but it took some effort. Essentially, I opened a game in Steam and ran it in the background, basically idle but using system RAM, opened up IE and put on some video, and then opened another taxing game. NOW my system RAM was full. Got a pop-up mid-game saying system memory was full and suggesting I close some programs. The game was still running just fine even on low system memory. So I closed IE and the game in the background, and the pop-up did not come back.

So then I turned the page file back on, realizing that @Bill_Bright was onto something. I was able to replicate running a lot of tasks and never saw a low-memory issue. The page file absorbed many background tasks and idle programs, allowing me to play the game undisturbed.

Other than that, I've had the page file off and/or configured manually for certain tasks. But generally speaking, I've always had plenty of RAM so as not to need it, as long as I wasn't running a bunch of memory-hungry software. Seemed pointless to keep a lot of programs running in the background if I'm off gaming.

Then looking at virtual machines - yeah, running a page file is a good thing IMO. It will allow the VM OS to use plenty of virtual memory as if it were RAM, and you'll never have an issue.

In short, there really is no point in turning off the page file even if you have lots of memory. Obviously, if you need the disk space, it's time to invest in more drives.


----------



## Athlonite (Oct 24, 2019)

BLAH BLAH BLAH, who the heck cares? Either run a pagefile or don't - at the end of the day it's your choice.


----------



## oobymach (Oct 24, 2019)

birdie said:


> More garbage.


I did provide proof - photographic evidence of my claim - but clearly you're too stupid to click a picture, or maybe you're just blind and cannot see what is right in front of your face. Either way, I stand by my claim that you're blind as well as stupid.


----------



## Kursah (Oct 24, 2019)

Enough. Disagree constructively or find something better to do with your time.


----------



## Ryzen_7 (Oct 24, 2019)

Bill_Bright said:


> Well, unless you changed the defaults - and there's no reason to - you don't need to remember because Windows automatically defrags hard drives regularly anyway.



Yes, it is set to weekly by default.

I did not notice it, because I rarely leave my computer powered on while away for a longer time - or this defragmentation happens in the background in a not-so-aggressive way, so you don't notice performance issues. Sometimes I've noticed some of my disk drives working in the background for no apparent reason; maybe that is some kind of occasional defragmentation process.

One of the reasons I don't like defragmentation is that it wears out a hard drive's moving parts, but today's defragmentation tools are probably more optimized, I presume.

As you know, NTFS and some protocols like NCQ are making fragmentation on mechanical drives less of a problem.



birdie said:


> Well, here we go. You have issues either with your RAM modules (running memtest86 for a few hours is always a good idea) or GPU, or GPU drivers or some broken Windows drivers. Your issues have _nothing_ to do with the pagefile or its size.



I bought MemTest Pro Deluxe especially for this reason, and I recommend it to everyone, because MemTest86, although free, is not as good as I thought - it did not show any errors while MemTest Pro Deluxe did. And my memory passed the memory test, same with CPU, GPU, etc.

I know that the B350 and B450 chipsets and Zen/Zen+ CPUs officially don't support 3200 MHz DDR4 the way Zen 2 does. And I've had issues running G.Skill Ripjaws V (they are tested and recommended for Intel) with an Asus B350 TUF at 3200 MHz, but all is well with the MSI B450 Carbon AC motherboard - no issues with the v16 UEFI and the current state of Click BIOS (GSE-Lite), despite the bloated AMD AGESA.

3200 MHz is the sweet spot for Ryzen CPUs.



birdie said:


> I have never in my entire life seen any "black screens" in Windows.



Neither have I, except for BSODs - or the black screens I get when using a custom pagefile. Besides the pagefile, maybe it has something to do with Radeon drivers, which are well known for having problems. I never had problems with GeForce drivers.



birdie said:


> I wonder how many people in this thread mix up RAM/pagefile issues with something else entirely. Probably most if not all.



If some of my problems no longer appear after I changed from a custom to an automatic pagefile, it is logical to presume it has something to do with the pagefile.


----------



## Bill_Bright (Oct 24, 2019)

Ryzen_7 said:


> or this defragmenation appear in background in not so agressive way so you don't notice perfomance issues.


^^^This^^^ Microsoft has put a lot of effort into making sure tasks like this (indexing is another) do not interfere with the user. Defragging (like indexing) is done so frequently that it really takes little time or resources to keep the drive defragged (or indexed). The very first time a drive is defragged or indexed can take a while. But after that, it is very quick. In any case, it is done WAY in the background, when the user is idle.

In the old days, when hard drives themselves were much slower and smaller, defraggers used to move all the files closer to the outer edges to consolidate free space. With today's much faster drives, and especially with their very fast seek times (the time it takes to find the first file segment), moving files to the outer edges is no longer necessary for performance. And with today's drives being so big, consolidating free space is much less important too. It is still important to keep file segments together (not fragmented), but it does not matter if those files are scattered all over the disk.
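That distinction - contiguity matters, location doesn't - can be shown with a small sketch. This is an assumed toy model of a file's clusters (nothing like real NTFS internals): the file is a sorted list of block addresses, a "fragment" is a maximal run of consecutive blocks, and defragmenting just means relocating the file into any one contiguous free run:

```python
def count_fragments(blocks):
    """Number of contiguous runs in a sorted list of block addresses."""
    return sum(1 for i, b in enumerate(blocks)
               if i == 0 or b != blocks[i - 1] + 1)

def defragment(blocks, free_start):
    """Relocate the file into one contiguous run starting at any free spot."""
    return list(range(free_start, free_start + len(blocks)))

scattered = [10, 11, 40, 41, 42, 97]           # three separate runs
print(count_fragments(scattered))               # 3
moved = defragment(scattered, free_start=500)
print(count_fragments(moved))                   # 1 - contiguous; where it sits is irrelevant
```

The relocated file could start at block 500 or block 5,000,000 - the fragment count, and hence the number of seeks to read it, is the same either way.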


Ryzen_7 said:


> One of the reasons I don't like defragmentation is because it wears out hard drive moving parts


Yeah, that never was a valid reason. Yes, defragging does task those moving parts - but only infrequently, while the defragging process is happening. What is worse is the day-in, day-out thrashing of the hard drive's moving components when the drive is fragmented.


----------



## Apocalypsee (Oct 25, 2019)

oobymach said:


> Psychological projection would have us believe you are the liar in this case since none of your claims can be verified you're just blowing smoke.
> 
> Here on the other hand is actual proof of my claim. GTAV will hard crash if you don't use a pagefile. This is 2 seconds after loading single player game with no pagefile.


After reading this thread yesterday, I disabled the page file and played GTAV just to test it. No crash whatsoever; the game played smoothly. It might even have been slightly smoother than with the page file, judging from the RTSS frametime graph. My pagefile was set at a static 8192MB on a different drive than my game's hard drive. Not saying you are a liar, but different systems act differently when the page file is disabled.


----------

