# How to turn off low disk space notification in Windows 10



## Pedipalpa (Feb 6, 2018)

Good day 

I have configured my computer to have a RAM cache for Internet temporary files and pointed Chrome's cache directory to it. Everything works great; however, every once in a while Windows starts bugging me with notifications that the cache is running low on space. It is frustrating because the only way to clean up space is to exit Chrome and manually delete the cache folder. So I was wondering if there could be an easier way, like disabling this low disk space notification altogether?
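For reference, the cache redirection itself is done with Chrome's launch flags on the shortcut; something like this (R: being the RAM disk in this example, and the size cap in bytes):

```
chrome.exe --disk-cache-dir="R:\ChromeCache" --disk-cache-size=1073741824
```

Capping the cache below the RAM disk's capacity might even stop it from filling up in the first place.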


----------



## StefanM (Feb 6, 2018)

Check out https://support.microsoft.com/en-us/help/555622/how-to-remove-the-low-disk-space-warning
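If the link ever dies: the tweak it describes boils down to a single registry value. A sketch of the .reg file (back up the registry before editing, and note this disables the check for all drives, not just one):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoLowDiskSpaceChecks"=dword:00000001
```

Import it (or add the DWORD by hand in regedit) and sign out and back in for it to take effect.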


----------



## bug (Feb 6, 2018)

alecx121 said:


> Thanks, it is useful for me as well. These days, I receive a sudden popup saying low memory, close programs. I shall try this. And actually I think the problem occurs for me when there are lots of tabs open in Firefox.


You think an article about disabling _low-disk warnings_ will help you avoid _low-memory warnings_?


----------



## ne6togadno (Feb 6, 2018)

https://mageplugins.net/blog/post/how-to-increase-browser-cache-in-google-chrome/
or
vivaldi.com


----------



## ne6togadno (Feb 6, 2018)

low memory refers to RAM
low disk space refers to hdd/ssd
both are memory, just a bit different though


----------



## lexluthermiester (Feb 6, 2018)

Pedipalpa said:


> Good day
> 
> I have configured my computer to have a RAM cache for Internet temporary files and pointed Chrome's cache directory to it. Everything works great; however, every once in a while Windows starts bugging me with notifications that the cache is running low on space. It is frustrating because the only way to clean up space is to exit Chrome and manually delete the cache folder. So I was wondering if there could be an easier way, like disabling this low disk space notification altogether?


http://www.thewindowsclub.com/faq-l...on-or-warning-in-windows-7-how-to-disable-etc
While the link says Windows 7, it works the same in 10. I personally have been using the following:
http://www.thewindowsclub.com/ultimate-windows-tweaker-4-windows-10
This utility provides a simple way to change a lot of the needless, irritating and annoying things about Windows.
Cheers and fun!


----------



## Bill_Bright (Feb 6, 2018)

ne6togadno said:


> low memory refers to RAM


Actually, low memory refers to "_virtual_" memory running low. Virtual memory is physical system RAM plus the page file. You can run low on virtual memory even if you have lots of system RAM installed. If you have dinked with your PF settings, just let Windows manage it.


----------



## lexluthermiester (Feb 6, 2018)

Bill_Bright said:


> If you dinked with your PF settings, just let Windows manage it.


I disagree with this. Letting Windows manage the page/swap file will result in progressively worse drive performance over time and will lead to fragmentation. While fragmentation isn't much of a problem for SSDs, it's a *HUGE* problem for HDDs. Every time I look at a system, the first things I check are the swapfile settings and the fragmentation state.
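Checking the fragmentation state doesn't require any third-party tools; the built-in defragger can analyze without changing anything, from an elevated prompt:

```
defrag C: /A /V
```

/A analyzes and reports only; /V prints the full fragmentation statistics.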

This article shows in detail how to do this;
https://www.online-tech-tips.com/co...uters-performace-configuring-the-paging-file/

As a general rule, setting the swapfile to 2x physical system RAM for less than 8GB, and 1.5x for 8GB or more, will be more than enough for even power users and will keep drive performance optimal.
For example:
If you have 4GB of RAM, set the swapfile to 8192MB as both minimum and maximum.
If you have 8GB of RAM, set the swapfile to 12288MB as both minimum and maximum.
If you have 16GB of RAM or more, setting the swapfile to anything more than an amount equal to your RAM will have little to no benefit and will waste storage space.
This methodology locks the size of the swapfile, preventing fragmentation while providing optimal performance.
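The rule of thumb boils down to a trivial calculation; a quick sketch (the thresholds and multipliers are just my recommendation above, nothing Windows itself enforces):

```python
def recommended_swap_mb(ram_gb: float) -> int:
    """Fixed swapfile size in MB, to be set as both minimum and maximum."""
    if ram_gb < 8:
        factor = 2.0   # under 8GB of RAM: 2x RAM
    elif ram_gb < 16:
        factor = 1.5   # 8GB up to 16GB: 1.5x RAM
    else:
        factor = 1.0   # 16GB+: anything beyond RAM-sized gives little benefit
    return int(ram_gb * factor * 1024)

print(recommended_swap_mb(4))   # 8192
print(recommended_swap_mb(8))   # 12288
print(recommended_swap_mb(16))  # 16384
```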


----------



## ne6togadno (Feb 6, 2018)

Bill_Bright said:


> Actually, low memory refers to "_virtual_" memory running low. Virtual memory is physical system RAM plus the page file. You can run low on virtual memory even if you have lots of system RAM installed. If you have dinked with your PF settings, just let Windows manage it.


Sure, but do you think all those details matter to someone who can't tell the difference between memory and disk space?
Don't make the learning curve so steep right from the beginning. Make sure the basics are clear and understood, then go deeper into the details.
Also, we are getting too far from the OP's browser cache topic, so I think it's better to stop here. If alecx needs more info, I am sure he/she is able to make a new thread with whatever questions he/she has.


----------



## Bill_Bright (Feb 6, 2018)

lexluthermiester said:


> I disagree with this. Letting Windows manage the page/swap file will result in progressively worse drive performance over time and will lead to fragmentation.


Sorry, but that is not correct.


lexluthermiester said:


> While fragmentation isn't much of a problem for SSDs, it's a *HUGE* problem for HDDs.


First, fragmentation is no problem at all for SSDs. This is why SSDs do not get defragged - ever.  And second, if you don't dink with the default settings, Windows will automatically and regularly defrag hard drives just so fragmentation will NOT become a problem.


ne6togadno said:


> Don't make the learning curve so steep right from the beginning.


I agree. And with that in mind it is best to leave the Page File management to Windows. Contrary to what some people think (or want to think), the developers at Microsoft know what they are doing.


lexluthermiester said:


> As a general rule, setting the swapfile to 2x physical system RAM for less than 8GB, and 1.5x for 8GB or more


This is really just old wives' tale nonsense, and has been nearly forever. Windows 7, 8 and 10 are not XP, W95 or DOS and people need to stop treating them that way.

Note what the world-renowned, preeminent expert on virtual memory, Mark Russinovich, says (my *bold* added) in "How Big Should I Make the Paging File?":


> Perhaps one of the most commonly asked questions related to virtual memory is, how big should I make the paging file? *There’s no end of ridiculous advice* out on the web and in the newsstand magazines that cover Windows, and even Microsoft has published misleading recommendations. *Almost all the suggestions are based on multiplying RAM size by some factor*, with common values being 1.2, 1.5 and 2. Now that you understand the role that the paging file plays in defining a system’s commit limit and how processes contribute to the commit charge, you’re well positioned to *see how useless such formulas truly are*.



The problem with your advice to set a fixed size (which, if done right, can be good) is that it is NOT a "set and forget" setting. If it were, Microsoft would use those silly formulas, set the PF size, and then forget it. But instead, Windows dynamically adjusts the PF size as needed. And that's a good thing. Because every time you make major changes to the OS, hardware, major programs, or the way the user uses/tasks the computer, the commit rates need to be reanalyzed and, if necessary, the PF needs to be resized.
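The underlying arithmetic shows why RAM multipliers are useless: the page file only has to cover peak commit charge beyond physical RAM, and that number depends entirely on the workload, not on how much RAM you have (the figures below are invented purely for illustration):

```python
def pagefile_needed_mb(peak_commit_mb: int, ram_mb: int) -> int:
    # The virtual memory commit limit is roughly RAM plus the page file,
    # so the page file only needs to cover commit charge beyond RAM.
    return max(0, peak_commit_mb - ram_mb)

# Two machines with identical 8GB of RAM but very different workloads:
print(pagefile_needed_mb(peak_commit_mb=6000, ram_mb=8192))   # 0 - light use
print(pagefile_needed_mb(peak_commit_mb=20000, ram_mb=8192))  # 11808 - heavy use
```

Same RAM, wildly different page file requirements - which is exactly why Windows sizes it dynamically from observed commit charge rather than from a formula.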

*If you don't fully understand how virtual memory affects performance and stability, and if you don't understand OS and program "commit limits", "pool usage", and memory "mapping", leave the page file settings at their defaults! Let Windows manage it!* Because, as ne6togadno suggests, understanding those essential components of memory management involves a steep (not to mention very technical) learning curve. If not understood, it is likely to be done wrong.

And contrary to what lexluthermiester wants us to believe, Microsoft, Windows and especially Windows 10 understand those essential components of memory management very well. So frankly, users just don't need to fully understand virtual memory, commit limits, etc. Windows already knows how.

Sadly, too many think they are experts at memory management when they aren't. While we may think some of Microsoft's marketing and executive policy decisions were dumb (and I totally agree, many were) the developers at Microsoft are top notch and have decades and exabytes of empirical and statistical data to draw from, with real experts on staff who know how to use it.


----------



## John Naylor (Feb 6, 2018)

lexluthermiester said:


> I disagree with this. Letting Windows manage the Page/Swap file will result in progressively worse drive performance over time and will lead to fragmentation. While fragmentation isn't much of a problem for SSD's, it's *HUGE* problem for HDD's. Every time I even look at a system the first things checked are the swapfile settings and fragmentation state.



It helps, but you can take it a bit further to even more advantage. Back in the 90s, because of the way AutoCAD worked with extensive page and temp file usage, we gave them their own partition.

C:\ OS 1
X:\ OS 2
D:\ Swap File (temp files Optional)
Other partitions followed

We used a boot menu whereby, to get into the work OS during boot-up, an access code had to be typed in; if it wasn't within 15 seconds, the machine would load the home OS (wife and kids). The work OS was on the 1st partition C:\ and the 2nd partition was X:\ (hidden and not visible to the OS). If the code wasn't typed in, it would boot to the 2nd partition, and from there you couldn't access the work OS. The reasoning was that both could share the swap file space... a big deal when a 1GB drive was $1,000.

What we learned was.... 

1.  With the swap/temp files permanently located near the outer edge of the platters, we saw a significant increase in speed over the life of the drive.
2.  Fragmentation was a non-issue.
3.  As temp files did not need the "protection" warranted for OS, program and data files, this small partition could be FAT and escape the associated NTFS overhead.
4.  When a fixed-size swap file was used, we found that over time it would "move", presumably as Windows updates were applied before the swap file was created. Eventually this could lead to fragmentation as free space dwindled, but more importantly, the moves were further back on the drive towards the inside edge, where drive speed is half of what it is at the front.


----------



## Bill_Bright (Feb 6, 2018)

John Naylor said:


> What we learned was....
> 
> 1. With the swap/temp files permanently located near the outer edge of the platters, we saw a significant increase in speed over the life of the drive.


Yeah, you are dating yourself! Those days were way back when hard drive space was precious, drives (seek and access times) were slow, and 2MB buffers were big. Back then, the better defrag programs would move all the files to the beginning of the disk for faster seek times, and they would consolidate free space too. Today's hard drives are huge and much faster. Huge makes a big difference because it means there's lots of free disk space, which does not need to be consolidated to minimize fragmentation. And today's drives being faster means it does not matter if files are scattered all over the disk, just as long as they are not fragmented, since finding the first file segment is what takes the most time. And again, if users don't dink with settings, Windows regularly defrags hard drives anyway, so it does not matter what Windows Update does. Another example of where we need to stop thinking W7/8/10 need to be treated the same way we treated XP.

Of course with page files on SSDs (where they are ideally suited), your point number 4 becomes moot since fragmentation is not an issue with SSDs, nor does the file location on the SSD affect seek times. 

In case someone is wondering: To see why SSDs are ideally suited for Page Files, see Support and Q&A for Solid-State Drives and scroll down to, "_Frequently Asked Questions, Should the pagefile be placed on SSDs?_" While the article is getting old, it applies even more so today since wear problems of early generation SSDs are no longer a problem and each new generation of SSD just keeps getting better and better.


----------



## bug (Feb 6, 2018)

John Naylor said:


> It helps, but you can take it a bit further to even more advantage. Back in the 90s, because of the way AutoCAD worked with extensive page and temp file usage, we gave them their own partition.
> 
> C:\ OS 1
> X:\ OS 2
> ...


Too bad you didn't discover putting your swap file on an actual 2nd physical drive


----------



## lexluthermiester (Feb 6, 2018)

Bill_Bright said:


> Sorry, but that is not correct.


Yes, it is.


Bill_Bright said:


> First, fragmentation is no problem at all for SSDs. This is why SSDs do not get defragged - ever.


That's not exactly correct. There are circumstances where files fragmented to a severe degree on an SSD can degrade performance slightly. It can also cause uneven sector wear in some instances.


Bill_Bright said:


> And second, if you don't dink with the default settings, Windows will automatically and regularly defrag hard drives just so fragmentation will NOT become a problem.


That statement assumes that Microsoft's built-in defrag service does its job as intended. In practice, that service is interrupted frequently and ends up making things worse over the long run.


Bill_Bright said:


> This is really just old wives' tale nonsense, and has been nearly forever. Windows 7, 8 and 10 are not XP, W95 or DOS and people need to stop treating them that way.


My recommendations are based on a vast amount of practical, real-world experience. A solid 75% of the time when people bring me their PC/laptop and claim it's very slow, the hard drive is fragmented by over 60%. After using a proper defragmentation utility and performing the configurations described above, those same systems become much more responsive and snappy.


Bill_Bright said:


> The problem with your advice to set a fixed size (which if done right, can be good) is that it is NOT a "set and forget" setting.


There are no real-life practical problems with the advice given. These are proven methodologies. If you don't wish to subscribe to them, that is your prerogative. However, such an opinion does not change the fact that these methods work very well. Your point about "if done right" does ring true; if not done right, it can cause problems. Using the method described above, it is very much a stable and well-performing "set it and forget it" situation. When I set systems up this way, they stay running smoothly long term. If there were a flaw, it would render negative results.


Bill_Bright said:


> Note the world renown, preeminent top expert on virtual memory, Mark Russinovich


That is an opinion not shared by everyone.


Bill_Bright said:


> And contrary to what lexluthermiester wants us to believe, Microsoft, Windows and especially Windows 10 understand those essential components of memory management very well. So frankly, users just don't need to fully understand virtual memory, commit limits, etc. Windows already knows how.


Microsoft is not the end-all-be-all of computing. If they were, there would be no need to make laws to control their many instances of unlawful, unethical behavior. There would be no need for alternatives to their software, and they would still be the most-used OS platform on the planet. There are many ways of doing things. If Microsoft had their ducks in a row, there wouldn't be a need for entire websites dedicated to helping people get the most out of their experience with Windows, especially Windows 10.


bug said:


> Too bad you didn't discover putting your swap file on an actual 2nd physical drive


That works too, but it isn't something the general user will do. As a rule, I usually create a 12-24GB partition at the front of the drive to contain the swapfile and temp files. This keeps the most-used files right at the front of the HDD, which is the fastest part of the drive. The OS partition then takes up the rest of the space.


----------



## bug (Feb 6, 2018)

lexluthermiester said:


> That works too, but it isn't something the general user will do. As a rule, I usually create a 12-24GB partition at the front of the drive to contain the swapfile and temp files. This keeps the most-used files right at the front of the HDD, which is the fastest part of the drive. The OS partition then takes up the rest of the space.



It's only the fastest part as long as the heads don't have to read/write anywhere else.


----------



## Bill_Bright (Feb 6, 2018)

lexluthermiester said:


> There are circumstances where files fragmented to a severe degree on an SSD can degrade performance slightly. It can also cause uneven sector wear in some instances.


It is senseless and unrealistic to use extreme, rarely seen scenarios to set policies, or to use them to try to justify a rule applicable to all, or the vast majority of, situations.

A drunk driver could, in certain circumstances, jump the curb, swerve past two trees and land on my porch too. I guess I better not stand on my porch. 

An SSD is like a mail sorting box with a robot arm stuffing and retrieving data chunks into and out of each slot. Except in extreme circumstances that the vast majority of users will never see, it takes no more time if the file segments are distributed in slots 2, 14, 31, 7, 23, and 16 than it does if they're stuffed in slots 1, 2, 3, 4, 5, 6. For that reason, fragmentation is not a problem with SSDs and SSDs do not need to be defragged. In fact, any defrag program worth its salt will not attempt to defrag an SSD. And uneven wear on SSDs is prevented by TRIM and wear leveling, so suggesting uneven sector wear is just misinformation.
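The sorting-box analogy is easy to make concrete: with a uniform per-slot access cost (the simplifying assumption behind the analogy), total retrieval time depends only on how many segments there are, not where they sit:

```python
ACCESS_COST = 1  # uniform per-slot access cost on an SSD (illustrative)

def read_time(slots):
    # Total fetch time is just segment count times cost; the order and
    # scatter of the slots is irrelevant, unlike on a spinning platter.
    return len(slots) * ACCESS_COST

# Scattered vs. contiguous segments take the same time to read:
print(read_time([2, 14, 31, 7, 23, 16]) == read_time([1, 2, 3, 4, 5, 6]))  # True
```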



lexluthermiester said:


> That statement assumes that Microsoft's built-in defrag service does its job as intended. In practice, that service is interrupted frequently and ends up making things worse over the long run.


Oh, bullfeathers! That's one of the silliest things I've heard in a long time. Please show us a current study, using today's typical monster drives, that says Microsoft's built-in defragger in Windows 10 makes things worse!

For the record, nothing says the process cannot be interrupted. In fact, it is designed that way - to operate in the background when the computer is idle and to step out of the way when the user starts using the computer again. And because it is regularly scheduled, fragmentation is minimal between defrags. It is ridiculous to suggest an entire drive must be defragged without interruption. It is not like files are left open or fragments are suddenly lost.

Do you want to kill the power, or kill the process in Task Manager? Absolutely not, as that could lead to file corruption. But killing the power or terminating the task in TM is not the same thing as the program gracefully halting and stepping aside.



lexluthermiester said:


> Using the method described above, it is very much a stable and well-performing "set it and forget it" situation.


No, it isn't! Clearly you are not the master of virtual memory you think you are. And it is clear you don't know who Mark Russinovich is either - understand that Microsoft hired him long after he proved himself one of the world's top experts. He is not a Microsoft shill.

If you set it with 4GB of RAM installed and then forget it, then install another 4GB of RAM, your PF settings will likely be wrong. If you start using that computer for totally different tasks, your PF settings will likely be wrong. If another user starts using that computer, your PF settings will likely be wrong. 

If it were set and forget, why doesn't Microsoft just pick 1.5x RAM or 2x RAM and leave it? Why go through the complex ordeal of making it a dynamic process? Sure, if you have 8GB of RAM and you set your PF to 12GB, you will have enough virtual memory. But if you don't need that much, you just wasted a bunch of disk space - especially with your totally inefficient suggestion to set not just the maximum but the minimum at that level too.

FTR, I have 16GB of RAM installed in this system. Windows currently has my PF set to 2432MB. I don't need or want an extra 10GB of my SSD obligated to a fixed PF size. That would be inefficient and a waste of space. And, for that matter, using up such a large chunk of space could contribute to fragmentation - if this were a hard drive.



lexluthermiester said:


> Microsoft is not the end-all-be-all of computing. If they were, there would be no need to make laws to control their many instances of unlawful, unethical behavior.


Ah! There it is! Your true biased colors just came out.   I made a specific point to differentiate the developers and the work they do from the marketing and executive  people who often make dumb decisions. But you lump them all together as if they come from the same "unethical" mindset - as if setting page file sizes and defragging parameters has something to do with business ethics.   

Oh well. I'm outta here.


----------



## Jetster (Feb 6, 2018)

We have been over this before. With today's Windows 10, leave your page file alone and let Windows manage it.

There is nothing to debate.

As far as letting Chrome cache to your RAM, I really don't see the point. Why?

This is all stuff we used to do in the 90s, when drives were slow and operating systems were crap.


----------



## lexluthermiester (Feb 6, 2018)

Bill_Bright said:


> It is senseless and unrealistic to use extreme, rarely seen scenarios to set policies, or to use them to try to justify a rule applicable to all, or the vast majority of, situations.


That isn't what was said or even implied.


Bill_Bright said:


> Please show us a current study, using today's typical monster drives, that says Microsoft's built-in defragger in Windows 10 makes things worse!


I don't need to. I literally see this weekly. If you want to see it, build two systems that are identical in every aspect. Use one with Microsoft's default settings and set the other with the settings stated above. Use them both roughly equally for the space of one year. At the end of that year, observe and learn.


Bill_Bright said:


> Ah! There it is! Your true biased colors just came out. I made a specific point to differentiate the developers and the work they do from the marketing and executive people who often make dumb decisions. But you lump them all together as if they come from the same "unethical" mindset - as if setting page file sizes and defragging parameters has something to do with business ethics.


My opinions and perspectives concerning Microsoft are well known and documented. But ok.


Jetster said:


> We have been over this before. With today's Windows 10 leave your page file alone and let windows manage it.


No thank you. While Windows 10 has improved some things, having the swapfile in a variable state on a standard HDD can, does and will still result in drive performance degradation over time. This has already been tested and proven.


Jetster said:


> There is nothing to debate


And yet here we are..


Jetster said:


> This is all stuff we used to do in the 90s, when drives were slow and operating systems were crap.


Some methodologies are still in use because they are still useful and needed. Whether or not everyone can see such usefulness is irrelevant.


----------



## eidairaman1 (Feb 6, 2018)

Jetster said:


> We have been over this before. With today's Windows 10, leave your page file alone and let Windows manage it.
> 
> There is nothing to debate
> 
> ...





lexluthermiester said:


> That isn't what was said or even implied.
> 
> I don't need to. I literally see this weekly. If you want to see it, build two systems that are identical in every aspect. Use one with Microsoft's default settings and set the other with the settings stated above. Use them both roughly equally for the space of one year. At the end of that year, observe and learn.
> 
> ...




For most, it's best not to jack with it. I don't see performance degradation on an SSD; I'm just OCD and manually set it like I have been since Win 9x. The best solution to this is using a separate drive for paging.
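For anyone wanting to set it the same way without clicking through dialogs, it can be scripted; something like this from an elevated prompt (sizes in MB - adjust to taste, and double-check your pagefile path first):

```
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=8192,MaximumSize=8192
```

A reboot is needed before the change takes effect.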


----------



## Jetster (Feb 6, 2018)

You know you can defrag a page file if it's on a disk drive. And who still uses disk drives for the OS?


----------



## Bill_Bright (Feb 6, 2018)

lexluthermiester said:


> My opinions and perspectives concerning Microsoft are well known and documented. But ok.


I couldn't care less about your opinions concerning Microsoft. You are entitled to them, and I will defend, with vigor, your right to express them. What is wrong, and what I object to, is you using your biases and biased opinions to cloud reality in order to rationalize and unjustly defend your inaccurate position on technical issues.

Microsoft's ethical issues (your word - and in some cases, I don't disagree) are due to marketing and executive decisions and directives. The "technical" methods Windows uses to manage virtual memory, defragging, etc. are based on technical decisions made by the development team, referring to exabytes of evidence and decades of experience - not marketing, not executive management decisions.

Yet your clear dislike for and biases against the _entire company_ of Microsoft and their "ethics" has you believing their technical decisions must be bad and your way must be right. That's just wrong, twisted logic, and very sad. And not good for TPU readers who come to learn the true technical facts. 

And you have demonstrated this blind, twisted logic by using the same minimum size PF as the maximum. That's not sound or efficient virtual memory management, nor efficient use of disk space. Your comments about defragging SSDs, set-and-forget, and your unfounded claims about Windows' Optimize Drives utility also demonstrate this. No doubt you honestly believe you are right and that Microsoft and everyone else who takes a different stand than you are wrong. That does not change the fact that you need to do your homework before talking.



eidairaman1 said:


> Best solution to this is using a separate drive for paging.


If possible, that is absolutely true. And Windows will certainly take advantage of having two drives, with the primary PF on the secondary drive, to optimize performance. But even when using a secondary drive to house your primary PF, it is best to let Windows manage it unless you truly do understand commit levels, mapping, etc., AND you understand this is NOT a set-and-forget setting.



Jetster said:


> You know you can defrag a page file if its on a disk drive.


Don't waste your breath. You can't change the opinion of biased haters and others who refuse to open their eyes and accept reality.


----------



## Jetster (Feb 6, 2018)

Bill_Bright said:


> Don't waste your breath. You can't change the opinion of biased haters and others who refuse to open their eyes and accept reality.



I guess you're right.


----------



## eidairaman1 (Feb 7, 2018)

Jetster said:


> You know you can defrag a page file if it's on a disk drive. And who still uses disk drives for the OS?



Yup, I definitely know that. I used to partition drives for paging till I learned I was just wasting space and causing head thrashing. I know disks benefited from this, but I don't know if SSDs do by having paging on a separate SSD, because there is physical latency in having to go to a separate drive...


----------



## lexluthermiester (Feb 7, 2018)

eidairaman1 said:


> I don't see performance degradation on an SSD


On an SSD, no. On a HDD, yes.


Jetster said:


> And who still uses disk drives for the OS ?


Most people still use standard HDDs.


Bill_Bright said:


> I couldn't care less about your opinions concerning Microsoft. You are entitled to them, and I will defend, with vigor, your right to express them. What is wrong, and what I object to, is you using your biases and biased opinions to cloud reality in order to rationalize and unjustly defend your inaccurate position on technical issues.


As demonstrated by the above, your argument is motivated by feelings and ego. My position is based upon ongoing experiences that are revisited yearly. So far, they still hold up and render benefit. You can call those methodologies anything you wish; it will not change the effectiveness of using them. You claim they are wrong without offering anything more than opinion as a qualifier. Show actual, real benchmarks and testing, and I will happily consider your position.


Bill_Bright said:


> your clear dislike for and biases against the _entire company_ of Microsoft and their "ethics" has you believing their technical decisions must be bad


Incorrect. My opinion of the company has nothing to do with the technical implementation of the software they produce. The configuration practices I use are based on actual usage and the results of such use. I only trust *that which is proven to work*. Microsoft's default configurations are based on a wide range of deployment scenarios. They are meant to be refined and customized by professionals to meet the needs of the users. Such defaults are less than optimal most of the time.


Bill_Bright said:


> No doubt, you honestly believe you are right and Microsoft and everyone else who takes a different stand than you are wrong.


If real-world experience shows something different from what the "experts" at Microsoft or elsewhere say, and your livelihood depends upon getting things right, which are you going to consider? If you say anything other than what real-life experience shows you, then the question was answered before it was asked.


Bill_Bright said:


> That does not change the fact that you need to do your homework before talking.


Ok then.


Bill_Bright said:


> Don't waste your breath. You can't change the opinion of biased haters and others *who refuse to open their eyes and accept reality*.


Yes, absolutely correct.


eidairaman1 said:


> Yup, I definitely know that. I used to partition drives for paging till I learned I was just wasting space and causing head thrashing. I know disks benefited from this, but I don't know if SSDs do by having paging on a separate SSD, because there is physical latency in having to go to a separate drive...


The benefit of using separate drives for the OS and PF is that your system can access the main OS drive at the same time it's using the PF drive. Latency should be minimal, if anything.


----------



## eidairaman1 (Feb 7, 2018)

Bill_Bright said:


> I could care less about your opinions concerning Microsoft. You are entitled to them and I will defend your right, with vigor, to express them. What is wrong and what I object to is you using your biases and biased opinions to cloud reality to rationalize and justify unjustly your inaccurate position on technical issues.
> 
> Microsoft's ethical (your word - and in some cases, I don't disagree with) issues are due to marketing and executive decisions and directives. The "technical" methods Windows uses to manage virtual memory, defragging, etc. are based on technical decisions made by the development team referring to exabytes of evidence and decades of experience - not marketing, not executive management decisions.
> 
> ...



Bill, for the swap space I set it to 4096 or 8192 for both minimum and maximum so it doesn't have a rubberband effect, lol.



lexluthermiester said:


> On an SSD, no. On a HDD, yes.
> 
> Most people still use standard HDDs.
> 
> ...



Yeah, I don't see any perceivable performance impact from swap space on my SSD.


----------



## Bill_Bright (Feb 7, 2018)

eidairaman1 said:


> Bill, for the swap space I set it to 4096 or 8192 for both minimum and maximum so it doesn't have a rubberband effect, lol.


If you have a sufficient amount of free disk space, there is nothing wrong with the "rubberband" effect. The occasional "dynamic" reconfiguring of the PF creates that rubberband effect at an infinitesimal level compared to the massive and wild swings in disk space consumption going on all the time through normal computer usage! You are comparing an ultralight to a jumbo jet.

For example, each and every time you boot and shut down your OS, large amounts of free disk space are consumed by hundreds and hundreds of temporary system files as the OS boots and runs. Then much of that same space "snaps back" and is freed up again as the OS shuts down. The same goes for any large application! Every time you open a file, a temporary copy is written to the disk, and major applications like Word, Excel, or your security apps typically open dozens of files.

Clean out the history and clutter caused by your browser, then start your browser, and suddenly dozens if not hundreds of files are written to disk again. Before long you will easily have thousands of temporary Internet files, cookies, etc. on your disks! In total they may not immediately add up to the PF size, but with thousands of small writes, reads, and deletes scattered all over the disks, all day long, day in and day out, you can see the "rubber band" effect of a dynamic PF is negligible at most.

So these suggestions and implications that the PF and hibernation files cause excessive writes to an SSD are just nonsense. And hard disk space is so inexpensive (with SSD space getting there) that if someone is that concerned about a few extra gigabytes being consumed by the PF, they need to buy or free up more disk space.

Once Windows sets your PF and Superfetch configurations, your page file settings stay the same (perhaps for weeks, months, or even years!) until you make a big change to your hardware configuration, computing habits, etc. It is not quite set-and-forget, but it is not constantly stretching and snapping back and forth either.

And I'll say it again (and again and again, if necessary): modern versions of Windows are NOT XP. And this holds especially true for Windows 10. Just because something might have been true with W7, that does NOT automagically make it true for, or better with, Windows 10!

If you (speaking to the crowd) are not letting Windows manage your PF and instead have disabled it or manually set its size just because you have always done it that way (or because you don't "see" any difference), those are not sound reasons for deviating from the defaults.
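Before deviating from the defaults discussed above, it helps to look at how much of the page file Windows is actually using. A minimal sketch, assuming a Windows 10 command prompt (the `wmic pagefile` alias maps to the `Win32_PageFileUsage` WMI class; sizes are reported in MB):

```shell
:: Show the page file's allocated size alongside its current and
:: peak usage since boot -- if PeakUsage stays far below
:: AllocatedBaseSize, there is little reason to resize it manually
wmic pagefile get Caption,AllocatedBaseSize,CurrentUsage,PeakUsage
```

On newer builds where `wmic` has been removed, `Get-CimInstance Win32_PageFileUsage` in PowerShell returns the same properties.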



lexluthermiester said:


> If real-world experience says/shows something different than what the "experts" at Microsoft or others say and your livelihood depends upon getting things right, which are you going to consider? If you say anything other than what real-life experiences show you, then the question is answered before it was asked.




You can see via the link in my sig that I've been an IT technician, supporting major military, federal, state, corporate, and SOHO computers and secure networks to provide for my family and me, for over 45 years. That's a lot of "real world experience"! But I would not pretend for a second that makes me smarter than the experts at Microsoft. Nor that my experience gives me the right to use my biases against the _business practices_ of any entity to influence my "technical" advice about one of their products. Or that my past experiences (especially with "legacy" operating systems and hardware) automatically apply to everyone today using current operating systems and modern hardware. I can be arrogant, but not even I am that arrogant. I do not and will not assume that what I have seen is the unshakable reality for everyone. So I do my homework. I research. I consult with "the experts", because I don't assume I know it all or that I am right, and more importantly, I don't assume what is right for me is right for everyone else.

So I'll say it again - Windows 10 is NOT XP. It is not even W7. So 45 years of experience is not of much value with an OS that is barely 2 1/2 years old and has already been through several major evolutionary updates! Not to mention that this modern W10 is running on modern hardware designed to run W10. So what do I do? I do my homework. I consult with the experts. I don't use "exceptions" to make the rule, and I don't let my biases against a company influence my "technical" decisions about the products they produce.

******

@Pedipalpa - My apologies for my part in your topic being run off in different directions. I will not pursue further OT discussions and would hope others do the same.


----------



## eidairaman1 (Feb 7, 2018)

Bill_Bright said:


> If you have a sufficient amount of free disk space, there is nothing wrong with the "rubberband" effect. The once in awhile "dynamic" reconfiguring of the PF creates that "rubber band" effect at an infinitesimal level compared to the massive and wild swings in disk space consumption going on all the time through normal computer usage! You are comparing an ultra light to a jumbo jet.
> 
> For example, each and every time you boot and shut down your OS, large amounts of free disk space is consumed by 100s and 100s of temporary system files as the OS boots and runs. Then much of that same space "snaps back" and is freed up again as the OS shuts down. Same with any large application! Every time you open any file, a temporary copy is written to the disk. And major applications like Word, Excel or your security apps typically opens dozens of files.
> 
> ...



Nah, I have plenty; I just set that space aside. Heck, I even limit the size of the Recycle Bin too.
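Limiting the Recycle Bin as mentioned above is normally done per drive in its Properties dialog, but it can also be capped globally via policy. A sketch only, and an assumption to verify for your Windows build: the "Maximum allowed Recycle Bin size" Group Policy is believed to map to a `RecycleBinSize` DWORD (a percentage of each drive) under the Explorer policies key.

```shell
:: Cap the Recycle Bin at 5% of each drive for the current user
:: (assumed value name RecycleBinSize -- check your build's ADMX
::  templates before relying on this; sign out/in to apply)
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v RecycleBinSize /t REG_DWORD /d 5 /f
```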


----------



## Bill_Bright (Feb 7, 2018)

I don't have a problem with that as long as you don't assume your configuration is right for everyone else - especially since the vast majority of users and readers on this and similar sites are not computer experts, or even willing to dink with such settings. 

I used to dink with recycle bin sizes. Now, I just empty it periodically.


----------



## lexluthermiester (Feb 7, 2018)

Bill_Bright said:


> I do my homework.


I do *actual* work. I observe how systems actually perform. I tinker with settings and observe the results. I use the scientific method to draw conclusions.


Bill_Bright said:


> I consult with the experts.


I *am* an expert.


Bill_Bright said:


> I don't use "exceptions" to make the rule and I don't let my biases against a company influence my "technical" decisions about the products they produce.


You also don't pay attention to context.


Bill_Bright said:


> I will not pursue further OT discussions and would hope others do the same.


Good. Let's agree to disagree and call it finished.


----------

