I remember, a long long time ago, having 768 MB of RAM and trying to play NWN 1. It ran like trash. This was back when Windows XP was dominant. To fix the issue I came across a post about modifying the virtual memory (pagefile) size. I set it to 4 GB and the game ran amazingly well.
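(As an aside, if you ever want to see what bumping the pagefile actually changes, the number that grows is the system commit limit, roughly RAM plus pagefile. Below is a rough, read-only Python sketch of mine using the documented GlobalMemoryStatusEx call; the printout labels are just my own wording.)

```python
# Rough read-only sketch: query the Win32 GlobalMemoryStatusEx API to compare
# physical RAM with the commit limit (RAM + pagefile) that a bigger pagefile raises.
import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),  # commit limit (RAM + pagefile)
        ("ullAvailPageFile", ctypes.c_ulonglong),  # commit space still available
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

GB = 1024 ** 3
print(f"Physical RAM : {status.ullTotalPhys / GB:.1f} GB")
print(f"Commit limit : {status.ullTotalPageFile / GB:.1f} GB  (RAM + pagefile)")
print(f"Commit free  : {status.ullAvailPageFile / GB:.1f} GB")
```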
Years have passed since that day, and I have been through numerous phases:
- Run it with 4 GB
- Don't run it at all
- Run it with 1 GB
- Let Windows decide.
The only conclusions I have been able to come to, without having designed the thing myself, are the following.
- Running it with 4 GB for several years proved to be an amazing benefit back when 1 GB of DDR1 RAM cost hundreds of dollars a stick. I kept it like this for many, many years and never had a single issue. That said, I could never decide whether it still provided the same benefit it had that first day. I also realized, with the onset of bigger games in the early 2000s and the low capacity of affordable HDDs at the time, that a 4 GB pagefile was too much for a data hoarder or someone who needed or had many programs installed. That cycle cropped back up with the first wave of 32 and 64 GB SSDs, with the added bonus of wearing them out back when NAND endurance was very weak. Of course, this was back in the XP days, and memory management in Windows has come a long way with almost every release.
- Not running it at all seemed like a fantastic prospect once RAM capacities and board support grew and prices came down. This became a popular practice around Windows Vista, when "4GB Vista Ready" kits were introduced at around the $200 mark. Memory management was better for software in general. Obviously that means looking the other way at Vista's own memory usage compared to XP, but it was a product of change: higher OS consumption was starting to be accepted once we looked more closely at the technology planned for future OSs, like transparent shells and security measures that kept system processes loaded in memory.
Memory management was getting better, and running without a pagefile worked for me for a very long time. However, like anything, humans are unpredictable and imperfect. Some programs were coded using old techniques, and people's refusal to leave the past behind meant compatibility modes that would fight Windows' memory management and cause memory leaks. These bit me and other users on many different occasions, as systems broke under the unforeseen consequences of bad programming and maybe bad practice. Then again, then and even now, who can be certain that running without one is bad practice? If not for memory leaks and misbehaving software, would we consistently see adverse effects? Of course, we also had the issue with the crashes themselves: with no pagefile on the system drive, Windows could not write crash dumps for us to analyse.
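(Side note for anyone chasing that last point: the dump type Windows will try to write is recorded under the CrashControl registry key, and anything other than "none" needs somewhere to land, traditionally the pagefile on the system drive. Here is a small read-only Python sketch of mine to see what your box is set to; the value-to-name map is just the documented meanings.)

```python
# Read-only sketch: check which kind of crash dump Windows is configured to write.
import winreg

DUMP_TYPES = {0: "None", 1: "Complete memory dump", 2: "Kernel memory dump",
              3: "Small memory dump (minidump)", 7: "Automatic memory dump"}

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"SYSTEM\CurrentControlSet\Control\CrashControl")
value, _ = winreg.QueryValueEx(key, "CrashDumpEnabled")
winreg.CloseKey(key)

print(f"CrashDumpEnabled = {value}: {DUMP_TYPES.get(value, 'Unknown')}")
```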
- Running with a 1 GB pagefile was another popular midpoint, starting with Windows 7, when we started to see fewer and fewer of the issues we encountered with Vista. The frameworks and best practices for software were better outlined for developers, and the kernel code was cleaned up and improved, so 7 itself used fewer resources. Even with the improvements, most of us were still unsure whether the likelihood of issues was going down because some people were now running pagefiles who previously were not, or because RAM had stepped into the DDR3 era and become cheaper and more plentiful, so that with the software improvements we simply rarely hit the threshold anymore. One obvious added benefit is that, in some small capacity, we were now able to analyse smaller crash dumps with a pagefile present, making some bluescreens and dumps that weren't 1:1 copies of system memory readable.
- The Windows-managed pagefile was the last experiment in my line of thinking. I tried all the other variants before settling on what MS sets up on day 1. I did this more as an academic exercise; for all I knew, current OSs would be aware enough to avoid the kinds of issues I once had, like NWN running badly. However, I have seen three different outcomes. One of them, of course, is all systems GO: working as intended. With the onset of better, cheaper, and bigger SSDs and memory, the pagefile's size slowly stopped being something anyone had to think about. Working correctly, it stays under the total system RAM and automatically scales to need. In that state I have seen no issues with this model.
With everything, however, there are PROs and CONs, one of which is when automatic adjustment does not work. Over the years memory topology and management have changed, all of it for the better, but systems aren't perfect. I have seen the automatic configuration match my system memory 1:1, which becomes a serious concern when disks are getting thrashed by the sheer size of the pagefile. This became a bigger issue once system memory got incredibly cheap during the DDR3 speed races, when 1333 and 1600 MHz sticks were as low as $1X for multiple gigs of capacity. That combination quickly killed systems still running reliable smaller SSDs alongside high-density sticks bought for their cost effectiveness.
I have also seen Windows management use double the system memory, which ties into what I just wrote, but more important is the issue of it not correcting itself. Whether it is double, 1:1, or, as in my recent dealings with scientific simulations where memory consumption was very high, causing BSODs because Windows was not automatically expanding the paging file, the issue with an automatically set paging size usually isn't the size it starts at. The problem comes later, when it FAILS to shrink or expand. That brings back all the issues discussed above, without the added benefit of being able to say you did it to yourself.
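(If anyone wants to keep an eye on this the way I describe below, here is a rough Python sketch of how I'd go about it. It assumes the third-party "wmi" package, and the 1x/2x thresholds are just my own rule of thumb, not anything official.)

```python
# Rough monitoring sketch (assumes the third-party "wmi" package: pip install wmi).
# Flags the situations above: a pagefile allocated at roughly 1x or 2x system RAM.
import wmi

c = wmi.WMI()

cs = c.Win32_ComputerSystem()[0]
ram_mb = int(cs.TotalPhysicalMemory) // (1024 * 1024)
print(f"Physical RAM              : {ram_mb} MB")
print(f"Windows-managed pagefile? : {cs.AutomaticManagedPagefile}")

for pf in c.Win32_PageFileUsage():
    alloc = int(pf.AllocatedBaseSize)  # MB currently allocated on disk
    used = int(pf.CurrentUsage)        # MB currently in use
    peak = int(pf.PeakUsage)           # MB peak use since boot
    print(f"{pf.Name}: allocated={alloc} MB, in use={used} MB, peak={peak} MB")
    if alloc >= 2 * ram_mb:
        print("  -> allocated at ~2x RAM or more; worth a manual look")
    elif alloc >= ram_mb:
        print("  -> allocated at 1x RAM or more; watch for disk thrashing")
```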
In closing, I monitor my paging files for inconsistency caused by Windows or by software that manipulates the value incorrectly. That said, I do have it automatically managed on every system I own now. That isn't to say I will never change it, as I already have several times; only that I currently see no issues with it as is, as long as it keeps working in what I believe to be an acceptable capacity.
Am I going to banish someone's thoughts or personal ideals about this setting? No, I will not. I do not know what their situation or needs are, and I have manipulated it myself on multiple occasions because of real or perceived problems with it being automatically set.
That said, I also think there is no wrong way to eat a Reese's.