Disabling the swap file with 4GB of RAM. Wise?

EXE/DLL files are typically loaded by the OS using memory-mapped file I/O, which allows their pages to be shared between processes.
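To make that concrete, here is a minimal C sketch of the same mechanism: mapping a file into the address space through a section object. The loader does essentially this, additionally passing SEC_IMAGE so the file is laid out according to its PE headers; the kernel32.dll path is only an example.

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open a DLL as an ordinary file (path is just an example). */
    HANDLE file = CreateFileA("C:\\Windows\\System32\\kernel32.dll",
                              GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, 0, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* Create a section object backed by the file. The loader does the
       same thing with the SEC_IMAGE flag so the mapping follows the
       PE layout and code pages are shared between processes. */
    HANDLE section = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (section == NULL) { CloseHandle(file); return 1; }

    /* Map a view into our address space. Nothing is read from disk
       yet; pages come in via page faults on first access. */
    const unsigned char *view = MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0);
    if (view != NULL) {
        printf("first bytes: %c%c (the PE 'MZ' signature)\n", view[0], view[1]);
        UnmapViewOfFile(view);
    }

    CloseHandle(section);
    CloseHandle(file);
    return 0;
}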

The hard page faults generated by those mappings don't really compare to the ones caused when a system is running low on memory (and there is a page file).

If the system is running low on physical memory and there is no paging file, EXEs and DLLs, for instance, will inevitably need to be ejected from memory to make room for the active processes' code and data. Other EXEs and DLLs may then need to be pulled into memory, causing hard page faults. From a performance perspective, how does this differ from a system that has a paging file?

When a system's committed memory exceeds the amount of RAM it has, then by definition it is "overcommitted" on memory. As the overcommitment gets worse (i.e. usage approaches the commit limit), hard page faults to and from the page file itself will increase. This is also known as "disk thrashing" or "disk swapping".
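If you want to watch for this, a sustained high value on the Memory\Pages/sec counter is the classic signature of thrashing. A rough C sketch using the PDH API (the ten-sample, one-second loop is just an illustrative choice):

Code:
#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    PdhOpenQuery(NULL, 0, &query);
    /* Pages/sec counts hard faults resolved by reading from disk,
       whether from the paging file or from mapped files. */
    PdhAddEnglishCounterA(query, "\\Memory\\Pages/sec", 0, &counter);

    /* Rate counters need a baseline sample before the first read. */
    PdhCollectQueryData(query);
    for (int i = 0; i < 10; i++) {
        Sleep(1000);
        PdhCollectQueryData(query);
        PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);
        printf("Pages/sec: %.1f\n", value.doubleValue);
    }
    PdhCloseQuery(query);
    return 0;
}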

System committed virtual memory rising above the amount of physical memory in the machine doesn't by itself cause any of the problems described above, because the commit charge doesn't tell you how much of that allocation processes have actually touched (and therefore need backed by physical memory).
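You can see the difference between committing and touching with a small test program. A minimal sketch (the 512 MB figure is arbitrary): VirtualAlloc with MEM_COMMIT raises the system commit charge immediately, but the working set only grows once the pages are actually written to.

Code:
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#pragma comment(lib, "psapi.lib")

static SIZE_T working_set_bytes(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof pmc);
    return pmc.WorkingSetSize;
}

int main(void)
{
    const SIZE_T size = 512 * 1024 * 1024; /* 512 MB, arbitrary */

    /* Reserve AND commit: this raises the system commit charge by
       512 MB immediately, but no physical pages are assigned yet. */
    char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL) return 1;
    printf("after commit: working set = %zu MB\n", working_set_bytes() >> 20);

    /* Touch every page: only now does the memory manager assign
       physical pages (demand-zero faults), growing the working set. */
    for (SIZE_T i = 0; i < size; i += 4096)
        p[i] = 1;
    printf("after touch:  working set = %zu MB\n", working_set_bytes() >> 20);

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}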

This comes up all the time. Rather than debating what it could be set to, let's actually use some tools to determine what it needs to be set to for your machine...

Let Windows manage your page file for a couple of weeks, then run this command:

Code:
wmic pagefile get caption, currentusage, peakusage


Monitoring how much has been written to the paging file should not be your first port of call when sizing it, because that number doesn't reflect how much system committed virtual memory your workload requires. System commit should be the priority. Once you have sized the paging file to accommodate your workload's system commit requirements, you could then do what you have suggested and increase it further if you feel that is necessary.
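If you'd rather read the commit figures directly instead of inferring them from page file writes, GetPerformanceInfo exposes the system's commit total, peak and limit. A rough C sketch (the "peak commit minus RAM" arithmetic at the end is a starting-point heuristic, not a hard rule):

Code:
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#pragma comment(lib, "psapi.lib")

int main(void)
{
    PERFORMANCE_INFORMATION pi = { .cb = sizeof pi };
    if (!GetPerformanceInfo(&pi, sizeof pi)) return 1;

    /* All commit figures are reported in pages, so scale by PageSize.
       Note the commit limit is roughly RAM plus current page file size. */
    SIZE_T page = pi.PageSize;
    SIZE_T commit_peak_mb  = pi.CommitPeak  * page >> 20;
    SIZE_T commit_limit_mb = pi.CommitLimit * page >> 20;
    SIZE_T ram_mb          = pi.PhysicalTotal * page >> 20;

    printf("peak commit:  %zu MB\n", commit_peak_mb);
    printf("commit limit: %zu MB\n", commit_limit_mb);
    printf("RAM:          %zu MB\n", ram_mb);

    /* Rough floor for the paging file: enough to back the peak commit
       the workload has demanded beyond what RAM alone can cover. */
    if (commit_peak_mb > ram_mb)
        printf("page file should be at least ~%zu MB\n", commit_peak_mb - ram_mb);
    else
        printf("peak commit fits in RAM\n");
    return 0;
}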

If a system's paging file is large enough to support the system commit requirements of the workload being run, but nothing more can actually be written to it, then anything backed by the paging file will simply have to stay in physical memory. While this results in less memory being available for other purposes, it won't cause any problems.

Whereas if you size the paging file based on how much has been written to it, and the initial size you calculate turns out to be smaller than what is needed to support your workload's system commit, you will start to experience the problems illustrated near the bottom of this post here.
 