Do I need virtual memory enabled with 4GB of RAM?

I have no judgement on it at the moment. I'm not asking for proof to be arsey; I'm asking for proof so that I know what I'm talking about when I speak to them.
I talk to them less often than I can post here, so it's easier to get your defence and pass it on to them than to come back from them one point at a time.
 
easiest way - get them to join the forums ;)
No one does any work in Uni anyway, so everyone should have plenty of time ;)

Cheers
ROfu
 
|Ric| said:
I don't quite understand your remark about the temporary page file, as it seems unnecessary. If an out-of-memory exception is triggered, then whatever caused it is arguably useless, so kick it out and load in anything needed by the kernel. But I don't see why this wouldn't already be in RAM anyway?
Kernel-mode software (i.e. the kernel itself and device drivers) uses about 150MB on Vista. Around half of that is "non-paged" memory, which means it is exempt from being paged out to disk. Usually this type of memory is used for hardware buffers, for things like DMA. Everything else, depending on how each device driver is written, usually uses regular pageable memory - quite simply because there's no reason not to.

You can see the exact figures your PC is using in Task Manager.
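If you'd rather pull the same figures out programmatically than eyeball Task Manager, something along these lines should do it - just a rough sketch using the Win32 GetPerformanceInfo call (the counts come back in pages, so you multiply by the page size):

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#pragma comment(lib, "psapi.lib")

int main(void)
{
    PERFORMANCE_INFORMATION pi;

    if (!GetPerformanceInfo(&pi, sizeof(pi)))
        return 1;

    /* Counts are in pages; multiply by the page size to get bytes. */
    SIZE_T page = pi.PageSize;
    printf("Kernel total     : %zu MB\n", pi.KernelTotal    * page / (1024 * 1024));
    printf("Kernel paged     : %zu MB\n", pi.KernelPaged    * page / (1024 * 1024));
    printf("Kernel non-paged : %zu MB\n", pi.KernelNonpaged * page / (1024 * 1024));
    return 0;
}
```

The paged/non-paged split it prints is the same one Task Manager shows under kernel memory.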
 
saffyre said:
Sorry but the downside is the time spent reading/writing to disc.

Again, a solution to not having enough RAM. I know what I'm saying isn't viable atm, but that's never what I have been trying to ascertain.

Sorry for quoting a post a fair way back, but the whole CPU cache / RAM / hard disk / other storage media argument all comes down to economics in the end.
The ideal performance strategy would be to have X GB (or TB) of cache memory on the CPU, where X is enough to store every conceivable piece of information the CPU will ever need. (For argument's sake this would need to be persistent memory for the optimal environment; otherwise there is a performance cost in populating it from a persistent source such as a hard disk.)
This is economically unviable - the cache directly attached to the CPU is still measured in KB even this far into processor development (L1 cache it's called these days, I think?). Each stage beyond the directly accessed CPU cache is a compromise between cost and performance, and that is why memory management even exists. Memory managers are about trying to keep the optimal information in the cache closest to the CPU, but they don't possess ESP, so we still require a page file.
I can see both sides of the argument, but the way OSes are designed at the moment the memory manager is built to be used with a page file that is large enough to cache the system RAM. Saffyre has a point in as much as, if there were an environment where the total memory required was a known constant, a more beneficial memory management scheme could be used - but that isn't the case with an operating system designed to run any number of applications. If you were designing a (software) system for a specific application then you could design the processor and memory management system as you require, as already happens.
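If you want to see that cost difference for yourself, here's a toy sketch (my own, nothing rigorous): the same number of array reads done once sequentially and once a whole cache line apart. The strided pass is normally several times slower purely because it keeps missing the cache and going out to RAM - which is exactly the compromise the whole hierarchy is built around.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N (64 * 1024 * 1024)                 /* 64M ints (~256MB), far bigger than any CPU cache */

int main(void)
{
    int *a = malloc((size_t)N * sizeof(int));
    if (!a) return 1;
    memset(a, 1, (size_t)N * sizeof(int));   /* touch every page up front */

    long long sum = 0;

    clock_t t0 = clock();
    for (int i = 0; i < N; i++)              /* sequential walk: cache and prefetch friendly */
        sum += a[i];

    clock_t t1 = clock();
    for (int i = 0; i < 16; i++)             /* same number of reads, but 64 bytes apart, */
        for (int j = i; j < N; j += 16)      /* so nearly every read is a cache miss      */
            sum += a[j];

    clock_t t2 = clock();
    printf("sequential: %ld ticks  strided: %ld ticks  (checksum %lld)\n",
           (long)(t1 - t0), (long)(t2 - t1), sum);
    free(a);
    return 0;
}
```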

edit: Im drunk cant spell and cant be arsed to correct it
 
Sorry to revive this, but I never read through it ages ago when I posted it - I didn't realise it caused so many handbags :). So the outcome seems to be: leave it at the same size as the amount of RAM you have.
 

The outcome should be leave Windows to do it for you ;)

Edit: This article on TechNet refers to a terminal server, but the same concept applies. You simply cannot say your pagefile needs to be X big, or even memory size + X. It wholly depends on how you use your computer.

Technet said:
Return of Pagefile

Mole,
I am having problems with configuring the pagefile for terminal server. Recently when a user was trying to use their e-mail, they received a message "out of virtual memory", even though I had configured the pagefile to the recommended size. Any ideas as to what may have caused this?
When spreading the pagefile across all physical disks, does it matter if they are not all of equal size?
Pearson
Dear Pearson,
Mole is going to have to get out his stabbing stick to answer your letter. This is not to take a stab at you (Mole never resorts to physical aggression). No, Pearson, Mole's stabbing stick is what he uses to unearth possible causes of his reader's problem when he does not have enough information to construct a truly informed reply - like what version of NT you are running, or the Event ID that was in the Event Viewer. That said, Mole will take a stab at your problem nonetheless.
This sounds like the recommended pagefile size is just not big enough. Let's make sure we understand what "recommend" means. Mole's trusty Webster's Ninth New Collegiate Dictionary says it means "To present as worthy of acceptance or trial " (Mole emphasis added). The recommended pagefile size is just that – a starting point from which you determine whether the setting is appropriate for your system. Mole can't say something like "If you have 96 MB of memory, then you need to configure the pagefile to X MB." No way. Not even Mole can know what combination of applications or number of users will be accessing the server. Mole's Magic 8-Ball only gives the most vague answers, and then only to specific questions. For example, in response to the question "Should I make my pagefile size = to Memory + 12 MB?" Mole's Magic 8 Ball simply says "Outlook Not So Good."
Instead, you should monitor the size of the paging file(s) on the Terminal Server computer. To initiate the information capture, start Performance Monitor and chart object "Paging File," counter "% Usage Peak," and instance "_Total." Set the interval that PerfMon records data to something like 15 seconds so as to lessen the impact of taking this measurement on the server. This measurement displays the maximum percentage of the currently configured pagefile disk space that the pagefile has consumed during the Performance Monitor session. Observe this over a period of time on the Terminal Server.
If you have configured a pagefile for each disk, then you can monitor each individual pagefile usage in addition to the total. Do this by adding the "%Usage" counter and selecting the appropriate disk drive letter in the Instance drop-down.
So, what do you do with this information? If the Usage peak is approaching, say, 90 percent, that tells you that your paging file is probably sized about right. On the other paw, if it bumps up at near 100 percent, you probably need to increase the size of the paging file. Mole leaves no stone unturned, so here's a quick review of how the paging file is changed in NT 4.0: Start Control Panel, select the Performance tab, and in the Virtual memory section, click the "Change…" button. There you can specify the pagefile size in addition to creating a pagefile for each logical disk.
As to your question about "spreading the pagefile across all physical disks, does it matter if they are not all of equal size?" Mole's answer: It does not matter one whit.
A good source for information on the paging file is the Windows NT 4.0 Workstation Resource Kit, specifically Chapter 12 - Detecting Memory Bottlenecks.
Hope this helps, Mole

Burnsy
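(Not part of the TechNet article, but if you'd rather script Mole's check than sit watching PerfMon, a rough sketch with the PDH API reads the same "Paging File" counter. Counter names are locale-specific, so this assumes an English system:)

```c
#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    /* Same counter Mole charts in PerfMon: object "Paging File",
       counter "% Usage", instance "_Total". */
    if (PdhAddCounterW(query, L"\\Paging File(_Total)\\% Usage",
                       0, &counter) != ERROR_SUCCESS)
        return 1;

    if (PdhCollectQueryData(query) != ERROR_SUCCESS)
        return 1;

    if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                    NULL, &value) == ERROR_SUCCESS)
        printf("Pagefile in use: %.1f%%\n", value.doubleValue);

    PdhCloseQuery(query);
    return 0;
}
```

Run it now and again (or loop it with a Sleep) and you get roughly the same picture as Mole's "% Usage Peak" chart.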
 
That's what most folks think, saffyre - and to an extent that does happen.

*snip*

Cheers
ROfu


Then if you have 4GB (I've never filled 2GB) you have everything in memory, and it never has to be pulled from the PF.


I've never had a PF on XP since getting 2GB. As far as I can see it decreases load times, because the HD doesn't have to write to the PF when loading a game. It all goes into memory, which is why I bought it.
 
As has been said (repeatedly) before though, every single process (on a 32-bit x86 architecture) can address up to 4GB of memory, so if you're confident that you can fit the total memory needs of, say, photoshop, lightroom, firefox, thunderbird, skype, utorrent, Visual Studio.NET, the kernel, smartftp, excel and a running virtual machine etc. into your physical memory, then you won't need a pagefile :)

I've got 2GB of RAM, and I can say with certainty that everything I'm running at the moment has a total working set larger than 2GB. Without a pagefile, Windows would just fall over and I wouldn't be able to start any other applications.
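If anyone wants to see the actual numbers rather than take my word for it, here's a quick sketch (just the GlobalMemoryStatusEx call, nothing clever) that prints the commit limit - roughly physical RAM plus pagefile. With no pagefile the limit is roughly the size of your RAM, which is exactly why things stop launching:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };   /* dwLength must be set before the call */

    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    /* ullTotalPageFile is the commit limit: physical RAM plus pagefile(s).
       With no pagefile configured it is roughly the same as ullTotalPhys,
       so the total memory committed by all processes can't exceed RAM. */
    printf("Physical RAM : %llu MB\n", ms.ullTotalPhys     / (1024 * 1024));
    printf("Commit limit : %llu MB\n", ms.ullTotalPageFile / (1024 * 1024));
    printf("Commit free  : %llu MB\n", ms.ullAvailPageFile / (1024 * 1024));
    return 0;
}
```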
 

Luckily, the most I've seen a game use is 1.5GB, so all is good :)
 
I've been running Windows without a page file since around 2002, and Linux for far longer, without a problem - and without a _single_ HD thrashing session caused by paging ;) I like my programs staying in memory where they should be - thank you very much :)
 
So, NathanE, if you think that the 4GB page file is best, what do you set as the parameters then?

What's your initial and what's your limit (exactly 4000?)
 
I think on Linux the pagefile (swap) is used in an "overflow" way - different to Windows. When I used Linux I always just had a 2GB or so swap partition and it barely ever got used; in Windows I just leave it as system managed and it's fine.
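For what it's worth, if you want to check how much of that swap "overflow" is actually being used on a Linux box, a quick sketch with the sysinfo() call does it (same information free reports):

```c
/* Linux-only: report swap usage via sysinfo() */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;

    if (sysinfo(&si) != 0)
        return 1;

    /* Sizes are reported in units of mem_unit bytes. */
    unsigned long long unit  = si.mem_unit ? si.mem_unit : 1;
    unsigned long long total = (unsigned long long)si.totalswap * unit;
    unsigned long long used  = total - (unsigned long long)si.freeswap * unit;

    printf("Swap used: %llu MB of %llu MB\n",
           used / (1024 * 1024), total / (1024 * 1024));
    return 0;
}
```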
 
It totally depends on what you do with the PC. I've spent ages doing bench tests with and without different sizes etc. etc., and it's best to let Windows manage it - then you get the best of every world.
 