slower core

I did a massive post on the concept of virtual memory and the page file last year in the Windows forum but it got deleted, sigh...

I'll rewrite it in brief:

Windows, like all modern OSes, uses the concept of virtual memory - but more specifically, demand-paged virtual memory. Do not think of "virtual memory" or the page file as being akin to an overflow carpark; it is not. Virtual memory is the key to managing two types of memory storage medium simultaneously - in this case, physical RAM and the hard disk page file. (In Vista a third type is added in the form of USB flash drives.) Now think of virtual memory as an index in the back of a book. It is basically a massive list of page numbers, right? That is exactly what virtual memory is in a computer too, except that as well as a page number, each entry also records where the page is stored (i.e. RAM or page file) and various other fields, such as when that page of memory was last accessed. Just to clarify, a "page" is a 4KB chunk of virtual memory; the term is only used when talking about virtual memory (which is pretty much always...). At the lowest level, the corresponding chunks of physical memory (RAM) are called "frames".

Now consider that a machine may only have 128MB of RAM - a pretty pitiful amount by all standards these days... How can that machine "emulate" having a full 4GB of RAM? The answer is virtual memory. The virtual memory index contains 1,048,576 entries (assuming 32-bit non-PAE, with 4KB pages) - one for every page in a 4GB block of memory. Some of these (32,768 pages' worth, for 128MB) will point to real physical memory, but the rest - the vast majority - will of course point to the page file.
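The arithmetic above is easy to check yourself (the sizes are the figures from this post; the variable names are just for illustration):

```python
PAGE_SIZE = 4 * 1024            # 4 KB per page
ADDRESS_SPACE = 4 * 1024 ** 3   # 4 GB of 32-bit, non-PAE virtual address space
RAM = 128 * 1024 ** 2           # 128 MB of physical RAM

total_pages = ADDRESS_SPACE // PAGE_SIZE  # page-table entries needed to cover 4 GB
ram_frames = RAM // PAGE_SIZE             # pages that can be resident in RAM at once

print(total_pages)  # 1048576
print(ram_frames)   # 32768
```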

Now when an application or the CPU performs some memory I/O (which is pretty much every other nanosecond) it doesn't query physical memory directly... it first queries the virtual memory lookup table. It asks something like "OK Mr Virtual Memory, you told me earlier that my data is stored at address 0x12345678 and I want to read it now...", so then the virtual memory manager in the kernel of the OS will (theoretically) reply with "Yep, and that part of memory is currently in RAM at address 0xABCDEFFF, go for it!". Transaction done.

However, what if the page of memory the application requested was not in RAM but in the page file at the time? The virtual memory manager and application would be in a bit of a pickle, because it's impossible for the CPU to address memory stored in the page file. So what happens is the memory manager blocks the application from continuing to run. It then reads the requested virtual memory page from the page file and swaps it into RAM at a free location (this could require shifting a page out of RAM and into the page file to free space, but generally a certain percentage of RAM is kept available at all times for operations like these), then unblocks the application and lets it know where its data is stored in physical memory. This is called a "page fault". Not really a "fault" per se, but a substantial hit to memory performance, and the cause of all the silly "turn off your page file!!!111oneone" jokes.
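That lookup-then-fault flow can be sketched as a toy simulation (the `ToyMMU` class and its names are hypothetical, and the free-frame handling is heavily simplified - a real memory manager tracks frames separately and may have to evict a page first):

```python
class ToyMMU:
    """Toy model of the lookup: virtual page -> RAM frame, faulting pages in from disk."""

    PAGE_SIZE = 4096  # 4 KB pages

    def __init__(self):
        self.in_ram = {}        # virtual page number -> RAM frame number
        self.page_file = set()  # page numbers currently swapped out to disk
        self.faults = 0

    def read(self, virtual_address):
        page = virtual_address // self.PAGE_SIZE
        offset = virtual_address % self.PAGE_SIZE
        if page not in self.in_ram:
            # Page fault: "block" the application, swap the page in from the page file.
            self.faults += 1
            self.page_file.discard(page)
            self.in_ram[page] = len(self.in_ram)  # next free frame (simplified)
        # Translation done: return the physical address.
        return self.in_ram[page] * self.PAGE_SIZE + offset


mmu = ToyMMU()
first = mmu.read(0x12345678)   # first access faults the page into RAM
second = mmu.read(0x12345678)  # second access hits RAM directly, no fault
```

The point of the sketch is the branch: the fast path is a pure table lookup, and only a miss triggers the expensive swap-in.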

Now as I mentioned before, most OSes these days use demand-based paging. This is the policy the memory manager uses to decide which pages get to stay in physical RAM. The primary policy here is down to the frequency at which the page of memory is accessed (i.e. every split second, every second, every hour, every month?) but also when it was last accessed (i.e. the date/time). The memory manager can use this information to occasionally prune physical memory. If it spots that physical memory is getting a bit low then it can just look down its list of pages and find all the pages stored in RAM that haven't been accessed for >5 hours and then take them out and bung 'em in the page file. But also, if a page fault is pending and there's no free RAM then it can look down the list for the least desirable page of memory in RAM and swap it out to the page file. Then the page fault can succeed.
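The pruning pass described above - scan the list, swap out anything idle too long - looks roughly like this (a minimal sketch; the function name and data structures are invented for illustration):

```python
import time

def prune(in_ram, last_access, page_file, max_idle_seconds):
    """Swap out any resident page that hasn't been accessed recently enough."""
    now = time.monotonic()
    for page in list(in_ram):
        if now - last_access[page] > max_idle_seconds:
            page_file.add(page)  # bung it in the page file
            del in_ram[page]     # frame is now free for other pages


# One page last touched 10 seconds ago, one just now; prune anything idle > 5s.
in_ram = {7: 0, 8: 1}
last_access = {7: time.monotonic() - 10, 8: time.monotonic()}
page_file = set()
prune(in_ram, last_access, page_file, max_idle_seconds=5)
```

The same `last_access` data doubles as the tie-breaker when a page fault needs a frame and none is free: the least recently used page is the "least desirable" one to keep.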

Windows in particular takes a lot of variables into account with its demand-based paging. For instance: whether or not the application is maximised/minimised, its process priority, the average amount of CPU load it is creating, whether it is a foreground or background process (i.e. a Windows Service or not), the amount of file I/O the application creates, and loads more. The weightings also vary a lot depending on whether it's a desktop or server variant of the OS.

So the only way to conduct a thorough test of your memory is by using a tester that boots its own OS and does not run as an application inside Windows.
 
janesssssy said:
when you test memory just use memtest.
small FFT for the win.

Exactly - when I overclock I always start by using a low divider to see how high my memory goes first, testing with Memtest. Then I overclock the CPU and test with small FFTs to stress the CPU. You don't need to use the blend test because you already know what the RAM can do. :cool:
 
Don't forget about the on-board memory controller (which won't be stressed by small FFTs) on A64/X2/Opteron chips. The L2 cache and core are what small FFTs stress. Test small first, but then run large to be certain.
 