Warning to VMWare SMP users

Man of Honour · Joined: 4 Nov 2002 · Posts: 15,513 · Location: West Berkshire
Just thought I'd better post a warning about a problem I've seen on most/all of my VMs...

It looks like a new core - fahcore_a2 - has recently been posted which isn't quite so memory-friendly. I've had several workunits crash with out-of-memory errors, as I use quite a tight memory/disk space configuration on all my VMWare installs.

It's a bit of a quandary, this one. Do I go around and reconfigure all my VMs with more swap space and risk workunits thrashing (configuring more RAM isn't an option), or leave them as they are and let them get killed for not honouring the memory limits I've set (which admittedly are probably borderline)?

PS - I'm certainly not the only one to have hit this problem.
 
Ubuntu, and pretty low (~400MB) to stop it thrashing its head off.

Apparently fahcore_a2 needs about 800MB, though there are reports of them blowing up with 1GB RAM.

That means that, at least in theory, one client should be OK for me, but I'm set up to run two as that's how a1 work runs best.

Net result - lots of people who have relied on all this lovely a1 work are going to take a PPD hit with a2, or have to spend money on 4GB RAM.
 
I've seen it mentioned on the official forums. I've always allocated about 700MB per VM, but that's hardly going to be enough for 2 clients per VM. But I think you can avoid the new units by disabling the advanced methods option. Thankfully memory's cheap as chips these days.
 
Hmmmm... I may end up having to upgrade to Vista 64 in order to use all of the 4GB of RAM I have. I was hoping I wouldn't have to do this for a while; hopefully the diskless install footprint won't use a lot of memory.
 
I've managed to complete a couple of P2619 WUs on a VM with 650MB. Each fahcore_a2 process seems to use 200MB, so 4 processes + Ubuntu = a 1GB requirement for optimum performance :eek: And that's only running 1 client; usually 650MB is enough to run 2 clients with minimal use of the swap space.
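As a rough sanity check, that budget can be sketched as simple arithmetic. The per-process and OS figures below are just the observations reported in this thread, not official requirements:

```shell
# Rough memory budget for one fahcore_a2 SMP client in a VM.
# All numbers are observations/assumptions from this thread.
per_process_mb=200   # observed footprint of each fahcore_a2 process
processes=4          # one SMP client runs four processes
os_base_mb=200       # assumed headroom for Ubuntu itself

needed_mb=$(( per_process_mb * processes + os_base_mb ))
echo "One a2 client needs roughly ${needed_mb}MB"
```

On those assumptions a 650MB VM is well short for a single a2 client, which matches the swap use seen above.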

I got around 2200PPD for the P2619 compared to 2600PPD for 2 "normal" WUs, so the PPD hit isn't too bad, although I did notice that my other VM performance dropped 200-300PPD while the P2619s were being crunched.

Whilst these WUs can currently be avoided by not running adv methods, it was hinted at on the official forums that there are more of these types of WU to come, as Stanford are keen to make the most of the hardware out there.
 
Well that explains why the a2 core kept crashing with only 256MB & 380MB in my VM.

I guess 4GB looks like a must for my upgrade.
 
As jaric said, these have been moved back to -advmethods testing to sort out the memory allocation problems. So currently you can avoid them by setting the 'use advanced cores' setting to 'no' in the client config.
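For the classic v6 console client, one way to do this is a hedged sketch along these lines; the install path and exact prompt wording are assumptions from memory of that client:

```shell
# Sketch: stop the client requesting advanced-methods work.
# Install directory and binary name (~/folding, fah6) are assumptions.
cd ~/folding
./fah6 -configonly   # rerun configuration; answer 'no' to advanced methods
./fah6 -smp          # normal run, with no -advmethods on the command line
```

Launching without the -advmethods flag matters as well as the config answer, since the flag overrides the stored setting.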

When they come back out, I'm hoping to avoid them again just by running 256MB VMs. Gonna keep doing it until there are no decent SMP WUs left that will run in 256MB! I don't mind RAM-heavy WUs, as long as the points are reasonable... these were much worse than the ordinary 1760-pointers, even configured with 1GB each.
 
These are what have been killing my 512MB VMs. I've also set the advmethods to "no" and WU size to "small" but somehow I still seem to get them :(:confused:

I've ordered another 2GB of RAM but just realised I'm running 32bit Vista. :mad:
 
Try deleting the WU from the queue and then delete the "machinedependent.dat" file from the CPU folder.
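A hedged sketch of that cleanup, assuming the classic client's default file layout in ~/folding (an assumption) and that the client has been stopped first:

```shell
# Sketch: discard the assigned WU and the machine-dependent data.
# Paths are assumptions; stop the client before running this.
cd ~/folding
rm -f queue.dat machinedependent.dat   # drop the queue entry and machine data
rm -rf work/                           # discard the downloaded WU files
```

On the next start the client should re-identify itself and request fresh work.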
 
I'm presently rebuilding all my VMs using Ubuntu Hardy Heron (previously I had a mix of Gutsy Gibbon and Edgy Eft VMs with various configurations). Done one so far and it's gone well.

The other change I'm making is to configure each VM with two disks - one for the OS and folding, and the other for swap. This means I'll be able to dynamically adjust the swap size without an OS rebuild.
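A minimal sketch of the swap-disk half of that setup, assuming the second virtual disk appears in the guest as /dev/sdb1 (the device name is an assumption; check with fdisk -l, and run as root):

```shell
# Sketch: dedicate a second virtual disk partition to swap.
# /dev/sdb1 is an assumed device name - verify before running.
mkswap /dev/sdb1                                   # write a swap signature
swapon /dev/sdb1                                   # enable it immediately
echo '/dev/sdb1 none swap sw 0 0' >> /etc/fstab    # enable at every boot
```

Resizing swap later is then just a matter of growing the virtual disk in VMWare and redoing mkswap, with no OS rebuild.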

It's going to knacker my output for a few days while I do it, as I'm letting each VM run dry and then replacing it when I have the time (which may be several days later). As a result, ms9cw got away. :(

Also looking at how much 4GB for my media PC will cost, then I'll move the 2GB from there to my server (in the hope that it'll not break the overclock on that).
 