Memory Management

Is it something I've done or does memory management just kind of suck in Linux?

E.g. on Windows I can fire up multiple VMs with a collective allocation of memory far exceeding the amount of physical RAM in the system, but if I try the same thing on Linux, even with a large swapfile, everything goes a bit haywire until eventually the OOM killer kills one of the VMs. It's almost like Kubuntu totally ignores the swapfile.

FWIW I've tried changing the default swappiness and setting the swapfile to 20GB in the past (currently using the default 60 and 2GB), but it didn't seem to have any effect, which is why I say it's almost like it's not using the swapfile.

So is it me or is Linux's memory management a bit naff?
 
What sort of VMs? KVM/QEMU, or? What are you using to manage them?

Are you using hugepages at all? Dedicated hugepages can't be swapped for sure; I'm not sure about transparent, but would suspect they can't either. Dedicated hugepages have to be pre-allocated, so you'd know if you were using them. But your VM management tool might try to use transparent hugepages by default.
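
If you want to check, something like this should show whether any hugepages are in play (a rough sketch; the exact fields vary a bit by kernel):

    # dedicated (pre-allocated) hugepages - a non-zero HugePages_Total means some are reserved
    grep Huge /proc/meminfo

    # transparent hugepages - the value in brackets is the current mode
    cat /sys/kernel/mm/transparent_hugepage/enabled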
 
It's not so much the VMs themselves, as I can easily dedicate less memory; it's whether how badly OOM situations are handled is down to something I've done (or not done), or whether Linux's memory management is a bit naff.

OOM situations can happen regardless of what software is running; you'd (I'd) expect them to be handled more gracefully - as in actually using the swapfile and only killing a process as a last resort, when there's no virtual memory left.
 
If you are using hugepages for the VMs, you've effectively told the memory manager not to swap the VMs out.

Memory for other processes, not using hugepages, will get paged out; but if you fill physical memory with hugepages, there will be nowhere for those other processes to get paged back in so they can run.

It's possible that whatever you are using to create the VMs might, by default, use transparent hugepages even if you aren't explicitly asking for them (KVM/QEMU will if they are supported by the kernel/CPU). So it's possible you're implicitly filling physical memory with unswappable pages.

If you are unsure if you are using hugepages, or how to avoid using them with whatever VM technology you are using, you can disable them at kernel level. Then the memory management will behave as you are expecting - VMs will also get paged out as needed.
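
Roughly, disabling transparent hugepages looks something like this (the GRUB step assumes an Ubuntu-style setup, so adjust for your bootloader):

    # disable until the next reboot
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled

    # to make it persistent, add transparent_hugepage=never to the kernel command line
    # (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub), then:
    sudo update-grub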
 
AFAIK hugepages aren't even enabled/installed on Kubuntu 22.04, so I'm unsure why you're fixating on it being a VM issue. FWIW I'm using VirtualBox, and again I don't think that supports hugepages.

Like I said, it's got nothing to do with VMs; I only mentioned those because they happened to be what was using the most memory. AFAIK hugepages would need to be enabled at both the OS and VM level, and I'm pretty sure they're not enabled by default and haven't been enabled since, so IDK why you keep mentioning them. :confused:
 
I didn't realise you weren't interested in possible reasons why you might have seen the behaviour you did, but rather were just hoping to reinforce your unfounded assertion that Linux memory management is naff.

Sorry, I'll leave you to it.
 
What on earth are you talking about? You've not provided any reason; you've just prattled on about VMs and hugepages. Those aren't reasons, as they're totally unrelated to how Linux handles OOM situations. If I'd known someone was going to completely ignore the actual issue and fixate on the software that just happened to be running, and getting killed, when the system ran out of memory, I wouldn't have mentioned it.

How about this: when I open 6k tabs in Firefox and run out of memory, it doesn't seem to use the swapfile, and the desktop locks up until the OOM killer kills Firefox.
 
What on earth are you talking about? You've not provided any reason; you've just prattled on about VMs and hugepages. Those aren't reasons, as they're totally unrelated to how Linux handles OOM situations. If I'd known someone was going to completely ignore the actual issue and fixate on the software that just happened to be running, and getting killed, when the system ran out of memory, I wouldn't have mentioned it.

How about this: when I open 6k tabs in Firefox and run out of memory, it doesn't seem to use the swapfile, and the desktop locks up until the OOM killer kills Firefox.
Your OP is talking about how your overprovisioned VMs are being killed off... it doesn't seem that unreasonable for someone to point out a type of memory usage that hypervisors on Linux can take advantage of, which can't be paged out of memory, and suggest you check whether you're using it...

If it seems like your swap isn't being used, have you actually checked whether it is? From the CLI you could run htop or watch free and then start up some memory consuming processes and see.
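
Something along these lines, for example (nothing fancy, just watching the numbers move while you load things up):

    # show configured swap devices/files and how much of each is currently used
    swapon --show

    # or watch overall memory/swap usage refresh every couple of seconds
    watch -n 2 free -h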
 
I agree, it doesn't seem unreasonable. But when someone says OOM situations can happen regardless of what software is running, and that they can easily dedicate less memory, you'd kind of expect them to take that on board and realise that it's not the VMs that are the cause of the problem - that their issue is not with the VMs but with how Linux handles OOM situations - and, like you have done, to offer advice on how to check whether the swapfile they suspect isn't being used is in fact being used or not.

To offer advice on possible better solutions than simply watching a half-frozen desktop for 5+ minutes, waiting for the OOM killer to kill the process that's using the most memory and losing any data.

Not to take a holier-than-thou attitude and blame the user for encountering something that seems to be a fairly well-known issue.
 
As it's Linux, there are also plenty of solutions: https://github.com/hakavlad/nohang

You're still missing the (eventual) point though, of how you expect any OS to handle running out of memory completely... Something has to be 'removed' and there's no easy way to define what that is.
You are, by definition, allocating more resources than can possibly be handled, and not providing any way to determine how the system should cope.

If you were correctly provisioning all the way down, this would not be an issue, but as is typically the way these days, it's a case of 'expect the OS to deal with it'.

Anyone over-provisioning servers, especially in production, is a total hack. It means you are not catering for the adverse effects, and just 'hoping' it goes okay.
 
You're still missing the (eventual) point though, of how you expect any OS to handle running out of memory completely... Something has to be 'removed' and there's no easy way to define what that is.
Like I said, I'd expect the OS to handle it more gracefully than locking up the desktop for an inordinate amount of time until it eventually kills whatever is using the most memory.

A system-managed swapfile would be a start; even a popup saying the system has run out of memory and asking/telling you to close a program would be a better way of handling things than how they're currently handled. Don't get me wrong: despite VersionMonkey flying off the handle the moment anyone dared to point out a well-understood flaw, I think Linux does a lot of things better than Windows or *insert OS here*. However, IMO if people refuse to acknowledge such issues they're never going to be addressed; simply ignoring a problem or pretending it doesn't exist doesn't solve anything. (After realising asking for advice here was a waste of time I did a lot more research and had already come across nohang, but it's little more than a more effective OOM killer, and ungracefully killing programs should never be acceptable. Thanks for the link, I guess.)

Yes, it's a case of 'expect the OS to deal with it'. That's because, if Linux is ever going to go from a niche OS to something 'normal' people can use as a desktop OS, it needs to cater to users who don't have days, months, or even years to learn the ideal way to set things up and what to do, or not do, in certain scenarios - and/or who get what I've noticed are sadly the all-too-common snotty responses from some in the Linux community: that you're "a total hack", essentially telling the user it's happening not because of something the OS should really be handling more gracefully but because they're stupid. Because that's sure to endear people to both the community and the OS itself, isn't it.

I mean, it may come as a surprise, but people using a desktop OS aren't going to spend time making sure they have enough memory, provisioning just the right amount, adjusting swapfiles, etc., etc., just so they can launch a program. Handling OOM situations gracefully is something other OSs sorted out more than two decades ago.

e: Oh, and BTW it has nothing to do with over-provisioning or VMs; like I said, you can bring the desktop to its knees by simply opening 1k tabs in Firefox or even by using 'echo {1..1000000000}'. I'm not saying you should be able to do those without issues, I'm saying that if you did the OS should not take control away from the user because the DM has become unresponsive, and it should not just arbitrarily kill whatever is using the most memory. The desktop should stay responsive so the user can take corrective action and/or decide how they want to address the issue - whether they want to lose hours of work in A, a few minutes of work in B, or something they were doing that wasn't important.

Taking control away from the user is bad no matter what way you cut it.
 
Taking control away from the user is bad no matter what way you cut it.
Not if the user initiated those problems in the first place.

I guess I come from a place/time where I don't expect Linux to be just 'usable by anyone'. It's following in the rich history of Unix computing, and whilst it may have acquired a lot more user-friendliness over the years, that's not the main purpose.
Remember that Linux is the (kernel) system space and most everything else is in user space; as such, a lot of the 'applications' are also not written to handle OOM.

There is a swapfile - if you configure it. You can change the 'swappiness' if you want. Is it simple to do for non-tech folks ? No. Should it be ? I don't think so.
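
For what it's worth, it's only a couple of commands once you know where to look (the paths and values here are just an example; distros differ on where they persist sysctl settings):

    # check the current value
    cat /proc/sys/vm/swappiness

    # change it for the current boot
    sudo sysctl vm.swappiness=10

    # persist it across reboots
    echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf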

I would absolutely love a mainstream alternative to Windows and Mac, that the average person could easily and 'safely' use, but I don't expect any *nix or similar OS to be that.

I'm not saying you should be able to do those without issues, I'm saying that if you did the OS should not take control away from the user because the DM has become unresponsive
The DM is not the OS. It's a Userland process.

It sounds like you want Linux to be like Windows or Mac, but that's not what it was designed to be. Sure, there are still improvements to be made, but it is amazingly good at what it does. There's a reason most servers run Linux or *BSD or whatever - and it's not just because of licensing costs.

Linux can be used, very well these days, as a Desktop environment. It just requires people to understand how computers work. If it becomes 'dumbed down' in the same way Windows and Mac are (less user control by the day), it will be terrible.

Be careful what you wish for. If you want a simple 'managed' system, choose Windows or Mac. If they do all of these things so well, why choose Linux ?
 
Not if the user initiated those problems in the first place.

I guess I come from a place/time where I don't expect Linux to be just 'usable by anyone'. It's following in the rich history of Unix computing, and whilst it may have acquired a lot more user-friendliness over the years, that's not the main purpose.
Remembering that Linux is the (Kernel) System space, and most everything else is in User space, and as such a lot of the 'applications' are also not written to handle OOM.
All problems are user-initiated; installing an OS just to look at it, not to use it, is a bit pointless. If it's not meant to be user-friendly, then why bother with a DM or even a GUI? It's acquired a lot more user-friendliness over the years because that's what the people who use it want. If users want an un-user-friendly OS then you'd assume they know enough to make it as un-user-friendly as they like.

The kernel/user space split is something I thought about, and I understand the reasons for that separation. But even with the need/desire to keep the DM in user space, there are better ways of reducing the impact of one misbehaving program in user space on the other programs running in user space (off the top of my head: giving the DM a higher priority and/or reserving enough memory for it, or even pausing the program with the highest memory usage when there's only 5%, or whatever amount, of available memory remaining).
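As a rough illustration of the sort of thing I mean (sddm.service is just an assumption for a Plasma setup - substitute whatever display manager you actually run, and whether this helps much in practice is another question), systemd already has knobs for making the DM a far less attractive OOM target and reserving it some memory:

    sudo systemctl edit sddm.service
    # then, in the override that opens, add:
    #   [Service]
    #   OOMScoreAdjust=-900
    #   MemoryLow=512M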
There is a swapfile - if you configure it. You can change the 'swappiness' if you want. Is it simple to do for non-tech folks ? No. Should it be ? I don't think so.

I would absolutely love a mainstream alternative to Windows and Mac, that the average person could easily and 'safely' use, but I don't expect any *nix or similar OS to be that.
Yea, I played around with swappiness, but that really only addresses the propensity of the kernel to use swap space.

I would love there to be an alternative too, so I have to ask why you don't expect any *nix or similar OS to be that? Because from my perspective it should be: the hardcore 'un-user-friendly' types can ignore/remove the user-friendly features, while others can leave the OS to do its thing without having to worry. I said it in another thread, but IMO one of the things Linux has nailed is how modular it is, so I can't see why anyone wouldn't want or expect it to cater to a wider audience.
The DM is not the OS. It's a Userland process.

It sounds like you want Linux to be like Windows or Mac, but that's not what it was designed to be. Sure, there are still improvements to be made, but it is amazingly good at what it does. There's a reason most servers run Linux or *BSD or whatever - and it's not just because of licensing costs.

Linux can be used, very well these days, as a Desktop environment. It just requires people to understand how computers work. If it becomes 'dumbed down' in the same way Windows and Mac are (less user control by the day), it will be terrible.

Be careful what you wish for. If you want a simple 'managed' system, choose Windows or Mac. If they do all of these things so well, why choose Linux ?
I know: See above.

I disagree with your "If it becomes 'dumbed down' in the same way Windows and Mac are (less user control by the day), it will be terrible." for reasons I've already given: if it can have dozens of DEs then it can have dumbed-down and more advanced versions. Heck, it already has that with distros like Mint, Manjaro, and Linux From Scratch.
 
I disagree with your "If it becomes 'dumbed down' in the same way Windows and Mac are (less user control by the day), it will be terrible." for reasons I've already given: if it can have dozens of DEs then it can have dumbed-down and more advanced versions. Heck, it already has that with distros like Mint, Manjaro, and Linux From Scratch.
It has 'user-friendly' distros because some people spend their own time making things easier to their own vision. You are, of course, welcome to do the same.
What you shouldn't expect is for GNU/Linux to just 'become' that.

This is an OS, based upon what has gone before, and progressed by many for free. It has become what those people want. If enough people wanted it to be as you suggest, they would submit patches with a solid reasoning as to what is happening, and if those patches are not accepted, would fork the OS and create their own - that's the world of Open Source.

It's clear that the problems you state are not common enough - or impactful enough - to warrant change, or those who are affected have just patched for themselves or gone elsewhere.

For the vast majority of users, this is not a problem.
 
It has 'user-friendly' distros because some people spend their own time making things easier to their own vision. You are, of course, welcome to do the same.
What you shouldn't expect is for GNU/Linux to just 'become' that.
I don't, and I think I made that quite clear.
This is an OS, based upon what has gone before, and progressed by many for free. It has become what those people want. If enough people wanted it to be as you suggest, they would submit patches with a solid reasoning as to what is happening, and if those patches are not accepted, would fork the OS and create their own - that's the world of Open Source.
So what you're essentially saying is that unless you're a programmer and/or able to submit patches, you're not welcome in our community. Nice.
It's clear that the problems you state are not common enough - or impactful enough - to warrant change, or those who are affected have just patched for themselves or gone elsewhere.

For the vast majority of users, this is not a problem.
Yea, Artem S Tashkinov disagrees with you, as do all the other people who say it's a problem in the links I posted earlier. I mean, if it wasn't a problem you wouldn't have people developing this...
As it's Linux, there are also plenty of solutions: https://github.com/hakavlad/nohang
Would you?

Honestly, this sort of 'it's not a problem', 'it's the user', etc., etc. toxic attitude from some people in the Linux community is what's held it back for so long IMO. Having people essentially tell you it's your problem not ours, we're not going to change, and if you don't like that you can F-off doesn't exactly foster a friendly, helpful community. In fact it actively puts people off from continuing to use Linux - I know it has me.
 
Honestly, this sort of 'it's not a problem', 'it's the user', etc., etc. toxic attitude from some people in the Linux community is what's held it back for so long IMO. Having people essentially tell you it's your problem not ours, we're not going to change, and if you don't like that you can F-off doesn't exactly foster a friendly, helpful community. In fact it actively puts people off from continuing to use Linux - I know it has me.
How often do Microsoft, Apple or Google just 'do' what some people ask ?

It's not about whether it's a problem or the user, it's about how 'much' of a problem and who is going to fix it.

Linux has in no way been 'held back' for what it is. It's powering most of the services around the world. It just has no obligation to be what some people want it to be.

This is relevant

Nobody is telling you to F-off. Just that you shouldn't expect something to be 'fixed' just because you and others say it is broken. There are usually reasons why 'simple' things aren't changed, and I guarantee that nobody would refuse such a fix in the kernel if it was that straightforward. You should probably ask 'why' this is still the way it is, and the answer is most likely "... because nobody has offered a patch that fixes it whilst not breaking anything else".

The 'community' is helpful, but they have no reason to just do what others demand.
 
How often do Microsoft, Apple or Google just 'do' what some people ask ?
All the time. Why do you think they addressed the problem of running out of memory some 20-30 years ago, why do you think they've worked on improving it throughout those years, and why do you think they're now in a situation where the chances of an errant program locking up the entire DE or crashing the system are so slim?
It's not about whether it's a problem or the user, it's about how 'much' of a problem and who is going to fix it.
Well, it's enough of a problem for the OOM killer to be written into the kernel, and it's 'much' of a problem because otherwise you wouldn't have multiple packages that seek to address the situation in a slightly more efficient manner.

As for who's going to fix it, that really depends on which 'fix' is chosen. Personally I'd go for what I assume would be the easiest route and just have the option of a system-managed swapfile; storage space is plentiful and cheap these days, and with the proliferation of SSDs it's not exactly slow like spinning rust used to be. But I assume that would be something that would need to be supported/managed by the kernel. Either way, like I keep saying, simply locking up the DE and closing a program without warning is far from ideal, so pretty much anything better than that would be an improvement.
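
For what it's worth, even just bolting on extra swap by hand when you know you're about to need it is only a few commands (the path and size are arbitrary examples):

    sudo fallocate -l 50G /swapfile2
    sudo chmod 600 /swapfile2
    sudo mkswap /swapfile2
    sudo swapon /swapfile2

Something that did that sort of thing automatically is basically all I'm asking for.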
Linux has in no way been 'held back' for what it is. It's powering most of the services around the world. It just has no obligation to be what some people want it to be.

This is relevant

Nobody is telling you to F-off. Just that you shouldn't expect something to be 'fixed' just because you and others say it is broken. There are usually reasons why 'simple' things aren't changed, and I guarantee that nobody would refuse such a fix in the kernel if it was that straightforward. You should probably ask 'why' this is still the way it is, and the answer is most likely "... because nobody has offered a patch that fixes it whilst not breaking anything else".

The 'community' is helpful, but they have no reason to just do what others demand.
And yet that's exactly what it does; there are so many distros out there that saying "It just has no obligation to be what some people want it to be" is frankly laughable.

No, that is not relevant; it's a based rant, as the comments on the gist say, because saying...
The only people entitled to say how open source 'ought' to work are people who run projects, and the scope of their entitlement extends only to their own projects.
Is a demonstration of exactly the toxicity that some in the Linux community are so well known for. If the only people entitled to say how open source 'ought' to work are people who run projects, then don't publish your work; don't make it open source.

Personally I'd say this is more relevant; it's more than 8 years old...
Then three years later...
And 11 months ago, so it's sadly still relevant today.

If nobody is telling me to F-off, then read back through this thread and quote all the posts where someone has offered possible solutions, given instructions on what to do or check, pointed to a package that may be better than the default OOM killer, or suggested a way to stop the DE from locking up for ages only for it to close an arbitrary program. Because I count two: one suggesting "you could run htop or watch free and then start up some memory consuming processes and see", and another from yourself saying "there are also plenty of solutions: https://github.com/hakavlad/nohang" - to a problem you also seem to say isn't much of a problem and so shouldn't be addressed, because you're seemingly scared it will make Linux more user-friendly, since according to you it's not meant to be used by users.

If the 'community' has no reason to just do what others 'demand' (I mean, nobody is just demanding for no reason, but whatever), then like I said, don't pretend that there's a community; don't invite input, no matter how small, from others.

I mean, I'm fine with yours and others' attitude, like the link you posted; if you and others want to act all elite because you're scared Linux may change, then you be you. But if you're not going to welcome input or accept that things could be done better, then don't share your work; don't make it public.
 
Was this actually a genuine question, or were you just setting this thread up so you could go on a little rant?

You don't seem that interested in troubleshooting your issue, or trying the options that have been shared here.

Personally I use a mixture of Windows, Linux and MacOS in normal desktop use and with the type of work I do I can't say that I've had my OS run out of resource and lock up in a very long time; but I'm also not running resource starved systems. Conversely in the server space I've managed many resource starved Windows servers and once they run out of physical memory and fill up their page file because of a bad app, guess what - they become completely unresponsive. Running out of memory is always going to lead to a bad time, whatever you're using.

If Linux isn't working for how you want to use your computer then why waste all this energy on it, just use something else that works for you.
 
Was this actually a genuine question, or were you just setting this thread up so you could go on a little rant?
Yes it was - until, that is, the three of you decided to do what some in the Linux community are well known for.
You don't seem that interested in troubleshooting your issue, or trying the options that have been shared here.
You're welcome to show me these troubleshooting steps, these suggestions on what to try that would help identify or alleviate the issues. Because like I said, I count two - count them, two - posts that have made any sort of suggestion on what to actually do: one from yourself suggesting "you could run htop or watch free and then start up some memory consuming processes and see", and a link from Koalaboy to a GitHub repository for software that seeks to be a better OOM killer, along with other links about a problem that (s)he says isn't a problem.

As the saying goes, be the change you want to see. If you wanted this thread to be about troubleshooting the issue, maybe you should've offered more troubleshooting suggestions than just checking to make sure it is actually using the swapfile.
Personally I use a mixture of Windows, Linux and MacOS in normal desktop use and with the type of work I do I can't say that I've had my OS run out of resource and lock up in a very long time; but I'm also not running resource starved systems. Conversely in the server space I've managed many resource starved Windows servers and once they run out of physical memory and fill up their page file because of a bad app, guess what - they become completely unresponsive. Running out of memory is always going to lead to a bad time, whatever you're using.

If Linux isn't working for how you want to use your computer then why waste all this energy on it, just use something else that works for you.
Perfect example of what I've been saying, thanks. ;)

e: If, as an example, you guys/girls wanted this to be about troubleshooting, then I would've expected something along the lines of: verify whether the swapfile is actually being used with top, htop, or free -m; some sort of guidance on how to check if hugepages are being used (which BTW I still don't have a clue how to check, despite more than a week of Googling); maybe a link on how to increase the swapfile size and whether, other than using more storage space, there's any issue with over-sizing it (50-100GB).

Even just an acknowledgment that how Linux handles OOM situations is far from ideal, but it is what it is, so until something better comes along you may want to look into zram and/or zswap - with a brief explanation of what they do, maybe followed by a short discussion to clarify or attempts to answer reasonable questions. Maybe, if it were me from the future, where I've spent the last week and a half learning about how Linux deals with this and the possible solutions/workarounds, I would've suggested looking into swapspace if I didn't mind a temporary lack of system responsiveness while more swap is added or taken away.
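
For what it's worth, from that week of reading, a quick throwaway zram swap setup looks roughly like this (assuming zram is available in your kernel and /dev/zram0 is the device zramctl hands back; the size and compression algorithm are just examples):

    sudo modprobe zram
    sudo zramctl --find --size 8G --algorithm zstd
    sudo mkswap /dev/zram0
    sudo swapon -p 100 /dev/zram0

Distros also ship packages (zram-tools, zram-config, etc.) that set this up persistently, though the exact package name varies.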

And no, there's little point in offering advice on those things now, because like I've been saying, instead of making this thread about actually helping someone, you three have made it all about blaming the user, all about how special you are because it doesn't happen to you, all about how the problem isn't actually a problem - and even if it is, we, the Linux community, ain't going to do anything, you can't make us, we don't owe you nuffink, because Linux isn't developed to be used by users.
 
Even just an acknowledgment that how Linux handles OOM situations is far from ideal, but it is what it is, so until something better comes along you may want to look into zram and/or zswap - with a brief explanation of what they do, maybe followed by a short discussion to clarify or attempts to answer reasonable questions. Maybe, if it were me from the future, where I've spent the last week and a half learning about how Linux deals with this and the possible solutions/workarounds, I would've suggested looking into swapspace if I didn't mind a temporary lack of system responsiveness while more swap is added or taken away.
I'm sorry that the links I posted were not enough to help along the way. For anything more, I'd have to spend a lot of time actually trying to reproduce your issue, and then further troubleshooting, and tech support isn't something I do outside of work.

First search on DDG for "how to check if hugepages are used in linux": https://access.redhat.com/solutions/320303

On that note, I'm not trying to be facetious or say you aren't having problems, but since I've been using Linux from day one (when it was first announced on usenet), through today both at work and at home, I've never hit such issues.
Yes, I see OOM, but it's because applications or systems are incorrectly provisioned. No, I never over-provision VMs. This does not mean you (or others) do not have a problem, but it's one I have never experienced and the link I posted to the github repo is where I personally would start if I had to fix something. Is that suitable for 'average' users ? I have no idea, but there's a reason my wife and daughter use Macs.

You obviously - like many - have issues with 'the community', but that's like having issues with 'anonymous' or 'society'. Perhaps ask the maintainers of the VM software, or the distro you use, or the kernel maintainers, as they are the people who know this 'inside out'. The 'community' is just random people, most of whom (especially Arch users) have learned for themselves through trial and error, with some assistance online, trying and failing a lot, and that's likely why you get the responses that you do. Again, I'm sorry if this isn't what you're looking for, but it's how things are.

I sincerely wish you the best of luck trying to solve your problem - most of us have been through many similar in our journey - and hope you find a solution you are happy with.

I'm out.
 