When your Linux machine runs out of memory, the kernel invokes the Out of Memory (OOM) killer to free some memory. This is often encountered on servers running a number of memory-intensive processes. In this post, we dig a little deeper into when the OOM killer gets called, how it decides which process to kill, and whether we can prevent it from killing important processes like databases.
How does the OOM Killer choose which process to kill?
The Linux kernel gives each running process a score called `oom_score`, which indicates how likely the process is to be terminated when available memory is low. The score is proportional to the amount of memory used by the process: 10 × the percentage of memory the process uses, so the maximum score is 10 × 100% = 1000. In addition, a process running as a privileged user gets a slightly lower `oom_score` than a normal user's process with the same memory usage. In earlier versions of Linux (the v2.6.32 kernel), a more elaborate heuristic calculated this score.
The `oom_score` of a process can be found in the `/proc` directory. Let's say that the process id (pid) of your process is 42; then `cat /proc/42/oom_score` will give you the process's score.
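To see the relationship for yourself, here is a quick check. This is only a rough sketch: modern kernels factor in more than raw memory usage (and any score adjustment, covered below, shifts the result), and pid 42 is just a placeholder for a real pid on your system.

```bash
# Show the process's share of physical memory (the %MEM value).
ps -o pid= -o pmem= -o comm= -p 42
# Compare: the score should be in the ballpark of 10 * %MEM.
cat /proc/42/oom_score
```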
Can I ensure some important processes do not get killed by the OOM Killer?
Yes! The OOM killer checks `oom_score_adj` to adjust its final calculated score. This file is present at `/proc/$pid/oom_score_adj`. You can write a large negative value to this file to ensure that your process gets a lower chance of being picked and terminated by the OOM killer. The `oom_score_adj` value can vary from -1000 to 1000. If you assign -1000, the process can use 100% of memory and still avoid getting terminated by the OOM killer. On the other hand, if you assign 1000, the Linux kernel will keep killing the process even when it uses minimal memory.
Let's go back to our process with pid 42. Here is how you can change its `oom_score_adj`:

```bash
echo -200 | sudo tee /proc/42/oom_score_adj
```
We need to do this as the `root` user or with `sudo` because Linux does not allow normal users to reduce the OOM score. You can increase the OOM score as a normal user without any special permissions, e.g. `echo 100 > /proc/42/oom_score_adj`.
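As a concrete example, here is a minimal sketch of shielding a database from the OOM killer. The process name `postgres` and the value -800 are illustrative, not a recommendation; substitute whatever fits your setup.

```bash
#!/bin/bash
# Lower the OOM score adjustment of every process named "postgres"
# so the OOM killer strongly prefers other victims.
for pid in $(pgrep -x postgres); do
    echo -800 | sudo tee "/proc/$pid/oom_score_adj" > /dev/null
done
```

Note that the setting applies only to those specific pids: if the database restarts, the new processes start with the default of 0 and you must apply the adjustment again.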
There is also another, less fine-grained score called `oom_adj`, which varies from -16 to 15. It is similar to `oom_score_adj`; in fact, when you set `oom_score_adj`, the kernel automatically scales it down and calculates `oom_adj`. `oom_adj` has a magic value of -17 which indicates that the given process should never be killed by the OOM killer.
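You can watch this scaling happen from a shell. A minimal sketch, using the current shell (`$$`) as the target process; the exact mapping of intermediate values depends on the kernel's integer scaling and may vary by version:

```bash
# Write oom_score_adj and read back the automatically derived oom_adj.
echo -1000 | sudo tee /proc/$$/oom_score_adj
cat /proc/$$/oom_adj      # -17, the "never kill" value

echo 1000 | sudo tee /proc/$$/oom_score_adj
cat /proc/$$/oom_adj      # 15, the legacy maximum

echo 0 | sudo tee /proc/$$/oom_score_adj
cat /proc/$$/oom_adj      # back to the default of 0
```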
Display OOM scores of all running processes
This script displays the OOM score and OOM-adjusted score of all running processes, in descending order of OOM score:

```bash
#!/bin/bash
# Displays running processes in descending order of OOM score.
printf 'PID\tOOM Score\tOOM Adj\tCommand\n'
while read -r pid comm; do
    # Skip processes that have exited and those with a zero score.
    [ -f "/proc/$pid/oom_score" ] || continue
    score=$(cat "/proc/$pid/oom_score" 2>/dev/null)
    [ -n "$score" ] && [ "$score" != 0 ] || continue
    printf '%d\t%d\t\t%d\t%s\n' "$pid" "$score" \
        "$(cat "/proc/$pid/oom_score_adj")" "$comm"
done < <(ps -e -o pid= -o comm=) | sort -k 2nr
```
Check if any of your processes have been OOM-killed
The easiest way is to `grep` your system logs. On Ubuntu: `grep -i kill /var/log/syslog`. If a process has been killed, you may get results like `my_process invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0`.
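If your distribution uses systemd, the kernel log may not land in `/var/log/syslog`. A couple of alternatives that work broadly (a sketch; adjust the search patterns to taste):

```bash
# Search the kernel messages collected by journald.
journalctl -k | grep -i -e 'out of memory' -e 'oom-killer'
# Or search the kernel ring buffer directly (may require root).
dmesg | grep -i oom
```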
Caveats of adjusting OOM scores
Remember that an OOM kill is a symptom of a bigger problem: low available memory. The best way to solve it is by increasing the available memory (e.g. better hardware), moving some programs to other machines, or reducing the memory consumption of programs (e.g. allocating less memory where possible).
Too much tweaking of the OOM-adjusted scores can backfire: the OOM killer may end up killing processes more or less at random while still failing to free enough memory.
References
- proc man page
- https://askubuntu.com/questions/60672/how-do-i-use-oom-score-adj/
- Walkthrough on which part of Linux code is called
- Classic LWN article (a bit dated)
- Invoking the OOM killer manually
Top comments (8)
Thanks Raunak, interestingly in 20+ years of developing for Linux systems I've never played with the oom_score_adj feature, not even experimentally, never mind in production :) This path may well end up as a tragedy of the commons, where every process lowers its score drastically - cf: IP packets have a user-settable priority field, guess what it always is?
I feel that your caveat is worth restating:
"Remember that OOM is a symptom of a bigger problem - low available memory."
I would add that well before the OOM killer does its thing, you should be getting alerts from your monitoring (you have monitoring in production, right?), and the system will likely be swapping madly (you have swap space, right?) - it's like working in treacle, but it buys you time to act!
Your fixes are good for keeping the show on the road - throw money at it in the form of more hardware / VMs, to buy more time to resolve the design / implementation errors...
I /have/ had to track down and fix numerous memory leaks (usually me being a lazy C coder), poor allocation strategies (looking at you long running Python apps!), and poor configuration choices (let's allow 1000 Apache instances!) to fix memory issues - eg: recently resorting to scheduled restarts of the Azure Linux agent (waagent) to prevent it eating my small server every 48-72 hours.
May the OOM never strike twice :)
edited to add: Julia (@b0rk) has an excellent series of Linux drawings, including one on memory management: drawings.jvns.ca/
Agreed! There is no substitute for good monitoring. It catches many issues before they become bigger problems. Ultimately, we must fix the root cause of high memory usage, which is generally poor design/architecture.
What you said about the tragedy of the commons is exactly what happened to `nice` scores for process priority.

I toyed with that command a bit - I wanted to get the RSS and username in there, keep the sort, and include how many procs were included and skipped. Something like (fewer than 30 procs are shown due to trimming):
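A sketch of one way such a variant might look; the column choices and counting logic here are illustrative assumptions, not the commenter's original snippet:

```bash
#!/bin/bash
# Variant of the article's script: adds RSS (KB) and user to each row,
# keeps the descending sort, and reports how many procs were shown vs. skipped.
total=$(ps -e -o pid= | wc -l)
rows=$(
    while read -r pid user rss comm; do
        score=$(cat "/proc/$pid/oom_score" 2>/dev/null) || continue
        [ "$score" != 0 ] || continue
        printf '%s\t%s\t%s\t%s\t%s\t%s\n' "$pid" "$score" \
            "$(cat "/proc/$pid/oom_score_adj")" "$rss" "$user" "$comm"
    done < <(ps -e -o pid= -o user= -o rss= -o comm=)
)
printf 'PID\tOOM Score\tOOM Adj\tRSS(KB)\tUser\tCommand\n'
printf '%s\n' "$rows" | sort -k 2nr
shown=$(printf '%s\n' "$rows" | grep -c .)
echo "Shown: $shown of $total processes (skipped: $((total - shown)))"
```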
Here's the result. Whether this is an argument for or against bash syntax is an exercise for the reader. The cat/tr calls can probably be obviated :-)
Very interesting.
This must be a Linux-specific thing, not *nix in general. My macOS laptop doesn't seem to have a `/proc`.

Edit to add: This article says Mac uses the sysctl function for some things that would otherwise use /proc.
We've found an interesting issue: specific `oom_score_adj` values in the range [942, 999] seem to produce "unexpected" `oom_adj` values of 16, which seem to be out of the range [-17, 15]. That is at least unexpected - any idea where it is coming from, and whether it could affect the oom_killer behavior (e.g. will a task with oom_score_adj=940 be killed before a task with oom_score_adj=999)? At least `/proc/<pid>/oom_score` seems to be "OK" and is higher for oom_score_adj=1000...
Thanks for the article. Noticed you can add an OOM column to htop, which makes it easy to check.
Up next: I have certain priorities for which tasks can be killed and which can't. Now checking how I can set the oom_adj values - maybe directly in the systemctl startup scripts?
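(For the systemd route, a minimal sketch: systemd exposes this via the `OOMScoreAdjust=` directive documented in systemd.exec(5). `myapp.service` is a hypothetical unit name.)

```bash
# Open a drop-in override file for the unit...
sudo systemctl edit myapp.service
# ...and add:
#   [Service]
#   OOMScoreAdjust=-500
# Then restart so the new score adjustment takes effect.
sudo systemctl restart myapp.service
```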
Little typo I spotted: instead of `sudo echo -200 > /proc/42/oom_score_adj`, do `echo -200 | sudo tee /proc/42/oom_score_adj`.
Thanks, corrected