I am trying to limit the memory usage of a process
$ ulimit -m 2
$ /usr/bin/time -l ./myProcess arg1 arg2
The process runs without being killed, until time outputs:
        7.00 real         4.83 user         2.16 sys
4154855424  maximum resident set size
         0  average shared memory size
         0  average unshared data size
         0  average unshared stack size
   1014384  page reclaims
         0  page faults
         0  swaps
         0  block input operations
         2  block output operations
         0  messages sent
         0  messages received
         0  signals received
         0  voluntary context switches
        15  involuntary context switches
showing that the limit has been exceeded despite the ulimit -m 2 command. I have also tried the -v and -l options, but none of them seems to actually limit the memory usage. I also checked with time that it does not fail to see the memory usage of a subprocess. Here are all the limits after setting -m, -v and -l:
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 3
max memory size (kbytes, -m) 2
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) 2
If I limit the CPU time instead (ulimit -t 3), it works fine and kills the process after 3 seconds.
Question
Is there something I misunderstand about ulimit -m 2? Is there a bug in my version of ulimit?
Is there an alternative to ulimit for limiting the time and memory usage of a single process (not necessarily the whole bash session)?
Versions
I am on Mac OS X 10.11.6 with bash version 3.2.57.
Related post
The post "ulimit not limiting memory usage" is closely related, but I don't think the accepted answer offers a solution to the problem.
Answer
With modern Linux kernels, ulimit has become less and less meaningful with every release. You really cannot use -v because your version of glibc may prefer to load files with mmap(), so you would have to set -v large enough to cover all the files the process is going to map. As a result, the -v flag cannot be used to limit physical RAM usage.
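To see the -v behaviour in isolation, here is a minimal sketch (assuming a Linux box with python3 installed; the 1 GiB limit and the 2 GiB allocation are arbitrary values chosen only for the demonstration):

```shell
# Apply the limit in a subshell so it affects only the child command,
# not the interactive session.  ulimit -v caps the *total address space*
# (in KiB), so mmap()'d shared libraries count against it as well.
(
  ulimit -v 1048576                         # 1 GiB of virtual memory
  python3 -c 'x = bytearray(2 * 1024**3)'   # try to allocate 2 GiB
) 2>&1
```

On Linux the allocation fails with a MemoryError because the address-space limit is enforced; the point of the paragraph above is that the limit also counts every mapped library, so a value tight enough to bound RAM usage may prevent the process from even starting.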
In addition, the -m flag does nothing on modern Linux kernels, so it cannot be used to limit physical memory usage either.
Nowadays one has to use the cgroups kernel interface to limit physical RAM usage, but that API is unfortunately a bit hard to use. Perhaps one day we will have a tool similar to nice and ionice that can limit the memory of a process given as a parameter.
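As a rough illustration of the raw interface (a sketch, assuming root privileges and a cgroup v2 hierarchy mounted at /sys/fs/cgroup; the group name memdemo and the 250 MB value are made up):

```shell
# Create a cgroup, cap its memory, and run a command inside it.
# Requires root and the unified (v2) cgroup hierarchy.
mkdir /sys/fs/cgroup/memdemo
echo 262144000 > /sys/fs/cgroup/memdemo/memory.max   # ~250 MB hard limit
echo $$ > /sys/fs/cgroup/memdemo/cgroup.procs        # move this shell into it
./myProcess arg1 arg2   # exceeding memory.max now triggers the OOM killer
```

This is exactly the plumbing that tools like systemd-run wrap for you, which is why the one-liner below is usually preferable to doing it by hand.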
In theory, with recent enough systemd you should be able to do something like
systemd-run --scope -p MemoryMax=250M -p MemoryHigh=200M /path/to/program/to/use
and it should magically work. There's no technical reason for this not to work (the underlying Linux cgroup feature does support the required limits), but some systemd versions will just silently fail and not actually limit the memory usage.
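One way to check whether your systemd version actually applies the limit is to read memory.max from inside the scope (a sketch, assuming cgroup v2; the path extraction via cut is simplistic and the 250M value is just an example):

```shell
# Run a throwaway scope and print the memory.max of its own cgroup.
# An applied limit shows 262144000 (250M); a silently broken systemd
# leaves the default, which prints "max".
systemd-run --scope -p MemoryMax=250M \
  sh -c 'cat "/sys/fs/cgroup$(cut -d: -f3- /proc/self/cgroup)/memory.max"'
```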