Limiting Linux Processes: short-term and long-term.
I am currently studying towards my LFCS and found an interesting section on limiting Linux processes, on both a short-term and a long-term scale, that I hadn't previously known about. I'd be interested in understanding more about how these limits are applied on cloud-based instances, as it stands to reason that cloud providers use them to rate-limit resource usage for specific instance classes/types.
The basic idea behind these controls is to set hard and soft limits that throttle or release resources on a server. For example, you can cap the number of processes a user can fork (bye-bye to our good old friend $ :(){ :|:& };: , you shouldn't run that unless you have set limits, and even then, probably just avoid it), or the number of files that can be held open. The default Ubuntu ulimit -n (open files) is 1024, but maybe you're running a file server, so you'd probably want to raise that.
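To see where those two limits currently sit in your own shell, you can query them individually. The comments below reflect typical defaults, not guarantees; your distribution may differ:

```shell
# Max processes the current user may spawn; this is the limit
# that contains a fork bomb
ulimit -u

# Max open file descriptors per process
# (often 1024 by default on Ubuntu)
ulimit -n
```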
ulimit
Enter ulimit, a simple command-line tool that allows you to quickly adjust your process limits up, down, and round and round, on the fly, for that particular shell session. Read that again: for that particular session. Limits changed via ulimit are not permanent. To get an idea of what can be changed with ulimit, run:
ulimit -a
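As a quick sketch of the session-scoped behaviour: an unprivileged user can lower a soft limit freely, and raise it back up to (but not past) the hard limit. The 4096 below is an arbitrary value and assumes your hard limit is at least that high:

```shell
ulimit -Sn        # current soft limit on open files
ulimit -Hn        # current hard limit on open files

# Raise the soft limit for THIS shell session only
# (works without root, as long as 4096 <= the hard limit)
ulimit -Sn 4096
ulimit -Sn        # now reports 4096

# Open a new terminal and the limit is back to the default
```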
/etc/security/limits.conf
In this one, we can change limits on a more permanent basis, with changes made here sticking around across sessions (the file is read by PAM, via pam_limits, at login). The file itself is well documented, so I won't rehash what it says, but you should have a look yourself. This file probably shouldn't be edited directly though, as the config supports a drop-in directory, /etc/security/limits.d/, so if you intend to create configs, create them in there and let them be parsed alongside limits.conf.
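As a sketch, a drop-in raising the open-files limit for all users might look like the following. The filename 90-nofile.conf is just an illustrative choice, any name ending in .conf works, and the 4096/8192 values are arbitrary examples:

```
# /etc/security/limits.d/90-nofile.conf
# <domain>  <type>  <item>   <value>
*           soft    nofile   4096
*           hard    nofile   8192
```

The soft value is what sessions start with; users can raise themselves up to the hard value with ulimit, but no further.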
