If you run an Nginx server, you may come across an issue where you’ve reached the limit of how many files Nginx can have open. Per-process resource limits are controlled by ulimit, while the system-wide cap is a kernel setting. You can check the current limits on a process with
cat /proc/pid/limits where
pid is the PID of the process whose limits you want to view. We can raise the system-wide maximum number of open file descriptors by modifying the
/etc/sysctl.conf file and adding the
fs.file-max setting. Set
fs.file-max=50000 to allow up to 50000 open file descriptors across the whole system.
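As a quick sketch, these two steps can be checked on any Linux box; the procfs paths are standard, and 50000 is just this article’s example value:

```shell
# Current system-wide open-file cap:
cat /proc/sys/fs/file-max

# Per-process limits, using the current shell as a stand-in for the
# nginx master PID:
grep "Max open files" /proc/self/limits

# To raise the system-wide cap persistently, append to /etc/sysctl.conf:
#   fs.file-max=50000
# and load it with:
#   sudo sysctl -p
```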
We next need to set the Nginx limits by modifying
/etc/security/limits.conf and adding a ‘soft’ and ‘hard’ limit. The soft limit may later be raised by the running process itself, up to the hard limit value. The hard limit, however, can only be lowered by the process and never increased. We can set these by adding two lines
nginx soft nofile 10000 and
nginx hard nofile 30000 . We can then run
sysctl -p to apply the fs.file-max change we made in /etc/sysctl.conf; the limits.conf entries take effect the next time a session starts for the nginx user.
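To sanity-check the limits, ulimit reports the soft and hard open-file values for the current session; run it in a shell belonging to the nginx user to confirm the limits.conf entries (10000 and 30000 are this article’s example figures):

```shell
# /etc/security/limits.conf entries from above:
#   nginx soft nofile 10000
#   nginx hard nofile 30000

# Soft limit for the current session (raisable up to the hard limit):
ulimit -Sn
# Hard limit (a process can lower this, but never raise it):
ulimit -Hn
```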
Finally, we need to change some Nginx settings. Open up your
/etc/nginx/nginx.conf file and add the
worker_rlimit_nofile directive. This raises the limit on the maximum number of open files for each worker process. The total number of files Nginx can hold open is roughly the product of this value and your
worker_processes setting. In practice,
worker_processes should be set based on the number of CPUs you have available.
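A minimal nginx.conf fragment tying these directives together might look like this; the numbers are illustrative, matching the example limits set earlier:

```nginx
worker_processes auto;          # one worker per available CPU core
worker_rlimit_nofile 30000;     # max open files per worker process

events {
    # each worker accepts up to this many connections; note a proxied
    # connection can consume two descriptors (client + upstream)
    worker_connections 10000;
}
```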
Once this is all set, we need to completely shut down the Nginx server. Using
kill pid (or your service manager’s restart command) is probably your best bet, since running
nginx -s reload only signals the existing master process, which keeps the limits it started with.
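The restart step can be sketched as follows. Here a background sleep stands in for the nginx master process; on a real host you would use the PID from the pidfile (/var/run/nginx.pid is a common default, but the path varies by distro) and then start nginx again:

```shell
# Stand-in for the nginx master process:
sleep 300 &
pid=$!

# Graceful-shutdown signal, the same one nginx's master honors:
kill -QUIT "$pid"
wait "$pid" 2>/dev/null

# On a real host:
#   sudo kill -QUIT "$(cat /var/run/nginx.pid)"   # stop the old master
#   sudo nginx                                    # start a fresh one
# (or simply: sudo systemctl restart nginx)
```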