Operating systems (Linux and macOS included) impose limits on the number of files and processes that can be open at once. These limits protect the system from being overrun, but the defaults often date from an era when machines had far less memory, so they are frequently too low for modern workloads. The result is a classic gotcha: "too many open files" crashes that only appear under load, such as during a stress test or a production traffic spike.
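Before raising anything, it helps to see what you currently have. `ulimit` is a shell builtin (no `sudo` needed) and reports the per-process limits for the current shell on both Linux and macOS:

```shell
# Per-process limits for the current shell (shell builtin, no root needed)
ulimit -Sn   # soft limit on open file descriptors
ulimit -Hn   # hard limit on open file descriptors (ceiling for the soft one)
ulimit -u    # max user processes (what the nproc lines below control)
```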
# The configuration steps below require root access; checking does not.
# Check the system-wide maximum number of open files (Linux):
$ sysctl fs.file-max
fs.file-max = 100000
# Edit /etc/security/limits.conf and add the lines below just before the "#End of file" line, then save and exit
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
# limits.conf changes take effect at the next login. If you also raised
# fs.file-max in /etc/sysctl.conf, reload that file with:
$ sudo sysctl -p
Source: https://naveensnayak.com/2015/09/17/increasing-file-descriptors-and-open-files-limit-centos-7/
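Since limits.conf only applies to sessions started after the change, it is worth confirming what a long-running process actually got. A quick check, assuming a Linux /proc filesystem (`$$` is the current shell's own PID; substitute any PID you care about):

```shell
# /proc/<pid>/limits shows the limits a live process is really running with
grep 'Max open files' /proc/$$/limits
```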
# On macOS, check and raise the limits with launchctl:
$ launchctl limit maxfiles
maxfiles 256 unlimited
# The first number is the "soft" limit and the second number is the "hard" limit.
$ sudo launchctl limit maxfiles 65536 200000
$ launchctl limit maxfiles
maxfiles 65536 200000
# Some set it to 1048576 (over a million).