My two big takeaways:

procfs is inherently dangerous. Numerous security issues have arisen in Linux because of procfs. Carving memory out of an sftpd process by attacking /proc/pid/mem over an sftp session is another great example.
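To make the danger concrete, here is a minimal sketch (mine, not from the article) of what /proc/pid/mem exposes: reads of a process's virtual memory at whatever offset you seek to. Reading our own memory needs no privilege; pointed at another PID, with ptrace-level access, the same interface is the attack surface.

```python
import ctypes

# Place a known value in our own memory, then read it back through
# procfs. /proc/self/mem is seekable: the file offset *is* the virtual
# address, which is what makes /proc/pid/mem attacks possible once an
# attacker can access another process's entry.
payload = b"secret-in-memory"
buf = ctypes.create_string_buffer(payload)   # heap-backed buffer
addr = ctypes.addressof(buf)                 # its virtual address

with open("/proc/self/mem", "rb") as mem:
    mem.seek(addr)                           # seek to the address...
    recovered = mem.read(len(payload))       # ...and read memory there

print(recovered)
```

The same seek-and-read (or seek-and-write, with the file opened for writing) against another process's /proc/pid/mem is the primitive the attacks above build on.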
This requires me to quote the article:
If we can find a way to run another instance of Apport before our crash, that will give us 30 seconds to switch the PID!
Are 30 seconds enough to generate MAX_PIDs? In Ubuntu 18.04 we can easily do it in a few seconds, but in Ubuntu 20.04, forking four million PIDs will take longer (3-4 minutes on my VM), depending on the CPU, number of cores, system load etc.
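The PID-spinning step the quoted passage is timing can be sketched roughly like this (`spin_pids` is a hypothetical name of mine, not the article's code): fork children that exit immediately, advancing the kernel's PID counter one slot per iteration.

```python
import os

def spin_pids(rounds):
    """Advance the kernel PID counter by forking short-lived children.

    Each child exits immediately; the parent reaps it so the PID slot
    is freed. Repeating this up to pid_max wraps the counter, which is
    the slow part the article measures.
    """
    last = None
    for _ in range(rounds):
        pid = os.fork()
        if pid == 0:
            os._exit(0)        # child: exit at once, consuming one PID
        os.waitpid(pid, 0)     # parent: reap the child
        last = pid
    return last
```

The 18.04-versus-20.04 gap the article observes presumably comes down to the size of the PID space: wrapping a counter of four million PIDs simply takes that many forks.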
Placing execution limits on a per-user or per-session basis will fully mitigate this particular attack. Telling the OS "this user can execute a maximum of N applications in Y seconds" (aka, resource limiting) would be good. The OS can (should?) ship with a sane default for unprivileged accounts. Should the sysadmin need to adjust the limit, tooling should exist to assist.
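As far as I know, Linux doesn't ship a stock "N executions in Y seconds" limiter; the closest existing knobs cap a user's concurrent tasks instead. A hedged illustration (the user name and the value 512 are mine): pam_limits and systemd can both express such a cap, though it bounds concurrency rather than execution rate, so it would not by itself stop fork-and-exit PID cycling.

```
# /etc/security/limits.conf (pam_limits) — caps *concurrent* processes
# per user; "alice" and 512 are illustrative values:
alice    hard    nproc    512

# Rough systemd equivalent via a drop-in for the user's slice, e.g.
# /etc/systemd/system/user-1000.slice.d/override.conf:
[Slice]
TasksMax=512
```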