This is silly. If you can’t trust environment variables then consider the entire system vuln and toss it. If your error logging software sends all of your secrets to a third party consider it vuln and toss it. Fix the problems instead of throwing the baby out with the bathwater – environment variables make more sense than anything else.
If you’re attached to your vulnsoft there is an alternative that is still more sensible than avoiding environment variables: encrypt the data of the variables with a shared secret.
One more concrete idea: On startup, you can delete any environment variables (or just substitute dummy values) that don’t match some whitelist. This way, you explicitly drop any security sensitive variables before they reach subprocesses or logging infrastructure.
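A minimal Python sketch of that allowlist idea (the variable names here are illustrative, not from the original comment):

```python
import os

# Hypothetical allowlist; adjust to the variables your app actually needs.
ALLOWED = {"PATH", "HOME", "LANG", "APP_MODE"}

def scrub_environment(allowed=ALLOWED):
    """Delete every environment variable not on the allowlist, so
    security-sensitive values never reach subprocesses or log shippers."""
    for name in list(os.environ):  # iterate over a copy: we mutate the dict
        if name not in allowed:
            del os.environ[name]

# Demonstration: a simulated secret disappears, an allowed setting survives.
os.environ["DB_PASSWORD"] = "hunter2"
os.environ["APP_MODE"] = "production"
scrub_environment()
```

You would call this once, as early as possible in startup, before any subprocess is spawned or any logging handler is configured.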

I disagree on this one. For debugging cronjobs, for example, it’s important to know what the environment looks like, because it’s often different from the one you wrote and tested the cronjob in.
But you don’t need the password? You just need to log your settings, maybe whether it’s run in testing or production.
Dumping all of the environment doesn’t seem to have a big benefit. In fact it just causes more resources to be required (shipping, storing, serializing) and may even make it harder to find what you are looking for when there is just too much stuff there.
I’d argue for logging more of what will help you and less of what won’t. And I know the argument for having everything you can have, but then you could also log a complete memory dump of the system, including an nmap scan.
I know that’s extreme, but honestly, this seems to be the typical scenario. Logs should contain everything you need in a specific instance, and you really shouldn’t need the password, only whether authentication succeeded (and whether the format is valid).
The point I wanted to make with this was that it probably is reasonable to clearly define what you are logging and that you probably don’t wanna log your password. And as you said, it is a setting, so if you want to log your setting you’d have to log it no matter where it is.
So if you have a config file you wanna log those contents, if you have environment variables, you want to log these. But you very likely don’t wanna send passwords to logging, in either case.
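One way to act on "log your settings, but not your passwords" is to redact secret-looking keys before the settings ever reach the logger. A hedged sketch (the key-matching pattern and settings dict are made up for illustration):

```python
import re

# Heuristic: key names that usually hold secrets. Tune for your own config.
REDACT_PATTERN = re.compile(r"(password|secret|token|key)", re.IGNORECASE)

def redacted(settings):
    """Return a copy of the settings that is safe to send to logging:
    values under secret-looking keys are replaced with a placeholder."""
    return {k: ("<redacted>" if REDACT_PATTERN.search(k) else v)
            for k, v in settings.items()}

settings = {"env": "production", "db_host": "db.internal",
            "db_password": "hunter2"}
safe = redacted(settings)
```

This works the same whether the settings came from a config file or from environment variables, which is the point of the comment above: the redaction policy belongs at the logging boundary, not in the storage mechanism.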
I was hoping for a section on “do use these things: ENV variables that hold filenames that contain secrets” or something like that. I don’t use docker, but I would like to keep secrets out of environment variables. What are good ways to do that?
Generally you bake an encrypted file into your image; that file is read and decrypted on app start, and the key to decrypt it can be fetched from Vault or similar.
If you use kubernetes, it supports exposing secrets as files as prescribed by the post.
A great tool for this is SOPS, which supports both PGP and AWS’s KMS.
Yes there is a difference.
PASSWORD_FILE=/path/to/foo.txt just specifies where the password lives, not what the password is. Because of this, it doesn’t matter if the variables get sent to pagerduty or other processes. In the case of pagerduty it’s just some random path on a server external parties can’t get access to, and in the case of other processes, if you’ve correctly configured permissions they can’t read it.
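A minimal sketch of this indirection in Python (the temporary file here stands in for a mounted secret; names are illustrative):

```python
import os
import tempfile

def read_secret(var="PASSWORD_FILE"):
    """Resolve the *_FILE indirection: the environment variable holds only
    a path; the secret itself lives on disk behind file permissions."""
    path = os.environ[var]
    with open(path) as f:
        return f.read().strip()

# Demonstration: write a fake secret to a file standing in for the real one.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("s3cr3t\n")
    secret_path = f.name
os.chmod(secret_path, 0o600)  # only the owning user may read the secret
os.environ["PASSWORD_FILE"] = secret_path

password = read_secret()
```

If the environment later leaks to PagerDuty or a child process, all that leaks is a pathname that is meaningless outside the server.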
Does this protect you from an attacker inside the system? No, but it protects you from exposing secrets externally, which was the point.
Except now, you have a file with passwords in it, which has to be in your version control repository if you have a stateless infrastructure (which is part of 12FA). So there’s that.
If you do follow all of 12FA, I fail to see how storing a secret in environment variables can be an issue.
It’s an issue for the reason Monica says it is: if your app calls helper programs, especially programs you didn’t necessarily write, those programs are virtually certain to get access to secrets they shouldn’t have, and it’s not unlikely that they’ll further leak them to the world.
Environment variables are much better than secrets checked into source code repositories, but strictly inferior to real secret management systems.
That’s one possible methodology. You don’t have to follow 12FA, and many places probably don’t.
This article probably isn’t aimed at people who follow 12FA religiously, and instead is aimed at people who have an application already that doesn’t follow it, and uses environment variables. Making a simple change means you no longer leak secrets inadvertently.
Following a methodology is nice, but in the real world not everyone does. And if they do, not everyone uses 12FA.