Piping commands directly to sh seems so dirty, but when you consider how often people will extract some tarball and run ./configure without thinking about it, it’s not much different. If anything, running that ten-kilobyte pile of shit that autoconf spit out is even worse since it’s much harder to audit (irssi backdoor, anyone?).
It would be neat to see a simple one-liner that can go in between curl and sh that compares the checksum of what curl spit out against what the webpage says it should be, and only continues the pipe if it’s valid. So a webpage could show something like:
curl <url> | <some magic> e6a92ec2fe5fba022c31c32c97ea455cee4b2736 | sh -
If that magic doesn’t sum the curl data to match e6a92ec2fe5fba022c31c32c97ea455cee4b2736 then it will just pipe some big error to sh instead.
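A minimal sketch of that magic, assuming a POSIX shell with sha1sum available (the sumpipe name and the fallback error script are made up for illustration):

```shell
#!/bin/sh
# sumpipe: hypothetical checksum gate. Buffers stdin in a temp file,
# compares its SHA-1 against the expected digest passed as $1, and
# forwards the data only on a match; otherwise it emits a tiny script
# that makes the downstream `sh -` print an error and exit non-zero.
sumpipe() {
    expected="$1"
    tmp="$(mktemp)" || return 1
    cat > "$tmp"
    actual="$(sha1sum < "$tmp" | awk '{print $1}')"
    if [ "$actual" = "$expected" ]; then
        cat "$tmp"
    else
        printf 'echo "checksum mismatch: wanted %s, got %s" >&2; exit 1\n' \
            "$expected" "$actual"
    fi
    rm -f "$tmp"
}

# Example: pipe a known script through the gate with its real digest.
script='echo hello'
sum="$(printf '%s\n' "$script" | sha1sum | awk '{print $1}')"
printf '%s\n' "$script" | sumpipe "$sum" | sh -
```

On a mismatch the downstream shell only ever sees the error script, so nothing from the tampered payload runs.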
Indeed it isn’t, and that’s the exact reason I didn’t start debating whether it’s right or wrong (it’s one of those never-ending conversations). This post was simply to make people aware that what they’re seeing in the browser may just be an illusion.
It would be neat to see a simple one-liner that can go in between curl and sh that compares the checksum of what curl spit out against what the webpage says it should be, and only continues the pipe if it’s valid.
Yup. Another mitigation would simply be to send the -A flag with curl and set a user agent string that’s more browser-like.
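For example (the User-Agent string and URL here are just illustrative values, not a specific recommendation):

```shell
# Present a browser-like User-Agent so a server can't trivially serve
# different content to curl than it shows in the browser.
curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0" \
    https://example.com/install.sh | sh -
```

This only defeats naive user-agent sniffing, of course; a malicious server can still key on other signals.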
Most “pipe to install” things require running as root or sudo bash -, which is really awful. At least if you download a tarball, you extract it and run configure and make as a non-root user, and you have an opportunity to look at the Makefile or see whether the build explodes before running make install.
sudo bash -
Additionally, you can often do make install as a user if you specify PREFIX= or DESTDIR=, and move everything into place as root after the fact.
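As a sketch, with an autoconf-style package (the staging path is illustrative):

```shell
# Configure, build, and stage entirely as a non-root user.
./configure --prefix=/usr/local
make
make DESTDIR="$HOME/stage" install

# Inspect $HOME/stage at your leisure, then move into place
# as root only at the very end.
sudo cp -a "$HOME/stage"/. /
```

Nothing touches the system until the final copy, so a half-broken build never leaves stray files outside the staging directory.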
Among the other problems with this install.sh fad: what happens when it fails halfway through? At least tar.gz files have a rudimentary checksum. Even if blind ./configure isn’t more secure, I have a little more faith it’s not going to dick things up from sheer incompetence.
Using ruby gems with something other than Mac/Linux was an exercise in frustration for similar reasons. It runs off and does stuff, then it dies, and you don’t know where the pieces are, let alone how to put them back together. Broken configure scripts can be equally annoying, but at least they drop all their turds in the same directory and make it fairly easy to poke them with a stick.
I thought this article was going to be about man-in-the-middling an HTTP connection; I suppose checking the user agent could work too.
The whole thing works on trust as jcs said. Hell, you even have to trust that the person who compiled your gcc did so without adding an invisible backdoor in. http://cm.bell-labs.com/who/ken/trust.html
I’ve done a fair amount of user testing with users who aren’t familiar with the command line, and the basic conclusion is that command-line usability is terrible. People fail in really dumb ways: for example, copying the dollar sign at the beginning of a command, getting “$: command not found”, and then being unable to fix that problem (try googling for a dollar sign, etc). I don’t blame library authors for wanting to control the environment and the error messages that their installation programs emit.