In the very narrow sense of “can it pwn you”, curl|bash is not per se less secure than any other distribution mechanism that involves running uninspected third-party code on your machine as root, that is true. The problem is one of repeatability and knowing what you’re getting: if I am responsible for a cluster of machines that should all be identical, I want (1) to be able to say “give me version x.y”, and (2) a mechanism for being notified when x.z is available. It is then my decision whether I accept the latest version on all the machines (including the running ones that already got the previous version) or whether the upgrade needs more planning (dig into the changelog, do a canary/staging rollout, or …), and I don’t get surprised by unexpected changes or screwed by having new nodes running different software than old nodes. A package repository (or, hell, even just a directory full of versioned tarballs) will do this; curl|bash won’t.
tl;dr curl|bash doesn’t say “insecure” to me, it says “unfinished”.
So basically, you already run code you personally never reviewed or tested, HTTPS is enough, our script is good, we will continue to recommend this install method.
compared to maintaining (and testing) half a dozen package formats for different distros.
Again with that bullshit. Let. Packagers. Do. Their. Jobs.
compared to maintaining (and testing) half a dozen package formats for different distros.
I’m torn on this. On the one hand, yes: having 3rd party packagers is great for pretty much everyone, and ideally f/oss software organizations should maintain and license their software so that repackaging is possible.

The problem with this view is that user-maintained packages lag upstream, often by quite a lot, and as an engineering org, having lagging user packages means having to deal with innocent users who are stuck with long-since-fixed bugs. See a classic rant on
So yes. By all means I want package managers (or /opt monodir installs) so I can uninstall your software eventually and update it sanely along with everything else, but really that means that the developing org does have to take on packaging responsibilities, or else pay the costs associated with not taking them on. For all that I dislike curl | bash, it definitely seems to be a local optimum.
Disclaimer: this is my view as a user; I’ve never participated in the community as a (re)packager or distributed an original package, except in publishing Maven artifacts, which don’t require repackaging and thus
I certainly agree with letting packagers do their jobs. However, it seems many users see this as the project’s responsibility, rather than the distribution’s. I read this part of the post as being about that sentiment.
From the perspective of a young project trying to gain traction, taking on this responsibility can noticeably help with uptake. Unfortunately, individual projects - even very large ones, like languages - are not in a great position to make things work smoothly at the scale of an entire distribution. (I’d count Linux distributions and also things like Homebrew as basically the same sort of thing, here.)
I think the ideal is for projects to keep offering these “fast start” setups, but recognize that they are not going to be the long-term solution, nor the solution for everybody.
I wish they would at least encourage users to even think about the security implications, but focusing on that aspect isn’t the heart of why this keeps happening.
As an (occasional, part-time) packager, the ideal would be to get some frozen version of the upstream release (or a reference to same) that I can transform into the package. A magic button I can press whenever I like to get the latest version at that time does not meet that need.
If a young project wants to get into distros (I don’t know whether that’s the kind of traction you’re thinking of; if not, then obviously disregard), I’d suggest that’s what they should be thinking about doing, and the curl|sh should be a wrapper over that.
I was mostly thinking in terms of mindshare - users and contributors. I think a lot of developers don’t necessarily think of distros as part of their plan at all. That’s exactly what I wish were different. :)
It makes a lot of sense, now that you say it, that getting a frozen version is the biggest need there.
This has come up before - https://lobste.rs/s/ejptah/curl_sh
I still haven’t seen a strong argument against this mode of installation, but I still hear a lot of “you’re doing it wrong” anger. I’d be very interested in a cogent argument (comment or link pointer) about why this is bad, as it feels to me like the culture has been shifting in the direction of curl | sh.
Shell-based installations for people who want to run the “arbitrary” code from the developers don’t prevent you from using packages and dependency management. What’s the problem here?
Oftentimes these scripts are targeted at given OSes (often the big 3, which excludes things like NixOS, OpenBSD, FreeBSD, etc.), or, more commonly, they send flags to commands that aren’t available on all systems (sed on OpenBSD, for example, was missing -i until recently).

These missing flags can be disastrous. No one ever writes scripts to handle the errors that pop up from missing flags. The result can be clobbered files, a removed /, or really anything.
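For the sed -i case specifically, a script can avoid the flag entirely and stay portable; a minimal sketch (file name and substitution are illustrative):

```shell
#!/bin/sh
# Portable alternative to "sed -i", which historically wasn't available
# everywhere (e.g. older OpenBSD): write to a temp file, then move it
# into place atomically.
printf 'listen_port = 8080\n' > app.conf
sed 's/8080/9090/' app.conf > app.conf.tmp && mv app.conf.tmp app.conf
cat app.conf   # -> listen_port = 9090
```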
Agreed. Even setting aside the use of curl, install.sh itself is the problem.
If the connection fails in the middle of the script download, whatever has been downloaded will already have been executed. Imagine you’re on a system where rm doesn’t default to the --preserve-root behavior, and the script called for rm -rf /some/directory/here, but the connection terminated at rm -rf /: your system would be hosed.
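The hazard is easy to demonstrate without running anything destructive; a sketch using the hypothetical command above (we only print the strings):

```shell
#!/bin/sh
# Simulate a download that dies mid-line: the first 8 bytes of a
# scoped delete happen to be a catastrophic one.
full='rm -rf /some/directory/here'
partial=$(printf '%s' "$full" | head -c 8)
echo "intended:  $full"
echo "truncated: $partial"   # -> truncated: rm -rf /
```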
There’s no way to audit that what was performed during one curl | bash instance will be the same thing performed in another instance, even if done only seconds later. There’s no way to audit what exactly was done without taking a forensic image of the system beforehand.
Simply relying on HTTPS for security doesn’t protect against all threat actors. Certain actors can, and have in the past, performed man-in-the-middle attacks against SSL/TLS-encrypted connections. There are companies like Blue Coat, who provide firewall and IPS appliances, who are also CAs, and who can man-in-the-middle every SSL/TLS connection for those sitting behind their appliances. The same can be done in any enterprise setting where the client has a corporate CA cert installed and the corporate firewall does SSL/TLS inspection. Oftentimes, the private key material for corporate CA certificates is stored insecurely.
The same holds true, and is especially grievous, when the installation instructions say to use curl | sudo bash.
No, thank you. I’ll stick to my package manager that keeps track of not only every single file that is placed on the filesystem, but the corresponding hashes.
edit: Fix typo
TFA addresses this.
Download the script and look at it. If you have reason to believe that the upstream is gonna serve you something malicious on subsequent installs, then you should audit the entire source you are installing, not just the installer.
If you don’t already have the package in your distro’s repositories, then you will need to use HTTPS or a similar mechanism to download it. There is no way to verify it against a hash either, because you will need to use HTTPS or a similar mechanism to download the hash. I’m sure there are more reliable (and exotic) ways of bootstrapping trust but in practice nobody will use them.
This also has nothing to do with curl | bash in particular; this attack applies to, say, downloading a tarball of the source and ./configure && make && make install.
This is what I love about FreeBSD’s ports tree: it solves all of what you just brought up. Each port entry (like www/chromium) already contains a list of hashes for all files it needs to download. Additionally, when using the binary package repo, the repo is cryptographically signed and the public key material is already on my system. No need for HTTPS in the slightest.
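The mechanism can be sketched in a few lines. This is an illustration of the idea, not FreeBSD’s actual distinfo format; the distfile is an empty stand-in, and the pinned value is the well-known SHA-256 of empty input:

```shell
#!/bin/sh
# Ports-style idea: the expected hash ships with the (signed) ports tree,
# so a tampered download is caught locally, HTTPS or not.
: > distfile.tar.gz   # stand-in for a fetched distfile (empty for the demo)
expected='e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
actual=$(sha256sum distfile.tar.gz | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo 'checksum OK'
else
    echo 'checksum mismatch' >&2
    exit 1
fi
```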
I don’t disagree with you here: using packages with your distro is preferable to curl | sh when the option is available. I see curl | sh as a convenient way of performing an installation when that option is not available. There is a lot of paranoia over curl | sh, though, that would lead one to believe it is more insecure than other forms of non-package installation, and I think having an article that counters these misconceptions is valuable.
The sandstorm script is specifically designed to avoid that failure case:
# We wrap the entire script in a big function which we only call at the very end, in order to
# protect against the possibility of the connection dying mid-script. This protects us against
# the problem described in this blog post:
set -euo pipefail
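The pattern that comment describes can be sketched like this (the function name and body are illustrative, not Sandstorm’s actual code):

```shell
#!/bin/bash
set -euo pipefail

# Everything is defined inside one function; nothing executes until the
# very last line. If the download dies mid-stream, the shell sees an
# unterminated function definition (a syntax error) and runs none of it.
main() {
  echo "step 1: fetch release"
  echo "step 2: verify it"
  echo "step 3: install it"
}

main "$@"
```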
I take issue with recommending and even defending it as good practice. If you want to use it, no one can stop you.
Yes, it’s clear you take issue with that. My question is why?
Oh wow, let’s see:
Basically it’s the same reasons why you shouldn’t just blindly do make install from source, only there are no DESTDIR and PREFIX.
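A toy illustration of what DESTDIR buys you (the Makefile here is invented for the example):

```shell
#!/bin/sh
# DESTDIR lets you stage a "make install" into a scratch root, so you can
# see and record exactly which files would land on the live system.
printf 'install:\n\tmkdir -p $(DESTDIR)/usr/local/bin\n\ttouch $(DESTDIR)/usr/local/bin/hello\n' > Makefile
make install DESTDIR="$PWD/stage" >/dev/null
find stage -type f   # every would-be-installed file is visible under ./stage
```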
I see where you’re coming from now.
I think one of the drivers for people not considering those reasons (with server-side software, anyway) is that while package management tries to solve the issues you’ve identified, it hasn’t been particularly successful or reliable. A common solution which does work is to use installers and shell scripts to build an image and replace the whole system when you need to upgrade/downgrade/cleanup/uninstall – this is perfectly compatible with sandstorm’s position.
In this case the end user is responsible for their system and its administration (which is exactly what every license says anyway). The idea that people should only install things provided by their distro feels a bit paternalistic/outmoded.
You seem to think that my position is “install from packages or else”. No. There is a plethora of valid approaches to administering your system. My problem is that Sandstorm are recommending end users do something that can pretty easily shoot their foot off, and then defend it with hand-waving and arguing with paranoid strawmen.
I’d really have no problem if they’d even mention at some point in their install instructions something like “or look in your distro repos, it might be packaged already, yay!”, but no. They specifically ignore distro repositories altogether.
That and they recommend building from HEAD despite even having regular tagged releases, which is also a small red flag for me.
UPD: To be clear, this is very relevant:
In this case the end user is responsible for their system and its administration
They deliberately recommend an installation method that requires the user to really know what they are doing. That can be valid, just not as the default option. Recommending such a volatile approach as the default install method for end users is at the very least irresponsible.
Got it, you take issue with the very concept of install scripts, not the practice of piping one to sh from curl.
The only real argument I’ve read against curl|sh is the one regarding network issues. Let’s say the script you’re curling includes a line like:
rm -fr $HOME/$CONFDIR/tmp
If curl gets a connection reset right after the first ‘/’, you’ll lose your whole $HOME.
I do agree that there are ways to prevent this kind of thing from happening, by using “guidelines”, “best practices”, “defensive programming” or whatever, but if you don’t read the script you’re piping to sh, this is something that can happen to you.
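One concrete piece of that defensive programming: set -u turns an unset variable into a hard error instead of letting it silently expand to nothing (the variable name is taken from the example above; we only probe the behavior, nothing is deleted):

```shell
#!/bin/sh
# Without -u, an unset $CONFDIR silently expands to "", so a path like
# "$HOME/$CONFDIR/tmp" quietly points somewhere you didn't intend.
# With -u, the script aborts on the unbound variable instead.
if env -u CONFDIR bash -uc ': "$HOME/$CONFDIR/tmp"' 2>/dev/null; then
  echo 'unset variable slipped through'
else
  echo 'set -u caught the unset variable'
fi
```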
Another gripe against this method is that sometimes the devs ask you to run
curl | sudo sh
But that’s a different topic.
I’d be very interested in a cogent argument (comment or link pointer) about why this is bad
This reddit link and discussion cover some of the evil possibilities.
“Software should only be distributed through official distro channels” (the only consistent interpretation I can find of your statement) is far from a universally held idea, so I’d expect an opinion stated this forcefully to come with some reasoning.
Look. If someone wants to build packages for every distro ever — I can’t and don’t want to stop them. But don’t use that argument like someone’s forcing you to build those packages. It was your own damn choice.
Their “own damn choice” was to sidestep the distro packages thing and create a shell script installer. You appear to have an issue with that, since you called it “bullshit.” I doubt Sandstorm cares if others package their software differently (packagers can Do. Their. Jobs!), since it is Apache licensed. What exactly is your objection?
Wait, what. It’s a simple case of a false dichotomy. Look: they are saying that they have two choices, and two choices only — an installer script that you pipe into your shell, or building and testing half a dozen packages. Like someone is forcing them to. It’s pretty obvious hand-waving.
Those are not the only two possible choices for the project. They know it, you know it, I know it. Don’t get caught on a simple fallacy.
Sure, no one stops packagers from packaging the thing. But they are telling users to bypass the repos. That’s not helping.
Where in the article are they telling people to bypass the repos? In fact they even say
However, once Sandstorm reaches a stable release, we fully intend to offer more traditional packaging choices as well.
Also, I am confused to what any of this has to do with whether curl | bash is secure or not.
That’s the hand-waving part. They mention the packaging issue for no reason other than to confuse you even more, which is why I initially called it bullshit. No one is forcing them to build packages for all, or even all major, distros, but they go out of their way to use it as an argument for… what, exactly?
What irks me about this discussion is that if sandstorm.io had provided a URL for a Debian/RPM repository, no one would bat an eye. And yet those packages would be just as “arbitrary” as this shell script, and certainly harder to read for those inclined to do so.
I’m all for using distro-provided packages - but let’s acknowledge that installing third-party software depends on some combination of trust and technical know-how, and curl doesn’t measurably change the quantities.
And yet those packages would be just as “arbitrary” as this shell script, and certainly harder to read for those inclined to do so.
I actually agree. A package built by the distro for the distro is at the very least tested and signed by people with at least some track record. I’m almost as much against devs providing their own packages as the main way of installing their shit because my distro’s maintainer would almost always do a better job of making sure the package works and is consistent with the distro’s guidelines. In short: a package from the repos has a much smaller chance of surprising me.
It’s just that a script is even worse, because packages are built for specific distros, and even if you don’t know the distro that well, you will probably at least superficially test the package on the target distro. A script is supposed to be run by whoever on whatever system is out there, which has so many ways to fail.
Unfortunately I would imagine if most packagers were just doing their jobs (as in employment that pays their bills) they would have no time to update packages.
Yeah my company has paid for a substantial number of hours I’ve spent on AUR packages over the years :)
In the end, curl|bash is just another tool and, like Comic Sans, the problem lies with its abuse. It looks like sandstorm takes active steps towards making it more secure and mitigating the connection dropped error, which is nice.
I agree that most code should be installed with packages. However, since that is the packager’s responsibility, why is it bad that they release a “common” version, as long as they’re not forbidding anybody else to release packaged versions? Can anybody explain further?
I can speak to this a bit: I port as much stuff to OpenBSD as possible and, having tried to port a few “curl|bash” apps I have noticed a few commonalities in the projects that seem to use this method:
TL;DR - porting projects that use “curl|bash” is usually a huge undertaking, because the script is doing the job of a proper build system and requires huge amounts of shoehorning to get the project to function properly without it.
edit I put the hat on just for the Comic Sans :D
edit2 Looks like sandstorm is using MeteorJS - which is one of the “curl|bash” projects I gave up on for the above reasons! :P
Remember kids, every time you forget to use set -e, the flying spaghetti monster eats a meatball.
For more than 10 years, I thought we had reached a global minimum and autoconf was the worst methodology to happen to distributing software. But I’ve been disrupted! It was merely a local minimum.
[Comment removed by author]
If you think there is no problem, and that those arguing against it are misinformed, writing an article to clear up the misconceptions is the only path forward.
I don’t care what your project is, if part of the installation process calls for running arbitrary shellcode, I’m never, ever going to run it.
a) linux only, b) 64bit only, c) includes shellcode in the shell script it runs, d) sets scary sysctls, e) sets dev-mode by default, g) is hella interactive
a, b: These have nothing to do with whether curl | bash is secure.
c: Trusting an upstream to provide you a safe program but not a safe installer is security theater.
d, e, g: Probably bad, but doesn’t have anything to do with curl | bash in particular.
I’m beginning to think the reason people have an issue with curl|sh is just because they can actually see problems in the software they install, as opposed to binary packages which politely hide the author’s bad practices behind a curtain.
You know, there really ought to be like a blockchain that collects signatures from many projects and provides transparency. That would fix the issue with your second step. :)
The default status of software projects is nonexistent. :) Somebody who has time needs to write it, advertise it, field questions from people who think it’s unimportant, and maintain it.
(Credit for quip: Raymond Chen.)
I think sandstorm is incredible software and kentonv (primary author) is a really smart guy, so this does assuage my concerns. For those who still think it’s a bad idea, can you link to the reasons why and suggest what realistically should be done instead? At this point, the kneejerk negative response to curl | bash feels a little cargo culty to me.
Remember that very smart people can occasionally do very dumb things.