1. 51
  1. 25

    The problem is that there is no checksum and most often not even a version number. You have no idea if you’re getting what you expected when you run the command. One developer can run it on their system, and 5 minutes later another developer can run it on theirs and end up with a different environment, with no way for either of them to tell. It’s lazy release engineering.
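
    As a sketch of the alternative: pin a version and verify a checksum before running anything. Everything below is a placeholder (the hash is that of an empty file, standing in for a real download), so this is the shape of the idea, not any real project’s instructions:

    ```shell
    # Hypothetical sketch: fetch a pinned release, verify its checksum, and
    # only then run it. Version, URL, and hash are made-up placeholders.
    set -eu
    VERSION="1.2.3"    # a pinned release, not "whatever is live right now"
    EXPECTED_SHA256="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

    # Stand-in for: curl -fsSL "https://example.com/install-$VERSION.sh" -o install.sh
    : > install.sh     # empty file; its SHA-256 matches the placeholder above

    ACTUAL_SHA256=$(sha256sum install.sh | cut -d' ' -f1)
    if [ "$ACTUAL_SHA256" = "$EXPECTED_SHA256" ]; then
        echo "checksum ok: would run sh install.sh"
    else
        echo "checksum mismatch: refusing to run" >&2
        exit 1
    fi
    ```

    Two developers running this five minutes apart either get byte-identical scripts or an explicit failure, instead of silently diverging environments.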

    That’s malpractice.

    If I trust docker.com enough to run dockerd, then why shouldn’t I trust their website to not inject hidden stuff?

    Websites are much more easily compromised than source repositories and package managers that validate what they download. It’s not docker you need to worry about.

    As a more general point, people are going to paste stuff in their terminals regardless so this is best fixed in the terminal or shell instead of telling people to “not do that really convenient thing”.

    So, because people are going to do dangerous things, we should encourage it? If there is a fix to be done elsewhere, fix it first.
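
    The “fix it in the terminal or shell” idea above could look something like this hypothetical wrapper (not an existing tool; a snippet you might drop in ~/.bashrc): materialize the script first, so it can be hashed and inspected before anything runs.

    ```shell
    # Hypothetical "safer curl | sh" wrapper: download to a temp file,
    # show its hash and contents, and only run it after confirmation.
    curlsh() {
        tmp=$(mktemp) || return 1
        curl -fsSL "$1" -o "$tmp" || { rm -f "$tmp"; return 1; }
        sha256sum "$tmp"            # record what you actually received
        ${PAGER:-less} "$tmp"       # eyeball it before committing
        printf 'Run it? [y/N] '
        read -r ans
        [ "$ans" = "y" ] && sh "$tmp"
        rm -f "$tmp"
    }
    ```

    Usage would be `curlsh https://example.com/install.sh` instead of piping, keeping the convenience while restoring the chance to look first.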

    If there is a potential security problem and no one is exploiting it, is it still a security problem?

    Yes. This is basically the “I have nothing to hide” argument for privacy. You don’t know how risky this might be tomorrow. Don’t leave a known opening.

    I tend to avoid these kinds of scripts (or read them first) for exactly this reason [not knowing what the script is going to do].

    So you take those extra steps after an article about how no one needs to do any of that? How is it not malpractice to you that you have to download and read the script, because its effects are not documented, while the site encourages everyone else to skip all that and just pipe it to the shell?

    But this is a UX issue and not a security one. It’s fine to not like these scripts, but that doesn’t make them “glaring security vulnerability” or “malpractice”.

    It’s absolutely at least malpractice and quite potentially a security issue.

    1. 13

      How would a checksum work? Publish it on the site? The same site you’re downloading the script from? If someone can change the script, they can change the checksum. Signing it with a PGP key works, but key exchange is not an easy problem, and never mind that most people can’t figure out how to use gpg. None of this is easy.

      For most people and most purposes downloads over https from a trusted source are “good enough security”. How many people are verifying all those signatures? Almost no one, especially not non-technical users. Signing is good to have for multiple reasons, but the practical value in verifying “is the file foo I downloaded from example.com untampered?” is very limited and the UX is hard.

      How often has a website been compromised and an install.sh modified? I can recall several stories of source repositories being compromised, but none for install scripts. Thus far, no one in the extensive HN discussion has offered an example. I will gladly add a link if you have one.

      The article is about whether it’s a security problem or not (a glaring one, even). That I personally don’t like it is beside the point; for reasons I don’t fully understand myself I am somewhat irrationally obsessed with keeping my system as “light” as possible, almost to the point of stupid-light. That’s just a personal preference.

      While this was probably not your intention, this kind of “aha! see the hypocrisy here!” shoe-horning doesn’t come across as very constructive or friendly.

      I take issue with the term “malpractice”. To me, a doctor accidentally amputating the wrong leg because he didn’t pay proper attention while texting his wife is “malpractice”, not a disagreement over the security of a certain idiom. A quick check suggests the dictionary agrees with this definition.

      I like the “especially when legally actionable because an injury or loss has been suffered”-part of that definition; as mentioned I have not been able to find any real-world compromises because of this.

      Many large projects use this method, and there are good reasons for that (something I should probably expand on in the article). These people are not all “lazy” or “careless”; in the previous thread about this (linked in other comment) a Nix developer explains this in some more depth.

      1. 8

        I’m talking about a checksum in order to know if the script changed between 2 points in time. I’m not talking about security here.

        I disagree that “a security issue that hasn’t been exploited isn’t a security issue.” Websites have been compromised. Just because an install script hasn’t been affected during one of those compromises doesn’t invalidate the risk. Installers and update files have been: Transmission and Linux Mint, off the top of my head.

        “malpractice” literally means “a bad practice”. Poor release engineering, which can introduce security risks, is malpractice.

        1. 8

          For Linux Mint the ISOs were compromised and for Transmission the macOS binary. This underscores my point that there is nothing special about these install scripts. Any server can be compromised, any code you download can be dangerous, and it’s always hard to verify pretty much any code you run from the internet (which is almost all code we run). There is nothing unique about install scripts at all versus any other software you download.

          Poor release engineering

          A lot of big projects are doing it. If lots of people are doing something like this then chances are they have good reasons for that. The other alternative explanation is that the Docker, Rust, Bundler, PHP, Nix, etc. teams are stupid and/or a bunch of lazy bums, and I don’t think that’s the case.

          1. 4

            If lots of people are doing something like this then chances are they have good reasons for that.

            wat lol. Yea, the reason is the bar has been lowered and they can get away with it.

            • For Rust (Swift, .NET Core, Electron… others I can’t think of): they also build/ship their own LLVM, because it’s easier than “knowing what distros have what llvm installed”. This is lazy. They have stupid wrappers that break compat on non-mainline arches… (where the tool would otherwise build if it wasn’t for them “saving time by shipping all deps ever”).

            You are right in that … if I trust them to run JS on my browser.. then surely I trust them to run a shell script… the YUGE difference with that bit is that a shell script has a shit load more access by default than a browser does.

            sudo? .ssh/*private*… these are things the browser “doesn’t do” without some extra work, while a shell script might have full access…….

            1. 9

              You are right in that … if I trust them to run JS on my browser.. then surely I trust them to run a shell script… the YUGE difference with that bit is that a shell script has a shit load more access by default than a browser does.

              This is not the argument I made at all.

              1. 4

                because it’s easier than “knowing what distros have what llvm installed”

                No, this is absolutely not the reason. Rust uses a fork of LLVM with patches that are not upstream yet.

                1. 0

                  It sure isn’t using the Rust-bundled LLVM on OpenBSD: rust 1.38.0 builds fine with LLVM 8.0.1

                  snippet that modifies config.toml
                  	echo 'llvm-config = "${LOCALBASE}/bin/llvm-config"' \
                  1. 4

                    Right, it’s been compatible with upstream for a couple years now, but I think they still maintain the fork and use it for official builds.

              2. 3

                Hackers made a modified Linux Mint ISO, with a backdoor in it, and managed to hack our website to point to it.


                the Transmission team removed the malicious file from their web server


                Besides install scripts having run of the system, they are also not versioned and not verified. Make a release and use a good package manager.

            2. 1

              Like this? From https://getcomposer.org/download/

              php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
              php -r "if (hash_file('sha384', 'composer-setup.php') === 'a5c698ffe4b8e849a443b120cd5ba38043260d5c4023dbf93e1558871f1f07f58274fc6f4c93bcfd858c6bd0775cd8d1') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
              php composer-setup.php
              php -r "unlink('composer-setup.php');"

              This installer script will simply check some php.ini settings, warn you if they are set incorrectly, and then download the latest composer.phar in the current directory. The 4 lines above will, in order:

              • Download the installer to the current directory
              • Verify the installer SHA-384, which you can also cross-check here
              • Run the installer
              • Remove the installer

              == end of quoted part

              Sure, it looks unwieldy and might not help against MITM, but I think it’s an interesting display of “trying to fix curl|sh”, I guess.

          2. 15

            I think this is a small front in a greater battle: should the application developer be in charge of distribution, or should the operating system vendor? This battle gets more complex with, e.g., the fact that Linux distributions aren’t very good at ABI stability, and with language-specific package managers.

            You can regenerate them from autoconf, but I don’t think many do? Even systems like FreeBSD ports just use existing configure scripts.

            If you patch the configure script, you have to regenerate it, so…

            1. 6

              I think you hit on the crux of the issue. As an upstream application developer, I want my users to get the intended experience of my application, including access to the latest releases. As a package maintainer, I want my users to get the most integrated system that works for them.

              One classic example of this dissonance is the naming of nodejs on Debian. node was already taken by another package, so Debian devs had to do a lot of patching to rename the binary. This was not a popular decision with nodejs devs.

              1. 1

                A better alternative: the application developers are in charge of making their software easy to package. The package maintainers do the packaging. Everybody wins.

              2. 5

                A standard, consistent installation mechanism for binaries, and an unknown, inconsistent installation script, are two radically different things. You can make statements about the security of the former, but everything goes out the window with the latter.

                What the author seems to be missing is the fact that with a good sandbox installer we don’t need to completely trust the authors of software or the software’s provenance.

                1. 3

                  My main issue is that I’d rather have my distro package manager manage installation than leave it up to the random collection of software authors who are generally not cooperating with each other to keep things organized on the end user’s system, especially as regards dependencies.

                  1. 3

                    For me, it was more watching people "curl http://... | sudo sh" in large coffee shops with plenty of evil twins present. At least if you’re pulling down an archive or getting something from git, TLS was usually involved. I’d see some sites try to fix this by adding -L to their curl instructions to handle the redirect to HTTPS, but why not just have the HTTPS link there in the first place?

                    1. 3

                      Most of my issue with curl | sh is that it might scatter files around the filesystem, and generally be hard to update and/or clean up after uninstalling. The main benefit of an installation system is keeping track of where things are put (most of the time, at least) and being able to update or purge in one fell swoop.

                      1. 3

                        AFAIK there is a way to detect on the server side whether the socket is being piped into bash on the other side. You do that by checking backpressure propagating via the TCP buffers: bash executes stuff as soon as it receives it. Just put a sleep 5 command at the beginning of the script and watch your buffers on the server side fill up. You will not see this if the client is sane and is not piping to bash.
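
                        A rough sketch of that trick (the port number and the ss invocation are illustrative, not a tested server setup):

                        ```shell
                        # Build a payload whose first line stalls any client piping it
                        # straight into a shell; a client saving to a file drains the
                        # socket immediately regardless of what the script contains.
                        printf '%s\n' 'sleep 5' > payload.sh
                        printf '%s\n' 'echo hello from the installer' >> payload.sh

                        # On the serving side you would then watch the connection's send
                        # queue, e.g. (illustrative, assuming the script is served on 8080):
                        #   ss -tn state established '( sport = :8080 )'
                        # If Send-Q stays non-zero for ~5 seconds, the client is almost
                        # certainly piping into a shell, and a malicious server could swap
                        # in a different payload at that point.
                        head -n 1 payload.sh   # -> sleep 5
                        ```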

                        1. 3

                          The issue with this practice is that the installation script isn’t audited by the distro maintainers. But I’m not sure how big of a problem this is.

                          1. 2

                            It’s big enough that various FAANGs and many other companies prohibit such malpractices. To the point of firing employees.

                          2. 3

                            The issue I have is not being able to check the hash or signature of the script. It took me a long time to install brew because the install docs were so simple they didn’t include a signature doc.

                            I was eventually able to find the GitHub project and compare the checksum on their release to what the web site pointed toward.

                            I think training users to run shell scripts like this is dangerous, especially less experienced users. I wouldn’t blindly download and run an exe without checking up on it.

                            So I’m not against piping curl to sh, but want to have comfort that what I’m curling is what I think I’m curling.

                            1. 6

                              I think curl | sh is all about feelings, and not meaningful security difference.

                              • It’s loss aversion. You lose the ability to inspect the script and verify the checksums. Not being able to do it feels worse than being able to do it but not bothering to.

                              • It’s about loss of control. curl | sh could detect the piping and do something nasty without leaving a trace, so you can’t gather evidence for some kind of imagined shaming/bragging/revenge later.

                              In the end, the install script is just the tip of the iceberg. It’s installing millions of lines of code, and let’s be honest: you haven’t inspected those millions of lines of code, and you won’t.

                              1. 1

                                Totally not. Browsers have less access than a shell.

                                1. 1

                                  If not using browsers, then packages provided by OS vendor.. they are a repeatable, reversible, maintained component.

                                  You don’t know what, where, why something is installed via |sh.. just that it is installed. How do you remove it?! updates?! GLHF..

                                  1. 2

                                    Take Rust for example. curl | sh of Rustup:

                                    • tells you exactly what it’s going to install and where, and how to remove it.
                                    • installs the very latest version and comes with a tool to keep the installation up to date (even nightly).
                                    • actively maintained by the Rust team itself

                                    OTOH Rust packaged by a distro:

                                    • may be 6-12 months old, which for Rust’s 6-week release schedule is an eternity.
                                    • likely uses an older version of LLVM from the system, without patches for bugs that Rust ran into.
                                    • someone makes a package once in a while and hopes it works.

                                    The upsides and downsides aren’t inherent to the method.

                                    The curl script is 500 lines of code. The whole project is ~5 million lines of code. In the end it comes down to trust. If you don’t trust Rust, then whether you install 5000000 lines of code you haven’t reviewed, or 4999500 lines of code you haven’t reviewed, it doesn’t make a material difference. I’d love the project to provide proper installers and packages for many reasons, but security isn’t one of them.

                              2. 3

                                This is kind of a repeat from a story several months ago: Piping curl to s(hell). This article is just a nicer write-up of my comment there.

                                1. 2

                                  The author is missing a number of other problems. For context, various FAANGs and other companies strictly prohibit installing software from random sources (including downloading tarballs, debs, RPMs)

                                  • Reproducibility: there is no way to install the same software in the same way on the same OS version 6 months from now. Multiply the problem for 10 or 50 applications and your infra becomes unmanageable.
                                  • Legal issue 1: distributions verify licensing compliance. sudo bash does not.
                                  • Legal 2: companies (e.g. Canonical) provide legal indemnification for license breaching but only for distributions.
                                  • Legal 3: same for patents.
                                  • Legal 4: after files have been modified multiple times, you cannot easily prove or disprove whether a given configuration file was changed by some sudo bash script or by an attacker.
                                  • Configuration/state management: contrary to tools like dpkg, you cannot track which application installed a (potentially suspicious) file. E.g. a new CA is found in /etc/ssl/certs/ and it’s unclear where it came from.
                                  • “benign” MITM: firewalls, IPS, proxies, captive portals can hijack TLS connections using internal CAs previously added by the user.
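
                                  To illustrate the configuration/state point: with a package manager the provenance question has an answer, while a file dropped by a piped script matches nothing. A sketch for Debian-family systems (dpkg assumed to be available; the file name is made up):

                                  ```shell
                                  # Which package owns a file? dpkg can answer for packaged
                                  # software, e.g.:
                                  #   dpkg -S /etc/ssl/certs/ca-certificates.crt
                                  # A file dropped by some `curl | sudo sh` has no owner on
                                  # record, so the query comes back empty:
                                  touch ./mystery-ca.crt
                                  if ! dpkg -S "$(pwd)/mystery-ca.crt" >/dev/null 2>&1; then
                                      echo "no package claims mystery-ca.crt"
                                  fi
                                  ```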

                                  If I trust docker.com enough to run dockerd

                                  This is another reason for the companies listed above to only deploy software that has been packaged, vetted, tested and run for months by 3rd parties before adopting it.

                                  you also have no guarantees that https://rust-lang.org/rust-1.39.tar.gz is really the same as git, and I don’t see anyone calling this a “glaring security vulnerability” or “malpractice”.

                                  Sounds a bit like a “what-about” fallacy, and I hope my initial comment addresses this.

                                  1. 1

                                    Curl to sudo-shell isn’t insecure, it’s just sloppy (and enforces this sloppiness on the person doing the installation).