1. 38
  1. 35

    I shall use an apt source instead, which will be able to run arbitrary code with root privileges any time it wants.

    People got really nerdsniped with this whole “server could send different code to bash” twist, which merely adds extra points for style, but doesn’t materially change the security situation of installing arbitrary code.

    You either trust the source and the security of the network connection, in which case it doesn’t matter what they could do, because they don’t. Or you don’t trust the source, in which case they could just send you a malware exe without being sneaky about it.

    1. 13

      yeah apparently downloading some windows installer to run as admin, adding random npm/.deb repos to “sudo” into your system or pip-installing is the serious-security way of installing random stuff to run as the same user you do online banking

      1. 11

        It’s the whole circle of trust thing. I trust ubuntu, I don’t trust tool.sh. I trust some well-known certificate authority, I don’t trust a self-signed CA. Someone trusts me and they invite me to lobsters. All in the name of security, safety or some benevolent goal. You can barely prove software does anything at all and that’s where we’re at. It’s a collective practice that adjusts as we go. “Is secure” isn’t even a thing. It has no concept of time in it. Ubuntu apt repo is “secure” now. It’s not “secure” later (let’s say they let their domain lapse). Then it’s “secure” again.

        I like your “style points” thread. I was thinking the same thing. I’m not trying to jump on the word arbitrary here, but I keep noticing this theme: software can barely be proved to do anything. You have file size, benchmarks, and maybe lines of code. That’s all the physical traits of software you get. Good luck to all of us. Virus scanners, for example, are style points on top of content hashing and code functionality. You still have no idea. Security is best effort. What is the complete list of what the latest version of curl “does”?

        I think it’s no different than functionality testing. All you can do is run and observe. All the haskell types in the world can’t prove “it’s secure” much less “it has no bitcoin miner in it”.

        1. 4

          I wonder how much distro maintainers review the code they pull in. I don’t think it’s feasible to review so much code. I’d expect that once a project is in, it just gets updated with “LGTM” level of scrutiny at best.

          I believe maintainers have the best intentions, but I don’t think they can be relied upon as a security barrier. I had to review the source of dependencies for a high-security work project, and I have first-hand experience of how mind-numbingly boring, slow, and exhausting such code review is.

          A crypto miner isn’t going to be a big cryptominer.c that would stand out to someone packaging the code. More likely it’s going to be an obfuscated line of system(user_input) sitting among thousands of innocent lines of code, easy to overlook in a diff.

          So I think the security of a distro mainly rests on the level of reputation that packages need to have to get included in the first place. They probably don’t have a better way to assess reputation than you do. Then it’s just hoping that developers won’t go rogue, that it’s not a bayesian-thanksgiving-turkey attack, and that someone else, somehow, is reviewing all of that code.

          1. 4

            It depends. More than zero, I’ve known one who read every line and wouldn’t miss anything big. He would pretend not to have noticed an easter egg and maybe he didn’t notice, but either a crypto miner or a dependency that makes crypto mining trivial? No way. Distribution packagers look at every dependency and see whether it could/should be turned into a separate shared package.

            There’s another relevant point: distribution packagers see the bug reports, and are loyal to the distribution and its users. They will have some interest in, perhaps sympathy with, the software they package, but in any situation where loyalty matters, they’ll be on the side of the distribution and its users.

            1. 2

              I wonder how much distro maintainers review the code they pull in. I don’t think it’s feasible to review so much code.

              Traditionally, a maintainer would be expected to be intimately familiar with what they were packaging, and could reasonably expect to understand the entirety of what the program does if not literally read all its code. That’s totally intractable in today’s world, but I don’t think we (distros) have completely moved away from the former world view.

          2. 4

            Installing apt packages/rpms/whatevers from untrusted sites harbors the exact same risk, yes. Which is why you shouldn’t do that either.

            The main problem here isn’t that there is arbitrary code executed on your machine, but that most people do not take even a short moment to reflect on that, and happily copy-and-paste installation instructions.

          3. 34

            Total let down

            $ curl https://get.tool.sh | sudo bash
            curl: (6) Could not resolve host: get.tool.sh
            1. 18

              Good, good, so the echo "curl: (6) Could not resolve host: get.tool.sh" line is serving its purpose 😈

              1. 8

                You brave, brave soul.

                1. 4

                  They were probably running it on someone else’s machine they’d already breached ;-P

              2. 24

                “If you don’t trust us, just review the 12345 line install script we copy-and-pasted from 20 SO questions.”

                1. 9

                  I recently saw one of these (for a VMware CLI) that consisted of a tar file, wrapped inside a shell script. The script looks for a marker ####ARCHIVE_BEGINS---, uses awk to write the inlined binary data to its own file and then extracts it and continues installing. Good luck reviewing that!

                  Apparently this is not uncommon, but was new to me - at least as a trick anyone would use for production distribution of a professional product. I still can’t quite work out why they bothered as obtaining the script requires a login and a lot of JavaScript, making it difficult to curl | sudo bash anyway.
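                  For the curious, the trick itself is only a few lines of shell. Here’s a minimal, self-contained sketch (file names invented; it builds a tiny self-extracting script in a temp dir and runs it):

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# Build a tiny payload archive to embed.
mkdir payload && echo "hello" > payload/greeting.txt
tar -czf payload.tar.gz payload

# Assemble the self-extracting script: shell stub + marker line + raw tar data.
cat > installer.sh <<'STUB'
#!/bin/sh
# Find the line number just past the marker, then stream the rest into tar.
offset=$(awk '/^####ARCHIVE_BEGINS---$/ { print NR + 1; exit }' "$0")
tail -n +"$offset" "$0" | tar -xz
exit 0
####ARCHIVE_BEGINS---
STUB
cat payload.tar.gz >> installer.sh
chmod +x installer.sh

# Running it in a clean directory extracts the embedded archive.
mkdir extracted && cd extracted
sh ../installer.sh
cat payload/greeting.txt    # prints "hello"
```

The `exit 0` before the marker is what keeps the shell from trying to execute the binary garbage that follows — which is also exactly why the payload is invisible to anyone skimming the script.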

                  1. 7

                    For those interested: https://makeself.io

                    1. 2

                      Makeself is a great tool for what it is! I believe if it advertised itself more in its own output, there would be fewer custom installation scripts.

                    2. 6

                      Something like this, https://github.com/megastep/makeself ?

                      It used to be simple. Something like https://cgit.freebsd.org/src/tree/usr.bin/shar/shar.sh

                      1. 3

                        I’ve seen it in installers going back to at least the early 00s and it’s probably a lot older than that. I think in Linux binaries of some video games and ATI graphics drivers.

                        The common thing I’ve seen is using awk or something to find the offset of the marker, then dd or head to make a copy with it stripped off?

                        1. 4

                          I remember the ones that Loki games used in the late 90s/early 00s. They were shell archives, and when bash 4 came out it broke them all.

                    3. 5

                      If you don’t trust the source of the bash install script, why do you trust their software?

                      If you don’t trust that the bash install script isn’t getting MITM’d somehow, why do you trust that the rest of the software isn’t, too?

                      1. 2

                        My issue is not the bash script, my issue is the security put in place to host this bash script.

                        Usually distributions (I can only speak about Fedora/CentOS/RHEL) have tight ACLs, and review processes for new packages to be uploaded. Supply chain attacks are taken seriously. There are processes and tools in place for auditing the supply chain.

                        Who administers https://get.tool.sh/? How do you audit that the supply chain hasn’t been tainted? Who uploads a new version? How do they upload a new version? Is there an approval process? If a version is tainted, how can people know whether they installed that version or not?

                        Also, on a note unrelated to security: packages provide a way to track installed files, and can easily be purged. Distributions usually avoid pre-install or post-install scripts that create untracked files the package manager is not aware of. (YMMV, exceptions do occur; for example, postgresql usually creates its data directory in a post-install script.)

                        1. 1

                          Do you audit and ensure the review process was carried out correctly for every piece of software you install?

                          1. 2

                            Well… I install my software from the official Fedora repos, so I assume every package was approved. I can look it up on koji.

                            Regarding what I install outside of my package manager (golang and/or rust libraries when I develop), I develop on local VMs. The threat model is these programs reading my firefox cookies, ssh keys etc… Which they can’t since they’re sandboxed.

                        2. 1

                          I might trust their software, their laptop, and even their build server (if it’s not their laptop), but not their web host.

                          1. 2

                            So how would you detect if their web host tampered with the binary? Did you verify a signature? Did you verify a signature that you didn’t get from the same source as the binary?
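                            To illustrate the “same source” problem: a checksum or signature only helps if it reaches you over a channel the web host can’t touch. A toy sketch with sha256sum (the “out-of-band” checksum is just a local file here; all names invented):

```shell
set -eu
cd "$(mktemp -d)"

echo "legit binary" > tool          # what the web host serves
sha256sum tool > tool.sha256        # checksum published out-of-band

# After downloading, verify before running:
sha256sum -c tool.sha256            # prints "tool: OK"

# A tampering web host changes the binary; the check now fails:
echo "evil binary" > tool
if sha256sum -c tool.sha256 >/dev/null 2>&1; then
    echo "tampering went undetected"
else
    echo "tampering detected"
fi
```

But if `tool.sha256` is served from the same host as `tool`, an attacker who controls the host simply regenerates both, and the check proves nothing — which is the point being made here.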

                            1. 1

                              That’s my point, and why I’d rather get things through a distro.

                          2. 1

                            If you don’t trust the source of the bash install script, why do you trust their software?

                            If you don’t trust the install script, don’t trust the software. That’s why you should read the install script first.

                            If it’s a massive 200+ line script that’s hard to understand, I’d probably avoid the software in question if I have any choice in the matter since it’s probably doing something unspeakable to my system. If it’s just installing some files in the appropriate dirs, that’s a different story.

                            The best way to distribute software, IMO, is a simple .tar.gz with $PREFIX subdirs laid out (bin/, share/, lib/, etc). Just extract the tarball into $PREFIX, no install scripts necessary.
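                              A minimal sketch of that layout (tool name and contents invented; built and “installed” entirely in a temp dir):

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# What the developer ships: a tarball with $PREFIX-shaped subdirs.
mkdir -p stage/bin stage/share/mytool
printf '#!/bin/sh\necho mytool 1.0\n' > stage/bin/mytool
chmod +x stage/bin/mytool
echo "docs" > stage/share/mytool/README
tar -czf mytool-1.0.tar.gz -C stage bin share

# What the user does: no install script, easy to audit, easy to remove.
PREFIX="$workdir/prefix"
mkdir -p "$PREFIX"
tar -xzf mytool-1.0.tar.gz -C "$PREFIX"
"$PREFIX/bin/mytool"    # prints "mytool 1.0"
```

Uninstalling is `tar -tzf` to list the files and delete them — everything the tarball touched is knowable up front.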

                            1. 2

                              The best way to distribute software, IMO, is a simple .tar.gz with $PREFIX subdirs laid out (bin/, share/, lib/, etc). Just extract the tarball into $PREFIX, no install scripts necessary.

                              This breaks down if your software needs a user, or a /var/lib/myapp owned by that user, or something else.

                              “You could add this to the README!” And yeah, sure. But a lot of devs aren’t all that Unix-savvy as most people here are.

                              Maybe what’s really needed is a standard way to set up this stuff, e.g.:

                              $ setup-sw setup.json
                                  User _arp242 (no login)
                                  Directory /var/lib/arp242 owned by _arp242

                              Basically an “ansible-light” with a UI specifically catered to this use case.

                                That loads and loads of people keep writing these scripts demonstrates there’s a use case here that’s not yet adequately solved.

                              Actually, this could even do the download and extraction as well:

                              $ setup-sw https://install.arp242.net/setup.json
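                                The setup.json such a tool consumes might look something like this (entirely hypothetical, just matching the output sketched above):

```json
{
  "user": { "name": "_arp242", "login": false },
  "dirs": [
    { "path": "/var/lib/arp242", "owner": "_arp242" }
  ]
}
```

Declarative input like this is what would make it auditable in a way a 200-line shell script isn’t: the tool enumerates everything it will do before doing it.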
                              1. 1

                                How is this different from a packaging format and package manager? I can just as easily run dnf install https://install.seirdy.one/package.rpm, for instance.

                                Plenty of packages come with init scripts or Systemd units that create and run as users too; web servers like Nginx are popular examples.

                                1. 1

                                  Yeah, it’s not really, except that it’s probably easier to create. I don’t have a system with dnf; I’m not even sure which distro(s) it belongs to (Fedora/RedHat/CentOS, I think?) And then there’s apt, and pacman, and xbps, and FreeBSD’s pkg, etc. etc.

                                    People like these shell scripts because they’re simple, they can test them, and when done right they work more or less anywhere. They’re also low-maintenance, since there aren’t 10 different packages to update and test.

                          3. 4

                            some-package-manager install stuff

                            Yeah, this is much better and safer too!

                            1. 5

                                Package managers make it possible to track which files belong to the software being installed. That prevents such mistakes.

                            2. 2

                              Is there actually a target audience for this kind of post?

                              There are certainly people who are very concerned with the security of their machine; and whether it be “only downloading distro-packaged applications and assuming the distro already does the needed checks” or “only install software onto an air-gapped machine running openbsd in a VM unless self-compiled only from software where each file has been GPG signed by someone I’ve personally attended at least 3 weddings for”, these people generally already have security practices in place. And I strongly doubt they are unaware that “downloading random things from the Internet and giving them root access” is not always a great idea.

                              On the flip side, we’ve got developers running lord knows what from NPM or PyPI or whathaveyou, folks who are using distro-style packages from 3rd party vendors, people installing precompiled applications distributed via Github – heck we have folks running Windows and using magical mysterious NSIS installers that don’t always even have the benefit of being open source.

                              Going off of the assumption that these are still developers, systems administrators, or some variety of computing enthusiasts though (because, really, who else is reading Lobsters), they almost definitely already know “Hmm letting literally anything run random stuff, especially as a privileged user, may not be great.” So, again, they already know the message here.

                              So what’s the real audience, what’s the real purpose this exists? Is this some sort of abstract art about more developers not going through the essay per distro to push their project through existing distro-based distribution channels instead? Some sort of gatekeeping from Serious Computer People to keep Not So Serious Computer People away?

                              I really don’t get it. What’s the point here?

                              1. 2

                                The point is that an install script should be simple and easy to read. Users should read the install script before running it so they understand what’s going on.

                                The essay-per-distro isn’t the only path; users can also build from source and install. A Makefile or sane build system should output a list of files being installed with sources and destinations so that users understand what’s happening on their machines.

                                This post describes installing software blindly, allowing it to do basically anything to your computer with root privileges.

                                Most software doesn’t need a super fancy 200+ line install script when extracting a simple .tar.gz into $PREFIX would suffice.

                              2. 2

                                This is awful for security, but there’s not a great deal of discussion in the open source world about how to actually build trust into the system. I’ve been trying to drive this conversation a bit in the LLVM world. Our current release process involves a volunteer for each supported platform uploading binaries and there’s some discussion of automating the signing process. This misses the point that the signature doesn’t, currently, provide any value. The signing key is provided by whoever uploads the binary and there is no central mechanism for ensuring that the key is stored safely. A compromise of the VM that the person building the binaries uses for the builds would allow someone to tamper with the build. Even without the compromise, the key identity is associated with an email address and (probably) a GitHub username: there’s no guarantee that the person doing the uploads isn’t malicious, just that they’re (probably) the same malicious person who uploaded the last binary.

                                At $WORK, releases of open source projects are built by:

                                • Cloning the repos to an internal copy.
                                • Applying any patches that we want.
                                • Running the release pipeline, which:
                                  • Clones the repo to a fresh VM
                                  • Runs the build, runs the tests, and uploads a build artefact
                                  • Signs the result (with a key that’s not exposed to the build VM or anywhere else).

                                The FreeBSD ports build process is fairly similar. Poudriere builds each port in a clean jail (in dependency order, installing the build and run dependencies in the jail before starting the build of each port). The final package build is then signed. I don’t remember what’s done to keep that key safe, but I remember that the process was reviewed by several people. A jail escape in the build of a package could compromise the whole set and there’s no security auditing of the source code for the various ports, but at least you get some kind of chain of custody.

                                  A lot of open source projects don’t even support reproducible builds, so there’s absolutely no way of verifying that a random release binary on GitHub is the real one.

                                1. 1

                                  I saw this on orange-site today: https://volta.sh/

                                  Is this meant to be a jab at that? I find it rather amusing.

                                  1. 10

                                    It’s a jab at all sites like that.

                                  2. 1

                                    As other comments noted, the criticism of Tool’s trustworthiness is not quite “right” but I don’t think that’s what this is really about. To me this is a reflection on an industry that’s moving too fast. How did we end up having to choose between running random stuff to get our work done or putting all our faith in FAANG notaries to keep us safe on our own devices?

                                    It feels like a yearning for computing technology that moved on more human timeframes - whether that’s distro release cadences with libraries included, GNU/X/GTK/Qt, Win32, Cocoa. Stuff that stuck around. For me iOS was the turning point where suddenly it was normal that your software had to be substantially reworked each year to remain functional. Add the explosion of the web platform and now we’re frantically curl|bashing just to keep our heads above water. In truth we were always dependent on third parties - the trust just feels much more tenuous when vendors are coming and going so rapidly.

                                    1. 1
                                      2.Ram Disk:> wget -O tool https://get.tool.sh
                                      --2021-05-04 07:28:23--  https://get.tool.sh/
                                      Resolving get.tool.sh... failed: timed out.
                                      wget: unable to resolve host address `get.tool.sh'

                                      Bummer. I wanted to try the tool on my Amiga.