1. 1

    If I understand the post correctly, this seems like too big and obvious a failure. I kind of can’t believe Debian and Ubuntu never thought about that.

    Did someone try injecting a manipulated package? I’d assume that the signed manifest contains not only URLs and package versions but at least some kind of shasum?

    1. 2

      Looks like that’s exactly what apt is doing - it verifies the checksum served in the signed manifest: https://wiki.debian.org/SecureApt#How_to_manually_check_for_package.27s_integrity

      The document mentions it uses MD5, though; maybe there’s a vector for collisions here, but it’s not as trivial as the post indicates, I’d say.

      Maybe there’s marketing behind it? Packagecloud offers repositories with TLS transport…

      1. 2

        Modern apt repos contain SHA256 sums of all the metadata files, signed by the Debian gpg key & each individual package metadata contains that package’s SHA256 sum (see the sketch below).

        That said, they’re not wrong that serving apt repos over anything but https is inexcusable in the modern world.
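
        For the curious, here’s a minimal sketch of walking that chain by hand (Python; the file names, the amd64 path, and foo.deb are made up for illustration), once the Release signature has been checked with gpg --verify Release.gpg Release:

            # Sketch of apt's chain of trust, verified by hand.
            # Assumes Release, Packages, and foo.deb sit in the cwd and the
            # Release signature has already been verified with gpg.
            import hashlib
            import re

            def sha256_of(path):
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                return h.hexdigest()

            release = open("Release").read()
            packages = open("Packages").read()

            # 1. The gpg-verified Release lists each metadata file's SHA256.
            m = re.search(r"([0-9a-f]{64})\s+\d+\s+main/binary-amd64/Packages$",
                          release, re.M)
            assert m and m.group(1) == sha256_of("Packages"), "metadata tampered"

            # 2. The now-trusted Packages file lists each .deb's SHA256.
            # (A real client reads the stanza for the specific package;
            # this grabs the first SHA256 field for brevity.)
            m = re.search(r"^SHA256: ([0-9a-f]{64})$", packages, re.M)
            assert m and m.group(1) == sha256_of("foo.deb"), "package tampered"
            print("chain verified: signature -> metadata hashes -> package hashes")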

        1. 2

          You must live on a planet where there are no users who live behind bad firewalls and MITM proxies that break HTTPS, because that’s why FreeBSD still doesn’t use HTTPS for … anything? I guess we have it for the website and SVN, but not for packages or portsnap.

          1. 1

            There’s nothing wrong with being able to use http if you have to; https should be the default, however.

            1. 1

              https is very inconvenient to do on community-run mirrors.

              See also: clamav antivirus

              1. 1

                In the modern world with letsencrypt it’s nowhere near as bad as it used to be, though.

                1. 1

                  I don’t think I would trust third parties to be able to issue certificates under my domain.

                  It is even more complicated for clamav, where servers may be responding to many different domain names based on which pools they are in. You would need multiple wildcards.

          2. 1

            each individual package metadata contains that package’s SHA256 sum

            Is the shasum of every individual package not included in the (verified) manifest? That would be a major issue then, as it can be forged alongside the package.

            But if it is, then forging a package would require a SHA256 collision, which should be safe. And package integrity is verified.

            Obviously, serving via TLS won’t hurt security, but it does (given that letsencrypt is fairly young) depend on a centralized CA structure and add costs - and it arguably adds a little more privacy about which packages you install.

            1. 3

              A few days ago I was searching about this same topic after seeing the apt update log, and I found this site with some ideas about it, including the point about privacy: https://whydoesaptnotusehttps.com
              I think the point about intermediate cache proxies and bandwidth use for the distribution servers probably adds up to more than the cost of a TLS certificate (many distros offer alternative torrent files for the live cd to offload this cost).

              Also, the packagecloud article implies that serving over TLS removes the risk of MitM, but it just makes it harder - and without certificate pinning, only a little harder. I’d mostly chalk this article up to marketing; there are calls-to-action sprinkled through the text.

              1. 1

                https://whydoesaptnotusehttps.com

                Good resource, sums it up pretty well!

                Edit: It doesn’t answer the question of whether SHA256 sums for each individual package are included in the manifest. But if they weren’t, all of this would make no sense, so I assume and hope so.

                1. 2

                  Hi. I’m the author of the post – I strongly encourage everyone to use TLS.

                  SHA256 sums of the packages are included in the metadata, but this does nothing to prevent downgrade attacks, replay attacks, or freeze attacks (see the sketch below).

                  I’ve submitted a pull request to the source of “whydoesaptnotusehttps” to correct the content of the website, as it implies several incorrect things about the APT security model.

                  Please re-read my article and the linked academic paper. The solution to the bugs presented is to simply use TLS, always. There is no excuse not to.
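
                  To make the freeze/replay point concrete, here’s a sketch (Python; the fetched file name is hypothetical) of the freshness check a signature alone doesn’t give you. Debian Release files can carry a Valid-Until field, which apt can enforce via Acquire::Check-Valid-Until; without it, a signed-but-stale file verifies perfectly:

                      # Sketch: a valid signature does not imply fresh metadata.
                      # A Release file signed last year still verifies today, so
                      # a MITM can replay it to freeze you on vulnerable versions.
                      from datetime import datetime, timezone
                      from email.utils import parsedate_to_datetime
                      import re

                      release = open("Release").read()  # hypothetical fetched file
                      m = re.search(r"^Valid-Until: (.+)$", release, re.M)
                      if not m:
                          raise SystemExit("no Valid-Until: replayable forever")
                      # Dates look like: Sat, 01 Jan 2022 00:00:00 UTC
                      if parsedate_to_datetime(m.group(1)) < datetime.now(timezone.utc):
                          raise SystemExit("stale metadata: possible freeze/replay")
                      print("metadata is fresh")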

                  1. 2

                    TLS is a good idea, but it’s not sufficient (I work on TUF). TUF is a consequence of this research; you can find other papers about repository security (as well as current integrations of TUF) on the website.

                    1. 1

                      Yep, TUF is great – I’ve read quite a bit about it. Is there an APT TUF transport? If not, it seems like the best APT users can do is use TLS and hope someone will write apt-transport-tuf for now :)

                    2. 1

                      Thanks for the post and the research!

                      It’s not that easy to switch to https: a lot of repositories (incl. the official Ubuntu ones) do not support https. Furthermore, most cloud providers provide their own mirrors and caches. There’s no way to verify whether the whole “apt chain” of package uploads, mirrors, and caches is using https. Even if you enforce HTTPS, the described vectors (if I understood correctly) remain an issue in the mirror/cache scenario.

                      You may be right that the current mitigations for the said vectors are not sufficient, but I feel like a security model in package management that relies on TLS is not sufficient either - the mitigation for the attack vectors you’ve found needs to be something else, e.g. signing and verifying the packages upon installation.

                2. 2

                  Is the shasum of every individual package not included in the (verified) manifest? That would be a major issue then, as it can be forged alongside the package.

                  Yes, there’s a chain of trust: the hash of each package is contained within the repo manifest file, which is ultimately signed by the Debian archive key. It’s a bit like a git archive - a chain of SHA256 sums of which only the final one needs to be signed to trust the whole (see the toy sketch below).

                  There are issues with http downloads - e.g. it reveals which packages you download, so by inspecting the data flow an attacker could learn which packages you’ve installed and which attacks would be likely to succeed - but package replacement on the wire isn’t one of them.
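
                  The whole idea fits in a few lines - here’s a toy model (Python, with made-up contents; not apt’s real file formats) of a hash chain where one signature at the root covers everything:

                      # Toy model of the git-like chain: sign once at the root,
                      # then hashes vouch for every byte below.
                      import hashlib

                      def h(data: bytes) -> str:
                          return hashlib.sha256(data).hexdigest()

                      debs = {"foo.deb": b"fake contents",
                              "bar.deb": b"more fake contents"}

                      # Per-package metadata (like apt's Packages file).
                      packages = "\n".join(f"{n} SHA256:{h(d)}"
                                           for n, d in debs.items())

                      # Manifest (like apt's Release file): hash of the metadata.
                      # Only THIS string needs the gpg signature.
                      release = f"Packages SHA256:{h(packages.encode())}"

                      # Verification walks back down the chain.
                      assert release.endswith(h(packages.encode()))
                      for name, data in debs.items():
                          assert f"{name} SHA256:{h(data)}" in packages
                      print("one signature authenticates every package")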

          1. 11

            Gack! That isn’t a micro-optimization.

            That is “Completely replace your core threading model with something radically different.”

            1. 9

              Hi! Thanks for reading my post.

              Flipping that configure switch does not radically change the threading model. It simply changes the method by which pre-emption occurs. Let me explain further.

              --enable-pthread: This enables the use of one single OS level thread which runs a simple loop. It essentially “pings” the Ruby VM at a set rate. The Ruby VM uses this ping to know when it is time to switch between threads.

              --disable-pthread: This disables the use of one single OS level thread. Instead, it uses a timer signal (VTALRM) to send periodic pings to the Ruby VM at a set rate. The Ruby VM uses this ping to know when it is time to switch between threads.

              The actual threading implementation itself is unchanged. It is simply the source of the timer that changes. 20 million sigprocmask calls is quite a price to pay for time tracking.
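
              If it helps to see the two mechanisms side by side, here’s a toy sketch (Python rather than the VM’s C; the names and rates are made up) of the flag-setting heartbeat both options boil down to:

                  # Toy model of Ruby 1.8's two preemption "ping" sources.
                  # Both just set a flag the VM polls between instructions
                  # to decide when to switch green threads; only the source
                  # of the timer differs.
                  import signal
                  import threading
                  import time

                  switch_pending = False  # what the VM would poll

                  # --enable-pthread style: one OS thread pings at a set rate.
                  def timer_thread(interval=0.01):
                      global switch_pending
                      while True:
                          time.sleep(interval)
                          switch_pending = True

                  # --disable-pthread style: a VTALRM signal pings instead.
                  def on_vtalrm(signum, frame):
                      global switch_pending
                      switch_pending = True

                  use_pthread_style = False
                  if use_pthread_style:
                      threading.Thread(target=timer_thread, daemon=True).start()
                  else:
                      signal.signal(signal.SIGVTALRM, on_vtalrm)
                      # ITIMER_VIRTUAL counts CPU time and delivers SIGVTALRM.
                      signal.setitimer(signal.ITIMER_VIRTUAL, 0.01, 0.01)

                  while not switch_pending:  # burn CPU so the timer fires
                      pass
                  print("time to switch threads")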

              1. 1

                I thought I’d take a closer look…. So I pulled the latest ruby 2.4 source, ran ./configure --help, and found….

                --enable-pthread        obsolete, and ignored
                

                So I pulled the oldest ruby on the ruby website. 2.0.0-p648

                The same.

                So I took a closer look at your post. Oh yes. You’re using 1.8.7

                Ok, so you’re probably right.

                But I will say that this micro-optimization doesn’t matter.

                I think you will find moving to 2.4 will speed things up way way more than tweaking that option. (Plus give you a lot of very nifty new stuff).

                Anyhoo, pull 1.8.7-p374, yup, what you’re talking about is in eval.c

                Hmm. The get/set context seems to be in the longjmp/setjmp implementation, which is used in the ruby thread context switching.

                Hmm. The fact that shaving a setjmp/longjmp allows them to get by without sigprocmask makes me worried. It sort of implies that there is a (possibly narrow) window in which signal delivery at thread context switch time might do unfortunate things.

                Threading is very hard. Threading in the presence of signal delivery is extremely hard to get perfectly right.

                I became a little obsessed with this post, since commercial reasons forced me against my will to write a swapcontext-based scheduler…. And yes, sigprocmask is a hot spot, and no, I can’t get rid of it without creating a horrid little window of nastiness.

                It’s one of these things where the devil is in the fine fine fine details.
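
                To show the shape of the problem, here’s a toy sketch (Python, with generators standing in for swapcontext) of why the mask fiddling exists: the preemption signal gets blocked across the switch so it can’t land in the half-switched window, and those per-switch mask calls are exactly where sigprocmask shows up hot:

                    # Toy scheduler: block SIGVTALRM around each "context
                    # switch" so the preemption handler can't fire inside it.
                    import signal

                    TIMER_SIGS = {signal.SIGVTALRM}

                    def switch_to(coroutine):
                        # One mask syscall going in...
                        old = signal.pthread_sigmask(signal.SIG_BLOCK, TIMER_SIGS)
                        try:
                            next(coroutine)  # the window a signal must not hit
                        finally:
                            # ...and one coming out - the sigprocmask hot spot.
                            signal.pthread_sigmask(signal.SIG_SETMASK, old)

                    def worker():
                        while True:
                            yield  # one slice of green-thread work

                    t = worker()
                    for _ in range(3):
                        switch_to(t)
                    print("three switches, none in the unsafe window")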

                1. 1

                  I’m glad you’re looking at this a bit more. I really wanted to go back much earlier and look at the development of this feature to see when, if ever, a regression was introduced. Presumably there was some ruby version without this feature? Or a glibc that didn’t call sigprocmask? And then somebody made a change (“it’s only one syscall”) that destroyed performance, but nobody seemed to notice until much later.

                  1. 1

                    I would look for signal handlers doing the wrong thing on thread context switch.

                    But I sort of wouldn’t bother trying to debug something that is way beyond end of life.

              2. 2

                It’s a pretty heavy cost to pay for programs that don’t even use threads, however.

                1. 4

                  I know.

                  Every time somebody says, “Let’s add threading to something. If you don’t need it it won’t cost you.”…. I sigh.

                  It always costs.

                  In hidden complexity, in hidden bugs, even (in the case of Java) hidden threads whirring about doing stuff you didn’t explicitly ask for.

                  1. 2

                    Every time somebody says, “Let’s add threading to something. If you don’t need it it won’t cost you.”

                    People say that? Where do they buy their drugs, because I definitely don’t want anything from their supplier?

                    1. 2

                      But don’t worry, we’re adding shared memory primitives and workers/threads to Javascript, and this time it’ll be different!