1. 1

    TIL, I thought docker (and pretty much any other container runtime) was built around unshare(2), not clone(2)

    1. 1

      I personally prefer coredumpctl(1).

      1. 1

        This reads like some google-hype-driven alternative to algo.

        Edit: it does mention algo here

        Outline isn’t the only homebrew VPN available: Security researcher Dan Guido launched a similar project in late 2016.

        1. 5

          Sort of an unfortunate name, as Heimdal (one l) is the name of the de facto standard, widely-used implementation of Kerberos 5.

          1. 3

            Not to mention a widely-used toolkit for flashing Samsung phones.

            1. 4

              And my 4-year-old slowly progressing RESTful API framework! https://github.com/binarymax/heimdall

              1. 3

                Not to mention the 462 repositories on github

                Edit: to be fair, it seems it’s more like 459 if we subtract the ones already mentioned

                1. 2

                  Hey, I used to be on the 1st page of that result list!

                  Damn Idris Elba for being so charismatic.

                  1. 3

                    I starred it for maximum visibility, but one can only do so much

          1. 16

            I don’t see a lot of value in the changes to true that Rob complains about.

            However, I also don’t see how having your shell scripts depend on an empty file with a particular name, so that you can run that command to get a 0 status code, counts as “good software”.

            I don’t suppose there’s a practical problem with doing it that way[0], but imagine you have to explain true to an alien who knows a great deal about programming, but has no background with unix.

            [0] I’m tempted to argue that every change in this series of tweets is the predictable consequence of true being a file. So far as you think these changes are bad, you should be bothered by the original decision.

            1. 3

              Not to mention that performance is another reason true and false were made shell builtins.

              1. 1

                Well, you would have to explain to that alien what unix is and how it works anyway, because true can only be “true” on unix systems.

                You could also tell that alien: “executing a file on unix will return successfully, unless the program specifies otherwise. An empty file is an empty program and thus does nothing, so it returns successfully.” And he doesn’t even need his weird alien programming logic ;)
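                That explanation can be checked directly with a throwaway file (not the real /bin/true):

```shell
# An empty executable file: the shell falls back to running it as an
# (empty) script, which does nothing and exits with status 0, i.e. "true".
t=$(mktemp)
chmod +x "$t"
"$t"
echo $?        # prints 0
rm -f "$t"
```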

              1. 1

                If I understand the post correctly, this seems like too big and obvious a failure to be real. I kind of can’t believe Debian and Ubuntu never thought about that.

                Did someone try injecting a manipulated package? I’d assume that at least the signed manifest contains not only URLs and package version but also some kind of shasum at least?

                1. 2

                  Looks like that’s exactly what apt is doing: it verifies the checksum served in the signed manifest: https://wiki.debian.org/SecureApt#How_to_manually_check_for_package.27s_integrity

                  The document mentions it uses MD5 though, maybe there’s a vector for collisions here, but it’s not as trivial as the post indicates, I’d say.

                  Maybe there’s marketing behind it? Packagecloud offers repositories with TLS transport…

                  1. 2

                    Modern apt repos contain SHA256 sums of all the metadata files, signed by the Debian gpg key & each individual package metadata contains that package’s SHA256 sum.

                    That said, they’re not wrong that serving apt repos over anything but https is inexcusable in the modern world.
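                    The chain-of-trust idea can be sketched with a toy example (these are not apt’s real file formats, and in a real repo the top-level Release file is additionally gpg-signed): one top-level file lists the hashes of the indexes, which in turn list the hashes of the packages, so only the top needs a signature.

```shell
# Toy hash chain in the style of apt's Release -> Packages -> .deb
cd "$(mktemp -d)"
echo 'pretend package contents' > pkg.deb
sha256sum pkg.deb > Packages     # index lists package hashes
sha256sum Packages > Release     # top file lists index hashes (gpg-signed in real apt)
# Verification walks the chain downward; both checks print "<name>: OK":
sha256sum -c Release && sha256sum -c Packages
```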

                    1. 2

                      You must live on a planet where there are no users who live behind bad firewalls and MITM proxies that break HTTPS, because that’s why FreeBSD still doesn’t use HTTPS for … anything? I guess we have it for the website and SVN, but not for packages or portsnap.

                      1. 1

                        There’s nothing wrong with being able to use http if you have to: https should be the default however.

                        1. 1

                          https is very inconvenient to do on community-run mirrors

                          See also: clamav antivirus

                          1. 1

                            In the modern world with letsencrypt it’s nowhere near as bad as it used to be though.

                            1. 1

                              I don’t think I would trust third parties to be able to issue certificates under my domain.

                              It is even more complicated for clamav where servers may be responding to many different domain names based on which pools they are in. You would need multiple wildcards.

                      2. 1

                        each individual package metadata contains that package’s SHA256 sum

                        Is the shasum of every individual package not included in the (verified) manifest? That would be a major issue then, as it can be forged alongside the package.

                        But if it is, then forging packages should require SHA256 collisions, which should be safe. And package integrity verified.

                        Obviously, serving via TLS won’t hurt security, but (given that letsencrypt is fairly young) it would depend on a centralized CA structure and add costs - and arguably add a little more privacy about which packages you install.

                        1. 3

                          A few days ago I was searching about this same topic, after seeing the apt update log, and found this site with some ideas about it, including the point about privacy: https://whydoesaptnotusehttps.com
                          I think the point about intermediate cache proxies and bandwidth use for the distribution servers probably adds up to more than the cost of a TLS certificate (many offer alternative torrent files for the live cd to offload this cost).

                          Also, the packagecloud article implies that serving over TLS removes the risk of MitM, but it just makes it harder, and without certificate pinning only a little harder. I’d mostly attribute this article to marketing; there are calls-to-action sprinkled through the text.

                          1. 1

                            https://whydoesaptnotusehttps.com

                            Good resource, sums it up pretty well!

                            Edit: Doesn’t answer the question about whether SHA256 sums for each individual package are included in the manifest. But if not, all of this would make no sense, so I assume and hope so.

                            1. 2

                              Hi. I’m the author of the post – I strongly encourage everyone to use TLS.

                              SHA256 sums of the packages are included in the metadata, but this does nothing to prevent downgrade attacks, replay attacks, or freeze attacks.

                              I’ve submitted a pull request to the source of “whydoesaptnotusehttps” to correct the content of the website, as it implies several incorrect things about the APT security model.

                              Please re-read my article and the linked academic paper. The solution to the bugs presented is to simply use TLS, always. There is no excuse not to.

                              1. 2

                                TLS is a good idea, but it’s not sufficient (I work on TUF). TUF is the consequence of this research, you can find other papers about repository security (as well as current integrations of TUF) on the website.

                                1. 1

                                  Yep, TUF is great – I’ve read quite a bit about it. Is there an APT TUF transport? If not, it seems like the best APT users can do is use TLS and hope someone writes apt-transport-tuf for now :)

                                2. 1

                                  Thanks for the post and the research!

                                  It’s not that easy to switch to https: a lot of repositories (incl. the official ones of Ubuntu) do not support https. Furthermore, most cloud providers provide their own mirrors and caches. There’s no way to verify whether the whole “apt-chain” of package uploads, mirrors and caches is using https. Even if you enforce HTTPS, the described vectors (if I understood correctly) remain an issue in the mirrors/caches scenario.

                                  You may be right that current mitigations for the said vectors are not sufficient, but I feel like a security model in package management that relies on TLS is simply not sufficient either, and the mitigation of the attack vectors you’ve found needs to be something else - e.g. signing and verifying the packages upon installation.

                            2. 2

                              Is the shasum of every individual package not included in the (verified) manifest? That would be a major issue then, as it can be forged alongside the package.

                              Yes, there’s a chain of trust: the signature of each package is contained within the repo manifest file, which is ultimately signed by the Debian archive key. It’s a bit like a git archive - a chain of SHA256 sums of which only the final one needs to be signed to trust the whole.

                              There are issues with http downloads - e.g. it reveals which packages you download, so by inspecting the data flow an attacker could learn which attacks would be likely to succeed - but package replacement on the wire isn’t one of them.

                      1. 3

                        Always interesting to find the work that you do in the wild. I’m surprised the recommendations don’t say “try to use a TUF transport to secure your APT repository”. Although it’s not as trivial as it sounds, there are a lot of success stories for TUF in OSS package managers today. Just check the website

                        1. 3

                          After realizing on our last project that setting up + running a private Docker registry is non-trivial,

                          This is extremely trivial though. What problems did you run into?

                          Although it’s not obvious, docker content trust is a way better security framework than git repositories (even with signed git tags). I just wanted to point that out in case you were not aware of it.

                          I’m also curious to see how this works with nested FROM statements. What happens if I have a FROM [git-hosted-thing] – do I have to do docker-get [git-hosted-thing] and then do my build?

                          Either way, it sounds like a fun hackathon-like idea. Congrats!

                          edit: Sorry I didn’t mean to come off as condescending (I just realized it does after re-reading what I wrote). I’m sorry….

                          1. 6

                            The word secure is somewhat meaningless without enough context. Also, HTTPS doesn’t immediately translate to secure, and adding “not secure” to the url bar doesn’t achieve much either. AFAIR chrome still mistreats the “target=_blank” attribute…

                            1. 15

                              This is a common argument that I never understood the utility of. HTTPS is table stakes of online security, as there’s no security to be had if anyone on the network path can modify the origin contents.

                              There’s plenty of actual research and roadmaps on indicators like Not Secure, and the eventual goal is indeed to mark the insecure option Not Secure instead of marking HTTPS as Secure. The web is a complex slow moving beast, but this is exactly a step in that direction!

                              Anyway, if there’s one thing experience showed us is that trying to convey “context” on the security status of a TLS connection to users is a losing proposition.

                              1. 4

                                There’s plenty of actual research and roadmaps on indicators like Not Secure, and the eventual goal is indeed to mark the insecure option Not Secure instead of marking HTTPS as Secure. The web is a complex slow moving beast, but this is exactly a step in that direction!

                                Not that I don’t believe you, but mind pointing me at this research?

                                Anyway, if there’s one thing experience showed us is that trying to convey “context” on the security status of a TLS connection to users is a losing proposition.

                                This is exactly my concern, it seems that sprinkling “security” hints to non-technical users usually leads to them making the wrong assumptions.

                                1. 1

                                  I am focusing on a specific point in your post

                                  there’s no security to be had if anyone on the network path can modify the origin contents.

                                  This can be addressed by adding signatures rather than encrypting the whole page. There are useful applications, such as page caching in low-bandwidth situations, that are defeated by encrypting everything.
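                                  A minimal sketch of that sign-don’t-encrypt idea, using openssl and hypothetical file names: the origin signs the page, any cache can serve the page and its signature unmodified, and clients verify against the publisher’s public key.

```shell
# Sign a page with a private key; a cache can serve page.html + page.sig
# as-is, and clients verify against the published public key.
cd "$(mktemp -d)"
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem 2>/dev/null
openssl pkey -in key.pem -pubout -out pub.pem
echo 'cacheable page content' > page.html
openssl dgst -sha256 -sign key.pem -out page.sig page.html
openssl dgst -sha256 -verify pub.pem -signature page.sig page.html   # prints "Verified OK"
```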

                              1. 3

                                I don’t think the MitM vector is clearly described in the article (or the blogpost it links to, for that matter). Anyone care to elaborate on why this is MitM-able?

                                1. 2

                                  From reading the article this is better described not as MitM but as reducing the security of a popular workflow back to the level equivalent to software wallets. Although I could probably find a way to explain why it is in some sense MitM.

                                  The idea of a hardware wallet is partly that the limited protocols it uses make it very hard to attack; so the ability of a worm which runs under your user account on your desktop to manipulate your payments is removed, unless it finds a vulnerability in narrow-scope software.

                                  In this case, one of the workflows includes doing something in Javascript on the desktop side, while the verification on the token side is optional. This means that there is a workflow where manipulating your browser is enough to trick you into making a different payment than you expected.

                                  1. 2

                                    That I could understand, but (in my very humble opinion) that sounded more like a CSRF-like vulnerability than MitM. Either way, that’s just semantics :)

                                    1. 2

                                      It just depends on what you would call end-to-end. I think the idea of calling it MitM is that you don’t trust your desktop and want to trust only the hardware wallet. You still use your desktop for a part of communication, because of convenience and network connection and stuff like that. Turns out, a program taking over the desktop can take over a part of the process that should have been unmodifiable without infiltrating the hardware wallet.

                                      So MitM is the desktop being able to spoof too much when used to facilitate interaction between you, hardware token and the global blockchain.

                                1. 10

                                  I agree in principle, but sadly users rarely have a choice. Electron developers are the ones developing the only software that does X, and people will just use that because it does X. Electron is eating the lunch of native apps because it covers more market and poisons the well faster than native application writers can write alternatives for.

                                  1. 3

                                    I don’t know why, but I found this story really heartwarming. I’m left wondering if I’ve forgotten to love my squashed bugs. I definitely remember some eureka moments with some of them.

                                    1. 4

                                      or just Stop Using Git To Deploy, period, full stop.

                                      1. 5

                                        More, stop using a VCS to deploy. Git or otherwise is inconsequential.

                                        1. 5

                                          or just Stop Using Git To Deploy, period, full stop.

                                          Agreed. I was appalled when I realized the author’s point was that s/pull/fetch + reset/g.
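                                          For reference, a self-contained sketch of that fetch-plus-reset pattern (with hypothetical repo and branch names), which makes the deploy checkout exactly match the remote instead of merging into it:

```shell
# Demo of fetch + reset (vs. `git pull`, which merges):
cd "$(mktemp -d)"
git init -q -b main upstream
git -C upstream -c user.email=x@x -c user.name=x commit -q --allow-empty -m v1
git clone -q upstream deploy
cd deploy
git fetch -q origin main
git reset --hard origin/main   # checkout now matches the remote exactly
```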

                                        1. 4

                                          It’s a SCM problem. David A. Wheeler has the definitive write-up covering the various angles of it:

                                          https://www.dwheeler.com/essays/scm-security.html

                                          I’m just throwing a few quick things out rather than being thorough. There are several logical components at work here. There are the developers’ contributions, which might each be malicious or modified. The system itself should keep track of them, sanitize/validate them where possible, store them in append-only storage, take snapshots to offline media, and provide automated tooling for building/analyzing/testing. It’s advantageous to use a highly-isolated machine for building and/or signing, with the input being text over a safe channel (e.g. serial).

                                          In parallel, you have the fast box(es) people are actually using for day-to-day development. The isolated machine w/ secure OS periodically pulls in the text to do the things I already described with whatever sandboxing or security tech is available. Signing might be done by coprocessors, dedicated machines, smartcards, or HSM’s. The output goes over a separate line to separate computers that do distribution to the public Internet with no connection to development machines. Onboard software and/or a monitoring solution might periodically check the sources or binary hashes each are creating to ensure they match with ability to automatically or with admin approval shut off distribution side.

                                          Simply having updates and such isn’t good enough if the boxes can be hacked from the Internet. Targeted attacks have a lot of room to maneuver on that. The development boxes ideally have no connection to the deployment servers or even company web site. One knowing the latter can’t help hackers discover the former. Those untrusted boxes just have a wire of some sort periodically requesting info they handle carefully or sending info they’ve authenticated. The dev boxes would be getting their own software using the local Internet or off the wall wifi’s if person is really paranoid. Also hardened.

                                          It was also common practice to have separate VM’s or especially hardware w/ KVM switches for Internet or personal activities. As in, the software development was completely isolated from sources of malice such as email or the Web. Common theme is evil bits can’t touch the source, build system, or signing key. So, separation, validation, POLA, and safe code everywhere possible.

                                          1. 2

                                            It’s a SCM problem.

                                            Unfortunately, it is not just a SCM problem. I wish the problem was that easy. Supply chain attacks can happen at many points along the software value chain. Wheeler himself brought reproducible builds to attention because of this reason (e.g., a backdooring compiler). Software updates and distribution media are also a common means of attack.

                                            All in all, I think it’s a very underdeveloped field in cyber security that has a really wide attack surface and with devastating consequences.

                                            Needless to say, David A. Wheeler brought many issues to the table years ago and we’re finally realizing that we need to do something about it :P

                                            1. 3

                                              The collection, protection, and distribution of software securely via repos is SCM security. It’s a subset of supply-chain security, which entails other things such as hardware. Securing that is orthogonal, with different methods. Here’s an analysis I did on it if you’re interested in that kind of thing:

                                              https://news.ycombinator.com/item?id=10468624

                                              David A. Wheeler learned this stuff from the same people I did who invented INFOSEC and high-assurance security. They immediately told us how to counter a lot of the issues with high-assurance methods for developing the systems, SCM for the software, and trusted couriers for hardware developed similarly. Wheeler has a nice page called High Assurance FLOSS that surveys tools and methods. He turned the SCM stuff into that great summary that I send out. I also learned a few new things from it such as the encumbrance attack. His goal was that FOSS developers learned high-assurance methods, applied at least medium assurance with safe languages, applied this to everything in the stack from OS’s to compilers to apps, and also developed and used secure SCM like OpenCM and Aegis tried to do. The combination, basically what Karger et al advised starting in MULTICS evaluation, would eliminate most 0-days plus deliver the software securely. Many problems solved.

                                              https://www.acsac.org/2002/papers/classic-multics.pdf

                                              https://www.usenix.org/system/files/login/articles/1255-perrine.pdf

                                              https://en.wikipedia.org/wiki/Trusted_Computer_System_Evaluation_Criteria

                                              They didn’t do that, though. Both the proprietary sector and FOSS invested massive effort into insecure endpoints, languages, middleware, configurations, and so on. The repo software that got popular was anything but secure. Being pragmatic, he pivoted to try to reduce the risk of issues such as Paul Karger’s compiler-compiler subversion and MITMing of binaries during distribution. His methods for this were Diverse-Double Compilation and reproducible builds. Nice tactics, with DDC being hard to evaluate esp given the compiler can still be malicious or buggy (esp optimizing security-critical code). The reproducible builds have their own issues where they eliminate site-specific optimizations or obfuscations since hashes won’t match. I debated that with him on Hacker News with us just disagreeing on the risk/reward tradeoff of those. What we did agree on was that what’s needed and/or ideal is a combination of high-assurance endpoints, transports, SCM, and compilers. His site already pushes that. We also agreed economic and social factors have kept FOSS from developing or applying them. Hence, methods like he pushes. The high-assurance, proprietary sector and academia have continuously developed pieces of or whole components like I’ve described with things occasionally FOSSed like CakeML, seL4, SAFEcode, and SPARK. So, it’s doable but they don’t do it.

                                              If you’re wondering, the old guard did have a bag of tricks for an interim solution. The repo is on highly-secure OS’s with mandatory access control. Two examples, the first such products actually, are in the link below. The users connect with terminals with each thing they submit being logged. The system does builds, tests, and so on. It can send things out to untrusted networks that can’t get things in per security policy. Possibly via storage media instead of networking. Guard software also allows humans in the loop to review and allow/deny a code submission or software release. Signing keys are isolated or on a security coprocessor. The computers with source are in an access-controlled, TEMPEST shielded room few can enter. Copies of source in either digital or paper mediums are kept in a locked safe. The system has the ability to restore to trusted state if compromise happens with security-critical actions logged. The people themselves are thoroughly investigated to reduce risk plus paid well. Any one of these helps reduce risk. Fully combining them would cover a lot of it.

                                              http://www.cse.psu.edu/~trj1/cse443-s12/docs/ch6.pdf

                                              In 2017, such methods combining isolation, paper, physical protection, and accountable submissions are still way more secure than how most security-critical software is developed today. If people desire, we also have highly-secure OS’s, tons of old hardware probably not subverted (esp if you pay cash for throwaways), good implementations of various cryptosystems, verified or just robust compilers for stuff from crypto DSL’s to C to ML, secure filesystems, secure schemes for external storage, cheap media for write-once backups or distribution, tons of embedded boards from countless suppliers for obfuscation, and so on. This is mostly not an open problem: it’s a problem whose key components have been solved to death with dead simple solutions for the basics like old guard did. Solving it the simple way is just really inconvenient for developers who value productivity and convenience over security. I mean, using GCC, Git, Linux, and 3rd-party services over a hostile Internet on hardware from sneaky companies is both so much easier and set up to fail in countless ways. Have failed in countless ways. If people really care, I tell them to use low-risk components instead with methods that worked in the past and might work again. It’s just not going to be as fun (FOSS) or cheap (proprietary).

                                              Quick Note… I do have a cheat based on an old pattern of UntrustedProducer/TrustedChecker where you develop everything on comfortable hardware, writing the stuff that works down on paper, then manually retype it in trusted hardware. If it still works, it probably wasn’t subverted. Tedious but effective. I’ve never seen a targeted, remote, software attack that beat that. Sets the bar much higher. Clive Robinson and I also determined infrared was among the safest if you wanted careful communication between electrically-isolated machines. Lots of suppliers, too. Hardware logic for anything trusted can be done in an ASIC on old nodes that are visually inspectable w/ shuttle runs for cost reduction. All the bounds checks and interface protection built-in. Lots of options to let one benefit from modern tooling while maintaining isolation of key components. Just still going to be inconvenient, cost more, or both.

                                          1. 2

                                            Nice post!

                                              I’m glad that supply chain attacks are finally being both detected and acknowledged as an issue. Here at NYU, we have been working on a framework called in-toto to address this for over a year now. Although I agree with the just use [buzzword] point, I think in-toto is a good way forward to start discussing and addressing the issue.

                                              There are some videos of our talks at debconf and dockercon and others on the website.

                                            1. 4

                                                Lines and lines of rant without a clear goal. I can tell from the context the guy doesn’t like HTTP (or is it Javascript? both?). What part of the “web” exactly did he want to kill, and how?

                                              1. 3

                                                I thought the author addresses this near the beginning:

                                                This is the first of two articles. In part one I’m going to review the deep, unfixable problems the web platform has[…] In part 2 I’ll propose a new app platform that is buildable by a small group in a reasonable amount of time

                                              1. 2

                                                This is pretty nice, but I think it has a couple of flaws. My only knee-jerk reaction was his claim that “hacking is not an academic discipline per se.” It is an academic discipline nowadays, like any other CS field.

                                                1. 2

                                                  There is a project in NYU’s Secure Systems Lab that tries to identify the programming constructs that lead to these ambiguities/misunderstandings. I may be biased, but I think it’s a really interesting project

                                                  1. 18

                                                    The first time I released Monocypher, I was wildly over-confident:

                                                    Monocypher is probably already bug-free.

                                                    Something tells me this might be the second round of “wildly over-confident”

                                                    1. 3

                                                      It’s not obvious from the way it’s styled, but that quote is Loup quoting themselves from that first time around, not a present claim. The text of that quote in the article links to its original context.

                                                      1. 2

                                                        Sure but the line below implies he still feels that way.

                                                        my crypto library, is done and ready for production

                                                        He speaks about how auditing is important but nothing about how it has been done with his software. I’m sorry but if your crypto has not been audited it is not ready for production.

                                                        1. 1

                                                          Oh, I’m on board with your point! …

                                                          we now have a crypto library that could displace Libsodium itself

                                                          And, re-parsing your comment now, I think I’m reading it the way you meant, which is not as unfair as the way I first understood it. I think the quote-in-quote threw me off. Sorry!

                                                      2. [Comment removed by author]

                                                        1. 17

                                                          or:

                                                          • Don’t claim to be bug free
                                                          • Has been audited more thoroughly
                                                          1. 4

                                                            It’s 1,300 lines of portable C; auditing it is far easier than libsodium, openssl, etc.

                                                            1. 2

                                                              That’s cool but until it happens it’s pretty irresponsible to say that it’s production ready.

                                                      1. 31

                                                        This post has everything

                                                        1. Opinionated UX decisions
                                                        2. Publicly trashing a main project maintainer for something that happened 10+ years ago
                                                        3. PS we’re hiring
                                                        4. yet another git wrapper that pretends to be easier to use based on 1.
                                                        1. 19

                                                          I don’t think he’s trashing him. He says, “I would have done the same thing”. He’s just trying to figure out what happened. More git annotate than git blame (hey, by the way, why does git not have that alias? svn also has svn praise as another alias in this family.)
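
                                                          For what it’s worth, git’s alias mechanism makes it easy to fill that gap yourself. A hypothetical `~/.gitconfig` snippet (the name `praise` is just an illustration borrowed from svn) might look like:

                                                          ```ini
                                                          [alias]
                                                              # Hypothetical svn-style synonym: "git praise <file>" runs "git blame <file>"
                                                              praise = blame
                                                          ```

                                                          With that in place, `git praise file.c` behaves exactly like `git blame file.c`.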

                                                          Furthermore, the proliferation of git wrappers says something. Mercurial has a lot of users too (Facebook), and guess what, they don’t write wrappers for it. They do write aliases and extensions, using hg’s established customisation mechanisms, but they don’t feel like the entire UI is so terrible that it has to be completely replaced by a different UI.

                                                          There’s a reason for this – we spend a lot of time thinking in hg about how to make things consistent with itself (in our defense, a lot of the modifications that Facebook does are to make hg more consistent with git). Every time a new feature comes in, a lot of time is spent naming that feature, seeing what options it should take, and seeing what other similar or related features already exist and what options they use. It’s not a perfect process, and there are some small historical mistakes, but at least we have a process.

                                                          1. 1

                                                            And those of us who use Hg thank you greatly for that process.

                                                          2. 3

                                                            Has anyone made a Git equivalent of https://craphound.com/spamsolutions.txt ?

                                                            1. -5

                                                              I stopped taking the author seriously after they mentioned git’s “user experience”. Git is a tool. It is not there to be pretty or give you a good experience - it’s there to get the job done.

                                                              1. 20

                                                                Why does being “a tool” give it carte blanche to have bad UX? In fields outside of software tool ergonomics is a serious topic.

                                                                1. 9

                                                                  In the tools I maintain at least, user experience is pretty far up there with one of the most important things to optimize for. (Among other things, like ease of maintenance.)

                                                                  1. 6

                                                                    tools are where i most want a good user experience! that extends to the physical realm too; the experience of using a tool that is well-made, sturdy and fits well into your hand is an order of magnitude better than using a shoddy one, even if the latter gets the job done too.

                                                                    1. 3

                                                                      This effect is greatly magnified if you use the tool for a long time.

                                                                      Using a weirdly shaped hammer for 5 minutes is annoying. Using it for 8 hours is unbearable.

                                                                      Same with digital tools.

                                                                    2. 5

                                                                      This is a pretty lame response. Certainly things that get jobs done can have a decent UX. Or at least not a ridiculously confusing one.

                                                                      1. 2

                                                                        Bad UX gets in the way of using the tool effectively, it is directly related to getting the job done.

                                                                        With that said, git gets a lot of bad-rap for having a learning curve, but having a learning curve is not bad UX. Git is the damn good DVCS.