1. 3

    Congratulations on getting set up to fund your project. Hope that works out!

    I have a quick question, though. OpenBSD and HardenedBSD both ship mitigations against code-based attacks. The other set of problems I had to harden Windows/Linux against were configurations allowing too much access, information leaks, dangerous services left enabled, and so on. I know OpenBSD addresses those to be secure by default. Is FreeBSD secure by default out of the box, or do you recommend a hardening guide (or guides)?

    If it is, then there are two extra BSDs competing for most secure: CheriBSD, with capability-secure instructions and custom hardware, and Criswell et al’s SVA-OS, which combines a safe interface for low-level operations with compile-time safety for C programs. The most accessible and supported would then be HardenedBSD and OpenBSD. Further, it’s probably wise to move low-cost mitigations into the other two as an extra layer.

    1. 6

      Thanks! So, my perspective is a bit different. I don’t view HardenedBSD as a competitor with the other BSDs. I’m more of a collaborative person and I believe that the best innovations come with collaboration.

      I’m also not a fan of the phrase “secure by default”. Secure against what? OpenBSD does a lot to harden an out-of-the-box fresh installation. HardenedBSD focuses mostly on exploit mitigations, but does include some level of service hardening. FreeBSD needs a lot of work.

      There’s some interesting research coming out of the CHERI project. The rather large hurdle they’ll have to jump over, though, is getting their hardware-based work to land in consumer hardware. Not only will their hardware improvements need to land in silicon, they will also need to be used by the general public. So that means they’re at least three to four decades out from seeing any hardware-based security mechanisms effectively used.

      Of course, all these projects could still do more to harden a system.

      1. 3

        Appreciate the heads up on FreeBSD needing hardening. I expected as much since it’s a full-featured OS. I like your collaborative attitude, too. I used to be about slamming the competition. I switched a few years ago to live-and-let-live, with folks helping each other out. We’ll all be better off that way, since strong security will never have massive talent or financial support. On top of that, we’ll have more fun together. :)

        As far as CHERI goes, you actually need an FPGA or an ASIC, depending on the use case. There are lots of embedded systems whose processors aren’t fast. An FPGA could reach that performance with CHERI’s level of security, at a high unit cost and wattage. A Terasic board already runs it, too. There are also security-focused and legacy apps that don’t have to be fast and might benefit.

        If they do an ASIC, a hardware team has to convert it to ASIC-ready form, buy some IP for things like I/O, stitch it all together, and do verification. I think that can happen in less than a year if it’s funded. Draper/Dover already converted SAFE in a few years to support three ISAs, too, and they had little to nothing to work with versus CHERI’s open hardware.

        1. 3

          Well, what I meant to convey was that CHERI’s hardware-based work still needs to be used by the average consumer for it to be of any real use. That means Intel needs to adopt their work. Which then means it’ll be another decade before it’s available on the shelves. And then two more decades before it’s broadly used. So, we’re talking a minimum of three decades before CHERI’s work is useful outside of academia.

          Or, perhaps, I’m being way too pessimistic. I like a lot of CHERI’s hardware-based work, so I really hope I’m wrong.

          1. 2

            Well, what I meant to convey was that CHERI’s hardware-based work still needs to be used by the average consumer for it to be of any real use.

            You are being too pessimistic, but you’re still right to a large degree. Obviously, most people will want the full-custom performance their x86 boxes are getting them. The ARM processors targeted at embedded, the market CHERI could enter, weren’t good enough for that. Smartphone companies, and later tablet companies, still convinced consumers to use chips like that for much of their day-to-day experience, adding hardware accelerators for things like MPEG-2 that the CPU couldn’t handle. Now they’re selling so well they’re eating into the desktop and laptop markets. They’re also standard cell (cheaper/easier) rather than full custom (ultra-expensive). So, those products might use an ARM or RISC-V SoC with CHERI extensions, the addition of which would be dirt-cheap, fast work compared to the rest of the SoC.

            For Intel/AMD/IBM doing full custom, I don’t know where you’re getting a decade. They can crank out whole CPUs with billions of transistors, done full custom, in a year or two. These extensions should be smaller than the processors college students build in their coursework with standard tooling. If companies wanted, they could throw out a CHERI-RISC processor probably in months; Cavium especially, since they’re MIPS experts and CHERI is MIPS-based. On the high end, it’s easier to add to POWER than to x86, since POWER is already RISC. IBM had INFOSEC pioneer Paul Karger build an MLS-enforcing processor that one report said was in small-scale commercial use. More realistically, I told interested companies to get Intel or AMD to do a modification, trimming the fat plus adding some security, through their semi-custom business. Nobody from Intel/AMD/IBM down to their purchasers is interested in doing this, probably because people like us can’t sell the executives on the idea directly. It’s tech people talking to business people, or no need is seen. That is why it isn’t happening. If they wanted it, it could be done in a few months to a few years, with FreeBSD support already there or a doable amount of work for them.

            If not desktops, there’s also the embedded market, which sells everything from sub-100MHz 8-bitters to fixed-purpose setups with high-end chips. I wanted SiFive’s RISC-V team to consider targeting secure, fast, cheap networking before anything else: switches, routers, VPNs, firewalls, caches, SSL/TLS offloading, and so on. That would get the networking IP in better shape for the attacks that will be coming later. Inspectable, too, except for the NDA’d parts. If it’s general purpose, it could become a more expensive and more open version of the Raspberry Pi, too, with the volume sales of the commercial products keeping it cheap. Throw it in some servers they use in ARM-based clouds, too, differentiating on security and privacy.

            For funding, it’s worth noting that hardware startups happen regularly, funded by VCs and taxpayers (especially university spinouts). My best idea was getting paid pros to work with good academics, using cheap tooling, to just build an open, secure CPU published in a form industry could use. Although not security-focused, the Leon3 GPL core was done that way. Then the pros would use sponsored or crowdfunded money to stitch together the third-party IP necessary to make it useful and to buy the masks. Then they’d begin selling it, with sales maybe making it self-sustaining. Alternatively, it could be a loss-leader for one or more sellers of hardware appliances or software; an example of that was Zylin doing the ZPU.

    1. 7

      I would have rather seen the HardenedBSD code just get merged back into FreeBSD. I’m sure there are loads of reasons that didn’t happen, but I’ve never managed to find them; their website doesn’t make it clear. I imagine they’re mostly non-technical.

      That said, it’s great that HardenedBSD is now set up to live longer, and I hope it has a great future. It sits in a niche that only OpenBSD really occupies, and it’s great to see some competition/diversity in this space!

      1. 13

        Originally, that’s what HardenedBSD was meant for: simply a place for Oliver and me to collaborate on our clean-room reimplementation of grsecurity for FreeBSD. All features were to be upstreamed. However, our attempt to upstream ASLR took two years, failed, and resulted in a lot of burnout with the upstreaming process.

        HardenedBSD still attempts to upstream a few things here and there, but usually smaller items: we contributed a lot to the new bectl jail command, and we’ve hardened a couple of aspects of bhyve, even giving it the ability to work in a jailed environment.

        The picture looks a bit different today. HardenedBSD now aims to give the FreeBSD community more choices. Given grsecurity’s/PaX’s inspiring history of pissing off exploit authors, HardenedBSD will continue to align itself with grsecurity where possible. We hope to perform a clean-room reimplementation of all publicly documented grsecurity features. And that’s only the start. :)

        edit[0]: grammar

        1. 6

          I’m sorry if this is a bad place to ask, but would you mind giving the pitch for using HardenedBSD over OpenBSD?

          1. 19

            I view any OS as simply a tool. HardenedBSD’s goal isn’t to “win users over.” Rather, it’s to perform a clean-room reimplementation of grsecurity. By using HardenedBSD, you get all the amazing features of FreeBSD (ZFS, DTrace, Jails, bhyve, Capsicum, etc.) with state-of-the-art and robust exploit mitigations. We’re the only operating system that applies non-Cross-DSO CFI across the entire base operating system. We’re actively working on Cross-DSO CFI support.

            I think OpenBSD is doing interesting things with regard to security research, but OpenBSD has fundamental paradigms that may not be compatible with grsecurity’s. For example: by default, neither HardenedBSD nor OpenBSD allows creating an RWX memory mapping with mmap(2). However, HardenedBSD takes this one step further: if a mapping has ever been writable, it can never be marked executable (and vice versa).

            On HardenedBSD:

            void *mapping = mmap(NULL, getpagesize(),
                PROT_READ | PROT_WRITE | PROT_EXEC, ...); /* The mapping is created, but RW, not RWX. */
            mprotect(mapping, getpagesize(), PROT_READ | PROT_EXEC); /* <- this will explicitly fail */
            
            munmap(mapping, getpagesize());
            
            mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_EXEC, ...); /* <- Totally cool */
            mprotect(mapping, getpagesize(), PROT_READ | PROT_WRITE); /* <- this will explicitly fail */
            

            It’s the protection around mprotect(2) that OpenBSD lacks. Theo’s disinclined to implement such a protection, because users will need to toggle a flag on a per-binary basis for those applications that violate the above example (web browsers like Firefox and Chromium being the most notable examples). OpenBSD implemented WX_NEEDED relatively recently, so my thought is that users could use the WX_NEEDED toggle to disable the extra mprotect restriction. But, not many OpenBSD folk like that idea. For more information on exactly how our implementation works, please look at the section in the HardenedBSD Handbook on our PaX NOEXEC implementation.

            I cannot stress strongly enough that the above example wasn’t given to be argumentative. Rather, I wanted to give an example of diverging core beliefs. I have a lot of respect for the OpenBSD community.

            Even though I’m the co-founder of HardenedBSD, I’m not going to say “everyone should use HardenedBSD exclusively!” Instead, use the right tool for the job. HardenedBSD fits 99% of the work I do. I have Win10 and Linux VMs for those few things not possible in HardenedBSD (or any of the BSDs).

            1. 3

              So how will JITs work on HardenedBSD? Is the sequence:

              mmap(PROT_WRITE);
              // write data
              mprotect(PROT_EXEC);
              

              allowed?

              1. 5

                By default, migrating a memory mapping from writable to executable is disallowed (and vice-versa).

                HardenedBSD provides a utility that users can use to tell the OS “I’d like to disable exploit mitigation just for this particular application.” Take a look at the section I linked to in the comment above.
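
                To make that concrete, here’s a small stand-alone probe (my own sketch, not HardenedBSD code) of exactly that JIT sequence; it reports whether the host blocks the write-then-execute transition:

                ```c
                /* Hypothetical probe of the JIT sequence: map RW, write "code",
                   then try to flip the page to executable. Under HardenedBSD's
                   PaX NOEXEC the mprotect(2) fails; stock Linux/FreeBSD allow it. */
                #include <stdio.h>
                #include <string.h>
                #include <sys/mman.h>
                #include <unistd.h>

                int main(void) {
                    size_t len = (size_t)getpagesize();
                    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                   MAP_ANON | MAP_PRIVATE, -1, 0); /* the JIT's RW buffer */
                    if (p == MAP_FAILED) { perror("mmap"); return 1; }
                    memset(p, 0xc3, len); /* pretend we emitted code (x86 RET) */
                    if (mprotect(p, len, PROT_READ | PROT_EXEC) == 0)
                        puts("W->X transition allowed");
                    else
                        puts("W->X transition blocked");
                    munmap(p, len);
                    return 0;
                }
                ```

                On a default HardenedBSD install this should print the “blocked” line unless the binary has been exempted with the toggle utility.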

            2. 9

              Just to expound on the differing philosophies: OpenBSD would never bring ZFS, Bluetooth, etc. into the OS, something HardenedBSD does.

              OpenBSD has a focus on minimalism, which is great from a maintainability and security perspective. Sometimes that means you miss out on things that could make your life easier. That said, OpenBSD still has a lot going for it. I run both, depending on need.

              If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface. That’s not to say ZFS isn’t awesome, it totally is, but if you don’t need ZFS for a particular compute job, leaving it out gives you a much smaller surface for bad people to attack.

              1. 5

                If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface.

                I would find a fork of HardenedBSD without ZFS (and perhaps DTrace) very interesting. :)

                1. 3

                  Why fork? Just don’t load the kernel modules…

                  1. 4

                    There have been quite a number of changes to the kernel to accommodate ZFS. It’d be interesting to see whether the kernel could be made simpler with ZFS fully removed.

                    1. 1

                      You may want to take a look at dragonflybsd then.

                2. 4

                  Besides being large, I think what makes me slightly wary of ZFS is that it also has a large interface with the rest of the system, and was originally developed in tandem with Solaris/Illumos design and data structures. So any OS that diverges from Solaris in big or small ways requires some porting or abstraction layer, which can result in bugs even when the original code was correct. Here’s a good writeup of such an issue from ZFS-On-Linux.

          1. 3

            When apk is pulling packages, it extracts them into / before checking that the hash matches what the signed manifest says it should be.

            This surprised me. There’s no way to check the hashes before extracting the files?

            1. 3

              Ideally, the outermost file (the archive itself) should have a detached signature of some sort. The algorithm should be as follows:

              1. Ensure the trust store is sane and still trustworthy
              2. Ensure there is a valid signature for the to-be-downloaded package
              3. Download the package
              4. Ensure that the signature is valid for the package
              5. Ensure that each file has an associated hash in the pkg metadata
              6. Extract files
              7. Ensure that the file hashes match
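
              A rough sketch of that ordering in C, using hypothetical stubs in place of the real download, signature, and hash code (the names are made up; the point is only that nothing is extracted before verification):

              ```c
              /* Sketch of verify-before-extract ordering. Every helper here is a
                 hypothetical stub standing in for real crypto/archive code. */
              #include <stdio.h>

              static int signature_valid(const char *pkg) { printf("verify signature of %s\n", pkg); return 1; }
              static int extract_to_staging(const char *pkg) { printf("extract %s to staging\n", pkg); return 1; }
              static int file_hashes_match(const char *pkg) { printf("check per-file hashes of %s\n", pkg); return 1; }

              int main(void) {
                  const char *pkg = "example.pkg";        /* hypothetical package name */
                  if (!signature_valid(pkg)) return 1;    /* steps 2 and 4: verify first */
                  if (!extract_to_staging(pkg)) return 1; /* step 6: a staging dir, not / */
                  if (!file_hashes_match(pkg)) return 1;  /* steps 5 and 7: per-file hashes */
                  printf("install %s\n", pkg);            /* only now touch the live system */
                  return 0;
              }
              ```
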
              1. 3

                And do as much work as possible with as few privileges as possible. Using MAC and/or chroot is a bonus.
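
                For instance (a sketch; the chroot path and the uid/gid are illustrative placeholders), the step that parses untrusted archive data could confine and deprivilege itself first:

                ```c
                /* Sketch: confine the untrusted-parsing step with chroot(2) and
                   drop to an unprivileged uid/gid. Path and ids are placeholders. */
                #include <stdio.h>
                #include <unistd.h>

                int main(void) {
                    if (geteuid() != 0) {
                        puts("not root, skipping chroot demo"); /* chroot(2) needs root */
                        return 0;
                    }
                    if (chroot("/var/empty") != 0 || chdir("/") != 0) {
                        perror("chroot"); return 1;
                    }
                    if (setgid(65534) != 0 || setuid(65534) != 0) { /* e.g. nobody */
                        perror("drop privileges"); return 1;
                    }
                    /* ...now parse/extract untrusted data with minimal privileges... */
                    puts("parsing untrusted data unprivileged in chroot");
                    return 0;
                }
                ```
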

            1. 1

              A golang wat-ism I just found:

              golang maps the heap at a fixed address (0xc000000000 on FreeBSD/HardenedBSD). This makes the -buildmode=pie option useless, since the heap is at a known, fixed location.

              1. 2

                Not saying your wat is invalid, but this is their response:

                “(Russ Cox) Address space randomization is an OS-level workaround for a language-level problem, namely that simple C programs tend to be full of exploitable buffer overflows. Go fixes this at the language level, with bounds-checked arrays and slices and no dangling pointers, which makes the OS-level workaround much less important. In return, we receive the incredible debuggability of deterministic address space layout. I would not give that up lightly.”

                1. 2

                  Yeah, I read the 2012 thread in which Russ said that. However, -buildmode=pie didn’t exist then. The existence of PIE support implies that the project as a whole has changed its mind: that a non-deterministic address space as modified by ASLR is worth it. I’ve submitted a new email to the golang-nuts mailing list to re-address the issue.

                  I’d say that golang is a “safer” language, just like Rust. However, I don’t think we’re at the point where we can say any language is 100% safe. Sure, golang and Rust improve the situation quite drastically, but they’re both written by humans and are still subject to the mistakes humans make.

                  So, it seems weird to me that golang would take the time to implement PIE support, but neglect to address the fixed heap. Hopefully my email on the mailing list will get some informative replies.

              1. 10

                Given these two bits of news, I plan on accelerating my plan to deploy censorship- and surveillance-resistant wireless mesh networks in and around Baltimore. The networks will be available for either free or low cost to underprivileged or unfairly treated populaces.

                I’m hoping to model the network after NYC Mesh with a few distinct changes. Each supernode will route all traffic through Tor. To help with the load placed on the Tor network, each supernode will need to run a public relay (non-exit is fine for now). Inter-node communication will be encrypted via ipsec.

                I’m hoping to get the plan in order by summer or fall of 2019. There’s lots to do and research.

                1. 2

                  I’m not sure I agree with Theo on this one. I don’t think it makes sense to always disable SMT. Single-tenant configurations that don’t locally execute remotely-fetched resources (for example, a browser) would be fine to keep SMT enabled. For example, a physical server (so, no multi-tenant virtualization) that only acts as a VPN server should be fine with SMT enabled.

                  1. 5

                    In a perfect world, yes.

                    But since these vulnerabilities allow memory reads from other threads (including kernel threads) running on the other hyperthread of a core, it means that this escalates a code execution vulnerability (even in a limited, sandboxed environment) to kernel (or just other userspace process) memory reads, which could be a springboard in a more serious exploit chain.

                    SMT can still be safely used in some scenarios, like multiple threads of the same process if no isolation exists between those threads anyway, or when executing multithreaded code in the same sandbox, perhaps.

                    1. 2

                      It is demonstrably true that there are workloads that benefit from Hyperthreading. I agree that we also see a subset of these use cases where the performance trade-off from disabling this feature is being contrasted with a security issue that is not directly exploitable.

                      I think the OpenBSD team, and others, have made a compelling case for not only preventing directly exploitable security issues but also that proactively fixing security issues can prevent exploits that require a chain or series of exploits, information leaks, or crashes.

                      While you can construct scenarios where this single exploit doesn’t apply, being vulnerable to it means that it can be composed or combined with other vulnerabilities where it may turn out to be necessary even when it’s not sufficient to successfully attack.

                      1. 2

                        I think the OpenBSD team, and others, have made a compelling case for not only preventing directly exploitable security issues but also that proactively fixing security issues can prevent exploits that require a chain or series of exploits, information leaks, or crashes.

                        Of course. Both HardenedBSD and OpenBSD are doing wonderful work in this regard. I didn’t mean to convey that OpenBSD’s work was without merit or meaningless.

                        Instead, what I meant to convey is: with proper risk analysis and management, users can and should be able to decide for themselves whether to disable SMT.

                        While you can construct scenarios where this single exploit doesn’t apply, being vulnerable to it means that it can be composed or combined with other vulnerabilities where it may turn out to be necessary even when it’s not sufficient to successfully attack.

                        Sure. But at that point, local code execution has been gained; it’s already game over.

                      2. 2

                        Basically, in light of these vulnerabilities, SMT is a risk whenever the system might be running untrusted code. The reality of today’s computing environment is that you’re almost always running unprivileged but untrusted code to some degree or another. Web browsers and multi-tenant VMs are the most obvious examples. The systems in the world which run only “trusted” code are few and far between. Some examples that I can think of are:

                        • HPC/research clusters with specialized applications in which nearly all code is written in-house
                        • Certain Enterprise appliances like storage servers that don’t allow running non-vendor-provided code
                        • Small general-purpose systems with all of their hardware dedicated to one specific task, like my backup server at home

                        And even these aren’t necessarily 100% safe because there might be a remote exploit of some kind that then allows an attacker to run some unprivileged code which can then abuse SMT for privilege elevation and then you have a rooted appliance. In which case, the only truly secure box with SMT enabled is an air-gapped one.

                        The only good news here is that these kinds of exploits seem to be quite difficult to actually pull off, but as The Bearded One says, attackers only get better over time and attacks only get easier.

                        1. 1

                          Through this discussion, my thoughts on the matter have changed somewhat. I still think that SMT should be supported, but disabled by default. After proper risk analysis and management are performed, users should decide whether to opt in to SMT.

                          there might be a remote exploit of some kind that then allows an attacker to run some unprivileged code

                          I view it as: if the attacker has gained reliable remote code execution, it’s already game over. SMT doesn’t matter anymore.

                      1. 1

                        This is nice to see. It is a bit sad that bectl may not launch with support for separate boot pools (affecting users with a boot pool and an encrypted root pool), but I understand Allan’s point about consistency between the snapshots. It seems that native encryption in OpenZFS is coming along nicely for FreeBSD-current, which should help the situation; it just doesn’t seem to play nice if you also use dedup in ZFS.

                        1. 2

                          New installations of FreeBSD 12.0-RELEASE (when it comes out) should not need a separate boot pool. But, yeah, systems upgrading to 12 from 11 may have a separate boot pool. I’d argue that with how awesome 12 is shaping up to be, a fresh reinstallation would be a good idea.

                          The OpenZFS native encryption implementation may suffer from the same types of watermarking vulnerabilities that plague the Oracle ZFS implementation. It would be best to stick with geli until the native crypto receives more cryptanalysis.

                          1. 2

                            For FreeBSD 12.0-RELEASE: definitely agree that once it’s out, a reinstall may be worth considering; it comes down to how well the encrypted pool support works (I have not tried FreeBSD 12-ALPHA2 yet, so I can’t say).

                            As for the OpenZFS native encryption, my take on the discussion in the linked mailing list thread is that the watermarking attack stems from how deduplication works in ZFS, so if you don’t need the dedup feature, it shouldn’t be as susceptible to watermarking vulnerabilities. That definitely needs heavy testing to be validated. Otherwise, yes: if you need or want dedup enabled on your encrypted pool, or don’t want to risk that it may still be vulnerable, then sticking with geli for now would be wise. I was just saying that progress seems to be coming along nicely on that front.

                            1. 3

                              Yup. There are also potential attacks when compression is enabled. So, if you enable native encryption, you’ll want to disable both dedup and compression. These kinds of situations are why I prefer to be a bit more cautious in adoption. I think we need multiple cryptographers analysing this from multiple angles.

                        1. 9

                          From the article:

                          Another issue is whether the customer should install the fix at all. Many computer users don’t allow outside or unprivileged users to run on their CPUs the way a cloud or hosting company does.

                          I guess the key there is “the way a cloud or hosting company does.” Users typically run browsers, which locally run remotely-fetched arbitrary code as a feature. I would argue that because of browsers, users should especially install the fixes.

                          The only time when a fix may not be applicable is on single-tenant configurations and when remotely-fetched arbitrary code isn’t run locally.

                          1. 1

                            Users typically run browsers, which locally run remotely-fetched arbitrary code as a feature.

                            I was going to point this out too, but you got there first.

                            However, this opens an entirely different vulnerability set, a Pandora’s box that no one dares to face.

                            1. 2

                              Great read, thanks.

                          1. 1

                            Reading the mailing list thread on this CFT, it looks like the implementation might suffer from the same watermarking vulnerabilities as the Oracle implementation. More research is needed.

                            1. 27

                              Sometimes I like to think that I know how computers work, and then I read something written by someone who actually does and I’m humbled most completely.

                              1. 11

                                A lot of this complexity seems down to the way Windows works, though. As a Linux user, the amount of somewhat confusing/crufty stuff going on in a typical Windows install boggles the mind; it’s almost as bad as Emacs.

                                1. 11

                                  I guess to me it doesn’t feel like there’s much Windows specific complexity here, just a generally complex issue; a bug in v8’s sandboxed runtime and how it interacts with low-level OS-provided virtual memory protection and specific lock contention behavior, which only expressed itself by happenstance for the OP.

                                  Some of this stuff just feels like irreducible complexity, though my lack of familiarity with Windowsisms (function naming style, non-fair locks, etc.) probably doesn’t help there.

                                  1. 5

                                    How does CFG work with chrome on linux?

                                    1. 2

                                      Do you mean CFI?

                                      CFG is MS’s Control Flow Guard: a combination of compile-time instrumentation from MSVC and runtime integration with the OS. CFI on Linux (via clang/LLVM), in contrast, is entirely compile-time AFAIK, with basically no runtime support.

                                      See the two projects’ documentation for more details on the differences.
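
                                      As a tiny illustration (my example, not from the thread): clang’s CFI icall scheme checks that every indirect call targets a function whose type matches the call site, using compile/link-time instrumentation rather than a runtime service:

                                      ```c
                                      /* What clang CFI's icall check enforces: an indirect call may only
                                         land on a function whose type matches the call site. Build with
                                         `clang -flto -fsanitize=cfi-icall` to enable the check. */
                                      #include <stdio.h>

                                      static int add_one(int x) { return x + 1; }
                                      static void greet(void)   { puts("hi"); }

                                      int main(void) {
                                          int (*fp)(int) = add_one; /* type-correct target: permitted */
                                          printf("%d\n", fp(41));   /* prints 42 */
                                          /* Re-pointing fp at greet (wrong type) and calling it would be
                                             trapped under CFI instead of hijacking control flow. */
                                          (void)greet;
                                          return 0;
                                      }
                                      ```
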

                                      1. 2

                                        Yes and no. :) The Linux CFI implementation doesn’t include the JIT-protection feature of CFG that’s implicated in the bug, so I’m not sure it’s fair to characterize this as “cruft”.

                                        1. 2

                                          The CFI implementation in llvm isn’t a “linux CFI implementation.” :)

                                          As OpenBSD moves towards llvm on all architectures, it can take advantage of CFI, just as HardenedBSD already does. :)

                                        2. 1

                                          llvm’s implementation of CFI does have the beginnings of a runtime support library (libclang_rt.cfi). HardenedBSD is working on integrating Cross-DSO CFI from llvm, which is what uses the support library.

                                      2. 4

                                        Linux just has its own weirdnesses in other places.

                                        That said, memory management seems to be a source of strange behaviour regardless of OS.

                                        1. 4

                                          I absolutely love that the BSDs are switching to llvm. This makes me giddy like a school child.

                                          By switching to a full llvm toolchain, the BSDs will be able to do some really nifty things that simply cannot be done in Linux. HardenedBSD, for example, is working on integrating Cross-DSO CFI in base (and, later, ports). NetBSD is looking at deeper sanitizer integration in base. From an outsider’s perspective, it seems OpenBSD is playing catch up right now, but they’ve got the talent and the manpower to do so within a reasonable period of time.

                                          It’s my dream that all the BSDs switch fully to llvm as the compiler toolchain, including llvm-ar, llvm-nm, llvm-objdump, llvm-as, etc. Doing so will allow the BSDs to add some really nifty security enhancements. Want an enterprise OS that secures the entire ecosystem? Choose BSD.

                                          Linux simply cannot compete here. A userland that innovates in lockstep with the kernel is absolutely required to do these kinds of things. Go BSD!

                                          1. 3

                                            You’re overstating it. Most of the mitigation development of the past decade or two was for Linux. Most of the high-security solutions integrated with Linux, often virtualizing it; the most secure systems you can get right now are separation kernels running Linux alongside critical apps. Two examples. Some of the mitigation work is also done for FreeBSD. Of that, some is done openly for wide benefit, and some uses the BSD license specifically to lock down, patent, or sue when commercialized. Quick pause to say thanks for your own work on the open side. :)

                                            So, what’s best for people depends on a lot of factors: what apps they want, what they’re trying to mitigate, their stance on licensing, whether they have money for proprietary solutions or custom work, time for custom work or debugging if FOSS, and so on. One is not superior to the other. That could change if any company builds a mobile/desktop/server-class processor with memory safety or CFI built in, checking every sensitive operation. Stuff like that exists in CompSci for both OSes. Hardware-level security could be an instant win. Past that, all I can say is it depends.

                                            On the embedded side, Microsemi says CodeSEAL works with Linux and Windows. CoreGuard, based on the SAFE architecture, currently runs FreeRTOS. The next solution needs to be at least as strong at addressing root causes.

                                            1. 4

                                              Thanks for making me think a bit deeper on this subject. And thanks for the kind words on my own work. :)

                                              With a historical perspective, I agree with you. grsecurity has done a lot with regards to Linux security (and security in general). I think the entire computing industry owes a lot to grsecurity, especially those of us in infosec.

                                              With the BSDs (except FreeBSD) having the core exploit mitigations in place (ASLR, W^X), it’s time to move on to other, more advanced mitigations. There’s only so much the kernel can do to harden userland and keep performance in check. Thus, these more advanced exploit mitigations must be implemented in the compiler. The BSDs are positioning themselves to be able to adopt and tightly integrate compiler-based exploit mitigations like CFI. Due to Linux’s fragmentation, it’s simply not possible for Linux to position itself in the same way. HardenedBSD has already surpassed Linux as far as userland exploit mitigations are concerned. This is due in part to using llvm as the compiler toolchain.

                                              Microsoft is making huge strides as well. However, the PE file format, which allows individual PE objects to opt-in or opt-out of the various exploit mitigations, is a glaring weakness commonly abused by attackers. All it takes is for one DLL to not be compiled with /DYNAMICBASE, and boom goes the dynamite. Recently, VLC on Windows was found not to have ASLR enabled, even though it was compiled with /DYNAMICBASE and OS-enforced ASLR enabled, due to the .reloc section being stripped. Certain design decisions made decades ago by Microsoft are still biting them in the butt.

                                              I completely agree with you about hardware-based exploit mitigations. The CHERI project from the University of Cambridge in England is doing just that: hardware-enforced capabilities and bounds enforcement. However, it’ll probably take another 20+ years for their work to be available in silicon and an additional 20+ years for their work to be used broadly (and thus, actually usable/used). In the meantime, we need these software-based exploit mitigations.

                                            2. 3

                                              I absolutely love that the BSDs are switching to llvm.

                                              What does this news story have to do with LLVM?

                                              1. 1

                                                UBSan (and NetBSD’s new micro-UBSan) is a sanitizer found in llvm.

                                                1. 5

                                                  And gcc.

                                                  1. 3

                                                    Yes. So the tirade about LLVM could have been about GCC and it would make as much sense here.

                                                    1. 1

                                                      I guess I view it differently, due to newer versions of gcc being GPLv3, which limits who can adopt it. With llvm being permissively licensed, it can be adopted by a much wider audience. The GPL is driving FreeBSD to replace all GPL code in the base operating system with more permissively-licensed options.

                                                      For the base OS, gcc is dead to me.

                                                    2. 2

                                                      (Speaking from the perspective of a FreeBSD/HardenedBSD user): gcc has no real future in the BSDs. Because of licensing concerns (GPLv3), the BSDs are moving towards a permissively-licensed compiler toolchain. Newer versions of gcc do contain sanitizer frameworks, but they’re not usable in the BSD base operating system.

                                                      1. 2

                                                        NetBSD base uses GPLv3 GCC and builds it with sanitizer libraries, etc.

                                                        1. 1

                                                          Good to know! Thanks! Perhaps my perception is a bit skewed towards FreeBSD lines of thinking.

                                                          I know NetBSD is working on incorporating llvm. I wonder why, if they use newer versions of gcc.

                                              1. 3

                                                When you use llvm as your compiler toolchain, you can do really cool things like this. :)

                                                I’ve done something similar in a feature branch of HardenedBSD. I haven’t spent much time on it, though. ASAN found a potential heap buffer overflow in llvm-ar, which we use as the default ar/ranlib.

                                                Gotta be careful, though, to never use ASAN (and any other sanitizer that exhibits the same behaviors as ASAN) in production on suid binaries. Doing so opens up interesting attack vectors.

                                                I’ve also noticed that ASAN doesn’t like how aggressively HardenedBSD randomizes the stack. On amd64 and arm64, we’re able to introduce 42 bits of entropy into the stack. I either have to disable ASLR or greatly decrease the entropy introduced into ASLR delta generation.

                                                1. 6

                                                  Team lobste.rs, @lattera, @nickpsecurity?

                                                  1. 5

                                                    Haha. I would love it if I had the time to play. Perhaps next year. Thanks for the ping, though. I’ve forwarded this on to a few of my coworkers who play CTFs.

                                                    1. 4

                                                      I’d love to if I hadn’t lost my memory, including of hacking, to that injury. I never relearned it since I was all-in with high-assurance security at that point which made stuff immune to almost everything hackers did. If I still remembered, I’d have totally been down for a Lobsters hacking crew. I’d bring a dozen types of covert channels with me, too. One of my favorite ways to leak small things was putting it in plain text into TCP/IP headers and/or throttling of what otherwise is boring traffic vetted by NIDS and human eye. Or maybe in HTTPS traffic where they said, “Damn, if only I could see inside it to assess it” while the data was outside encoded but unencrypted. Just loved doing the sneakiest stuff with the most esoteric methods I could find with much dark irony.

                                                      I will be relearning coding and probably C at some point in future to implement some important ideas. I planned on pinging you to assess the methods and tooling if I build them. From there, might use it in some kind of secure coding or code smashing challenge.

                                                      1. 5

                                                        I’m having a hard time unpacking this post, and am really starting to get suspicious of who you are, nickpsecurity. Maybe I’ve missed some background posts of yours that explains more, and provides better context, but this comment (like many others) comes off…almost Markovian (as in chain).

                                                        “If I hadn’t lost my memory…” — of all the people on Lobsters, you seem to have the best recall. You regularly cite papers on a wide range of formal methods topics, old operating systems, security, and even in this post discuss techniques for “hacking” which, just sentences before, you say you can’t remember how to do.

                                                        You regularly write essays as comments…some of which are almost tangential to the main point being made. These essays are cranked out at a somewhat alarming pace. But I’ve never seen an “authored by” submitted by you pointing outside of Lobsters.

                                                        You then claim that you need to relearn coding, and “probably C” to implement important ideas. I’ve seen comments recently where you ask about Go and Rust, but would expect, given the number of submissions on those topics specifically, you’d have wide ranging opinions on them, and would be able to compare and contrast both with Modula, Ada, and even Oberon (languages that I either remember you discussing, or come from an era/industry that you often cite techniques from).

                                                        I really, really hate to have doubt about you here, but I am starting to believe that we’ve all been had (don’t get me wrong, we’ve all learned things from your contributions!). As far as I’ve seen, you’ve been incredibly vague with your background (and privacy is your right!). But, that also makes it all the more easy to believe that there is something fishy with your story…

                                                        1. 11

                                                          I’m not hiding much past what’s private or activates distracting biases. I’ve been clear when asked on Schneier’s blog, HN, maybe here that I don’t work in the security industry: I’m an independent researcher who did occasional gigs if people wanted me to. I mostly engineered prototypes to test my ideas. Did plenty of programming and hacking when younger for the common reasons and pleasures of it. I stayed in jobs that let me interact with lots of people. Goal was social research and outreach on big problems of the time like a police state forming post-9/11 which I used to write about online under aliases even more than tech. I suspected tech couldn’t solve the problems created by laws and media. Had to understand how many people thought, testing different messages. Plus, jobs allowing lots of networking mean you meet business folks, fun folks, you name it. A few other motivations, too.

                                                          Simultaneously, I was amassing as much knowledge as I could about security, programming, and such trying to solve the hardest problems in those fields. I gave up hacking since its methods were mostly repetitive and boring compared to designing methods to make hacking “impossible.” Originally a mix of public benefit and ego, I’d try to build on work by folks like Paul Karger to beat the worlds’ brightest people at their game one root cause at a time until a toolbox of methods and proven designs would solve the whole problem. I have a natural, savant-like talent for absorbing and integrating tons of information but a weakness for focusing on doing one thing over time to mature implementation. One is exciting, one is draining after a while. So, I just shared what I learned with builders as I figured it out with lots of meta-research. My studies of work of master researchers and engineers aimed to solve both individual solutions in security/programming (eg secure kernels or high-productivity) on top of looking for ways to integrate them like a unified, field theory of sorts. Wise friends kept telling me to just build one or more of these to completion (“focus Nick!”). Probably right but I’d have never learned all I have if I did. What you see me post is what I learned during all the time I wasn’t doing security consulting, building FOSS, or something else people pushed.

                                                          Unfortunately, right before I started to go for production stuff beyond prototypes, I took a brain injury in an accident years back that cost me most of my memory, muscle memory, hand-eye coordination, reflexes, etc. Gave me severe PTSD, too. I can’t remember most of my life. It was my second, great tragedy after a triple HD failure in a month or two that cost me my data. All I have past my online writings are mental fragments of what I learned and did. Sometimes I don’t know where they came from. One of the local hackers said I was the Jason Bourne of INFOSEC: didn’t know shit about my identity or methods but what’s left in there just fires in some contexts for some ass-kicking stuff. I also randomly retain new stuff that builds on it. Long as it’s tied to strong memories, I’ll remember it for some period of time. The stuff I write-up helps, too, which mostly went on Schneier’s blog and other spaces since some talented engineers from high-security were there delivering great peer review. Made a habit out of what worked. I put some on HN and Lobsters (including authored by’s). They’re just text files on my computer right now that are copies of what I told people or posted. I send them to people on request.

                                                          Now, a lot of people just get depressed, stop participating in life as a whole, and/or occasionally kill themselves. I had a house to keep and a shitty job that went from a research curiosity to a necessity since I didn’t remember admining, coding, etc. I tried to learn C# in a few weeks for a job once like I could’ve before. Just gave me massive headaches. It was clear I’d have to learn a piece at a time like I guess is normal for most folks. I wasn’t ready to accept it, plus had a job to re-learn already. So, I had to re-learn the skills of my existing job (thank goodness for docs!), some people stuff, and so on to survive while others were trying to take my job. Fearing discrimination for disability, I didn’t even tell my coworkers about the accident. I just let them assume I was mentally off due to stress many of us were feeling as the Recession led to layoffs in and around our households. I still don’t tell people until after I’m clearly a high-performer in the new context. Pointless since there’s no cure they could give but plenty of downsides to sharing it.

                                                          I transitioned out of that to other situations. Kind of floated around keeping the steady job for its research value. Drank a lot since I can’t choose what memories I keep and what I have goes away fast. A lot of motivation to learn stuff if I can’t keep it, eh? What you see is the stuff I repeated the most for years on end teaching people fundamentals of INFOSEC and stuff. It sticks mostly. Now, I could’ve just piece by piece relearned some tech in a focused area, got a job in that, built up gradually, transitioned positions, etc… basically what non-savants do is what I’d have to do. Friends kept encouraging that. Still had things to learn talking to people especially where politics were going in lots of places. Still had R&D to do on trying to find the right set of assurance techniques for the right components that could let people crank out high-security solutions quickly and market-competitive. All the damage in media indicated that. Snowden leaks confirmed most of my ideas would’ve worked while most of the security community’s recommendations not addressing root causes were being regularly compromised, as those who taught me predicted. So, I stayed on that out of perceived necessity that not enough people were doing it.

                                                          The old job and situation are more a burden now than useful. Sticking with it to do the research cost me a ton. I don’t think there’s much more to learn there. So, I plan to move on. One, social project failed in unexpected way late last year that was pretty depressing in its implications. I might take it up again since a lot of people might benefit. I’m also considering how I might pivot into a research position where I have time and energy to turn prior work into something useful. That might be Brute-Force Assurance, a secure (thing here), a better version of something like LISP/Smalltalk addressing reasons for low uptake, and so on. Each project idea has totally different prerequisites that would strain my damaged brain to learn or relearn. Given prior work and where tech is at, I’m leaning most toward a combo of BFA with a C variant done more like live coding, maybe embedded in something like Racket. One could rapidly iterate on code that extracted to C with about every method and tool available thrown at it for safety/security checks.

                                                          So, it’s a mix of indecision and my work/life leaving me feeling exhausted all the time. Writing up stuff on HN, Lobsters, etc about what’s still clear in my memory is easy and rejuvenating in comparison. I also see people use it on occasion with some set to maybe make waves. People also send me emails or private messages in gratitude. So, probably not doing what I need to be doing but folks were benefiting from me sharing pieces of my research results. So, there it is all laid out for you. A person outside security industry going Ramanujan on INFOSEC and programming looking for its UFT of getting shit done fast, correct, and secure (“have it all!”) while having day job(s) about meeting, understanding, and influencing people for protecting or improving democracy. Plus, just the life experiences of all that. It was fun while it lasted. Occasionally so now but more rare.

                                                          1. 4

                                                            Thank you for sharing your story! It provides a lot of useful context for understanding your perspective in your comments.

                                                            Putting my troll hat on for a second, what you’ve written would also make a great cover story if you were a human/AI hybrid. Just saying. :)

                                                            1. 1

                                                              Sure. I’m strange and seemingly contradictory enough that I expect confusion or skepticism. It makes sense for people to wonder. I’m glad you asked since I needed to do a thorough writeup on it to link to vs scattered comments on many sites.

                                                          2. 0

                                                            I have to admit similar misgivings (unsurprisingly, I came here via @apg and know @apg IRL). For someone so prolific and opinionated you have very little presence beyond commenting on the internet. To me, that feels suspicious, but who knows. I’m actually kind of hoping you’re some epic AI model and we’re the test subjects.

                                                            1. 0

                                                              Occam’s Razor applies. ‘A very bright human bullshitter’ is more likely than somebody’s research project.

                                                              @nickpsecurity, have you considered “I do not choose to compete” instead of “If only I hadn’t had that memory loss”?

                                                              I, for one, will forgive and forget what I’ve seen so far. (TBH, I’m hardly paying attention anyway.)

                                                              But, lies have a way of growing, and there is some line down the road where forgive-and-forget becomes GTFO.

                                                              1. 1

                                                                have you considered “I do not choose to compete” instead of “If only I hadn’t had that memory loss”?

                                                                I did say the way my mind works makes it really hard to focus on long-term projects to completion. Also, I probably should’ve been doing some official submissions in ACM/IEEE but polishing and conferencing was a lot of work distracting from the fun/important research. If I’m reading you right, it’s accurate to say I wasn’t trying to compete in academia, market, or social club that is the security industry on top of memory loss. I was operating at a severe handicap. So, I’d (a) do those tedious, boring, distracting, sometimes-political things with that handicap or (b) keep doing what I was doing, enjoying, and provably good at despite my troubles. I kept going with (b).

                                                                That was the decision until recently when I started looking at doing some real, public projects. Still in the planning/indecision phase on that.

                                                                “But, lies have a way of growing, and there is some line down the road where forgive-and-forget becomes GTFO.”

                                                                I did most of my bullshitting when I was a young hacker trying to get started. Quite opposite of your claim, the snobby, elitist, ego-centered groups I had to start with told you to GTFO by default unless you said what they said, did what they expected, and so on. I found hacker culture to be full of bullshit beliefs and practices with no evidence backing them. That’s true to this day. Just getting in to few opportunities I had required me to talk big… being a loud wolf facing other wolves… plus deliver on a lot of it just to not be filtered. I’d have likely never entered INFOSEC or verification otherwise. Other times have been personal failures that required humiliating retractions and apologies when I got busted. I actually care about avoiding unnecessary harm or aggravation to decent people. I’m sure more failures will come out over time with them costing me but there will be a clear difference between old and newer me. Since I recognize my failure there, I’m focusing on security BSing for rest of comment since it’s most relevant here.

                                                                The now, especially over the past five years or so, has been me sharing hard-won knowledge with people with citations. Most of the BS is stuff security professionals say without evidence that I counter with evidence. Many of their recommendations got trashed by hackers with quite a few of mine working or working better. Especially on memory safety, small TCB’s, covert channels, and obfuscation. I got much early karma on HN in particular mainly countering BS in fads, topics/people w/ special treatment, echo chambers, and so on. My stuff stayed greyed out but I had references. They usually got upvoted back by the evening. To this day, I get emails thanking me for doing what they said they couldn’t since any dissenting opinion on specific topics or individuals would get slammed. My mostly-civil, evidence-based style survived. Some BS actually declined a bit since we countered it so often. Just recently had to counter a staged comparison here which is at 12 votes worth of gratitude, high for HN dissenters. The people I counter include high-profile folks in the security industry who are totally full of shit on certain topics. Some won’t relent no matter how concrete the evidence is since it’s a game or something to them. Although I get ego out of being right, I mainly do this since I think safe, secure systems are a necessary, public good. I want to know what really works, get that out there, and see it widely deployed.

                                                                If anything, I think my being a bullshitting hacker/programmer early on was a mix of justified and maybe overdoing it vs a flaw I should’ve avoided. I was facing locals and an industry that’s more like a fraternity than meritocracy, itself constantly reinforcing bullshit and GTFO’ing dissenters. With my learning abilities and obsession, I got real knowledge and skills pretty quickly switching to my current style of just teaching what I learned in a variety of fields with tons of brainstorming and private research. Irritated by constant BS, I’ve swung way in the other direction by constantly countering BS in IT/INFOSEC/politics while being much more open about my personal situation in ways that can cost me. I also turned down quite a few job offers for likely five to six digits telling them I was a researcher “outside of industry” who had “forgotten or atrophied many hands-on skills.” I straight-up tell them I’d be afraid to fuck up their systems by forgetting little, important details that only experience (and working memory) gives you. Mainly admining or networking stuff for that. I could probably re-learn safe/secure C coding or something enough to not screw up commercial projects if I stayed focused on it. Esp FOSS practice.

                                                                So, what do you think? Did I have justification for at least some of my early bullshit, quite like playing the part for job interviews w/ HR drones? Or should I have been honest enough that I never learned or showed up here? There might be middle ground but that cost seems likely given past circumstances. I think my early deceptions or occasional fuckups are outweighed by the knowledge/wisdom I obtained and shared. It definitely helped quite a few people whereas talking big to gain entry did no damage that I can tell. I wasn’t giving bad advice or anything: just a mix of storytelling with letting their own perceptions seem true. Almost all of them are way in my past. So, really curious what you think: how justified is someone entering a group of bullshitters with arbitrary filtering criteria in out-bullshitting and out-performing them to gain useful knowledge and skills? That part specifically.

                                                                1. 2

                                                                  As a self-piloted, ambulatory tower of nano machines inhabiting the surface of a wet rock hurtling through outer space, I have zero time for BS in any context. Sorry.

                                                                  I do have time for former BSers who quit doing it because they realized that none of these other mechanical wonders around them are actually any better or worse at being what they are. We’re all on this rock together.

                                                                  p.s. the inside of the rock is molten. w t actual f? :D

                                                                  1. 2

                                                                    Actually, come to think of it, I will sit around and B.S. for hours, in person with close friends, for fun. Basically just playing language games that have no rules. It probably helps that all the players love each other. That kind of BS is fine.

                                                                    1. 1

                                                                      I somehow missed this comment before or was dealing with too much stuff to respond. You and I may have some of that in common since I do it for fun. I don’t count that as the BS people want to avoid so much as just entertainment since I always end with a signal that it’s bullshit. People know it’s fake unless tricking them is part of our game, esp if I owe them a “Damnit!” or two. Even then, it’s still something we’re doing voluntarily for fun.

                                                                      My day-to-day style is a satirist like popular artists doing controversial comedy or references. I just string ideas together to make people laugh, wonder, or shock them. Same skill that lets me mix and match tech ideas. If shocking stuff bothers them, tone it way down so they’re as comfortable as they let others be. Otherwise, I’m testing their boundaries with stuff making them react somewhere between hysterical laughter and “Wow. Damn…” People tell me I should Twitter the stuff or something. Prolly right again but haven’t done it. Friends and coworkers were plenty fun to entertain without any extra burdens.

                                                                      One thing about sites like this is staying civil and informational actually makes me hide that part of my style a lot since it might piss a lot of people off or risk deleting my account. I mostly can’t even joke here since it just doesn’t come across right. People interpret via impression those informational or political posts gave vs my in-person, satirical style that heavily leans on non-tech references, verbal delivery, and/or body language. Small numbers of people face-to-face instead of a random crowd, too, most of the time. I seem to fit into that medium better. And trying to be low-noise and low-provocation on this site in particular since I think it has more value that way.

                                                                      Just figured I’d mention that since we were talking about this stuff. I work in a pretty toxic environment. In it, I’m probably the champion of burning jerks with improv and comebacks. Even most naysayers pay attention with their eyes and some smirks saying they look forward to next quip. I’m a mix of informative, critical, random entertainment, and careful boundary pushing just to learn about people. There’s more to it than that. Accurate enough for our purposes I think.

                                                                    2. 1

                                                                      Lmao. Alright. We should get along fine then given I use this site for brainstorming, informing, and countering as I described. :)

                                                                      And yeah it trips me out that life is sitting on a molten, gushing thing being supplied energy by piles of hydrogen bombs going off in a space set to maybe expand into our atmosphere at some point. That is if a stray star doesn’t send us whirling out of orbit. Standing in the way of all of this is the ingenuity of what appear to be ants on a space rock whose combined brainpower got a few off of it and then back on a few times. They have plans for their pet rock. Meanwhile, they scurry around on it making all kinds of different visual, IR, and RF patterns for space tourists to watch for a space buck a show.

                                                        1. 2

                                                          I’m glad to see that they recommend SafeStack in conjunction with SSP. I’m extra glad we’re doing that for base applications in HardenedBSD. Some ports have SafeStack enabled as well. :)

                                                          1. 1

                                                            Since LLVM and Clang are permissively licensed will this go upstream?

                                                            1. 4

                                                              I emailed Todd and his intentions are to make an attempt at upstreaming after letting it soak for additional testing in OpenBSD for a time. I might toy around with the patch in HardenedBSD.

                                                            1. 4

                                                              Why the choice of Python 3? I probably wouldn’t use any tool like this that doesn’t solely depend on /bin/sh (or can’t be executed via /rescue/sh). The reason is that if my boot environment is so screwed up that non-base applications (like python) don’t work, yet statically-compiled applications in base (like /rescue/sh) do work, I can’t use zedenv but I can use beadm. My main use case for boot environments is for installing updates, and sometimes updates go haywire. I have hit, and am 100% sure I will hit, instances where my environment is so screwed up, only /rescue will rescue me.

                                                              1. 3

                                                                I wanted to build something that’s easily maintainable, and while sh is great, it’s not a programming language. I realize that complex applications can be created with sh, but over time they can become unwieldy. I think sh is great for small scripts like starting up services in rc.d, and for scripting things quickly, but when I want to build something that will be used and maintained long-term, I would rather use a programming language.

                                                                Part of the point of boot environments is that you don’t have to enter /rescue. You create a boot environment for the new update, do the update there, and if things go haywire you boot into the old boot environment. At that point, you can mount your broken boot environment, do some surgery, and once fixed you can reboot into it.

                                                                If things are so haywire that all of your boot environments aren’t working, and you have to use /rescue, it’s probably not related to boot environments. If it is, you can always use zfs to fix your problem.
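                                                                The workflow above can be sketched roughly as follows with beadm (zedenv’s subcommands are nearly identical). This is a hedged example, not taken from either project’s docs, and the boot environment names are made up:

                                                                ```shell
                                                                # Clone the current boot environment before updating
                                                                beadm create update-be

                                                                # Make the clone the default for the next boot, then reboot
                                                                # and perform the update inside it
                                                                beadm activate update-be

                                                                # If the update goes haywire, boot the old environment
                                                                # (pick it from the loader menu, or from the old BE run):
                                                                beadm activate default

                                                                # The broken environment can then be mounted for surgery
                                                                beadm mount update-be /mnt
                                                                # ...fix things under /mnt, then unmount and retry it:
                                                                beadm umount update-be
                                                                ```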

                                                              1. 2

                                                                This is yet another project that has piqued my curiosity, only to find it participates in open source vendor lock-in by requiring Docker. Due to that, I’m unable to use it.

                                                                1. 2

                                                                  it participates in open source vendor lock-in by requiring Docker

                                                                  Can you explain what is “vendor lock-in” about Docker?

                                                                  Isn’t Docker now part of an “open container initiative” or something?

                                                                  AFAIK, it’s usually not too difficult to de-Dockerify something.

                                                                  1. 1

                                                                    Because Docker isn’t supported everywhere and won’t be. It’s not supported on the BSDs. Unless there’s a business requirement to run Linux, I only run BSD.

                                                                    1. 3

                                                                      I’ve never heard of “vendor lock-in” meaning “it doesn’t run everywhere”. By that definition almost all software is “vendor lock-in”. Mostly I’ve heard the phrase used to refer to data formats and data in general. But whatever the case may be, the Dockerfile doesn’t mean Docker is required. You’re free to try building it and running it on BSD without Docker.

                                                                      1. 1

                                                                        The old definition of cross-platform code meant it runs on the widely-used platforms regardless of what any vendor chooses; the project itself controls that. This code, if tied to Docker, will only support the hosts and targets Docker supports. It’s locked into what that project chooses. I hadn’t heard of open-source vendor lock-in before, but it makes sense: many OSS foundations are easier to use than to modify heavily.

                                                                        These people probably have no intention of porting Docker to BSD or taking over Docker development. They’ll depend on upstream to do that or not. So, they’re locked in if the Docker dependency isn’t easily replaceable by them or their users. If it is easily replaceable, I’d not call it lock-in: just a project preference for development and distribution, with cross-platform limited to Docker’s definition of platforms. That may be enough for this project. I can’t say any more than that since I’m just glancing at it.

                                                                        1. 1

                                                                          This code, if tied to Docker, will only use the hosts and targets Docker supports

                                                                          For probably the third time now: this project is not “tied to Docker”, and the concept of “tied to Docker” for a single piece of code is borderline nonsensical.

                                                                          There are projects that are “tied to Docker”, but that most likely means they assemble multiple software pieces together via a docker-compose.yml file, not a Dockerfile file.

                                                                          1. 1

                                                                            It does appear that I misread the project. I looked at their deployment guide and Docker is front-and-center. However, it appears that the project does not have a hard dependency on Docker.

                                                                            For those projects that do have a hard dependency on Docker, my statement still stands. Docker, in those cases, is a form of open source vendor lock-in due to deliberate non-portability.

                                                                            1. 0

                                                                              Docker, in those cases, is a form of open source vendor lock-in due to deliberate non-portability.

                                                                              Let’s Internet rage at non-portable BSD-specific features as well then.

                                                                      2. 2

                                                                        Docker, in fact, literally only runs on Linux. It uses a wide variety of Linux-specific functionality, and all extant Docker images contain Linux x86 binaries. On Windows and OS X, Docker runs on Linux in a VM (a setup which is impressively fragile and introduces an incredible variety of weird edge cases and complications).

                                                                  1. 11

                                                                    I have to wonder how much time is spent during the researching of a vulnerability in coming up with the perfect dad-joke moniker for it and registering a domain name…

                                                                    1. 3

                                                                      Usually more time than alerting vendors and allowing them to come up with a fix. See also: Meltdown and Spectre.

                                                                      1. 5

                                                                        Really, you think six months was spent dreaming up the meltdown name?

                                                                        1. 1

                                                                          Did all vendors, including OpenBSD, get six months advanced notice with Meltdown?

                                                                          1. 5

                                                                            I don’t think it’s possible to draw any conclusions on the time spent naming the vuln from the list of vendors that weren’t notified.