1. 3

    My wife flew out to Utah to visit family for Christmas, so I’m left a bachelor until I fly out later this month. To celebrate the start of my alone time, I plan to hack on a few things this weekend:

    1. Take care of a few Pull Requests and bug tickets/feature requests in HardenedBSD.
    2. Compile and install Firefox on my Pinebook, which is running HardenedBSD as of today.
    3. Attempt to make more progress on porting SafeStack to HardenedBSD 13-CURRENT/arm64.
    4. Tack some LED strip lights in a dark hallway in my home.
    1. 2

      I plan on doing the following:

      1. Watch/listen to some of the USENIX LISA presentations that were uploaded to YouTube
      2. Work on my fork of FreeBSD’s hypervisor, bhyve, for a special malware lab
      3. Work on automating builds of HardenedBSD for the various arm64 SoC dev boards (RPI3, Pine64-LTS, etc.)
      1. 10

        Unpopular opinion puffin meme time!

        I’ve seen this “move to memory safe languages will solve all our problems” argument too many times. There are a few problems with it:

        1. The language itself must never use unsafe code and must never contain any type of vulnerabilities.
        2. The entire operating system must be rewritten in that language: everything from the bootloader to the kernel to the entire userland (and that includes libc). This new operating system must maintain feature parity with the old one, including the same KBI, KPI, ABI, and API.
        3. Just because memory safety is solved doesn’t mean that all types of vulnerabilities are solved as well. Think: LFI/RFI with PHP doesn’t involve memory safety vulnerabilities at all.
        4. All the legacy code deployed on legacy devices. Not only do you have to rewrite the operating system, but you have to replace the billions and billions of Internet-connected devices.
        5. Perhaps I’m forgetting something. Place whatever I’m forgetting here. ;)

        edit[0]: Add an item.

        1. 14

          (Samuel L. Jackson voice) “Oh you were finished? Well, allow me to retort.”

          1. This is already countered if the goal is fewer vulnerabilities, which safe languages lead to. If the aim is all of them, safe languages are only a start. The unsafe code must be verified using external tools, which do exist. The other vulnerabilities will need to be mitigated by review, language extensions, libraries, and/or external tools. The good news is that many of these exist and can be added to the toolset. Also, eliminating entire classes of errors gives developers more time to focus on others.

          2. The OS does not need to be rewritten. First, you only need to mitigate the code you use. Projects like Poly2 literally remove every call and feature they don’t need. For the rest, you can use tools that make legacy C memory safe with a performance hit. Then, begin the rewrites on the fast paths to gradually reduce the overall performance hit. The C safety scheme is dropped at the end.

          3. That’s true. I addressed it in No. 1. Another example is web application attacks that memory safety doesn’t mitigate. So, new language designs addressed more of those risks in the form of Ur/Web, Opa, and JIF/SIF. Those can be ported to more popular, safe languages. Rinse, repeat.

          4. The make-C-safer tools, C-to-other-language translators, sound analyzers, and separation kernels address legacy. Partly. Legacy will always be a big problem. What you overlook with IoT devices is that they’re the easiest market to get new code into. About the opposite of legacy.

          5. Ada getting better results than C effortlessly for decades. SPARK getting pointers. Why3 getting support for bit-level issues. Industrial adoption of these by some companies. Shit is getting extra real on the prove-it-has-no-errors side of things. Slow but steady progress.

          1. 1

            Mostly agreed. Note that I’m not saying to not use memory safe languages. My problem is with those who claim that there is no use for C and that C should just die already.

            The reason I said the OS would need to be rewritten is that parts of the runtime likely depend on libc (and possibly other C-based libraries, libpthread perhaps). Those libraries are written in memory-unsafe languages, like C, and could be used in an attack.

          2. 5

            This is an understandable opinion. As one of the principal authors of Monte, a language that might be characterized as claiming that moving to capability-safe (and thus memory-safe) languages will solve many of our problems, I can at least explain what the world on the other side of the looking-glass looks like.

            1. Yes, we might hope. There are three pieces. The language should not have unsafe escape hatches, the trusted parts of its implementation should be correct (relative to some specification), and the language should exclude every known bug class. These are tall demands, but Monte does at least achieve the first of them. I think that the other two are moving targets, in that our understanding of bug classes is not always deep and axiomatically-preventable, as with buffer overflows or out-of-bounds access, but is sometimes not conceptually obvious, like misquotation/injection, or emerges from the marching-on of science and technology, like timing-related side-channel-driven data exfiltration, or can even be definitionally incomplete, like plan interference.
            2. Yes, eventually. Note that there is a benefit to doing this rewrite aside from the hoped-for permanent elimination of some bug classes, though; we don’t have to implement any of the old APIs. This is actually something we expect in the object-capability worldview, because of the phenomenon of taming. Old APIs usually are untamed, and by the time that they are tamed, they no longer are compatible. In the typical Monte module, one cannot open a file, connect with a socket, write to stdout, redefine builtin names, examine a caught exception, or set up module-private global mutable state. Sincere capability-first systems like Genode and seL4 have APIs that tend to seem alien to newcomers, deeply limiting what the typical userland process can do.
            3. Yes. This relates quite a bit to the first point. We can only claim that some specific bug classes are impossible as a consequence of the underlying structure of the language. For example, Monte does not have a memory model, and consequently some classes of memory bug are not possible. (Of course, to recall your first point in its entirety, bugs in the runtime are possible, and since we generally run on memory-unsafe CPUs, we should be prepared for bugs in the runtime to be exploitable from user code.)
            4. Yes, but it’s okay, because we already were using language-agnostic abstractions in many cases. As you point out, the Internet is a thing, and many computers are connected through it. But, at the same time, the Internet is not only accessible from memory-unsafe languages. Monte speaks JSON and HTTP just like other modern languages. Nobody is requesting that the ocean be boiled all in a day, merely that we consider the benefits of not drinking seawater.
            5. Your opinion is not at all unpopular. It is, in fact, the dominant opinion, and many conversations I’ve had with security professionals have consisted entirely of me trying to break through their memetic armor in order to have a conversation.
            1. 3

              Sounds like a list of good ideas. Let’s do it!

              1. -2

                People often confuse architectural problems and competence problems with language problems. Most of the security bugs I’ve seen from C programs are due to (a) unskilled programmers and (b) failure to do simple things like using multiple memory-protected processes. If program A parses inputs and sends data in parsed form to B, then a pointer exploit in A will not be able to control B, which is the process that should do privilege escalation. As for the first problem, nobody would complain that steel struts are poorly made if major bridges were being constructed by engineers who didn’t know how to calculate load.

                People also tend to discount the single point of failure that memory errors create in single-address-space programs. There is no difference in safety between code that is 100% written in despicable C and code that is 99.99% written in super-virtuous Rust/Haskell/Pascal/Ada/JavaScript/Snobol or whatever, if that 0.01% of the code is not memory safe.
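
                To make the parse-then-hand-off point above concrete, here is a minimal privilege-separation sketch in C. The record layout and the act_on() routine are made-up placeholders; the point is that only a fixed-size, already-parsed record ever crosses the pipe from the untrusted parser to the privileged process.

                #include <stdio.h>
                #include <string.h>
                #include <unistd.h>

                struct parsed_req {           /* fixed-size, already-validated form */
                    int  op;
                    char arg[64];
                };

                static void act_on(const struct parsed_req *r) {   /* privileged work */
                    printf("privileged parent: op=%d arg=%s\n", r->op, r->arg);
                }

                int main(void) {
                    int fds[2];
                    if (pipe(fds) == -1) { perror("pipe"); return 1; }

                    pid_t pid = fork();
                    if (pid == -1) { perror("fork"); return 1; }

                    if (pid == 0) {             /* child: untrusted parser */
                        close(fds[0]);
                        struct parsed_req r = { .op = 1, .arg = "" };
                        if (fgets(r.arg, sizeof(r.arg), stdin) != NULL)
                            r.arg[strcspn(r.arg, "\n")] = '\0';
                        write(fds[1], &r, sizeof(r));   /* only parsed data crosses */
                        _exit(0);
                    }

                    close(fds[1]);              /* parent: privileged consumer */
                    struct parsed_req r;
                    if (read(fds[0], &r, sizeof(r)) == (ssize_t)sizeof(r)) {
                        r.arg[sizeof(r.arg) - 1] = '\0';   /* defensively terminate */
                        act_on(&r);
                    }
                    return 0;
                }

                Even if the parser is fully compromised, the privileged side only ever sees a fixed-size record, so a pointer exploit in the child does not directly control the parent.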

                1. 3

                  Thanks for posting this. Here’s an article that might help some folks follow what this paper presents: https://arstechnica.com/gadgets/2018/11/spectre-meltdown-researchers-unveil-7-more-speculative-execution-attacks/

                1. 2

                  A fix needing additional follow-up has been committed in FreeBSD HEAD: https://reviews.freebsd.org/rS340260 (MFC pending three days)

                  Until a full audit is performed, HardenedBSD has performed the suggested workarounds in 13-CURRENT: https://github.com/HardenedBSD/hardenedBSD/commit/d60f241d77eb286179aa25bc58a99b55833b2d10 (MFC pending)

                  1. 4

                    This is why I love bhyve on HardenedBSD:

                    1. PaX ASLR is fully applied due to compilation as a Position-Independent Executable (HardenedBSD enhancement)
                    2. PaX NOEXEC is fully applied (strict W^X) (HardenedBSD enhancement)
                    3. Non-Cross-DSO CFI is fully applied (HardenedBSD enhancement)
                    4. Full RELRO (RELRO + BIND_NOW) is fully applied (HardenedBSD enhancement)
                    5. SafeStack is applied to the application (HardenedBSD enhancement)
                    6. Jailed (FreeBSD feature written by HardenedBSD)
                    7. Virtual memory protected with guard pages (FreeBSD feature written by HardenedBSD)
                    8. Capsicum is fully applied (FreeBSD feature; see the sketch below)
                    9. No dependency on legacy hardware
                    10. Minimal support for some virtualized/emulated hardware (like e1000)

                    Bad guys are going to have a hard time breaking out of the userland components of bhyve on HardenedBSD. :)
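
                    Since item 8 may be unfamiliar, here is roughly what the Capsicum pattern looks like in C. This is a generic sketch, not bhyve’s actual code; the backing-file path and the rights chosen are just for illustration.

                    #include <sys/capsicum.h>
                    #include <err.h>
                    #include <fcntl.h>
                    #include <unistd.h>

                    int main(void) {
                        /* acquire resources up front (hypothetical backing file) */
                        int fd = open("/tmp/disk.img", O_RDWR);
                        if (fd == -1)
                            err(1, "open");

                        /* restrict the descriptor to exactly the operations needed */
                        cap_rights_t rights;
                        cap_rights_init(&rights, CAP_READ, CAP_WRITE, CAP_SEEK);
                        if (cap_rights_limit(fd, &rights) == -1)
                            err(1, "cap_rights_limit");

                        /* enter capability mode: no new files, sockets, or paths after this */
                        if (cap_enter() == -1)
                            err(1, "cap_enter");

                        char buf[512];
                        if (pread(fd, buf, sizeof(buf), 0) == -1)   /* still allowed */
                            err(1, "pread");
                        /* an open(2) of a global path here would now fail with ECAPMODE */
                        return 0;
                    }

                    Combined with the jail and the guard pages, an attacker who does get code execution inside the userland side of bhyve is left with very little ambient authority to abuse.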

                    1. 2

                      No dependency on legacy hardware

                      What does this mean exactly? Usually when I see statements like this, it means that software has some (arbitrary?) requirement to run on newer hardware and, say, a system that is 8-10 years old will not work.

                      1. 2

                        You’re correct: bhyve does require newer hardware. It requires VT-d and EPT. It runs on Intel Haswell and above. It has support for AMD processors, but I don’t know the minimum spec there.

                        1. 2

                          I was partially incorrect: On the Intel Xeon side, bhyve runs on Intel Westmere and above. Thanks to Patrick Mooney for pointing that out to me.

                    1. 2

                      I’ll be getting a brand new HVAC installed along with sleeping this week’s stress away.

                      1. 16

                        It seems both HardenedBSD and OpenBSD were wise to disable SMT by default.

                        1. 2

                          I’m working on just a couple things this weekend:

                          1. Writing a blog article on how to jail bhyve, now that FreeBSD is in the release engineering cycle for 12.0.
                          2. Still working on investigating a regression in i386 support on HardenedBSD, as I help OPNsense make the switch to HardenedBSD as its OS upstream instead of FreeBSD.
                          1. 8

                            I plan to do a few things this week:

                            1. Comb through some of the ports that fail to build in HardenedBSD’s ports tree due to the ports’ dislike for certain llvm compiler toolchain components (llvm-ar, llvm-nm, llvm-objdump, etc.).
                            2. Fix an issue in FreeBSD’s new bectl application.
                            3. Help OPNsense fully adopt HardenedBSD as its upstream. Right now, this means investigate a regression on i386.
                            4. Perhaps take a nap.
                            1. 6

                              I’ll be working on HardenedBSD. Now that FreeBSD has created their stable/12 branch, we need to follow suit: https://github.com/HardenedBSD/hardenedBSD/issues/353

                              1. 1

                                There used to be an official Apple doc on hardening Mac OS X, aimed at admins. It described cool methods like permanently disabling the WiFi module or turning on secure boot (one more password during startup). That was for Snow Leopard; I wonder if they still write one for Mojave.

                                1. 4

                                  NIST maintains a document on hardening OSX. It’s a couple years old, though.

                                  1. 1

                                    I remember this. But hey, they don’t care anymore.

                                  1. 2

                                    I wonder if RAP (and other CFI implementations, like the one in llvm) can be enhanced to xor the per-function hash with a random secret, possibly an ELF auxvec entry with a random value. To successfully exploit the application, then, the attacker would need to first leak the secret.

                                    Of course, the RTLD would need to be patched to support such a scheme.
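
                                    To make that concrete, here is a rough userland model of the check in C. This is not how RAP or llvm CFI actually emit their instrumentation; it only illustrates the property I’m after: the call-site comparison succeeds only if both the type hash and the per-process secret are known, so an attacker must leak the secret before a forged pointer will pass.

                                    #include <stdint.h>
                                    #include <stdio.h>
                                    #include <stdlib.h>

                                    #define FN_HASH 0xdeadbeefcafef00dULL   /* stand-in hash for "void (*)(int)" */

                                    static uint64_t cfi_secret;             /* would come from a random auxv entry */

                                    struct guarded_fn {                     /* tag stored alongside the pointer */
                                        uint64_t tag;                       /* FN_HASH ^ cfi_secret */
                                        void (*fn)(int);
                                    };

                                    static void greet(int n) { printf("called with %d\n", n); }

                                    static void checked_call(const struct guarded_fn *g, int arg) {
                                        if ((g->tag ^ cfi_secret) != FN_HASH)   /* fails unless the secret is known */
                                            abort();
                                        g->fn(arg);
                                    }

                                    int main(void) {
                                        cfi_secret = 0x1122334455667788ULL;  /* placeholder for the RTLD-provided value */
                                        struct guarded_fn g = { FN_HASH ^ cfi_secret, greet };
                                        checked_call(&g, 42);
                                        return 0;
                                    }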

                                    1. 1

                                      In early 2015 (Feb/Mar), I theorized using Intel SGX to strengthen llvm’s SafeStack implementation, which we use in HardenedBSD. But then L1TF became a thing.

                                      1. 2

                                        Around a decade ago, I theorized using data-only attacks to control the flow of execution. Nice to see someone formalize that very same theory.

                                        But, to be honest, it should be rather obvious that data manipulation controls the flow of execution.

                                        1. 2

                                          Exactly. The machines literally do what the inputs tell them to do. I guess you could say information flow control folks picked up on it but idk how specifically. Microsoft Research did come up with data flow integrity which seems more to the point of stopping data-based attacks.

                                        1. 3

                                          Congratulations on getting set up for funding your project. Hope that works out!

                                          I have a quick question, though. So, OpenBSD and HardenedBSD both do mitigations against code attacks. The other set of problems I had to harden Windows/Linux against were configurations allowing too much access or leaking information, dangerous services that needed turning off, and so on. I know OpenBSD addresses them to be secure by default. Is FreeBSD secure by default out of the box, or do you recommend a hardening guide (or guides)?

                                          If it is, then that means there are two extra BSDs competing for most secure: CheriBSD, with capability-secure instructions and custom hardware; and Criswell et al.’s SVA-OS, which combines a safe interface for low-level stuff with compile-time safety for C programs. Then, the most accessible and supported would be HardenedBSD and OpenBSD. Further, it’s probably wise to move low-cost mitigations into the other two as an extra layer.

                                          1. 6

                                            Thanks! So, my perspective is a bit different. I don’t view HardenedBSD as a competitor with the other BSDs. I’m more of a collaborative person and I believe that the best innovations come with collaboration.

                                            I’m also not a fan of the phrase “secure by default”. Secure against what? OpenBSD does a lot to harden an out-of-the-box fresh installation. HardenedBSD focuses mostly on exploit mitigations, but does include some level of service hardening. FreeBSD needs a lot of work.

                                            There’s some interesting research coming out of the CHERI project. The rather large hurdle they’ll have to jump over, though, is getting their hardware-based work landing in consumer hardware. Not only will their hardware improvements need to land in silicon, they will need to be used by the general public. So that means they’re at least three to four decades out from seeing any hardware-based security mechanisms effectively used.

                                            Of course, all these projects could do still more to harden a system.

                                            1. 3

                                              Appreciate the heads up on FreeBSD needing hardening. I expected as much since it’s a full-featured OS. I like your collaborative attitude, too. I used to be about slamming the competition. I switched a few years ago to live-and-let-live with folks helping each other out. We’ll all be better off that way since strong security will never have massive talent or financial support. On top of having more fun together. :)

                                              As far as CHERI, you actually need an FPGA or an ASIC depending on the use case. There are lots of embedded systems whose processors aren’t fast. An FPGA could reach that performance with CHERI’s level of security, at high unit cost and watts. A Terasic board already runs it, too. There are also security-focused and legacy apps that don’t have to be fast that might benefit.

                                              If they do an ASIC, a hardware team has to convert it to ASIC-ready form, buy some IP for things like I/O, stitch them together, and do verification. I think that can happen in less than a year if it’s funded. Draper/Dover already converted SAFE in a few years to support three ISAs, too. They had little to nothing to work with vs. CHERI’s open hardware.

                                              1. 3

                                                Well, what I meant to convey was that CHERI’s hardware-based work still needs to be used by the average consumer for it to be of any real use. That means Intel needs to adopt their work. Which then means it’ll be another decade before it’s available on the shelves. And then two more decades before it’s broadly used. So, we’re talking a minimum of three decades before CHERI’s work is useful outside of academia.

                                                Or, perhaps, I’m being way too pessimistic. I like a lot of CHERI’s hardware-based work, so I really hope I’m wrong.

                                                1. 2

                                                  Well, what I meant to convey was that CHERI’s hardware-based work still needs to be used by the average consumer for it to be of any real use.

                                                  You are being too pessimistic, but you’re still right to a large degree. Obviously, most people will want the full-custom performance their x86 boxes are getting them. The ARM processors that were targeted at embedded, which CHERI could enter, weren’t good enough for that. Smartphone companies, and later tablet companies, still convinced people to use chips like that for much of their day-to-day experience. They included hardware accelerators for stuff like MPEG-2 that the CPU couldn’t handle. Now, they’re selling so much they’re eating into the desktop and laptop markets. They’re also standard cell (cheaper/easier) rather than full custom (ultra-expensive). So, those products might use an ARM or RISC-V SoC with CHERI extensions, the addition of which would be dirt-cheap, fast work compared to the rest of the SoC.

                                                  For Intel/AMD/IBM doing full-custom, I don’t know where you’re getting a decade. They can crank out whole CPUs with billions of transistors done full-custom in a year or two. These extensions should be smaller than the processors college students build in their coursework with standard tooling. If companies wanted, they could throw out a CHERI-RISC processor probably in months. Cavium especially, since they’re MIPS experts and CHERI is MIPS. On the high end, it’s easier to add to POWER than x86 since it’s already RISC. IBM had INFOSEC pioneer Paul Karger build an MLS-enforcing processor that one report said was in small-scale, commercial use. More realistically, I told interested companies to get Intel or AMD to do a modification trimming the fat plus adding some security through their semi-custom business. Nobody from Intel/AMD/IBM to their purchasers is interested in doing this, probably because people like us can’t sell the executives on the idea directly. It’s tech people talking to business people or no need is seen. That is why it isn’t happening. If they wanted it, it could be done in a few months to a few years, with FreeBSD support already there or a doable amount of work for them.

                                                  If not desktops, there’s also the embedded market, which sells everything from sub-100MHz 8-bitters to fixed-purpose setups with high-end chips. I wanted SiFive’s RISC-V team to consider targeting secure, fast, cheap networking before anything else. Switches, routers, VPNs, firewalls, caches, SSL/TLS offloading, and so on. That would get the networking IP in better shape for the attacks that will be coming later. Inspectable, too, except for NDA parts. If it’s general purpose, it could become a more expensive and more open version of the Raspberry Pi, too, with the volume sales of the commercial products keeping it cheap. Throw it in some servers they use in ARM-based clouds, too, differentiating on security and privacy.

                                                  For funding, it’s worth noting that hardware startups regularly get funded by VCs and taxpayers (especially university spinouts). My best idea was getting paid pros to work with good academics, using cheap tooling, to just build an open, secure CPU published in a form industry could use. Although not security-focused, the Leon3 GPL core was done that way. Then the pros would use sponsored or crowdfunded money to stitch together the third-party IP necessary to make it useful and to buy the masks. Then they’d begin selling it, with sales maybe making it self-sustaining. Alternatively, it’s a loss leader for one or more sellers of hardware appliances or software. An example of that was Zylin doing the ZPU.

                                          1. 7

                                            I would rather have seen the HardenedBSD code just get merged back into FreeBSD. I’m sure there are loads of reasons, but I’ve never managed to see them; their website doesn’t make that clear. I imagine it’s mostly for non-technical reasons.

                                            That said, it’s great that HardenedBSD is now set up to live longer, and I hope it has a great future. It sits in a niche that only OpenBSD really occupies, and it’s great to see some competition/diversity in this space!

                                            1. 13

                                              Originally, that’s what HardenedBSD was meant for: simply a place for Oliver and me to collaborate on our clean-room reimplementation of grsecurity for FreeBSD. All features were to be upstreamed. However, we spent two years attempting to upstream ASLR. That attempt failed and resulted in a lot of burnout with the upstreaming process.

                                              HardenedBSD still does attempt the upstreaming of a few things here and there, but usually more simplistic things: We contributed a lot to the new bectl jail command. We’ve hardened a couple aspects of bhyve, even giving it the ability to work in a jailed environment.

                                              The picture looks a bit different today. HardenedBSD now aims to give the FreeBSD community more choices. Given grsecurity’s/PaX’s inspiring history of pissing off exploit authors, HardenedBSD will continue to align itself with grsecurity where possible. We hope to perform a clean-room reimplementation of all publicly documented grsecurity features. And that’s only the start. :)

                                              edit[0]: grammar

                                              1. 6

                                                I’m sorry if this is a bad place to ask, but would you mind giving the pitch for using HardenedBSD over OpenBSD?

                                                1. 19

                                                  I view any OS as simply a tool. HardenedBSD’s goal isn’t to “win users over.” Rather, it’s to perform a clean-room reimplementation of grsecurity. By using HardenedBSD, you get all the amazing features of FreeBSD (ZFS, DTrace, Jails, bhyve, Capsicum, etc.) with state-of-the-art and robust exploit mitigations. We’re the only operating system that applies non-Cross-DSO CFI across the entire base operating system. We’re actively working on Cross-DSO CFI support.

                                                  I think OpenBSD is doing interesting things with regard to security research, but OpenBSD has fundamental paradigms that may not be compatible with grsecurity’s. For example: by default, creating an RWX memory mapping with mmap(2) is not allowed on either HardenedBSD or OpenBSD. However, HardenedBSD takes this one step further: if a mapping has ever been writable, it can never be marked executable (and vice-versa).

                                                  On HardenedBSD:

                                                  void *mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE | PROT_EXEC, ...); /* The mapping is created, but RW, not RWX. */
                                                  mprotect(mapping, getpagesize(), PROT_READ | PROT_EXEC); /* <- this will explicitly fail */
                                                  
                                                  munmap(mapping, getpagesize());
                                                  
                                                  mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_EXEC, ...); /* <- Totally cool */
                                                  mprotect(mapping, getpagesize(), PROT_READ | PROT_WRITE); /* <- this will explicitly fail */
                                                  

                                                  It’s the protection around mprotect(2) that OpenBSD lacks. Theo’s disinclined to implement such a protection, because users will need to toggle a flag on a per-binary basis for those applications that violate the above example (web browsers like Firefox and Chromium being the most notable examples). OpenBSD implemented WX_NEEDED relatively recently, so my thought is that users could use the WX_NEEDED toggle to disable the extra mprotect restriction. But, not many OpenBSD folk like that idea. For more information on exactly how our implementation works, please look at the section in the HardenedBSD Handbook on our PaX NOEXEC implementation.

                                                  I cannot stress strongly enough that the above example wasn’t given to be argumentative. Rather, I wanted to give an example of diverging core beliefs. I have a lot of respect for the OpenBSD community.

                                                  Even though I’m the co-founder of HardenedBSD, I’m not going to say “everyone should use HardenedBSD exclusively!” Instead, use the right tool for the job. HardenedBSD fits 99% of the work I do. I have Win10 and Linux VMs for those few things not possible in HardenedBSD (or any of the BSDs).

                                                  1. 3

                                                    So how will JITs work on HardenedBSD? Is the sequence:

                                                    mmap(PROT_WRITE);
                                                    // write data
                                                    mprotect(PROT_EXEC);
                                                    

                                                    allowed?

                                                    1. 5

                                                      By default, migrating a memory mapping from writable to executable is disallowed (and vice-versa).

                                                      HardenedBSD provides a utility that users can use to tell the OS “I’d like to disable exploit mitigation just for this particular application.” Take a look at the section I linked to in the comment above.

                                                  2. 9

                                                    Just to expound on the difference in philosophies: OpenBSD would never bring ZFS, Bluetooth, etc. into the OS, something HardenedBSD does.

                                                    OpenBSD has a focus on minimalism, which is great from a maintainability and security perspective. Sometimes that means you miss out on things that could make your life easier. That said OpenBSD still has a lot going for it. I run both, depending on need.

                                                    If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface. That’s not to say ZFS isn’t awesome, it totally is, but if you don’t need ZFS for a particular compute job, not including it gives you a lot smaller surface for bad people to attack.

                                                    1. 5

                                                      If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface.

                                                      I would find a fork of HardenedBSD without ZFS (and perhaps DTrace) very interesting. :)

                                                      1. 3

                                                        Why fork? Just don’t load the kernel modules…

                                                        1. 4

                                                          There have been quite a number of changes to the kernel to accommodate ZFS. It’d be interesting to see if the kernel can be made to be more simple when ZFS is fully removed.

                                                          1. 1

                                                            You may want to take a look at DragonFly BSD then.

                                                      2. 4

                                                        Besides being large, I think what makes me slightly wary of ZFS is that it also has a large interface with the rest of the system, and was originally developed in tandem with Solaris/Illumos design and data structures. So any OS that diverges from Solaris in big or small ways requires some porting or abstraction layer, which can result in bugs even when the original code was correct. Here’s a good writeup of such an issue from ZFS-On-Linux.

                                                1. 3

                                                  When apk is pulling packages, it extracts them into / before checking that the hash matches what the signed manifest says it should be.

                                                  This surprised me. There’s no way to check the hashes before extracting the files?

                                                  1. 3

                                                    Ideally, the outermost file (the archive itself) should have a detached signature of some sort. The algorithm should be as follows (roughly sketched in code after the list):

                                                    1. Ensure the trust store is sane and still trustworthy
                                                    2. Ensure there is a valid signature for the to-be-downloaded package
                                                    3. Download the package
                                                    4. Ensure that the signature is valid for the package
                                                    5. Ensure that each file has an associated hash in the pkg metadata
                                                    6. Extract files
                                                    7. Ensure that the file hashes match
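
                                                    A compressed sketch of that ordering in C, with the verification helpers stubbed out (verify_signature(), manifest_complete(), extract_to(), hashes_match(), and the staging directory are all hypothetical):

                                                    #include <stdbool.h>
                                                    #include <stdio.h>

                                                    /* hypothetical helpers, stubbed so the control flow compiles on its own */
                                                    static bool verify_signature(const char *pkg) { (void)pkg; return true; }
                                                    static bool manifest_complete(const char *pkg) { (void)pkg; return true; }
                                                    static int  extract_to(const char *pkg, const char *dir) { (void)pkg; (void)dir; return 0; }
                                                    static bool hashes_match(const char *dir) { (void)dir; return true; }

                                                    int main(void) {
                                                        const char *pkg = "pkg.apk";
                                                        if (!verify_signature(pkg))                 /* steps 1-4: trust store and signature */
                                                            return 1;
                                                        if (!manifest_complete(pkg))                /* step 5: every file has a hash */
                                                            return 1;
                                                        if (extract_to(pkg, "/tmp/stage") != 0)     /* step 6: extract (to staging, my assumption) */
                                                            return 1;
                                                        if (!hashes_match("/tmp/stage"))            /* step 7: verify before anything touches / */
                                                            return 1;
                                                        puts("package verified");
                                                        return 0;
                                                    }
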
                                                    1. 3

                                                      And do as much work as possible with as few privileges as possible. Using MAC and/or chroot is a bonus.
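
                                                      For instance, something like this before the extraction step (the chroot path and uid/gid are placeholders, and a sandbox such as Capsicum would work just as well):

                                                      #include <err.h>
                                                      #include <unistd.h>

                                                      int main(void) {
                                                          /* confine the unpacker to the staging area... */
                                                          if (chroot("/var/db/pkg-stage") == -1 || chdir("/") == -1)
                                                              err(1, "chroot");
                                                          /* ...and drop to an unprivileged uid/gid (placeholder ids) */
                                                          if (setgid(1001) == -1 || setuid(1001) == -1)
                                                              err(1, "drop privileges");
                                                          /* do the extraction and hash checks here, fully unprivileged */
                                                          return 0;
                                                      }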

                                                  1. 1

                                                    A golang wat-ism I just found:

                                                    golang maps the heap at a fixed address (0xc000000000 on FreeBSD/HardenedBSD). This makes the -buildmode=pie option useless, since the heap is at a known, fixed location.
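
                                                    A quick C illustration of why a fixed allocation base hurts (the Go runtime passes the address as a hint rather than using MAP_FIXED, as far as I can tell, but the effect on predictability is the same): on a system with mmap ASLR, a NULL hint lands somewhere different every run, while a hard-coded address gives an attacker a stable target.

                                                    #include <stdio.h>
                                                    #include <sys/mman.h>
                                                    #include <unistd.h>

                                                    int main(void) {
                                                        size_t len = (size_t)getpagesize();

                                                        void *randomized = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                                                                MAP_ANON | MAP_PRIVATE, -1, 0);
                                                        void *fixed = mmap((void *)0xc000000000UL, len, PROT_READ | PROT_WRITE,
                                                                           MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);

                                                        if (randomized == MAP_FAILED || fixed == MAP_FAILED)
                                                            return 1;
                                                        printf("NULL hint : %p  (moves around under ASLR)\n", randomized);
                                                        printf("fixed hint: %p  (same every run)\n", fixed);
                                                        return 0;
                                                    }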

                                                    1. 2

                                                      Not saying your wat is invalid but this is their response:

                                                      “(Russ Cox) Address space randomization is an OS-level workaround for a language-level problem, namely that simple C programs tend to be full of exploitable buffer overflows. Go fixes this at the language level, with bounds-checked arrays and slices and no dangling pointers, which makes the OS-level workaround much less important. In return, we receive the incredible debuggability of deterministic address space layout. I would not give that up lightly.”

                                                      1. 2

                                                        Yeah, I read the 2012 thread in which Russ said that. However, -buildmode=pie didn’t exist then. The existence of PIE support implies that the project as a whole has changed its mind: that a non-deterministic address space as modified by ASLR is worth it. I’ve submitted a new email to the golang-nuts mailing list to re-address the issue.

                                                        I’d say that golang is a “safer” language, just like Rust. However, I don’t think we’re to the point where we can say any language is 100% safe. Sure, golang and Rust improve the situation quite drastically, but they’re both written by humans and are still subject to the mistakes humans make.

                                                        So, it seems weird to me that golang would take the time to implement PIE support, but neglect to address the fixed heap. Hopefully my email on the mailing list will get some informative replies.

                                                    1. 10

                                                      Given these two bits of news, I plan to accelerate my plan to deploy censorship- and surveillance-resistant wireless mesh networks in and around Baltimore. The networks will be available either for free or at low cost to underprivileged or unfairly treated populaces.

                                                      I’m hoping to model the network after NYC Mesh with a few distinct changes. Each supernode will route all traffic through Tor. To help with the load placed on the Tor network, each supernode will need to run a public relay (non-exit is fine for now). Inter-node communication will be encrypted via IPsec.

                                                      I’m hoping to get the plan in order by summer or fall of 2019. There’s lots to do and research.