
    “we discuss mitigation strategies”

    Use a dedicated box for the untrusted stuff, with no JavaScript or random binaries on the trusted machine. No sharing between the two by default. If that’s too restrictive, allow careful sharing between the two where incoming data is in formats whose safety can be analyzed or proven. This is the oldest approach in high-assurance security, from before they attempted consolidation.

    Add a KVM switch for convenience. When attackers begin hitting those, buy a secure one from Tenix. ;)


      It seems like browsers locally executing arbitrary remote code might have been a bad decision. ;)

      Every time I hear “ASLR is dead!”, the person claiming it doesn’t understand what’s truly going on. ASLR isn’t dead. Rather, engineering choices were made that had detrimental security ramifications. ASLR wasn’t meant to protect against the threat landscape modern browsers present (the local execution of arbitrary remote code). ASLR, as originally designed, was meant purely to frustrate remote attacks, not local ones.
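
      To make that concrete, here’s a minimal C sketch (my own illustration, not from any of the papers) of the uncertainty ASLR gives a remote attacker: run the compiled binary twice on an ASLR-enabled system and the stack, heap, and code addresses it reports will typically differ between runs.

      ```c
      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* One sample of where the stack, heap, and code landed this run. */
      struct layout { uintptr_t stack, heap, code; };

      struct layout sample_layout(void)
      {
          int local = 0;                /* lives on the randomized stack */
          void *heap = malloc(16);      /* lives on the randomized heap  */
          struct layout l = {
              (uintptr_t)&local,
              (uintptr_t)heap,
              (uintptr_t)&sample_layout /* randomized PIE/text base      */
          };
          free(heap);
          return l;
      }

      int main(void)
      {
          struct layout l = sample_layout();
          /* With ASLR enabled, these values change from run to run. */
          printf("stack: %#lx\nheap:  %#lx\ncode:  %#lx\n",
                 (unsigned long)l.stack, (unsigned long)l.heap,
                 (unsigned long)l.code);
          return 0;
      }
      ```

      A purely remote attacker has to guess those values; local code that can read or leak them doesn’t, which is the browser problem in a nutshell.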

      These papers are indeed interesting, but we need to remember what ASLR was meant for. Newer exploit mitigations, like forward-edge and backward-edge Cross-DSO CFI, mitigate ASLR’s weaknesses when properly implemented. A holistic defense-in-depth strategy with multiple exploit mitigations in place, combining old (ASLR, NOEXEC) with new (CFI), will cause the most economic stress for attackers.
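
      For readers who haven’t seen CFI: forward-edge CFI restricts indirect calls to a set of legitimate targets. Here’s a hand-rolled toy version of the idea in C (the real thing, e.g. LLVM’s Cross-DSO CFI, inserts equivalent checks at compile time rather than using an explicit table, and aborts rather than returning an error):

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>

      typedef void (*handler_t)(void);

      void handler_a(void) { puts("handler_a"); }
      void handler_b(void) { puts("handler_b"); }
      void not_in_set(void) { puts("should never run"); }

      /* The legitimate indirect-call targets for this call site. */
      const handler_t valid_targets[] = { handler_a, handler_b };

      /* A CFI-style guarded indirect call: dispatch only if the pointer
       * is in the approved set; otherwise refuse. */
      int guarded_call(handler_t fn)
      {
          for (size_t i = 0;
               i < sizeof valid_targets / sizeof valid_targets[0]; i++) {
              if (fn == valid_targets[i]) {
                  fn();
                  return 1;   /* call allowed */
              }
          }
          return 0;           /* hijacked/corrupted pointer refused */
      }

      int main(void)
      {
          handler_t fn = handler_a;
          assert(guarded_call(fn) == 1);   /* legitimate target: runs */

          fn = not_in_set;                 /* simulate a hijacked pointer */
          assert(guarded_call(fn) == 0);   /* refused */
          return 0;
      }
      ```

      This is why CFI complements ASLR: even after an attacker leaks addresses, a hijacked pointer still has to land in the approved set.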


        “ASLR wasn’t meant to protect against the threat landscape modern browsers provide”

        The high-security community thought it was too weak even back then. You’re right that it was really limited. The people pushing it also hoped folks would adopt more secure coding practices, forcing attackers onto holes that are harder to exploit. That didn’t happen.

        “A holistic defense-in-depth strategy where multiple exploit mitigations are in place, combining old (ASLR, NOEXEC) with new (CFI), will cause the most economic stress for attackers.”

        Well, I’m for automatic memory safety, or CFI + DFI, wherever it doesn’t absolutely kill performance. If that’s prohibitive, then a combined approach hitting lots of stuff like you describe sounds great. Also, separation kernels. They’re still blocking some of the attacks we’re seeing because they actually try to suppress covert channels as part of their design. Most stuff doesn’t: it just gets patched with ad hoc methods after each attack.


          “The people pushing it also hoped folks would do more secure coding practices that would force enemies to use holes harder to exploit. That didn’t happen.”

          There’s a human aspect to this, too. :) We’re human, we make mistakes. That’s why we need exploit mitigations, regular code auditing, etc.

          “Well, I’m for automatic, memory safety or CFI + DFI wherever it doesn’t absolutely kill performance.”

          Completely agreed. HardenedBSD’s adoption of LLVM’s CFI and SafeStack implementations has shown that CFI and SafeStack don’t kill performance on 64-bit systems.
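
          For anyone unfamiliar with SafeStack’s design: it moves address-taken buffers off the stack that holds return addresses, so an overflow can’t reach them. A hand-rolled sketch of that separation (clang’s -fsanitize=safe-stack does this automatically at compile time; the heap stands in for the “unsafe stack” here):

          ```c
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          /* Instead of `char buf[64]` on the call stack, right next to
           * the return address, the attacker-reachable buffer lives in a
           * separate region. An overflow of `buf` can corrupt nearby heap
           * data but cannot clobber copy_input's return address. */
          char *copy_input(const char *input)
          {
              char *buf = malloc(64);
              if (!buf)
                  return NULL;
              strncpy(buf, input, 63);
              buf[63] = '\0';
              return buf;   /* caller frees */
          }

          int main(void)
          {
              char *copy = copy_input("hello");
              if (copy) {
                  printf("%s\n", copy);
                  free(copy);
              }
              return 0;
          }
          ```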


            Good work on that, too.


              Thanks! In the case of CFI and SafeStack, LLVM should take the vast majority of the credit. HardenedBSD has made very few changes to LLVM’s CFI and SafeStack implementations, and we’ll try to get the changes we do make over the next couple of years upstreamed. We’re still early in the process.