1. 29
    1. 7

      I made this site IPv6-only in hopes of thwarting repost bots that send toxic comments my way. For an IPv4 version, see https://legacy.cyrnel.net/solarwinds-hack-lessons-learned/

      1. 11

        Thanks. I have working IPv6 here but the v6 version doesn’t work for me - I get connection refused errors. It looks as if you have a valid AAAA record (and no A record), so my browser is trying to connect via v6, but failing.

        1. 4

          I have the same issue on mobile, from my IPv6 network.

          1. 2

            Could it be possible we’re experiencing the effects of peering disputes? https://adminhacks.com/broken-IPv6.html

            Although probably more likely that I just didn’t configure something right…

            1. 3

              It does look like something might not be configured right…

              9:18:25 brd@m:~> curl -I https://cyrnel.net/solarwinds-hack-lessons-learned/
              curl: (7) Failed to connect to cyrnel.net port 443 after 72 ms: Connection refused

              9:18:27 brd@m:~> ping -6 cyrnel.net
              PING6(56=40+8+8 bytes) 2602:b8:xxxxxx --> 2603:6081:ae40:160::abcd
              16 bytes from 2603:6081:ae40:160::abcd, icmp_seq=0 hlim=52 time=69.674 ms
              16 bytes from 2603:6081:ae40:160::abcd, icmp_seq=1 hlim=52 time=72.403 ms
              16 bytes from 2603:6081:ae40:160::abcd, icmp_seq=2 hlim=52 time=72.432 ms
              16 bytes from 2603:6081:ae40:160::abcd, icmp_seq=3 hlim=52 time=71.435 ms

              HTH, HAND!

              1. 4

                Ah I only opened port 80 in the firewall for the IPv6 version. Should be fixed for next time, thanks for the debugging help!

                1. 1

                  Works for me, thanks!

              2. 2

                Yeah, same: I can ping it, but I can’t curl or browse it (phone and PC). Other IPv6 stuff works fine.

      2. 5

        Thanks! My ISP (local branch of My Republic) has zero plans/intent to implement IPv6, so it’s appreciated that there’s a legacy method. :)

    2. 5

      Not sure why you’d think capability-based security is incompatible with UNIX, since FreeBSD has been shipping it for over a decade now and has adopted it for a growing number of base system utilities.

      1. 6

        > If you want security by design, backwards-compatibility is a sacrifice you’ll have to make.

        I took the author to mean that we’d have to go program by program, retrofitting capability based security. I thought that was how capsicum works in FreeBSD, if that’s what you’re referring to?

        I could see why you wouldn’t want to call that “incompatible with UNIX”, but maybe “incompatible with existing UNIX software pending significant modifications”?

        1. 2

          > I took the author to mean that we’d have to go program by program, retrofitting capability based security. I thought that was how capsicum works in FreeBSD, if that’s what you’re referring to?

          You need to retrofit it, but you can do so in a way that fits the UNIX philosophy well. File descriptors remain file descriptors, but now they have more fine-grained rights, and the only way you can access a global namespace is by presenting a capability that authorises access to some part of it. With things like libpreopen, you can get the principle-of-least-privilege benefits with unmodified binaries and a shim layer, but you do need code changes to get the intentionality benefits.
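
          To make that concrete, here’s a toy sketch in Python (not Capsicum’s actual C API; all names here are made up): a function is handed an object granting only read access to one resource, and has no ambient way to reach the global namespace.

```python
import io

class ReadCap:
    """A minimal 'capability': an object granting read access to one resource."""
    def __init__(self, stream):
        self._stream = stream

    def read(self):
        return self._stream.read()

def word_count(cap):
    # This function holds no ambient authority: it cannot name paths or
    # open files, only use the single capability it was explicitly handed.
    return len(cap.read().split())

print(word_count(ReadCap(io.StringIO("hello capability world"))))  # prints 3
```

          In Capsicum the analogous narrowing is done on real file descriptors with cap_rights_limit(2), and cap_enter(2) cuts the process off from the global namespace.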

    3. 3

      The universal capability-maker is lambda-abstraction. Personally, I’m tired of lambdas, but they are sufficient.

      The problem is structural; capability theory is actually talking about how values flow within a program, not about the computational result of working with those values, and most software is unstructured goo where all capabilities are ambient.
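
      As a tiny illustration of lambda-abstraction as capability-maker (a Python sketch; the names are invented): a closure over a resource *is* a capability, and the rights it grants are fixed at creation.

```python
def make_append_cap(log):
    # The closure closes over `log`. A holder of `append` can add entries,
    # but cannot read, clear, or replace the underlying list: the right
    # was attenuated when the capability was made.
    def append(entry):
        log.append(entry)
    return append

audit_log = []
append_entry = make_append_cap(audit_log)
append_entry("login: alice")
append_entry("logout: alice")
print(audit_log)  # prints ['login: alice', 'logout: alice']
```

      Nothing flows to the holder except what the abstraction chose to expose, which is exactly the value-flow discipline capability theory is about.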

    4. 2
      1. I agree that we need capabilities. But they are not sufficient, because a) capabilities still by definition allow harm within the capability (and think also about how to securely update the capability system) and b) there will be bugs to bypass the capability security as e.g. many processors in the wild will be vulnerable to side-channel attacks like Spectre for quite some time to come.

      2. We also need verified reproducible builds. That alone would have fully defeated SolarWinds. Most open-source projects do not need to sign, and they shouldn’t roll their own updater; they just need to provide build scripts that are reproducible. Reproducers are the ones who need to sign, to attest that the software was built in a specific way from specific inputs (e.g. by running https://github.com/in-toto/rebuilderd ). Software repositories provide an automatic-updater (package manager) client that needs to verify that a few chosen reproducers attested. “Chosen” means chosen by the end user, with a sensible default provided by the people managing the repository. Those attestations need to be secured with an observed, global, append-only log / binary-transparency log.

      3. Additionally that automatic updater needs to be able to check for signed attestations of additional post merge code reviews (somewhat like https://github.com/crev-dev/cargo-crev ).

      We need capabilities so that we don’t have to review the things where capabilities alone do the work. But even for those we still need (2) and (3) to be able to undo successful attacks.
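
      The acceptance rule in (2) can be sketched like this (hypothetical reproducer names; signature checks and transparency-log inclusion proofs are omitted for brevity):

```python
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

def verified(artifact: bytes, attestations: dict, trusted: set, quorum: int) -> bool:
    # Accept the update only if at least `quorum` of the user's chosen
    # reproducers attest to exactly this artifact hash. A real client would
    # also verify each attestation's signature and its presence in an
    # append-only transparency log.
    h = digest(artifact)
    return sum(1 for r in trusted if attestations.get(r) == h) >= quorum

binary = b"pretend this is the built package"
h = digest(binary)
attestations = {"alice": h, "bob": h, "mallory": "0" * 64}
print(verified(binary, attestations, trusted={"alice", "bob"}, quorum=2))    # True
print(verified(binary, attestations, trusted={"mallory", "bob"}, quorum=2))  # False
```

      A tampered binary changes the hash, so no quorum of honest reproducers will match it, and the updater refuses to install.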

      1. 4

        > But they are not sufficient, because a) capabilities still by definition allow harm within the capability

        This is true, but capability systems enforce two important principles:

        • The principle of least privilege.
        • The principle of intentional use.

        Most of the attention goes to the first of these, because reducing privileges is an obvious win, but I think a lot of the real value comes from the second. In a capability system, it’s not sufficient to be authorised to perform an action, you need to explicitly present the capability associated with that operation to authorise it. This prevents a lot of confused-deputy style attacks. The big Exchange vulnerability a year or so ago, for example, was the result of an intentionality violation: Exchange was able to write files anywhere but shouldn’t have exercised that right in the context of writing the files that unprivileged users could ask it to write. In a capability system, that write would have come with an authorising capability and if someone had passed the write-anywhere-in-the-FS capability to that code path it would have been an obvious bug.
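
        A minimal sketch of that intentionality property (illustrative Python, not how Exchange actually works): the deputy performs the write *through* a capability the request carries, so a write outside the caller’s directory is simply not expressible.

```python
store = {}

class DirCap:
    """Capability authorising writes under one directory prefix (illustrative)."""
    def __init__(self, prefix):
        self.prefix = prefix

    def write(self, name, data):
        if "/" in name or name in ("", ".", ".."):
            raise PermissionError("capability does not span directories")
        store[self.prefix + "/" + name] = data

def handle_request(cap, name, data):
    # The deputy writes with the caller's capability, not with any ambient
    # write-anywhere right of its own. Passing a broader capability to this
    # code path would be an obvious bug at the call site.
    cap.write(name, data)

handle_request(DirCap("/home/alice"), "notes.txt", "hi")
print(sorted(store))  # prints ['/home/alice/notes.txt']
```
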

        > (and think also about how to securely update the capability system)

        Why? What problems do you see capabilities introducing in secure updates?

        Completely agreed on points 2 and 3. There was work sponsored by the Linux Foundation on package transparency a decade ago (reproducible builds, with every build published on a public Merkle tree so you could easily check that your packages were the same ones other people were installing, and the same ones you got from source builds of a particular package), and it seems to have gone nowhere.
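
        As a toy sketch of the Merkle-tree part (illustrative only; real transparency logs use a more careful leaf/node hashing scheme):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each package record, then combine hashes pairwise, level by
    # level, until a single root remains.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd node out
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"pkg-a 1.0", b"pkg-b 2.3", b"pkg-c 0.9"]
root = merkle_root(records)
# Publishing `root` lets anyone verify, with a short inclusion proof, that
# the package they installed is the one everyone else sees in the log.
print(root.hex())
```
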

        1. 2

          Capabilities don’t introduce any problems in updates. They just don’t help with detecting whether a new binary is malicious. (Constructing a capability for that would require knowing what a future update contains before it is made.)