1. 13
  1.  

    1. 12

      A language that explicitly targets privileged and networked services shouldn’t rely on the hope that “some safety” will be enough.

      UAFs (use-after-frees) are not a small problem; they make up a huge portion of real-world memory safety vulns. Removing spatial vulns with bounds checking is huge, don’t get me wrong, but I would still be uncomfortable running a privileged Hare service.

      I’m a bit pessimistic though. I think hardened allocators are quite interesting and, if one were to simply remove all spatial vulns, I admit that I am curious to see what practical exploitation ends up looking like.

      1. 7

        Agreed; any language without some form of automatic memory management (whether GC or ARC) is a hard pass for me.

      2. 1

        but I would be uncomfortable with a privileged Hare service.

        I think in the context of their helios microkernel I would be more comfortable with it.

        1. 3

          Possibly, yeah. The reality is that we’re in new territory in terms of “what if we actually put a bit of effort into security”. We’ve had maybe ~30 years of mainstream software even considering this to be a thing worth pursuing. That’s not enough time to answer questions like “what if we solved spatial but not temporal safety?” or “what if we had a microkernel with a capabilities system, and ‘mostly’ safe userland?”.

          Again, academically I’m quite curious. In terms of caring about users, I wish we’d wait to answer those questions until we actually had safe systems.

        2. 2

          This isn’t quite “boil the ocean” territory, but the water is getting warmer…[1]


          [1] I realize that this idiom is feeling less and less implausible as the planet heats up.

    2. 5

      Hm, no mention of integer overflow. A new language seems like an opportunity to do something about that, and to report on the performance cost.

      As far as I know, the philosophy of the language isn’t to be fast at all costs, so it seems worthwhile.

      1. 4

        I did a silly μbenchmark checking integer overflow years ago and found that it’s not that much overhead, at least on the x86 architecture, but it may cause larger code to be generated.

        1. 2

          Microbenchmarks are somewhat misleading for this kind of thing because you are most likely to see overhead from additional branch predictor state. If the core is doing the right thing, branch-on-overflow instructions are statically predicted not taken and consume branch predictor state only if they are ever taken. I think x86 and 64-bit Arm chips all do this. RISC-V can’t because they can’t differentiate this kind of branch (they don’t have flags and so the branch looks like any other branch and so they can’t tell that this is a really unlikely one without additional hinting).
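          To make the codegen concrete, here is a minimal sketch of a checked add using the GCC/Clang `__builtin_add_overflow` builtin (the helper name `checked_add` is my own, not from Hare). On x86-64 the overflow branch compiles to roughly an `add` followed by a `jo`, which is the statically-predicted-not-taken branch described above:

          ```c
          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          /* Hypothetical checked add: returns false (cold path) on overflow.
           * __builtin_add_overflow is a GCC/Clang builtin; on x86-64 the
           * check typically becomes a single jo after the add. */
          static bool checked_add(int32_t a, int32_t b, int32_t *out)
          {
              if (__builtin_add_overflow(a, b, out))
                  return false; /* overflow: rarely-taken branch */
              return true;
          }

          int main(void)
          {
              int32_t r;
              printf("%d\n", checked_add(1, 2, &r));         /* prints 1, r == 3 */
              printf("%d\n", checked_add(INT32_MAX, 1, &r)); /* prints 0: caught */
              return 0;
          }
          ```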

      2. 2

        I have yet to play around with Hare, but what I can find in the spec under Additive arithmetic is that signed integer overflow is at least not undefined behavior:

        6.6.37.3 If an operation would cause the result to overflow or underflow the result type, it is truncated towards the least significant bits in the case of integer types, and towards zero in the case of float types. In the case of signed types, this truncation will cause the sign bit to change

        1. 3

          Well, kind of. The semantics of arithmetic operations in the presence of overflow are well defined. The semantics of arbitrary code that does arithmetic may not be. If some code does a+b on two positive values and the result is negative, then it will almost certainly do the wrong thing unless it’s explicitly checking for overflow.
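          A small sketch of the distinction, emulating Hare’s defined truncation in C via unsigned arithmetic (the cast back to `int32_t` is implementation-defined pre-C23, but two’s-complement on mainstream compilers; `wrapping_add` is my own name):

          ```c
          #include <stdint.h>
          #include <stdio.h>

          /* Hare-style defined overflow: truncate toward the least
           * significant bits. In C, signed overflow is UB, so we go
           * through uint32_t, where wraparound is well-defined. */
          static int32_t wrapping_add(int32_t a, int32_t b)
          {
              return (int32_t)((uint32_t)a + (uint32_t)b);
          }

          int main(void)
          {
              /* The operation itself is well-defined... */
              int32_t s = wrapping_add(INT32_MAX, 1); /* INT32_MIN */
              /* ...but positive + positive gave a negative result, so the
               * surrounding code is still wrong unless it checks: */
              if (s < 0)
                  puts("overflowed; only safe because we checked");
              return 0;
          }
          ```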

        2. 2

          Ah thanks for the correction! Missed that

      3. 2

        It is mentioned in passing, in the “undefined behavior” section:

        For a start, much of the behavior that C leaves undefined is defined by Hare. For instance, signed overflows are defined by the specification.