1. 9

  2. 4

    Swift, notably, always makes wrapping explicit: ordinary arithmetic traps on overflow, and wrapping requires dedicated &-prefixed operators.

    Int.max + 1 will trap, whereas Int.max &+ 1 will wrap.
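
    For example (the trapping line is commented out; as a constant expression the compiler rejects it outright):

    ```swift
    let wrapped = Int.max &+ 1    // wraps around to Int.min
    print(wrapped == Int.min)     // true
    // let trapped = Int.max + 1  // the compiler rejects this constant overflow;
    //                            // with runtime operands it traps at runtime instead
    ```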

    1. 2

      Trapping is fine in a desktop application, but in any kind of OS or server use case, fail-stop behaviour is far from ideal. It’s much better than introducing most kinds of security vulnerability, but it’s still a denial-of-service vulnerability.

      Smalltalk copied Lisp’s implementation for big integers. Small integers are stored in a machine word and are one bit smaller than the machine word. On overflow, you promote to a big integer and store the pointer in the word. On modern hardware, the best encoding for this is to make the low bit 0 if it’s a small int (so most arithmetic just requires shifting one operand and not masking the other) and 1 for pointers (because immediate addressing lets you subtract one in a load / store instruction). You can optimise sequences of operations by just collecting the carry flag and redoing if any of them overflowed. This is very efficient on every vaguely modern ISA except RISC-V.
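
      A minimal sketch of that encoding, in Swift purely for illustration (a real implementation works on raw machine words, and the function names here are made up):

      ```swift
      // Low bit 0: an immediate small int, stored shifted left by one.
      // Low bit 1: a pointer to a heap-allocated big integer.
      func tag(_ v: Int) -> UInt { UInt(bitPattern: v << 1) }
      func untag(_ w: UInt) -> Int { Int(bitPattern: w) >> 1 }

      // Addition needs no untagging at all: (a << 1) + (b << 1) == (a + b) << 1,
      // so the machine add of two tagged words is already correctly tagged, and
      // overflow of that add is exactly the "promote to a big integer" signal.
      func addTagged(_ a: UInt, _ b: UInt) -> UInt? {
          let (sum, overflow) = Int(bitPattern: a).addingReportingOverflow(Int(bitPattern: b))
          return overflow ? nil : UInt(bitPattern: sum)   // nil: caller promotes
      }

      // Multiplication is where "shift one operand" comes in:
      // (a << 1) * b == (a * b) << 1, so untagging one operand leaves the
      // product already tagged.
      func mulTagged(_ a: UInt, _ b: UInt) -> UInt? {
          let (prod, overflow) = Int(bitPattern: a).multipliedReportingOverflow(by: untag(b))
          return overflow ? nil : UInt(bitPattern: prod)
      }
      ```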

      The reason that low-level languages don’t like this is that it means that any arithmetic operation can cause memory allocation (and require deallocation). That’s a terrible idea in C. In C++ you could implement it as a separate type and then at least you only had to know that Integer might allocate on operations. Even then, it’s a bit annoying when you interop with other code because you have to handle the overflow case (or, rather, the myInteger > sizeof(T) case).
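
      A sketch of that separate-type approach in Swift (hypothetical: Swift has no standard big-integer type, so the .big payload is a placeholder):

      ```swift
      // Arithmetic stays on an inline Int until it overflows, then promotes
      // to a heap representation; only values of this type can allocate
      // during arithmetic.
      enum Integer {
          case small(Int)
          case big([UInt])   // stand-in for an arbitrary-precision value

          static func + (lhs: Integer, rhs: Integer) -> Integer {
              if case let (.small(a), .small(b)) = (lhs, rhs) {
                  let (sum, overflow) = a.addingReportingOverflow(b)
                  if !overflow { return .small(sum) }
              }
              // Every other path may allocate, which is exactly the property
              // that makes this design unattractive in plain C.
              fatalError("big-integer path not sketched")
          }
      }
      ```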

      1. 6

        There are also functions like Int.addingReportingOverflow(_:), which return both the sum and a Boolean indicating whether overflow occurred, etc.
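
        For example:

        ```swift
        let (sum, overflow) = Int.max.addingReportingOverflow(1)
        // sum == Int.min (the wrapped partial value), overflow == true
        ```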

        I agree that arbitrarily big integers are usually better, definitely so in non-low-level languages, and it’s strange that they aren’t the default there.

        1. 4

          Considering how common the carry bit is across architectures, it’s surprising that so many languages never exposed it, or an idiomatic wrapper for it.
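
          The closest thing most languages offer is an overflow-reporting add; chaining it propagates a carry through a multi-word addition, which is what the hardware carry flag gives you in one instruction per word. A sketch using Swift's standard FixedWidthInteger operations (addWide is a made-up name):

          ```swift
          // Multi-precision addition, least significant word first: each word's
          // carry-out feeds the next word, emulating an add-with-carry loop
          // without direct access to the flag.
          func addWide(_ a: [UInt], _ b: [UInt]) -> (sum: [UInt], carryOut: Bool) {
              precondition(a.count == b.count)
              var sum = [UInt]()
              var carry: UInt = 0
              for (x, y) in zip(a, b) {
                  let (s1, o1) = x.addingReportingOverflow(y)
                  let (s2, o2) = s1.addingReportingOverflow(carry)
                  sum.append(s2)
                  carry = (o1 || o2) ? 1 : 0   // at most one can be true per word
              }
              return (sum, carry != 0)
          }
          ```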

          1. 3

            Generally, I believe, it’s for one of two reasons:

            • Low-level languages don’t want to limit portability by exposing it (RISC-V, for example, doesn’t have any equivalent of the carry flag, so you need a fairly costly instruction sequence. MIPS doesn’t either, though MIPSr6 added an instruction that just calculates the carry flag).
            • High-level languages don’t want to expose low-level details of the machine to programmers because abstracting away low-level details is one of the main goals of a high-level language.

            I believe Pony and Rust both have support for this in their standard integer types, and some C compilers expose it as an intrinsic. It’s generally common in new languages using LLVM as the back end, because LLVM IR has overflow-checked intrinsics, so it’s trivial for any language using LLVM to expose them and let LLVM worry about how to codegen them for any given target.

        2. 1

          most arithmetic just requires shifting one operand and not masking the other

            More likely, you shift the result. This means you do catch some extraneous overflow, but it also means you can reuse the inputs.
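
            Concretely, with the tagging scheme from the parent comment: (a << 1) * (b << 1) == (a * b) << 2, so you can multiply the tagged words directly and shift the result right once to re-tag it. A hedged sketch (names are made up):

            ```swift
            // "Shift the result" variant: both inputs stay tagged and reusable.
            // Slightly conservative: the check also trips when (a * b) << 2
            // overflows even though (a * b) << 1 would still have fit.
            func mulTaggedShiftResult(_ a: UInt, _ b: UInt) -> UInt? {
                let (prod, overflow) = Int(bitPattern: a)
                    .multipliedReportingOverflow(by: Int(bitPattern: b))
                return overflow ? nil : UInt(bitPattern: prod >> 1)
            }
            ```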

        3. 1