1. 9

  2. 4

    There’s really no excuse for these being problems in Go. Almost all of them were solved in Lisp and copied in the ‘70s by Smalltalk. In both languages, the default integer representation is a pointer-sized word with one bit reserved to indicate whether it is an integer stored inline or a pointer to a heap-allocated big integer. Arithmetic that overflows triggers allocation of a big integer object on the heap. Fixed-size integers may exist for domains where they make sense (e.g. frame buffer values) but they are not the default.
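    The promotion trick can be sketched in Go itself with math/big (addAuto is a hypothetical name; real Lisp/Smalltalk runtimes do this with pointer tagging, not a wrapper function):

    ```go
    package main

    import (
        "fmt"
        "math"
        "math/big"
    )

    // addAuto adds two machine integers and, on overflow, falls back to a
    // heap-allocated big integer, mimicking the Lisp/Smalltalk default.
    func addAuto(a, b int64) *big.Int {
        sum := a + b // in Go, signed overflow wraps (defined behaviour)
        // The wrapped sum has the wrong sign iff the operands share a
        // sign and the result does not.
        if (a > 0 && b > 0 && sum < 0) || (a < 0 && b < 0 && sum >= 0) {
            return new(big.Int).Add(big.NewInt(a), big.NewInt(b))
        }
        return big.NewInt(sum)
    }

    func main() {
        fmt.Println(addAuto(1, 2))             // 3
        fmt.Println(addAuto(math.MaxInt64, 1)) // 9223372036854775808
    }
    ```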

    C does not do this because it is a systems language where memory management must always be explicit (arithmetic performed while holding a lock that the allocator needs to make forward progress must not itself allocate, or it would deadlock). Go is a language with a global garbage collector that will happily promote variables with automatic storage to the heap if it can’t prove that they don’t outlive the function, so it has no such constraint.

    1. 1

      To be fair, Smalltalk-80 (dunno about LISP) had no choice but to support bigints, because the range of native ints was only +/-32K, too small for real-world use. (FUN FACT: if a ST80 text view had more than 32KB of text or became more than 32K pixels tall, its performance cratered.)

      Go is low-level enough that its type system includes different sizes of ints (int8, int32, …) and it is strict about not allowing implicit conversions. It doesn’t jump to a bigger size on overflow. Automatic bignum conversion doesn’t fit that type system at all.
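      A quick illustration of both points (illustrative snippet):

      ```go
      package main

      import "fmt"

      func main() {
          var a int8 = 100
          var b int32 = 1000

          // fmt.Println(a + b) // compile error: mismatched types int8 and int32
          fmt.Println(int32(a) + b) // explicit conversion required

          // No jump to a bigger size on overflow: the int8 just wraps.
          a += 100
          fmt.Println(a) // -56
      }
      ```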

      I do think it would have been good for Go to check arithmetic overflow the way Swift and Rust do. But Go doesn’t seem to have a concept of debug vs optimized builds, so I guess there wouldn’t have been a convenient switch for disabling it if necessary for speed.
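      Checked arithmetic can be written by hand in Go today; math/bits exposes the carry. A minimal sketch (checkedAddU64 is a made-up helper, not a standard-library function):

      ```go
      package main

      import (
          "fmt"
          "math"
          "math/bits"
      )

      // checkedAddU64 reports whether a+b overflows uint64, roughly what
      // Swift's default + operator does for you implicitly.
      func checkedAddU64(a, b uint64) (uint64, bool) {
          sum, carry := bits.Add64(a, b, 0)
          return sum, carry != 0
      }

      func main() {
          if _, overflow := checkedAddU64(math.MaxUint64, 1); overflow {
              fmt.Println("overflow")
          }
      }
      ```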

      1. 2

        Actually, Swift keeps arithmetic checks even in optimised builds, unless you build with -Ounchecked (which AFAIK also turns off array bounds checking). If you build with the usual optimisation flags, you can opt into wrapping with the overflow operators such as &+ and &-. That is a design choice I wish Rust had made as well, since inadvertent integer overflow produces incorrect results, which is exactly what you don’t want in production code.

        1. 2

          > FUN FACT: if a ST80 text view had more than 32KB of text or became more than 32K pixels tall, its performance cratered

          Many years ago I discovered that both Apple’s and GNUstep’s implementations of NSTableView had the same bug: they used a 32-bit float to store each row’s offset from the origin. With a few tens of thousands of rows, the rounding errors added up and left big gaps between rows. Apple’s version used CGFloat and so was silently fixed when Apple moved to 64-bit Cocoa; in the GNUstep version, we moved to using double.

          Once this was fixed, both performed very well with well over a million rows (I had one row per cycle from an execution trace of our prototype CPU), and this became one of my tests for ‘is your GUI framework any good?’
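          The accumulation effect is easy to reproduce; a minimal sketch (not the NSTableView code, just the same arithmetic):

          ```go
          package main

          import (
              "fmt"
              "math"
          )

          func main() {
              // Accumulate a row offset using a height that is not exactly
              // representable in binary floating point.
              const rowHeight = 0.1
              var f32 float32
              var f64 float64
              for i := 0; i < 1_000_000; i++ {
                  f32 += rowHeight
                  f64 += rowHeight
              }
              // The float32 running offset drifts by hundreds of units;
              // the float64 one stays within a tiny fraction of one.
              fmt.Println(math.Abs(float64(f32)-f64) > 1.0)
          }
          ```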

      2. 1

        Having the UB sanitizer check for signed int overflow is useful. I once tried turning on unsigned overflow checks, but there were too many false positives: it turns out a lot of code, including libc++, treats unsigned wraparound as a feature, not a bug. (It isn’t UB in the spec: unsigned arithmetic is defined to wrap modulo 2^N.)

        Those “bigger than 64 bit numbers” that ping accepts are IPv6 addresses, which are 128 bits.
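        Handling such 128-bit values doesn’t need a bignum either; two 64-bit words and the carry from math/bits are enough (u128 is an illustrative type, not from the standard library):

        ```go
        package main

        import (
            "fmt"
            "math/bits"
        )

        // u128 is a 128-bit unsigned value stored as two 64-bit halves,
        // the natural shape for an IPv6 address or a >64-bit counter.
        type u128 struct{ hi, lo uint64 }

        // add returns a+b, propagating the carry out of the low word.
        func (a u128) add(b u128) u128 {
            lo, carry := bits.Add64(a.lo, b.lo, 0)
            hi, _ := bits.Add64(a.hi, b.hi, carry)
            return u128{hi, lo}
        }

        func main() {
            x := u128{0, ^uint64(0)} // 2^64 - 1
            sum := x.add(u128{0, 1}) // carries into the high word
            fmt.Println(sum.hi, sum.lo)
        }
        ```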

        1. 1

          This is brilliant. It’s exactly the kind of information programmers need to learn.