1. 24

  2. 8

    This topic is close to my heart. Some random trivia:

    1. Microsoft’s first C compiler with a 64-bit type is Visual C++ 2.0 from 1994, but the operating system needed support earlier. Before compiler support, the OS provided functions such as LargeIntegerAdd (user mode) or RtlLargeIntegerAdd (a kernel export).
    2. The LARGE_INTEGER type predates compiler 64-bit support and allowed the compiler to assign values, pass them as parameters, etc., despite not having arithmetic support.
    3. When unsigned 64-bit support arrived, the common Windows types of today (ULONGLONG, ULARGE_INTEGER) weren’t defined yet; the type was called DWORDLONG instead. It still exists, but is almost never seen in the wild today.
    4. It’s actually a bit of a stretch to call 64-bit integer support compiler support, since a lot of it is detecting a 64-bit operation and sending it to a library function to implement, thereby streamlining the above dance. Those functions use architecture-specific calling conventions and can only be implemented in assembly. See things like _aulldiv (auto(?) unsigned long long divide).
    5. There’s quite a mismatch between what the CPU can do and what C exposes. An Intel CPU has a native instruction to divide a 64-bit value by a 32-bit value, but this is inexpressible in C (which forces both operands to the same type). That instruction also returns both the quotient and the remainder, but C can’t express an intention to capture both; the compiler has to inspect the code and infer it.
    6. Everything old is new again. Now the OS has an UnsignedMultiply128 function and the compiler has a _umul128 intrinsic. All we’re waiting for now is a language cleanup to provide a native type and call them transparently.
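
    The carry-propagation dance a routine like LargeIntegerAdd had to do is easy to sketch. This is a simplified stand-in, not the real implementation (MyLargeInteger and my_large_integer_add are made-up names; the actual Windows LARGE_INTEGER is a union):

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Simplified stand-in for Windows' LARGE_INTEGER: two 32-bit halves. */
    typedef struct {
        uint32_t LowPart;
        int32_t  HighPart;
    } MyLargeInteger;

    /* Before the compiler could emit 64-bit arithmetic, a library routine
     * had to add the low halves, detect the carry, and propagate it into
     * the high halves by hand. */
    static MyLargeInteger my_large_integer_add(MyLargeInteger a, MyLargeInteger b)
    {
        MyLargeInteger r;
        r.LowPart  = a.LowPart + b.LowPart;
        r.HighPart = a.HighPart + b.HighPart + (r.LowPart < a.LowPart); /* carry out */
        return r;
    }

    int main(void)
    {
        MyLargeInteger a = { 0xFFFFFFFFu, 0 };  /* 2^32 - 1 */
        MyLargeInteger b = { 1u, 0 };
        MyLargeInteger s = my_large_integer_add(a, b);
        printf("%ld %lu\n", (long)s.HighPart, (unsigned long)s.LowPart);  /* 1 0 */
        return 0;
    }
    ```
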
    1. 5

      Oh, and let’s not forget “long long” could mean 72-bit on your 9-bit-byte system, of course. That’s why stdint.h exists… except, oh, C89.

      That’s a very important point this post makes, and it seems to get overlooked. Simply put, if your program depends on an integer with a known size, it is impossible to write that code correctly in pure C89. Whenever I write C, I assume headers like stdint.h are available, even when they technically aren’t C89-compliant.
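
      For what it’s worth, C89 code can often recover a known-size type without stdint.h by probing limits.h at preprocessing time. A minimal sketch under that assumption (my_uint32 is a made-up name; sufficiently exotic platforms will hit the #error):

      ```c
      #include <limits.h>
      #include <stdio.h>

      /* C89-compatible way to find an exactly-32-bit unsigned type
       * without stdint.h: test the <limits.h> maxima. */
      #if UINT_MAX == 0xFFFFFFFF
      typedef unsigned int my_uint32;
      #elif ULONG_MAX == 0xFFFFFFFF
      typedef unsigned long my_uint32;
      #else
      #error "no exactly-32-bit unsigned type found"
      #endif

      int main(void)
      {
          printf("%d\n", (int)(sizeof(my_uint32) * CHAR_BIT));  /* 32 */
          return 0;
      }
      ```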

      1. 4

        Why use C89 in 2021? That’s a 32 year old standard. We have C17 now.

        1. 9

          Especially with C89 you have a huge variety of compilers available allowing you to run your code on nearly every architecture as well as checking your code for maximum standard compliance (some compilers are more liberal than others).

          With any C standard that is >= C99 you are effectively forced to use clang or gcc.

          1. 4

            Can you give an example of an architecture that is only supported by a C89 compiler?

            1. 3

              MS Visual C++ only began to add C99 support in Visual Studio 2013, and I’m not sure they support anything newer. So you’re no longer limited to C89 for Windows code these days, but there’s a long tradition of “keep your expectations very low if you want to write C and be portable to the most popular desktop OS”.

              1. 3

                According to this blog post they were working on C11 and C17 support last year. I don’t know how far they are with things they listed as missing.

          2. 4

            Later versions of the C standard are a lot less portable and a lot more complex. I use C89 when I want to write software that I know will be portable across many different machines and will work with nearly any C compiler. IMO, it doesn’t really make sense to target anything later than C11; C17 doesn’t make enough notable and useful changes to warrant using it.

            1. 1

              Some old code bases, especially in the embedded space, are still written for it.

            2. 3

              bsandro from IRC mentions this clump of usenet posts about long long.

              1. 4

                A lot of neat stuff in the thread.

                (But what about 128-bits: I’d be pleased to have a 128-bit type as well … however, a pragmatic view says: we have the 64-bit problem right now, we’ve had it for several years; we won’t have the 128-bit problem for quite a few years. Based on the typical 2 bits every 3 years increase in addressing, a gross estimate is: 32 bits/2 bits * 3 years = 48 years, or 1992+48 = 2040. Personally, I’m aggressive, so might say year 2020 for wanting 128-bit computers …

                I’m excited for the imminent arrival of long long long.

                1. 9

                  One of my favourite error messages:

                  $ cat t.c
                  long long long l;
                  $ cc -c t.c
                  t.c:1:11: error: ‘long long long’ is too long for GCC
                  
                2. 3

                  As someone who was alive and working (at Sun) through much of this time, the only thing I would add (that’s relevant to the OP) is that Microsoft was not known to engage with the community of the time (you might recall that their first internet-ready OS was called Windows 95).

                3. 3

                  People joke about how we’re now going to need 128-bit integers…well, anyone who works with IPv6 addresses or UUIDs loves 128-bit integers.

                  1. 3

                    Yeah, but IPv6 addresses and UUIDs are really opaque blobs of bits. You’re not doing arithmetic on them. Bitmasking for IPv6, maybe.

                    1. 2

                      You can do 64-bit multiplication without UB on overflow if you have 128-bit integers. Then you can check for overflow more easily and without UB.
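
                      A sketch of that widening trick, assuming GCC/Clang’s unsigned __int128 extension on a 64-bit target (mul_overflows_u64 is a made-up helper name):

                      ```c
                      #include <stdio.h>
                      #include <stdint.h>

                      /* Detect uint64_t multiplication overflow by doing the
                       * multiply in 128 bits and checking the high half. */
                      static int mul_overflows_u64(uint64_t a, uint64_t b, uint64_t *out)
                      {
                          unsigned __int128 wide = (unsigned __int128)a * b;
                          *out = (uint64_t)wide;
                          return wide > UINT64_MAX;
                      }

                      int main(void)
                      {
                          uint64_t r;
                          printf("%d\n", mul_overflows_u64(UINT64_MAX, 2, &r));        /* 1 */
                          printf("%d\n", mul_overflows_u64(1u << 20, 1u << 20, &r));   /* 0 */
                          return 0;
                      }
                      ```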

                      1. 4

                        This seems like a strange way to check for overflow. Frankly, C should just have built-in checked arithmetic intrinsics. Rust got this right.

                        1. 1

                          I agree.

                          1. 1

                            gcc does have built-in checked arithmetic. Standard C doesn’t have add-with-overflow, but gcc alone is more portable than rustc.
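
                            A minimal sketch of those built-ins (GCC/Clang extensions; C23 has since standardized the same idea as ckd_add/ckd_mul in <stdckdint.h>):

                            ```c
                            #include <stdio.h>
                            #include <stdint.h>

                            int main(void)
                            {
                                /* __builtin_mul_overflow returns nonzero when the true
                                 * result does not fit in *r, instead of hitting UB. */
                                int64_t r;
                                if (__builtin_mul_overflow(INT64_MAX, (int64_t)2, &r))
                                    printf("overflow\n");
                                else
                                    printf("%lld\n", (long long)r);
                                return 0;
                            }
                            ```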

                            1. 2

                              Yeah, gcc and clang have intrinsics.

                              Fair point about Rust.

                              Just checked, and Zig also has intrinsics for this, so at least newer languages are learning from the lack of these in the C standard.

                              1. 1

                                Swift traps on overflow by default; if you want values to wrap, you use &+ &- etc operators.

                      2. 1

                        32-bit GCC doesn’t provide 128-bit integers, which complicates these things indeed.

                        1. 1

                          however, on an RV128I you just need a long ;)

                      3. 0

                        I’m gonna say it: this language sucks.