1. 16
  1.  

  2. 2

    (Ever used MSVC as a compiler for a POSIX target? I have.)

    I … I’m simultaneously horrified and intrigued. What did you do? And why?

    Such simplicity-by-ignoring-things leads people to think “well, why not plain makefiles, and get rid of autotools and CMake?” Your makefile would be simple, until it has to support multiple operating systems, at which point you’re basically reinventing autotools from first principles as you deal with, e.g., linking differences. At which point, why not use a proper build-system generator like CMake that actually is portable beyond POSIX too?

    Maybe this is a hot take, but as much as I like its (potential) portability, that’s not why I use cmake. For pure portability, this GNU Makefile, for instance, is pretty awesome and doesn’t dive too far into reinventing autotools from first principles. But I maintain my own CMakeLists.txt for that library just the same. The thing cmake really gets right for me is making the best debugging tools for each platform you care about just work. You generate a build system and you can use the native debugging tools.

    The article that you were (partially) responding to read like “turtles all the way down” to me. I think the only part of that one that I liked much was the notion that testing on weird machines is good for you.

    1. 2

      I … I’m simultaneously horrified and intrigued. What did you do? And why?

      Interix lets you use MSVC as cc. Yes, it is cursed, but the rest of Interix is surprisingly decent (it’s like a 2003 OpenBSD system… with PE binaries). If MS had taken it more seriously, it could probably have posed a serious threat in CS labs.

      I think the only part of that one that I liked much was the notion that testing on weird machines is good for you.

      I used to think this, but I wonder how true it really is, or whether you just end up chasing bugs that you run into exclusively on that machine because of its quirks. That, or proper static analysis/type systems would catch them anyway; for example, accessing a struct with a wrong field size is fatal on big endian, but how much of that can be caught by the compiler instead?

      1. 1

        accessing a struct with a wrong field size is fatal on big endian

        How? Also, how does one even access a struct with a wrong field size? Because in 30 years of experience with both big and little endian systems, I’ve never encountered this type of bug.

        1. 1

          Suppose you have

          #include <stdio.h>
          #include <stdint.h>
          
          struct foo {
              uint16_t x;
              uint16_t y;
          };
          
          void bar(int32_t *i)
          {
              printf("Value: %x\n", *i);
          }
          
          int main(int argc, char **argv)
          {
              struct foo xyzzy = { .x = 0xFF, .y = 0x00 };
              bar(&xyzzy.x);  /* wrong: passes a uint16_t * where an int32_t * is expected */
              return 0;
          }
          

          On a big endian platform:

          $ ./wrong 
          Value: ff0000
          

          On a little endian platform:

          $ ./a.out 
          Value: ff
          

          Obviously, GCC warns here, but I’ve definitely seen this occur before and hidden due to casting (gotta love C aliasing), especially with unions. Unions specifically will need some padding before.
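
          For instance, add a cast and the warning disappears while the endianness dependence remains (a minimal sketch, same struct as above; the values are made up):

          ```c
          #include <stdio.h>
          #include <stdint.h>

          struct foo {
              uint16_t x;
              uint16_t y;
          };

          void bar(int32_t *i)
          {
              printf("Value: %x\n", *i);
          }

          int main(void)
          {
              struct foo xyzzy = { .x = 0xFF, .y = 0x00 };
              /* The cast silences the incompatible-pointer warning even under
                 -Wall; the endianness dependence (and the strict-aliasing
                 violation) is still there. */
              bar((int32_t *)&xyzzy.x);
              return 0;
          }
          ```

          Compiles cleanly with gcc -Wall, and you only find out what it actually reads at runtime, per platform.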

          1. 1

            Yes, any C compiler should warn here because you are passing the wrong type to bar(). Also, your example doesn’t use unions so I’m not sure what you mean here by “[u]nions specifically will need some padding before.”

            I’ve been using C since 1990, and yes, code written to support K&R C (pre-ANSI) is very problematic, and even code written during the transitional period (say, to the mid-90s) can be dangerous, but these days with prototypes and better warnings, code like the above should be rejected (and personally, if you have to cast, you’re doing C wrong).

            1. 1

              Also, your example doesn’t use unions so I’m not sure what you mean here by “[u]nions specifically will need some padding before.”

              The other example is going to be stuff like:

              union foo {
                  uint8_t u8;
                  uint16_t u16;
              };
              

              It is very easy to end up with a pointer, or a read, through the wrong field. On LE, the LSB (I forgot bit numbering rules after years of reading IBM documentation, so I might mean MSB) is always the first byte, but on BE the position of the LSB varies with the field’s size, so it’s hard to predict what you’ll get. Especially because the padding could just be zeroes! (Brief, so no code example, because I’m tired and up too late.)
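
              Actually, here’s a minimal sketch after all (my values are made up, same union as above): write through u16, read through u8, and which byte you get depends on endianness.

              ```c
              #include <stdio.h>
              #include <stdint.h>

              union foo {
                  uint8_t  u8;
                  uint16_t u16;
              };

              int main(void)
              {
                  union foo f;
                  f.u16 = 0x1234;
                  /* u8 aliases the first byte of u16: the low byte (0x34) on
                     little endian, the high byte (0x12) on big endian. */
                  printf("u8 = %#x\n", f.u8);
                  return 0;
              }
              ```

              Same source, different answer per platform, and no warning from the compiler either way.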

              I’ve been using C since 1990, and yes, code written to support K&R C (pre-ANSI) is very problematic, and even code written during the transitional period (say, to the mid-90s) can be dangerous, but these days with prototypes and better warnings, code like the above should be rejected (and personally, if you have to cast, you’re doing C wrong).

              People say this until they realize how much aliasing is going on under the covers in, e.g., language runtimes, which is where my opinions were formed. The reality is that C allows this (as UB) because of all the code abusing it.