1. 28

  2. 6

    So, to recap:

    a) gtk+3 with clang was broken

    b) ..due to a missing -fvisibility flag that both gcc and clang support

    c) ..because the autoconf check was written to use a gcc extension that clang doesn’t support (nested functions)
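
    For the curious, the failure mode is easy to reproduce. A minimal sketch follows (not the actual gtk+ check, just an illustration): GCC compiles nested functions as a GNU C extension, while Clang rejects them outright, so a configure test program containing one fails under Clang for reasons unrelated to whatever the test was probing for.

    ```c
    /* gcc -c conftest.c succeeds; clang -c conftest.c fails with
     * "function definition is not allowed here". A configure script
     * that only looks at the exit status concludes the flag under
     * test is unsupported. */
    int main(void)
    {
        int nested(int x) { return x * 2; } /* nested function: GCC-only extension */
        return nested(0);
    }
    ```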

    Sounds like yet another great case against autoconf to me. In the x86 monoculture of today, making a cross-platform autoconf script seems as error-prone as just writing a cross-platform makefile directly. Particularly since you still need to test your autoconf scripts on all your platforms.

    1. 3

      The autoconf check was probably written in a way that introduced the nested function unintentionally.

      I’d put the blame on gcc adding a non-standard extension (nested functions in C) and enabling it by default.

      1. 3

        Yes, all bugs are unintentional. What’s your point? :) (The article actually confirms your first sentence, so there’s no need for the ‘probably’.)

        Tools doing non-standard things is precisely the problem autoconf was “designed” for. So your reasoning seems like rationalization.

        Feel free to blame one turd over another. Other shit can be shit; autoconf is still shit.

        In fact, OP graphically shows that autoconf was shit even in the 90s when a bazillion flavors of unix made it not utterly irrelevant. It tries to determine features a platform has using the potentially non-standard tools on that platform. Utter circular reasoning with no solid ground anywhere.

        Summary: autoconf was a crap solution to a poorly demarcated problem that no longer exists.

        1. 4

          The problem does exist, if you’re not living in a Linux monoculture. If you support *BSD+Darwin+illumos, testing whether a little piece of code compiles to determine a config option is sometimes necessary. (Yeah, you can often just ifdef for the OS… but that doesn’t scale. Damn, that reminds me of feature detection on the web :D)

          Thankfully, Meson exists.
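
          For instance, a compile test in Meson is a few declarative lines. A sketch, assuming a C project (the kqueue probe and the HAVE_KQUEUE define are my own illustration, not from the article):

          ```meson
          # Ask the C compiler whether a representative snippet compiles,
          # instead of hardcoding an #ifdef per OS.
          cc = meson.get_compiler('c')

          kqueue_src = '''#include <sys/event.h>
                          int main(void) { return kqueue(); }'''

          if cc.compiles(kqueue_src, name: 'kqueue')
            add_project_arguments('-DHAVE_KQUEUE', language: 'c')
          endif
          ```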

          1. 4

            You don’t need to test for anything; you need to write your makefile targets assuming specific things are available for specific targets. BSD has kqueue and illumos has event ports, so you don’t need to check for either: you simply assume kqueue is available on BSD and event ports are available on illumos. Conveniently, this also lets you cross-compile code easily, as nobody, and I mean nobody, writes autoconf tests that work correctly when the target system is different from the build system (not to mention the host system).

            If you must choose between alternative pieces of technology that may or may not be there, simply pick the best one as the default. Do not test for it; simply use it and make the build fail if it’s not there. Let the user choose the alternate technology by setting a make variable or whatever; do not test and choose for him.

            You can even have a configure script for convenience and compatibility with higher-level build automation tools that expect GNU autoconf/automake source code, but the configure script must not do any probing; it must simply set some variables (the defaults I mentioned earlier), or whatever the user chooses. These then remain “baked in”, so the programmer working on the project doesn’t have to remember them each time he types make.

            But whatever you do, don’t probe. It’s wrong.
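
            To sketch that non-probing style in plain make (the TARGET_OS variable and the file names are mine, purely illustrative):

            ```make
            # Set by the user, or baked in by a thin configure script; nothing
            # here runs a test on the build machine, so cross-compiling is trivial.
            TARGET_OS ?= linux

            # Static assumptions per target: BSD has kqueue, illumos has event ports.
            EVENT_SRC_linux   = event_epoll.c
            EVENT_SRC_freebsd = event_kqueue.c
            EVENT_SRC_illumos = event_ports.c

            # Use the mapping unconditionally; an unknown target fails loudly
            # instead of silently falling back to something else.
            EVENT_SRC = $(or $(EVENT_SRC_$(TARGET_OS)),$(error unknown TARGET_OS "$(TARGET_OS)"))
            ```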

      2. 3

        There’s also a larger lesson here about failure modes. When something breaks, is it obvious to the user? Or does the tool report success and leave you to discover the damage later? A lot can go wrong when you try to do “smart” error recovery.

        1. 1

          It’s the same lesson as the rule that automated tests should never just check “does this thing throw an exception when passed invalid input?”. They must always check “does this thing throw the specific expected exception when passed invalid input?”. Otherwise the test might pass for any number of wrong reasons.

          So too autoconf should be checking whether a compile test failed for reasons specific to the thing under test – not just whether there was a failure of any kind at all. Of course you can only do that in either a boil-the-complexity-ocean way (by parsing the compiler’s error output, which requires support for specific compilers (in specific versions…)) or a boil-the-literal-ocean way (for every test, run two compiler checks: one that tests the null hypothesis, then one that tests the actual hypothesis).
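
          The second variant is at least cheap to sketch in shell (file names and messages invented here): compile the same test program once without the flag under test, so a failure that has nothing to do with the flag gets reported instead of being misread as “flag unsupported”.

          ```sh
          printf 'int main(void) { return 0; }\n' > conftest.c

          # Null hypothesis: the test program must compile without the flag.
          if ! $CC $CFLAGS -c conftest.c 2>/dev/null; then
              echo 'error: test program does not compile at all; check is invalid' >&2
              exit 1
          # Actual hypothesis: does it also compile with the flag?
          elif $CC $CFLAGS -fvisibility=hidden -c conftest.c 2>/dev/null; then
              echo 'have -fvisibility=hidden'
          else
              echo 'no -fvisibility=hidden'
          fi
          ```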

        2. 1

          Mostly, except there is no evidence that the culprit was -fvisibility specifically. All they found out was that the configure script detected different flags under GCC vs. Clang despite the compilers’ support not differing.

        3. 3

          Interesting article, but it’d be super interesting to know how dropping RTTI and exception support led to the hang (or whether those weren’t the culprit and it was some other compiler flag omitted from that snippet).