1. 21

  2. 3

    This phenomenon is amusing the first time you encounter it. But after running into it over and over, in multiple languages, it stops being funny and becomes a major preoccupation. It is high time we investigated its fundamental causes. Why do programming language features that individually seem sensibly designed have such unexpected interactions when put together? Perhaps there is something wrong with the process by which language features are usually designed.

    1. 2

      So what are your thoughts there?

      1. 3

        Doesn’t this behavior seem common to many/most software systems? Initially, systems are conceptually simple and consistent, but over time they accumulate ad hoc extensions that introduce unexpected complexity. Those extensions cause unexpected behavior and possibly bugs.

        Programming languages seem to have stricter backwards-compatibility requirements than many other software projects, so it makes sense that mistakes would accrue over time.

        1. 1

          Backwards compatibility is only a manifestation of a more fundamental problem. General-purpose programming languages are meant to be, well, general-purpose, i.e., to address a very large space of use cases that you cannot possibly hope to enumerate. Designing language features based on ad hoc use cases is a mistake.

        2. 1

          Design features of general-purpose programming languages based on general principles, not use cases. Make sure that your principles neither (0) contradict each other, nor (1) redundantly restate each other. This is more likely to lead to orthogonal language features.

          By definition, use cases are concrete and specific. They are useful as guidelines for designing software that addresses concrete and specific needs, and is unlikely to be used in situations that you cannot foresee in advance. If a user comes up with a gross hack to use your software for something else, and ends up burning themselves, you can rightfully tell them “Well, that is not my problem.”

          However, a general-purpose programming language does not fit the above description. By definition, the ways in which a general-purpose programming language can be used are meant to be limitless, but you can only imagine finitely many use cases. A general principle has the advantage that you can apply it to a situation that only arose after you stated the principle.

        3. 2

          One would imagine that the superior way would then be to make extending the language as easy as possible. In other words, Lisp. Every Lisp user is a potential Lisp developer (wink). The extensions would compete against each other like regular libraries do, and the cream would rise to the top.

          But the actual effect (at least the way the Lisp community currently is) seems to be that, since extending the language is so easy, everybody just extends it to their own liking, and centralized improvements that everyone adopts are rare to nonexistent. Nobody codes in Lisp, but in Lisp+extension set #5415162.

          Or perhaps it just has too many parentheses. Pyret might show whether that’s the problem.

          1. 2

            This just pushes the problem onto the user community. A programming language needs a vision, and a vision needs a visionary.

          2. 1

            The problem isn’t the features; it’s that people expect to use something as complex as a programming language without a single bit of reading.

            Nobody expects to be able to just waltz up to a bridge building project and play around without knowing anything about engineering. Yet people think that Python should just work exactly the way they imagine it to work in their head.

            1. 1

              > it’s that people expect to use something as complex as a programming language without a single bit of reading.

              The problem you mention is very real too, but it is not fair to blame it only on the language users. Programming languages are designed in a way that makes it difficult to learn all their intricacies. Oftentimes, even the language designers are not aware of the consequences of their own designs. Features are conceptualized by their inventors exclusively in operational terms (i.e., “How do we desugar this into smaller imperative steps?”), and not enough thought is put into the question “What does this actually mean?”

              Try picking arbitrary pairs (X, Y), where X is a programming language and Y is a feature in X not shared with many other languages. Enter X’s IRC channel and ask why Y was designed the way it is. Count how many times they actually give you a design rationale vs. how many times they reply as if you had asked how Y works. And, when they do give a design rationale, count how many times it looks like a post hoc rationalization of the prior design.

              1. 3

                The problem is that people have a shallow, surface-level understanding of two features; then, when they combine them, the features behave in a way that you can only understand if you have a deeper understanding of each. Then they throw up their hands and say ‘WTF?’

                ‘WTFs’ in programming languages, a ‘meme’ that in my opinion really started with PHP, made a lot more sense when it was the deeper design of individual features that was batshit crazy. Now people are just applying it to every language they don’t like. Two features interact in a way that doesn’t make sense from a position of shallow understanding? Must be the language that’s broken.

                If you actually understand the features in the context of their design - which yes, might very well be syntactic sugar over a series of small imperative steps, what’s wrong with that? - then you’ll understand why they work the way they do.
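                A classic Python instance of this (my own example, not one from the linked page): each feature is simple on its own, and the combination only looks broken until you know that closures capture variables rather than values.

                ```python
                # Two individually sensible features: closures capture variables
                # (not values), and a for-loop rebinds a single loop variable.
                fns = [lambda: i for i in range(3)]
                print([f() for f in fns])  # [2, 2, 2], not [0, 1, 2]

                # With the deeper understanding, the standard fix is obvious:
                # bind the current value as a default argument, which is
                # evaluated at definition time.
                fns = [lambda i=i: i for i in range(3)]
                print([f() for f in fns])  # [0, 1, 2]
                ```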

                1. 1

                  > If you actually understand the features in the context of their design - which yes, might very well be syntactic sugar over a series of small imperative steps, what’s wrong with that? - then you’ll understand why they work the way they do.

                  Sure, you will understand the mechanics of how it works. But this will still give you zero insight into why the feature makes sense. It might turn out that the feature does not actually make the intended sense. Consider this answer by the ##c++ quote bot on Freenode:

                  • unyu: !perfect
                  • nolyc: The C++11 forwarding idiom (which uses a variadic template and std::forward) is not quite perfect, because the following cannot be forwarded transparently: initializer lists, 0 as a null pointer, addresses of function templates or overloaded functions, rvalue uses of definition-less static constants, and access control.

                  In other words, the feature’s behavior deviates from what its own users consider reasonable.

                  > what’s wrong with that?

                  The problem is that it is ad hoc. Memorizing lots of ad hoc rules does not scale.

                  Programming is a purposeful activity. When you program, you usually want to achieve something other than just seeing what the computer might do if you give it this or that command. The meaning of a language feature is what allows you to decide whether using the feature contributes towards your actual goal.

                  1. 4

                    I’m not at all defending C++ here. It’s a perfect example of where there really is a problem. But I don’t think that the Python examples on the linked page are like this at all. They’re basic interactions of features that make perfect sense if you understand those features beyond the basic surface level.

                    Some of them (e.g. ‘yielding None’) aren’t ‘WTFs’; they’re just bugs. Bugs that have been fixed! Some of them are basic features of Python, like default arguments being evaluated at function definition time. One of them is that you need to write `global a` to be able to modify a global variable called `a` within a function! That’s a good thing! That’s not a WTF. An entirely local statement like `a = 1` suddenly modifying global state because you added a new global variable would be a WTF.
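                    A quick sketch of those two behaviours, for anyone who has not run into them (standard Python, nothing hypothetical):

                    ```python
                    # Default arguments are evaluated once, at function definition
                    # time, so a mutable default is shared across calls.
                    def append_to(item, bucket=[]):
                        bucket.append(item)
                        return bucket

                    print(append_to(1))  # [1]
                    print(append_to(2))  # [1, 2]  (same list object, by design)

                    # Assignment in a function body creates a local variable;
                    # modifying a global requires saying so explicitly.
                    a = 0

                    def bump():
                        global a  # without this, `a = a + 1` raises UnboundLocalError
                        a = a + 1

                    bump()
                    print(a)  # 1
                    ```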

                    1. 1

                      Oh, okay. You have a point there.

          3. 3

            Expecting a reference-equality operator to return `True` when you have seemingly done no work to ensure the references are actually the same seems more like a “WTF Programmer” than a “WTF Programming Language”.

            It may or may not, and both outcomes seem perfectly reasonable to me.
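            For the curious, a minimal sketch of the kind of example under discussion; the results hinge on CPython’s small-integer cache, an implementation detail rather than a language guarantee:

            ```python
            a = 256
            b = 256
            print(a is b)  # True in CPython: ints from -5 to 256 are cached

            a = 257
            b = 257
            print(a is b)  # may be True or False, e.g. False on separate REPL lines
            print(a == b)  # True: `==` compares values, which is what was meant
            ```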

            Might as well call `System#nanoTime` four times and be surprised that the difference between the first two invocations is sometimes the same as the difference between the second two and sometimes not.

            1. 2

              Everything here is far more ‘WTF Programmer’ than ‘WTF Python’.

            2. 2

              Nothing at all here is surprising or strange. Guess what? You need to learn a language before you start writing code in it. If you combine complex features in complex ways, of course you’re going to get complex behaviour. What is surprising about that?

              1. 2

                I don’t think you get this shit with Scheme. I think a lot of this is due to an overly complicated structure and grammar.

                E.g. you don’t need statements and expressions. Just use expressions for everything.

                And if you want fancy syntax, make sure it desugars to something that makes sense in Scheme without too much macro wackiness.
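                For contrast, a small Python sketch of the statement/expression split being criticized here (illustrative only):

                ```python
                # In Python, `if` is a statement, so it cannot appear where an
                # expression is required; a separate conditional *expression*
                # exists to fill that role.
                sign = lambda x: "neg" if x < 0 else "non-neg"  # expression form: ok

                # lambda x: if x < 0: "neg"  # SyntaxError: a statement is not an expression

                # In Scheme, (if ...) is itself an expression, so one form serves
                # everywhere and the distinction never arises.
                print(sign(-3), sign(3))  # neg non-neg
                ```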

                1. 1

                  The emoji in this are pretty cringey…

                  1. -2

                    Holy cow, is this the new JavaScript?