1. 7
  1.  

    1. 20

      It’s important to observe that the original context of the “Blub Paradox” was Paul Graham (noted Lisp enthusiast) explaining why Lisp is the best language and every other language is for shortsighted dummies who have not yet accepted Lisp into their hearts. Amending his definition to treat language power as a lattice, rather than a linear ordering, is being quite generous to the original essay. There isn’t even much self-awareness in there with respect to the possibility that more expressively powerful languages than Lisp might exist.

      I think that the “Blub Paradox” belongs in the wastebin. Some languages are more expressive than others, but in general they cannot be ranked against one another outside the context of a task you’re trying to accomplish; programming in-the-small is different from in-the-large, rapid prototyping is different from writing software that runs on a space shuttle or a medical device, a shader program has different constraints than software for a low-power microcontroller, and so on. When the stakes are low enough, any language will do, but with strong constraints comes strong selective pressure for a language with a corresponding shape. A language feature can be “weird” and unappealing from the perspective of working in one of those domains if it isn’t useful there, or causes more problems than it’s worth.

      Much like testing methodology, practitioners with strong disagreements about the correct approach are often coming from experience in quite different domains, which in turn colors their prejudices.

      1. 5

        I was expecting co-Blub paradox to mean something completely different. The idea that users of two languages might each consider the other to be Blub because there really is no well-ordered ranking of programming languages as more or less powerful. That is, LangB and LangC are both clearly more powerful than LangA, but along disjoint feature-design axes that make them impossible to compare to each other.

        1. 3

          “Lisp is more powerful than C or Java” - but Lisp preceded both (and had garbage collection before Java too!). Why isn’t everyone using Lisp? Maybe because Lisp actually isn’t that much more productive a language?

          (I discount Graham’s example here because during the dot-com boom you could make money programming anything)

          1. 2

            Well, to a great extent marketing. Sun, IBM, and now Oracle poured a bunch of money into promoting Java as the default language.

            Also, that time period was a sweet spot: machines had caught up in power enough to make Common Lisp practical, Common Lisp had a years-long head start on Java, and frankly Java sucked at the time (it still does in many ways, but it’s a lot better now, and the VM and runtime have had huge investment).

            1. 3

              That just begs the question. If Lisp really is more productive, then some company or companies would have made money by promoting it, extending it, training people in it etc.

              Lisp might be a better programming language, but Java (and C) are better software development languages.

              1. 1

                No, I don’t think it does beg the question. The Java ecosystem is much richer now, and some companies have made money from the Lisp ecosystem. The thing is, there’s not actually that much money in programming languages per se.

                In short: something can be much superior and languish in obscurity, AND eventually the network effects of being big can outweigh the benefits of using something incrementally better.

                1. 1

                  I dunno. There are a few Lisps that are based on the Java runtime, and they can take advantage of the widespread access to the JRE on many environments. If Lisp qua Lisp delivered on the increased productivity its boosters claim, I would expect to see more of these Lisps in production, people hiring for them, etc.

                  Even if a language makes a developer 10% more efficient, if hiring or replacing that developer also costs 10% more, there’ll be implicit opposition to using that language. That’s why I qualified with “systems development” in my previous comment - it’s not just the language, it’s whether you can get long-term support, hire replacements/outsource, etc.

                  1. 1

                    Ah, I was comparing Common Lisp specifically with Java in the late ’90s/early 2000s, as per Paul Graham’s essay.

                    Lisp as a family rather than as a specific language used to be very advanced, but I’d say that most things found in Lisps (macros, higher-order functions, automatic memory management, object-oriented programming, optional typing) have gone mainstream. It’s now more of a platform for embedding multiple paradigms and features in a single language (see Racket, but also things like Coalton or Shen on Common Lisp).

                    > If Lisp qua Lisp delivered on the increased productivity its boosters claim, I would expect to see more of these Lisps in production, people hiring for them, etc.

                    I think you’re assuming frictionless uptake of good technologies. Good ideas often simply don’t win. Pre-Columbian South Americans knew of the wheel and used it for toys but never made a wheeled vehicle of any kind. Europeans adopted the moldboard plough about 3,000 years after the Chinese, and about 400 years after Marco Polo visited China.

                    That said, I think that developer productivity and language “goodness” depends vastly more on tooling than anything else, and the ability to build good tooling is also more widely diffused than it was when Lisp weenies held their heads high.

                    1. 2

                      Thanks for an interesting discussion.

                      > I think that developer productivity and language “goodness” depends vastly more on tooling than anything else

                      I agree with you.

          2. 2

            Turing completeness tells us that every language can do every possible computation, and information theory tells us that choosing to encode some computations in fewer bits results in some other computations being expressed in more bits. Programming language power must then always be a tradeoff: a typechecker forbids some valid programs; a homoiconic tower of DRY abstractions must struggle to express some algorithms. But at the same time, not all computations are useful to us. So it doesn’t seem like “power” is the metric we should be striving for: what we actually want is some notion of “compatibility” - that a programming language be able to concisely and understandably express exactly the computations we care about, to the extent that it allocates the unreadable and convoluted parts of solution space to computations we don’t care about.

            Interesting article for sure. Makes me wonder how many facets of programming language power are not discovered yet by humans.
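            The encoding tradeoff above is just a pigeonhole argument, and a toy counting sketch makes it concrete (the 8-bit budget here is an arbitrary illustration, not tied to any real language or encoding):

```python
# Pigeonhole argument: there are only 2**k bitstrings of each
# length k, so at most 2 + 4 + ... + 2**(n-1) distinct programs
# can be written in fewer than n bits. A language that shortens
# the encodings of some programs must therefore lengthen others.
def strings_shorter_than(n):
    """Count the nonempty bitstrings with length < n."""
    return sum(2 ** k for k in range(1, n))

print(strings_shorter_than(8))  # 2 + 4 + ... + 128 = 254
```

            So at most 254 distinct “programs” fit under an 8-bit budget, no matter how cleverly a language assigns its short encodings; everything else necessarily costs more bits.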

            1. 8

              Another thing: Turing completeness doesn’t tell you anything about interacting with the real world.

              If a language doesn’t have some amount of C binary compatibility, you can’t call OS functions, write a driver in it, etc.

              And performance is a feature you can’t abstract away. You can do any computation, but if your computation is to decode video at 30fps and the program does it at 2fps, it didn’t really solve the problem.

              1. 3

                Strongly agree. IMO the mistake here is thinking of languages as a linear-ish series of “power” where higher is strictly better. Design doesn’t work that way, design is the art of making tradeoffs, often complicated ones that include things besides computational expressiveness. Human ergonomics/familiarity, ease of compilation/execution, runtime performance or environment, suitability towards working on specific tasks (a good DSL can often out-express a good general-purpose language in its domain), etc etc. It’s a wide and complex field with various local optima, not a single canonical path towards some abstract Enlightenment.

                Languages are tools. Any language that doesn’t change how you think isn’t worth learning, but learning how to use a laser cutter or 3D printer definitely changes how you think too. Heck, learning to weld changes how you think. It’s silly to say that a welder is more powerful than a 3D printer or vice versa, though.

                1. 2

                  > information theory tells us that choosing to encode some computations in less bits results in some other computations being expressed in more bits.

                  Nit: this is only true when no two shorter programs express the same computation, which I think is typically only true when shortening already-very-short programs.

                2. 2

                  > Writing a book is a tedious and demoralizing process, so if this is the sort of thing you’re excited about, please do let me know!

                  Well that’s a very sad way to kick things off!

                  I don’t buy this. If the Blub paradox were correct, surely this article would be impossible to comprehend unless you already knew all the programming languages it talks about? And if the co-Blub paradox were true, everyone would start programming in the most elaborate, feature-packed language first.

                  1. 0

                    Oh my.

                    That sinking feeling when you read something and realize you’re going to have to look up almost every other word, and then, to understand the definitions, learn the bare rudiments of maybe half a dozen new academic disciplines. You could spend a few days working through the resulting 300-odd browser tabs… if you have time… and at the end you’ll know enough to understand what the article was talking about, but you won’t understand it very well.

                    Not well enough to do anything with it, and you never ever will, because of some life choices you made at 15 or 16, possibly intermixed with genetics.

                    Ow.