1. 29
  1.  

  2. 6

    From the paper’s conclusion: “Unfortunately, our work has identified numerous problems in the FSE study that invalidated its key result”

    tl;dr: So nobody is yet sure, as a matter of scientific fact, whether programming language design matters for code quality.

    1. 2

      The TL;DR doesn’t follow. There’s a lot of earlier research on this and related topics.

      1. 2

        Please share the research?

        1. 2

          This is a good start: https://www.amazon.com/dp/0596808321 (I don’t have my copy on me, so I can’t share direct references, but it is an incredible book.)

          One thing that studies tend to agree upon is that defect count correlates with LOC. This implies that quality is generally better in terser languages.

          1. 3

            I don’t know if that follows. More code, more defects, ok, but is defect density a constant?

            C in particular is notorious for having competing brace styles, which can affect line count by as much as 50%. It seems improbable that I can add or remove defects by running indent with different settings.
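
            To make that concrete, here’s a toy C sketch of my own (not from any study): the same function in K&R style and in fully braced Allman style. It goes from 5 lines to 12 with identical behavior, which makes raw line count a slippery denominator.

            /* K&R style: 5 lines */
            int clamp(int x, int lo, int hi) {
                if (x < lo) return lo;
                if (x > hi) return hi;
                return x;
            }

            /* Allman style with every body braced: 12 lines, same behavior */
            int clamp_allman(int x, int lo, int hi)
            {
                if (x < lo)
                {
                    return lo;
                }
                if (x > hi)
                {
                    return hi;
                }
                return x;
            }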

            1. 1

              I haven’t read that book, but I’ll see if I can find a copy; it looks interesting.

              From my perspective, when I talk about scientific fact, I’m talking about well-researched, replicated proof, not just a paper or two (even if in a peer-reviewed journal) that seem to suggest X is true.

              E.g. the Earth is round, and the Earth revolves around the Sun. I would be very surprised if anyone could find any scientist anywhere who would disagree with either of these facts.

              I think your statement that studies agree that defect count correlates with LOC (i.e. more code == more bugs) would certainly count as fact, but your implication here, that terse languages produce fewer bugs, I don’t think qualifies, because then are you also implying we should all write code in brainf*ck? Surely you don’t think that is true… do you?

              1. 1
                1. If that’s true, then we need an explanation of why this study couldn’t see the effect.

                2. Do the original studies on this point include a wide variety of languages? It seems quite plausible that each language (and possibly style) has a loc:defect ratio, but that it’s not constant between different languages.
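
                To put toy numbers on that second point (densities invented purely for illustration, not data from any study): if the LOC:defect ratio varies by language, terseness alone doesn’t settle the question.

                #include <stdio.h>

                /* Hypothetical densities; the real values are exactly what’s in dispute. */
                int main(void) {
                    double loc_verbose = 1000.0, per_loc_verbose = 0.02; /* defects per LOC */
                    double loc_terse   =  400.0, per_loc_terse   = 0.05;
                    printf("verbose: %.0f expected defects\n", loc_verbose * per_loc_verbose); /* 20 */
                    printf("terse:   %.0f expected defects\n", loc_terse   * per_loc_terse);   /* 20 */
                    return 0;
                }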

              2. 1

                Here’s one on Ada vs C. patrickdlogan posted one here somewhere showing Smalltalk to be more productive than the competition. Memory-safe languages, by design, are immune to common classes of error. Languages like Standard ML and SPARK Ada are designed for easy verification.

                There have been quite a few results about language design preventing or helping to detect problems.
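
                A minimal sketch of the “immune by design” point (toy code of my own, not from any of the studies mentioned): the off-by-one below compiles in C without a peep, while a bounds-checked, memory-safe language either rejects it or traps at runtime instead of silently corrupting memory.

                #include <stdio.h>

                int main(void) {
                    int buf[4] = {0, 0, 0, 0};
                    /* Off-by-one: when i == 4 this writes past the end of buf. */
                    for (int i = 0; i <= 4; i++) {
                        buf[i] = i;
                    }
                    printf("%d\n", buf[0]);
                    return 0;
                }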

          2. 4

            There’s been more scientific research than this on how programming language design affects human performance and code quality. There are a few interesting papers in the References section of this: http://www.cs.cmu.edu/~NatProg/papers/CHI2016-SIG-ProgLang-Usability.pdf

            Stefik and Siebert did an empirical study on programming language syntax and error rate, then designed an “evidence-based” programming language called Quorum based on the results of their study: https://quorumlanguage.com/lessons/guides/IntroductionToQuorum.pdf

            1. 1

              To me, a lot of this language translates really nicely to functional programming. IMO software dev does not follow lambda calculus closely enough, which really boils computational representation down to its simplest form.

              if(expressionIsTrue, function then() { ... }, function else() { ... });

              “Everything is just computation” is amazingly effective.

              A more complicated version (curried version too?) would be:

              if(expressionIsTrue)(() => {})(() => {});

              Where each call returns a function that takes the next then/else function.

              1. 1

                Even more lambdishly:

                true  = λx y. x
                false = λx y. y
                if    = λt x y. t x y
                
            2. 0

              Original paper does seem to hold up well under this reproduction study.

              1. 8

                That’s exactly the opposite of the conclusion I took from this paper. Did you mean “does not”?

                1. 4

                  Damn, yes I meant does not.

                  1. 2

                    I agree with jec: most of the conclusions in the original paper do hold up under new analysis.

                    I think you meant “does not”.