At Strange Loop in 2015 there was a talk on Quorum, a programming language whose design decisions are supposedly made based on empirical studies (the designers have run various randomized controlled trials). The limitations of this approach were pretty apparent: you end up prematurely constraining the possibility space, because there are too many ways to do any particular thing to study all of them, and you end up measuring things that you hope are proxies for what you really care about, because of course “programmer productivity” as a measurable quantity is elusive.
Before you can do science on PL design the way the OP describes, you first have to define, in a measurable way, the qualities you want to investigate. Without that, you have no way to form hypotheses, observe, measure, and then refine. Those measurable qualities are the part that remains elusive.
Some things, like bug rates, are measurable, but a bug rate is at best a partial proxy for productivity.
Even bugs have a strongly qualitative side. How long do they take to fix? What is their impact? There are so many dimensions that make certain bugs less painful than others.
I would hazard a guess that the issue with PL is that the traits that get a language adopted have very little to do with the language’s support for scalable, robust software.
Of course, it’s also worth mentioning that, at scale, language fundamentals are only a small part of what makes great vs. terrible software. If you use Haskell but do Agile bullshit (e.g. user stories) and hire commodity engineers, you’ll make shitty software. If you use Python or C++ (which aren’t great languages, but adequate) but hire excellent engineers, you’ll probably be fine. Community is a bigger factor than language fundamentals, and language fundamentals largely matter for the community they draw. However, in general, the reason people like me stand up for excellent but unpopular languages rather than adequate, popular ones is because these second-order effects matter. Great software can be written in any language, but there are very few languages in which even adequate software is the norm. Of course, most of the factors producing garbage software aren’t language-level issues…
For example, I’d bet that the reason Python took off in the mid-2000s was… literals for the map (dictionary) type. No, I’m not kidding. It’s not a very deep language-theoretic concern, but it’s something that a real-world programmer feels every day. In 2016, virtually every language has some approximation of this. Even C++ will let you pass curly-brace lists after C++11. But there was a time when very few languages had this “batteries included” attitude toward ease-of-use on something perceived as so simple. (You could argue that maps aren’t simple, due to differences between sorted tree vs. hash maps and the various issues around hashing, but I’m talking about perception.) Python had a great UX at a certain time, and that’s why it won even in domains to which it’s not supremely suited, such as data science.
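To make the “batteries included” point concrete, here’s a minimal sketch (the names are illustrative, not from the post) of what a map literal buys you compared to building the same structure statement by statement, which was the norm in many pre-2000s languages:

```python
# A dictionary as a single, first-class expression: readable at a glance,
# usable inline as a function argument or return value.
ports = {"http": 80, "https": 443, "ssh": 22}

# The same structure without literal syntax takes several statements and
# a temporary name -- fine once, tedious everywhere.
ports_verbose = dict()
ports_verbose["http"] = 80
ports_verbose["https"] = 443
ports_verbose["ssh"] = 22

assert ports == ports_verbose
```

It’s a tiny thing in language-theoretic terms, which is exactly the commenter’s point: the ergonomics a programmer feels every day are not the same axis as the fundamentals researchers study.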
In terms of PL, I don’t know that we’re “getting it wrong”. There’s nothing wrong with Python. Plenty of good work has been done in it. The quality of the programmers will always matter more than the traits of the language. Sure, I’d rather have static typing than not have it, but I’ve seen plenty of wrongness in software and I don’t consider dynamic typing intrinsically wrong.
Great software can be written in any language, but there are very few languages in which even adequate software is the norm.
In your view, what are the languages that have the norm of adequate (or better) software?
Haskell and OCaml come to mind at the top of that list, though that could have more to do with their small communities than with language fundamentals. Clojure is strong.
Perl would like to have a discussion with you about literals. :-) IMO Perl has far better ergonomics than Python, barring exceptions.
@calvin would be nice to un-linkbait this title, replace or append with a description of what the post is about?