There are so many little things in here that are of interest, or not considered:
Yes, threads often introduce bugs (because they are easy to do poorly if you don’t understand them well enough). But STM (the stated alternative) has performance problems associated with it. It’s not that people choose threads out of a religious belief in them, or some deep-seated hatred of STM. It’s often that threads provide the most efficient way of doing what they want to do concurrently.
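To make the “easy to do poorly” point concrete, here is a minimal sketch (my own illustration, not from the talk) of the classic lost-update race with shared-memory threads, and the one-line lock that fixes it. The function names are hypothetical:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # "counter += 1" is not atomic: the load, add, and store
    # can interleave across threads, losing updates.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # The fix is trivial here, but in larger programs the same
    # discipline invites deadlocks and lock-ordering bugs.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n=100_000, workers=4):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(safe_increment))    # always 400000
print(run(unsafe_increment))  # may come up short under contention
```

The unsafe version may still happen to print 400000 on a lightly loaded machine, which is exactly what makes these bugs hard to catch by testing alone.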
It is unreasonable to expect language developers to scientifically prove that their latest additions are worthwhile. The slides suggest, for example, that it is wrong of the Java team to introduce new features in the latest versions, because without formal proof that they are worthwhile they may be a waste of time. Let us instead assume that, even without proof that such a change is worthwhile, the core team knows enough about the language and the people who use it that we can trust them to make informed decisions about adding features. (Beyond that, why do we care if some individual language “wastes time” by adding a feature no one wants? There’s no moral imperative to converge on the “best language” as supported by research. Languages are a market, and if the market doesn’t like where a language is going, it is likely to move to other languages over time.)
The information about introductory languages at universities is glossed over. This seems really interesting, though. First, I am really surprised to see COBOL, Ada, and Alice (never even heard of it) included in the list of introductory languages. Second, what are the proportions? Are there any measured outcomes based on choice of introductory language? I know, for example, that Harvey Mudd overhauled their CS program a few years ago, which included moving to Python as an introductory language, and that this resulted in an increase in participation from minorities who likely would have previously started and then dropped out. In that case, the simpler language allowed them to feel more comfortable in the field, and likely reduced the influence of Imposter Syndrome and Stereotype Threat for those students, resulting in them remaining in CS.
Finally, and this is the most important point, academics should absolutely strive to properly study the things they claim to study. If Computer Science is going to call itself a science (and I have serious doubts that it is), then it should abide by the scientific method wherever possible. Experiments without control groups and analyses with bad statistics are all problems that CS academics should address, whether through stricter expectations for acceptance into conferences and journals, or through serious dialogue.
I disagree about adding scientific expectations to language developers. If it’s a toy or a research language, great, hack away.
Otherwise, yeah, a study showing that a feature actually reduces bug counts or shortens the development cycle is so valuable that it’s reckless not to run one. Imagine introducing a new medical procedure with no testing, or erecting a building from an untested design. It’s unthinkable because those fields are now quite safe, but the causation runs the other direction: the fields used to be horrifically dangerous, and rigorous science made them safe. Computer programs have terrible reliability; why wouldn’t science be the right approach to improving them?
re “Threads v. STM”: I think the author was just pointing out that it’s one of only about three things that have ever been tested. It would be great to have studies comparing the effectiveness of threaded programming against non-strawman alternatives, but nobody is doing them.
There’s more information and a link to a recording of the talk on the Code Mesh site here: http://www.codemesh.io/codemesh2014/andreas-stefik