Call me an optimist, but we have achieved amazing things in the last 40 years. Many of them are indirect or not mainstream, but still amazing.
The first indirect thing we have is really strong type systems. Every single useful feature in type systems came out of PL research, even if we don’t all use Standard ML or Haskell.
An example of an off-mainstream amazing achievement is seL4. OK, formally verifying even a relatively small microkernel took a lot of work, but we did it.
Project Everest has gotten verified cryptographic software into mainstream browsers.
TLA+ is getting used at Amazon and Elasticsearch.
Tools like Jepsen bring property-based, fault-injecting testing to real-world projects.
There are practical successes everywhere, even if the average project doesn’t use research ideas directly.
I think many of those fields are not part of the author’s definition of “software engineering research”.
I’d be surprised if they tried to argue that we have made very little progress in programming languages, cryptography, or formal methods.
Right. What other research fields are there really? Almost everything boils down to PLs or formal methods in some way.
Databases, engineering practices, distributed systems, CS education, most things involving performance, defect detection, version control, production surveys? Those are just the research fields I recently read papers in.
Nice, I was honestly having trouble thinking outside of the research that I tend to look at.
Also, “software engineering” is a specific research area about the methods by which software is produced.
Very appreciative that this post focuses on the systemic issues that cause this (namely that all the incentives align toward short-term thinking, which is antithetical to good software engineering research). I wish it proposed some solutions, but even identifying the problem is useful.
I’m not sure I follow the logic that short-termism is caused by publish-or-perish and that the solution should be, effectively, for the existing members of the field to voluntarily perish to make space for new entrants. Even granting that people would do this, I see no reason to believe it would work: wouldn’t the newcomers just reproduce the exact same structure, given the same pressures?
There is one study I’m aware of indicating that research areas tend to expand when “star” researchers die. The claimed mechanism is that those researchers, through their direct and indirect influence, limit the ability of work in areas they personally undervalue to get funded and published.
That said, you’re right that this doesn’t resolve the problem of anointing “star” researchers who then bottle up opportunities in the field; it simply changes which subareas are valued.