1. 42
    1. 25

      I think it’s a lot of different factors, many of which aren’t attitudes or beliefs but complex systemic issues that arise in every knowledge field. At the very least, there’s not much curation of software knowledge. When he says:

      Where had all the ideas about reusability from the 1970s, the 1980s, the 1990s and all the years after gone?

      My first thought is “so where can I read about all these ideas about reusability in one place?” Did someone make a guide to them, or do I have to sift through dozens of primary sources?

      1. 8

        It’s a good point. There are books out there that try to collect all the good practices of software development in one place, but a lot of them disagree with each other, and the actual practices differ across software paradigms and use cases.

        I also think one reason some of these ideas keep coming up as new is that new companies don’t need a lot of these practices when they start, and only really discover them at a certain size. So maybe each wave of engineers rediscovers this history anew.

    2. 17

      Where had all the ideas about reusability from the 1970s, the 1980s, the 1990s and all the years after gone?

      I’m not sure which ideas specifically this refers to, which makes it difficult to discuss. But based on reuse in places I’ve worked:

      • Bit rot is very real, and it has not yet percolated up to management just how much work it takes to keep a piece of software stable and maintainable over several years. Some significant fraction of the work that went into developing the software in the first place has to happen continuously for it to stay useful: updating dependencies, redeploying whenever anything changes, verifying that integrations with third parties still work, and so on. Dialing that effort down to zero most of the time makes any later update fraught and frustrating, since nobody stays familiar with the code, practices, and pitfalls over time.
      • There are a practically infinite number of axes along which developers can optimise. Call it fashion if you will, but the community clearly values eking every single byte out of RAM less now than it did 10 years ago, and values things like being able to utilise a GPU more. If the software is not optimised for the currently most relevant metrics, then it’s correspondingly harder to justify its reuse.
      • IMO, reuse of actual code is a pipe dream, only attained in the most trivial cases or when somebody in power pushes hard enough. The ideas in the code should be understood and reused when appropriate, but even that is really hard to achieve when developers are expected to be coding most of the time they’re not in a meeting.
      • Programming is still very much in its infancy, where for every piece of wisdom there are 100 edge cases where it doesn’t apply or is actively harmful. Until the field stabilises in maybe another 100+ years, we’re all just going by gut feel, N=50 studies on grad students, and whatever has worked in the past.

      Unlike other engineering disciplines, we do not create our body of knowledge, foster our timeless insights, work to extract the essence from the solutions we found yesterday and make it available to the community of today.

      We sort of do, though. C2 comes to mind, as does Stack Exchange. It’s just that, again, most “timeless” insights (not all of them) have a short half-life, or are at least framed in ways that give them a short half-life.

    3. 11

      Personally, I think point 4 “We face a continuous stream of new developers” probably dwarfs the other factors. But that’s just my opinion.

      1. 4

        I don’t think having new people join an ecosystem is as much of a problem as having older people either leave or become sidelined. A single experienced person can train several junior people at the same time; the problem starts when the ‘arrogance of youth’ from the article is endorsed by management and ignoring prior work becomes corporate policy. This has basically been the Silicon Valley startup mindset for a few decades. It probably was the mindset at Microsoft 40 years ago too, but a lot of the arrogant youths stayed around for a long time and are now experienced experts. Companies like IBM, HP, and Arm have largely managed to avoid this mindset (the Arm founders, in particular, were well aware of their arrogance in even attempting what they were doing and tempered it by actively learning from anything anyone else had done that could help), though they’ve all suffered from different management problems.

        1. 5

          A single experienced person can train several junior people at the same time

          I’m not sure this has really been my experience. Or rather: yes, you can train several junior people at the same time, but that’s pretty much a full-time job. Obviously this varies by where you’re at, but most shops I’ve worked in have maybe a 5:1 ratio of people who would really benefit from some technical mentorship to people who could competently mentor them. I’ve done a lot of mentorship, and I wouldn’t get one other thing done in a day if I had 5 people to work with.

          Which isn’t to say I disagree with you: senior people fleeing into management, retiring, or burning out absolutely compounds the problem. But I think the issue is at both ends: people with even a measly 15 years of experience are rare, and CS and boot-camp graduation rates go up every year.

      2. 1

        Reminds me of the ‘Ten Thousand’ XKCD comic.

    4. 7

      My opinion is that this is not a problem of attitudes or beliefs; it is a market problem, and it’s called deskilling.

    5. 7

      It sounds like we have even forgotten about the continuous amnesia issue.

      What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows. — Socrates, on the invention of writing ~2500 years ago

      I was also very disappointed that the author didn’t share his own thoughts on reusability. I think we (as a discipline) have learned and retained quite a lot about it: so much that most new programmers are not even aware of the reuse anymore.

      Long long (long) gone are the days of learning to program by first copying the code for a usable text editor from a magazine into the computer.

      1. 5

        In every era you can find people making the same complaints: young people have failed to learn what is already known, young people are disrespectful and dissolute, every year is worse, society is collapsing, and new media are making us into morons.

        And yet, broadly speaking, humanity’s collective abilities to produce food, heal injury and disease, communicate over long distances, preserve knowledge, and kill each other have mostly increased rather than decreased.

        1. 4

          In every era you also find people responding to any longer-term observation of decline by saying that it has always been thus and there is nothing to worry about.

          I know this:

          • coworkers seem less and less interested in building up skills or writing maintainable code, and expect to ditch whatever they’re working on in 2–3 years

          • an enormous number of lessons from the desktop era have been lost; zoomers don’t know any of this stuff and have to discover the hard way that replicating serious app UIs in browsers is real work

          • compilation and tooling are still slow as balls, despite computers having gotten much, much faster.

          • relying on an IDE to codegen imports and boilerplate will create a codebase where you can’t find anything and every file is huge

          • people keep thinking they can write O(n^2) state transitions better and faster than they can write correct O(n) intent, and they keep being wrong.
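
          The last point can be sketched concretely. Here is a hypothetical toy example (the state names and widget fields are invented for illustration), contrasting hand-written per-pair transitions with a single render function computed from the current state:

```python
# Hypothetical UI with four states; names and fields are invented.
STATES = ["empty", "loading", "loaded", "error"]

# O(n^2) style: a hand-written branch for every (from, to) pair.
# With 4 states that is up to 16 cases to keep mutually consistent.
def transition(widget, old, new):
    if old == "empty" and new == "loading":
        widget["spinner"] = True
    elif old == "loading" and new == "loaded":
        widget["spinner"] = False
        widget["content"] = True
    elif old == "loading" and new == "error":
        widget["spinner"] = False
        widget["banner"] = True
    # ...every remaining pair needs its own branch...
    return widget

# O(n) style: describe what the UI should look like *in* each state,
# and rebuild the whole view whenever the state changes.
def render(state):
    return {
        "spinner": state == "loading",
        "content": state == "loaded",
        "banner": state == "error",
    }
```

          The render style needs one case per state and cannot silently miss a transition it never anticipated, which is the core idea behind declarative UI frameworks.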

          1. 4

            Addressing your first point: the tenure of an awful lot of software devs seems to be 2-3 years. Then they get a new job, for one or more of the following reasons:

            • the company folded or had layoffs
            • the company has a policy of not giving competitive raises
            • companies think 2-3 year stints are normal
            • that’s the amount of time needed to dig their way into trouble
            • they have acquired enough resume points to find a better-paying position

            Altogether, it seems to be a reasonable personal choice that is terrible for the world.

          2. 1

            Yes, but for the most part it was ever thus. I think we genuinely haven’t figured out yet how best to propagate knowledge within our field.

    6. 7

      I think a large problem is that “timeless ideas” in computer science are often presented in the context of a specific technology—while the ideas may be timeless, the context isn’t, which makes it easy to dismiss both as outdated. For example, I remember getting a lot of value out of Damian Conway’s Perl Best Practices—it’s a great book, even if you’re not a Perl programmer, but you won’t realize that until you’ve read it. In a similar vein, MySQL Performance Tuning is a great introduction to understanding database query planning in general, but if you’re exclusively using Postgres you probably wouldn’t pick it up.

      On the whole I’d like more books like The Pragmatic Programmer, which takes a higher-level approach and isn’t explicitly tied to any specific language or technology.