1. 7
  1.  

  2. 4

    I’d wager the classical model is far more sane for the individual than the modern one: fewer tools, more time using them. There’s also less of the weird collectivism where you must cobble your app together out of ten different frameworks because we believe the crowd has more collective wisdom than the people actually doing the work.

    1. 4

      weird collectivism going on

      ↑ This formulation really captures the ’17 zeitgeist, Matt, in more aspects than just software development.

      To your point, though: I’ve recently come to realize that there is immense value in being able to slash dependencies and write just the code you need. Say you need to do A. Instead of reaching for the 1000-line library (with bad tests or memory leaks) that does A, but also does B, C, and D, you can just write a small component that does A and be done with it.

      You do spend some time up front having to implement A, but you gain a lot more time in the future that would be spent on keeping up with changes, breakage, bugs, or drama upstream. Yossi Kreinin has a good blog post that touches tangentially on this topic.
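      As a sketch of that trade-off (a hypothetical example, not from any of the linked posts): suppose the "A" you need is left-padding strings. Rather than adding a utility library as a dependency, the whole behavior fits in a few lines you own and understand.

      ```python
      # Hypothetical "do A yourself" example: left-padding a string.
      # A few lines you control, versus a dependency that also does B, C, and D.
      def left_pad(s: str, width: int, fill: str = " ") -> str:
          """Pad s on the left with fill until it is at least width characters."""
          if len(s) >= width:
              return s
          return fill * (width - len(s)) + s
      ```

      The up-front cost is minutes; the saved cost is never having to track that dependency's releases, breakage, or drama.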

      There is a lot more going for the classical model, not least the fact that engineers can actually learn their domain and become valued experts, vs. $fad-of-the-day-fungible-cogs.

      1. 1

        In my article I’ve made the argument that even small tasks like looping over a list can require a lot of wisdom you might not possess. As a typical web developer, you don’t know that much about the runtime or how to optimize for it.

        If you are developing all by yourself, chances are high that you have to deal with many “unknown unknowns” (quoting D. Rumsfeld).
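        As an illustrative sketch of that point (my own example, not from the article): even a plain loop hides runtime knowledge. In many runtimes, concatenating strings with `+=` in a loop can copy the whole accumulated string on each iteration, while a single join builds the result in one pass.

        ```python
        # Two ways to "loop over a list" of strings; same result,
        # very different runtime behavior on large inputs.
        def concat_naive(items):
            out = ""
            for item in items:   # each += may copy everything built so far
                out += item
            return out

        def concat_idiomatic(items):
            return "".join(items)  # single pass over the items
        ```

        Knowing which one the runtime handles well is exactly the kind of "unknown unknown" a solo developer runs into.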

      2. 3

        This is depressing and horrible. No wonder everything we use is a tower of poor abstraction and riddled with bugs.

        1. 4

          Oh, no need to be like that, turn that frown upside down. :-)

          It is a great gig replacing all that webscale cruft with 10 lines of shell and making it faster in the process too.

          Usually it depresses the developers, but really they brought it upon themselves.

        2. 3

          (Slightly off-topic)

          I believe a much bigger change for software development in the last 20 years is the now pervasive use of CI. Thanks to cheaper hardware, we can now run expensive test suites for every commit on a collection of different platforms, architectures, operating systems, configurations, etc. Sure, testing does not catch every error, but it is great for regression testing. Once an error is fixed, you write a test and ensure that the error stays fixed forever.
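          The fix-then-pin workflow described above can be sketched like this (a hypothetical `parse_port` bug, chosen for illustration): once the bug is fixed, a regression test runs on every commit and keeps it fixed.

          ```python
          # Hedged sketch of a regression test pinning a fixed bug.
          # Hypothetical scenario: parse_port once silently accepted 0 and 65536.
          def parse_port(value: str) -> int:
              """Parse a TCP port; the fix enforces the valid 1..65535 range."""
              port = int(value)
              if not 1 <= port <= 65535:
                  raise ValueError(f"port out of range: {port}")
              return port

          def test_port_range_regression():
              # These inputs reproduce the original bug; they must stay rejected.
              for bad in ("0", "65536"):
                  try:
                      parse_port(bad)
                  except ValueError:
                      pass
                  else:
                      raise AssertionError(f"{bad} should be rejected")
              # Valid input still works after the fix.
              assert parse_port("8080") == 8080
          ```

          Run under CI, this test fails the build the moment anyone reintroduces the old behavior.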

          1. 3

            “This is what makes a good developer: finding the right combination of libraries, keeping them up to date, and reducing self-written code to the absolute minimum. Ideally the only code you write should be the part that makes your application special.”

            I agree with the author. The part he misses is that modern-day applications do a lot more: small teams build exceedingly complex applications. So while you have more tools at your disposal (open source, APIs, managed services, etc.), the scope of the work has also expanded. The work has changed: easier in some ways, harder in others.

            1. 4

              In my experience, the solutions are more complex than they need to be.

              1. 2

                You’re right!

                Actually, I had planned to cover the change in scope of modern applications, but it was so much text that I decided to write a separate article about that topic ;)

                1. 1

                  welcome to lobsters!

                  1. 1

                    Thanks! :)

                    Nice to be here.

              2. 2

                My only beef with this write-up is that keeping up with the ever-changing landscape of software itself takes real know-how in evaluating its performance/scaling/resource utilization/deployment needs/etc. I’m not willing to stake my company’s future on a trendy new JavaScript framework just because it’s the first one that popped up on Google or on Lobsters. Instead I’m going to dive into the source code and the docs to really understand what it’s doing, how, and why it would or wouldn’t be useful to my task at hand.

                Really, we should be saying that the modern-day programmer surely produces less code her/himself, but that doesn’t mean the modern-day programmer doesn’t need to know how to write/read/evaluate code.

                1. 2

                  As the author of this article, I have to say that I tried to stress the importance of being able to actually evaluate a new framework. In order to do that, you will need a lot of “classical” programming skill, but also a lot of experience.

                  Still, the question is how far you have to dig into the source code. That depends on the project’s size. For a project that a large number of developers will work on for years, you will certainly invest a lot of time in evaluation, versus a one-man webapp where you simply use React or Angular these days.