1. 18
  1.  

  2. 16

    Software has become easier…in certain ways.

    I think this is worth reflecting on a bit.

    In the late ’90s, when I installed Linux for the first time (back when the X11 setup carried a big warning that getting your monitor’s refresh rate wrong could damage it), I managed to install the OS, build my own kernel, get the whole graphics stack running, and connect to the internet…all without using the internet. I only had one computer, and it was busy being barely functional while I was doing this. I then went on to write Hello World in C, which required putting a few lines into a .c file and compiling it with gcc. Pretty straightforward.
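    For reference, that whole exercise was roughly the sketch below; nothing about it is project-specific.

    ```c
    /* hello.c -- the entire "project".
       Build and run with:
           gcc hello.c -o hello && ./hello */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello, world\n");
        return 0;
    }
    ```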

    I recently started trying to make a frontend project using TypeScript, React, and BlueprintJS. Discounting the hours I spent trying to figure out which frameworks to even use, I could not have accomplished my Hello World without around 1,000 NPM dependencies and Stack Overflow. While I’m getting there, I’ve found the experience really challenging. Error messages are bad. Components make all sorts of silly assumptions. Even just discovering how things fit together is really challenging.

    I’m not saying installing Linux that first time was easy, but I don’t feel that things are easier now. The author argues that business logic grows very complex, but this doesn’t match my experience. Instead I find that the complexity lies in just getting things to work, because every framework is huge. Most Hadoop jobs I see are dead simple, but they come with 3,000 lines of glue just to get the ten frameworks needed to count words to play well together. I’ve spent more hours just trying to get a Java dependency to work in my system than writing the code for the feature I needed the dependency for.
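    To put the contrast in perspective, the actual “count words” part, with no frameworks involved, is on the order of the sketch below (it only counts whitespace-separated words on stdin; tallying occurrences per word needs a hash table and a few dozen more lines, but nothing like 3,000).

    ```c
    /* wordcount.c -- counting words without a framework in sight.
       Reads stdin and prints the number of whitespace-separated words.
       (A per-word frequency count needs a hash table, but stays small.) */
    #include <stdio.h>
    #include <ctype.h>

    int main(void)
    {
        long words = 0;
        int c, in_word = 0;

        while ((c = getchar()) != EOF) {
            if (isspace(c)) {
                in_word = 0;
            } else if (!in_word) {
                in_word = 1;
                words++;
            }
        }
        printf("%ld words\n", words);
        return 0;
    }
    ```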

    I think the author is right that many developers today do things for reasons that aren’t backed by any kind of evidence and don’t even make sense after a few minutes of thought. Microservices are like this. I’ve seen a lot of places go all in on microservices without appreciating that their system is simple enough that maybe they could get away with 2 or 3 big services, or even one if they really wanted, while dramatically underappreciating the cost of microservices. Microservices have created an entire industry of solutions to fix a problem that many people impose on themselves for no particularly good reason.

    > What I’m saying is that we need to head back in the direction of simplicity and start actually creating things in a simpler way, instead of just constantly talking about simplicity. Maybe we can lean on more integrated tech stacks to provide out of the box patterns and tools to allow software developers to create software more efficiently.

    I don’t think the author gives an operationally useful definition of simplicity. It sounds like they want a big platform with all sorts of complicated stuff integrated into it, an idea that doesn’t strike me as very simple. A more “integrated tech stack” usually just means you push all of the complexity into one place rather than reducing it. And while it’s a bit extreme for my taste, I think it’s unfortunate that Handmade hasn’t picked up more steam.

    I think it’s important to note, as well, that IME a lot of software complexity comes from ignorance. Many developers just don’t know what simplicity is or how to achieve it. If you’re a JavaScript developer, all of the examples you’ll run across are huge. Angular 2 installs 947 dependencies on a blank project. React installed 200 something. Maybe Go is the best example of simplicity today? Maybe all developers need to go through a six-month course where they solve all problems with UNIX pipes.

    1. 3

      I agree with your assessment; I think there are a couple of different factors at play.

      To be fair, package systems, dependency management, and DLL hell have been a problem for a loooong time. One of the axes this difficulty splits on is the richness of the standard library. For example, C programs on Linux have libc linked in at runtime, which offers a fair bit of functionality, but you need a lot of extra libs for anything else. Go programs compile a significant runtime library into the final executable that includes a whole lot of functionality; Go has a very rich library built in. JavaScript has a near-zero standard library and no real module system. Hence npm and dependency hell++.

      One thing that stands out to me today is how prevalent network-backed package managers have become: Maven, npm, heck, Debian and yum too. I also attempted to install Linux back in the day, but I failed, because without a network connection I couldn’t use anything or research anything not on the disks I had available. Today, with a network connection, you have access to all the dependencies and information you need. Almost all languages and platforms have dependency managers and repositories available. In other words, (some) things are so much easier now.

      Which all contributes to the eternal September we find ourselves in today. The barrier to entry is a lot lower, which is a good thing. Every year there are more and more fresh/green developers, and fewer and fewer old experienced developers. Those of us who fought to do anything in the old days are growing fewer and fewer. The new generation didn’t suffer like we did, and that’s OK. I think they suffer in different ways, as your lament about web development shows. The reality is, a lot of the cruft in JavaScript land is solving real problems, as much as I hate to admit it.

      It all brings to mind this quote from Alan Kay (~2004):

      > Computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were. So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

      1. 2

        FWIW, I agree with the overall gist of your post. However…

        > Angular 2 installs 947 dependencies on a blank project. React installed 200 something.

        The @angular/core library has one hard dependency on tslib. React has a total of 16 direct and indirect dependencies. If you use a starter kit or one of the CLI tools, they will install many more, but that’s because they depend on entire compiler ecosystems.

        1. 2

          > I’ve seen a lot of places go all in on microservices without appreciating that their system is simple enough that maybe they could get away with 2 or 3 big services, or even one if they really wanted, while dramatically underappreciating the cost of microservices. Microservices have created an entire industry of solutions to fix a problem that many people impose on themselves for no particularly good reason.

          I can’t read this without hearing “I want to work on cool things” over and over. That’s what everyone says, right? I want a job where I can work on cool things. And every chance that’s presented to shoehorn a cool solution into an otherwise simple problem gets taken. Isn’t this exactly what we saw with “big data”? Where a company’s “big data” could fit onto a couple of expensive hard drives, but everyone wanted to use Hadoop or write their own distributed code? Hell, I fell into this trap all the time a few years ago.

          Another aspect of this is that we as programmers are really, really bad at estimating how efficient an approach is and how demanding a problem is. It’s why wise performance people (alliteration^2!) always shout at you “if you haven’t profiled it then you don’t know how fast it is!” And they’re right. And of course they’re right. What we work with, as you’ve noted, is horrendously complicated. But it sure feels great to stand back and say “well, a microservices approach would be a great fit here, and help us tackle our volume/latency/whatever requirements (that haven’t been measured), because partitioning the problem is more efficient (an unestablished claim), and our service will be more reliable (if it’s engineered correctly, maybe, again unestablished)”.
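          For what it’s worth, the bar for knowing rather than guessing is pretty low. Something like the sketch below (a hypothetical do_work() standing in for whatever code is under debate) already gathers more evidence than most of those architecture conversations ever do:

          ```c
          /* measure.c -- measure it before you claim it.
             do_work() is a stand-in for whatever is being argued about;
             the point is to time the real thing, not to estimate it. */
          #include <stdio.h>
          #include <time.h>

          static long do_work(void)
          {
              long sum = 0;
              for (long i = 0; i < 10 * 1000 * 1000; i++)
                  sum += i % 7;
              return sum;
          }

          int main(void)
          {
              struct timespec start, end;

              clock_gettime(CLOCK_MONOTONIC, &start);
              long result = do_work();
              clock_gettime(CLOCK_MONOTONIC, &end);

              double secs = (end.tv_sec - start.tv_sec)
                          + (end.tv_nsec - start.tv_nsec) / 1e9;
              printf("result=%ld, took %.3f s\n", result, secs);
              return 0;
          }
          ```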

          Because acting like we know is just so much fun. I submit that this should be the definition of an “architecture astronaut”.

          Maybe “mechanical sympathy” exists, but if it does, it is hard-won intuition born of experience and lots and lots and lots of experimentation and measurement.

          Hell, I could set @peter off on a rant about this for—literally—hours.