1. 29
  1.  

  2. 31

    1995 was the year in which programmers stopped thinking about the quality of their programs.

    Use software from before 1995 and you know that this isn’t even close to true.

    Edit: To include an example, here’s a review for the High C 386 compiler from 1987 if you want to see just one of many examples of poor quality (and expensive) software rushed to market.

    1. [Comment removed by author]

      1. 5

        “Quality” to this author apparently means size and runtime efficiency rather than bug count or usability.

        In other words, it’s a chuntering “Kids These Days” polemic nobody is obligated to take seriously.

        Security was not addressed either, which is the ultimate problem with unrestrained complexity, IME.

        Talking about security without a threat model is so much air. Is your simple, proven program written in Haskell and verified to be absolutely free of any kind of resource leak or unescaped input bug secure? I don’t know, give me a lead pipe and five minutes with the person using the program and I’ll find out.

        One might ask how much of today’s software will be runnable in 10 years?

        This is more to do with proprietary software, and is orthogonal to “quality” unless you define the word oddly.

        1. [Comment removed by author]

          1. 3

            Your entire post is vile misrepresentation of what was written. What’s your agenda?

            Maybe you should calm down a bit.

        2. 0

          It seems you misread the article. They were talking about abstraction layers, which are also a security problem. Who has the time to vet 13,000 dependencies for a TODO app?

      2. 22

        And no, you don’t have to learn assembly and start writing your web application in that. A good way to start would be to split up libraries. Instead of creating one big library that does everything you could ever possibly need, just create many libraries. Your god-like library could still exist, but solely as a wrapper.

        That’s exactly what the linked “dependency hell” example did. They imported create-react-app, which is a thin wrapper combining 15 other dependencies, which each wrap a few more, which wrap a few more… Also, I have no idea how he got 13,000 dependencies when the package-lock only has 7,400 lines.

        Also, node’s style of “many one-line dependencies” is specific to node, as it was encouraged by the npm company as best practice. Saying “Developers … have also become lazy” is unwarranted.

        1. 4

          The output of npm is a bit misleading with regards to dependencies installed. People looking to make a point about too many dependencies often quote that number instead of looking at the real dependency count.
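
          If you want the real number, a rough sketch like the one below works against a lockfileVersion-1 style package-lock.json (the nested “dependencies” objects; newer lockfiles use a flat “packages” map instead, so treat the file shape here as an assumption):

          // count-deps.ts – rough sketch: count unique name@version pairs in a
          // lockfileVersion 1 style package-lock.json (nested "dependencies" objects).
          import { readFileSync } from "fs";

          interface LockEntry {
            version: string;
            dependencies?: Record<string, LockEntry>;
          }

          const lock = JSON.parse(readFileSync("package-lock.json", "utf8")) as {
            dependencies?: Record<string, LockEntry>;
          };

          const seen = new Set<string>();

          function walk(deps?: Record<string, LockEntry>): void {
            if (!deps) return;
            for (const [name, entry] of Object.entries(deps)) {
              seen.add(`${name}@${entry.version}`); // dedupe: same package counted once
              walk(entry.dependencies);             // nested copies with conflicting versions
            }
          }

          walk(lock.dependencies);
          console.log(`${seen.size} unique dependencies`);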

        2. 14

          I struggled with this entirely unappealing view of our industry for a long time, and I still do, to some degree, but I hope some of these observations might make things a little more palatable. Please take this with a grain of salt – they’re personal observations, anecdata and factoids, not a historical study.

          • I think a lot of our perception of old software as better is shaped by the fact that, back then, we were younger and a lot more ignorant. That means existence was generally better for many of us (my teenage years weren’t necessarily happy, and my young adulthood was definitely terrible, but youth makes you cope in different ways). And that we didn’t see shortcomings in software the way we do now, when we have who knows how many years of experience behind us. For example, I have very fond memories about Windows 2000, mostly because I tend to remember the good things, not the fact that if you plugged in the network cable on a computer with fresh install, you’d get infected with the Sasser worm before you could download a firewall or install an antivirus. (There are exceptions, I guess. WindowMaker absolutely was bloody amazing :) ).
          • There was absolutely no shortage of poor-quality and bloated software back in the day, nor was there any shortage of lamentations about how bloated software got. But our perception is skewed by the numbers we’re working with today. We think software that needs just 128 MB of RAM is slim, but 15 years ago it absolutely wasn’t. MATE is now revered as a lightweight desktop, but Gnome 2 was absolutely seen as bloated and slow back then. On my machine, Emacs starts in less time than it takes VS Code to open the preferences screen, but Eight Megabytes and Constantly Swapping was a real thing. Fitting it alongside Netscape on my old Pentium II was a bit of an adventure even with 32 MB of RAM.
          • Worse: the software we tend to remember is the software that was so good it stood the test of time, or at least brings back good memories. That’s not how most software was. 90% of everything is crap, at any time. Grab your favourite shovelware archive from the 1990s and once you get past the nostalgia, you realize lots of software sucked, and that it was painfully slow, too. Windows 10 absolutely is a bloated piece of garbage but it boots way faster than Windows 95 did on my old Pentium II. The fact that it’s not loading off spinning rust is certainly a factor, and it contradicts Wirth to some degree :). IMHO the only thing that modern software legitimately does worse is latency, which really is embarrassing, but I’m optimistic that there are solutions for that on the horizon as well. (FWIW, I also think it’s really, really bad at efficient screen space use, but I reckon I’m being subjective here).

          Rushed releases of poor-quality software have always been a thing. Yes, today’s average enterprise software, written by some outsourcing shop with people coming and going so fast you’d think it was a pub, not a software company, absolutely sucks. But it doesn’t suck much more than your average piece of FoxPro garbage, which also absolutely used all the RAM it had. Programmers back then would have gladly used 8 GB of RAM if they had it.

          Nostalgia is a hell of a drug. Don’t go down the rabbit hole, you feel good at first but nothing good awaits you in the end.

          Also Wirth is a quiche eater and Real Programmers shouldn’t listen to him :).

          1. 13

            About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage. (Modern program editors request 100 times that much!) An operating system had to manage with 8,000 bytes, and a compiler had to fit into 32 Kbytes, whereas their modern descendants require megabytes. Has all this inflated software become any faster? On the contrary. Were it not for a thousand times faster hardware, modern software would be utterly unusable.

            Alright; I’d like to see people using an editor fitting in 8k of storage, compiled with a compiler fitting in 32k[1], as a daily driver.

            Okay, perhaps it’s a bit lame to focus on those numbers specifically; but my point is that programs have also become immensely more useful. Editors – even editors like Vim – today do a lot more than 40, 30, 20, or even 10 years ago. And how long did it take that guy to build that TODO app with his 13k dependencies? How long would it have taken 30 years ago to build a similar program?

            And look, I’m not a huge fan of large dependency trees either; but I think this article is a gross oversimplification.


            [1]: tcc is 275k on my system.

            1. 7

              Over time you should expect things to become more powerful and easier to use! Look at microcontrollers for instance. They have become smaller, faster and easier to use over time. The argument is that software seems to be getting slower at a more rapid rate than it is gaining functionality.

              Hardware has increased in speed and gained fancy new features at the same time. Why is it that modern websites are sluggish? Why can’t we have our cake and eat it too – more complicated software that is also faster? Given that the hardware is getting faster, this should be free.

              I don’t think software being more powerful or complicated is at all a fair argument for why it is slower.

              1. 6

                I don’t think software being more powerful or complicated is at all a fair argument for why it is slower.

                I think it’s entirely fair. That doesn’t excuse sluggishness in modern software for some tasks (like, say, typing lag), but it does explain some of it. If you’re doing a lot more work you should expect it to use more resources. There is also user expectation: those with more computing resources tend to want their software to do more.

                I don’t want my software to be sluggish either (it drives me up the wall when it’s slow for basic tasks, which is why I cannot abide web apps for the most part), but if you’re going to compare old software that did very little with new software that does a lot as the post does, then discounting the feature sets is not at all a fair comparison.

                1. 8

                  I would 100% agree with you if the medium on which software runs hadn’t increased in speed by orders of magnitude over the years. Has software increased by orders of magnitude in terms of power and complexity? Maybe, but then it should be the same speed as software was a few decades ago.

                  The fact is that software has gotten more complex and more powerful, but not nearly to the extent that hardware has gotten faster. There is certainly some reason for this (although I disagree with the article on what that is).

                  1. 4

                    Certainly it seems like it should be a lot faster, and maybe it’s slow, but there’s no question that I’m more productive on systems now than I was in the mid 90s.

                    1. 3

                      Recently, I was on a Windows machine and was obligated to write a text file. It had Notepad++, which I remembered as a “good editor,” so I used it. On Linux/Mac I’ve gotten used to a well-tuned neovim running in a GPU-accelerated console… suffice it to say that Notepad++ is no longer in its previous category.

                2. 5

                  Most webpages aren’t that slow; most of the “big” websites generally work quite well in my experience. There are exceptions, of course, but we tend to remember negative experiences (such as websites being slow) much more strongly than positive ones (websites working well).

                  A lot of the slowness is from the network. When you’re sitting in front of a webpage waiting for it to load stuff, most of the time the problem is that the HTML loads foo.js, which triggers an XHR for overview.json, and the response callback for that finally triggers data_used_to_render_the_website.json. Add to that the 19 tracker and advertisement scripts, and well… here we are.
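
                  To make that waterfall concrete, it’s roughly the shape below (sketched with fetch rather than literal XHR callbacks; the file names and element id are the made-up ones from above). Each await is a full round trip that has to finish before the next request can even start:

                  // foo.js, roughly: nothing renders until the sequential round trips finish.
                  async function render(): Promise<void> {
                    // Round trip 1: fetch the overview to find out what to ask for next.
                    const overview = await (await fetch("/overview.json")).json();

                    // Round trip 2: only now can we request the data the page actually shows.
                    const data = await (
                      await fetch(`/data_used_to_render_the_website.json?id=${overview.id}`)
                    ).json();

                    document.querySelector("#app")!.textContent = JSON.stringify(data);
                    // ...while the 19 tracker and ad scripts load in parallel the whole time.
                  }

                  render();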

                  There’s a reason it works like that too, because people want mobile apps (for different platforms!) and whatnot these days too, and it turns out it’s actually quite tricky to serve both a decent website and a decent mobile app. It’s probably underestimated how much mobile complicated web dev.

                  Note that some things are ridiculously slow. I have no idea how Slack manages to introduce 200ms to over a second of input lag on their chat; it’s like using a slow/inconsistent ssh connection. It’s been consistent crap ever since I first used it 5 years ago and I don’t quite understand how people can use Slack daily without wanting to chuck their computers out the window. But slow and crappy software is not a new thing, and Slack seems the exception (for every time I visit Slack, I’ve also visited dozens of sites that work well).

                  At any rate, it’s much more complex than “programmers stopped thinking about the quality of their programs”.

                  1. 5

                    Most webpages aren’t that slow; most of the “big” websites generally work quite well in my experience. There are exceptions, of course, but we tend to remember negative experiences (such as websites being slow) much more strongly than positive ones (websites working well).

                    We certainly use a different subset of the modern web! I find even Gmail is sluggish these days, and often switch to the HTML-only mode. Jira (and basically all Atlassian products) are what I would call “big” websites, and wow are they slow.

                    A lot of the slowness is from the network. When you’re sitting in front of a webpage waiting for it to load stuff, most of the time the problem is that the HTML loads foo.js, which triggers an XHR for overview.json, and the response callback for that finally triggers data_used_to_render_the_website.json. Add to that the 19 tracker and advertisement scripts, and well… here we are.

                    Eh, I don’t fully buy this. Is it the network’s fault that every website comes bundled with 30 JS modules that need to load and then call out for more crap? I mean sure, with no-js this doesn’t become as much of an issue – and I don’t actually understand how someone can use the modern web without it – but I wouldn’t blame the network for these problems.

                    There’s a reason it works like that too, because people want mobile apps (for different platforms!) and whatnot these days too, and it turns out it’s actually quite tricky to serve both a decent website and a decent mobile app. It’s probably underestimated how much mobile complicated web dev.

                    Modern webdev is unbelievably complicated. I’ve been working on a project recently that dives into the depths of linker details, and it is nothing compared to how complicated setting up something like webpack is. But I would also argue that this complexity is superficial. Things like Svelte and Solid come to mind for what I think the modern web should look more like.

                    Note that some things are ridiculously slow. I have no idea how Slack manages to introduce 200ms to over a second of input lag on their chat; it’s like using a slow/inconsistent ssh connection. It’s been consistent crap ever since I first used it 5 years ago and I don’t quite understand how people can use Slack daily without wanting to chuck their computers out the window. But slow and crappy software is not a new thing, and Slack seems the exception (for every time I visit Slack, I’ve also visited dozens of sites that work well).

                    I’m right there with you! It’s really unfortunate that no matter what company I go to, and how good their engineering fundamentals are, the tools used are Jira, Slack and every other slow website.

                    At any rate, it’s much more complex than “programmers stopped thinking about the quality of their programs”.

                    I completely agree with you! Unfortunately, I’ve seen quality take a back seat to “just get something to work!” far too many times, so I do think it is a part of the problem.

                    1. 5

                      I’m not a huge fan of modern web dev either; in my own app I just use <script src=..> and for the most part ignore much of the ecosystem and other recent(-ish) developments. /r/webdev called me “like the anti-vaxx of web dev” for this, as I’m “not listening to the experts, just like the anti-vaxx people” 🤷‍♂️😂

                      But at the same time the end-result is … kind of okay, performance-wise anyway. Most of my gripes tend to be UX issues.

                      Is it the network’s fault that every website comes bundled with 30 JS modules that need to load and then call out for more crap? I mean sure, with no-js this doesn’t become as much of an issue – and I don’t actually understand how someone can use the modern web without it – but I wouldn’t blame the network for these problems.

                      That’s kind of an unrelated issue; a lot of these SPA websites are built against a JSON API, so it needs to call that to get the data and it just takes time, especially if it’s a generic API rather than an API specifically designed for the app (meaning it will take 2 or more requests to get the data). Good examples of this are the Stripe or SendGrid interfaces, which feel incredibly slow not so much because they’ve got funky JS, but because you’re waiting on those API requests.
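
                      In other words, roughly the difference between these two (the paths are invented for illustration):

                      // Generic API: the dashboard needs two dependent round trips.
                      async function loadDashboardGeneric() {
                        const account = await (await fetch("/api/account")).json();
                        // Only once we have the account id can we ask for its charges.
                        const charges = await (
                          await fetch(`/api/accounts/${account.id}/charges?limit=10`)
                        ).json();
                        return { account, charges };
                      }

                      // Purpose-built endpoint: one round trip returning exactly what the page shows.
                      async function loadDashboardDedicated() {
                        return (await fetch("/api/dashboard")).json();
                      }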

                      1. 4

                        I’m not a huge fan of modern web dev either; in my own app I just use <script src=..> and for the most part ignore much of the ecosystem and other recent(-ish) developments. /r/webdev called me “like the anti-vaxx of web dev” for this, as I’m “not listening to the experts, just like the anti-vaxx people” 🤷‍♂️😂

                        This is hilarious! I’m with you though.

                        That’s kind of an unrelated issue; a lot of these SPA websites are built against a JSON API, so it needs to call that to get the data and it just takes time, especially if it’s a generic API rather than an API specifically designed for the app (meaning it will take 2 or more requests to get the data). Good examples of this are the Stripe or SendGrid interfaces, which feel incredibly slow not so much because they’ve got funky JS, but because you’re waiting on those API requests.

                        That’s a good point. It might not necessarily be slow frontend code, but it is still slow engineering. At one of the previous places I worked, I fixed up an endpoint which was just spewing data, and upon talking to the frontend engineers I found they were using maybe 10% of it. Made a pretty significant speed improvement by just not sending a ludicrous amount of unused data!

                        1. 2

                          Yeah, it’s inefficient engineering, but consider the requirements: you need to make some sort of interface which works in a web browser, an Android app, an iOS app, and you frequently want a customer-consumable API as well.

                          This is not easy to do; if you build a static “classic” template-driven web app it’s hard to add mobile support, so you have to build an API alongside the web app for the mobile apps to consume, which is duplicate effort. You can trim the API to just what you need for this specific page, but then other API users who do need that data no longer have it.

                          For a comparatively simple site like Lobsters it’s fine to not do that since the desktop UI works reasonably well on mobile too, but as soon as things start getting more involved you really need a different UI for mobile, as it’s just a different kind of platform.

                          It’s a difficult problem to solve, and it was much easier 20 years ago because all computers were of the same type (a monitor with a keyboard and a mouse). People are still kinda figuring out how best to do this, and in the meanwhile we have to suffer Stripe’s UI making 15 API requests to load the dashboard.

                          GraphQL is intended to solve this problem, by the way, but that has its own set of problems.

                          1. 1

                            Oh, it’s certainly not easy, but I also don’t think it is particularly difficult. The reality is that there need to be separate APIs for each of the use cases (app, webpage and customer-consumable), since they all have different upgrade cycles and usage patterns. One of the problems I often see is everyone wanting there to be one API for everyone, and that will never work efficiently.

                            Netflix has a nice blog post [1] about how they handle the number of APIs they have (Netflix supports gaming consoles, smart TVs and a whole host of other platforms, and it isn’t one “mega” API for all of them). They essentially have a proxy API on the server side which bundles all of the microservice APIs into whatever API calls the various frontends need. That way backend engineers can keep publishing APIs as they see fit for their microservices, and frontend engineers can group together what they need into an efficient API for their platform. And note, I’m using “frontend” loosely since there are so many different platforms they support.
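
                            Roughly the shape of it as I understand the post – everything below is invented for illustration (the service names, fields, and Node 18+ for the global fetch are my assumptions, not Netflix’s actual code):

                            // Sketch of a per-platform "proxy API": the frontend team owns this endpoint,
                            // it fans out to the backend microservices it needs, and returns only the
                            // fields that this particular client (say, the TV home screen) renders.
                            import { createServer } from "http";

                            const getJson = async (url: string) => (await fetch(url)).json();

                            createServer(async (req, res) => {
                              if (req.url !== "/tv/home") { res.statusCode = 404; return res.end(); }

                              // Fan out to the microservices in parallel...
                              const [user, rows] = await Promise.all([
                                getJson("http://user-service.internal/me"),
                                getJson("http://catalog-service.internal/rows?limit=10"),
                              ]);

                              // ...and shape one response for this one screen on this one platform.
                              res.setHeader("content-type", "application/json");
                              res.end(JSON.stringify({
                                greeting: user.firstName,
                                rows: rows.map((r: any) => ({ title: r.title, ids: r.videoIds })),
                              }));
                            }).listen(8080);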

                            Of course, whether this effort is necessary for a small shop is unclear, but for a bigger place (like Stripe or SendGrid) it is frankly poor engineering not to be fast.

                            I was very excited about GraphQL for a little while, but you’re right, it does come with its own set of problems. It remains to be seen whether it is actually worthwhile.

                            [1] https://netflixtechblog.com/embracing-the-differences-inside-the-netflix-api-redesign-15fd8b3dc49d

                            1. 3

                              Of course, whether this effort is necessary for a small shop is unclear, but for a bigger place (like Stripe or SendGrid) it is frankly poor engineering not to be fast.

                              Yeah, I don’t know. Netflix is perhaps a bit of an outlier; they also managed to make their extensive microservice platform work very well for them, whereas many other organisations’ experiences have been markedly less positive.

                              Perhaps a big part of the problem is that there’s no “obvious” way to do any of this. That Netflix approach looks great, but it’s also a very non-trivial amount of bespoke engineering effort with quite a lot of operational overhead. It doesn’t look like something you can pick off the shelf and have “just work” for you, so companies tend to focus their engineers’ efforts elsewhere, as they feel that gives them better ROI. That sounds kinda lazy, but these kinds of projects can also fail, or worse, get implemented and then turn out not to work all that well, and then you’re stuck with it and it becomes a liability.

                              The issue with GraphQL is that it’s really hard to implement well on your backend. Basically, it allows your API consumers to construct arbitrary queries against your database (okay, you can lock this down, but that’s how it’s supposed to be used). It’s certainly possible to facilitate this, but it’s not an easy problem to solve. Perhaps radical improvements in the network stack (e.g. with QUIC and others) will alleviate much of this (because all of that is also an inefficient organically grown mess).

                              By the way, on HN a few weeks ago one of the Stripe devs said they’re working on fixing the performance of the dashboard after I complained about it, so at least they’ve acknowledged it’s an issue and are actively working on a fix. Perhaps things will get better :-) Actually, I already have the impression it’s better than half a year ago.

                              Also, I’d like websites from “small shops” to be fast as well. None of this should be high-end engineering you can only do if you can afford an ops team and 16 devs.

                              My own approach is to just use server-rendered templates with about 700 lines of jQuery sprinkled on top :-) It sounds very old-fashioned, but it actually gives very good results, IMHO. Then again, I’m not Stripe or SendGrid and am operating on a rather different scale in almost every way.


                              Also, related: last night (after my previous comment) I happened to be talking to a friend about the new reddit design, and I figured I’d try it and see if it’s any better (I normally use old.reddit.com). I couldn’t believe how slow it is. Like, I literally can’t scroll: it just jumps all the time and Chrome uses both CPU cores to the max. Granted, I have a pretty slow laptop, but what the hell? What a train wreck. My laptop can play movies, play games, compile code, and do all sorts of things, but it struggles with … a text website.

                              This is the kind of stuff you’re talking about. Yeah, some sites really are inexcusably bad.

                            2. 1

                              if you build a static “classic” template-driven web app it’s hard to add mobile support

                              It was, like, a handful of lines of CSS and HTML for my static site to be supported on mobile, with no frameworks. And often I go on a website and the sheer number of frameworks they use gets in the way of me using their site on mobile. A UK government coronavirus website was apparently optimized for mobile, but none of the buttons worked, even when I zoomed in on them. A BBC News website was impossible to navigate because it kept freezing up on my recent phone.

                              Whereas web forms from 20 years ago are perfectly fine to use: they might require some zooming, but nowhere near as much effort to navigate, and at least they work.

                              Things will work well enough if you let go of the frameworks. Let the browser do its job!

                    2. 1

                      Hardware has increased in speed and gained fancy new features at the same time.

                      Yet hardware also has loads of legacy cruft and compatibility layers making things slower than they would need to be. But that’s inevitable with the incremental nature of development. And I think incremental development is the only way to gain experience. Then every now and then a new technology comes around and you get a chance to start from a somewhat blank slate and apply what you’ve learned. I have high hopes for RISC-V, for example. And I think there are similar developments in software. Better tools and languages allow for writing better and faster software. A good example is Rust, where you regularly see blog posts about Rust reimplementations of the supposedly pristine C implementations of ancient Unix tools outperforming the C version, because the compiler can optimize more, since it has stronger guarantees from the language, and you can use more high-level constructs. Similarly, I think that WebAssembly will improve the web performance story quite a bit.

                    3. 5

                      Having used some old editors that came with C compilers prior to 1985, running them on emulators to try to do real work on those systems, I can attest that they are woefully underpowered compared to modern editors.

                      1. 2

                        I still have fond memories of the Lattice C (SAS/C) editor circa 1987. Maybe I’m a masochist.

                        That being said, I hate editors with a lot of features. I end up never using 90% of the features so all those features do for me is add complexity, slowness, bugs, and unexpected modalities. I suppose that explains why my favorite editors are ED (from AmigaDOS/TRIPOS), the original vi (yes I know modes), and sam…

                        1. 1

                          I used to be similar, but after neovim added async plug-ins (so the features don’t affect the performance of the main interface), I started to build a much more extensive collection of them. Language servers are a fantastic innovation, allowing the good parts of working in an IDE without loading up some monstrosity.

                      2. 2

                        Tangentially,

                        I spent a few years using ex-vi (I’ve also used ed(1) and vim’s vi(1) in the past), C, POSIX, and surf(1), along with a minimal desktop environment (roughly, openbox with custom shortcuts, and XMonad), and it was pretty fun. It’s a nice feeling to know that my programs will work in a decade with minimal changes. Now I’ve moved to Python and Doom Emacs. On the one hand, nothing has changed much; some things are maybe easier, maybe not. On the other hand, it’s given me a respect for the things that are easier.

                        One thing I will note is that the lag in using these new ‘high powered tools’ is much, much greater, despite the fact that doom emacs, for example, goes out of its way to keep latency while actually using the editor from bothering the user. Loading up chromium takes an age when you’re used to surf popping up in under a second. Waiting for lightdm to start, log in, and then ‘start’ GDM is excruciating when you’re used to being dropped to a terminal after boot and loading Xorg in under a second.

                        There isn’t that much of an advantage to all of these bells and whistles; most of the complex stuff averages out, because grep mostly gives results in the same time that an IDE takes to show its dialog. Everything you can do now you could achieve with shell scripts, with the same convenience and about the same day-to-day timing.

                        1. 1

                          You can work in a modern programming language with a text editor from 1994: Russ Cox co-authored Go with Acme.

                          1. 2

                            Russ made a video intro on it: https://research.swtch.com/acme

                            1. 1

                              You leave me no choice, I must try it again now! https://9fans.github.io/plan9port/ or http://drawterm.9front.org/ + http://9front.org/

                          2. 1

                            Vim is 37MB. Can someone please explain to me why a text editor like vim needs such a huge size?

                            1. 5

                              Where do you get this number? The vim executable on my work Ubuntu desktop is 2.6 MB, another <megabyte of shared objects, and 4.5 MB for libpython3.6m which is optional. A download of Vim for Windows is 3.6 megabytes, so about the same. Did you miss a decimal point?

                              1. 2

                                Not to put words into @Bowero’s mouth but the latest VIM sources are almost 60MB, so maybe they were referring to source or source+buildtime resources or translations or something?

                              2. 5

                                37M seems wrong, it’s nowhere near that on my system:

                                -rwxr-xr-x 1 root root 2.6M Jun 12 18:05 /usr/local/bin/vim*
                                

                                That 37M probably includes runtime files? Which aren’t really needed to run Vim but are just useful:

                                88K     ./plugin
                                92K     ./colors
                                128K    ./tools
                                136K    ./print
                                140K    ./macros
                                224K    ./pack
                                276K    ./compiler
                                892K    ./indent
                                1.1M    ./ftplugin
                                2.1M    ./autoload
                                2.5M    ./tutor
                                3.6M    ./spell
                                6.7M    ./syntax
                                8.2M    ./doc
                                27M     .
                                

                                Does Vim need support for 620 different filetypes, detailed reference documentation, a “tutor” in 72 different languages, or some helpful plugins shipped by default? I guess not; but it doesn’t get loaded if you don’t want to use it. It’s just useful.

                              3. 1

                                I used an editor under MS-DOS that was 3K in size, and could handle text files up to 64k (if I recall—it’s been 30 years since I last used it, and I only know the size because I typed in the source code from a magazine). It was a full screen editor.

                                Turbo Pascal pre-4 was I think 50k, and that included the editor, so that would fit your criteria. Do these editors give the functionality found in modern editors or IDEs? No. But they are usable in a sense [1].

                                [1] In the “Unix is my IDE” sense, using the command line tools to search and process text.

                                1. 6

                                  But would you still choose to use those tools today? I mean, Unix was built with teletypes and printers, so you can undoubtedly build very useful things with limited tools, but why use a limited tool when you’ve got enough computing power to use a more powerful one?

                                  1. 1

                                    I might, especially if I found myself back on MS-DOS. I’ve tried various IDEs over the years (starting with Turbo Pascal 3) and I never found one that I liked. Back in the 80s, the notion that I would be forced to learn a new editor (when I had one I knew already) is what turned me off. Since the 90s, I’ve yet to find one that wouldn’t crash on me. The last time I tried (just a few years ago) I tried loading (never mind editing or compiling) a single C file (57,892 bytes) and it crashed. Hell, my preferred MS-DOS editor (a 40K executable, not the 3K one I mentioned above), written in 1982, could handle that file. You can’t use what doesn’t run.

                                2. 1

                                  A few minutes?

                                  #!/bin/sh -e
                                  
                                  mkdir -p "$HOME/.config"
                                  
                                  case "$1" in
                                  ('')
                                          exec sed '=; s/^/  / p; s/.*//' "$HOME/.config/todo"
                                          ;;
                                  (add)
                                          shift
                                          exec echo "$*" >>$HOME/.config/todo
                                          ;;
                                  (del)
                                          tmp=$(mktemp)
                                          sed "$2 d" "$HOME/.config/todo" >$tmp
                                          exec mv "$tmp" "$HOME/.config/todo"
                                          ;;
                                  (edit)
                                          exec $EDITOR "$HOME/.config/todo"
                                          ;;
                                  (*)
                                          echo >&2 "usage: todo add text of task to add"
                                          echo >&2 "       todo del num"
                                          echo >&2 "       todo edit"
                                          ;;
                                  esac
                                  
                                  1. 4

                                    Right-o, now ask your mother, spouse, brother, or other non-technical person to use that. It’s equivalent (probably even better) for people like us, but it’s not really an equivalent user-friendly GUI program.

                                    1. 1

                                      I honestly think it could have been done like this in 1985 (35 years ago), back when people unable to use a shell were entirely unable to access a computer’s files and features anyway.

                                      If it really got extremely popular, it might have been adopted by 30 users, author included.

                                      In 1995, whoever found it on the internet and was willing to give it a try would download it, spawn COMMAND.COM (Windows 95), try to run it, see it fail, open it, wonder “what the fuck?”, and close it.

                                      I guess a todo.apk would have a bit more luck today, for roughly the same amount of time spent in Android Studio.

                                      1. 2

                                        I honestly think it could have been done like this in 1985 (35 years ago), back when people unable to use a shell were entirely unable to access a computer’s files and features anyway.

                                        Yeah, probably. But, for better or worse, the “average user” is quite different now than it was 35 years ago. Actually, this is probably one of the reasons things are so much more complex now: because while I’m perfectly happy with a script based on $EDITOR ~/.config/todo, this clearly won’t work for a lot of people (and that’s fine, not everyone needs to be deeply knowledgeable about computers).

                                        1. 1

                                          Agreed! A lot of things happen in 35 years! Software shaping society at high pace, society shaping software likewise.

                                        2. 1

                                          Which means that the challenge is becoming harder and harder for many reasons (more people, less skilled and more distributed, expecting more done faster from more diverse types of computers).

                                          We need to keep the software stack manageable: if a simple TODO app already takes that many libraries to get the job done in reasonable time, I do not want to know what your accounting software will require to be maintained (SAP, anyone?).

                                          And a TODO app made with 13k dependencies means a huge amount of time spent maintaining those 13k dependencies for everyone. Now we cannot stop maintaining them, because every TODO app in the world relies on them.

                                  2. 7

                                    The article decries libraries as software bloat, but in 1993, it was the opposite declaration.

                                    Poor management of software development is another important contributor of flab. Good management would prevent programmers from spending countless hours reinventing wheels, all of which end up on your hard disk.

                                    Also, OO and reusable objects will save us from oversized software.

                                    Perhaps the most promising development is the coming of object-oriented operating systems. The upcoming Taligent operating system from Apple and IBM, along with Cairo from Microsoft, promises to save unnecessary code by providing the user with a series of objects that can be purchased, enhanced, and linked together as necessary.

                                    Byte magazine, April 1993.

                                    1. 6

                                      It was Edsger W. Dijkstra who tried to improve the quality of code and coined the concept of structured programming.

                                      This line makes me think that the piece is starting from a bit of hero worship more than anything else. Structured programming is a wonderful idea, but it doesn’t have much to do with bloat–it’s just off topic.

                                      On top of that, structured programming has won completely[0], and just about every party to every dispute in programming today uses structured programming. You can argue about the next layer of ideas, about how to handle dependencies and abstraction, and OOP and everything else, but you’re probably doing structured programming.

                                      [0] Every so often someone will note that the Linux Kernel does use GOTOs, but reading Knuth’s paper shows just how prevalent GOTO statements were at the time https://www.cs.sjsu.edu/~mak/CS185C/KnuthStructuredProgrammingGoTo.pdf.

                                      Edited to emphasize that the main point is structured programming is unrelated to bloat. The point that structured programming has won is also true, and contradicts the thrust of the post, but is less important.

                                      1. 4

                                        Modern programming languages have completely destroyed the old-fashioned goto. You can only use goto within a given subroutine, as opposed to being able to use it from any arbitrary line of code to any arbitrary line of code, absolutely regardless of any other boundaries the language has. In short, modern goto respects the subroutine (and, therefore, the class and/or module) as an absolute bulkhead, whereas old-fashioned gotos allowed you to knock as many holes in the walls as you wanted.

                                      2. 4

                                        I wonder if the current state of software was inevitable. The article’s point that it is entirely due to libraries, I think, slightly misses the root cause. As software began to dominate, we had a complete dearth of programmers. I believe that this led to the rise of languages and tools which were very quick for beginners to learn and use – in order to increase the supply of programmers. This of course leads to dynamically typed languages like Python, Ruby, and JavaScript – which are easy to learn!

                                        But of course they are slow, since the complexity of programs was shoved onto the VM and interpreters. So much of modern software is written in these languages because they optimized for the beginner experience – which for a long time was extremely important.

                                        Over time, as the programmer supply began to level out a bit, we’re seeing the rise of statically typed languages – which also happen to be faster! They are more complicated to learn and use, but we are no longer optimizing for the beginner’s experience. For instance, Rust programs use a ton of libraries but are incredibly fast (being compiled and optimized). It’s similar with Go, although I’m not particularly familiar with the library situation in Go.

                                        I don’t think the issue is libraries, but rather the type of developer our economy optimizes for. While programmer supply is low, we optimize for the beginner experience at the cost of speed and reliability. As the supply normalizes, I think we’ll see (and are starting to see) speed and reliability make a comeback.

                                        1. 4

                                          I feel the biggest problem with the influx of developers is just lack of training and collective experience. A bunch of 20-somethings in their first job building a startup is almost a stereotype, but it’s also kinda true, to a degree. You can be very smart and talented, but that doesn’t compensate for experience.

                                          Add to this that 1) many are constantly switching languages/frameworks instead of actually gaining in-depth knowledge of a specific piece of tech, and 2) people don’t tend to stay in the same position for more than a few years (in fact, doing so is “stagnation” and considered a sign of incompetence by many; there was a large HN thread on this just the other day), and you get a lot of software being written by a lot of different people working on a project for a relatively short period of time, who tend not to be very experienced (either in general, or with the tech they’re working with).

                                          There are plenty of experienced people too of course, but if you have 8 inexperienced people and 2 experienced ones then it’s kinda hard to manage. A better ratio would be the reverse. At my last job a lot of “senior developers” were in their 20s, which is not out of the ordinary in the wider industry. These were not bad developers, but I think software is the only industry where it’s common to be considered “senior” in your 20s.

                                          I’m not sure if there’s anything that can be done about this, except wait until the growth slows down. And maybe also stop expecting every developer to know React, Vue, JavaScript, TypeScript, PHP, Python, MySQL, Apache, Docker, k8s, AWS, or what-have-you, and stop considering them “outdated” if they don’t (I was rejected for a Go job last year because I don’t know React).

                                          I don’t think abstraction or “shoving complexity” is a bad thing. If I’m writing some code then ideally I’d like to focus 100% on the actual code and logic, and as little as possible on “housekeeping”. It’s just more efficient. I’m okay with this having a performance impact, because much of the time it’s still fast enough.

                                          I think the reason that static typing is becoming more popular now is just that we, as an industry, have more in-depth experience with dynamic typing. It takes at least a decade to get the experience of building and maintaining complex systems, and it turns out that people were probably a bit too optimistic about dynamic typing.

                                          No one really knows what the best way to write software is, though; even a fundamental question such as “is static typing better than dynamic typing?” is essentially unanswered. It’s also just a pendulum swing in the zeitgeist, much like election results swinging back and forth over time: everything looks great when you’re just reading a party programme, but a party actually being in power tends to be different.

                                          I’m not particularly familiar with the library situation in Go.

                                          Dependencies tend to be eschewed in general, and added only when really needed. “A little copying is better than a dependency” is one of the “Go proverbs”.

                                        2. 4

                                          Some of this is trading memory usage for simplicity. For example, it used to be a heroic effort to identify misspellings of words. There was a lot of code in spell checkers to identify words, compress to trigrams and compare frequencies, and various other tricks.

                                          Today, you load a dictionary into a hash table, drop some common suffixes or prefixes, and see if the word’s there. Yeah, there’s a bit more, but in general you can throw memory at the problem and not worry about compressed tables and heuristics.
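
                                          A sketch of the “throw memory at it” version (the word-list path and the suffix list below are just placeholders):

                                          // Naive modern spell check: load the whole word list into a Set, test
                                          // membership, and retry with a few common suffixes stripped.
                                          import { readFileSync } from "fs";

                                          const words = new Set(
                                            readFileSync("/usr/share/dict/words", "utf8")
                                              .split("\n").map((w) => w.toLowerCase()).filter((w) => w.length > 0)
                                          );

                                          const suffixes = ["s", "es", "ed", "ing", "ly"]; // crude, just for illustration

                                          function spelledOk(word: string): boolean {
                                            const w = word.toLowerCase();
                                            if (words.has(w)) return true;
                                            // Drop a common suffix and try again ("walked" -> "walk").
                                            return suffixes.some((s) => w.endsWith(s) && words.has(w.slice(0, -s.length)));
                                          }

                                          console.log(spelledOk("walked"), spelledOk("wlaked")); // true false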

                                          On the other hand, there’s a huge amount of boneheaded code that isn’t trading memory for simplicity – it’s just plain wasteful, layering abstraction over abstraction, instead of using the surrounding system effectively, or clearly writing algorithms.

                                          1. 1

                                            using the surrounding system effectively

                                            Somehow, I have the impression that programs that avoid making use of the surrounding system, aiming to be independent of it (for portability, say), end up being, on the contrary, very hard to port.

                                            Example: a whole programming language: Java.

                                            clearly writing algorithms

                                            Interesting how, on one hand, there is bit twiddling to decrease memory usage, while in some cases a peek at an algorithms book would have achieved as much memory saving.

                                          2. 4

                                            This is a thoughtful post, which I was glad to read, but I’m not convinced the author’s solution is, um, a solution. On the contrary, I think that the proposed solution - make many small libraries - has already been tried in some languages, such as JavaScript. It turns out that it contributes to the problem rather than making it better.

                                            Notionally, I would have thought that unnecessary functionality in libraries is among the problems that smart linkers are supposed to address. I realize that not all languages have smart linkers, and in fact many languages have semantics that make cross-module dead code removal all but impossible. I think that points to a problem that I suspect is bigger: In modern languages, we’ve built a lot of layered abstractions, and these abstractions come with a lot of overhead.

                                            I think this is a question that could be handled quantitatively, so I won’t claim to know what “the” answer is, but I do think it deserves much more serious research than anything I’ve seen to date. The author is entirely correct that we can’t go on like this.

                                            1. 2

                                              Part of the problem in the JavaScript world is that, by the time code reaches the browser, there is no concept of libraries or modules or dependencies — there are only blobs, even though they may have been minimised, uglified and polyfilled from npm libraries on the developer’s machine.

                                              So we end up downloading hundreds of copies of jQuery every day.

                                              1. 1

                                                Very true.

                                              2. 2

                                                Sometimes, it is good to get a topic handled properly by dedicating a library to it, such as:

                                                • formats and encoding (JSON, low-level wire protocols, DSLs, audio/image…)
                                                • a hard topic, like crypto, that rarely messes with a format (X.509 being a notable exception), or math (large-number handling)
                                                • communication with hardware, or a third-party project: that lets the platform being interacted with evolve and update their library, and lets everyone benefit from the changes without having to update their own code.

                                                But a library should avoid depending on another library.

                                              3. 1

                                                I think about this a lot. The worst part is that we’re not even getting a lot of additional functionality in the process. I find that my usage patterns with my phone haven’t changed much in the past decade. The functionality of the apps I use is about the same, yet the hardware needed to run the device has gotten incredibly more powerful.

                                                I feel like part of the problem here lies with capitalism creating the wrong incentives. Companies are driven to continuously sell new hardware, so it’s actually convenient for them that software keeps getting ever more bloated, necessitating new hardware with each cycle.