1. 36

Twitter thread.


  2. 27

    Knuth has something similar about editable vs reusable code

    I also must confess to a strong bias against the fashion for reusable code. To me, “re-editable code” is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.

    Interview with Donald Knuth

    The older I get and the more code I write the more I’m convinced Armstrong and Knuth are right.

    1. 32

      This is a pretty spot-on example of why I stopped using Twitter; the first reply is someone telling Joe Armstrong that he’s never coded beyond hello world.

      1. 9

        What the Twitter reply is saying is, of course, not that Joe Armstrong never got past Hello World; it’s claiming that being without dependencies is impossible, and that this is just “old guy” advice. Joe Armstrong is having his cake and eating it too, chastising dependencies while the real world revolves around them.

        I think this example is anecdotal to Mr. Armstrong, and as a practice I’d rather people use their own judgement. The argument stinks to me of handwaving, and of the old class of developers who’d rather not learn new stuff.

        If you’re Google you can vertically integrate all you like. Don’t pretend that any place other than the large ones can put up with an engineer trying to argue this in reality.

        1. 6

          My guess is that most languages wrap system IO routines, which means that even a “hello world” isn’t truly dependency free.
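          That wrapping is easy to see in Python, for example: even print() sits on layers of buffering and text encoding above the OS write syscall. A minimal sketch using only the stdlib (a pipe stands in for stdout so the raw bytes can be read back):

```python
import os

# print() layers buffering and text encoding on top of the OS write
# syscall; writing raw bytes to a file descriptor shows the primitive
# that even "hello world" ultimately depends on.
r, w = os.pipe()
os.write(w, b"hello world\n")
os.close(w)
print(os.read(r, 64))  # b'hello world\n'
```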

          Now, if Chuck Moore was making this statement, I’d fully believe it. He made a career out of building things to the exact spec necessary—the only dependencies were often encoded in the CPU (and other hardware)’s silicon.

          1. 6

            That’s not all! In his later years, Chuck Moore built an entire VLSI CAD system in Forth so he could design his own CPUs. Some of these were even fabbed. There might be a few left for sale.

            Sounds extreme, but it’s true. That’s why we call him “Chuck Mooris”.

            1. 5

              He’s the kind of programmer that “full stack” should refer to. An elite level where you can do everything from apps to full-custom silicon.

              1. 4

                @technomancy and I have a t-shirt that’s been in the works for a while. It’s totally my fault that it isn’t currently available (as it’s been done for ages), but it celebrates Chuck in a “witty” way. This might just be the nudge that gets me to follow through with the rest. ;)

          2. 14

            I’m glad that person made that comment, actually, even if it was a bit rude.

            When I first read Joe Armstrong’s statement “Of course - I try to write code with zero dependencies - code I wrote 25 years ago with zero dependencies still works today.”, my first thought was something along the lines of “hm, yeah, that’s sensible advice from Joe Armstrong, and since he’s a well-known name he probably knows what he’s talking about”.

            Then some random guy told him publicly that he was wrong in an un-nuanced way.

            And I thought to myself, well, hm, that’s a rude way to put it - but yes, actually, “A software system simply cannot be built without components based on other components whether they are soft or hard dependencies.” has some merit to it as well.

            Maybe Joe Armstrong isn’t being completely literal when he says that he’s written telephony code for Ericsson base stations that’s run for 25 years with absolutely zero dependencies other than the compiler. Maybe some dependencies are getting “snuck in” via the compiler, or the hardware drivers for those base stations, or in changed hardware itself, even if the Erlang code running in the VM has been around for 25 years unchanged.

            Maybe the code he’s talking about that’s remained unchanged is itself a dependency for other code that changes more frequently - certainly there have been plenty of updates to the capabilities of the phone system over the past 25 years, and I doubt that Joe Armstrong’s quarter-century-old code needs absolutely no additional augmentation now. Maybe telephony is a special domain, and eschewing dependencies is a good fit for solving telephony problems but a bad fit for, say, creating new pieces of software that do things undreamt-of 25 years ago.

            And none of this is to say that Armstrong’s points about dependencies increasing fragility and causing code bloat are invalid. He’s 100% right to point out that it’s bad to add 200K of CSS in order to make a button look pretty. But maybe doing that is the least bad of several trade-offs (if you couldn’t use bloated CSS libraries, maybe web design would look terrible, or maybe a bunch of useful websites would just never have been built). My own opinion is that things like 200K CSS files for one button are a local but not global minimum - the entire web is highly path-dependent, and relies on a bunch of inefficient hacks to make it usable as a software distribution platform, but there’s also a huge breaking-backwards-compatibility cost in moving away from the web and towards a better software ecosystem that does the same things as the web but doesn’t encourage 200K CSS libraries. Maybe webassembly is a small step towards climbing out of that local minimum to something globally better.

            1. 9

              Should we really need a prompt to think that way, though? Whenever I hear anything I imagine the counter position as a test. It’s a way of getting at the nuance. I like to think that it’s something we should foster as an engineering mindset.

              1. 12

                I recently learned about the principle of charity: https://en.m.wikipedia.org/wiki/Principle_of_charity. Most online discussion and tools could use a lot more of it.

              2. 9

                The argument would be helped if the original responder didn’t descend to ‘ball-licking’ level: https://twitter.com/ethericlights/status/1075531837286555648

                1. 7

                  “zero dependencies” but he does have the OTP system to support all this. Not like he’s rewriting the TCP socket logic in every program.

                  I think this is an argument for separating, in our thinking, standard libraries from external dependencies. Standard libraries have very strong backwards-compat requirements in general, so relying on them is really not an issue. So a standard library that is vast and covers a lot of non-controversial but tedious stuff (building HTTP requests, parsing JSON if that’s a thing, listening on sockets) can mean that you won’t feel the need to pull in external dependencies for the most part.
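                  A sketch of that in Python, whose stdlib happens to cover all three of those examples; nothing here pulls in an external package (the endpoint URL is hypothetical):

```python
import json
import socket
import urllib.request

# Build an HTTP request (constructed, not sent) with the stdlib only.
req = urllib.request.Request(
    "https://example.com/api",  # hypothetical endpoint
    headers={"Accept": "application/json"},
)

# Parse JSON with the stdlib.
payload = json.loads('{"deps": 0}')

# Listen on a socket with the stdlib; port 0 lets the OS pick a free port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()

print(req.get_method(), payload["deps"], srv.getsockname()[1] > 0)
srv.close()
```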

                  And what he said about vendoring in stuff and ripping out what you don’t need is… pretty good advice for lots of usages! Though libraries get added improvements, if you vendor in the libraries you can just change the stuff up to your specific use cases without much issue.

                  But yeah “I write a bunch of programs with no dependencies” kinda precludes a lot of stuff if you were doing something in C (for example). But zero-dep Python/Erlang(OTP) seems really doable in general

                  1. 3

                    Broadly agree, but zero-dependency C is often pretty doable too IME, for the kind of things where C shines. And even if you do need dependencies, well-known C libraries tend to be more stable than e.g. the nodejs ecosystem.

              3. 13

                This is baby filled bathwater.

                The problem is not the dependencies, so much as the pretending that you don’t have to store the code you depend on.

                Pretty much all software has dependencies. Most of the pain comes from late-binding-via-package-manager dependencies.

                Heh, I sound like a broken record: https://lobste.rs/s/3pfy6o/backdoor_popular_event_stream_npm_repo#c_yrsqfo
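                Storing the code you depend on can be as simple as vendoring the module into your own tree and importing it from there, rather than resolving it through a package manager at install time. A toy sketch (the leftpad module and its contents are hypothetical; a temp directory stands in for a vendor/ directory in the repo):

```python
import pathlib
import sys
import tempfile
import textwrap

# Create a tiny "vendored" module on disk, standing in for a library
# copied into the project's own tree.
vendor = pathlib.Path(tempfile.mkdtemp()) / "vendor"
vendor.mkdir()
(vendor / "leftpad.py").write_text(textwrap.dedent("""
    def left_pad(s, n, ch=" "):
        return s.rjust(n, ch)
"""))

# Imports now resolve to the pinned, locally stored copy.
sys.path.insert(0, str(vendor))
import leftpad

print(leftpad.left_pad("42", 5, "0"))  # 00042
```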

                1. 4

                  Lovely. I said something similar at https://news.ycombinator.com/item?id=16882140#16882555.

                  However, I weakly disagree that dependencies are not-at-all a problem. The question you run into is how far down you want to go; it’s dependencies all the way down. Our proposal of copying dependencies is a kludge. A powerful kludge that will hopefully help better see the real underlying issues. But still a kludge.

                2. 7

                  I can definitely see Joe’s point, and far be it from me to claim that I know one thousandth of what he knows. HOWEVER. Every time he says stuff like that it seems to be in reference to telephony/network/low level infrastructure related code, and I think this is an unfair comparison. The kind of code that he wrote pretty much his whole life has significantly different requirements from web apps or SaaS backends, or mobile apps.

                  1. 4

                    I think if you look at what he said, he makes that point himself:

                    ‘Code I wrote 5 years ago with external dependencies often fails.’

                    I presume that code is more ‘modern’ in context than the low-level code. His point isn’t that he did it better, but that dependencies are a risk to running software for a long time, and if that’s your aim, try and avoid them whatever your domain.

                    A lot of web app or SaaS code simply won’t last, so I guess it doesn’t matter so much for them.

                  2. 6

                    Reminds me of my favourite credo:


                    @sjl have you tried Erlang?

                    1. 4

                      Interesting that Losh lists Clojure as being something that breaks every time he updates it, when there’s a thread on the Lobsters front page about how Clojure never breaks when you update it.

                      1. 1

                        Might be the same effect as with TeX? TeX itself is incredibly stable. However, most people use LaTeX and a bazillion packages on top which are not stable at all.

                        1. 5


                          Matthew Bauer wrote this nice post on making reproducible LaTeX documents using Nixpkgs.

                          1. 1

                            I think that’s a reasonable observation. Losh does describe pain when having to specify version numbers for libraries.

                      2. 6

                        When somebody asked me last year for a motto that would fit on a license plate, I said NODEPS. But it took me half a career to figure this out, and it’s still frustrating how unactionable such advice from the greats is. I wish there was a well-defined way to take control of my stack, to collaborate with others without creating profligate dependencies, to be able to understand the internals of arbitrary codebases on an on-demand basis. Instead I’m having to figure out answers from scratch.

                        I’m likely more than halfway through my working career, and I’ll probably spend the rest of my life on it. Hopefully it’s not all just screaming into the void.

                        1. 3

                          NODEPS was a pleasant surprise when I recently evaluated the Open Location Code project: it provides deps-free implementations for several platforms.

                          Of course, this is easy to do for a pure algorithm like a “geo hash”. This hints at a parallel to “unit tests” vs “integration tests”. And a parallel to immutable vs. mutable programming models: state tends to manifest as a cognitively-expensive implicit dependency, which becomes explicit when you try to extract the system for re-use in other systems.
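                          To illustrate the “pure algorithm” case, here is a toy geohash-style encoder in the spirit of (but far simpler than) Open Location Code: pure state-free computation, so it needs nothing beyond the language itself.

```python
def encode(lat, lon, bits=16):
    """Toy geohash-style code: interleave bits from binary subdivision
    of the longitude and latitude ranges. Nearby points share prefixes."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    code = 0
    for i in range(bits):
        if i % 2 == 0:  # even steps refine longitude
            mid = (lon_lo + lon_hi) / 2
            code = (code << 1) | (lon >= mid)
            if lon >= mid:
                lon_lo = mid
            else:
                lon_hi = mid
        else:  # odd steps refine latitude
            mid = (lat_lo + lat_hi) / 2
            code = (code << 1) | (lat >= mid)
            if lat >= mid:
                lat_lo = mid
            else:
                lat_hi = mid
    return code

# Nearby points fall into the same coarse cell:
print(encode(51.5, -0.1, 8) == encode(51.6, -0.2, 8))  # True
```

                          Because it touches no file, network, or global state, extracting it for reuse elsewhere is trivial; that is the contrast with stateful code the comment above is drawing.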

                          1. 1

                            I’m not quite following the parallel. Could you elaborate?

                        2. 5

                          The proliferation of easy package managers like Rubygems and NPM has made this lesson much more obvious. The easier it is for a dependency to change, the more likely one of those upstream changes will break you.

                          1. 3

                            OTOH, it’s become very easy to pin and document your dependencies with great reliability and precision, so from that perspective his attitude does seem a bit out of date.
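                            For instance, in Python the stdlib alone can produce an exact pin list of every installed distribution (writing these lines out gives the conventional pinned requirements.txt):

```python
# Record exact versions of everything installed, using only the stdlib
# (importlib.metadata, available since Python 3.8).
from importlib import metadata

pins = sorted(
    f"{dist.metadata['Name']}=={dist.version}"
    for dist in metadata.distributions()
)
print(len(pins), "pinned entries")
```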

                          2. 5

                            The subject is probably one of those domains where Lindy effect manifests itself.

                            “Dependency” is a term which is usually applied to newer software, e.g., your shiny new mobile UI framework; in opposition, compilers, standard libraries such as glibc, TCP stacks, are not framed as such, though, in some sense, they are.

                            So I buy Mr. Armstrong’s argument in the sense that avoiding the introduction of such obvious dependencies in your design is a good rule of thumb, for they can be expected to have a worse life expectancy than older, well-proven technologies.

                            1. 3
                              1. I’m confused, usually a “Twitter thread” refers to multiple connected postings by a single author. This here is more like a retweet with some additional comment by Joe Armstrong.
                              2. “Zero [External] Dependencies” can mean a lot, without additional explanation a) a meaningful discussion is difficult and b) dis/agreeing is fairly easy.
                              1. 2

                                It also depends heavily on what kind of software you write. Compilers have few dependencies and will work for a long time. Anything with a GUI probably does not.

                                1. 8

                                  Anything with a GUI probably does not.

                                  So, interestingly, if you use the Windows APIs, for quite a long time the software you wrote really did last a long time. Like, it’s only the internalized braindamage of the Web and Linux that’ve led people to mistrust their native OS’s GUI offerings.

                                  1. 4

                                    I learned a valuable lesson a few months ago when I tried to resuscitate a custom bit of laboratory equipment built for a one-off experiment in 2008. Fearing the worst, I found that the computer (GUI) interface was one giant single Python file using Tk (part of the Python standard library), which someone had saved from the old CentOS box that was scrapped along with the rest of the experiment.

                                    So anyway, I just typed `python2 $SCRIPT_NAME` into my macbook and lo and behold up came the GUI (ugly but perfectly functional), and it had comms with the box over pySerial and everything just worked first time. I was surprised, and I was so, so grateful for this unfashionable huge mono-file thing, rather than, I don’t know, something built using whichever long-dead PyQt was around at the time, managed with whatever package manager was definitely the right answer that year, like pyenv is now, and so on and so on.

                                    1. 2

                                      Backward compatibility is part of their lock-in strategy. So it makes sense they’d keep things working somehow to keep the cash flowing.

                                      1. 10

                                        I wouldn’t say “lockin strategy” so much as “value proposition”, though that’s very much “you say potato, I say potato”. :)

                                        1. 2

                                          Yeah, two sides of same coin here. :)

                                        2. 2

                                          That makes no sense. Backwards compatibility increases the utility of existing systems relative to the alternative.

                                          And if Microsoft did not value back-compat your indictment would be the “upgrade train”.

                                          In any case, back-compat is extremely valuable and is the prime directive of Linux kernel itself.

                                          1. 1

                                            The negative part comes from them making it hard for others to be backwards compatible, on top of hard-to-copy formats and protocols. Then companies wishing to switch to competitors with better offerings can’t, without risking critical stuff going down.

                                            The backward compatibility is one tool of many, esp. obfuscated closed source, that combine to create the lock-in. It also lets them barely update the product, or otherwise piss off users, without losing them.

                                        3. 1

                                          internalized braindamage of the Web and Linux

                                          You’re going to have to explain this one to me. Linux, in my experience, has been perfectly backwards compatible for me, and I don’t just mean the kernel. Meanwhile every time I upgraded my Windows computer it broke.

                                          1. 1

                                            Think of it as a developer and not as a user–there are numerous competing APIs for doing the same GUI (and sound, christ!) work on Linux…this situation is not nearly so bad on Windows.

                                            1. 0

                                              That doesn’t make it backwards-incompatible. It makes it a good platform that isn’t directed by some horrible company for its own purposes.

                                              1. 3

                                                The web (and linux) are backwards-compatible, but lack an overarching plan for their API design (unlike windows).

                                                Since APIs were added as they were invented but never removed, both are kind of a mess to develop for.

                                                1. 0

                                                  I don’t know whether the web is a mess to develop for but Linux certainly isn’t