1. 3
  1. 6

    I think the idea that software doesn’t wear out is important, but after that, the perspective that software doesn’t fail doesn’t really help you:

    You never have anything close to a full specification, so it’s impossible to know if it’s correct or not.

    Your software will have to run in a large, changing range of software and hardware environments, all of which are vastly underspecified. So it might well work here and now, but not there and then. Bitrot is real.

    For example, you have to make a bunch of assumptions about what the OS or the CPU will do. But then Spectre comes along, or a bug surfaces in a dependency or the platform. It does not help to argue “my software is correct” - it still does the wrong thing and still has to be fixed.

    1. 4

      An excellent example of how adhering to rigid definitions removes all utility from language. We can’t say software fails, for the reasons the article describes, which are technically correct. We can only say that software does not work, and never did. But in reality, according to the definitions set out, there is no non-trivial software that does anything useful and also works. So we cannot say that a given piece of software works. We are left with no language to describe the difference between two pieces of software, one of which works most of the time and one of which is plagued by constant failures and bugs.

      I am normally the one advocating for rigid definitions, but they have to be useful ones. The main hallmark of a bad definition is that it encompasses almost everything, or almost nothing within the set that it divides. In this case the word ‘working’ when applied to software encompasses almost nothing. This means that it is a bad (non-useful) definition.

    2. 4

      This is a funny idea. It would be true if computer systems existed in a vacuum. But they don’t, they’re integrated with other systems and those systems change too, to meet new needs.

      1. 2

        I don’t think the author would acknowledge what you seem to: that reality is more real than the author’s thoughts (i.e. abstractions). At the argumentative level, the author would just call this a specification failure rather than an actual software failure (i.e. if the software conformed to the specification but the specification didn’t anticipate changes in the real world, then that is the specification’s fault, not the software’s).

      2. 4

        I think there’s some really juvenile sleight of hand here: failure to define the key term up front (i.e. “failure”), followed by failure to define any useful terms to replace the general usage the author criticizes. The author seems to confuse “sophistication” (presumably of thought and knowledge) with sophistry, which is in fact what this is.

        This might have been amusing, but it:

        • Runs far too long
        • Fails to make its point well
        • Is extraordinarily juvenile and (ironically) unsophisticated

        I’d also quibble (and I feel justified here, in that this whole article is a quibble) that the digital representations of books and films are in fact “software”, in the sense that data is largely an illusion created by abstraction: the files containing them are just instructions for the recreation of images on a screen or an audio waveform using certain hardware and software. So the author is operating at a fairly high level of abstraction, a luxury granted by the hard work of a number of other people who built the abstractions for the author to (apparently carelessly) wield.

        1. 2

          It can only be failed.

          1. 2

            Software is only one part, and the least interesting part, of a system, and systems fail all the time. Software does go bad, because it’s a reification of a model; models are necessarily reductive and lossy; and the object of the modelling exercise – the world outside of the model – changes all the time. So, sure, it can’t “fail” but it absolutely starts to rot, as soon as it’s written.
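
            A minimal sketch of that rot (a hypothetical example of mine, not the article’s): a model reified as code is correct at the moment of writing, and then the world moves on without it.

            ```python
            # A model reified as code: a 2015-era snapshot of EU membership
            # (abbreviated here; real code would carry all 28 country codes).
            EU_MEMBERS = {"AT", "BE", "DE", "FR", "GB", "IE", "NL"}  # ...and 21 more

            def ships_duty_free(country_code: str) -> bool:
                # True when written; silently wrong after Brexit, even though
                # not a single line of this program changed.
                return country_code.upper() in EU_MEMBERS
            ```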

            1. 1

              By the same logic, one could say that hardware schematics never fail. If the schematic is incorrect, it doesn’t work; it’s the physical components that fail by moth or rust. So I think the author’s logic is consistent and practical.

              However, it’s a real philosophical question, because all matter looks like it has internal coherency (rules of physics), even if we don’t grasp it fully. So is moth or rust a feature or a bug? A feature, I’d say. So how could hardware fail, then? Because it was implemented wrong? It’s easy to blame something we don’t understand and call it failure.

              Rules of physics can be thought of as code: rules by which matter changes state. Again, what really is a failure? Our software sits on top of other software and stacks. So if your software fails because of an uncontrollable side effect in a library or the stack, then what? Git relies on SHA-1; if two objects ever had an identical OID, Git would fail (see the sketch below). I think it may be best to think in terms of side effects we can’t duplicate, instead of failure.
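
              To make the Git point concrete, here is a minimal sketch in Python of how Git derives a blob’s object ID (this matches what `git hash-object` computes for the classic SHA-1 object format; the sample content is mine). Two different contents that collided under SHA-1 would get the same OID, and Git would silently treat them as one object:

              ```python
              import hashlib

              def git_blob_oid(data: bytes) -> str:
                  # Git's OID for a blob: SHA-1 over "blob <size>\0" + the content.
                  header = b"blob %d\x00" % len(data)
                  return hashlib.sha1(header + data).hexdigest()

              print(git_blob_oid(b"hello\n"))
              # ce013625030ba8dba906f756967f9e9ca394464a, same as `git hash-object`
              ```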

              Functional programmers have the terminology and intellectual framework to grapple with this. Philosophers, I suspect, struggled with it long before the microprocessor and the Internet, Plato and Aristotle among them. Can you separate form from the object, like code from hardware? Aristotelian and related traditional philosophers got very far with this question.