1. 20
  1.  

  2. 6

    Some of this is good, but it’s poisoned by a smug, patronizing tone that is (a) all too common in this sort of material and (b) extremely off-putting.

    1. 7

      I agree with what the author says about the abstractions of Unix, and that they could be better, but…it’s all phrased in the most contempt-culture-y way imaginable, and it’s really, really icky.

      1. 2

        I may have a little bit of confirmation bias, but I think I have thought that way myself at different times. Just shameful. Of course the irony is that I switched languages fairly often.

      2. 5

        In general I think I understand what the author is saying, but I am entirely unconvinced that his replacement is much better. For example, he wants users to use a command-line system and even a scripting language. I really don’t think he has seen real users.

        Now, his statement that using languages that are not memory safe is engineering malpractice: I would tend to agree, if we had a truly developed software engineering field. But I don’t think we are anywhere close to that yet.

        Unix must be destroyed indeed.

        1. 1

          I always saw memory safety as an implementation problem rather than a language problem. There’s no reason a C implementation can’t guarantee that the application will immediately crash when out-of-bounds memory is accessed, like many languages with exceptions and such do. This is pretty similar to what some malloc implementations already do with guarding allocations and unmap upon free, although the ones that currently exist aren’t really silver bullets.

          However, concerns about type safety are definitely more of a language problem…
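          The guard-page idea above can be sketched on POSIX systems. This is a minimal, hypothetical illustration (the name `guarded_alloc` is mine, not a real allocator API): the object is placed at the very end of a mapping so that the first byte past it lands on an inaccessible page, making an out-of-bounds access crash immediately instead of silently corrupting memory.

          ```cpp
          #include <cassert>
          #include <cstddef>
          #include <cstring>
          #include <sys/mman.h>
          #include <unistd.h>

          // Guarded-allocation sketch: map the data pages plus one extra page,
          // make the extra page inaccessible, and butt the object up against it.
          // Ignores alignment for types stricter than char; real guard-page
          // allocators (and they exist) handle that and much more.
          void* guarded_alloc(std::size_t size) {
              const std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
              const std::size_t data_pages = (size + page - 1) / page;
              const std::size_t total = (data_pages + 1) * page;  // +1 guard page
              void* raw = mmap(nullptr, total, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (raw == MAP_FAILED) return nullptr;
              unsigned char* base = static_cast<unsigned char*>(raw);
              mprotect(base + data_pages * page, page, PROT_NONE);  // guard page
              return base + data_pages * page - size;  // object ends at the guard
          }

          int main() {
              char* buf = static_cast<char*>(guarded_alloc(100));
              std::memset(buf, 'x', 100);  // in-bounds writes are fine
              assert(buf[99] == 'x');
              // buf[100] = 'x';  // one byte past the end: hits the guard page, SIGSEGV
              return 0;
          }
          ```

          This catches overruns past the end of an allocation, but not small intra-object overruns or use-after-free on its own, which is part of why such schemes aren’t silver bullets.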

          1. 4

            > There’s no reason a C implementation can’t guarantee that the application will immediately crash when out-of-bounds memory is accessed

            I am afraid the C type system is unable to deal with these issues, so it would be left to a runtime check or a static analysis tool. Runtime checks would be unacceptable to many because they slow things down, and a static analysis tool cannot be guaranteed to find every instance.

            An invalid array access is usually left as undefined behaviour for this very reason. I can see malloc being the basis for a runtime system, though.

            Looking at what Rust or Haskell do with memory access is interesting: Rust tracks ownership, and Haskell just wraps it in a function call.

            1. 1

              Maybe finding most instances is enough. Asking for one large object and treating it the way malloc treats an address space is a fact-of-life loophole in most general-purpose safe languages, and a common technique in areas like embedded runtimes and emulators. Runtime checks are also a common solution in a few popular languages, but I suppose the trade-off is more acceptable there. :)

              1. 3

                Runtime checks are a great example of why the ‘culture’ of C++ is a benefit for performance: if you have already checked the bounds yourself (to be explicit) and then access the element, you definitely do not want the access checked again. So on a vector you can use .at(…) when you want the check, or operator[] to avoid it.

                I think what would be nice is a way of proving to a static analyzer in the compiler that you have already checked the bounds, so it can assume a certain range is valid. That would be a great way to rule out the areas you don’t need to worry about.
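                A small sketch of the check-once-then-access-unchecked pattern described above (`sum_range` is a made-up example function, not a standard API): one explicit bounds check up front, then operator[] inside the loop, where .at(…) would redundantly re-check every element.

                ```cpp
                #include <cassert>
                #include <stdexcept>
                #include <vector>

                // Sum `count` elements starting at `first`, with a single
                // up-front bounds check; the loop body then uses unchecked
                // operator[] because the range is already known to be valid.
                long sum_range(const std::vector<int>& v,
                               std::size_t first, std::size_t count) {
                    if (count > v.size() || first > v.size() - count)
                        throw std::out_of_range("sum_range");  // the one explicit check
                    long total = 0;
                    for (std::size_t i = first; i < first + count; ++i)
                        total += v[i];  // no per-element check here
                    return total;
                }

                int main() {
                    std::vector<int> v{1, 2, 3, 4, 5};
                    assert(sum_range(v, 1, 3) == 9);  // 2 + 3 + 4
                    bool threw = false;
                    try { sum_range(v, 3, 5); }
                    catch (const std::out_of_range&) { threw = true; }
                    assert(threw);  // out-of-range request is rejected up front
                    return 0;
                }
                ```

                A compiler can sometimes hoist or eliminate redundant checks like this on its own, but there is no portable way to *prove* the check to it, which is exactly the gap the comment points at.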