1.
  1.

    All computers have a limited memory. Does that make all computers not Turing-complete?

    1.

      Yes. Technically, no physically implementable system can be Turing-complete. I believe the pedantic term would be LBA-complete.

      1.

        > Technically, no physically implementable system can be Turing-complete.

        Even more of a technicality: this makes assumptions about physics that might not be true. If there is a finite amount of energy/mass in the universe, and a resolution limit like Planck’s constant, it is probably true. On the other hand, if you could do something like have your machine grow larger as more of the tape is used, or pack information more densely without limit, it wouldn’t be true.

        As a stupid example, imagine you stored information by bouncing photons off of a mirror that was moving away from Earth. As it got farther away you would have more space to fill up with photons, so as long as you never ran out of energy to produce photons (or ran into pesky issues like dust obscuring the mirror) you could have unbounded memory and a Turing-complete computer.

        In practice, things usually don’t scale infinitely, though it’s not entirely clear that’s a physical impossibility.

      2.

        That depends, for two reasons. First, a Turing machine does not define any mechanism for I/O other than the initial and final tape values. Second, a Turing machine can only access (at most) n locations on the tape in n computational steps.

        This means that a Turing machine program that runs for a finite amount of time can be represented on a finite computer: although the Turing machine may have an infinite tape, it can only ever access a finite subset of it, so the representation of the rest of the tape can be compressed to zero size.

        Similarly, the ability of a real computer to perform I/O means that the upper bound on the tape size is effectively unlimited (within the confines of the universe). A 32-bit process may only be able to store the equivalent of a 2^40-element Turing machine tape in memory, but it can add arbitrary levels of indirection and use all of the rest of the universe to store an arbitrarily long tape. Of course, in theory the universe itself is also finite, but in practice you’re going to run out of patience long before you run out of space.
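
        A minimal sketch of that locality argument (the machine encoding below is invented for illustration, not any standard one): store the tape sparsely, so after n steps at most n cells exist in memory and the untouched remainder of the “infinite” tape costs nothing.

        ```python
        from collections import defaultdict

        # Sketch: a notionally infinite tape stored sparsely. After n steps the
        # head has touched at most n cells, so only those are materialized; the
        # rest of the tape is "compressed" to zero size by the default value.
        def run(rules, state, steps, blank="_"):
            tape = defaultdict(lambda: blank)  # unvisited cells occupy no memory
            head = 0
            for _ in range(steps):
                if (state, tape[head]) not in rules:
                    break  # no matching rule: halt
                state, symbol, move = rules[(state, tape[head])]
                tape[head] = symbol
                head += 1 if move == "R" else -1
            return state, dict(tape)

        # Toy machine that marches right writing 1s: after 5 steps only 5 cells
        # exist, however long the tape is imagined to be.
        rules = {("go", "_"): ("go", "1", "R")}
        _, cells = run(rules, "go", steps=5)
        print(len(cells))  # 5
        ```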

        1.

          They’re only universal when the state size is unlimited. A C64 can’t emulate a modern PC no matter how much time you have.

        2.

          It seems pretty similar to the RAR Virtual Machine, which allowed writing short programs (“filters”) to prepare input data so that the compression ratio would be better. The RAR VM was deprecated in the newer RAR format; I wonder what the rationale was, and how JPEG XL would benefit from a similar strategy that RAR considered not useful.

          For some security-oriented analysis, Tavis Ormandy did some research on it: https://blog.cmpxchg8b.com/2012/09/fun-with-constrained-programming.html – I wonder if any aspect of this could be used against JPEG XL’s predictors as well.
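
          A toy illustration of the idea behind such preprocessing filters (this is not RAR’s actual VM or bytecode, nor JPEG XL’s predictors, just the shared principle): transform the data so the general-purpose compressor behind it does better, here with a simple reversible delta filter.

          ```python
          import random
          import zlib

          # Replace each byte with its difference from the previous one (mod 256).
          # The inverse (a running sum) recovers the original exactly.
          def delta_filter(data: bytes) -> bytes:
              out, prev = bytearray(len(data)), 0
              for i, b in enumerate(data):
                  out[i] = (b - prev) & 0xFF
                  prev = b
              return bytes(out)

          # A random walk looks nearly incompressible byte-for-byte, but its
          # deltas take only five distinct values, which deflate handles well.
          random.seed(0)
          samples, x = bytearray(), 128
          for _ in range(100_000):
              x = (x + random.randint(-2, 2)) & 0xFF
              samples.append(x)

          print(len(zlib.compress(bytes(samples))))                # raw
          print(len(zlib.compress(delta_filter(bytes(samples)))))  # filtered: smaller
          ```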