1. 0

    When you’re doing a termination proof, you disregard the fact that the computation might not terminate for some external reason. Similarly, you can prove that a language is Turing-complete: the model is supposed to disregard external factors such as the limited amount of memory.

    1. 2

      The limited amount of memory allowed by the C specification is not external to the question of whether C is Turing-complete. The problem is not ‘there’s limited memory in practice’ but instead ‘C forbids an implementation from having unbounded memory’.

      1. 1

        But that doesn’t necessarily change things, because with an infinite amount of memory you could run simultaneous instances of the C program. The first one runs with pointer size 1, the second with 2, the third with 3, and so on without bound. All machines get the same input, and machines are discarded until the remaining ones all produce the same result without an overflow error in calculating the addresses.

        1. 1

          Hmm, that is an interesting point indeed. Is it allowable to say ‘if we would run out of memory, restart with twice as much memory and sizeof(void*)++’?
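
          A minimal sketch of that restart scheme, assuming a hypothetical run_with_limit() helper that simulates the program from scratch within a fixed memory bound and reports whether it halted or ran out of space (both names are invented for illustration, and size_t itself being bounded is of course the very point under discussion):

          #include <stddef.h>

          /* Outcome of one bounded simulation run. */
          enum outcome { HALTED, OUT_OF_MEMORY };

          /* Hypothetical helper: simulate the program from scratch with at
             most `limit` bytes of memory and report what happened. */
          enum outcome run_with_limit(size_t limit);

          void run_to_completion(void) {
              size_t limit = 1;
              /* If the program halts at all, it halts within some finite
                 bound, so this loop eventually finds a large enough limit;
                 if it never halts, the loop runs forever, just like it. */
              while (run_with_limit(limit) == OUT_OF_MEMORY)
                  limit *= 2;
          }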

          1. 1

            If you pick up the Peano numbers, 0, s0, ss0, sss0… There’s infinity included in there because you don’t have a restriction on how many times you can increment a number, other than running out of paper or patience.

            Since Peano numbers do not have an end, what exactly is the definition of ‘infinity’ in these terms? There may be several, but one plausible approach might be to treat infinity as the set of all Peano numbers that you can construct, giving you a set that is infinitely large.

            So sizeof(void*) = infinite makes sense if you think of it as… sizeof(void*) ∈ N, where N is the set of natural numbers.

            I think there’s a subtle difference between running an infinite number of things in parallel versus discarding the work and restarting the whole thing with the next option. If you run them one by one, then you don’t know from the results whether the program produces the same output with larger pointer sizes.

            1. 1

              If you pick up the Peano numbers, 0, s0, ss0, sss0… There’s infinity included in there because you don’t have a restriction on how many times you can increment a number, other than running out of paper or patience.

              No, there isn’t. There are infinitely many natural numbers, but none of them are infinite. Every natural number is finite.

              That’s not why sizeof(void*) cannot be infinite though. It cannot be infinite because the C standard explicitly requires pointers to have a finite constant size.
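
              One way to put it in symbols: every n ∈ N is itself finite, since |{m ∈ N : m < n}| = n, a finite number; what is infinite is the set N itself, |N| = ℵ₀. So ‘sizeof(void*) ∈ N’ always picks out some finite size, never an infinite one.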

              I think there’s a subtle difference between running an infinite number of things in parallel versus discarding the work and restarting the whole thing with the next option. If you run them one by one, then you don’t know from the results whether the program produces the same output with larger pointer sizes.

              If program behaviour changes as a result of the integer representations of pointers changing, then that program is (to my understanding) relying on undefined behaviour anyway, so it shouldn’t be a problem. Any program uses only a certain fixed amount of memory at any one time, and any program that halts on a given input uses only finitely much memory on that input. I think the scheme might work for C. It’s worth thinking about further.

      2. 2

        Did you read the memo? The point is that the amount of memory you can address in C is finite, therefore there are Turing machines you cannot implement in it (e.g., the TM which always moves the read head left).

        You can assume as much memory as you want, but it’s always going to be bounded.

        1. 0

          Yup. But if you have infinite memory, then you can also afford a sizeof(void*) that is infinite.

          1. 3

            No you can’t, because pointers (and all types) are required to have a statically known size:

            Values […] consist of n × CHAR_BIT bits, where n is the size of an object of that type, in bytes.

            There is no n such that n × CHAR_BIT is infinity.

            I don’t mean to be dismissive, but this is brought up in the first two paragraphs.

            1. -1

              Sure you can. n must be a natural number, and the naturals extend to infinity. Or do you know what would prevent a mathematically perfect machine stacking Peano numbers forever?

              So what exactly is hard about this?

              struct thing { int a; void* b; int c; };
              sizeof(int) = 4
              sizeof(void*) = infinite
              sizeof(thing) = 4 + infinite + 4
              

              Likewise offset(.b) would be 4, and offset(.c) would be 4 + infinite.

              Basically. You can fit infinitely large things into infinite memory. If you couldn’t fit an infinitely large object there, then it wouldn’t be infinite.

              1. 1

                sizeof(void*) can’t be infinite.

      1. 3

        https://simple.wikipedia.org/wiki/Turing_complete

        Actual computers have to operate on limited memory and are not Turing complete in the mathematical sense. If they can run any program they are equivalent to linear bounded automata, a weaker theoretical machine. Informally, however, calling a computer Turing complete means that it can execute any algorithm.

        The article makes the statement that:

        Of course, on a real computer, nothing is Turing-complete, but C doesn’t even manage it in theory.

        implying that there are languages that do manage this.

        I don’t understand this. At some point all languages end up with the same representation (machine code).

        Is the author’s statement equivalent to saying that C deals with pointers directly while other languages (like say Python) don’t?

        Could someone give an example of a language that is theoretically Turing complete in the sense the author wants to imply?

        Thanks!

        1. 3

          I think any almost-Turing-complete language that doesn’t specify the representation of objects should count. If the language spec doesn’t guarantee that any object can have its address taken with some finite number of bits, then it avoids this particular problem.

          (To clarify, “almost Turing-complete” is the informally specified class of things like C that support all the goodies of Turing machines but don’t technically have unlimited memory. I do not know what kind of automaton that actually is.)

          1. 5

            I think C gives you a pushdown automaton, as the function call stack is unbounded.

            1. 3

              A two-stack pushdown automaton is Turing-complete, so my suspicion is that one stack + the rest of C might still be TC.
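
              For reference, the classic construction splits the tape at the head: everything to the left lives on one stack, and the head cell plus everything to its right lives on the other, so a head move is a pop from one stack and a push onto the other. A minimal sketch in C, with bounded arrays standing in for the unbounded stacks the real construction needs:

              /* Tape from two stacks: `left` holds the cells left of the head
                 (top = nearest), `right` holds the head cell and everything to
                 its right. N is a stand-in for "unbounded". */
              #define N 1024
              struct tape {
                  unsigned char left[N], right[N];
                  int nleft, nright;              /* current stack depths */
              };

              unsigned char head_read(struct tape *t) {
                  return t->nright ? t->right[t->nright - 1] : 0;  /* 0 = blank */
              }

              void head_write(struct tape *t, unsigned char c) {
                  if (!t->nright) t->nright = 1;  /* materialise a blank cell */
                  t->right[t->nright - 1] = c;
              }

              void move_right(struct tape *t) {
                  unsigned char c = t->nright ? t->right[--t->nright] : 0;
                  t->left[t->nleft++] = c;        /* head cell slides onto `left` */
              }

              void move_left(struct tape *t) {
                  unsigned char c = t->nleft ? t->left[--t->nleft] : 0;
                  t->right[t->nright++] = c;
              }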

          2. 4

            So

            1. The claim is false. Real computers are equivalent to finite state machines, not linear bounded automata. This is trivial to show: run the classic LBA problem of checking to see if parentheses match. There is some n so that after n ‘(’s the program will be unable to record how many it has seen. QED (a small sketch follows this list)

            Edit: I couldn’t help myself and went and fixed the Wikipedia page

            2. I think the meaning may be that some languages have, in theory, bignum types that are not finitely bounded. Of course, C supports bignums from a library so it’s wrong on that too [edit for personal Coc].
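
            A sketch of the counting argument from point 1, with a deliberately narrow counter so the failure is easy to trigger; the same thing happens at any fixed width, just at a larger n:

            #include <stdint.h>

            /* Parenthesis matcher with a fixed-width depth counter: only 256
               distinguishable depths. Feed it 256 '('s followed by 256 ')'s:
               the counter wraps to 0 after the opens, so this balanced input
               is wrongly rejected. */
            int balanced(const char *s) {
                uint8_t depth = 0;                 /* the finite state */
                for (; *s; s++) {
                    if (*s == '(') {
                        depth++;                   /* 255 + 1 wraps to 0 */
                    } else if (*s == ')') {
                        if (depth == 0) return 0;  /* looks unmatched */
                        depth--;
                    }
                }
                return depth == 0;
            }
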
            1. 3

              Would there also be some FSMs they can’t simulate, because those FSMs have too many states?

              1. 1

                Sure, you can have a machine with 256^(2^64) + 1 states, but that just means you need a bigger computer. :)

                1. 1

                  definitely.

                2. 2

                  The argument TFA is making excludes “external” bignum libraries, since those are not part of the core language. If the libraries were implemented in pure standard C as part of our exercise, the arguments TFA makes would still apply (there are a finite number of pointers in C, every object can have a pointer obtained for it, and thus the total number of objects, and hence of distinct tape locations, is finite).

                  I believe I countered those arguments in my other comment, however, by using the stdio functions to emulate a Turing machine tape.

                  1. 1

                    I think the article is silly, but bignums don’t help. The claim as I understand it is that a Turing machine has a tape of infinitely many cells. It is an infinitely sized array, which C does not have. Even with a bignum implementation that can go to infinity (practically impossible), that’s the range of one value, not the number of different values. Turing machines have a finite range of values (symbols), too.

                    1. 2

                      If you have unbounded integers - bignums in some theory - then you have an infinite tape. Think of the tape as holding a string of bytes: if you can pull bytes off the bignum, you can read the tape at any position.

                      /* bignum_t is the hypothetical unbounded integer type; the
                         cell at `position` is the base-256 digit
                         (tape / 256^position) % 256. */
                      int data_position(bignum_t position, bignum_t tape) {
                          while (position > 0) {
                              tape = tape / 256;
                              position--;
                          }
                          return (int)(tape % 256);
                      }
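
                      And a matching write under the same hypothetical bignum_t: since the cell at position p is the base-256 digit (tape / 256^p) % 256, writing a cell means swapping that digit out arithmetically:

                      /* Companion sketch: write `value` (0..255) at `position`. */
                      bignum_t data_write(bignum_t position, bignum_t tape, int value) {
                          bignum_t scale = 1;
                          while (position > 0) { scale = scale * 256; position--; }
                          int old = (int)((tape / scale) % 256);
                          return tape + (bignum_t)(value - old) * scale;
                      }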

                      1. 2

                        I was thinking about that, wasn’t immediately sure if the conversion was legit.

                        But in any case, the same rule applies. The bignum is declared by the standard to be of finite size. Or it’s a linked list of finitely many finite elements.

                        1. 1

                          Does the C standard say anything about bignums?

                          Anyways, it’s a silly game. Turing machines are intended to represent a model of what is “effectively computable” in the mathematical sense, which is only loosely connected to what is computable by our computers.

                        2. 1

                          C doesn’t have unbounded integers. A bignum library is still written in C. The C standard requires that implementations have bounded memory.

                          1. 1

                            Of course, all programming languages run on computers with bounded memory. So I guess your point is that the higher level of precision in the C specification means one cannot, incorrectly, interpret the standard as permitting “integers” that have infinite range?

                            1. 1

                              Of course, all programming languages run on computers with bounded memory.

                              No they do not. Languages don’t ‘run’. You can give them meaning with semantics, and there are many possible interpretations of C, some of which are implementable.

                              So I guess your point is that the higher level of precision in the C specification means one cannot, incorrectly, interpret the standard as permitting “integers” that have infinite range?

                              It’s not that C has a ‘higher level of precision’. In fact the C standard is not precise at all. It has lots of ambiguities, and it has a lot of behaviour that is simply left undefined.

                              The entire concept of Turing-completeness is that it’s about operations of potentially unbounded size. You can absolutely implement a Turing machine. All you need to do is say ‘whenever an operation would go beyond the bounds of the tape, extend the tape’. A Turing machine doesn’t have actually infinite memory; it has potentially infinite memory.

                              C in comparison is required to have a fixed amount of memory. Of course you cannot give it an actually infinite amount of memory, but you could implement it in an ‘if we take up too much space, add more space’ kind of way, as is possible with Brainfuck. But because of the specification, this isn’t even possible in theory.
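
                              That ‘extend the tape whenever you have to’ move is exactly how one would write, say, a Brainfuck interpreter’s tape in C. A minimal sketch (the host’s memory is still finite, so realloc can fail; the point is that the interpreted program never sees a bound):

                              #include <stdlib.h>
                              #include <string.h>

                              /* A tape that grows on demand: potentially infinite,
                                 never actually infinite. Assumes `cells` starts as a
                                 malloc'd block of `len` >= 1 zeroed bytes. */
                              struct tape {
                                  unsigned char *cells;
                                  size_t len, head;
                              };

                              void step_right(struct tape *t) {
                                  if (++t->head == t->len) {  /* about to fall off the end */
                                      unsigned char *p = realloc(t->cells, t->len * 2);
                                      if (!p) abort();        /* host memory is finite */
                                      memset(p + t->len, 0, t->len);  /* new cells blank */
                                      t->cells = p;
                                      t->len *= 2;
                                  }
                              }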

                              1. 1

                                The entire concept of Turing-completeness is that it’s about operations of potentially unbounded size. You can absolutely implement a Turing machine. All you need to do is say ‘whenever an operation would go beyond the bounds of the tape, extend the tape’.

                                But, in fact, there is no computer or computer system in the world that can actually do that.

                                My bridge design is better than yours, because yours has a finite weight bearing specification and my specification is to add additional trusses whenever you need to.

                                1. 1

                                  But, in fact, there is no computer or computer system in the world that can actually do that.

                                  What does that have to do with anything? There’s no computer in the world that can actually represent an arbitrary real number either. Does that mean that the mathematics of real numbers is any less interesting or useful? Not at all. Turing-completeness is computer science. It’s theoretical.

                                  My bridge design is better than yours, because yours has a finite weight bearing specification and my specification is to add additional trusses whenever you need to.

                                  This isn’t about bridges, it’s about Turing-completeness. You don’t need to use analogies; the subject at hand can be discussed directly. Nobody is saying C11 is a bad specification. This is not a ‘your language design is worse because it’s not Turing-complete’ argument. It isn’t Turing-complete. That’s all there is to it. It’s not a value judgement.

                                  Also, your analogy implies there’s something vague about ‘extend the tape whenever you have to’. There isn’t.

                                  1. 1

                                    Does that mean that the mathematics of real numbers is any less interesting or useful?

                                    I’m utterly at a loss to see where you are going with this. If I state that Z_k is a finite group, that doesn’t mean I think there are no infinite groups or that the theory of infinite groups is not interesting. Programming languages describe programs for actual computers; they are very different from e.g. lambda calculus or PR functions. I guess it’s possible you are using “programming languages” in some sense that applies to mathematical notation for effective computation, but otherwise you appear to be arguing that the approximation of Z in, say, Haskell or Lisp is the same as Z because nobody bothers to tell you in the language specification the obvious fact that the approximation is not the same as Z, because it rests on finite-length bit patterns.

                                    1. 1

                                      I’m utterly at a loss to see where you are going with this. If I state that Z_k is a finite group, that doesn’t mean I think there are no infinite groups or that the theory of infinite groups is not interesting.

                                      Then why are you talking about what real computers can actually do in a discussion about Turing-completeness? That’s like people talking about some property of infinite groups and you going ‘but that doesn’t hold in finite groups’. Yeah nobody is talking about finite groups. What computers can ‘actually do’ has literally nothing to do with whether programming languages are Turing-complete.

                                      Programming languages describe programs for actual computers; they are very different from e.g. lambda calculus or PR functions. I guess it’s possible you are using “programming languages” in some sense that applies to mathematical notation for effective computation, but otherwise you appear to be arguing that the approximation of Z in, say, Haskell or Lisp is the same as Z because nobody bothers to tell you in the language specification the obvious fact that the approximation is not the same as Z, because it rests on finite-length bit patterns.

                                      Scheme is basically untyped lambda calculus with some built-in functions and Haskell is basically typed lambda calculus with some built-in functions, some types and a bunch of syntactic sugar. Programming languages are formal languages. Whether people have implemented them on computers has nothing to do with this discussion. They’re formal languages that can be given formal semantics and about which questions like ‘Is there a limited number of algorithms that can be expressed in this language?’ can be asked.

                                      Haskell and Lisp don’t have an ‘approximation of Z’. They have things called integers that behave like mathematical integers. R4RS, for example, permits implementations to restrict the range of its numeric types:

                                      Implementations may also support only a limited range of numbers of any type, subject to the requirements of this section. …

                                      Implementations are encouraged, but not required, to support exact integers and exact rationals of practically unlimited size and precision, and to implement the above procedures and the / procedure in such a way that they always return exact results when given exact arguments. If one of these procedures is unable to deliver an exact result when given exact arguments, then it may either report a violation of an implementation restriction or it may silently coerce its result to an inexact number. Such a coercion may cause an error later.

                                      But nowhere does the standard require that integers have a maximum length like C does. Nowhere does the standard require that at most N objects may be represented in memory like C does. It permits these restrictions but it doesn’t require them.

                                      This means you can consider a hypothetical implementation where any integer is valid, where memory is unbounded, etc. and consider whether it’s possible to implement a Turing machine. And of course, it is. That’s why Scheme is Turing-complete. C, on the other hand, makes such an implementation illegal.

                                      1. 1

                                        Then why are you talking about what real computers can actually do in a discussion about Turing-completeness?

                                        Because you brought up programming languages where, as in your spec, “integers” may have “practically unlimited size” - a specification that is impressively imprecise, but sufficiently clear to make the point.

                                        As for Scheme being untyped lambda calculus, you can twist yourself into a pretzel and find some justification for set! (and set-car!, set-cdr!) if you want, but I don’t see the utility or find it that interesting.

                                        1. 1

                                          Because you brought up programming languages where, as in your spec, “integers” may have “practically unlimited size” - a specification that is impressively imprecise, but sufficiently clear to make the point.

                                          The specification does not state that integers may have practically unlimited size. That’s not a specification. It’s a non-normative suggestion to the implementer.

                                          The specification does not give a maximum size to integers. Why would it? It also doesn’t require that an implementation’s integers have a maximum size. Why would it?

                                          And programming languages are formal languages like any other.

                                          As for Scheme being untyped lambda calculus, you can twist yourself into a pretzel and find some justification for set! (and set-car!, set-cdr!) if you want, but I don’t see the utility or find it that interesting.

                                          I’m not talking about set!.

                    2. 2

                      I don’t understand this. At some point all languages end up with the same representation (machine code).

                      The implementation of a language is not the same as the semantics of the language.

                      Could someone give an example of a language that is theoretically Turing complete in the sense the author wants to imply?

                      Brainfuck. It doesn’t bound the amount of memory you can access, which is how it achieves Turing completeness.

                      Of course, an implementation on a real machine can’t have access to unbounded memory, but that’s irrelevant.

                      1. 1

                        I don’t understand this. At some point all languages end up with the same representation (machine code).

                        This is not true. A language is a set of strings over some alphabet. In the case of most modern languages that alphabet is Unicode, but for some languages it’s ASCII, and there are even some weirder examples. The syntactic rules of a language are used to determine which strings over that alphabet are valid. "main(){}" is, I believe, a valid C programme. "blah(){}" is possibly a valid freestanding C programme. Possibly the empty string is too.

                        It’s possible of course for a language to actually have no syntactic constraints at all. Most have many. In very few cases is the formal grammar in a language specification actually the real grammar. Things like ‘is a variable actually defined everywhere it is used’ are not normally included in those. They give a context-free approximation of the grammar.

                        The representation of a language is the language. It’s not machine code. A translation into machine code is a semantics for a language.

                        The semantics of a programming language are ‘what those strings mean’. What does main(){} mean? What does 1 + 2 mean? The specification of a language in some way constrains the possible semantic interpretations of the language. For example the C specification requires that if you cast a pointer to char* then back you’ll always get the same pointer. But at the end it also has a rule that relaxes the requirements on compilers: they may interpret any programme in a way inconsistent with the specification that is not observably different to an interpretation consistent with the specification. In other words, it’s the “as if” rule: you can interpret a program differently from what the spec says as long as the programmer can’t actually tell. This is what allows optimisation.

                        Semantics don’t have to be in terms of a translation into machine code or the operation of an actual computer. Real formal semantics usually are not given in terms of the semantics of machine code. They can be given in terms of some sort of ‘virtual machine’ or hypothetical unimplementable machine.

                        The C specification has some rules that appear to require that any implementation has only finite addressable memory. This is not true for example of the Haskell standard, as far as I am aware.

                      1. 2

                        I believe that C is Turing-complete: the various stdio file functions allow modifying a file in a context-free manner; you can seek forward and backward one position at a time. Thus it is trivial to emulate a Turing machine tape using these functions.

                        The standard heavily implies that file positions must be storable in a long but doesn’t actually say that as far as I can tell: the position is something that is “internal” to the file. fseek can take a long value for an absolute position, but it can also seek from SEEK_CUR some finite number of steps in either direction. ftell returns a long but can fail in an implementation-defined manner for pretty much any reason.

                        The Turing-completeness above doesn’t require any infinite implementation-defined limits (file sizes are not mentioned in the implementation limits at all). If we’re allowed to have higher implementation-defined limits for language features, we could also implement a dual-stack push-down automaton using getc/ungetc for one stack and the (unlimited) recursive function call stack for the other stack.
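
                        To make the tape emulation concrete, here is a sketch of the cell operations using only relative seeks, so no file position ever needs to fit in a long. It assumes a stream opened in binary update mode (e.g. "w+b") and, per the argument above, an implementation whose files can grow without bound. The fseek between each read and write also satisfies the standard’s rule that update streams must not mix the two without an intervening positioning call:

                        #include <stdio.h>

                        /* Turing-machine tape on a stdio stream: the file
                           position is the head. */
                        int cell_read(FILE *t) {
                            int c = fgetc(t);
                            if (c == EOF) return 0;   /* past the written end: blank */
                            fseek(t, -1L, SEEK_CUR);  /* undo fgetc's advance */
                            return c;
                        }

                        void cell_write(FILE *t, int symbol) {
                            fputc(symbol, t);
                            fseek(t, -1L, SEEK_CUR);  /* stay on the written cell */
                        }

                        void head_right(FILE *t) { fseek(t, 1L, SEEK_CUR); }
                        void head_left(FILE *t)  { fseek(t, -1L, SEEK_CUR); }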

                        1. 1

                          I posed this argument to the author of the article, and he claimed it is not valid. His reasoning (I believe; this was a while ago, so I might be misrepresenting him) was that the C standard never requires an attached hard disk, and it is perfectly valid for file access to never succeed.

                          1. 1

                            (author here) Sure, I’m willing to accept that “C + at least one file of unbounded capacity” is Turing complete. As voronoipotato commented below, this is pretty much just a contrived mental masturbation exercise; the scope of my argument is limited to things guaranteed by the standard.

                            1. 1

                              Which language standards require a file of unbounded capacity?

                              1. 2

                                None, but many languages don’t explicitly bound (in the standard) storage in RAM.

                                1. 1

                                  “the scope of my argument is limited to things guaranteed by the standard.”

                                  1. 1

                                    Brainfuck’s specification doesn’t require an unbounded tape, though; it allows it to be as small as 30,000 cells. If you’re allowed to consider only Brainfuck implementations with an unlimited tape, then why can’t we consider C implementations with an unbounded file size?