1. 5
    • org-mode: I keep my thoughts, tasks, drafts, blog posts – everything lives here; it’s my organisational heaven

    • mypy has made me enjoy Python and benefit from the best of both (static/dynamic) worlds

    1. 2

      Is org-mode that good?
      Is there any way to use it on iOS/iPadOS?
      What about online sync and merging?

      1. 4

        Is org-mode that good?

        I’ve yet to encounter an ex-org-mode user, which is a pretty strong endorsement IMO.

        You really need to be running emacs to use it, though, which has been too high a barrier for me so far (and ensures it won’t run on iOS).

        1. 2

          Now you have!

          I never felt satisfied with org-mode because I found it too flexible - it was hard to know what the best way to do things was, and so I kept fiddling with my workflow rather than actually getting things done. I went through a few different tools, and settled on using Trello with a kanban-style workflow in 2018, which I can definitely say was a gamechanger for me.

          1. 1

            I’m using it on my Android with orgzly and it’s not so bad. Mostly only for adding new notes and searching (Emacs is superior in terms of organization), but it does most of what I expect from my org-mode files on the phone anyway.

            Also, there are people who run Emacs on the phone (not sure if you can do it on iOS), so with an external keyboard it’s meant to be the same experience.

          2. 1

            AFAIK there are iOS apps, but extracting it out of Emacs kind of loses the point.

            And it’s a text file; you can sync and merge it just like any other text file. Org is just the magic that interprets and manipulates the text.

            1. 1

              Org is glorious. The iOS options aren’t too good; I jump between beorg and plain-text edits via a text editor (buffer) when on mobile, but neither is really there. Organice.200ok.ch/ comes very close to what I’d think a good compromise would be, but AFAICT it is not a native app.

          1. 1

            My own. I’ve bounced between a few of the more common ones, but there was always something that I didn’t quite like… so I decided to stop forcing other people’s tools to work for me and just wrote my own.

            1. 10

              So don’t look for hard problems—important ones are ultimately more fun!

              For some people, yeah. My current employer definitely has more of the “important” problems (impacting the lives of millions of people) than the “hard” ones, and it’s very satisfying to work on something and be able to say “wow, this feature I worked on, which only 0.2% of our users even look at, is still helping thousands of people!”

              But the problems I’ve had most fun working on are the hard ones, even if they’re relatively unimportant. I’d much rather work on lots of hard problems which affect barely anyone, than lots of important problems which are nevertheless straightforward.

              1. 4

                Microservices definitely have problems which monoliths don’t, but I’m not sure this article is entirely fair.

                Consider an HTTP request serviced by a monolith. When the request is received, a single server handles the transaction from beginning to end. If there is a problem, be it a software bug or hardware failure, the entire monolith crashes – every failure is a total failure.

                Surely your monolith has state, in which case that only holds if your monolith commits the entire state change from your request atomically.

                Tools like Chef and Puppet are good enough for deploying a monolith, but for microservices, you need something much more sophisticated like Kubernetes.

                At work we use Puppet for microservices just fine.

                1. 2

                  On the monolith point: on reflection, I think you’re right in the sense that, in reality, your monolith isn’t really a monolith; it likely has some external database (and perhaps some other services) floating around that it depends on, which would need to be dealt with in the case of failure. So the monolith itself does have to be aware of partial failure as well. I still think my point stands, though: since microservices are more distributed, the partial failure problem is worse.

                1. 19

                  As somebody who first learned Python, and then Haskell, and then left both, this review reminds me of many of my own experiences.

                  I was reading through some of Hillel Wayne’s blog posts, and one of them discussed the Curry-Howard correspondence, which somehow proves a 1:1 association between aspects of a mathematical proof and aspects of a type system. You can convert a proof into a type, if your type system supports it. I think Haskell’s type system respects this. I don’t think Python’s type system does.

                  Yes, but undefined happens to be a value of every type, in Haskell; this corresponds to being able to prove any statement, including the false ones. So while it is indeed possible to turn proofs into programs, it’s not possible to turn Python or Haskell programs into useful valid proofs.

                  (Somebody may pedantically note that every Python or Haskell program gives a proof, just not of the fact that we might hope that they prove; they usually have Python-shaped or Haskell-shaped holes in them.)
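
                  As a quick Python sketch of that point (mypy-style annotations; the function names are made up), a function that never returns normally type-checks at any return annotation – the analogue of undefined inhabiting every type:

                  def bottom_by_raise() -> int:
                      # Never produces an int; mypy accepts this at *any* return type.
                      raise RuntimeError("no value here")

                  def bottom_by_loop() -> int:
                      # Likewise: infinite recursion, never a real int.
                      return bottom_by_loop()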

                  I don’t think you can do anything like [fmap] with Python or any other practitioner’s language that I’m familiar with.

                  I think that this is part of the roots of the Haskeller memetic complex. Because Haskell has the lightweight syntax of an ML, and its prelude is rich in algebraic tools, suddenly there is a blindness as to the fact that the tools have been there all along. For example, in Python 3:

                  >>> plusOne = lambda x: x + 1
                  >>> lmap = lambda f: lambda xs: [f(x) for x in xs]
                  >>> nada = object()
                  >>> mmap = lambda f: lambda x: nada if x is nada else f(x)
                  >>> dot = lambda f: lambda g: lambda x: f(g(x))
                  >>> zf = dot(lmap)(mmap)(plusOne)
                  >>> zf([1, 2, 3])
                  [2, 3, 4]
                  >>> zf([1, nada, 3])
                  [2, <object object at 0x7ff65c5db0d0>, 4]
                  

                  Even the base error type, bottom or ⊥, is defined as an evaluation that never completes successfully, either because a computation failed, or because there’s an infinite loop. … I don’t think Python’s BaseException has a built-in notion of infinite loops evaluating to an error condition. This appears super powerful to me, because the big error types introduced by going over network connections are network timeouts, which appear to most programs as infinitely long computations.

                  There are many opinions on this, but I am of the opinion that the sort of incompleteness generated by an unresponsive network peer, known as FLP impossibility or perhaps FLP “incompleteness” of network requests, is not the same as the incompleteness that Turing, Gödel, Tarski, Lawvere, and others showed via diagonalization. They are connected, in that the FLP result implies that a Turing machine which simulates many computers on a network may encounter a Turing-complete subproblem which prevents the simulated network peers from responding.

                  In Alice-and-Bob framing, suppose Alice is waiting for Bob to reply over the network. The FLP impossibility of Alice having to wait forever for Bob’s reply is connected to the Turing undecidability of whether a simulation of Alice and Bob will hang forever while simulating Bob’s preparation of their reply.

                  Haskell has Test.QuickCheck, a property-based testing framework that generates values to check a declared property. Python has hypothesis, but having used it a tiny bit for an open-source contribution, I don’t think it’s the same without the Haskell type system.

                  I think that this is another memetic root; specifically, the root is type-driven dispatch. To compare apples to apples, a better analogue of QuickCheck in Python might be PayCheck, while Hedgehog is closer to Hypothesis in Haskell. As soon as the one-dimensional comparison becomes two-dimensional, the flexibility of Hypothesis, and crucially the ability to select arbitrary strategies for generating test cases, far outweighs the convenience of simple type-checking. This makes sense, in a certain way; if Haskell’s type system were sound in the first place, then QuickCheck would be little more than a totality checker. Hypothesis’ own author recommends Hedgehog over QuickCheck, and in my language, Monte, I implemented both a classic unit-test runner and also a Hypothesis-style property test runner.
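
                  For example, a minimal sketch with Hypothesis (assuming the hypothesis package; the test is illustrative), where the generation strategy is chosen explicitly rather than derived from a type:

                  from hypothesis import given, strategies as st

                  # An explicitly chosen strategy, not one inferred from a type:
                  # short lists of small integers, so duplicates are common.
                  small_lists = st.lists(st.integers(min_value=-10, max_value=10), max_size=6)

                  @given(small_lists)
                  def test_sorted_is_idempotent(xs):
                      assert sorted(sorted(xs)) == sorted(xs)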

                  Ultimately, I don’t think [indentation] is a big issue. Haskell has code auto-formatters like haskell-formatter, much like Python’s black. Use it and move on. Bikeshed about something else, you know?

                  Since you didn’t mention it, and none of your code samples seem to use it, and to give an example of something to bikeshed over, it’s worth pointing out that Haskell supports curly-brace-oriented syntax as well as indentation. This is a recurring topic in programming language design: We want code to be readable, and that leads to caring about whitespace.

                  1. 8

                    This makes sense, in a certain way; if Haskell’s type system were sound in the first place, then QuickCheck would be little more than a totality checker.

                    I don’t think that follows unless you’re using incredibly specific types. There are many total functions of type Ord a => [a] -> [a], but not all of them would pass the tests of a sorting function.
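
                    A quick Python sketch of the same point (made-up function names): both functions below are total and share the signature list[int] -> list[int], but only one behaves like a sort.

                    def sort_a(xs: list[int]) -> list[int]:
                        return sorted(xs)

                    def sort_b(xs: list[int]) -> list[int]:
                        return list(reversed(xs))  # same type, total, not a sort

                    print(sort_a([2, 1, 3]))  # [1, 2, 3]
                    print(sort_b([2, 1, 3]))  # [3, 1, 2] -- the type alone can't rule this out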

                    1. 3

                      This is a fair observation. QuickCheck test selection indeed is driven not just by types, but by typeclasses and associated behaviors, and I was remiss to dismiss it so glibly.

                      The main desire is that one might express algebraic laws, and ultimately all of these testing tools are only approximations to actual equational reasoning. That doesn’t preclude a type system from giving useful annotations.

                    2. 4

                      Wow, amazing comment! I understand about half of this. I wish I had more than one upvote to give you.

                      I am really curious as to whether Python’s NoneType would map to Haskell’s bottom in that regard. Would that be the Python-shaped hole you’re talking about?

                      I did see a tutorial by David Beazley on implementing a lambda calculus in Python. I didn’t think it was all that practical tbh, and I think he said the same in the video: https://www.youtube.com/watch?v=pkCLMl0e_0k

                      Could you give a brief summary of how FLP impossibility and Turing incompleteness are different? I know of “Turing completeness” and that it’s a thing, but my knowledge ends there. I haven’t heard of FLP impossibility before. Bookmarked that website.

                      Huh, the author of hypothesis says QuickCheck doesn’t have integrated shrinking. From his statement on generators rather than type classes, I understand your point on how Hedgehog and Hypothesis are more similar.

                      Hmm, there weren’t a whole lot of examples of curly-brace-oriented syntax. Most examples used indentation. The Haskell Wikibook does mention it though: https://en.wikibooks.org/wiki/Haskell/Indentation#Explicit_characters_in_place_of_indentation

                      (I do care about whitespace and clean code; I read through “Clean Code” my second year out of college, but in my experience as a practitioner so far I’d rather use a tool and automated process to have a consistent way of doing things with multiple people, in order to spend political capital elsewhere.)

                      1. 10

                        (Phew this got long, time to turn it into a blog post ;) )

                        I am really curious as to whether Python’s NoneType would map to Haskell’s bottom in that regard.

                        It’s important to distinguish between the empty type and the unit type. In (type) theory there are no values of the empty type, and there is one value of the unit type. In a language without _|_, like Agda, we might define them something like this:

                        data Empty : Type
                        
                        data Unit : Type where
                            unit : Unit
                        

                        This says that Empty is a Type, that Unit is a Type, and that unit is a Unit. Since we’ve given no constructors (or “introduction forms”) for Empty, there’s no way to make one. Likewise we can write function types like Int -> Empty which have no values (since there’s no way for any function to return a value of type Empty), we can write tuple types like Pair Int Empty that have no values, etc. We can also write function types like Empty -> Int which do have values (e.g. const 42), but which can never be called (since there are no Empty values to give as an argument). Incidentally, the fact that we can have types with no values is one of the reasons we can’t have a value with type forall a. a (in Haskell) or (a : Type) -> a (in Agda): they claim to return a value of any type, but we might ask for an Empty.

                        The unit type is trivial, literally. Since there’s only one value of this type (unit), whenever we know that a value will have type Unit we can infer that the value must be unit, and hence (a) if it’s a return value, we can always optimise the function to simply unit and (b) if it’s an argument, it has no effect on the return value (there’s nothing to branch on). In this sense, the unit type contains 0 bits of information (compared to Bool which contains 1 bit).

                        As an aside, Either is called a “sum type” because it contains all the values from its first argument plus those of its second: Either Unit Unit is equivalent to Bool (with values Left unit and Right unit). Pairs are called “product types” since they contain every combination of their first and second arguments, which is the number of values in each multiplied together. In this sense Empty acts like zero: the type Either Empty a doesn’t contain any Left values, so it’s equivalent to a, just like 0 + a = a for numbers. Likewise Pair Empty a contains no values, since there’s nothing to put in the first position, just like 0 * a = 0 for numbers. Either Unit a acts like 1 + a, since we have all of the values of a (wrapped in Right) plus the extra value Left unit; note that this is also the same as Maybe a, where Right acts like Just and Left unit acts like Nothing. Pair Unit a is the same as a since each a value can only be paired with one thing, and we know that thing is always unit; this is just like 1 * a = a with numbers.
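
                        The counting argument can be sketched in Python by treating types as finite sets of values (illustrative only):

                        from itertools import product

                        unit = [None]            # Unit: exactly one value
                        boolean = [False, True]  # Bool: two values

                        # Sums add: Either Unit Unit has 1 + 1 = 2 values, like Bool.
                        either_unit_unit = [("Left", u) for u in unit] + [("Right", u) for u in unit]
                        assert len(either_unit_unit) == len(boolean)

                        # Products multiply: Pair Bool Bool has 2 * 2 = 4 values.
                        pair_bool_bool = list(product(boolean, boolean))
                        assert len(pair_bool_bool) == 4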

                        Back to your question: things get confusing when we try to apply this to “real” languages, especially since a lot of the terminology is overloaded. In Haskell the type Unit is written () and the value unit is also written ().

                        In Haskell the type Empty is called Void, but that is not the same as void in languages like Java! In particular, remember that we cannot have a function which returns Empty, so void being Empty would mean we could not implement methods like public static void putStrLn(String s). Such methods certainly do exist, since they’re all over the place in Java, but what happens if we call them? They will run, and return a value back to us (hence it can’t be Empty). What is the value we get back? We know, without even running the method, that we’ll get back null. That’s just like the Unit type (where we know the value will be unit without having to run the code). Hence Java’s void acts like Unit, and null acts like unit.

                        If we follow a similar line of reasoning in Python, we find that None acts like unit (or Java’s null) and hence NoneType is like Unit (or Java’s void). AFAIK Python doesn’t have anything which acts like Empty (Haskell’s Void) since, lacking any values, Empty is only useful for type-level calculations (of the sort which get erased during compilation), which Python doesn’t tend to do.

                        That just leaves “bottom”. There are a couple of ways to think about it: we can be “fast and loose” where we ignore bottom as a nuisance, which mostly works since bottom isn’t a “proper” value: in particular, we can’t branch on it; hence any time we do a calculation involving bottoms which results in a “proper” value, we know that the bottoms were irrelevant to the result, and hence can be ignored. Any time a bottom is relevant, we have to abandon our nice, logical purity in any case, since catching them requires IO, so why bother complicating our pure logic by trying to include them?

                        Alternatively we can treat bottom as an extra value of every type, in a similar way to null inhabiting every type in Java. From this perspective Haskell’s Void type does contain a value (bottom), and the () type contains two values (() and bottom). In this sense we might think of Java’s void corresponding to Haskell’s Void, but I find this line of thinking difficult to justify. In particular, we can branch on null in Java (in fact we should be doing such “null checks” all the time!), whereas (pure) Haskell doesn’t even let us tell whether we have () or bottom.

                        As a final thought, whenever comparing dynamic and static languages, it’s important to not conflate different concepts which just-so-happen to have similar names. In particular I find it useful to think of dynamically typed languages as being “uni-typed”: that is, every value has the same type, which we might write in Haskell as a sum like data DynamicValue = S String | I Int | L List | O Object | E Exception | .... Python is actually even simpler than this, since “everything is an object” (not quite as much as in Smalltalk, but certainly more than e.g. Java), so Python values are more like a pair of a class (which itself is a Python value) and a collection of instance properties.
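
                        A quick sketch of that “class plus instance properties” view (hypothetical example class):

                        class Point:
                            def __init__(self, x, y):
                                self.x, self.y = x, y

                        p = Point(1, 2)
                        print(type(p))        # the class, itself an ordinary Python value
                        print(p.__dict__)     # {'x': 1, 'y': 2} -- the instance properties
                        print(type(type(p)))  # <class 'type'> -- classes are values too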

                        This is important because “dynamic types”, like in Python, aren’t directly comparable to “static types” like those of Haskell; they’re more comparable to “tags” (e.g. constructors). In particular, think about a Python function branching based on its argument’s “dynamic type”; this is the same as a Haskell function branching on its argument’s constructor. What does this tell us about “bottom” in Python? From this perspective, there isn’t one (at least not reified; an infinite loop might be comparable, but it’s not something we can e.g. assign to a variable). Python’s None is just a normal value like any other, in the same way that Haskell’s Nothing is a normal value (we might sometimes use it to stand for an error, but that’s not inherent to the system); likewise Python’s exceptions are also normal values (we can assign them to variables, return them from functions, etc.); the idea of “throwing and catching” (or raise/except for Python) is actually a perfectly normal method of control flow (it’s actually a limited form of delimited continuation), and this is orthogonal to error representation and handling.

                        This makes raising an exception in Python very different to triggering a bottom in Haskell, since Haskell provides no way to branch on a bottom (or whether we even have one). In Python we can raise a value (with the exception “tag”) as an alternative to using return, and we can catch those values, inspect their tag and value, etc. to determine what our overall return value will be, with none of this being visible by the caller. To do anything like that in Haskell we need some “proper” data to inspect and branch on, hence why I say None, exceptions, etc. are just normal values (even though they’re used to represent errors, and we treat them specially because of that; but the same could be said about Either String values in Haskell, for example). Consider that in Haskell the only way to even check if a bottom exists is to use IO, but the idea behind IO is that it’s like State RealWorld, i.e. every IO a value is a pair of (RealWorld, a), so conceptually we never “actually” see a bottom; it’s more like triggering a bottom changes the RealWorld, and we’re branching on that.
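
                        A small sketch of that (illustrative): exceptions can be constructed, bound, and branched on like any other value, and raise/except is just another way back to the caller.

                        err = ValueError("boom")  # constructed and bound like any other value
                        print(err.args)           # ('boom',)

                        def find(xs, x):
                            try:
                                return xs.index(x)
                            except ValueError as e:  # caught, bound to a name, branched on
                                return -1            # the caller never sees the exception

                        print(find([1, 2, 3], 2))  # 1
                        print(find([1, 2, 3], 9))  # -1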

                        1. 4

                          Thank you for the clarification on None vs bottom! I think it definitely helps me understand how much more different Python is from Haskell.

                          I like the idea of saying data DynamicValue = S String | I Int | L List | O Object | E Exception for dynamic languages, and I think from a type theory perspective I finally understand what people say when they say “In {Ruby, Python}, everything is an object”.

                          I hate try/catch logic in Python. It works, but I much prefer the functorial structure espoused by Haskell.

                          I agree on the description on IO! The book does mention IO is technically without effects, since it is merely a description of what IO actions should be taking place instead of imperatively executing IO.

                          1. 4

                            I like the idea of saying data DynamicValue = S String | I Int | L List | O Object | E Exception for dynamic languages

                            That’s a good intuitive starting point, but it’s not the whole story. The “unitype framework” is a way of understanding dynamic types from a static typing perspective, but we can also start from a “dynamic typing perspective” and see where it gets us. A dynamically-typed language is one where you can manipulate type definitions at runtime, and test for type membership at runtime. So saying DynamicValue = S String | I Int ... will always be incomplete, because we might not know something’s type even after creating it!

                            An intuition I like here is that dynamic types are boolean predicates: x is of type T iff T(x). This means we can define the type of, say, all functions with format as an optional keyword, or the type of all iterables that aren’t strings, or the type of all types that appear as return values in functions.
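
                            As a sketch of that predicate view (hypothetical helper names, not a real typing API), both of the example “types” above are just runtime checks:

                            import inspect
                            from collections.abc import Iterable

                            def is_nonstring_iterable(x) -> bool:
                                return isinstance(x, Iterable) and not isinstance(x, str)

                            def takes_optional_format_kwarg(f) -> bool:
                                p = inspect.signature(f).parameters.get("format")
                                return p is not None and p.default is not inspect.Parameter.empty

                            print(is_nonstring_iterable([1, 2]))  # True
                            print(is_nonstring_iterable("abc"))   # False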

                            (The downside of this is it gets hellishly confusing very quickly, and allows for a lot of spooky action at a distance. Also, there’s no guarantee that checking type membership is total. That’s one reason even highly dynamic languages shy away from going deep in dynamic types.)

                            I think from a type theory perspective I finally understand what people say when they say “In {Ruby, Python}, everything is an object”.

                            Everything can also be an object in a static system, too. My rule of thumb for what “everything is an object” means: are object methods also objects? Is the call stack an object? If you are using message passing, are the messages objects? Can you send messages to other messages?
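
                            A quick Python sketch of the first test (illustrative): methods are themselves objects that can be bound, passed around, and inspected.

                            f = "hello".upper   # a bound method is an ordinary value
                            print(f())          # HELLO
                            print(f.__self__)   # 'hello' -- the method knows its receiver
                            print(type(f))      # <class 'builtin_function_or_method'>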

                            1. 1

                              RE: “Everything is an object”, I usually put it in quotes since the meaning is usually context-dependent. A colleague once said that to me regarding Java, and I replied that Java classes aren’t objects yet Python classes are; and Python’s if branches aren’t objects yet Smalltalk’s are (at least, the closures are); and so on.

                              RE: Unitypes, etc. yes the static/dynamic distinction can be built up from the dynamic side too (I tend not to think that way since fixed points and things trip me up). The important thing is that people aren’t talking past each other, e.g. it’s fine to argue about whether “foo can/can’t do bar”, but not so useful when each person is using “foo” and “bar” to mean different things ;)

                          2. 3

                            The overloaded word “void” across languages is confusing. In Rust and TypeScript (and others?) they call it “never”, which I think makes it really clear: values of this type never actually exist. And “this function returns never” is like a pun: “this function never returns” and “this function’s return type is ‘never’”.

                            1. 3

                              This was wonderfully clear. Thank you.

                        1. 0

                          When you’re doing a termination proof, you disregard the fact that the computation might not terminate due to an external reason. Similarly you can prove that a language is turing-complete. The model is supposed to disregard external factors such as the limited amount of memory.

                          1. 2

                            The limited amount of memory allowed by the C specification is not external to the question of whether C is Turing-complete. The problem is not ‘there’s limited memory in practice’ but instead ‘C forbids an implementation from having unbounded memory’.

                            1. 1

                              But that doesn’t necessarily change things, because with an infinite amount of memory you could run simultaneous instances of a C program. The first one runs with pointer size 1, the second with 2, the third with 3, and so on up to infinity. All machines get the same input, and machines are discarded until they all produce the same result without an overflow error in calculating the addresses.

                              1. 1

                                Hmm, that is an interesting point indeed. Is it allowable to say ‘if we would run out of memory, restart with twice as much memory and sizeof(void*)++’?

                                1. 1

                                  If you pick up the Peano numbers – 0, s0, ss0, sss0… – there’s infinity included in there, because there’s no restriction on how many times you can increment a number other than running out of paper or patience.

                                  Since Peano numbers do not have an end, what exactly is the definition of ‘infinity’ in these terms? There may be several, but one plausible approach might be to treat infinity as the set of all Peano numbers that you can construct, giving you a set that is infinitely large.

                                  So sizeof(void*) = infinite makes sense if you think of it as sizeof(void*) ∈ N, where N stands for the natural numbers.

                                  I think there’s a subtle difference between running an infinite number of things in parallel versus discarding the work and restarting the whole thing with the next option. If you run them one by one, then you don’t know from the results whether the program produces the same output with larger pointer sizes.

                                  1. 1

                                    If you pick up the Peano numbers – 0, s0, ss0, sss0… – there’s infinity included in there, because there’s no restriction on how many times you can increment a number other than running out of paper or patience.

                                    No, there isn’t. There are infinitely many natural numbers, but none of them are infinite. Every natural number is finite.

                                    That’s not why sizeof(void*) cannot be infinite though. It cannot be infinite because the C standard explicitly requires pointers to have a finite constant size.

                                    I think there’s a subtle difference between running an infinite number of things in parallel versus discarding the work and restarting the whole thing with the next option. If you run them one by one, then you don’t know from the results whether the program produces the same output with larger pointer sizes.

                                    If program behaviour changes as a result of the integer representations of pointers changing then that program is (to my understanding) relying on undefined behaviour anyway. So it shouldn’t be a problem. Any program uses only a certain fixed amount of memory at any one time. Any program that halts on a given input uses only finitely much memory on that input. I think the scheme might work for C. It’s worth thinking about further.

                            2. 2

                              Did you read the memo? The point is that the amount of memory you can address in C is finite, therefore there are Turing machines you cannot implement in it (e.g., the TM which always moves the read head left).

                              You can assume as much memory as you want, but it’s always going to be bounded.

                              1. 0

                                Yup. But if you have infinite memory, then you can also afford sizeof(void*) that is infinite.

                                1. 3

                                  No you can’t, because pointers (and all types) are required to have a statically known size:

                                  Values […] consist of n × CHAR_BIT bits, where n is the size of an object of that type, in bytes.

                                  There is no n such that n × CHAR_BIT is infinity.

                                  I don’t mean to be dismissive, but this is brought up in the first two paragraphs.

                                  1. -1

                                    Sure you can. n must be a natural number that extends to infinity. Or do you know what would prevent a mathematically perfect machine stacking Peano numbers forever?

                                    So what’s exactly hard about this?

                                    struct thing { int a; void* b; int c; };
                                    sizeof(int) = 4
                                    sizeof(void*) = infinite
                                    sizeof(thing) = 4 + infinite + 4
                                    

                                    Likewise offsetof(thing, b) would be 4, and offsetof(thing, c) would be 4 + infinite.

                                    Basically. You can fit infinitely large things into infinite memory. If you couldn’t fit an infinitely large object there, then it wouldn’t be infinite.

                                    1. 1

                                      sizeof(void*) can’t be infinite.

                            1. 3

                              https://simple.wikipedia.org/wiki/Turing_complete

                              Actual computers have to operate on limited memory and are not Turing complete in the mathematical sense. If they can run any program they are equivalent to linear bounded automata, a weaker theoretical machine. Informally, however, calling a computer Turing complete means that it can execute any algorithm.

                              The article makes the statement that:

                              Of course, on a real computer, nothing is Turing-complete, but C doesn’t even manage it in theory.

                              implying that there are languages that do manage this.

                              I don’t understand this. At some point all languages end up with the same representation (machine code).

                              Is the author’s statement equivalent to saying that C deals with pointers directly while other languages (like say Python) don’t?

                              Could someone give an example of a language that is theoretically Turing complete in the sense the author wants to imply?

                              Thanks!

                              1. 3

                                I think any almost-Turing-complete language that doesn’t specify the representation of objects should count. If the language spec doesn’t guarantee that any object can have its address taken with some finite number of bits, then it avoids this particular problem.

                                (To clarify, “almost Turing complete” is the informally specified class of things like C, which support all the goodies of Turing machines but don’t technically have unlimited memory. I do not know what kind of automaton that actually is.)

                                1. 5

                                  I think C gives you a pushdown automaton, as the function call stack is unbounded.

                                  1. 3

                                    A two-stack pushdown automaton is Turing complete, so my suspicion is that one stack plus the rest of C might still be TC.
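
                                    The construction behind that claim, sketched in Python (illustrative only): a Turing tape splits into two stacks, one holding everything left of the head and one holding the head cell and everything to its right.

                                    # left: cells left of the head; right: head cell on top, then the rest.
                                    # "_" is the blank symbol.
                                    left, right = [], ["a", "b", "c"][::-1]  # head starts on "a"

                                    def read():
                                        return right[-1] if right else "_"

                                    def write(symbol):
                                        if right:
                                            right[-1] = symbol
                                        else:
                                            right.append(symbol)

                                    def move_right():
                                        left.append(right.pop() if right else "_")

                                    def move_left():
                                        right.append(left.pop() if left else "_")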

                                2. 4

                                  So:

                                  1. The claim is false. Real computers are equivalent to finite state machines, not linear bounded automata. This is trivial to show: run the classic LBA problem of checking whether parentheses match. There is some n such that after n (’s the program will be unable to record how many it has seen. QED.

                                  Edit: I couldn’t help myself and went and fixed the Wikipedia page.

                                  2. I think the meaning may be that some languages have, in theory, bignum types that are not finitely bounded. Of course, C supports bignums from a library so it’s wrong on that too [edit for personal CoC].
                                  1. 3

                                    Would there also be some FSMs they can’t simulate, because those FSMs have too many states?

                                    1. 1

                                      Sure, you can have a machine with 256^(2^64) + 1 states, but that just means you need a bigger computer. :)

                                      1. 1

                                        definitely.

                                      2. 2

                                        The argument TFA is making excludes “external” bignum libraries, since those are not part of the core language. If the libraries were implemented using pure standard C as part of our exercise, the arguments TFA makes would still apply (there are a finite number of pointers in C, every object can have a pointer obtained for it, and thus the total number of objects and thus distinct tape locations is finite).

                                        I believe I countered those arguments in my other comment, however, by using the stdio functions to emulate a Turing machine tape.

                                        1. 1

                                          I think the article is silly, but bignums don’t help. The claim as I understand it is that a Turing machine has a tape of infinite cells. It is an infinitely sized array, which C does not have. Even with a bignum implementation that can go to infinity (practically impossible), that’s the range of one value, not the number of different values. Turing machines have a finite range of values (symbols), too.

                                          1. 2

                                            If you have unbounded integers – bignums in some theory – then you have an infinite tape. Think of the tape as holding a string of bytes: if you can pull bytes off the bignum, you can read the tape at any position.

                                            bignum_t data_position(bignum_t position, bignum_t tape) {
                                                /* bignum_t: hypothetical unbounded integer; walk to the cell */
                                                while (position > 0) { tape = tape / 256; position--; }
                                                return tape % 256; /* the byte stored at that cell */
                                            }

                                            1. 2

                                              I was thinking about that, wasn’t immediately sure if the conversion was legit.

                                              But in any case, the same rule applies. The bignum is declared by the standard to be of finite size. Or a linked list of finite elements.

                                              1. 1

                                                Does the C standard say anything about bignums?

                                                Anyways, it’s a silly game. Turing machines are intended to represent a model of what is “effectively computable” in the mathematical sense, which is only loosely connected to what is computable by our computers.

                                              2. 1

                                                C doesn’t have unbounded integers. A bignum library is still written in C. The C standard requires that implementations have bounded memory.

                                                1. 1

                                                  Of course, all programming languages run on computers with bounded memory. So I guess your point is that the higher level of precision in the C specification means one cannot, incorrectly, interpret the standard as permitting “integers” that have infinite range?

                                                  1. 1

                                                    Of course, all programming languages run on computers with bounded memory.

                                                    No they do not. Languages don’t ‘run’. You can give them meaning with semantics, and there are many possible interpretations of C, some of which are implementable.

                                                    So I guess your point is that the higher level of precision in the C specification means one cannot, incorrectly, interpret the standard as permitting “integers” that have infinite range?

                                                    It’s not that C has a ‘higher level of precision’. In fact the C standard is not precise at all. It has lots of ambiguities, and it has a lot of behaviour that is simply left undefined.

                                                    The entire concept of turing completeness is that it’s about operations of potentially unbounded size. You can absolutely implement a turing machine. All you need to do is say ‘whenever an operation would go beyond the bounds of the tape, extend the tape’. A turing machine doesn’t have actually infinite memory, it has potentially infinite memory.

                                                    C in comparison is required to have a fixed amount of memory. Of course you cannot give it an actually infinite amount of memory, but you could implement it in an ‘if we take up too much space, add more space’ kind of way, as is possible with Brainfuck. But because of the specification, this isn’t even possible in theory.

                                                    1. 1

                                                      The entire concept of turing completeness is that it’s about operations of potentially unbounded size. You can absolutely implement a turing machine. All you need to do is say ‘whenever an operation would go beyond the bounds of the tape, extend the tape’.

                                                      But, in fact, there is no computer or computer system in the world that can actually do that.

                                                      My bridge design is better than yours, because yours has a finite weight bearing specification and my specification is to add additional trusses whenever you need to.

                                                      1. 1

                                                        But, in fact, there is no computer or computer system in the world that can actually do that.

                                                        What does that have to do with anything? There’s no computer in the world that can actually represent an arbitrary real number either. Does that mean that the mathematics of real numbers is any less interesting or useful? Not at all. Turing-completeness is computer science. It’s theoretical.

                                                        My bridge design is better than yours, because yours has a finite weight bearing specification and my specification is to add additional trusses whenever you need to.

                                                        This isn’t about bridges, it’s about Turing-completeness. You don’t need to use analogies; the subject at hand can be discussed directly. Nobody is saying C11 is a bad specification. This is not a case of ‘your language design is worse because it’s not Turing-complete’. It isn’t Turing-complete. That’s all there is to it. It’s not a value judgement.

                                                        Also, your analogy implies there’s something vague about ‘extend the tape whenever you have to’. There isn’t.

                                                        1. 1

                                                          Does that mean that the mathematics of real numbers is any less interesting or useful?

                                                          I’m utterly at a loss to see where you are going with this. If I state that Zk is a finite group, that doesn’t mean I think there are no infinite groups or that the theory of infinite groups is not interesting. Programming languages describe programs for actual computers; they are very different from e.g. lambda calculus or PR functions. I guess it’s possible you are using “programming languages” in some sense that applies to mathematical notation for effective computation, but otherwise you appear to be arguing that the approximation of Z in, say, Haskell or Lisp is the same as Z because nobody bothers to tell you in the language specification the obvious fact that the approximation is not the same as Z, because it rests on finite-length bit patterns.

                                                          1. 1

                                                            I’m utterly at a loss to see where you are going with this. If I state that Zk is a finite group that doesn’t mean I think there are no infinite groups or that the theory of infinite groups is not interesting.

                                                            Then why are you talking about what real computers can actually do in a discussion about Turing-completeness? That’s like people talking about some property of infinite groups and you going ‘but that doesn’t hold in finite groups’. Yeah nobody is talking about finite groups. What computers can ‘actually do’ has literally nothing to do with whether programming languages are Turing-complete.

                                                            Programming languages describe programs for actual computers; they are very different from e.g. lambda calculus or PR functions. I guess it’s possible you are using “programming languages” in some sense that applies to mathematical notation for effective computation, but otherwise you appear to be arguing that the approximation of Z in, say, Haskell or Lisp is the same as Z because nobody bothers to tell you in the language specification the obvious fact that the approximation is not the same as Z, because it rests on finite-length bit patterns.

                                                            Scheme is basically untyped lambda calculus with some built-in functions and Haskell is basically typed lambda calculus with some built-in functions, some types and a bunch of syntactic sugar. Programming languages are formal languages. Whether people have implemented them on computers has nothing to do with this discussion. They’re formal languages that can be given formal semantics and about which questions like ‘Is there a limited number of algorithms that can be expressed in this language?’ can be asked.

                                                            Haskell and Lisp don’t have an ‘approximation of Z’. They have things called integers that behave like mathematical integers. For example, R4RS permits implementations to restrict the range of its types.

                                                            Implementations may also support only a limited range of numbers of any type, subject to the requirements of this section. …

                                                            Implementations are encouraged, but not required, to support exact integers and exact rationals of practically unlimited size and precision, and to implement the above procedures and the / procedure in such a way that they always return exact results when given exact arguments. If one of these procedures is unable to deliver an exact result when given exact arguments, then it may either report a violation of an implementation restriction or it may silently coerce its result to an inexact number. Such a coercion may cause an error later.

                                                            But nowhere does the standard require that integers have a maximum length like C does. Nowhere does the standard require that at most N objects may be represented in memory like C does. It permits these restrictions but it doesn’t require them.

                                                            This means you can consider a hypothetical implementation where any integer is valid, where memory is unbounded, etc. and consider whether it’s possible to implement a Turing machine. And of course, it is. That’s why Scheme is Turing-complete. C, on the other hand, makes such an implementation illegal.

                                                            1. 1

                                                              Then why are you talking about what real computers can actually do in a discussion about Turing-completeness?

                                                              Because you brought up programming languages where, as in your spec, “integers” may have “practically unlimited size” - a specification that is impressively imprecise, but sufficiently clear to make the point.

                                                              As for Scheme being untyped lambda calculus, you can twist yourself into a pretzel and find some justification for set! (and set-car!, set-cdr!) if you want, but I don’t see the utility or find it that interesting.

                                                              1. 1

                                                                Because you brought up programming languages where, as in your spec, “integers” may have “practically unlimited size” - a specification that is impressively imprecise, but sufficiently clear to make the point.

                                                                The specification does not state that integers may have practically unlimited size. That’s not a specification. It’s a non-normative suggestion to the implementer.

                                                                The specification does not give a maximum size to integers. Why would it? It also doesn’t require that an implementation’s integers have a maximum size. Why would it?

                                                                And programming languages are formal languages like any other.

                                                                As for scheme being untyped lambda calculus , you can twist yourself into a pretzel and find some justification for set! (and set-car, set-cdr ) if you want, but I don’t see the utility or find it that interesting.

                                                                I’m not talking about set!.

                                          2. 2

                                            I don’t understand this. At some point all languages end up with the same representation (machine code).

                                            The implementation of a language is not the same as the semantics of the language.

                                            Could some one given an example of a language that is theoretically Turing complete in the sense the author wants to imply?

                                            Brainfuck. It doesn’t bound the amount of memory you can access, which is how it achieves Turing completeness.

                                            Of course, an implementation on a real machine can’t have access to unbounded memory, but that’s irrelevant.

                                            1. 1

                                              I don’t understand this. At some point all languages end up with the same representation (machine code).

                                              This is not true. A language is a set of strings over some alphabet. In the case of most modern languages that alphabet is Unicode, but for some languages it’s ASCII and there are even some weirder examples. The syntactic rules of a language are used to determine which strings over that alphabet are valid. "main(){}" is I believe a valid C programme. "blah(){}" is possibly a valid freestanding C programme. Possibly the empty string is too.

                                              It’s possible of course for a language to actually have no syntactic constraints at all. Most have many. In very few cases is the formal grammar in a language specification actually the real grammar. Things like ‘is a variable actually defined everywhere it is used’ are not normally included in those. They give a context-free approximation of the grammar.

                                              The representation of a language is the language. It’s not machine code. A translation into machine code is a semantics for a language.

                                              The semantics of a programming language are ‘what those strings mean’. What does main(){} mean? What does 1 + 2 mean? The specification of a language in some way constrains the possible semantic interpretations of the language. For example the C specification requires that if you cast a pointer to char* then back you’ll always get the same pointer. But at the end it also has a rule that relaxes the requirements on compilers: they may interpret any programme in a way inconsistent with the specification that is not observably different to an interpretation consistent with the specification. In other words, it’s the “as if” rule: you can interpret a program differently from what the spec says as long as the programmer can’t actually tell. This is what allows optimisation.

                                              Semantics don’t have to be in terms of a translation into machine code or the operation of an actual computer. Real formal semantics usually are not given in terms of the semantics of machine code. They can be given in terms of some sort of ‘virtual machine’ or hypothetical unimplementable machine.

                                              The C specification has some rules that appear to require that any implementation has only finite addressable memory. This is not true for example of the Haskell standard, as far as I am aware.

                                            1. 2

                                              I believe that C is Turing-complete: the various stdio file functions allow modifying a file in a context-free manner; you can seek forward and backward one position at a time. Thus it is trivial to emulate a Turing machine tape using these functions.

                                              The standard heavily implies that file positions must be storable in a long but doesn’t actually say that as far as I can tell: the position is something that is “internal” to the file. fseek can take a long value for an absolute position, but it can also seek from SEEK_CUR some finite number of steps in either direction. ftell returns a long but can fail in an implementation-defined manner for pretty much any reason.

                                            The Turing-completeness above doesn’t require any infinite implementation-defined limits (file sizes are not mentioned among the implementation limits at all). If we’re allowed to have higher implementation-defined limits for language features, we could also implement a dual-stack push-down automaton using getc/ungetc for one stack and the (unlimited) recursive function call stack for the other stack.

                                              1. 1

                                              I posed this argument to the author of the article, and he claimed it is not valid. His reasoning (I believe; this was a while ago, so I might be misrepresenting him) was that the C standard never requires an attached hard disk, and it is perfectly valid for file access to never succeed.

                                                1. 1

                                                  (author here) Sure, I’m willing to accept that “C + at least one file of unbounded capacity” is Turing complete. As voronoipotato commented below, this is pretty much just a contrived mental masturbation exercise; the scope of my argument is limited to things guaranteed by the standard.

                                                  1. 1

                                                  Which language standards require a file of unbounded capacity?

                                                    1. 2

                                                    None, but many languages don’t explicitly bound (in the standard) storage in RAM.

                                                      1. 1

                                                        “the scope of my argument is limited to things guaranteed by the standard.”

                                                        1. 1

                                                          Brainfuck’s specification doesn’t require an unbounded tape, though; it allows it to be as small as 30,000 cells. If you’re allowed to consider only Brainfuck implementations with an unlimited tape, then why can’t we consider C implementations with an unbounded file size?