1. 37

  2. 5

    Once the architecture and assembly language were defined, the next step on the “software” side of the project was the creation of a higher-level language, something suitable for Tetris. Thus I created Cogol. The name is both a pun on “COBOL” and an acronym for “C of Game of Life”, although it is worth noting that Cogol is to C what our computer is to an actual computer.

    I read this as “Cogol is WAY BETTER than C.” :)

    Seriously, I have dreamed of a general-purpose cellular automata-based computer for years… This is the type of substrate on which you’d implement a universe, you know?

    1. 6

      I’m also very impressed with the engineering achievement! In strictly reductionist terms, it’s at least as cool as the original Wireworld computer and NAND-to-Tetris put together. Considerably more so, once you appreciate all the intermediate GoL gadgetry.

      But… in a way, it’s actually a little disappointing. The fundamentally interesting thing about cellular automata as a model of computation – what makes them seem like the substrate of a universe – is extremely fine-grained parallel replication of a simple fixed function, with interactions constrained by spatial relationships. This may or may not be a good way to model the fundamentals of biology (whatever those may be), but it’s a neat simplification of physics. This is quite in contrast to the standard models (Turing machines, lambda calculi, etc.), which are much more about processing arbitrary symbols with arbitrary interactions: essentially, a simplification of human language and reasoning.

      From this perspective, that we can build wires, gates, flip-flops, registers, ALUs and whatnot in a CA isn’t really that surprising. And once we can do that, of course we can make compilers for these things. It’s just retracing the history of computing machinery that brought us to where we are now.

      So, from a purely theoretical standpoint, I find things like Sexyloop or Sayama’s Swarm Chemistry much more interesting.

      1. 2

        Hm? I just assumed that we’d use some variation of Langton’s loop to print the pieces that make up this QFT machine. Or use the QFT machine to pump a stream of gliders into said printer so it could be used to print arbitrary things… Stuff like that.

        Both of my ideas are strictly silly in a sim running on your desktop: you can put any cell anywhere from outside the sim.

        It’s only in some kind of blockchain-based shared CA that these techniques would be useful! I think. :)

        1. 1

          “…and NAND-to-Tetris put together.”

          This is what I was thinking about as I read it. I had to put my projects on hold for now, but I did get the book and at least skim it. The book takes us up from logic to gates to a CPU to a system that runs the Tetris game. What I was watching in this project just makes that look like a joke of an achievement. Well, that’s to be expected, as one is an intro course whose real brilliance is turning something hard into an easier, fun, step-by-step experience. Still, it was on another level to watch them animate these gates with something that had been presented to me as merely theory in the past.

          “of course we can make compilers for these things”

          Well, I haven’t looked into the CA sub-field in a long time, but when I did they didn’t know if that was possible. The goal of newer groups was to see if they could map high-level functions to equivalent CAs with their properties of parallelism and possibly resilience. What’s the best state-of-the-art, practical work you’ve seen toward such a goal?

          1. 2

            You said practical, so I’ll say RALA, from Gershenfeld’s lab at MIT. I used to study this kind of stuff in school, but I haven’t kept up lately. Less practically, Matthew Cook (who proved Wolfram’s Rule 110 CA Turing-complete) had an interesting paper on a more efficient polynomial construction of his basic concept with r110.
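            For anyone who hasn’t played with it, an elementary CA like Rule 110 takes only a few lines to step. A minimal sketch (the width, seed, and wrap-around boundary are my own arbitrary choices, not anything from Cook’s construction):

            ```python
            # Minimal elementary cellular automaton stepper; rule 110 is the one
            # Matthew Cook proved Turing-complete. Width and seed are arbitrary.

            def step(cells, rule=110):
                """Apply an elementary CA rule once, with wrap-around boundaries."""
                n = len(cells)
                out = []
                for i in range(n):
                    # The 3-cell neighborhood indexes a bit of the 8-bit rule number.
                    idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
                    out.append((rule >> idx) & 1)
                return out

            # A single live cell grows the familiar rule-110 triangle pattern.
            row = [0] * 31 + [1]
            for _ in range(10):
                print(''.join('#' if c else '.' for c in row))
                row = step(row)
            ```

            The whole “machine” is that one neighborhood lookup applied everywhere at once, which is exactly the fine-grained parallelism under discussion.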

            But the point I was trying to make was just that, once you have built a familiar-ish sequential instruction processor in a CA, of course you can write code for the processor: that’s why you built it in the first place. But by that point you’ve abstracted away all of the properties of CA that make them so interesting, and you just have an amazingly slow, fragile, and complicated emulator. I wasn’t trying to claim that we can compile arbitrary programs to a useful or even natural CA representation. The GoL community’s growing toolset is nonetheless quite impressive.

            Ulam and von Neumann invented CAs in the 1940s as a way to think about abstract biology, and that’s still what I find most fascinating about them as a paradigm. The universal constructor concept (finished by Arthur Burks in the 1960s, after von Neumann’s too-early death) is an example of a CA that I would find fundamentally interesting.

            1. 1

              Hi, minimax. You may find this interesting. http://robust.cs.unm.edu/doku.php?id=start

      2. 2

        Oh gosh, this is amazingly fun. This guy must have spent weeks on these answers!

        1. 3

          Years! The question was asked in 2013. http://blog.phinotpi.com/2016/05/30/the-quest-for-tetris/

        2. 1

          What kind of computer can simulate this Game of Life with a board size of 3Mx11M? Terabytes of memory? Petaflops for smooth play?

          1. 4

            Optimized GoL is apparently a solved problem. https://en.wikipedia.org/wiki/Hashlife is one option, but I believe there are others.
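            The core trick, as I understand it, is hash-consing a quadtree so that identical regions are stored (and later evolved) only once. A toy sketch of just the sharing part, not the actual Hashlife recursion (all names here are mine):

            ```python
            # Sketch of the quadtree hash-consing idea behind Hashlife (not the
            # full algorithm): identical subtrees are stored once, so a huge
            # board built from repeated structure needs far fewer nodes than cells.

            _cache = {}

            def node(nw, ne, sw, se):
                """Canonicalize a quadtree node: equal quadrants -> same object.
                Children are already canonical, so identity works as the key."""
                key = (id(nw), id(ne), id(sw), id(se))
                if key not in _cache:
                    _cache[key] = (nw, ne, sw, se)
                return _cache[key]

            def empty(level):
                """An empty 2^level x 2^level region, shared at every scale."""
                if level == 0:
                    return 0  # a single dead cell
                q = empty(level - 1)
                return node(q, q, q, q)

            # A 2^30 x 2^30 dead universe costs only one canonical node per level.
            big = empty(30)
            print(len(_cache))  # 30 nodes total, not 2^60 cells
            ```

            Hashlife then memoizes the *future* of each canonical node too, which is how it fast-forwards repetitive patterns by exponentially growing step sizes.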

            1. 2

              Looking at the algorithm just a bit, I don’t think any desktop computer today can even detect the cycle over 2^44 cells in reasonable time, let alone respond to user input to move the Tetris blocks. You need a TB just to represent the quadtree.
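              For concreteness, a back-of-envelope version of that estimate, taking the roughly 3Mx11M board mentioned upthread (the exact figures are my own rounding):

              ```python
              # Back-of-envelope memory estimate for a ~3M x 11M Life board,
              # before any quadtree sharing or Hashlife memoization overhead.

              width, height = 3_000_000, 11_000_000
              cells = width * height            # ~3.3e13 cells
              naive_bytes = cells // 8          # 1 bit per cell, flat array
              print(f"{cells:.2e} cells, naive bitmap ~{naive_bytes / 2**40:.1f} TiB")
              # prints: 3.30e+13 cells, naive bitmap ~3.8 TiB
              ```

              So even a flat 1-bit-per-cell array is in the terabyte range; a pointer-heavy quadtree only helps if the board is as repetitive as this one (mostly copies of the same metapixel), which is exactly what Hashlife exploits.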

              1. 1

                I thought you’d only hash the small slices? Can’t you hash each loop of an individual OTCA Metapixel? Would a whole VarLife (wireworld) cell fit in a hash “chunk”?

                I’ll admit that my mind just goes blank when I think about how the interaction between metacells is supposed to be handled in Hashlife. …But, I know it works because I saw some youtube videos of “Life in Life”. :)

                1. 1

                  I guess you can get away with 2^34 cells on the order of a few GB of memory, which gives you more than 100x100 OTCAs. An extra factor of 1k is just too much, IMHO.

                  1. 1

                    Update: I downloaded their https://github.com/QuestForTetris/QFT/blob/master/TetrisOTCAMP.mc.gz and ran it in Golly for 1.2 trillion generations. I’m not really familiar with Golly… I saw some OTCA cells flash ‘on’ for a second around the two billion gen mark, but it was only a few of them, and I never saw it again. Of course, I was running at 2^8 most of the time so maybe I skipped over them?

                    I could see LWSS (I think) moving around the perimeter of each OTCA cell, but overall the thing was still.

                    I did this in a 4-core VM with 4GB of ram.

          2. 1

            This is so wonderful.

            It got me thinking, though: CGL can be time-stepped with a pretty simple GPU program. But why stop at the rules of CGL? Go the whole hog and represent actual transistors, gates, and other higher-level things while you’re at it, using different texel colors and even multiple layers with through-hole interconnects. Maybe people do this already; I have no idea.
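            For reference, the “pretty simple” step really is a few lines even on the CPU. A NumPy sketch of one CGL generation on a torus (the GPU version is the same neighbor sum done per texel):

            ```python
            import numpy as np

            # One Game of Life generation on a toroidal grid: each cell counts
            # its eight neighbors, then applies B3/S23.

            def life_step(grid):
                """grid: 2D array of 0/1. Returns the next generation."""
                neighbors = sum(
                    np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)
                )
                # Born with exactly 3 neighbors; survives with 2 or 3.
                return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

            # A blinker oscillates with period 2.
            board = np.zeros((5, 5), dtype=np.uint8)
            board[2, 1:4] = 1
            print(life_step(life_step(board)))  # back to the original blinker
            ```

            Swapping in other rules (or multiple layers/colors, as suggested above) only changes the bookkeeping around that one neighbor-sum-and-threshold kernel.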

            In any case, what a tour de force - wow!