1. 1

    What kind of computer can simulate this Game of Life with a board size of 3Mx11M? Terabytes of memory? Petaflops for smooth play?

    1. 4

      Optimized GoL is apparently a solved problem. https://en.wikipedia.org/wiki/Hashlife is one option, but I believe there are others.

      1. 2

        Looking at the algorithm just a bit, I don’t think any desktop computer as of today can even detect the cycle of 2^44 cells in reasonable time, let alone respond to user input to move the Tetris blocks. You need a TB just to represent the quadtree.
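
        For a rough sense of scale, here’s a back-of-envelope check in Python (assuming the 3M x 11M board from the top comment; this only bounds a naive bitmap, not Hashlife’s shared quadtree):

        ```python
        # Rough scale check for a 3,000,000 x 11,000,000 board (the figures
        # are taken from the top comment; treat this as an estimate only).
        width, height = 3_000_000, 11_000_000
        cells = width * height                      # ~3.3e13, i.e. about 2^44
        print(f"cells ~= 2^{cells.bit_length() - 1}")

        # Even a flat one-bit-per-cell bitmap needs a few TiB:
        print(f"naive bitmap: {cells / 8 / 2**40:.1f} TiB")

        # Hashlife deduplicates identical quadtree nodes, so its real footprint
        # depends on how repetitive the pattern is; this is a ceiling for the
        # naive representation, not for Hashlife itself.
        ```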

        1. 1

          I thought you’d only hash the small slices? Can’t you hash each loop of an individual OTCA Metapixel? Would a whole VarLife (wireworld) cell fit in a hash “chunk”?

          I’ll admit that my mind just goes blank when I think about how the interaction between metacells is supposed to be handled in Hashlife. …But, I know it works because I saw some YouTube videos of “Life in Life”. :)

          1. 1

            I guess you can get away with 2^34 cells on the order of a few GB of memory, which can give you more than 100x100 OTCAs. An extra factor of 1k is just too much, IMHO.

            1. 1

              Update: I downloaded their https://github.com/QuestForTetris/QFT/blob/master/TetrisOTCAMP.mc.gz and ran it in Golly for 1.2 trillion generations. I’m not really familiar with Golly… I saw some OTCA cells flash ‘on’ for a second around the two-billion-generation mark, but it was only a few of them, and I never saw it happen again. Of course, I was running at 2^8 most of the time, so maybe I skipped over them?

              I could see LWSS (I think) moving around the perimeter of each OTCA cell, but overall the thing was still.

              I did this in a 4-core VM with 4GB of ram.

    1. 2

      Hmm.

      I often speak about the spectrum of testing, from the finest grained unit tests to full system end to end tests.

      As soon as you move away from either end, the amount of setup and dependency cutting you have to do increases.

      As soon as you move away from the Finest Grained Unit test, the fragility of the test increases (i.e. it tends to break due to changes in things other than the particular behaviour under test).

      Therefore I advocate doing the Finest Grained Unit Testing.

      What is that? In the C world, the smallest chunk I can throw at the linker is a .o file. That sort of defines it for me.

      In the Ruby world, I usually can step down to test smaller chunks.

      The aim is to thoroughly verify the required behaviour in a manner that is robust (only breaks due to changes in the code that implements the behaviour under test) and provides excellent Defect Localization: tell me which test case failed, and I will have an excellent idea which line of code is broken.
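
      A minimal sketch of what I mean by a finest-grained test (Python here purely for illustration; the function and the test names are made up):

      ```python
      import unittest

      def parse_price(text: str) -> int:
          """Toy function: parse a price like '$1.50' into cents."""
          dollars, _, cents = text.lstrip("$").partition(".")
          return int(dollars) * 100 + int(cents or 0)

      class ParsePriceTest(unittest.TestCase):
          # Each test pins one tiny behaviour; a failure points straight
          # at parse_price, which is the Defect Localization I'm after.
          def test_dollars_and_cents(self):
              self.assertEqual(parse_price("$1.50"), 150)

          def test_whole_dollars(self):
              self.assertEqual(parse_price("$2"), 200)

      if __name__ == "__main__":
          unittest.main()
      ```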

      1. 1

        The aim of “thoroughly verifying behavior” is a good one… sometimes. Consider the example I gave of 3D rendering, though: the only thing you need to verify is that the output looks good to a human. Or consider a prototype: you don’t need to verify the behavior, you’re just learning by doing and will then throw the code away. Automated tests, or at least fine-grained tests, are probably a waste of time; the real code will get them, and they’re not needed for a prototype/spike.

        If you say “well of course I wouldn’t do fine-grained tests on a prototype”, well, that’s my point: you have to start with goals. As experts we tend not to consciously see the steps in our thought process, so we go and tell junior programmers “finest grain testing” and then get confused when they go and do it in places where it’s obviously (to us) inapplicable. And what we start with when we decide how to test is goals: you could test a C function in isolation, but it would take too long, so you don’t. Part of your goal is “ship code on time”. So it’s important to make it explicit.

        1. 3

          I only regret testing when I don’t do it. I almost never see a prototype replaced by “real” code. I’d prefer to build tests for a “prototype” for 2 reasons: 1) it’ll grow into the real thing sooner rather than later, and 2) I use tests to ferret out my goals anyway.

          Writing microtests (and refactoring) helps me write only the production code I need. I tend to ship faster with tests. There are exceptions, but they tend to have more to do with whether the code is integrating with some external API than how low-level the language is.

        2. 1

          Mike “GeePaw” Hill has been using the term Microtests (https://www.youtube.com/watch?v=H3LOyuqhaJA) for a while now. It’s a descriptive name, and I like it. Lots of very very tiny tests which give you confidence in your code while still allowing you to change or refactor your code.

        1. 26

          I’m wholly unconvinced a single number type is a good thing. It’s true, you don’t need to think as hard if you unify your integer and floating-point types. But that just provides an opening to introduce subtle new bugs in cases where the semantic differences between the two matter.
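
          One concrete illustration (Python, my own example rather than the parent’s): once integers are stored as doubles, they silently stop behaving like integers above 2^53.

          ```python
          # Illustrative only: the kind of subtle bug a unified number type invites.
          n = 2 ** 53
          print(n + 1 == n)                  # False: arbitrary-precision ints
          print(float(n) + 1 == float(n))    # True: as a double, the +1 is lost
          ```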

          1. 15

            I can’t even slightly imagine a world where unifying naturals or integers with reals is anything but a giant explosion waiting to happen. I do think there’s something interesting in pretending that numbers are ideal (thus infinite) by default and having Int32, Int64, etc. only as special cases.

            1. 2

              Ermm, don’t Lisps do that?

              1. 8

                Scheme has something like that, but it’s a bit more finicky. Instead of taking the full algebraic perspective, Scheme distinguishes abstractly between exact and inexact values and computations. It’s dynamically typed, however, so it’s a little difficult to say exactly whether or not it truly “unifies integers and reals” in any sense.

                JavaScript is well known for making that unification, and it lacks any notion of exact/inexact. So, instead, you just face a bunch of issues.

                Real numbers probably just straight shouldn’t support equality. It’s a theoretically undecidable operation.
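
                A rough Python analogue of the exact/inexact split (Fraction standing in for Scheme’s exact numbers; illustrative only):

                ```python
                from fractions import Fraction

                exact = Fraction(1, 10) + Fraction(2, 10)   # exactly 3/10
                inexact = 0.1 + 0.2                         # 0.30000000000000004

                print(exact == Fraction(3, 10))   # True: exact arithmetic
                print(inexact == 0.3)             # False: rounding crept in

                # Mixing the two silently degrades to inexact, which is the kind
                # of unification that hides bugs when integer/real semantics differ.
                print(type(Fraction(3, 10) + 0.2))  # <class 'float'>
                ```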

                1. 4

                  Relatedly, does anyone know much about what it would take to replace “real numbers” with “real intervals”? It seems like something that has certainly been tried in the past. It would neatly solve equality issues by replacing them with more quantitative measures like “what is the (relative) measure of the overlap of these two intervals?”

                  Do real intervals form a field? (They ought to.) What does linear algebra look like here?
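
                  For what it’s worth, here’s a toy sketch (my own code, no particular library) of naive interval arithmetic with a relative-overlap measure standing in for equality:

                  ```python
                  from dataclasses import dataclass

                  @dataclass(frozen=True)
                  class Interval:
                      lo: float
                      hi: float

                      def __add__(self, other):
                          return Interval(self.lo + other.lo, self.hi + other.hi)

                      def __mul__(self, other):
                          p = [self.lo * other.lo, self.lo * other.hi,
                               self.hi * other.lo, self.hi * other.hi]
                          return Interval(min(p), max(p))

                  def overlap(a, b):
                      """Relative overlap: length of intersection over length of union."""
                      inter = min(a.hi, b.hi) - max(a.lo, b.lo)
                      union = max(a.hi, b.hi) - min(a.lo, b.lo)
                      return max(inter, 0.0) / union if union > 0 else 1.0

                  x, y = Interval(0.9, 1.1), Interval(1.0, 1.2)
                  print(x + y, x * y, overlap(x, y))   # overlap ~0.33 instead of ==
                  ```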

                  1. 3

                    Mathematically, I agree that real intervals should form a field (I haven’t looked into the corner cases, though). In the context of ‘Real numbers probably just straight shouldn’t support equality,’ I’m unsure how you’re going to define your identity values for + and *, since they depend on equality of real intervals, which presumably depends on equality of reals. There’s a similar argument for inverses.

                    1. 2

                      Why do they depend upon equality? You’d need some form of equality to prove their properties, but their operation shouldn’t require it.

                    2. 1

                      Spire has an interval type but it relies on an underlying implementation (which might be “computable real”). I don’t see what you’re suggesting or how it would work - if you want to compute precisely with real intervals then you just do twice as much work. If you want to compute with single values and have strict control over the error margins, wouldn’t you just work with computable reals?

                      Intervals form a semiring but not a field.
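
                      Concretely (a toy illustration of why): interval subtraction widens rather than cancels, so there is no additive inverse and hence no field.

                      ```python
                      def sub(a, b):
                          # [a0, a1] - [b0, b1] = [a0 - b1, a1 - b0]
                          return (a[0] - b[1], a[1] - b[0])

                      x = (0.0, 1.0)
                      print(sub(x, x))   # (-1.0, 1.0), not (0.0, 0.0)
                      ```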

                      1. 1

                        I’m more spitballing than thinking hard about it—evidenced by the fact that I didn’t even bother checking whether they were a field myself.

                        I just think it’s really interesting that real numbers are non-computable. Handling this sanely has roots in Brouwer and likely you could find even earlier ones. Andrej Bauer has some interesting papers on this as well, if I recall.

                      2. 1

                        I did some experiments with plotting formulas with interval arithmetic last year. (The formula parser is still broken because all the operators associate right.) Playing around with it may give you some ideas for what works well and what works badly in naïve interval arithmetic. There’s been some work that I don’t fully understand by Jorge Eliécer Flórez Díaz extending interval arithmetic to do super-efficient ray-tracing which I think may solve some of the problems.

                      3. 1

                        “Real numbers probably just straight shouldn’t support equality. It’s a theoretically undecidable operation.”

                        Theoretically undecidable under what model? Tarski demonstrated real closed fields are complete and decidable.

                        1. 2

                          I think the difference is that while decidability pertains to equality of arbitrary formulae over reals, I’m talking about arbitrary equality of values. If you receive two real numbers knowing nothing more about them and wish to build an algorithm for testing equality then you’ll hit an infinite regress. You can easily show two reals are equal up to some pre-defined precision, though.
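
                          A toy sketch of that idea (my own encoding, purely illustrative): model a computable real as a function from n to a rational within 2^-n of the value; you can then certify agreement to any chosen precision, but never exact equality.

                          ```python
                          from fractions import Fraction

                          def sqrt2(n):
                              """Return a rational within 2^-n of sqrt(2), via Newton's iteration."""
                              x, eps = Fraction(3, 2), Fraction(1, 2 ** (n + 1))
                              while abs(x * x - 2) > eps:
                                  x = (x + 2 / x) / 2
                              return x

                          def equal_to_precision(a, b, n):
                              """Certify that a and b agree to within roughly 2^-n.  Deciding
                              true equality would mean checking every precision: the regress."""
                              return abs(a(n + 2) - b(n + 2)) <= Fraction(1, 2 ** n)

                          print(equal_to_precision(sqrt2, sqrt2, 50))   # True, up to ~2^-50
                          ```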

                          1. 1

                            Hm. I’m going to have to go back to my modern analysis texts, but I’m pretty sure Tarski-Seidenberg provides for equality: by constructing the reals through Dedekind cuts over the rationals, you can guarantee a unique representation for each real number.

                            This is all mathematical, though - while I’m 99% sure Tarski has an algorithm to prove two reals are equal, I think the implementation is unreasonably slow. I don’t know what’s new on this front. But I know which textbook I’m cracking open over the weekend!

                            1. 1

                              I’d be very interested to see that. I believe it’s possible to prove that two reals constructed from some algebra of reals are equal (though I think the algorithm is superexponential), but I’m fairly sure there’s no constructive function of type (n m : Real) -> Dec (n = m), i.e. no general decision procedure for equality. Uniqueness of construction is a little different, though. You probably would get away with proving that by stating that if there were two representations you’d derive a contradiction.

                  2. 7

                    Agreed. In particular, many languages would benefit from the addition of first-class rationals, complex numbers, etc.
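
                    Python is a handy existence proof here (illustrative): rationals and complex numbers are already ordinary, first-class values.

                    ```python
                    from fractions import Fraction

                    print(Fraction(1, 3) + Fraction(1, 6))   # 1/2, no rounding error
                    print((1 + 2j) * (1 - 2j))               # (5+0j), complex built in
                    ```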

                    1. 5

                      Obligatory Guy Steele link: https://www.youtube.com/watch?v=_ahvzDzKdB0