1.
    1.

      A long time ago trains were hot and new. Everyone saw everything through the lens of trains and steam: ocean problems got train solutions, so people imagined crossing the ocean by train. And it’s not that we can’t build ocean-crossing trains today; it’s that planes happened, so we don’t want to. So today, when graphics are hot, we presume that if a civilization can make a simulation, it will. What if it makes something else eventually? What if the sim runs for a while but is overtaken by something else, where a sim isn’t needed? Where the train is perfect for train problems, but no one could have predicted flying? A simulation, I guess, is a solution to all problems, because you have mastery over the environment and all perception is perfect. But I could imagine mastery of the host environment, or something like that, instead.

      I like the logic behind destruction vs. advancement, but I don’t like the presumption of future projects. Of course these people have thought about this idea a lot more than I have; I was just chewing on it recently and realized: “wait, what if they don’t build a sim?”

    2.

      Within Bostrom’s line of reasoning, the possibility of a Simulation Barrier seems to put a limit on what any world can know, from the inside, about what lies outside it.

      I think Bostrom’s argument gets around this by not requiring that we know what’s outside our world, only what’s (conceptually) in it.

      Bostrom proposes that simulating our current world is possible, to the degree that the simulants will themselves be able to run simulations. If we show that such a thing is true, we have not made any statement about what is outside our world per se, but simply that, due to the (in this scenario) now-proven conjecture, simulated worlds that resemble ours far outnumber “real” ones, and thus we are likely in a simulation.
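      That counting step can be made concrete with a toy calculation (my own sketch; the numbers are made up purely for illustration): if each world that reaches this stage runs several simulations, and those nest, the simulated worlds swamp the single “real” one.

```python
# Toy model of the counting argument. Illustrative numbers only:
# assume one base world, each world runs `n` simulations, and
# nesting continues for `depth` levels.

def simulated_fraction(n: int, depth: int) -> float:
    """Fraction of all worlds that are simulated."""
    real = 1
    simulated = sum(n ** d for d in range(1, depth + 1))
    return simulated / (real + simulated)

print(simulated_fraction(10, 1))  # 10 sims vs 1 real -> ~0.909
print(simulated_fraction(10, 3))  # 1110 sims vs 1 real -> ~0.999
```

      Under almost any positive choices of `n` and `depth`, the fraction is close to 1, which is the whole force of the “we are likely in a simulation” disjunct.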

      The Flatland conjecture from TFA doesn’t necessarily hold, IMHO. Our universe may be simulated inside a 16-dimensional hypercube made of unimaginable material, and the universes we simulate might be on silicon, but from the perspective of the simulants it’s the same in every way that matters (what “matters” being subjective conscious experience and a mostly-consistent shared world).

      It’s like those Minecraft worlds that have working Turing machines in them. The “outermost” world is running on silicon and electricity but the first inner world is running on granite and gold (or whatever, I don’t actually play Minecraft).

      Perhaps I misunderstood TFA; please correct me.

      (Also, if you like this sort of thing, I highly suggest reading… well, anything by Greg Egan. Permutation City might be the best start.)

      1.

        Thanks for the ref and thoughts. What is TFA?

        1.

          “The ahem Fine Article,” i.e. the linked article, by analogy with “RTFM”

    3.

      I simply have to plug John Michael Godier’s YouTube channel here: https://youtu.be/vcvU6UMYRHM

      Before this channel, I wasn’t aware of anyone who specialized in being speculative but who also took science seriously enough not to let their speculation come at the cost of what we know to be true. There is apparently a word for this type of person: a futurist.

      It’s pop sci, but it’s highly entertaining for someone who usually shakes their head at the nonsense speculation typically found on YouTube.

      All of his videos are great. Even if you’re not into this one in particular, it’s worth subscribing to him, because he often reports on the scientific news of the day. For example, I remember someone on HN telling me that the Wow! signal was probably a comet. Yet apparently the truth is the opposite: https://youtu.be/RAZaRYcDFEM

      Anyway. Apologies for the shilling. I just love “speculative science” like the simulation argument.

    4.

      A thing I thought of about this simulation stuff. If we’re a simulation, and we eventually produce our own simulation of a civilization just like ours, with individuals just as smart as we are, then wouldn’t that simulated civilization produce its own simulation of another equivalent civilization? Which then goes on to produce its own, and so on, until we have an infinite number of simulations just under us. Presumably there would be another infinity of simulations above us too. This seems kind of implausible, like it has to stop somewhere, right?
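      One back-of-the-envelope reason the tower can’t actually be infinite (my own toy sketch, with made-up numbers): each nested simulation can only use some fraction of its host’s compute, so resources shrink geometrically with depth and eventually can’t support a civilization-scale sim.

```python
# Toy sketch: nesting depth is finite if each simulation gets only
# a fraction of its host's compute. All numbers are made up.

def max_nesting_depth(host_flops: float, fraction: float,
                      min_flops: float) -> int:
    """Levels of simulation that fit before compute runs out."""
    depth = 0
    budget = host_flops
    while budget * fraction >= min_flops:
        budget *= fraction
        depth += 1
    return depth

# e.g. a 1e40-FLOPS host, each sim taking 1% of its host's compute,
# and 1e21 FLOPS assumed necessary to simulate a civilization:
print(max_nesting_depth(1e40, 0.01, 1e21))  # 9 levels
```

      So the tower below us would be deep but finite, and the tower above us bounded by whatever the outermost substrate can afford.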

    5.

      I think others have pointed out that talking about probabilities here involves something like sampling over a space of hypothetical possibilities larger than the billionth Ackermann number. It’s debatable whether such a discussion is meaningful, but let’s suppose these possibilities all “exist” and that we can talk about their likelihood, and so about the probability that we’re in one of the “simulated world” possibilities rather than one of the “non-simulated world” possibilities.

      The thing here is that, given the existence of these zillions of possible simulations, we have to assume that there are many such simulations arbitrarily close to “our world”. So suppose that in one particular simulation, “our simulation”, a God-like entity suddenly decides to cheat and start playing favorites, or committing whatever rules violations it chooses. One can assume that in many of the other hypothetical simulations no cheating happens and the world continues to have a consistent logic. Between the various very-close worlds, which is really us? One can’t answer that directly, but one can note that an internally consistent trajectory continuing forward seems likely. We can call that us; it seems like what we’ve experienced so far.

      To put it in Bostrom’s language: if a substrate is low-level and consistent enough, an intelligent entity can’t know, and doesn’t need to know, what the substrate “exists in”. Indeed, whether the substrate exists in a single place at all is debatable. This seems especially true if we’re taking an “all the hypotheticals exist” approach in our argument.