  2. 8

    Emulation is a fascinating rabbit hole insofar as perfection is basically impossible.

    You could spend your entire life on a single machine and never get it perfect. But you could get it to the point of being indistinguishable to a human observer fairly rapidly (say, 5-10 years). So that’s what we do.

    I suspect that in another 10-30 years, we’ll increasingly just be scanning in (at least older) system chips and mapping it all out, Visual 6502-style. If that code is then optimized, refined, and boiled down to readily available FPGA cores and the like, it has the potential to render the things we’ve spent our whole lives reverse engineering and emulating redundant.
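
    To make that concrete: below is a minimal Python sketch of the Visual 6502-style approach, using a made-up two-gate netlist (an SR latch) rather than real die-scan data. Once a netlist has been recovered from die photographs, "emulation" reduces to re-evaluating the recovered gates until the node states settle, with no behavioral reverse engineering needed.

    ```python
    # Each gate: (output_node, gate_type, input_nodes). This tiny hand-made
    # netlist stands in for the thousands of gates a real die scan yields.
    NETLIST = [
        ("q",    "nor", ("r", "qbar")),   # SR latch built from two
        ("qbar", "nor", ("s", "q")),      # cross-coupled NOR gates
    ]

    GATES = {"nor": lambda inputs: not any(inputs)}

    def settle(nodes, netlist, max_iters=100):
        """Re-evaluate every gate until no node changes value."""
        for _ in range(max_iters):
            changed = False
            for out, kind, ins in netlist:
                new = GATES[kind]([nodes[i] for i in ins])
                if nodes[out] != new:
                    nodes[out] = new
                    changed = True
            if not changed:
                return nodes
        raise RuntimeError("netlist did not settle (oscillating?)")

    nodes = {"s": False, "r": False, "q": False, "qbar": True}
    nodes["s"] = True             # pulse the Set input...
    settle(nodes, NETLIST)
    nodes["s"] = False            # ...then release it
    settle(nodes, NETLIST)
    print(nodes["q"])             # True: the latch remembers the pulse
    ```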

    But I’m okay with that. In fact, I hope it happens that way. Black box reverse engineering is immensely draining and challenging, and less popular systems are often heavily neglected, while the more popular systems receive so much attention that there’s endless reinventing of the wheel, which is not a useful allocation of rare talent. Preserving history is what matters the most here.

    1. 3

      I have to acknowledge the massive differential in experience here to qualify my thoughts. I am talking more to the audience than to you, who know and have experienced all I’m about to say.

      Emulation today is largely adversarial. The systems being emulated depend on being opaque. Opacity allows for security through obscurity, which does not solve the DRM Problem but defers it long enough to prevent pirates from releasing same-day cracks, and to preserve face. However, the systems being emulated are also sold on a market by a business for a putative profit. Horizontal business alliances are therefore desirable, since horizontal integrations improve both economies of scale and the quality of the supply chain. These same alliances also lower the prices of individual parts in the supply chain, commoditizing them and breaking down the adversarial condition: the parts needed to reconstitute ad-hoc versions of the original system become widely available.

      To cut to the point, we are only fighting with console manufacturers and PC game publishers up to the point that they dictate the hardware used to play the game. As hardware becomes more generic and affordable, publishers lose control over the hardware. Reverse-engineering becomes more symbolic, climbing towers of abstraction, giving us simpler and more powerful tools for emulation, cheating, homebrew, TAS, etc.

      Your point about perfection is very sharp. I wonder whether we may find a relaxed notion of correctness which we can use instead. We often do not care about the precise nature of the internal state of an emulation; we only care about its ability to reproduce specific effects. Similarly, we do not care about the structure of its evaluation, only that it is fast enough to execute a certain number of steps per second. Perhaps there may come a day when we have enough symbolic insight to archive games by abstract specification, rather than by executable bytes alone. In that day, we might expect that emulators never have to be written from scratch, but composed from structured modules.
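
      As a toy illustration of that relaxed notion, here is a Python sketch of correctness up to observation: two emulators count as interchangeable if they produce the same observable trace, however differently their internal state is represented. The emulator objects and their load/run_frame/framebuffer methods are hypothetical stand-ins, not any real emulator’s API.

      ```python
      import hashlib

      def observable_trace(emu, rom, frames):
          """Record only what a player could observe: per-frame framebuffer hashes."""
          emu.load(rom)
          trace = []
          for _ in range(frames):
              emu.run_frame()
              trace.append(hashlib.sha256(emu.framebuffer()).hexdigest())
          return trace

      def observationally_equivalent(emu_a, emu_b, rom, frames=600):
          """Equivalence up to observation: internal state is never compared."""
          return (observable_trace(emu_a, rom, frames)
                  == observable_trace(emu_b, rom, frames))
      ```

      Audio samples, input latency, or any other player-visible channel could be folded into the trace the same way; the point is only that the comparison never inspects registers or RAM.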

      (It bothers me that I do not have a good ready link to a description of the DRM Problem. It is the obvious one: DRM producers give consumers a key and a lock, and then attempt to make access conditional when the consumer can just(ly) unlock the produced content once and for all.)
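
      (A toy Python sketch of that bind, with a deliberately weak XOR cipher standing in for whatever scheme a real vendor uses: the consumer’s machine must eventually hold both the lock and the key, so nothing stops it from keeping the unlocked copy.)

      ```python
      from itertools import cycle

      def xor_cipher(data: bytes, key: bytes) -> bytes:
          # Weak stand-in for whatever cipher a real DRM scheme uses.
          return bytes(b ^ k for b, k in zip(data, cycle(key)))

      game = b"the actual game content"
      key = b"secret"
      shipped = xor_cipher(game, key)   # the "lock" the consumer receives

      # At playback time the consumer's machine must hold the key...
      unlocked = xor_cipher(shipped, key)

      # ...and nothing stops it from keeping the unlocked copy forever.
      assert unlocked == game
      ```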

      1. 1

        I suspect that in another 10-30 years, we’ll increasingly just be scanning in (at least older) system chips and mapping it all out, Visual 6502-style. If that code is then optimized, refined, and boiled down to readily available FPGA cores and the like, it has the potential to render the things we’ve spent our whole lives reverse engineering and emulating redundant.

        From a practical I-just-want-to-run-this-software-correctly point of view, this forecast looks very plausible and even desirable. There is, however, a sort of side effect of all those reverse engineering efforts that we should preserve: clear documentation for posterity. To quote MAME:

        MAME’s purpose is to preserve decades of software history. […] This is achieved by documenting the hardware and how it functions. The source code to MAME serves as this documentation. The fact that the software is usable serves primarily to validate the accuracy of the documentation (how else can you prove that you have recreated the hardware faithfully?).

        Scans of decapped chips will help us preserve old hardware and software, but not understand them.

        But to rephrase what you said: is understanding every single detail of a weird, forgotten IC worth a lifetime of hard study?

        (In the humanities, where retrospection is common, the answer would be a clear yes. In the hard sciences, where discovery is the main focus, the answer is probably not so clear.)

        1. 2

          There is a case to be made both for understanding the logical behavior of systems and for adequately preserving the experience of using them.

          From a cultural perspective, emulators are important because they allow us to continue experiencing and sharing software that would otherwise be difficult to experience without legacy hardware (which will inevitably decay at some point). They stand to make our media more accessible and timeless. This was the main focus of the blog post, though you guys do raise some great points about the next facet of emulation:

          From a scientific perspective, preserving the behavior and implementation of these systems is important because it allows us to deeply understand the design and operation of that system, and having accurate documentation of that matters - because again, those systems will decay eventually. And if we want to build a truly 1:1 functional replica, we’re gonna need documentation. We literally do not know how to make Damascus steel, because the methods of creating it were not successfully preserved - we merely know that it existed.

          But I do still believe that these can be achieved separately. They definitely benefit each other mutually, and it is in our best interests to pursue both whenever possible. But ultimately I do wonder if the experience of our media might be just a little bit more important in the grand scheme of human history, as opposed to technical documentation. What do you think is more important - the specifications of the printing press that was used to print and publish Shakespeare’s works, or the content of the works themselves?

          I realize this is deeply subjective and there isn’t really a correct answer. At the end of the day, it still is in our best overall interests to preserve everything as extensively as we can.

      2. 5

        From this point forward, we will potentially have a near-perfect recreation of every machine we ever build. Every work of art that we created on those machines.

        I wouldn’t say “from this point forward”. Once intellectual property dies off, then maybe. Apple has done a pretty good job of suing emulator devs.

        1. 2

          This is a great point; I’ve edited that paragraph to address it.

        2. 3

          While preserving and archiving software and digital documents is easier than preserving physical media, it still takes funding and effort to maintain large-scale archives. Everyone who cares should consider working on software for managing the archives, sharing their own collections publicly, helping curate public collections, and donating to organizations like the Internet Archive.

          1. 1

            Absolutely. Archival and preservation are not things that happen automatically; they are still a responsibility we have to accept for ourselves. Projects like these are the reason I wrote this blog post, after all.

          2. 2

            Blech. Why is it always the people who want to gouge out your eyes with their theme that disable reader mode on their blogs? More importantly, why does Firefox respect authors’ wishes instead of my wish to avoid eye strain? If I were disabled, rather than just old and crotchety, surely it’d be a criminal act in some jurisdictions?

            1. 3

              I actually didn’t know that reader mode was disabled; I will look into fixing that. I’m sorry the white-on-black theme is too intense. Would you prefer a more muted grayscale palette? Bolder fonts? I went ahead and set it back to the default black-on-white for now, since I had not considered the visibility implications.

              1. 1

                Sorry for assuming it was done on purpose! Don’t change your design to please random internet weirdos, but if I were you I’d look into why it doesn’t support reader mode on FF. For what it’s worth, reader mode on Safari worked fine, and it’s a good article.