1. 25
  1.  

  2. 19

    I’ve started thinking of this as “hairshirt computing.” Or “Amish computing” might be a good name too. Some people appear to be fascinated by The Olden Ways and want to recreate them for themselves — 8-bit computers, 80-column terminals, ancient operating systems.

    I know this article is about sustainability and post-apocalyptic computing, but as others have said here, it seems unlikely that computers will be useable or useful at all in such an environment. (Good luck sourcing electrolytic capacitors or RAM chips in a world gone iron-age, once all the Fry’s stores and Foxconn plants have been looted.) I suspect the author is interested in it more for the fun of messing with CP/M and PS/2 connectors.

    Most of these Amish folk seem a lot younger than the stuff they’re nostalgic for. I’m 56; I actually used CP/M back in the day. It sucked balls. This is something that literally made DOS 1.0 look like rocket science. The only justification for it is that it ran on an 8-bit 4MHz(?) CPU with 64KB of RAM and 100KB floppies.

    If your “100 year computer” has a 32-bit ARM SOC and gigabytes of Flash, why subject yourself to this instead of some kind of Unix? At roughly the same time in history, our primitive ancestors also had BSD 4.2. I also used that — it was my first Unix, running on a big-ass VAX 11/750 that the whole company timeshared. It was great! I mean, I’d never want to go back to it if I could avoid it, but it at least could run more than one program at a time and had directories to put your files in.

    1. 7

      I know this article is about sustainability and post-apocalyptic computing

      Post-apocalyptic computing isn’t mentioned once. The closest thing is a reference to collapseos’ Forth-based OS, which wasn’t deemed appropriate.

      Most of these Amish folk seem a lot younger than the stuff they’re nostalgic for. I’m 56; I actually used CP/M back in the day. It sucked balls.

      CP/M Plus has an extensible architecture. It has a ludicrous volume of professional applications and programming languages to draw on, software is still being developed for it today, and it fits offline-first tasks well. The CP/M being used is a CP/M Plus compatible implementation called MultitaskingCPM. It’s multisession, so multiple programs can be run at the same time. I too am old and have used old computers when they were current. CP/M 1 or 2.2 sucks balls to various degrees. A decently configured MultitaskingCPM does not.

      This is something that literally made DOS 1.0 look like rocket science. The only justification for it is that it ran on an 8-bit 4MHz(?) CPU with 64KB of RAM and 100KB floppies.

      DOS 1.0 didn’t have subdirectories, user areas, or support for fixed disks. The FAT12 implementation was terrible. There were hardly any products for it, and the 5150 came with 16KB of RAM vs many CP/M systems’ 32, 48 or 64KB. The 8088 in a 5150 ran at 4.77MHz.

      If your “100 year computer” has a 32-bit ARM SOC and gigabytes of Flash, why subject yourself to this instead of some kind of Unix?

      There are many reasons but perhaps the most obvious one is that the piece mostly talks about ESP32, which doesn’t have an MMU. A thin CP/M layer isn’t limited to MMU-based SoCs. If you did want something that ran an old BSD I’d recommend looking at liteBSD for the PIC32MZ.

      it at least could run more than one program at a time and had directories to put your files in.

      There is a video in the piece literally titled “Multitasking - Multisession CPM/3 (Plus) with ESP32 and FabGL”. The manual, linked from the piece, walks users through how to use multitasking and directories.

      Technical issues aside, there’s a broader issue here. You might’ve skimmed part of the piece, but you very clearly haven’t taken the time to read and understand the content. That’s ok. It’s a big piece with a lot in it and I can understand not taking it all in. But somehow, despite not having taken the time to read about the work of others, you have decided to open a comment with insults about people who take the time to do something about the current state of computing, from a system that was almost certainly built with every problem highlighted in the piece, from bloat to slave labour.

      I’m sure you’re not a horrible person, but as public statements go, is this what you want your username to be attached to? And that’s the thing: this place is often touted as having better discussion than Hacker News. I’m not convinced your comment would be fit for Hacker News. I would ask that everyone who’s commented here have a think about whether or not they really understand the piece - if they mention that this is for apocalyptic or collapse computing, they have not.

      This isn’t about the apocalypse. It’s about rethinking the way we compute in light of the power available versus common need. It’s about rethinking the choices we make when purchasing. It’s about rethinking the need to be online for things to work and what drives that. Most importantly of all, it’s designed to encourage thinking. I would ask that people here don’t fall into the trap of just reacting, because that’s what other sites are for.

      @snej If you want to build an heirloom computer that uses 4.2 BSD, I’ve given you a link to a project that will let you build the core of that. Fabrizio Di Vittorio has a board; you should be able to hook that up over serial for VGA out. I’m not trying to single you out, but your reference to ‘hairshirt computing’ is a response to my use of the term heirloom, and however it was intended, it has been received as an unnecessary personal insult.

      1. 6

        Steve, I intended no insult at all. As I said, the term “hairshirt computing” came to me a while ago, not in connection with anything you wrote. A hairshirt was a garment worn by Christian ascetics as a deliberate discomfort. I’m using it sarcastically of course, but against an idea, not any person.

        I was not aware there were more advanced versions of CP/M. I assumed it had died off unmourned soon after the IBM PC was released.

        I still don’t understand the logic of going back to ancient software. We have access to state of the art compilers that generate excellent machine code, so why retreat to coding in assembly, with a probably 10x loss of productivity? And what makes an ESP32 more ethical than a PC motherboard? They’re made in similar factories.

        As for “bloat”, it means that we can do more stuff, more easily. That’s a good thing. I remember building GUIs on a 1987 Mac SE; it was a slow and painful process. Today I can build something similar in an hour with SwiftUI. Sure, the binary is much larger and uses many more clock cycles, but human time and energy are vastly more important than that. I would hate to go back to when coding was for Real Men who memorized ISAs and knew how to parse decimal numbers in the fewest possible instructions. That’s exactly the culture that drove most of the young women out of computer science classes in the 1980s. (I was there, I was part of it.)

        1. 3

          I still don’t understand the logic of going back to ancient software.

          It’s a space I’m exploring and I decided to start more simply so I could iterate more quickly. I started with a quick and dirty BASIC I knocked up and hit dead ends. Then I emulated a ZX81, then a Spectrum, and hit dead ends. CP/M is just the first OS where I haven’t yet hit dead ends I can’t get past. I’m sure I’ll hit them with CP/M too, but it’s been small enough to extend around without hitting compatibility problems. If, on the other hand, I had put all my energy into porting a 68k Mac emulator and hit dead ends, it would just have taken longer before I reached them. If I write a custom OS, I have to write the userland, which ultimately means working towards POSIX layers, C etc., and I may as well just build a new Linux SBC at that point.

          We have access to state of the art compilers that generate excellent machine code, so why retreat to coding in assembly

          You’re not tied to assembly. So far I’ve been coding in Forth, Lisp, BASIC, tons of stuff. Last night I wrote a clear-screen tool in C; today I’ll port it to asm because it’s a few bytes vs a few K, and I think it’d make a good manual tutorial. Obviously there’s no JS or Python, but there are lots of options for CP/M that aren’t present on a lot of similar-era systems, even a MicroEMACS port.
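
          (For a concrete picture - this is a rough sketch rather than the actual tool described above - here’s roughly what the C side of such a utility might look like, assuming an ANSI/VT100-style terminal like the one FabGL emulates:)

          ```c
          /* Minimal clear-screen sketch, assuming an ANSI/VT100-style terminal.
             A real CP/M build would push these bytes through BDOS console
             output rather than stdio. */
          #include <stdio.h>

          int main(void)
          {
              fputs("\x1b[2J", stdout);   /* ESC [ 2 J : erase the whole display */
              fputs("\x1b[H", stdout);    /* ESC [ H   : move the cursor home    */
              return 0;
          }
          ```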

          with a probably 10x loss of productivity?

          I get the impression that you’re seeing this as some sort of product to be sold rather than an exploration of ways of computing. I’m not convinced that productivity in the day-job work sense of the term is what this is for. It’s for exploring more sustainable and different ways of computing. Honestly, I was blown away by just how user-friendly SuperCalc 2 really is, and WordStar 4 is actually really good. I imagine if you wanted to do those things you could, but they’re not applications I’m massively interested in exploring. The heirloom notes system is one, but not the only one. I like the fact that kids could probably learn programming on this one day. Someone wrote to me this morning and said this:

          I think often of the Japanese temples that exist with continuity, yet are rebuilt periodically due to disaster or war. Alive both in physical instantiation and in people’s heads.

          Which really chimed with me. This sort of thing is to encourage people to explore what their own digital temples might look like and what might live in them.

          And what makes an ESP32 more ethical than a PC motherboard? They’re made in similar factories.

          The ESP32 is just there because it’s got a lot of RAM for what it is. It doesn’t become more or less ethical until ESP32s are being pulled out of scrap to replace dead ones. By then there’ll be other processors that can be used - STM32s, RISC-V IoT boards - and whatever thin emulation layers are used, they should be portable between them. Also, I’m serious about LiteBSD: you can build your own with this and it’ll run BSD Unix. You’ll hit the same stumbling blocks I did a few years ago, but they may not matter to you. It’s all just exploration and learning. There are no wrong answers.

          I would hate to go back to when coding was for Real Men…

          The Retro Zer0s were built by my partner and me. I’m sure she’ll have a chuckle at that one. I can’t send you a Retro Zer0, but if you’re interested in trying the ZX20, drop me a message with some details and when the next round of prototypes is ready I’ll send you a kit.

      2. 5

        In the face of a likely collapse of the global supply chain for hardware in the next 50 years, these projects are not just nostalgia but a way to mobilize people around the preservation of computing, at least in the West and within our lifetimes. Now, to many people, fixing some bug in an obscure open source project seems more relevant than working on such stuff. Tomorrow, when this same person is old, lucky enough to be alive and still toiling in a field to put food on the table, he will regret not joining the collective effort to develop reliable, long-lasting embedded processors to automate part of his work or maybe, let him dream, a reliable home computer.

        Now we still have human technical resources that can be converted to a more responsible form of collective-driven technological development. Tomorrow we won’t. But I guess if you’re 56, this is not going to be your problem.

        1. 7

          I responded to this point in more depth in another comment here. Basically, I don’t believe we will ever be in a situation where we still have computers but they’re primitive. We will either have the tech to build full-fledged computers, or none. If the world’s tech infrastructure collapses, we’re going back to the Industrial Revolution, if not to the Iron Age, and we won’t have computers at all. The tech required to build ESP32s (or Z-80s) is basically the same tech that builds Core i5s. The next step down from chip fabs is soldering things together out of discrete transistors.

        2. 4

          Most of these Amish folk seem a lot younger than the stuff they’re nostalgic for. I’m 56; I actually used CP/M back in the day. It sucked balls.

          Concur. I didn’t use it (no way my family could afford a computer!) but I’ve set it up in an emulator to explore some of the early C compilers and it is downright terrible.

          We like to complain about stuff now, but rest assured computing was an ugly, ugly slog 40+ years ago.

          1. 3

            There are generational shifts in awareness of macro-scale political and social situations and their implications for one’s ability to project forward and so build the things you want for your future. Older generations grew up in a world plentiful in resources, money and security. Younger generations very much have not. This leads to a disparity where older generations read what younger ones are doing as foolishness and naivety, while younger generations see it as an imperative simplification to prepare for an economically and materially desolate world, let alone to exist in the current one.

            A great deal of ‘zooming out’ and empathy with other generations, and with those who don’t have anywhere near the privilege we have, is required here; otherwise, of course, you can’t understand the worldview that makes these kinds of things interesting and, actually, necessary.

            1. 5

              I grew up middle-class in the 1970s. I agree that I had better expectations of a college degree and job than people today. But more relevantly to this, may I point out that I never had access to a computer before my teens, nor a computer at home till I was 16. I never got to use any high-level languages until college. I didn’t have ARPAnet until high school, and that was over a 300 bps modem. You youngsters (sorry) are the ones who grew up with plentiful resources, technologically speaking. That’s why I’m rolling my eyes at the suggestion that throwing it all away would be a good idea.

              An economically and materially desolate world will not have computers as we know them. Either we have the tech level to run chip fabs, or we don’t. If we don’t, you’d better get used to wire-wrapping and hope someone knows how to make discrete transistors so you can hand-build something on the scale of ENIAC. You won’t be able to run CP/M on it.

              If we’re talking about today, not a Mad Max vision, even people in poor countries have access to cellphones that run modern OSs. People who want to avoid factory-built computers can make their own with $10 SOCs that resemble high-end 1990s PCs. In a world where I can buy a $30 Raspberry Pi that’s a very capable Linux system, what good is an end-user computer that’s too low-powered to run anything but CP/M?

            2. 2

              I’ve started thinking of this as “hairshirt computing.” Or “Amish computing” might be a good name too. Some people appear to be fascinated by The Olden Ways and want to recreate them for themselves — 8-bit computers, 80-column terminals, ancient operating systems.

              I think you are absolutely right. I am halfway through my 40s, but if there were an Amish computer club nearby, I would join it without hesitation. There is definitely some nostalgia at work here, but another part of it is a real frustration with the current state of affairs. I was an adult in the 90s and used DOS and Windows 3.11 and Windows 95. And you know what? Society functioned pretty well back then. I know that, because I was there. With Moore’s law still going strong, I feel there were so many missed opportunities, so many things that could have gone better.

              That is, I like the capabilities a smartphone gives me, and I like staring at the pretty pictures on my 4K monitor. But I tend to do that a lot, because I have to wait two minutes before the Jira ticket I’m working on has loaded. It’s just text and low-res pictures, but all the combined engineering effort that went into my laptop, the server and the internet was not enough to deliver it faster. Or rather, it was, but it was all squandered by some of the parties involved due to laziness, wrong incentives, lack of knowledge, you name it.

              So yes, I’ve never used CP/M, but I’ll believe you in an instant if you say it sucked. But the present somewhat sucks too. There must be a better, more minimal way of doing things. I feel these kinds of dreams and fantasies are an outlet for that frustration and, at the same time, can be a productive way of trying to create something better.

              1. 3

                I’m not immune to the appeal of “retro” either. Half the reason I’m interested in embedded systems is because they remind me of the OS-less limited systems of my adolescence, like the Apple II. There’s an appeal to a system simple enough that you can understand it all (at least the software layers, not the hardware!)

                A lot of stuff is too big or slow. I think that’s because the Web took over to such a degree that people started using browsers and client/server for everything. So I have a 100MB program that’s really a web browser running a super-high-level emulated program that still delegates most of the work to a bunch of faraway servers. That’s what I think is broken. And we don’t need to go back to primitive computers or abandon modern user interfaces to fix it.

            3. 4

              I have trouble seeing how the first half of this article connects to the second. The first half is something I think about a lot: computer-as-appliance instead of a disposable toy. The second half is fussing around with very old, irritating technology that I don’t think I could convince anyone in my family to ever use. I don’t think you could even convince me to use CP/M, and I like that sort of thing!

              If you want this 100 year heirloom computer, it should be pretty simple for a regular person to understand and use. It should be good at doing things regular people want: file storage, email, nice writing software, etc. These are things going ultra-retro will not allow you to do.

              1. 2

                Part of this is about drawing on the past to build something for the future, instead of drawing on the past to build for the past. I wouldn’t get stuck on CP/M; it’s the third OS I’ve tried in this space this year and the first that hasn’t been a complete failure.

                It should be good at doing things regular people want: file storage, email, nice writing software, etc.

                So just to be clear on a few things about this particular implementation:

                • File storage is handled by an SD card wrapper using SPI, so as long as whatever device at the other end speaks the same language it doesn’t matter. This was what I tried to reference with adapters. I had originally made reference to 3D-printed magnetic filament and synthetic DNA-based storage, but Substack has space constraints so they were cut. The FS is actually FAT32, so unlike a regular CP/M system you can transfer files back and forth (a rough sketch of the FAT32 side follows this list). Think of this as more of a distraction-free space that isn’t always connected but can interact with the outside world.
                • Email should be possible; there’s a tool called CRR for offline QRK and CRS archive reading. These are mostly used with BBSes for things like Fidonet. I haven’t had time to look into setting anything up to permit interaction. The ESP32 will do Wi-Fi connectivity and I have BBSed from it (and it is great fun), but I’m not sure I’d want a permanently nailed-up mail client.
                • WordStar 4 is excellent. Don’t ask me, ask George R. R. Martin. I wrote some of the Retro Zer0 manual on it.
                • Also other good stuff: SuperCalc 2 really was quite impressive to use, and there are lots of programming languages, from ANSI C to Lisp, even an EMACS port.
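
                (To make the FAT32 point above concrete - this is not the wrapper described in the piece, just an illustration - here’s a minimal sketch using ChaN’s FatFs, a library commonly paired with SD-over-SPI on microcontrollers. It assumes a board-specific diskio layer that isn’t shown, and the filename is made up.)

                ```c
                /* Sketch of the FAT32-on-SD idea using ChaN's FatFs.
                   Assumes a board-specific SPI diskio layer
                   (disk_initialize/disk_read/disk_write), not shown here;
                   the filename is illustrative. */
                #include "ff.h"

                int append_note(const char *text, UINT len)
                {
                    FATFS fs;
                    FIL   file;
                    UINT  written = 0;

                    if (f_mount(&fs, "", 1) != FR_OK)        /* mount the default drive */
                        return -1;
                    if (f_open(&file, "NOTES.TXT", FA_WRITE | FA_OPEN_APPEND) != FR_OK) {
                        f_unmount("");
                        return -1;
                    }
                    f_write(&file, text, len, &written);     /* append the note */
                    f_close(&file);
                    f_unmount("");
                    return (written == len) ? 0 : -1;
                }
                ```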

                Part of the issue (and I cover it in the piece) is that we have to be prepared for these systems to be offline for years if not decades, and we have to be prepared for not everything to be functional when switched on. Supporting screens are great; needing a screen with a specific interface in order to function is a single point of failure. That might be fine for some people, might not be for others. That’s the whole point - to explore different (and hopefully more sustainable) ways of doing things.

              2. 3

                I’ve been writing something adjacent into a sci-fi story that’s been in my WIP folder for a year+ now: i.e., after collapse, what kinds of hardware will be sufficiently prevalent in electronic scrap, warehouses of unshipped consumer electronics, etc. to actually build a standard information appliance from? What would that appliance do, and how would that be different from our current computing work? How would person-to-person communication work with such hardware? How many people would share a computer/how many would need to contribute to its upkeep?

                It’s a fun thought experiment on its own, and has led me to some interesting places thinking about disaster-resilient computing, community networks, etc. MCUs, simple serial protocols, and latency-insensitive P2P networks seem like a good starting point, but bulk storage that doesn’t fail completely under a couple of years of normal I/O workloads still feels like a sticking point.

                If you can’t manufacture new flash chips and you’re not running on highly-connected devices amenable to continuous storage replication, how are you going to keep data around longer than a handful of storage MTBF intervals? (I have some ideas about this based loosely around the Git patches-over-email model, but I’d love to hear about others that are better.)

                1. 15

                  After a collapse, most people’s needs wouldn’t be computation but direct survival. Should someone have the desire/necessity, realistically it’s going to be whatever PCs they can find, probably running Windows, without an internet connection. Most developers are soft targets that will probably die in a collapse, and there aren’t many of them. The simplistic embedded targets that people talk a lot about as being useful “post-collapse” require a lot of attendant infrastructure: soldering irons, computers with input comfortable enough to assemble and flash a target, someone who actually knows Z80/AVR/whatever assembly (or a manual and the luxury of time to study it), etc.

                  This also assumes there wasn’t some EMP-like event.

                  tl;dr: Post-collapse computing is masturbation. People would be better off looking for bread and water, and it’s not a world I’d want to live in or think about living in.

                  1. 2

                    Virgil Dupras has done lots in this space. He talks about the two stages of collapse here. What you’re calling post-collapse computing here is still part of the collapse. Post-collapse computing is about re-establishing computing after the last computers are gone.

                    FWIW I think that if Dupras was looking to the first stage of collapse your point about masturbation would be spot on. But he’s not, he’s looking decades beyond that when the next culture tries to recover.

                    1. 3

                      I’ve read Dupras’ works, which prompted me to think about this before. It assumes that a future civilization will be looking at our relics for establishing their future; who knows what they’ll think about us? Whether we’ll have the foresight, or whether they will? It seems quite possible that they might also rediscover it independently, or possibly never develop the need for it. Maybe it’ll take years, at which point all of that decays. (Edit, forgot to add:) Dupras reads to me like it’s still masturbation, because it feels like he’s simply establishing his aesthetic preferences of the now and hoping he’ll be rediscovered, in the same way the Renaissance rediscovered classical science and humanities to establish taste.

                      I think a lot of nerds tend to think about collapse in the Neal Stephenson sense, when in reality, it would be in the Threads sense. Yeah, they were beginning to re-establish civilization in that world, but computers were quite a long way out. And it’ll be hell.

                      1. 1

                        Dupras reads to me like it’s still masturbation, because it feels like he’s simply establishing his aesthetic preferences of the now and hoping he’ll be rediscovered, in the same way the renaissance rediscovered classical science and humanities to establish taste.

                        I dislike the masturbation bit, but I do like the rest of your point. Maybe being rediscovered rings true in some way, but probably more from the point of view that, if this stuff all goes away, being rediscovered would be a good thing.

                        I think a lot of nerds tend to think about collapse in the Neal Stephenson sense, when in reality, it would be in the Threads sense. Yeah, they were beginning to re-establish civilization in that world, but computers were quite a long way out. And it’ll be hell.

                        I think this explains a lot about your view. I remember when Threads aired in the UK when I was a kid. Absolutely terrifying. We had these Protect and Survive booklets that were equally awful. They’re masturbation right there. If anything, COVID taught us that everything we were taught by TV and films about the world ending was wrong. The only thing I can see is that it’ll be weirder than anything we could imagine. On the point of it being hell… I’m not convinced that what we have now, in some ways, isn’t. But I think we come from very different places with differing views. I just hope you find a differing view as interesting as I did :D

                  2. 2

                    but bulk storage that doesn’t fail completely under a couple of years of normal I/O workloads still feels like a sticking point.

                    There’s a really interesting thought right there:

                    1. How would I/O [workloads] change after the collapse?

                    2. What are we prioritizing for processing post-collapse?

                    1. 2

                      Well, since you asked, I’ll offer the answer from my fictional scenario: “online” communication mostly happens over high-latency/async channels not entirely dissimilar from classic email or Usenet, though with authentication provided directly by peers rather than a central server. “Bulk” transfers happen via sneakernet, à la Johnny Mnemonic. :)

                      Local storage is still important, though. You need somewhere to capture your interactive/video tutorials and reference guides for basic agriculture, engineering, and medicine; we don’t have to go entirely back to the dark ages just because we lost cheap and ubiquitous access to YouTube and MOOCs.

                      Likewise, some important “end user” data (image tiles for maps, photos, and audio recordings) needs more than a few errant MBs on MCU-attached SPI flash, so where does that land? Access doesn’t necessarily need to be instantaneous, but it does need to be reliably available.

                      What you don’t need is ready access to TBs of online-first media. Netflix, TikTok, talking head news videos aren’t gonna make it through the cataclysm.

                  3. 3

                    Why use something as complicated and bloated as a PS/2 interface for mouse/keyboard? The previous RS232 or D-SUB connectors worked just fine and are much more universal. Standardising on RS232 would reduce the needed types of connectors drastically and even allow for networking devices together using the same plug. If the argument is that RS232 is outdated, how does that not apply to PS/2? It’s only a matter of time before that kind of hardware is unavailable and you have to migrate to a more modern USB connector/controller.

                    Eventually as you replace the components to keep the machine running you end up with the Ship of Theseus problem. Have you really designed a computer to last a hundred years, or have you just spent time replacing and upgrading the individual components as they failed to keep that old beater running?

                    The same thing can be said for the software. One compatibility layer could be swapped for another compatibility layer, and eventually you’re back to the Ship of Theseus thought experiment.

                    What you have invented is exactly the same as a regular computer, except that you have sourced all the components (virtual, soft- or hardware) yourself instead of having them supplied by a manufacturer that has vertical integration in the market.

                    1. 3

                      As I read the piece, it feels like this

                      eventually you’re back to the Ship of Theseus thought experiment

                      is a feature, not a bug. You’re spending time specifying the individual components so that they can be replaced and adapted and extended over a long service life. And even so that it can be done from easily salvaged components.

                      I thought it was a really cool thought experiment to imagine these machines on a much longer cycle than I usually do. Sourcing the components so that you understand them and can service them beyond the expected life of the manufacturer that has vertical integration in the market seems prudent if you’re hoping for this sort of useful life.

                      1. 2

                        Why use something as complicated and bloated as a PS/2 interface for mouse/keyboard? The previous RS232 or D-SUB connectors worked just fine and are much more universal.

                        Both chip series being looked at already provide UARTs, so minimal serial exists. Doing full RS232 requires more than just a D-SUB connector. Depending on what you’re connecting it to this might not be too bad (as RevK found connecting an ESP32 to an ASR-33), or it might need substantial circuitry to handle level shifting. Most of this is usually done with something like a MAX3232; I’ve found them unreliable with ESP32s. You could probably do something with 74xx chips, and I imagine they’ll be around for quite a while.

                        As for PS/2 being bloated, I’m not sure I understand what you mean there. To implement a proper D-Sub connector you’re talking 9 pins, with TX, RX, VCC, GND and a handshake loop as a minimum, or additional handshaking pins. PS/2 is VCC, GND, Clock and Data.
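
                        (To illustrate how little sits behind those four pins - this is a sketch, not FabGL’s implementation - the host just samples the data line on each falling clock edge and collects an 11-bit frame: start, eight data bits LSB-first, odd parity, stop. The clock-edge sampling is hardware/ISR work and is omitted; this only validates a captured frame, and the example scan code is the standard set-2 make code for ‘A’.)

                        ```c
                        /* Sketch: decode one captured PS/2 frame
                           (start, 8 data bits LSB-first, odd parity, stop). */
                        #include <stdio.h>

                        int ps2_decode(const int bits[11])
                        {
                            if (bits[0] != 0 || bits[10] != 1)   /* start must be 0, stop must be 1 */
                                return -1;

                            int value = 0, ones = 0;
                            for (int i = 0; i < 8; i++) {
                                value |= bits[1 + i] << i;       /* data arrives LSB first */
                                ones  += bits[1 + i];
                            }
                            if (((ones + bits[9]) & 1) == 0)     /* data + parity must have odd weight */
                                return -1;

                            return value;                        /* the scan code */
                        }

                        int main(void)
                        {
                            /* 0x1C, the set-2 make code for 'A', sent LSB first with parity 0 */
                            const int frame[11] = {0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1};
                            printf("scan code: 0x%02X\n", ps2_decode(frame));
                            return 0;
                        }
                        ```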

                        But if you wanted to implement D-SUB instead you absolutely could. If that’s what you wanted you could fill your boots. That’s part of the point of the piece.

                      2. 3

                        I read a book in which time travel was doable, specific to locations; a favored vehicle for it was the Ford Model T, because it was so basic that repairs would be easiest, no matter when in the history of the US you were.

                        At another point, I wondered how well an extendible NAS would fit, running as a platform for blogging software, image archives, etc.

                        1. 3

                          I don’t want to break it to these people, but the only thing remotely close to being specified well enough for this kind of longevity is Java. And Squeak. Nothing will ever kill Squeak.

                          1. 2

                            This post really captured my imagination, but the phrase “heirloom computing” brought something rather different to mind for me from what the author proposes. To me, an heirloom must be beautiful and a timeless achievement of craftsmanship, so that people want to use it and preserve it for 100 years.

                            Seems to me there are two ways to build a 100-year computer:

                            • Make it extremely easy to repair and replace the parts, so your “ship of theseus” still functions 100 years later. Reminds me of a safety razor. Works fine indefinitely, as long as you keep replacing the razor blades regularly.
                            • Build something indestructible, with as few moving parts as possible, designed to work as originally constructed for 100 years. Reminds me of a straight razor, which if you dispense with a folding handle can have zero moving parts, and works indefinitely as long as you keep it sharp.

                            Both are significant design challenges, but I think building something where the original object is designed to function for 100 years is the more daunting. I struggle to conceive of a computer that could do it; it would be so different from current designs.

                            I think it would be easier to tackle peripherals first. I can conceive of a 100-year heirloom keyboard, for instance. It might use Hall effect switches, as the Keystone by Input Club intends to, featuring magnetic sensors instead of physical electrical contacts to last for billions of keypresses, theoretically ~20 times longer than conventional mechanical switches. (But I must admit that I didn’t back that crowdfunding campaign, preferring conventional mechanical switches that are easier to acquire/replace.) People still love well-crafted antique typewriters, and they still would work just fine for writing today. Datamancer “steampunk” keyboards look like they achieve the kind of beautiful craftsmanship I would want from an heirloom keyboard, but I’m not sure they have the durability or long-term planning required. (Can’t say either way, never having owned one.)

                            When I asked my girlfriend to name the most complex tool from 100 years ago that is still relevant, she suggested a sewing machine. When she was growing up, her family had multiple antique sewing machines that they used, including one built into a table so that the sewing surface was flush with the tabletop. The trick will be guessing what tasks will still be relevant in the future and designing a simple but reliable workflow that will remain usable.