1. 49
  1.  

  2. 17
    Joy

    (…) computers are more than just a means to an end.

    The problem is that these “modern” computers are also inferior to the older ones in the “getting things done” (and I don’t mean the GTD methodology) aspect. Nowadays, you can rarely just sit down at the desktop and do your thing the way you wanted to. Even the tools and applications designed to do that thing often produce different, less exact results, or have artificial limitations and annoyances that weren’t there back then.

    1. 15

      There’s something that a friend of mine and I grumpily called the WEBA point (from Why Even Bother Anymore) way back. I promise it’s somewhat neat and relevant.

      Every product line in every industry comes to a point where it’s so good that, if you only take into account what you see in the first five minutes of usage, it might as well be finished. I mean yes there are always bugs to fix and new features to add, from ASLR to sandboxing engines and from new 3D graphics APIs to low-latency audio features, but the “common” parts that everyone uses, the ones that virtually all users care about, might as well be done. At that point, it feels like there’s no point in “upgrading” anymore – it’s already good.

      As a product starts approaching that point, various teams start being pressured into justifying their existence. That’s e.g. how Windows has twenty years’ worth of useless Find File… dialogs in spite of having had a functional, reliable one at one point. Because you can’t just go to your boss and say you know what, boss, the thing we had last time was really good, why ruin a good thing? How about we just fix the bugs we found and add these neat features? How’s your boss going to get a bonus for that?

      As it goes past that point, the price of coming up with an even better version becomes too high to justify keeping the whole machine around, and companies start looking at alternative revenue streams and packaging solutions. Visible changes are prioritised because they make it easy to sell new versions as fresh, even when they’re half-baked – that’s how we ended up with Windows 10’s design, which is so dysfunctional that it probably caused more of a resurgence of 1990s nostalgia than the latest Star Trek TV shows.

      Now there is some inherent conflict to this – I wish my Windows 10 machine had Windows 2000’s functional UI instead of this nondescript thing that mostly consists of whitespace, but I can’t say I miss plugging in the network cable, going to the kitchen to microwave a burrito, and coming back to a freshly wormed computer that’s going to automatically shut down in 90 seconds.

      Overall I guess we’re in a better place. But based on how 2000 looked vs. 1979 back in 2000, I for one hoped we’d have been in a far better place in 2021 :).

      1. 0

        Not gonna go into the whole debate but Windows 10 + ClassicShell is really good in my opinion.

        I have used Win since 95 and this is pretty darn good. Stability, speed and performance are off the charts in comparison.

        1. 3

          Yeah, I mean, technologically, Windows 10 is by far the superior version of Windows. But I personally just can’t stand the user interface… which makes me sad, because Windows 7 is probably my favorite modern operating system.

          1. 2

            I have a Windows 10 machine I use for work with that setup. It’s definitely good and, as I mentioned above, definitely better than Windows 95 ever was. I don’t wanna go back to that. But as far as the UI goes, not even ClassicShell can save you from a lot of things (flat, thick titlebars need registry hacks to get them to a usable size, and they’re still flat; Settings dialogs are messy and lack all sorts of things; the Search feature – between marketing-driven crippling and legit difficulty – couldn’t find a document if it were the only thing on the hard drive; you can’t use wallpapers with both light and dark colours because the icon labels have no background; and so on and so forth).

        2. 8

          Open Source software mitigates or removes that: GNU Emacs is GNU Emacs in all important respects, not “Visual GNU Emacs .Net++ 2k21.8 Now With Ribbon Bar” or “GNU POWER iMacs Butterfly Keyboard” or whatever it would have turned into by now under the stewardship of some proprietary company obsessed with chasing trends off a cliff.

          1. 22

            I think this is valid for some very conservative projects like Emacs, but if you look at things like GNOME or Firefox, I think it’s clear that open source rarely succeeds in protecting the user from these things in practice.

            1. 4

              I’m not sure about “rarely” but I do take your point; I will say, however, that Open Source gives me the option to not use stuff much more effectively than closed source does, like how I can stick with the very stable Window Maker and not have to switch to Gnome and its UI treadmill regardless of which OS and distro I use.

              1. 15

                It really varies a lot. Like when GNOME changed things in unpopular ways, forks like Cinnamon and Mate happened, because the codebase was comprehensible and maintainable. But when Firefox made unpopular changes, you ended up with Palemoon, which has pretty severe security issues due to the complexity and incomprehensibility of the codebase.

                1. 1

                  Yeah, some changes are inevitable. I also stuck to the very stable WindowMaker for years (some of my patches ended up in the master branch, too) but sooner or later you have to use modern applications – like Firefox – and sooner or later you stumble into GTK3 land and it’s just not worth it anymore.

                2. 1

                  I think this uncovers the important distinction: that the changes occur in free software because they are changes that a community around free software wants, not because they are changes that an executive committee trying to drive services revenue wants. Sometimes those are motivated by commercial reasons, as with “open source version of X” projects where X changes its feature set or interface, but oftentimes they are not, as with emacs not having its butterfly ribbons.

                  And I get the same from some retrocomputing (or at least old computing) projects. In my Amiga life, I can use a real, old Amiga and know that C= aren’t going to make me subscribe to Commodore One fitness tv music plus just to get new features, because there aren’t new features! But I can also use AROS and know that it isn’t going to pull in a weird direction, and is going to work on my newer computers (obviously Amigas are old enough that I can emulate them on newer computers at better than full speed anyway).

                3. 2

                  Yeah. I don’t see how this has anything at all to do with general development models and/or software licenses.

                4. 9

                  I strongly disagree - it’s just that FLOSS is subject to the same problem via a different set of incentives. jwz famously described it as The CADT Model:

                  This is, I think, the most common way for my bug reports to open source software projects to ever become closed. I report bugs; they go unread for a year, sometimes two; and then (surprise!) that module is rewritten from scratch – and the new maintainer can’t be bothered to check whether his new version has actually solved any of the known problems that existed in the previous version.

                  1. 3

                    From the outside that’s not the impression one gets. For the longest time there were “GNU Emacs” and “Lucid Emacs” competing for mindshare; now it’s “Doom Emacs” versus “quake Emacs” or something similar. Vanilla (GNU) Emacs doesn’t project an impression of “finished and ready”; it comes across more as a toolbox and an enormous time sink to configure just right.

                5. 17

                  I disagree on one point:

                  Quite literally, the only way to use HyperCard is to get a hold of an old Mac – or emulate it, but emulation always falls short of the real deal

                  Emulation is often better than the original. I ran OPENSTEP 4.2 for i486 in a VM for a while. Most 486 systems were a bit underpowered for what OPENSTEP really wanted, but on a 1GHz machine it was amazingly responsive. The emulated display was also higher resolution and had better colour depth than most contemporary hardware.

                  Somewhat more obscurely, the best spreadsheet that I’ve used for keyboard navigation is the one that came with the Psion Series 3a (I had a Series 3 with the spreadsheet on a ROM cartridge). There’s a Series 3x emulator for DOS and you can run it in DOSBox. Most of the Psion applications were intended to be portable across the entire range and so didn’t hard-code anything about the screen size. The emulator lets you run at 640x480, whereas the 3a had a 480x160 screen, so you get a huge amount more screen real-estate. And, yes, I do find it a bit depressing how much more useable a spreadsheet in a late ’90s emulator for a mid-’90s platform running on an early 2000s emulator for a late ’80s OS is than anything more recent.

                  1. 8

                    If you like that spreadsheet interface, you might enjoy visidata. I recently picked it up because I needed a fast way to deal with very large CSVs, and it reminded me in a very good way of keyboard-driven TUI spreadsheets from that era.

                    1. 4

                      I never used TUI spreadsheets but I love Visidata. I like doing data analysis, and Visidata is a fantastic way of doing some easy exploratory analysis at a glance before hammering at the data in my actual environment of choice.

                      1. 4

                        For the record, the Psion UI wasn’t a TUI; it was a proper GUI, albeit keyboard-driven.

                        1. 1

                          I think I’m the one who made it sound as though it was. I only meant that it felt like a TUI (again, in a very nice way) to me.

                    2. 3

                      Hm, in terms of performance and, to a limited degree, usability, I think you’re right. But there’s more to a computer than that.

                      Most obviously, old user interfaces were designed for CRT monitors, and you just don’t get the same experience out of a flat-screen monitor. Take the screenshot of HyperCard that I included in my blog post, for example. It really doesn’t convey how HyperCard looks on my iMac G3. It looks way too sharp, and the colors are a bit off.

                      I think you can get a lot of mileage out of emulation, but it completely misses the hardware, which is half of the picture. It can never truly convey what it was like to use the computer.

                      1. 2

                        I think you can get a lot of mileage out of emulation, but it completely misses the hardware

                        I’m not sure that’s fair. Emulating hardware perfectly is very hard, but most emulators I’ve experienced are at least trying. On the other hand, some people /want/ crisp and sharp, even if it’s not period accurate. I’ve also seen more than my fair share of poor “scan line” implementations that don’t resemble anything I saw in my time on CRTs.

                        1. 2

                          No, that’s what I mean. Emulators emulate processors, and they succeed reasonably well at that, but they rarely emulate other aspects of the hardware, such as the mouse, keyboard and monitor. And when they try, they fail, just as you said, because software just can’t emulate certain aspects of the physical world.

                        2. 2

                          Most obviously, old user interfaces were designed for CRT monitors, and you just don’t get the same experience out of a flat-screen monitor.

                          Other than nostalgia, though, why would someone want this? CRTs were terrible. I understand nostalgia as a side gig, I actually own a decent collection of vintage computers (late 70s to early aughts), but staring at (and having to fiddle with) a CRT all day would drive me nuts.

                          1. 2

                            Well, I’d say it’s not as simple as that. Modern monitors are great for modern operating systems, but I wouldn’t want to use one with OS 9, because the text will look too sharp. Sharper isn’t better – especially if the user interface in question has been designed with lower sharpness in mind. It will arguably be displayed incorrectly on a flat-screen monitor.

                            Also, I don’t have the same experience with CRT monitors that you have. As a specific example, the one built into my iMac works fine out of the box, no fiddling required, even twenty years later.

                          2. 1

                            On the other hand, an old 21” CRT is way cheaper, easier to get your hands on, and generally easier to repair, than an old Mac. The iMac G3 is a bit more recent so it’s not as obvious, I guess, but when it comes to ‘90s-era software and earlier, there are parts of the experience – like 30 seconds’ worth of disk thrashing – that you only miss for a few minutes.

                            (Edit: albeit, I’ll give you that, 30 seconds’ worth of disk thrashing brings me great joy on a Saturday evening :-) )

                            1. 5

                              Yes, I generally agree. A good thing about buying 80s/90s computers today, though, is that you can get a hold of things that would have been far out of your price range at the time. You don’t have to settle for the average of whichever time period you’re interested in.

                              1. 2

                                Based on your blog I think you already know that, but onlookers may be disappointed to learn that it depends a little on what your hobby is, now that retrogaming is big business. A while back (I don’t know if it’s still the case) C64s sold for outrageous prices – hell, C64 parts sold for outrageous prices – despite not really being collectors’ items.

                                Things are a little better for x86 beige boxes though, yeah, the kind of systems I was drooling over in magazine ads can now be acquired for slightly pricier than average peanuts :-).

                                1. 2

                                  Yes, that is definitely the case. I’m also lucky to be interested in the more boring 90s computers! :-)

                                  1. 2

                                    It’s not just C64s anymore. Apple IIs are regularly going for multiple hundreds of dollars. You can’t get a VIC-20 for less than $100! The only cheap things from the 80s are relatively unloved boxes, like the TI-99/4A and Timex Sinclairs (the American ZX81). Even those are rising faster than inflation.

                                    90s stuff is all expensive now too, because of the cap plague and the CMOS batteries all leaking and destroying parts and PCBs.

                                  2. 2

                                    Just note the inevitable tension between retrocomputing for an authentic period experience, and retrocomputing as a romantic exercise in self-delusion about what the past might have been.

                                    I have a Pentium 3 for reminiscing about a late 90s PC experience. The device is a few years newer than truly late 90s, and it has a 2005 GPU which means it can run NT 4 in 1080p on a flat panel, which people didn’t do in the 90s.

                                    Recently I thought it died, and looking on eBay, $100 now gets a drop-in replacement board with TWO processors. It may be cheap and available, but it just increases the gap between my fantasy 90s and the real 90s.

                              2. 1

                                If you have a link on how to run Psion stuff under emulation I think my dad (hard core Psion fan) would be very interested!

                                He most likes the “freeform” database application which doesn’t seem to have been ported/replicated in later software.

                                1. 4

                                  I think this is the emulator I’m using. It works great under DOSBox (just unzip it into the DOSBox shared directory and run it). There’s a .ini file that tells it what screen resolution to use. I think it will go up to whatever DOSBox is using, but I’ve not tried it above 640x480. You can also use DOSBox’s scaling to give a bigger window for DOSBox on a more modern monitor.
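                                  In case it helps anyone else set this up, here’s the rough shape of the dosbox.conf [autoexec] section I’d expect to use. The directory and executable names are placeholders, not the emulator’s actual file names.

                                  ```
                                  [autoexec]
                                  REM Mount the host directory you unzipped the emulator into as drive C:
                                  mount c /path/to/psion-emulator
                                  c:
                                  REM Placeholder name; run whatever .EXE the emulator archive actually ships.
                                  S3EMU.EXE
                                  ```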

                              3. 9

                                For me “repairability” (and availability of parts, both used and third-party) is one of the strongest arguments to use “older” hardware. Though “old” in this case is far from retrocomputing, more in the “not upgrading my 2010-2012 era systems anytime soon” sense. I also do like retrocomputing, but more for the fun and interest of it, not for production work.

                                My daily drivers are a set of T420’s which I’ve opened up, swapped parts from, exchanged things, flashed the BIOS, installed a new CPU (it’s one of the last models with a socketed CPU), etc. Parts (and full replacement units) are plentiful and cheap, and it’s actually possible to do things. Speed is alright for my workload (mostly writing/compiling C and “normal” business email/web stuff, some VMs), and in my opinion, changing to a newer model would lose me more in convenience and opportunities than it would gain me. Not to mention it also saves plenty of money and, in a more remote sense, the environment by preventing e-waste.

                                I have little interest in my computers being “super-slim” at the cost of not being able to swap hard disks, RAM or batteries, and that seems to be what the majority of current offerings are optimizing for (at least in the mobile computing space).

                                1. 3

                                  Definitely agree. My “best” computer is a custom-built PC from around 2010. A decade later, it still plays most modern video games without many problems, and I have the freedom to upgrade it gradually in the future, if I ever need to.

                                2. 8

                                  I’ve done all my retrocomputing in software. For instance, I spent a few months in 2008 or so, teaching myself TOPS-20 and the Macro-10 assembler. I used emulated PDP-10s for that.

                                  It’s a nice way to get an idea of “roads not taken”. TOPS-20’s command line interfaces were some of the best. The primitive for building a humane CLI was baked right into the OS. It was a system call (a JSYS in DEC terminology). You could access the command line parsing facilities directly from an assembly language program. For that matter, Macro-10 seemed to have more in common with a high-level language than it did with typical macro assemblers made either before or since.

                                  When it comes to retro computing, the inevitable question is how to access the web. The answer is that it isn’t always possible.

                                  My first computing device was essentially a PDA. It had a whopping 640 KiB of storage provided by battery-backed RAM. It also had I/O hardware and an RS-232 serial port. I regularly used to browse gopher and the web with that device, by dialing in to a shell account on a more powerful machine. Battery life was on the order of 24 hours of active use between charges.

                                  The web was supposed to be more or less output-device-independent. If that vision had been kept, nobody would really have to ask that question.

                                  1. 2

                                    I do software-only retrocomputing, too; I even like old DEC systems, although I prefer ITS to TOPS-20. ITS has the “old pair of shoes” kind of user-friendliness: Once you’ve gotten them on and molded them to your feet, there’s an unparalleled comfort there, even if they look really funky to outsiders.

                                  2. 7

                                    Now I just have to point to The Thirty Million Lines Problem by Casey Muratori: old OSes were apparently better than the current ones. Why?

                                    Assuming it’s not just Rose Tinted Glasses nostalgia, there is one big cause for this: the end of competition. In the early 90’s, which I remember from playing games on my Dad’s Atari ST, there was one OS per computer game. Everybody wrote an OS as part of shipping a program. The hardware itself might have some bugs (not that many, really), but those were stable bugs, so if it worked on my Amiga, it worked on your Amiga.

                                    At some point, the number of OSes started to dwindle. The first driving force for this, I believe, was the advent of multitasking. Programs had to cooperate with one another in some way to coexist in the same computer. So we started to agree on an OS. Then the OS started abstracting the hardware, providing services, and taking a more and more important role (Unix spearheaded that trend decades before we got Windows 95 on our improved IBM PC clones).

                                    Then hardware exploded. Lots of USB devices, graphics cards, and more. They weren’t specified, let alone standardised, so vendors ended up writing drivers. We have so many of them now that kernels have become giant codebases spanning dozens of millions of lines of code. To the point where right now, writing an OS for serious consumer use has become flat out impossible.

                                    Hence the end of competition. We have NT, Linux, and XNU (macOS) on the desktop, and variants thereof on palmtops, I believe. Three kernels to rule them all, and in the darkness bind them.

                                    We could have our old world back. Well, we probably don’t want to give up on multitasking, even for computer games (some people are streaming, and as an Elite Dangerous player, I need third-party tools). The condition for that is standard, simple, stable hardware. It can be crazy complex under the hood. It can have a hairy instruction set like x86-64’s. I just need a buffer to write to, a buffer to read from, and the relevant interrupts. The data I need to send is allowed to be complex, as long as its format is open and stable (just like x86-64). With that, writing a 20K-line kernel would be possible again. A single developer could do it, and various kernels would emerge, just like the library OSes we see popping up around Xen.
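                                    To make the “buffer to write to” idea concrete, here’s a toy sketch of what drawing to fully specified, stable display hardware could look like. The address and layout below are entirely made up for illustration; nothing here corresponds to real hardware.

                                    ```c
                                    /* Hypothetical framebuffer at a fixed, documented address: pixels are drawn
                                     * with plain stores instead of going through a multi-million-line driver stack. */
                                    #include <stdint.h>

                                    #define FB_BASE   ((volatile uint32_t *)0xC0000000u)  /* made-up MMIO address */
                                    #define FB_WIDTH  1280
                                    #define FB_HEIGHT 720

                                    static void put_pixel(uint32_t x, uint32_t y, uint32_t xrgb)
                                    {
                                        FB_BASE[y * FB_WIDTH + x] = xrgb;
                                    }

                                    static void clear_screen(uint32_t xrgb)
                                    {
                                        for (uint32_t y = 0; y < FB_HEIGHT; y++)
                                            for (uint32_t x = 0; x < FB_WIDTH; x++)
                                                put_pixel(x, y, xrgb);
                                    }
                                    ```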

                                    Thing is, it’s up to hardware vendors. What I’m asking for here is an open game console: serious hardware that people can program directly to, without the usual locks. Casey argues that making such hardware, possibly as a small consortium of big hardware vendors, could pay huge dividends for the hardware company, as well as for everyone else. Except hardware vendors don’t act like that’s true. They may be risk-averse, but that’s not all of it: making an open game console is trivial – take your closed console, remove or disable the DRM, and publish the specs. Okay, it might be more complex than that when your graphics chipset comes from NVIDIA and they gave you a binary blob to drive it instead of specs. Anyway, vendors could easily open their consoles if they believed it would net them more money. Why don’t they?

                                    Sadly, I don’t know.

                                    1. 1

                                      Your vision isn’t far from Steam Machines with Vulkan.

                                      1. 1

                                        It’s close, with a couple caveats.

                                        First, Vulkan is still an API, not an ISA. It’s wonderful how Vulkan manages to be that much closer to the hardware, but there still needs to be some big-ass driver behind it. Validators make it easier to discriminate between driver errors and application errors, but it’s still a far cry from “I can trust this piece of silicon to perform as specified”. (Of course, if we had an actual standard ISA, the first thing to do would be to implement Vulkan on top of it so it could support older applications.)

                                        Second, Steam Machines didn’t take off. I don’t know why. I believe Valve mostly used them to warn Microsoft that they’d stop depending on Windows if they really had to, instead of actually trying to push them into the mainstream, but still, I expected (hoped for, really) more sales.

                                        Note that to fully achieve the dream of fully specified hardware, the Steam machine would have needed active collaboration from a GPU vendor at least.

                                        1. 1

                                          I can’t recall when I got my Steam machine, must have been over five years ago, and it’s still kicking with very few upgrades. Anyway, SteamOS is pretty much abandonware.

                                          Back in the day you could install or easily build Debian stuff into its “desktop mode”, which is this shitty Gnome thing that works just well enough to launch CrossOver or something else.

                                          That’s becoming impossible and the next version of SteamOS doesn’t look like it’s happening. This might be because it’s still an ok build target for games, but I’m sure a regular Debian booting directly into Steam would do more for you.

                                    2. 5

                                      Simplicity

                                      Experiencing how simple things could be is important for questioning the status quo.

                                      …or sometimes… to feel young again!

                                      1. 5

                                        In case anyone wants to know more about the HyperCard he mentioned in the article, here is an archive video link.

                                        1. 4

                                          For those who want a modern version of HyperCard, I urge you to take a look at LiveCode. It is a modern take on HyperCard which can also distribute standalone applications to macOS, Windows, Linux, Android, and iOS. It runs on macOS, Linux, and Windows, so you’re kinda covered even if you don’t have a Mac. It used to be able to import HyperCard stacks, but I think that feature was removed some time ago. I wrote a post called LiveCode is a modern day HyperCard a couple of years ago that might interest people here.

                                          Oh, and LiveCode has a FOSS GPL version at LiveCode.org for those who want to keep their feet firmly in FOSS.

                                          PS: In that article I say I work for LiveCode, well, I don’t work there anymore even though we’re still friends.

                                          1. 4

                                            Thanks for the tip. To be truthful, I haven’t tried LiveCode, and while I’m happy that people are trying to offer modern HyperCard alternatives, I haven’t come across any such program with the level of polish that HyperCard had, in terms of the user interface.

                                            Another thing about HyperCard that modern incarnations fail to recreate is that HyperCard stacks were something of a mix between documents and applications, whereas SuperCard, LiveCode and similar software seem entirely focused on the application aspect. That’s a worthwhile aspect of it, to be sure, but it doesn’t capture the totality of what made HyperCard so unique and useful.

                                            1. 1

                                              If you tell me what document features you’re missing, or that are important to you, I can try to write a post showing if they are still present in LiveCode.

                                              1. 2

                                                Yeah, sure, well, it’s not really the features that I miss. I’ll try to explain it.

                                                What’s unique about HyperCard, in my estimation, is that the interface is paper-like. It naturally encourages users to carry over things that they would do on paper to HyperCard. It bridges the gap between paper and computer in a way that no user interface has done since. Back in the day, companies put their entire catalogs in HyperCard, because paper-based material translated very naturally into HyperCard stacks.

                                                Looking at LiveCode, it seems entirely focused on building graphical applications, but I don’t think HyperCard is primarily a GUI programming environment, it’s more like interactive paper. Sort of like the web, but much more flexible.

                                                I think the following question makes the distinction quite clear: Can it be printed to paper? For most LiveCode applications, I suspect the answer would be no. But the contents of HyperCard stacks are generally a pretty good fit for paper.

                                          2. 4

                                            From a design perspective, I have a few in this list that have capabilities today’s systems don’t. Examples: clustering reliability of OpenVMS systems; productivity and whole-system debugging of LISP machines; predictability and hang-resistance of RTOS’s like QNX/INTEGRITY; self-healing of MINIX 3 (esp. drivers); loose coupling with easy integration of components in systems like Genode.

                                            I’ll add OpenVMS making all languages use the same calling conventions and such. On modern systems, you often get the best performance or integration using C’s. A language doing it differently then has a mismatch that can affect performance or correctness. Knocking that out encourages using the best tool for the job from the platform up. The .NET CLR took a page from their book at the VM level. Then again, its VM sits on top of native languages, which illustrates the point.
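                                            To illustrate that mismatch with a made-up example: a library exports its entry point with C’s calling convention and string representation, and any language that stores strings differently has to convert at the boundary before calling in.

                                            ```c
                                            /* Sketch of a C-ABI entry point (the function is invented for illustration).
                                             * Callers written in languages with length-prefixed, UTF-16, or GC-managed
                                             * strings must first marshal to a NUL-terminated byte string; that boundary
                                             * conversion is the mismatch a common calling standard avoids. */
                                            #include <stddef.h>

                                            size_t count_words(const char *text)
                                            {
                                                size_t words = 0;
                                                int in_word = 0;
                                                for (; *text != '\0'; ++text) {
                                                    if (*text == ' ' || *text == '\t' || *text == '\n') {
                                                        in_word = 0;
                                                    } else if (!in_word) {
                                                        in_word = 1;
                                                        ++words;
                                                    }
                                                }
                                                return words;
                                            }
                                            ```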

                                            eCos lets you configure the OS to leave out every unnecessary component at the kernel level, mainly for size optimization. That would help with reliability, security, and (in clouds) transfer costs.

                                            Re hardware

                                            I miss the reset switch on old machines that sat next to the power switch. Sometimes one would flush out problems. Other times I needed both. I’ve come to believe it was peripheral devices that needed a hard reboot, whose invisible failures propagated to higher-level, visible functions. I like power buttons that work. I also don’t want to hold one for 10 seconds.

                                            Audio knobs that use analog or at least reliable circuits. Too many times something gets really loud and I have to turn the volume down. Soft buttons were often unreliable at that exact moment (high system load?). I could grab and twist a physical knob in a split second, and it worked every time.

                                            Repairs/customization. Let’s say the system components are pluggable with standardized, commoditized interfaces. I can replace faulty components with new or used ones cheaper than a whole system. Even neater, modders can extend systems to make them do what they previously could not.

                                            Instant or rapid startup. This was a special feature rather than a common one. The system would turn on to something usable instantly, or so fast you didn’t lose what was in your head while waiting. Sleep mode got rid of this problem for most of us. I’d still find it useful for debugging, or just for improving reliability/durability with frequent restarts: you turn machines off at a good time to reduce the odds they turn themselves off at a bad time. QNX still advertises this as a differentiator in applications like automotive and entertainment systems.

                                            RAM-based HD’s. You see those memory hierarchies showing how slow disks are. Not if they use RAM! Many applications. Work best when they have onboard flash to back up volatile state plus time for an initial load.

                                            NUMA architecture. Saved the best for last. Companies like SGI, in systems like the SGI UV, had the ability to chain motherboards together over a memory bus that transferred gigabytes a second at microsecond latencies and maintained consistency. The first benefit was hundreds of CPUs, up to terabytes of RAM, and many graphics cards. A benefit people forget is that NUMA turns parallel programming from distributed, middleware-heavy programming into something like multithreaded programming where you just watch out for data locality (i.e. ops and the data they use stay on the same node). A vast, vast improvement in usability for programmers wanting to scale.
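                                            As a rough sketch of that “ops and data stay on the same node” discipline on today’s hardware (assuming a Linux box with libnuma, linked with -lnuma; node 0 is an arbitrary pick for illustration):

                                            ```c
                                            /* Minimal sketch: pin the calling thread to one NUMA node and allocate the
                                             * data it touches from that same node, so every access stays node-local. */
                                            #include <numa.h>
                                            #include <stdio.h>

                                            int main(void)
                                            {
                                                if (numa_available() < 0) {
                                                    fprintf(stderr, "NUMA is not available on this system\n");
                                                    return 1;
                                                }

                                                int node = 0;                       /* arbitrary example node */
                                                numa_run_on_node(node);             /* run this thread on that node */

                                                size_t n = 1u << 20;
                                                double *data = numa_alloc_onnode(n * sizeof *data, node);  /* node-local RAM */
                                                if (data == NULL)
                                                    return 1;

                                                double sum = 0.0;
                                                for (size_t i = 0; i < n; i++) {    /* ops and data on the same node */
                                                    data[i] = (double)i;
                                                    sum += data[i];
                                                }
                                                printf("sum = %.0f\n", sum);

                                                numa_free(data, n * sizeof *data);
                                                return 0;
                                            }
                                            ```

                                            On a big multi-socket, SGI-style system the same pattern applies, just with more nodes to keep your threads and their data paired up on.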

                                            So, those are some of my favorite things. Hope that helps some people.

                                            1. 3

                                              I’m a big antiques nut and would absolutely love to get into the retrocomputing scene… too bad my local antique shops don’t ever have electronics/computers. Have found some pretty cool DEC tradeshow buttons though

                                              1. 2

                                                I believe “Frugality” would belong here. You need to squeeze everything into a tiny space when dealing with kilobytes of RAM and minimal storage. The care taken writing games in Assembly for older devices is an art in itself.

                                                1. 1

                                                  I’d emulate and keep using my modern machine.

                                                  1. 1

                                                    I’ve been wishing for a less complex system for some time. The towering height of any Electron application stack makes me expect it to deteriorate and fall over quickly. I’d rather have a short, wide system, so that any new failure or obsolescence would result in less total loss of function. And then I read:

                                                    I think the only solution is to stop expecting every computer to be general-purpose.

                                                    In other words, slim down the complexity horizontally, not just vertically. I don’t know if I’d say it’s the only solution, but it’s one I had not been considering. Thanks for the post.

                                                    1. 1

                                                      My equivalent is Lotus Agenda; there’s been nothing like it since. I was an avid power user, and in a similar way, I went to the extreme (?) of installing a DOS-based VM to run it for a while.

                                                      1. 1

                                                        Because they’re legitimately better, for a wide spread of criteria.