1. 17
  1.  

  2. 13

    School: “I’m going to stay up all night to figure out this problem”

    Work: “just go rebuild the universe from scratch in 10 minutes using that script we wrote so we can hit the bar for happy hour at 5”

    1. 8

      it was common for individual systems to evolve a unique personality. No two boxen were exactly alike—hardware quirks, hacky workarounds, custom patches, niche startup scripts, and years of general wizardry led to these machines taking on a life of their own.

      This sounds like an absolute nightmare. How can you possibly ensure that your servers are secure when they carry years of weird hack scripts added by admins who may not even work there anymore? I think the answer is probably “you don’t,” given how insecure old software was. The modern shift has been driven by the need to protect extremely valuable data against constant global attacks, which wasn’t a concern with these personalized, quirky servers.

      Eliminating most state was one of the best things to happen to ops. Your system’s state is well defined in a Docker or similar config. You can read it and quickly understand exactly what the system is and every change that has been made. It’s also very simple to undo any change.
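      As a minimal sketch of that idea (the base image and the copied nginx.conf are illustrative, not from the thread), the whole system definition fits in one reviewable file, and undoing a change is just reverting a line:

      ```dockerfile
      # The machine's entire state, declared in one place.
      FROM debian:bookworm-slim

      # Every installed package is visible here, not buried in shell history.
      RUN apt-get update \
          && apt-get install -y --no-install-recommends nginx \
          && rm -rf /var/lib/apt/lists/*

      # Custom config travels with the definition (nginx.conf is a hypothetical file).
      COPY nginx.conf /etc/nginx/nginx.conf

      CMD ["nginx", "-g", "daemon off;"]
      ```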

      1. 6

        Kids today. Some of them can’t even grow gray beards at all!

        1. 6

          You can have that, but most people don’t want it. Instead we prefer a mixture of services that we tailor to ourselves individually.

          Nothing really stops you from setting up a Unix. A different variant would be a NextCloud or Mastodon instance, if you prefer the browser over ssh.

          1. 9

            Build systems that will last, where multiple services can coexist effectively, and where users will want to return to work, communicate, and set up their .project files.

            I am not sure that this is realistic outside some subcultures that do it for nostalgic reasons or because they want to be part of a niche subculture (all completely valid!). Unix systems used to be multi-user because machines were fast, large, and expensive compared to today’s hardware. So, it was more cost-effective to let people log onto a UNIX machine from dumb terminals than to put a VAX on every desk. Now machines have become so powerful that everyone can just have their personal single-user machine. For social stuff, we have Facebook, Twitter, Mastodon, WhatsApp, Signal, or whatever.

            Rebooting used to be a mark of failure as a sysadmin: you couldn’t figure out what was going wrong and had to resort to the nuclear option. Today, we don’t even bother to reboot the system. Instead, just destroy the whole thing and start over. What went wrong? Who cares!

            Or just switch to the previous NixOS generation. No need to destroy the system ;).
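            For reference, the rollback is a one-liner; the second command assumes the default system profile path:

            ```shell
            # Switch back to the previous system generation (undo the last rebuild)
            sudo nixos-rebuild switch --rollback

            # List the generations available to roll back to
            sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
            ```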

            1. 8

              “I am not sure that this is realistic outside some subcultures that do it for nostalgic reasons or because they want to be part of a niche subculture (all completely valid!).”

              Exactly. We have a finite amount of time to live, a subset of that to be productive toward our goals, and lots of potential goals. The non-console UNIX systems showed us that a lot of stuff can be automatic, with little to no learning curve, if the developers invest some effort into that. Maybe add an optional escape hatch, where a support team or a command line lets us do more than the GUI’s defaults. That frees up more time to achieve our goals that we’re not spending on incidental complexity in our systems.

              So, I’m opposed to the old model of learning all kinds of complicated crap to keep a system going. I’d rather systems be designed to handle that for you, with admins using just enough effort to steer the automation in the right direction. Where possible, they should be simple enough for non-admins to administer, from sane system defaults to pre-packaged help.

              “Or just switch to the previous NixOS generation. No need to destroy the system ;)”

              Or switch to Minix 3, QNX, etc., which let you keep going despite many failures. Alternatively, use verified components that prevent failures in the first place. Or a mix. We have lots of options today.

            2. 2

              I read this mostly as a reminiscence of .project and .plan files, w, write, finger, talk, and the like.

              An artifact from those days of UNIX as BBS: https://web.archive.org/web/20010404230749/http://www.artifex.org:80/lib/community_unix.html

              1. 1

                According to some old book, the finger command reads both the .plan and .project files. Does anything else read them? Write to them?

                Any additional lines in .project are ignored.

                So, you could keep additional lines in there about previous projects and they’d be visible to the curious (or nosy) but otherwise hidden?
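                As a rough sketch of that behavior in Python (the function name is mine; the first-line rule comes from the book quoted above), finger-style output would show only the first line of .project but all of .plan:

                ```python
                from pathlib import Path

                def finger_project_and_plan(home: Path) -> str:
                    """Sketch of how classic finger surfaces these files:
                    only the first line of .project is shown (extra lines
                    are ignored), while .plan is printed in full."""
                    out = []
                    project = home / ".project"
                    if project.exists():
                        lines = project.read_text().splitlines()
                        if lines:
                            out.append(f"Project: {lines[0]}")
                    plan = home / ".plan"
                    if plan.exists():
                        out.append("Plan:")
                        out.append(plan.read_text().rstrip("\n"))
                    return "\n".join(out)
                ```

                Under that reading, yes: extra .project lines stay in the file, readable by anyone who opens it directly, but invisible to casual fingering.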