Threads for jmtd

    1.  

      I went digging and discovered / remembered that DJ Delorie works for RedHat on GLIBC these days.

      1.  

        Yep! I was wondering whether to post “it’s cool having DJ as a colleague” or if I’d be disclosing something that might be private. I guess it isn’t. So: it’s cool having DJ as a colleague, although we don’t work anywhere close to each other tech-wise.

    2.  

      I never really got it to work. It was too unixy for a kid familiar with DOS. I also didn’t have high enough end PCs to need all the fanciness I couldn’t get from Quick C or Turbo Pascal.

      Then I got Linux and never went back.

      1.  

        I had gotten a bit of work as a C developer in the 80’s, stepping through junior operator duties with mere administrative (tape monkey) access to the Unix (riscOS) systems, into the programmer/master operator realm at home on the DOS box cobbled from pieces I gradually got together. SoCal PC markets were awesome.

        I got a machine well enough spec’ed to run things other than DOS. Operator duties had me using Quarterdeck DESQview, and some term.exe for multi-window telnet sessions .. but this quickly became boring when I heard of friends running academic kernels such as minix .. and then this thing called linux came along .. and I just wiped everything (almost) and booted Linux, got 8 megs of RAM together, tried to compile X .. came back after multiple attempts (5-day compiles were normal), danced around like a fool with glee when I finally got two xterms up on my 486 .. and yeah .. okay, thanks Yggdrasil, this makes it deployable on all the PCs .. and I haven’t looked back.

        Things were awesome.

        What the hell happened?

      2.  

        My first mistake, downloading DJGPP (very slowly) as a young teenager, was also downloading Emacs.

    3. 2

      systemctl will now automatically soft-reboot into a new root file-system if found under /run/nextroot/ when a reboot operation is invoked.

      That’s interesting. I recently migrated my / to another disk and was surprised there wasn’t yet more high-level support for that kind of operation. I don’t mind doing things the hard way (especially since the runes to recall don’t change too fast over time) but I generally appreciate that an easier way exists, when it does.
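
      For my own future reference, a rough sketch of how I understand the new mechanism (systemd ≥ 254; the device name below is a placeholder and I haven’t tested this exact sequence):

      # stage an already-installed root filesystem, then ask systemd to pivot into it
      mkdir -p /run/nextroot
      mount /dev/nvme1n1p2 /run/nextroot   # placeholder: your prepared new root
      systemctl soft-reboot                # per the release note above, a plain reboot request should also pick it up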

    4. 19

      I suppose the author doesn’t recall what life was like two decades ago, when Firefox competed with Internet Explorer. The resulting arguments did not push Firefox to the brink, but led to an understanding that Internet Explorer generally failed to adhere to standards as well as Firefox.

      Look at South Korea for an instructive example. The current framing is not that Internet Explorer is good in South Korea, but that South Korean Web developers were wrong to rely on a browser monoculture, and individual websites are slowly transitioning from Internet Explorer-only systems to standards-based systems.

      1. 36

        IE was bad as a web browser in ways non-nerd users understood. Chrome is good as a web browser and bad in its consumption of users’ personal data, its introduction of non-standard APIs, and other similar nerd concerns. If developers start (or continue) to not develop with Firefox in mind, it will be Firefox that degrades in good-browser-ness as perceived by non-nerds, furthering its decline.

        1. 21

          I think part of the problem is that Firefox can’t easily distinguish itself on consumer features anymore. IE7 released five years after IE6, and for three of those years Firefox could differentiate itself with tabs and extensions. Chrome doesn’t have the same weakness.

          1. 15

            It could ship with effective adblockers out of the box.

            Might displease the advertising company that gives Mozilla most of their funding, though…

            1. 7

              It kinda already does. Strict privacy mode blocks a TON of tracking and ads. Pretty much the first setting I change on a new installation.

              1. 4

                That’s not the default out of the box.

                1. 2

                  I mean.. that’s pretty nitpicky. Go into settings, two mouse clicks (one into Privacy, one to choose “Strict”). Firefox can’t enable this by default because it breaks so many popular sites, because they all have excessive tracking scripts and often don’t include them in a failsafe manner (thus entire sections of sites just not working because an analytics/tracking script didn’t load).

                  1. 18

                    The entire history of the web browser market is characterized by the fact that almost all users rely on the defaults.

                    1. 4

                      This is (perhaps surprisingly) not true. For example, depending on where you look, 33-43% of users use ad blockers today.

                      1. 2

                        Not if you look in google analytics :p

                        1. 1

                          Not sure what you mean?

                          1. 3

                            Adblockers typically also block google analytics (eg ublock does by default). Hence, “depending on where you look” -> “zero, in google analytics”.

                    2. 2

                      First article I find on “what percentage of users adblock extension” says 43% of users 16-64 yrs old use ad blocking tools at least once a month. https://backlinko.com/ad-blockers-users

                      This one says 26% https://www.statista.com/topics/3201/ad-blocking/

                      In Indonesia, apparently some 57% https://techjury.net/blog/ad-blocker-usage-stats/

                      It’s a lot more work to find and install an adblock plugin than it is to go into Settings and change Privacy from default to “Strict”. That’s better than any other browser offers with strictly built-in features, as far as I’ve observed.

                      1. 3

                        It’s a lot more work to find and install an adblock plugin than it is to go into Settings and change Privacy from default to “Strict”

                        I’m A) not sure that’s true (type “ublock origin”, hit enter, follow the first non-ad link, hit “install” - all familiar stuff vs navigating an unfamiliar bit of software) and B) an adblocker actually improves the experience of using the web, whereas adjusting privacy settings offers a more nebulous benefit.

                  2. 7

                    I mean.. that’s pretty nitpicky. Go into settings

                    I can’t find the stats, but IIRC the majority of web-browser users literally never open the settings menu.

                  3. 3

                    the set of people who use defaults is probably going to be larger than the set of folks who are willing/able to go tweak things on their own.

          2. 10

            Firefox is intentionally reducing features/user choices of late, as if the design team hijacked the project. Every few months the design changes, generally by ballooning the interface (most recently compact mode/density has become “unsupported”, requiring an about:config option just to display it as an option in the customize toolbar menu). Firefox-related fora constantly complain, and considering the ever-declining userbase it doesn’t seem like it was a vocal minority.

            That said, performance has improved remarkably over the same period, memory usage notably decreasing a lot.

          3. 5

            Firefox has a lot of options here. Out of all major browsers, it ships with the least amount of useful (to me personally, at least) features out of the box.

            A reading list that doesn’t rely on Pocket (a service you can’t even pay for in my country), tab groups, PWA support on desktop, vertical tabs, text to speech, integration with the system keyring on macOS, a start page that does not suck, ad blocking on iOS—most browsers these days have a ton of features that you can only get in Firefox if you use extensions. That’s not ideal.

            These days I only use Firefox for development. I want to use it as my primary browser, but there’s very little reason to use it over Safari. Especially since Apple keeps adding little niceties like Live Text which Firefox will probably never get.

            1. 3

              OTOH, I don’t have most of these issues using Firefox as a daily driver on desktop because the extension system is rich enough to supplement these features. WRT iOS I think pretty much the only reason to use not-Safari is if you are heavily dependent on Chrome sync features; every browser app just offers a worse Safari experience otherwise.

              1. 1

                Only some of those features can be supplemented by extensions. For example, how do you add desktop PWA support or integration with the system’s native keyring using only an extension?

                This effectively means I have to rely on Safari (or Edge on Windows) at least some of the time.

                1. 1

                  I use a password manager instead of the system keyring; this was already necessary for me as I need access to my login items on non-Apple platforms anyway.

          4. 4

            five years after IE6

            Was it really only 5 years? It seemed so much longer than that. Time is weird.

            (Edit: I’m guessing IE6 was still dominant well after IE7 released, which is why it feels much longer, but I don’t remember the specifics anymore)

            1. 7

              IIRC, a lot of IE6 users remained after the release of IE7. Corporations with custom internal apps were a major chunk. There were also people on versions of Windows older than XP that couldn’t upgrade.

              Also, the upgrade was optional at first and most non-technical people probably didn’t even know it was available, or didn’t care. Forcing people to upgrade was not a common practice back then; even Firefox required you to download new versions from their website. It was Chrome that introduced the practice and it was considered rather unconventional at the time. (Oh my, how things have changed!)

              It took the end of support for Windows XP, with its equivalent end of support for IE6 (the two were tied) to finally kill it for good. Until then its usage kept hovering around a few percent, just enough that you couldn’t drop support for it.

            2. 2

              At the time, it would have seemed like forever. Microsoft considered the browser “done” and disbanded the team.[1]

              You can thank firefox for not being stuck using IE6.

              [1] https://arstechnica.com/information-technology/2010/09/inside-internet-explorer-9-redmond-gets-back-in-the-game/

          5. 2

            I think your general point is true but I think they should do more to highlight things like blocking auto-playing audio by default. This is an annoyance all of my Chrome-using friends complain about. Especially non-technical folks who are unlikely to realize they have a choice. These things where Google makes your phone feel less intimate are pretty amenable to highly produced ads. Apple does a great job at bringing out the smallest of differences, for example. I think a big issue is that Mozilla makes so little money from the marginal browser user that in a financial spreadsheet it never makes sense for them to advertise these things.

        2. 1

          IE was bad as a web browser in ways non-nerd users understood.

          Not really at the inflection point, around IE4/IE5.

      2. 4

        Isn’t there also a major concern that Google effectively controls the standards?

    5. 12

      So as someone who only ever used xorg and had no problems, it sounds like the situation is that xorg doesn’t support sexy desktop setups. Wayland is a completely different IPC protocol and family of implementations that also don’t support sexy desktops.

      Tbh I suspect Linux on desktop struggles because not many people use it, and normal people struggle to learn how to use any computer system whatsoever (possibly this is an instance of a general problem that most people are bad at learning anything complex). If that’s the case, the biggest single problem is diversity of desktop environments.

      1. 13

        If that’s the case, the biggest single problem is diversity of desktop environments.

        This is one of the major issues. The Linux desktop is hopelessly fragmented and everyone is reinventing similar wheels. E.g. some Wayland desktop environments support fractional scaling well, while in others it’s broken, and if you throw X11/XWayland applications in the mix, it’s a crapshoot. Linux desktops that are controlled by a vendor with an iron fist and with very little fragmentation (Android and ChromeOS) are hugely successful.

        1. 11

          I think lots of people in the Linux desktop scene widely underestimate just what a boon fragmentation is. It’s the thing that allows each project to cater to whatever audience it defines for itself. Most desktop developers don’t have that luxury.

          Microsoft has been trying to ditch elements of their classic desktop for literally decades, and they haven’t been able to in part because they have to put out a desktop that all of their customers can put up with. If they tried to pull a Gnome and ditch desktop icons over a period of a few months, delegating a popular but difficult to maintain feature to an incompatible extension layer, there would be large crowds marching on Redmond and they’d be carrying at least one pitchfork.

          Meanwhile, every popular Linux desktop project, including those two popular desktop projects, can yank out useful features, engage in architecture astronautics, or play out all sorts of twisted design experiments, safe in the knowledge that even paying customers of the companies that sponsor their work will be able to wing it – by moving to something else if it comes to it.

          The vast majority of people working on modern Linux desktops would absolutely hate developing for a centralised desktop “controlled with an iron fist” by a vendor. They don’t decry fragmentation because there are twenty desktops around, they decry fragmentation because there are nineteen desktops in addition to theirs. And they would absolutely go bonkers coding for it if it were the only one and design and implementation decisions were delegated to product managers, the “iron fist” that looks over the results of focus groups and, swayed by both technological inertia and common sense, will have the final word on everything, and that final word will occasionally be “nuh-uh, we’re not doing that” or, worse, “here’s what I need you to do”.

          1. 3

            I do, kind of, wish that every Wayland compositor was based on wlroots (or similar), though. That way, the set of Wayland extensions supported would be consistent across desktop environments, and things like screen capture would either just work or not, rather than working inconsistently.

            1. 1

              Unfortunately, I think that ship has sailed a long time ago :-(.

          2. 1

            That all may be true, but @danieldk is still correct.

            Yeah you can do that stuff to your users because they can move, but from the user perspective, that’s super hostile. What non-technologist wants to move DEs every few years?

            1. 4

              Oh, no, I don’t mean to say this is desirable. This lack of commitment to users is exactly why, despite like fifteen years’ worth of hard design work, Gnome 3 and Plasma 5 are about as popular among non-technologists as Gnome 2 and KDE 3 were, and Linux is still a footnote in the desktop world, even as the two major players in the desktop world (Microsoft and Apple) are cheaping out on their increasingly decrepit implementations.

              But the currently prevalent FOSS development model is completely incompatible with a single, tightly-controlled, standard desktop. You can’t have a system that is, simultaneously, developed through a tightly-controlled process, developed with complete developer autonomy, completely aligned with the requirements of a diverse user base, and completely aligned with the companies sponsoring (or owning) its development.

              People point at Windows or macOS or Android as examples of success in this world. And they’re right but the elephant in the room of desktops controlled by vendors with an iron fist is that developers rarely get to work on whatever they want, make whatever decisions they think are right, remove mechanisms they consider technically obsolete etc..

        2. 1

          Anybody is free to become an iron fist of this sort anywhere in the Linux stack.

          But you have to pay the developers. Which means you have to charge the customers at best, or advertisers and thinly veneered government surveillance organisations if you’re in it for the money and not the product.

          1. 1

            But you have to pay the developers. Which means you have to charge the customers

            I wish there were a consumer-facing Linux desktop distro that flat-out cost money, which was used to pay the devs. Note that RHEL is for Business (which is mostly servers, and most of those servers aren’t for running Jellyfin) and is not consumer-facing.

            Relying on corporate funding for home Linux installs will inevitably have mediocre results, because even with the best intentions, companies like Google will always be more concerned about their IT infrastructure scaling across three continents than about providing a server that’s easy for a barely-trained idiot to maintain when it’s running Jellyfin.

            Alas, this xkcd from 2009 remains accurate.

      2. 3

        If that’s the case, the biggest single problem is diversity of desktop environments.

        Interesting take. It does look like Wayland and some other stuff at around the same time (compositor/WM fused together; client-side decorations, yadda yadda) have made a strong effort to, putting it charitably, “prevent fragmentation”.

        The fragmentation is one of my fondest, earliest experiences of using Linux at all. It was one of the most visceral demonstrations that the way Windows (as all I had experienced to that point) does things was not the only way to do it.

      3. 3

        Linux on the desktop struggles for a lot of reasons, including chicken-and-the-egg things like “there’s not enough users so these apps don’t run on Linux / my favorite app doesn’t run on Linux so I don’t use it.”

        You don’t have the same hardware integration you get with Windows or macOS, nor do you get the same kind of ecosystem my wife and kids adore with things that “just work” across their phone, tablet, desktop machine, and accessories like Apple earbuds.

        It’s not that “people are bad at learning anything complex” – it’s that they aggressively do not care to be bothered because they don’t see the value. Doubly so when they’ve already learned how to do what they want to once already. Most people are perfectly capable of learning to use a manual transmission too, but if they don’t have to they don’t want to.

        I threw in the towel on trying to advocate Linux to the unwilling quite some time ago. If a friend or family member asks I will happily spend as much time as needed helping them adjust – but if they don’t express interest, I don’t bother.

    6. 1

      We have half a dozen of these cluttering up our office and I wasn’t sure what to do with them

      1. 2

        We run the website of our hackers’ youth club on one of those, from an SD card. The one from 2012 :-)

      2. 2

        I’ll take some off your hands! I’ll pay shipping!

    7. 1

      Not sure what op means by:

      This behaves a little similar to the existing instruction ENV, which, for RUN instructions at least, can also be interpolated, but can’t (I don’t think) be set at build time

      https://docs.docker.com/engine/reference/builder/#env

      The ENV instruction sets the environment variable <key> to the value <value>. This value will be in the environment for all subsequent instructions in the build stage (…)

      The environment variables set using ENV will persist when a container is run from the resulting image.

      A stage inherits any environment variables that were set using ENV by its parent stage or any ancestor.

      So, setting ENV at build time effectively sets a default value for a variable.

      The interaction between ARG and ENV might be spooky - but I don’t think one is worse than the other?

      Set them to document ENV vars that should/can be set at runtime?
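
      To make the build-time part concrete, something like this is the usual pattern (all names made up):

      # settable at build time, e.g.:  docker build --build-arg APP_ENV=staging .
      FROM debian:bookworm-slim
      ARG APP_ENV=production
      # baked into the image as a runtime default, still overridable with `docker run -e`
      ENV APP_ENV=$APP_ENV
      CMD echo "running in $APP_ENV mode"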

      1. 2

        I’m talking about interpolating the value of env vars into a docker command. For example

        ENV foo=date
        CMD $foo
        

        The substitution takes place before the command is run.

    8. 1

      HomeAssistant is probably the biggest offender here, because I run it in Docker on a machine with several other applications. It actively resists this, popping up an “unsupported software detected” maintenance notification after every update. Can you imagine if Postfix whined in its logs if it detected that it had neighbors?

      The author is assuming here that HomeAssistant is detecting the presence of other things running, but that’s one thing that containers prevent, unless you’ve explicitly punched holes between them. It sounds like an obnoxious notification, but also that the author doesn’t really understand why it’s happening.

      Recently I decided to give NextCloud a try. This was long enough ago that the details elude me, but I think I burned around two hours trying to get the all-in-one Docker image to work in my environment. Finally I decided to give up and install it manually, to discover it was a plain old PHP application of the type I was regularly setting up in 2007. Is this a problem with kids these days? Do they not know how to fill in the config.php?

      Was it recent or long enough ago? What was the actual problem? Nextcloud is being used as evidence of… something here. But what? And what’s wrong with putting a “plain old PHP application” in a container? They don’t mandate you use a container; you have the choice.

      I like keeping PHP stuff isolated from my OS, and being able to upgrade apps and PHP versions for the apps independently. On my personal VPS roadmap is to move a MediaWiki install into a container, precisely so I can decouple OS upgrades from both PHP and MediaWiki (it’s currently installed from the Debian package).

      1. 7

        OP installed HomeAssistant as a regular piece of software outside of Docker and was surprised that it doesn’t like sharing the machine. It seems to point to HA being either very greedy or demanding a container as its primary deployment method. And I agree with OP that either is kind of an unconventional installation strategy.

        1. 7

          HA really, really wants to be installed on “Home Assistant OS”. Preferably on either a pi or one of their supported appliances:

          https://www.home-assistant.io/installation/

          Other installation methods are meant for “experts”. I spent some time looking at it and decided it was too much trouble for me. I don’t really understand why they want that, either. If I wanted to understand, it looks like the right way to go about it would be to stand it up on their distribution and examine it very carefully. The reasoning was not clearly documented last time I looked.

          1. 1

            I suspect that HA is really fragile, and makes many assumptions about its environment that causes it to fall over when even the tiniest thing is wrong. I suspect this because that’s been my experience even with HAOS.

            1. 4

              Home Assistant is actually very robust, I ran it out of a “pip install home-assistant” venv for a few years and it was very healthy, before I moved it out to an appliance so the wall switches would still work whenever the server needed rebooting. Each time I upgraded the main package, it would go through and update any other dependencies needed for its integrations, with the occasional bump in Python version requiring a quick venv rebuild (all the config and data is separate).

              Home Assistant wants to be on its own HassOS because of its user-friendly container image updates and its add-on ecosystem to enable companion services like the MQTT broker or Z-Wave and Zigbee adapters.

        2. 1

          Home Assistant works very poorly in general in my experience, even when you give it exclusive control over the whole machine.

    9. 38

      Sorry if I sound like a broken record, but this seems like yet another place for Nix to shine:

      • Configuration for most things is either declarative (when using NixOS) or in the expected /etc file.
      • It uses the host filesystem and networking, with no extra layers involved.
      • Root is not the default user for services.
      • Since all Nix software is built to be installed on hosts with lots of other software, it would be very weird to ever find a package which acts like it’s the only thing on the machine.
      1. 20

        The number of Nix advocates on this site is insane. You got me looking into it through sheer peer pressure. I still don’t like that it has its own programming language; it still feels like it could have been a Python library written in functional style instead. But it’s pretty cool to be able to work with truly hermetic environments without having to go through containers.

        1. 22

          I’m not a nix advocate. In fact, I’ve never used it.

          However – every capable configuration automation system either has its own programming language, adapts someone else’s programming language, or pretends not to use a programming language for configuration but in fact implements a declarative language via YAML or JSON or something.

          The ones that don’t aren’t so much config automation systems as parallel ssh agents, mostly.

          1. 6

            Yep. Before Nix I used Puppet (and before that, Bash) to configure all my machines. It was such a bloody chore. Replacing Puppet with Nix was a massive improvement:

            • No need to keep track of a bunch of third party modules to do common stuff, like installing JetBrains IDEA or setting up a firewall.
            • Nix configures “everything”, including hardware, which I never even considered with Puppet.
            • A lot of complex things in Puppet, like enabling LXD or fail2ban, were simply a […].enable = true; in NixOS.
            • IIRC the Puppet language (or at least how you were meant to write it) changed with every major release, of which there were several during the time I used it.
        2. 15

          I still don’t like that it has its own programming language

          Time for some Guix advocacy, then?

          1. 8

            As I’ll fight not to use SSPL / BUSL software if I have the choice, I’ll make sure to avoid GNU projects if I can. Many systems do need a smidge of non-free to be fully usable, and I prefer NixOS’ pragmatic stance (disabled by default, allowed via a documented config parameter) to Guix’s “we don’t talk about nonguix” illusion of purity. There’s interesting stuff in Guix, but the affiliation with the FSF is a no-go for me, so I’ll keep using Nix.

            1. 11

              Using unfree software in Guix is as simple as adding a channel containing the unfree software you want. It’s actually simpler than NixOS because there’s no environment variable or unfree configuration setting - you just use channels as normal.

              1. 13

                Indeed, the project whose readme starts with:

                Please do NOT promote this repository on any official Guix communication channels, such as their mailing lists or IRC channel, even in response to support requests! This is to show respect for the Guix project’s strict policy against recommending nonfree software, and to avoid any unnecessary hostility.

                That’s exactly the illusion of purity I mentioned in my comment. The “and to avoid any unnecessary hostility” part is pretty telling on how some FSF zealots act against people who are not pure enough. I’m staying as far away as possible from these folks, and that means staying away from Guix.

                The FSF’s first stated user freedom is “The freedom to run the program as you wish, for any purpose”. To me, that means prioritizing Open-Source software as much as possible, but pragmatically using some non-free software when required. Looks like the FSF does not agree with me exercising that freedom.

                1. 11

                  The “avoid any unnecessary hostility” is because the repo has constantly been asked about on official Guix channels and isn’t official or officially supported, and so isn’t involved with the Guix project. The maintainers got sick of getting non-Guix questions. You’re the one seeing an “illusion” of purity with the Guix project - Guix is simply uninvolved with any unfree software.

                  To me, that means prioritizing Open-Source software as much as possible, but pragmatically using some non-free software when required.

                  This is both a fundamental misunderstanding of what the four freedoms are (they apply to some piece of software), and a somewhat bizarre, yet unique (and wrong) perspective on the goals of the FSF.

                  Looks like the FSF does not agree with me exercising that freedom.

                  Neither the FSF or Guix are preventing you from exercising your right to run the software as you like, for any purpose, even if that purpose is running unfree software packages - they simply won’t support you with that.

                  1. 5

                    Neither the FSF or Guix are preventing you from exercising your right to run the software as you like, for any purpose, even if that purpose is running unfree software packages - they simply won’t support you with that.

                    Thanks for clarifying what I already knew, but you were conveniently omitting in your initial comment:

                    Using unfree software in Guix is as simple as adding a channel containing the unfree software you want. It’s actually simpler than NixOS because there’s no environment variable or unfree configuration setting - you just use channels as normal.

                    Using unfree software in NixOS is simpler than in Guix, because you get official documentation, and are able to discuss it in the project’s official communication channels. The NixOS configuration option is even displayed by the nix command when you try to install such a package. You don’t have to fish for an officially-unofficial-but-everyone-uses-it alternative channel.

            2. 4

              I sort of came to the same conclusion while evaluating which of these to go with.

              I think I (and a lot of other principled but realistic devs) really admire Guix and FSF from afar.

              I also think Guix’s developer UI is far superior to the Nix CLI, and I like the fact that Guile is used for everything, including even configuring the boot loader (!).

              Sort of how I admire vegans and other people of strict principle.

              OT but related: I have a 2.4 year old and I actually can’t wait for the day when he asks me “So, we eat… dead animals that were once alive?” Honestly, if he balks from that point forward, I may join him.

              1. 3

                OT continued: I have the opposite problem: how to tell my kids “hey, we try not to use the shhhht proprietary stuff here”.

                I have no trouble explaining to them why I don’t eat meat (nothing to do with “it was alive”, it’s more to help boost the non-meat diet for environmental etc reasons. Kinda like why I separate trash.). But how to tell them “yeah you can’t have Minecraft because back in the nineties people who taught me computer stuff (not teachers btw), also taught me never to trust M$”. So, they play Minecraft and eat meat. I … well I would love to have time to not play Minecraft :)

        3. 9

          I was there once. For at least 5-10 years, I thought Nix was far too complicated to be acceptable to me. And then I ran into a lot of problems with code management in a short timeframe that were… completely solved/impossible-to-even-have problems in Nix. Including things that people normally resort to Docker for.

          The programming language is basically an analogue of JSON with syntax sugar and pure functions (which return values, which then become part of the “JSON”).

          This is probably the best tour of the language I’ve seen available. It’s an interactive teaching tool for Nix. It actually runs a Nix interpreter in your browser that’s been compiled via Emscripten: https://nixcloud.io/tour/
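
          If you want a five-second taste without committing to anything, a made-up one-liner (assuming the nix tools are installed):

          # a let-binding, a function, string interpolation and an attribute set — that's most of the language
          nix-instantiate --eval -E 'let greet = name: "hello ${name}"; in { msg = greet "lobsters"; }.msg'
          # prints: "hello lobsters"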

          I kind of agree with you that any functional language might have been a more usable replacement (see: Guix, which uses Guile which is a LISPlike), but Python wouldn’t have worked as it’s not purely functional. (And might be missing other language features that the Nix ecosystem/API expects, such as lazy evaluation.) I would love to configure it with Elixir, but Nix is actually 20 years old at this point (!) and predates a lot of the more recent functional languages.

          As a guy “on the other side of the fence” now, I can definitely say that the benefits outweigh the disadvantages, especially once you figure out how to mount the learning curve.

        4. 7

          The language takes some getting used to, that’s true. OTOH it’s lazy, which is amazing when you’re trying to do things like inspect metadata across the entire 80,000+ packages in nixpkgs. And it’s incredibly easy to compose, again, once you get used to it. Basically, it’s one of the hardest languages I have learned to write, but I find it’s super easy to read. That was a nice surprise.

        5. 3

          Python is far too capable to be a good configuration language.

        6. 3

          Well, most of the popular posts mainly complain about the problems that Nix strives to solve. Nix is not a perfect solution, but any other alternative is IMO worse. The reason for Nix’s success, however, is not Nix alone, but the huge repo that is nixpkgs, where thousands of contributors pool their knowledge.

      2. 8

        Came here to say exactly that. And I’d add that Nix also makes it really hard (if not outright impossible) for shitty packages to trample all over the file system and make a total mess of things.

      3. 6

        I absolutely agree that Nix is ideal in theory, but in practice Nix has been so very burdensome that I can’t in good faith recommend it to anyone until it makes dramatic usability improvements, especially around packaging software. I’m not anti-Nix; I really want to replace Docker and my other build tooling with it, but the problems Docker presents are a lot more manageable for me than those that Nix presents.

      4. 4

        came here to say same.

        although I have the curse of Nix now. It’s a much better curse though, because it’s deterministic and based purely on my understanding or lack thereof >..<

      5. 2

        How is it better to run a service as a normal user outside a container than as root inside one? Root inside a container = insecure if there is a bug in Docker. Normal user outside a container typically means totally unconfined.

        1. 7

          No, root inside a container means it’s insecure if there’s a bug in Docker or the contents of the container. It’s not like breaking out of a VM, processes can interact with for example volumes at a root level. And normal user outside a container is really quite restricted, especially if it’s only interacting with the rest of the system as a service-specific user.

          1. 10

            Is that really true with Docker on Linux? I thought it used UID namespaces and mapped the in-container root user to an unprivileged user. Containerd and Podman on FreeBSD use jails, which were explicitly designed to contain root users (the fact that root can escape from chroot was the starting point in designing jails). The kernel knows the difference between root and root in a jail. Volume mounts allow root in the jail to write files with any UID but root can’t, for example, write files on a volume that’s mounted read only (it’s a nullfs mount from outside the jail and so root in the container can’t modify the mount).

            1. 10

              I thought it used UID namespaces and mapped the in-container root user to an unprivileged user.

              None of the popular container runtimes do this by default on Linux. “Rootless” mode is fairly new, and I think largely considered experimental right now: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/

              https://github.com/containers/podman/blob/main/rootless.md
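
              An easy way to check what you’ve actually got (a sketch; any image with cat in it will do):

              # in the initial user namespace (stock rootful Docker) this typically prints
              # "0 0 4294967295", i.e. root in the container is root on the host; under
              # rootless or userns-remap setups the mapping points at an unprivileged host range
              docker run --rm alpine cat /proc/self/uid_map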

            2. 8

              Is that really true with Docker on Linux?

              Broadly, no. There’s a mixture of outdated info and oversimplification going on in this thread. I tried figuring out where to try and course-correct but probably we need to be talking around a concept better defined than “insecure”

            3. 4

              Sure, it can’t write to a read-only volume. But since read/write is the default, and since we’re anyway talking about lazy Docker packaging, would you expect the people packaging to not expect the volumes to be writeable?

              1. 1

                  But that’s like saying a lock is insecure because it can be unlocked.

                1. 1

                    I don’t see how. With Docker it’s really difficult to do things properly. A lock presumably has an extremely simple API. It’s more like saying OAuth2 is insecure because its API is gnarly AF.

        2. 3

          This is orthogonal to using Nix I think.

          Docker solves two problems: wrangling the mess of dependencies that is modern software and providing security isolation.

          Nix only does the former, but using it doesn’t mean you don’t use something else to solve the latter. For example, you can run your code in VMs or you can even use Nix to build container images. I think it’s quite a lot better at that than Dockerfile in fact.

        3. 2

          How is a normal user completely unconfined? Linux is a multi-user system. Sure, there are footguns like command lines being visible to all users, sometimes open default filesystem permissions or ability to use /tmp insecurely. But users have existed as an isolation mechanism since early UNIX. Service managers such as systemd also make it fairly easy to prevent these footguns and apply security hardening with a common template.
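
          For example, a rough sketch of the kind of template I mean (the unit name and binary path are placeholders, but every property here is a real systemd directive):

          systemd-run --unit=myapp \
            -p DynamicUser=yes -p ProtectSystem=strict -p ProtectHome=yes \
            -p PrivateTmp=yes -p NoNewPrivileges=yes \
            /usr/local/bin/myapp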

          In practice neither regular users nor containers (Linux namespaces) are a strong isolation mechanism. With user namespaces there have been numerous bugs where some part of the kernel forgets to do a user mapping and thinks that root in a container is root on the host. IMHO both regular users and Linux namespaces are far too complex to rely on for strong security. But both provide theoretical security boundaries and are typically good enough for semi-trusted isolation (for example different applications owned by the same first party, not applications run by untrusted third parties).

    10. 18

      At the core of my complaints is the fact that distributing an application only as a Docker image is often evidence of a relatively immature project, or at least one without anyone who specializes in distribution. You have to expect a certain amount of friction in getting these sorts of things to work in a nonstandard environment.

      This times a thousand. I have tried to deploy self-hosted apps that were either only distributed as a Docker image, or the Docker image was obviously the only way anyone sane would deploy the thing. Both times I insisted on avoiding Docker, because I really dislike Docker.

      For the app that straight up only offered a Docker image, I cracked open the Dockerfile in order to just do what it did myself. What I saw in there made it immediately obvious that no one associated with the project had any clue whatsoever how software should be installed and organized on a production machine. It was just, don’t bother working with the system, just copy files all over the place, oh and if something works just try symlinking stuff together and crap like that. The entire thing smelled strongly of “we just kept trying stuff until it seemed to work”. It’s been years but IIRC, I ended up just not even bothering with the Dockerfile and just figuring out from first principles how the thing should be installed.

      For the service where you could technically install it without Docker, but everyone definitely just used the Docker image, I got the thing running pretty quickly, but couldn’t actually get it configured. It felt like I was missing the magic config file incantation to get it to actually work properly in the way I was expecting to, and all the logging was totally useless to figure out why it wasn’t working. I guess I’m basically saying “they solved the works-on-my-machine problem with Docker and I recreated the problem” but… man, it feels like the software really should’ve been higher quality in the first place.

      1. 18

        no one associated with the project had any clue whatsoever how software should be installed and organized on a production machine. It was just, don’t bother working with the system, just copy files all over the place, oh and if something works just try symlinking stuff together and crap like that.

        That’s always been a problem, but at least with containers the damage is, well, contained. I look at upstream-provided packages (RPM, DEB, etc) with much more scrutiny, because they can actually break my system.

        1. 4

          Can, but don’t. At least as long as you stick to the official repos. I agree you should favor AppImage et al if you want to source something from a random GitHub project. However there’s plenty of safeguards in place within Debian, Fedora, etc to ensure those packages are safe, even if they aren’t technologically constrained in the same way.

          1. 3

            I agree you should favor AppImage et al if you want to source something from a random GitHub project.

            I didn’t say that. Edit: to be a bit clearer. The risky bits of a package aren’t so much where files are copied, because RPM et al have mechanisms to prevent one package overwriting files already owned by another. The risk is in the active code: pre and post installation scripts and the application itself. From what I understand AppImage bundles the files for an app, but that’s not where the risk is; and it offers no sandboxing of active code. Re-reading your comment I see “et al” so AppImage was meant as an example of a class. Flatpak and Snap offer more in the way of sandboxing code that is executed. I need to update myself on the specifics of what they do (and don’t do).

            However there’s plenty of safeguards in place within Debian, Fedora, etc to ensure those packages are safe

            Within Debian/Fedora/etc, yes: but I’m talking about packages provided directly by upstreams.

            1. 1

              Within Debian/Fedora/etc, yes: but I’m talking about packages provided directly by upstreams.

              Regardless of which alternative, this was also my point. In other words, let’s focus on which packagers we should look at with more scrutiny rather than which packaging technology.

                AppImage may have been a sub-optimal standard bearer but we agree the focus should be on executed code. AppImage eliminates the installation scripts that are executed as root and have the ability to really screw up your system. AppImage applications are amenable to sandboxed execution like the others but you’re probably right that most people aren’t using them that way. The sandboxing provided by Flatpak and Snap does provide some additional safeguards, but considering those are (for the most part) running as my user, that concerns my personal security more than the system as a whole.

      2. 8

        On the other side, I’ll happily ignore the FHS when deploying code into a container. My python venv is /venv. My application is /app. My working directory… You get the picture.

        This allows me to make it clear to anybody examining the image where the custom bits are, and my contained software doesn’t need to coexist with other software. The FHS is for systems, everything in a dir under / is for containers.
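
        Concretely, the top of one of those images tends to look something like this (a sketch; the base image and file names are placeholders):

        FROM python:3.12-slim
        RUN python -m venv /venv
        WORKDIR /app
        COPY . /app
        RUN /venv/bin/pip install -r requirements.txt
        CMD ["/venv/bin/python", "main.py"]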

        That said, it is still important to learn how this all works and why. Don’t randomly symlink. I heard it quipped that Groovy in Jenkins files is the first language to use only two characters: control C and control V. Faking your way through your ops stuff leads to brittle things that you are afraid to touch, and therefore won’t touch, and therefore will ossify and be harder to improve or iterate.

        1. 2

          I got curious so I actually looked up the relevant Dockerfile. I apparently misremembered the symlinking, but I did find this gem:

          RUN wget --no-check-certificate https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
          RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p /miniconda
          

          There were also unauthenticated S3 downloads that I see I fixed in another PR. Apparently I failed to notice the far worse unauthenticated shell script download.
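
          (For contrast, the fix is tiny: verify what you download before running it, ideally against a versioned installer rather than -latest-. Something like the following, with a real checksum in place of the placeholder:)

          RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
           && echo "<expected-sha256>  Miniconda3-latest-Linux-x86_64.sh" | sha256sum -c - \
           && bash Miniconda3-latest-Linux-x86_64.sh -b -p /miniconda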

      3. 5

        I’ve resorted to spinning up VMs for things that require docker and reverse proxying them through my nginx. My main host has one big NFTables firewall and I just don’t want to deal with docker trying to stuff its iptable rules in there. But even if you have a service that is just a bunch of containers it’s not that easy because you still have to care about start, stop, auto-updates. And that might not be solved by just running Watchtower.

        One case of “works on docker” I had was a Java service that can’t boot unless it is placed in /home/<user> and has full access. Otherwise it will fail, and no one knows why Spring Boot can’t work with that, throwing a big fat nested exception that boils down to java.lang.ClassNotFoundException for the code in the jar itself.

        Another fun story was when I tried to set up MariaDB without root under a custom user.

    11. 9
      • Emacs. Too many reasons to list.
      • X11. I guess I am one of the few people who actually loves the X Window System. I will continue using it as long as it is feasible.
      • Linux and the GNU-ish userspace. I could live in a BSD world if I had to but it wouldn’t be the same.
      • Go has been a great enabler for the past decade.
      • IRC the only chat software I actually like.
      • Boardgamearena and Yucata for providing great ways to play boardgames online. Please don’t get enshittified.
      • Firefox. You broke my heart a few times (XUL apocalypse, mobile extensions breakage) and I am not 100% happy with the current state of things and directions but I am grudgingly grateful that I was able to return once again.
      • Wikipedia.
      • Gentoo for letting me do things the way I like.
      1. 1

        X11

        While I by no means hate X11, what makes you love it so?

        The only thing I miss about X11 now that I’ve been on Wayland for years, is how it is a network protocol: you can have a single application or a whole desktop just magically appear on your screen over the network.

        Of course, much of that convenience has been eroded by bitmap based toolkits, direct rendering, etc to the point where most applications today are a sluggish mess to use over X11 remotely.

        1. 7

          While I by no means hate X11, what makes you love it so?

          Not the original poster, but most recently, I had to run the Vivado FPGA tools. They run on Linux and Windows and are distributed only as x86-64 binaries. I have an AArch64 Mac. I can run them in a Docker container with Rosetta and then use X forwarding to expose the display to XQuartz on the host. This Just Works with X11. I don’t need to have a full remote display; each window from Vivado just appears as a window on the Mac.

        2. 5

          I’d sum it up as flexibility, adaptability and longevity. The same protocol got me through 30 years of computing across a huge variety of devices. Tools from many years ago continue to work and be useful. The flexibility and openness of the protocol has enabled amazing stuff like EXWM, which lets me treat X11 windows as Emacs buffers. Maybe Wayland will get there at some point but as it is today it feels like banging my head against a wall of limitations.

        3. 1

          Not the original poster. For exposing me to a plethora of interesting and quirky applications and approaches for working visually, from a huge range of different cultures and ages.

    12. 2

      If you shell into other machines, you’ll also want to copy the terminfo definition to those hosts (under ~/.terminfo I think) and sometimes to other user IDs on those hosts (if you don’t put it in at the system level)
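
      From memory, something like this does it (untested as written; substitute your real host):

      # export the local terminal's entry and compile it into ~/.terminfo on the remote side
      infocmp -x "$TERM" | ssh otherhost 'mkdir -p ~/.terminfo && tic -x -o ~/.terminfo /dev/stdin'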

    13. 12

      I owe my online presence to Valve, as Steam was the first “social media” I had as a kid. Half Life begat Half Life 2, Half Life 2’s modding scene begat Garry’s Mod, and YouTube circa 2007 showed off Garry’s Mod to a curious kid.

      The rest is history. Half Life has a special place in my heart. I remember speculating about the lore with friends in Steam group chats. Memories..

      1. 2

        I was in a TFC clan with Garry. He seemed fun. He was kicked out, but I can’t remember why.

      2. 2

        The GMod Idiot Box and its ilk is probably the only reason I’m a programmer today.

    14. 8

      Really glad they brought back the original main menu. I haven’t seen it in forever and I missed it every time. A taller order they don’t seem to have attempted is a 64-bit Mac binary.

      1. 2

        I presume they haven’t done an aarch64 one either? I haven’t checked

        1. 2

          Valve really doesn’t care about Macs anymore. The Steam client still isn’t ARM native, and a ton of their games are still 32-bit executables despite Apple screaming for years that they would drop support (and eventually doing so). Counter-Strike 2 also dropped macOS support, even though CS:GO had it.

    15. 16

      Half-Life was such a literal game changer:

      • Going from 256 to 16k colours made a huge difference to the immersion
      • Much more varied levels than Quake and Doom
      • An intro worthy of a movie
      • It ran smooth as glass
      1. 8

        Much more varied levels than Quake and Doom

        This is true, but in one respect there was one step backwards: the levels and overall progression became substantially linear.

        It ran smooth as glass

        Anecdote: when I launched it for the first time and played for half an hour or so I thought it looked and performed merely “pretty good”, and wondered if it was overrated. Later I realised it had launched with the software renderer, but I hadn’t noticed because they’d implemented stuff like coloured lighting in the software renderer, which even the id games hadn’t done. Once I launched OpenGL my jaw was properly on the floor.

        1. 5

          It was linear, but what I remember most, besides the incredibly spooky slow arc from clean right-angled rationality to goopy organic madness, is that there was no pause when a new level loaded. There were no stopping points, so it felt like a page-turner novel that won’t let you put the book down and go to sleep.

          1. 4

            There are loading points and they’re noticeable. Sometimes they’re at the chapter transitions, sometimes they’re just at a chokepoint. They did a good job of keeping the loading pretty fast, and keeping things integrated so that the player isn’t thinking about going from one map to another, but it’s not actually seamless. There are five times in the initial train ride where the motion hitches and LOADING… prints across the middle of the screen. On a modern machine it’s maybe a tenth of a second, on contemporary hardware it’s more like a couple seconds each time.

            1. 3

              The way I remember it, the standard for other games was to blank the screen with a progress bar for 30 seconds and then hop the player to a totally different context. So technically there was a tiny loading pause, but it was so much shorter and better integrated into the gameplay that it felt like there wasn’t.

              1. 1

                Yeah, like I said. They did better than most. Better than some even today. But it isn’t “no pause” by any means.

          2. 3

            Half Life’s level transitions were much much less jarring than other games. They hid most of them in corridors. They would have identical copies of that corridor in both the “from” and “to” maps and an entity in game that represents the location of the level transfer. That entity would be in exactly the same place relative to the corridors in both the “from” and “to” levels. They’d translate the player coordinates so the player would find themselves in the same location before and after the switch.

            1. 2

              I’m disappointed that most of the games industry still hasn’t progressed beyond that. Half-Life had “seamless” level transitions in 1998, and in 2001, Jak and Daxter pretty much just didn’t have level transitions at all. But today, even open world games are still putting in loading screens or at least fastish “seamless” level transitions in many places. Basically no progress in 25 years.

              1. 1

                No Man’s Sky is as open world as it gets, and has no loading screens even when e.g. descending onto a planet. The high-quality textures loading in is noticeable, though (at least on my Deck).

              2. 1

                I’ve played some recent open world games lately (Elden Ring, Far Cry 3 and 5) and while the loading times are obnoxious (FC5 compared to 3 especially), once you’re in the open world transitions are mostly seamless.

              3. 1

                The new Legend of Zelda: Tears of the Kingdom has no loading screens (apart from when you’re teleporting). It is a pretty amazing experience to be able to skydive from the highest point in the map (where you can see from one end of Hyrule to the other) all the way down to the ground and through it further down to the underworld, all in one seamless motion.

              4. 1

                There has definitely been progress, but I don’t think some studios really get deeply involved with it.

                I found Starfield especially jarring there. I refunded the game due to how badly it was put together overall, but really the loading screens are doing a lot of the lifting in that opinion. Doing simple side missions may involve going through 7-8 loading screens (4-5 for going to a location: location -> spaceship -> orbit -> other star system -> orbit -> planetary landing site -> dungeon).

                Meanwhile games like NMS, E:D, SC etc have no loading screens between scenes. Well, no visible ones; you can sometimes catch it loading stuff, but it’s well masked. The newer God of War games hide loading screens with crawl sections.

                Masking a loading screen is work but IMO it’s well worth it because it wastes less of the player’s time (if done well, looking at you Callisto Protocol). But it’s way simpler and cheaper to just bring up the progress bar.

      2. 4

        “It ran smooth as glass”

        100% …why did it feel so smooth? Did it run at a better framerate than others at the time?

        1. 5

          Half-Life ran on an upgraded version of the Quake 1 engine, which was a couple years old at that point. In the 90s, hardware was advancing so fast (particularly graphics) that two years was a very long time. Upgrades included 16-bit color, skeletal animation, beam effects, much better enemy AI, and audio processing like reverb, so the burden was greater than Quake. But it came several months after the first release of Unreal, which was a technical showcase in all those ways and more and was expensive to run well. Half-Life was not as nice to look at in stills, but it ran better and had a mature feel and coherent narrative that made it the favorite.

          1. 1

            Pretty sure it was using an upgraded version of the Quake II engine. It made heavy use of features like skeletal animation, something the Quake I engine didn’t have. Also, the way the lighting worked is a dead giveaway.

            1. 3

              They had access to the Q2 codebase, but the skeletal animation and lighting were entirely their own work: Half Life’s Code Basis

            2. 2

              If my memory serves, and it doesn’t necessarily, Quake 2 used baked radiosity lighting and had colored dynamic lights in GL, while Half-Life still used Quake’s baked and dynamic lighting but the baked lights had color.

              Half-Life used skeletal animation (skinned meshes, vertices with bone weights) but Quake 2 did not; instead it would interpolate vertex positions between the same kind of vertex-animated frames that Quake 1 used. That was also true of Quake 3. It still didn’t use skeletal animation but did split character models into head, torso, and legs parts to get some of the benefit.

              Wikipedia:

              GoldSrc (pronounced “gold source”), sometimes called the Half-Life Engine, is a proprietary game engine developed by Valve. At its core, GoldSrc is a heavily modified version of id Software’s Quake engine.

              1. 2

                Wow, that makes it all the more amazing what they managed to do with that tech!

                1. 3

                  Absolutely. They didn’t just use the engine off the shelf, they used it as a starting point and got busy turning it into what their game needed. Compared to most game creators I think of Valve as vertically integrated in the same way as Apple and Nintendo—they’ll own and operate as much technology as they need to in order to satisfy a novel product vision. Although they didn’t invent all the pieces they integrate, no one can make the same end result they do.

        2. 3

          It might have been the first 3D-accelerated game you played. Quake, for example, was usually played without any 3D acceleration (was OpenGL support there at release, or did it come later? I don’t even know).

          1. 12

            OpenGL support came half a year after initial release as a separate executable, GLQuake.

            Maybe you know this but I can’t resist a history lesson since it was such an exciting time:

            Quake became established as the killer app that justified a consumer’s first purchase of a 3D accelerator card, as we called them. In that way, Quake was a major factor in OpenGL finding support at graphics card manufacturers in the first place. Carmack was communicating with manufacturers and telling them what capabilities would be of best benefit to offload to hardware in their next product.

            Along with Glide and Verite, OpenGL was one of several early 3D APIs supported by Quake, with the notable exclusion of Microsoft’s Direct3D. The Quake engines’ ultimate dedication to OpenGL was a lever intended to prevent Direct3D from becoming the de facto standard 3D graphics hardware abstraction layer—a very good thing in light of Microsoft’s domination of the software market.

            1. 1

              Thank you for answering. I couldn’t find it through googling.

              Boy do I remember wanting a graphics card for glquake. Good times!

    16. 1

      A cd & ls merged into one tool https://github.com/antonmedv/walk

      1. 1

        Interesting. I do this: cdls() { cd "$@" && ls -lhrt; }; alias cd=cdls

    17. 4

      mkdir -p “$1” && cd “$1” || return 1

      What does || return 1 here achieve?

      1. 4

        It normalizes all error returns to 1, but I don’t see any particular use for that. I think the author is just in the habit of using return 1 anywhere a function should fail, and didn’t pay any mind to the fact that this one doesn’t need an early-out.
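
        For comparison, a minimal sketch of the same helper without the normalization (using a hypothetical name, mkcd); on failure the function simply propagates mkdir’s or cd’s own exit status instead of forcing 1:

            # hypothetical simplified version: the function's exit status is
            # whatever the failing command returned, no explicit return needed
            mkcd() { mkdir -p "$1" && cd "$1"; }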

      2. 3

        Ahh, I missed removing that. It was added earlier when the script had multiple lines and I did not have a global set -e for the file where these were defined.

    18. 3

      Support for the iconify control (^[2t). This isn’t new, but I think it’s handy to have an option to run commands like xcalc and have the shell’s terminal disappear until the command finishes.

      I have never heard of this feature but it sounds pretty cool. I often start something from my terminal (like vlc $video_i_just_rendered) and the terminal using up my alt-tab space is mostly useless. I wonder how this works, as I would like to be able to pull the terminal back up if needed (e.g. to check logs).

      On a similar note, it would be interesting to have something like a “super exec” that starts the program and completely closes the originating terminal. I guess you can do Ctrl-Z, then bg and disown. Maybe I’ll just make a shell function that spawns the command and then exits the shell.
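
      A rough sketch of both ideas, assuming an xterm-compatible terminal that honours the XTWINOPS window controls (CSI 2 t iconifies, CSI 1 t de-iconifies; some terminals disable these by default) and a util-linux setsid. The names and details are mine, not from the release notes:

          # run a command with the terminal iconified, restore it afterwards;
          # the window still exists, so you can alt-tab back to check output
          hide() {
            printf '\033[2t'   # iconify the terminal window
            "$@"
            printf '\033[1t'   # de-iconify once the command exits
          }

          # "super exec": detach the command from this terminal, then exit the
          # shell; most terminal emulators close their window when the shell exits
          sexec() {
            setsid -f "$@" </dev/null >/dev/null 2>&1
            exit
          }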

      1. 2

        This came up recently elsewhere so I suppose I should mention it here again: I wrote the dwm swallow patch for precisely this reason.

        1. 2

          See also: devour, a WM-agnostic X11 window-swallowing tool.

          All the best,

          -HG

      2. 2

        Sometimes I’d like every GUI launch wrapped in an (invisible by default) terminal, or, put another way, an easy way to connect up stdout/stderr after the fact.

        1. 4

          On GNOME, most applications are connected to the systemd journal by default, so you can do something like journalctl -f -ocat SYSLOG_IDENTIFIER=vlc.desktop, and add _TRANSPORT=stderr to skip stdout, or similar.

          I rarely use it but it is quite helpful for figuring out why something is crashing or similar.

    19. 8

      That’s not a terrible idea, although having to write transactions by hand is ultimately what deterred me from using hledger/beancount.

      I recently tried beancount again using the fava interface, but it was still very error-prone, and I then had to spend quite some time “debugging” where the missing $0.06 went!

      I’ve found the perfect balance with CSV imports, specifically using the hledger CSV importer. It provides tons of power to pre-categorize transactions, and the human error of entering the wrong amount or date is completely eliminated!

      With that I have a ledger that I can trust; I haven’t had any issues with reconciliation, and I have all the power of plain text accounting back in my hands!
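
      For reference, a minimal hledger CSV rules file for this kind of workflow might look like the following; the file name, account names, and patterns are invented for illustration, so check the hledger CSV docs against your bank’s actual columns:

          # bank.csv.rules -- hypothetical rules for a bank export
          skip 1
          fields date, description, amount
          date-format %Y-%m-%d
          currency $
          account1 assets:bank:checking

          if supermarket|grocery
           account2 expenses:food:groceries

          if payroll
           account2 income:salary

      With the rules file named after the CSV (bank.csv.rules next to bank.csv), something like hledger import bank.csv should then pick it up and pre-categorize matching rows.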

      1. 3

        I’ve just had a PR merged to add a feature to the hledger CSV rules handler: regex match groups are available in field assignments. Hopefully this is useful to someone other than just me!
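
        A hedged example of what that enables, with an invented card-statement pattern: a capture group in the if pattern can be referenced in a field assignment as \1 to keep just the merchant name:

            if CARD PAYMENT TO (.+)
             description \1
             account2 expenses:unsorted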

      2. 1

        FWIW, Beancount also provides a nice CSV importer interface to automate away the manual work.

        Depending on how it’s implemented, the importer can take care of the amount/date and even add the balancing posting to the other account(s) based on any attribute of the original posting. I wrote about this workflow some time ago on my blog.

      3. 1

        My secret sauce is beancount-import coupled with institutions that support OFX export. I’ve been using this combo for the past few years and never had issues, except for attempts at creating a NixOS derivation for it, which has left enough scars that I now do my reconciliation on a separate laptop.

      4. 1

        How do you handle inter-account transactions? For example, I use Revolut, which has an insanely stupid CSV export format, and occasionally I top it up from my bank card. So I get a duplicate transaction, with slightly different data and up to 3 days’ difference between the two sides.

        I have written a script that tries to find these, which works quite well, but it is definitely not automatic, not even semi-automatic… and I haven’t even gotten to stuff like Revolut’s round-up spending going to a savings account…

        (I am using beancount with a Java program that parses all my beancount files for existing transactions, matches new ones, and writes out the missing ones, with git as a “transaction handler”. But I don’t see much advantage to the textual format, besides beancount having a good GUI.)

        1. 2

          Regarding inter-account transactions, what I do is only import one side of them. For example, when I pay my credit card, I get a debit on my bank account and a credit on the credit card. I ignore the transactions on the credit card side, and the ones on the bank side I make look something like this:

              assets:checking            -amount
              liabilities:credit-card     amount
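
          For concreteness, a hypothetical journal entry of that shape (date, payee, and amount invented):

              2024-03-05 credit card payment
                  assets:checking            $-500.00
                  liabilities:credit-card     $500.00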

        2. 1

          I don’t get it: why are there duplicates? Is it more that there is a gap between paying and the amount actually being settled?

          The “easiest” thing to do might be to treat the Revolut exports as payments pulling from a “line of credit” account, and then the bank transactions as moving money from the bank account to that line of credit. That way you actually know how much you are floating. Though of course I don’t know what’s actually going on in your case; I’ve found that modelling based on what is happening bank-side at least gives me options to look at things and filter nicely later.
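
          A sketch of that modelling in journal form (dates, amounts, and account names invented): both exports post against the same float account, so the two legs of a top-up cancel out there instead of being double-counted, and the float account’s balance shows how much is in flight:

              ; from the bank export: the transfer pays into the float account
              2024-03-01 top-up sent to Revolut
                  assets:bank:checking        -100.00 EUR
                  liabilities:revolut-float    100.00 EUR

              ; from the Revolut export, a few days later: the top-up draws on the float
              2024-03-03 top-up received on Revolut
                  liabilities:revolut-float   -100.00 EUR
                  assets:revolut               100.00 EUR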

    20. 9

      Taking screenshots of your projects is a wildly valuable habit. Most projects bitrot, but a JPEG/PNG is forever!

      1. 3

        I wholeheartedly agree, and in retrospect I wish I had taken more screenshots of projects after I’d finished them. I made a number of small game mods in high school. None of them will run now (well, at least not without modifying and recompiling) as I hardcoded my local filesystem paths into the code, and I’ve reinstalled my OS several times since. It sure would be nostalgic to take a trip down memory lane, but it’s just not practical. Though it has been fun re-reading my absolutely terrible source code.

        1. 2

          I have a load of school work in ClarisWorks 1.0 file formats. When I got my first Mac, I bought AppleWorks with it, on the assumption that it would be able to open them, but it couldn’t open files that old. I probably have ClarisWorks on floppy disks somewhere, and now that Windows 3.11 runs in DOSBox I can probably recover them, but it taught me an important lesson about not putting anything I care about in proprietary file formats.

      2. 2

        What would be the equivalent of screenshots for cli / non-GUI projects? Asciinema videos?

        1. 1

          Yeah those work pretty well. I often drop in static screenshots of a terminal or animated GIFs of it running.
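
          For example, something along these lines (assuming the asciinema CLI and its companion GIF renderer agg are installed; commands from memory, so double-check against their docs):

              asciinema rec demo.cast    # record a terminal session to a file
              agg demo.cast demo.gif     # render the recording as an animated GIF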

        2. 1

          You remind me that copying and pasting from the macOS terminal into a macOS text editor preserves the colours by converting to rich text.