Threads for jordemort

    1. 2

      I don’t see it mentioned as much in “introduction to the CLI” content these days, but apropos is a vastly underrated tool - it will search this same information and return a list of matching man pages.

    2. 3

      Really like how they override the env builtin.

      1. 2

        That’s the part that scared me off of trying it.

    3. 1

      I use a little selector trickery (really just the | operator) to style both my RSS and my Atom feed with the same XSLT: https://github.com/jordemort/jordemort.github.io/blob/main/public/feed.xsl

    4. 7

      The illustrious Foone Turing also did a teardown; the thread starts here: https://digipres.club/@foone/110477318125507234

      1. 2

        I know they never liked critiques of their choice of twitter as a vector for those threads in the past, so I wouldn’t dream of saying it to them…

        but it’s so much easier to read on mastodon, and I’m glad they switched.

    5. 1

      I had a Pro Audio Spectrum 16 as a callow youth. It was the first hardware that I installed in my first PC. I asked for it because it had the MIDI interface and a SCSI controller built-in and I had dreams of acquiring a MIDI keyboard and a CD-ROM drive. I never had the money to make use of either and ended up playing most games in Sound Blaster compatibility mode, which was just fine, as I recall. I think it’s still floating around in my mom’s basement.

    6. 1

      Is it real? I’ve heard folks saying that it is not, but I looked up the chips involved and it at least seems plausible. The YouTube video included in the article seems real sketchy and only shows a single game being played, no DOS or Windows 95 action.

      1. 1

        I found it on twitter via a @dosnostalgic thread, where you can watch a hands-on video and find the related AliExpress link.

        1. 1

          I saw it on the same person’s Mastodon account, but I had not seen the longer video. That’s a bit more convincing than the 15 second clip in the Tom’s Hardware article.

    7. 1

      I use Zoho right now. It is very cheap, and there is no advertising. There doesn’t seem to be a limit on the number of domains I can add to it. I use a .haus domain for my primary email address and the only trouble I’ve run into is that the website for my Kohl’s card does not recognize it as a valid address.

    8. 4

      I managed to snag doo.dad during the preregistration period. Not sure what I’m going to do with it yet.

      1. 2

        Wait and sell it for millions :^}

        1. 2

          Willing to cut you a deal for $500,000

    9. 4

      Tomorrow is Free Comic Book Day. Last time I was in my local shop, the guy mentioned it, and I said “oh yeah I usually end up missing it” and I swear it hurt his feelings, so I’m showing up tomorrow.

    10. 7

      You can have the UEFI firmware verify signatures, but there’s no way to make the Pi verify a signature on the UEFI firmware. The chain of trust is incomplete on this platform and there’s no way to fix it. A malicious person with write access to the boot partition could replace the UEFI firmware with a backdoored version, which would not be possible on a platform with UEFI Secure Boot built in.

      1. 1

        Post updated, thanks for your comment!

      2. 1

        Yes, I think the only way to make this even vaguely secure would be to use some kind of write-once / read-only storage for the boot sector.

        The new SD card spec (9.0, introduced May 2022) includes provision for a write-protected boot area: https://www.sdcard.org/developers/boot-and-new-security-features/replay-protected-memory-block/ It’s protected by a key you write to the device, so if you can find an SD card that implements this part of the spec, and the SD card firmware is secure, you could implement UEFI Secure Boot on a Pi this way. A cursory search hasn’t turned up any SD cards that implement this, but there might be some out there by now?

        (The hardware switch on the larger SD cards that fitted the early Pis was only ever advisory: The host OS could ignore it entirely!)

      3. 1

        The Pi 4 does support secure boot via the OTP fuses, per RPi documentation.

        1. 1

          Yeah, that seems pretty reasonable, although it’s too bad that they made up their own scheme instead of implementing UEFI.

    11. 2

      Wrapping up my current contract and then being unemployed for a couple days before I jump into a new full-time position next week.

      I’ve been neglecting my blog, and one of my open-source projects, and I’ve also been planning on writing an article about WASM, but I haven’t had enough motivation outside of work hours to do much of anything about any of that. I might poke at one or two of those things in the interregnum between jobs, but probably not.

      1. 1

        Subscribed via RSS :) Looking forward to the WASM article

    12. 2

      It’s unclear from the context: are they pushing this change out to existing stable releases, or is this something that’s going to happen in the next stable release? Changing it in existing releases is gonna break a whole lot of folks’ Dockerfiles and other automation.

      1. 8

        Next stable. Ubuntu 23.04 (Lunar) which just came out, and Debian 12 (bookworm).

    13. 7

      Here’s my NetWare story: my high school had a bunch of 386 PCs that were part of a NetWare network. One day, while messing around in Pascal class, we discovered that an administrator account had a really obvious password. I reported it to the school administration and they tried to give me detention for my troubles. After that, I started menacingly toting around a copy of “NetWare Unleashed” and the administration didn’t like that at all but couldn’t figure out a way to punish me for reading a computer manual that I picked up at Borders. Never really did actually learn much about NetWare though, I got the book mostly for show/to make a point.

    14. 3

      I’ve got plans, and hardware, to put together a system that would notify me when my non-smart washer and dryer finish, based on accelerometers that I’d stick to the back of them, firing off a message to ntfy.sh. I haven’t built it yet, because there’s never any time, but someday.

      1. 2

        That sounds cool. Wouldn’t it be easier, though, to use some “smart” socket which measures power usage?

        1. 5

          This is how I do it; I have an Aeotec Z-Wave power switch between the dryer and the outlet. It’s hooked up to Home Assistant, which sends me a text message when the power usage drops back down after being up for at least a few minutes. It works pretty well. I was going to do the same for my dishwasher but my local building code requires that they be hardwired, so I’m going to have to put a clamp on the circuit or something instead.
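
          The logic amounts to something like this (a rough Python sketch of the automation rather than my actual Home Assistant config; the threshold and timing numbers are made up):

          import time

          POWER_ON_W = 10.0   # made-up threshold: the machine is "running" above this
          MIN_RUN_S = 180     # must run at least a few minutes before we care

          def watch(read_power, notify, poll_s=15):
              """Poll a power sensor; fire a notification when usage drops
              back down after the machine has run for a real cycle."""
              started = None
              while True:
                  watts = read_power()
                  if watts > POWER_ON_W:
                      started = started or time.monotonic()   # a run began
                  elif started is not None:
                      if time.monotonic() - started >= MIN_RUN_S:
                          notify("Dryer finished!")
                      started = None
                  time.sleep(poll_s)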

          1. 1

            but my local building code requires that they be hardwired

            That seems odd to me (Australian). Ours are all just socketed; they don’t draw that much current, do they?

            1. 2

              Codes change and are weird. In our bathroom (late 80s vintage) the washer is connected to a hardwired panel (protected by a rubber seal). In our vacation home bathroom, recently rebuilt, the washer connection is a socket[1] - albeit placed high on the wall. In both cases there are concerns about moisture but somehow they did a 180 regarding what’s considered safe.

              Dishwashers are socketed here too, the outlet has to be a bit higher than normal though.

              [1] possibly the socket is specifically moisture-rated.

            2. 1

              I think the idea is to discourage sockets underneath the dishwasher that could potentially be flooded

            3. 1

              Remember that the US uses 110V mains, which roughly doubles the current that a device needs to draw for the same power relative to most of the rest of the world.

        2. 1

          That feels like it involves deadly amounts of current and more than $15 worth of parts. I do software, not electricity.

    15. 3

      I ran into trouble inlining SVGs, because some of the SVGs I was inlining contained <style> tags whose rules leaked out into the rest of the document and affected other SVGs on the page. I ended up having to do some postprocessing on them: basically, I assigned each inlined SVG a unique id and then rewrote the contents of any <style> contained within to prefix each selector with that id, so that the rules would only apply to elements inside the SVG that contained the <style>. I wrote about it on my blog here: https://jordemort.dev/blog/fixing-leaky-svg-style-tags/
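
      The gist of the postprocessing is roughly this (a simplified sketch rather than the code from the post; it only handles flat CSS rules, no @media or @keyframes, and the helper names are made up):

      import itertools
      import re

      _ids = itertools.count()

      def scope_svg_styles(svg: str) -> str:
          """Give an inlined SVG a unique id and prefix every selector in
          its <style> blocks with that id, so the rules can't leak out."""
          uid = f"svg-scope-{next(_ids)}"
          svg = svg.replace("<svg", f'<svg id="{uid}"', 1)  # tag the root element

          def scope_block(style: re.Match) -> str:
              # Prefix each comma-separated selector in "a, b { ... }" rule heads.
              css = re.sub(
                  r"([^{}]+)\{",
                  lambda m: ", ".join(
                      f"#{uid} {sel.strip()}" for sel in m.group(1).split(",")
                  ) + " {",
                  style.group(2),
              )
              return style.group(1) + css + style.group(3)

          return re.sub(r"(<style[^>]*>)(.*?)(</style>)", scope_block, svg,
                        flags=re.DOTALL)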

    16. 4

      Am I just lucky or is the M1 significantly less flaky than the M2? My M1 Pro has been happily buzzing along since I got it last year, I don’t think I’ve seen more than a couple apps crash. If anything, it’s been more stable than the Intel Mac it replaced.

      1. 3

        I have an M1 Pro MacBook Pro and an M2 Pro Mac mini. Both are great.

        I’ve had none of the flakiness described in the article, in hardware or software.

        If anything Ventura on my Apple silicon machines is more stable than on my (work-provided) Intel-based MacBook Pro, which has had several weird crashes. I’m more inclined to put any issues down to the OS than the hardware.

      2. 1

        This has been my experience as well. I have so few problems with the M1 that I can’t come up with any off the top of my head. The machine has been chugging along happily for 2 years. This is what prompted me to seriously consider the M2.

        1. 1

          o_O

          Doubleplus ungood.

        2. 1

          I’m curious why you thought an M2 was much of an upgrade from an M1 in the first place. It’s well documented in many reviews that it’s only an incremental improvement over M1, and so only more likely to be worth it if coming from an older machine.

      3. 1

        Wow. That is not good to hear.

        Ah well, maybe it’s another justification for a 2nd hand M1 when they get cheap enough…

      4. 1

        I’ve been using a maxed out M1 Max MBP since it was released. Very few problems.

        But Apple does do lots of arbitrarily stupid stuff with their OSs and apps. I’m not here to defend the stupidity. I can totally understand someone getting frustrated with macOS.

        For software development, though, it has been a great machine for me.

    17. 7

      Would like to see some more technical exposition to understand why the DNS issue “can only happen in Kubernetes” and if it’s the fault of musl, or kubernetes, or the DNS nodes that for some reason require TCP. Natanael has a talk about how running musl can help make upstream code better, by catching things that depend on GNU-isms without being labeled as such.

      I also wonder where the author gets the confidence to say “if your application requires CGO_ENABLED=1, you will obviously run into issue with Alpine.”

      1. 5

        My application requires CGO_ENABLED=1, and I ran into this issue with Alpine: https://github.com/golang/go/issues/13492

        TLDR: Cgo + musl + shared objects = a bad time

        1. 2

          That’s really more of a reliance on glibc than a problem with musl. musl is explicitly not glibc.

          1. 4

            Not sure if it’s fixed, but there were other gotchas in musl related to shared libraries last time I looked. Their dlclose implementation is a no-op, so destructors will not run when you think you’ve unloaded a library, which can cause subtly wrong behaviour including memory leaks and state corruption. I hacked a bit on musl for another project a couple of years ago and it felt like a perfect example of the 90:10 rule: they implement the easy 90% without really understanding why the remaining difficult 10% is there and why people need it.

            Oh, and on x86 platforms they ship a spectacularly bad assembly memcpy that performs worse than a moderately competent C one on any size under about 300 bytes (around 90% of memcpy calls, typically).

          2. 1

            The result is the same; my app, which really isn’t doing anything unusual in its Go or C bits, can’t be built on a system that uses musl.

            1. 1

              Yes, but I suspect you could more accurately say that it doesn’t work on a system that doesn’t use glibc.

              1. 1

                It works fine on macOS, no glibc there.

      2. 3

        Natanael has a talk about how running musl can help make upstream code better, by catching things that depend on GNU-isms without being labeled as such.

        Expecting getaddrinfo to work reliably isn’t a GNUism, it’s a POSIXism. Code that uses it to look up hosts that require DNS over TCP to resolve will work on GNU/Linux, Android, Darwin, *BSD, and Solaris.

        1. 1

          So is it in the POSIX standard?

      3. 2

        why the DNS issue “can only happen in Kubernetes”

        They happen in Kubernetes if you use DNS for service discovery.

        On the Internet, DNS uses UDP. RFC 1123 was really clear about that. It could use TCP, but Internet hosts typically didn’t, because responses that don’t fit in one UDP packet take more packets, more packets take more time, and more time means a lower-quality experience, so people just turned it off. How much time depends mostly on the speed of light and the distance the packets need to travel, so we can use a random domain name to measure circuit length:

        $ time host foiioj.google.com
        Host foiioj.google.com not found: 3(NXDOMAIN)
        
        real	0m0.103s
        user	0m0.014s
        sys	0m0.015s
        

        Once being “off” was ubiquitous, DNS client implementations started showing up that didn’t bother with the TCP code that they would never use, and musl is one of these.
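
        For contrast, a client that kept the TCP code behaves roughly like this (a sketch using the third-party dnspython package; the server address is just an example):

        import dns.flags
        import dns.message
        import dns.query

        def resolve(name, server="8.8.8.8"):
            """Query over UDP first; retry over TCP only if truncated."""
            query = dns.message.make_query(name, "A")
            response = dns.query.udp(query, server, timeout=2)
            if response.flags & dns.flags.TC:
                # The answer didn't fit in one UDP packet; this retry is
                # exactly the step that stripped-down stub resolvers omit.
                response = dns.query.tcp(query, server, timeout=2)
            return [rr.address for rrset in response.answer for rr in rrset]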

        Kubernetes (ab)uses the DNS protocol for service discovery in most reference implementations, but the distance between nodes is typically much less than 1000 miles or so, so you aren’t going to notice the time delay so much between one packet and five. As a result, when something goes wrong, people blame the one component that isn’t part of most of those reference implementations (in this case, musl).

        I use /etc/hosts for service discovery (and a shell script that builds it for all the containers from the output of kubectl get …), which is faster still and reduces the number of partitions, which can make tracking down some intermittent problems easier.

        Natanael has a talk about how running musl can help make upstream code better, by catching things that depend on GNU-isms without being labeled as such.

        This is a good point: If your application calls gethostbyname or something, what’s it going to do with more than 512 bytes of output? The most common reason seems to be people who use DNS to get everything implementing a service or sharing a label. Some of those are just displaying the list (on say a service dashboard), and for them, why not just ask the Kubernetes REST API? Who knows.

        But others are doing this because they don’t know any better: If you get five responses and are only going to connect() to one you’ve made a design mistake and you might not notice unless you use Alpine!

        1. 1

          depends mostly on the speed of light and the distance the packets need to travel

          This reminded me of an absolute gem and must-read story from ancient computer history, about how people couldn’t send email to anyone more than 500 miles away: https://web.mit.edu/jemorris/humor/500-miles

          1. 1

            That’s a fun story.

            You can use TCP_INFO to extract the RTT between the parts of the TCP handshake and use it to make firewall rules that block connections from too far away.

            This works well for me since I generally know (geographically) where I am and where I will be, but people attacking my systems are going to be anywhere, probably on a VPN which hides their location (and makes their RTT longer).
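
            Concretely, reading the RTT out on Linux looks roughly like this (a sketch; the struct offset matches the linux/tcp.h layout I’m aware of, so verify it against your kernel headers before relying on it):

            import socket
            import struct

            def tcp_rtt_us(sock: socket.socket) -> int:
                """Return the kernel's smoothed RTT estimate, in microseconds,
                for a connected TCP socket (Linux-only TCP_INFO sockopt)."""
                info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
                # tcpi_rtt sits after 8 bytes of flag fields and 15 __u32
                # fields in struct tcp_info - byte offset 68. Check linux/tcp.h!
                return struct.unpack_from("I", info, 68)[0]

            s = socket.create_connection(("example.com", 443))
            print(tcp_rtt_us(s) / 1000.0, "ms")  # block the peer if implausibly far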

    18. 1

      I think mcfly is my favorite piece of software right now. It makes shell history so much more useful.

      1. 1

        Are you using the fuzzy search, & if so can I ask what you have MCFLY_FUZZY tuned to? I suggested a default of 2 originally, have found I’m happier at 3, and don’t have any other reports to go on.

        1. 2

          I don’t think so; I haven’t set MCFLY_FUZZY to anything, or done anything else to configure it in any way. It’s been working well for me out of the box.

    19. 2

      A sight from the dawn of my career; we had some of these at the first dial-up ISP I worked at as a dirtbag teenager. Getting to touch them was above my pay grade at the time, though.

      1. 1

        Which ones, the x86 or the MIPS variety?

        1. 1

          I’m pretty sure they were of the MIPS variety, they were contemporaneous with the cube models, which I think we had one or two of as well. This would have been around 1997.

    20. 2

      One of the things I appreciate about many “older” repository schemes is that they can usually be hosted as just a directory full of files and a web server. DEB, RPM, pip… RPM at least can even do without the web server, supporting file:// URLs.

      I find it somewhat annoying when a repository format requires that you run some custom server specific to that format. I’ve dealt with this most frequently with Docker registries, but iirc NPM and Ruby gem repos also have this requirement.

      1. 1

        My main criticism of the apt repository format is that it’s difficult to update in an atomic way - to add or update a package, you have to update both Packages (or Sources, for a source package) and the Release file. If you try to do this in-place, there’s going to be at least a split second where the checksum in the Release file doesn’t match the contents of the Packages file, and if a client hits the repo during that interval it’s going to complain.

        The checksums in the Release file also make it difficult to generate repositories on the fly; before you can serve the Release file, you must generate all the Packages and Sources files and checksum them, and then what you serve up once the client requests the package indices dang well better match up to what you served in the Release file. This pretty much makes generating a repo on-the-fly a non-starter; to serve the first request you need to do an expensive iteration over your entire database of packages and then basically cache the results and do no further thinking. I’ve worked on various internal build systems that tried to introduce some determinism into apt-get and you’re more-or-less stuck with using snapshots.debian.org or rolling your own moral equivalent for anything that’s not on there already.
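
        To make the coupling concrete, generating the checksummed part of a Release file amounts to something like this (a rough sketch, not any particular tool’s code; the paths are examples):

        import hashlib
        import os

        def release_sha256_stanza(dist_dir, index_paths):
            """Hash every index file up front; the bytes served later for
            each index must match these digests exactly, or apt complains."""
            lines = ["SHA256:"]
            for rel_path in index_paths:
                with open(os.path.join(dist_dir, rel_path), "rb") as f:
                    data = f.read()
                digest = hashlib.sha256(data).hexdigest()
                lines.append(f" {digest} {len(data)} {rel_path}")
            return "\n".join(lines)

        print(release_sha256_stanza("dists/stable", ["main/binary-amd64/Packages"]))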

        1. 1

          (And yeah, you can avoid some of that by using “flat” repos, but they’re semi-deprecated, it’s a crapshoot whether tools other than apt-get that purport to grok repos understand them at all, and they give you a lot less to hang apt pinning rules off of than the full-blown repos…)

      2. 1

        Totally, npm was an utter disaster when I was looking to run my own. The CouchDB requirement tells me that it was a prototype that was retroactively made the standard rather than something that was well designed ahead of time. NPM really does reflect its ecosystem: immature and overly complex.

        I run my own Rubygems repo, which is served only by Apache. Bundler did add further requirements, but I get away with pregenerating the responses for the “dynamic” calls.