Threads for justizin

    1. 3
      Express Service Dependencies

      The daemontools design allows for various helper tools. One of them is svok, which finds out whether a given service is running. This is just another Unix program that will exit with either 0 if the process is running, or 100 if it is not.
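      A sketch of how that contract composes in shell. daemontools isn’t assumed to be installed here, so a stub following the same exit-code convention stands in for svok (the real tool tries to open the service’s supervise/ok FIFO; this stub just checks that the file exists):

      ```shell
      # stand-in for svok: exit 0 = running, 100 = not running
      svok_stub() {
        [ -e "$1/supervise/ok" ] && return 0
        return 100
      }

      # hypothetical service directories
      mkdir -p ./service/myapp/supervise
      : > ./service/myapp/supervise/ok
      mkdir -p ./service/other

      if svok_stub ./service/myapp; then
        echo "myapp is running"
      fi
      if ! svok_stub ./service/other; then
        echo "other is not running"
      fi
      ```

      Because the answer is just an exit status, anything that can spawn a process can consume it; that exit status is the whole interface.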

      That’s cool and all, but I wouldn’t call that a dependency. For it to work, the depended-on service needs to be running; it doesn’t need to be in a working state, just running. And it needs to be running regardless of whether any services actually depend on it.

      systemd, my beloved, will instead, when starting a service A that depends on B, also start B, optionally wait for B to finish starting and become usable, and optionally keep checking that it isn’t deadlocked or the like (though that last part is not often implemented).

      If nothing depends on B? B never runs.
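      For comparison, that systemd behaviour is spelled out declaratively in unit files. A minimal sketch with hypothetical services a and b:

      ```ini
      # a.service: pull in b.service and wait for it before starting
      [Unit]
      Requires=b.service
      After=b.service

      [Service]
      ExecStart=/usr/local/bin/a

      # b.service: Type=notify means "started" only once the service signals
      # readiness via sd_notify(); WatchdogSec is the optional "check it isn't
      # deadlocked" part, which the service must implement by pinging back
      [Service]
      Type=notify
      WatchdogSec=30s
      ExecStart=/usr/local/bin/b
      ```

      Since nothing here installs b.service into a target via WantedBy=, b only ever runs when something that requires it is started.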

      Allow User-Level Services

      Isn’t that generally a minefield to implement? I’ve read so many articles about doing that the wrong way that I don’t think I could trust there is a right way without a lot of auditing of a mechanism that is insanely complex in its nuances. Having something I can lean on, that I know I can lean on, is appreciated.


      And isn’t that the result? Because you can do anything through executables, everyone has to do everything themselves, and everyone does it a different way, leading to things that sometimes work, sometimes work together, and sometimes don’t. Eventually someone needs to remove the foot-guns at the root of it all.

      1. 2

        Allow User-Level Services Isn’t that generally a minefield to implement? I’ve read so many articles about doing that the wrong way that I don’t think I could trust there is a right way without a lot of auditing of a mechanism that is insanely complex in its nuances. Having something I can lean on, that I know I can lean on, is appreciated.

        What’s a minefield about this? Any user can run a service with nohup or screen. Because of how small each part of daemontools/runit is, the setuidgid executable literally just changes uid and gid before executing the command.

        Systemd will manage services running as a user, and it’s much more mysterious from an admin perspective as to how it does it. I trust it, but what’s great about daemontools is that you always know exactly what’s happening. You don’t say, “Please figure out how to run this process as a different user”; you say, “Execute this process with a different uid and gid.”

        Whether you trust that changing the uid and gid is giving you the isolation you want is up to you, but the daemontools way is to have one tiny program in charge of that so that you can be pretty confident when it is or isn’t happening, and so that the code is very easy to audit.
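        For a sense of how small that one job is, here is a sketch using setpriv from util-linux as a modern stand-in (the real setuidgid is a tiny C program that resolves an account name with getpwnam and calls setgid before setuid itself; --keep-groups is an approximation here, since the real tool also sets the supplementary group list, which requires privilege):

        ```shell
        # approximate setuidgid: set gid and uid, then exec the command
        # usage: setuidgid_sketch <uid> <gid> <command> [args...]
        setuidgid_sketch() {
          uid="$1"; gid="$2"; shift 2
          exec setpriv --reuid "$uid" --regid "$gid" --keep-groups -- "$@"
        }

        # demo any user can run: a no-op "change" to our own uid and gid
        command -v setpriv >/dev/null && ( setuidgid_sketch "$(id -u)" "$(id -g)" id )
        ```

        Everything the tool does fits in one line you can read, which is the audit story being described above.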

        key to daemontools, as well as qmail, is that for many decades (maybe still?) djb had a personally funded security bounty that i do not think anyone ever cashed in on.

        1. 1

          Noting: I didn’t realise at the time that setuidgid was part of daemontools.

          Hard for me to find now, but I’ve seen critiques of using su, sudo, runuser, and some others to drop into a user account to run a service.

          djb had a personally funded security bounty that i do not think anyone ever cashed in on.

          AFAIK there’s a reason it’s never been cashed in.

    2. 4

      I have been doing a ton of work on iPad for a while. I’m fairly comfortable working in shell, so I use a cloud server or one of my homelab machines via Termius over SSH and work with vim, but I’ve successfully used code-server (web version of vscode) in browser.

      Via Tailscale, i’m able to access my home dev machines (a couple of fairly cheap, small, low power hp thin clients with respectable specs that i picked up on ebay for a couple hundred bucks a piece) without any firewall wonkiness, and my connections via Termius->Tailscale stay connected, even when the ipad is “sleeping” or whatever you’d call it when the magic keyboard is closed. Connections also migrate seamlessly when I flip from Wifi to 5G, so I can be working on something at home, walk to the bus to head to my office, and continue where I left off seamlessly.

      There’s a long way to go, but the Blink app apparently has the ability to run the vscode-server UI more natively (the bar that pops up at the bottom of the screen in iOS browsers when you enter a text field makes it weird), and it has a fairly inexpensive feature for running throwaway and/or persistent shells in the cloud.

      It’s something I’ve tried on and off over the years, and it’s definitely a much better story now, but while I use it heavily for personal projects, there’s no way I could use it and not have a real computer.

    3. 5

      I tried iSH, but (aside from being slow) some things just didn’t seem to work; cloning a moderately large git repo just seemed to die. I wish Apple would enable the hypervisor framework on at least the ‘Pro’ iPads. It would be a bit tricky to make it work with iOS’ memory management (which relies heavily on sudden termination), but I can imagine something like requiring every VM to be backed by a file for its memory, persisting the CPU state when the app is moved to the background, and flushing the memory to the SSD if the app needs to be killed (I think XNU already does the right thing if you use mmap with MAP_SHARED for the memory image).

      1. 2

        I have also tried iSH, Prompt, and a few other solutions. The iPad hardware is brilliant, but I find self-imposed software limitations way too constraining after a while. In contrast, Surface devices run Linux really well. Battery life is obviously less impressive as they are still x86, but from a software perspective they feel like real machines. If one needs an iPad with a keyboard, it is hard to justify not buying a MacBook Air.

      2. 1

        There is a version of UTM for iOS. It’s a bit complex to install, requiring an alternative app store, and you need to put the machine into developer mode and unlock JIT from a Mac, but it’s pretty impressive, esp. on the M2 Pro.

        1. 4

          Without a paid developer account, you also need to reinstall it every week. And, even then, it uses emulation mode, not virtualisation (the hypervisor framework isn’t exposed on iOS), which is a shame.

          I’m using it on my M2 Pro MBP and it’s pretty amazing there. Building the whole FreeBSD base system, including all of the LLVM tools, takes 15 minutes in virtualisation mode, but in emulation mode it’s a lot slower (I haven’t tried the build; just booting was noticeably slower).

          1. 3

            Hmm, perhaps the UTM/Qemu emulation for aarch64 is actually usable for practical purposes then. x86-64 FreeBSD in TCG mode on my M1 Mac Mini seems to eventually cause Qemu to crash.

            It’s truly amazing that Apple is so hellbent on control over their platforms they’d prefer their iPad range, including the supposed “pro” models, to be limited in their usefulness to casual use, drawing, and a handful of niches that happen to not overstep App Store rules and not be too multitasking heavy. (Because that’s still not much slicker than phone app multitasking.) The hardware is incredibly capable, and before every WWDC, power users get super excited that maybe, this year, finally, Apple will let us do something useful with the iPad.


            My wife can’t even be bothered to try the new iPad version of Final Cut Pro on our underused M1 iPad Pro because as usual, they’ve hobbled it and limited interoperability with the Mac version.

            Personally, I’ve recently had reason to use a laptop a lot more than in the last few years, except my laptop is still the 2015 MBP, so the iPad Pro would be vastly more capable than its noticeably clunky Intel dual-core CPU. I can just about get by with it for researching and writing proposals/estimates. Even then, switching between windows & tabs, heavy text editing with clipboard use, etc. somehow manage to feel artificially slow, even when using a mouse and keyboard rather than imprecise touch.

            It’s infuriating to know how much the hardware can do, and not be allowed to use it.

          2. 1

            AFAIK slightly older iOS builds had the hypervisor there, but only exposed via the syscall interface (and entitlement-gated). It’s been disabled in release builds now, though, as of the last time a friend checked the Darwin source.

    4. 4

      very disappointed this involved no visual metaphors! ;d

    5. 27

      I’m sorry… This is gonna sound like I am teasing you or that I’m mocking you, but actually I’m really being frank, and I’m gonna give you my advice. It’s just not gonna be great.

      Or you could say “check out the previous commit and force push it”, answering their question. I don’t like this blog post. It seems to be suggesting all our tooling should be geared towards what “humans think about” instead of what engineers need to do. Humans think “I want to build a table”, not “I have to use tools to transform wood into components that assemble into a table”, but they have to do the latter to achieve the end goal. It’s IKEA vs real work.

      1. 10

        The tools we build need to be geared towards what the users think about.

        Engineers should be familiar with the mindsets their tools force upon them. That’s perhaps the true meaning of Engelbart’s violin: we adapt to our tools instead of the other way around.

        1. 3

          and when someone else already pulled that commit that you just removed…

          1. 10

            Why not simply use the command that was designed for this? Do a git revert, and then a git push. Works for the whole team.

            This is a nice example of the issue outlined in the post. Only there really is no way you can dumb git down so far that you can simply forget its distributed nature. The only wisdom needed here is that you always need to add to the chain, never remove from it. This principle, and the consequences of not following it, should really be part of your knowledge if you want to be a software developer.
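            Concretely, the “add to the chain” fix looks like this (throwaway repo, commits made up for illustration):

            ```shell
            set -e
            # throwaway repo with a good commit and a bad one
            git init -q demo && cd demo
            git config user.email dev@example.com
            git config user.name dev
            echo "v1" > app.conf && git add app.conf && git commit -qm "good commit"
            echo "oops" >> app.conf && git commit -qam "bad commit"

            # revert adds a new commit that undoes the bad one; history only grows,
            # so everyone who already pulled the bad commit stays consistent
            git revert --no-edit HEAD
            git log --oneline
            ```

            After the revert there are three commits, and app.conf is back to its good state; nothing anyone already fetched has been invalidated.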

            1. 2

              I think this depends on how you read the scenario in the article. I read it as “I just pushed something that shouldn’t exist in the git history”. I’ve been in situations where someone pushed a hard-to-rotate credential, and you rewrite history to get rid of it, to reduce the damage while you work on rotating it.
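              A sketch of that rewrite when the bad commit is the tip (repo names and the leaked value are made up; a secret buried deeper in history calls for a tool like git filter-repo instead):

              ```shell
              set -e
              # stand-in for the shared remote
              git init -q --bare remote.git
              git clone -q remote.git work
              cd work
              git config user.email dev@example.com
              git config user.name dev
              echo "fine" > file && git add file && git commit -qm "good commit"
              echo "API_KEY=leaked" > secrets && git add secrets && git commit -qm "oops"
              git push -q origin HEAD:main

              # drop the bad commit locally, then force the shared branch to match;
              # --force-with-lease refuses to clobber pushes you haven't seen
              git reset --hard HEAD~1
              git push -q --force-with-lease origin HEAD:main
              ```

              The secret is gone from the shared branch, but anyone who already pulled it still has a copy, which is why rotation is still required.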

              1. 4

                a hard to rotate credential

                Isn’t this the real problem? Rather than blaming git for being a distributed version control system, how about solving the credential rotation issue?

          2. 1

            They can still use it to create a new commit if they find a need for the content. It is really only a problem if they want to communicate about it to someone who has not pulled it. IME that is extremely rare because the way to communicate about a commit is to communicate the actual commit.

      2. 3

        It feels a bit like when the author is mentoring less experienced devs, he assumes they can’t grasp the more complex aspects of the tool because it doesn’t fully click the first time.

        Over the past three decades, as I’ve learned from all sorts of folks and done my best to pass on my knowledge of everything from operating systems to networks to all sorts of software development tools, I’ve often had to ask for help on the same things multiple times, esp. if I don’t use a particular mechanism often.

        For the past few years, I’ve worked as a build engineer and part of my team’s job has been to help come up with a workflow for a large team of engineers that makes sense. Sometimes we intervene when there are issues with complex merges, or a combination of changes that work on their own, but not together.

        Most people also can sort things out on their own given some time. You don’t have to back out a change and do a force push - I would, because it makes the history cleaner, but there’s absolutely nothing wrong with a revert commit, which is extremely straightforward to create.

    6. 4

      so neat! i’m about halfway through but really impressed so far, and actually learning something i haven’t bothered in my ~30y in the industry to dig into!

    7. 20

      The conclusions in Amazon’s article are that service and egress charges were too high, and that account limits for resource usage were too low. Wow. “Amazon can’t make sense of serverless!” DHH cries out, missing the point entirely, over a problem that wouldn’t exist on DHH’s “sovereign cloud” only because there’s no fee to read from disk instead of S3. Run your own microservices and you avoid these scaling limits, but then you deal with your own infrastructure. That’s the argument to be made, not a stupid gotcha-rant.

      1. 8

        the overall story is: they originally designed this for substantially lower capacity than they’ve since decided to expand it to, and serverless microservice architecture allowed them to rapidly build the initial solution. they have made fairly straightforward changes to the code to run siloed / vertically separated copies of a monolith, with each copy running only a configured subset of functionality. so it’s still kinda microservices, but they no longer spin up an entirely new service for each quality check.

        this is a case study in properly leveraging the cloud and not simply saying, “serverless is too expensive at large scale so it is a poor choice.”

        they prototyped something, put it in production without premature optimization, and it worked so well they wanted to expand its usage to the point where it made more sense to spend more energy on their infrastructure. microservice architecture has the advantage that it’s easier to scale individual functions. that flexibility, like all abstractions, does introduce some complexity, latency, etc.

        they’re still using AWS services, and showing how using serverless to build something is not a forever trap. you can just change things. :)

    8. 8

      This person isn’t employed “in tech”; they have been employed in tech roles at companies which are traditionally amongst the worst at adopting and managing tech.

      Maybe this person doesn’t have very strong skills to start with? Bank jobs are not “working in tech”, and laughably so.

      This person has an incredibly small sample size and thinks they are qualified to reveal “the truth” to outsiders, and to that, I say, what a jerk!

      I have over 25 years of experience and I have had to work - a lot - everywhere I have worked. When I’m interviewing, I try to filter out people like this, because if they think that their bank job is representative of what should be expected of them, they’ll probably show up and do the same in a team where I actually need help.