Threads for feoh

    1. 1

      To me the issue here is transparency.

      I’d love to see the algorithm they use open sourced, but then I guess that opens it up to being evaded by the abusers of the world?

      I’d also SUPER love to see a very robust mechanism for handling false positives. I know I’d be much more willing to consent to having this happen on my phone if I felt confident my life wouldn’t be ruined because some algorithm said I was a pederast and no human could be bothered to see that instead I was posting pictures from an art museum.

      As a right-to-privacy fan I’m super dubious about anything like this, but I also feel pretty strongly that we may all need to bend a bit and try, rather than rejecting these ideas utterly, to find ways that could make enforcement a more constructive and less destructive process.

    2. 2

      I know I’m gonna get flambéed for this, but so be it.

      An awesome laptop like that with a 400 nit display? WHY???

      Macs and some PC laptops come with 1000 nit displays.

      And before the “Just get Matte” crowd rears up, why can’t I have my cake AND eat it too? :)

      1. 2

        Not a flame, but a different perspective. I have never once looked at the number of nits as a major factor in the decision-making process when buying a laptop. Indeed, it’s not even ‘not a major factor’, it’s not a factor at all. I don’t look at that stat in the specifications at all, never have.

        1. 1

          Hi Popey! No flame I can see, it’s always a pleasure hearing from you.

          Different perspective indeed. I realize that my needs are very niche. I’m blind in one eye, low vision in the other - 20/80 but with a very restricted field of vision.

          Having a nice, bright display is critically important to me. The 300 nit Thinkpad T15 I owned for a while had a display that totally washed out for me if a photon even came within shouting distance.

          I recognize that my needs are not everyone’s needs, and I suppose this is why I gravitate towards Mac hardware despite the fact that I wish the world ran on Linux :)

    3. 4

      I also bought the Z13, but unlike Martin, I picked up the high-resolution touch screen version. It’s a beauty.

      1. 1

        I’m really curious about how it compares with the Carbon X1 (my current laptop, which is nice but a bit too wide/big for my taste). The keyboard on the Z13 looks a lot cheaper in the pictures.

        1. 2

          Funnily enough, I had the X1C9 from my previous employer as a work laptop just before the new job and getting the Z13. I mostly use the Z13 as a desktop - connected via USB C dock to a couple of displays, and an external keyboard and mouse. So I am not a heavy user of the built-in keyboard.

          That said, it’s certainly ‘different’ from the X1C9 keyboard (which I also predominantly used in a dock).

          I don’t find either machine to be problematic to type on. What is a problem with the Z13 is the trackpad. I am quite the TrackPoint(TM) fan, and as such use my right thumb on the middle mouse button to scroll with the nipple. The Z13 has a uni-button across the top of the touchpad, with three zones, one for each button - left, centre, and right.

          It’s far, far too easy for my thumb to drift and instead of scrolling, I end up selecting chunks of text on a page I’m trying to scroll. It’s somewhat maddening. I mitigate this by forcing myself to use the touchpad for scrolling and crying inside. YMMV

          1. 2

            Thanks! This confirms my feeling: the Z13 is more a regular modern laptop while the X1 keeps the Thinkpad mojo.

      2. 1

        I really enjoyed the touch screen in my brief and ultimately doomed dalliance with a Thinkpad recently.

        I wish Apple would pick up on this and include it on Mac laptops.

        I’m a keyboard all the things kind of person, but if I MUST point and drag/click, being able to point at the ACTUAL thing is much easier than using a mouse with my fine/gross motor impairment.

    4. 18

      Saving you a web search: it’s yet another failed no-code platform.

      Amazon Honeycode is a fully managed service that allows you to quickly build mobile and web apps for teams—without programming. Build Honeycode apps for managing almost anything, like projects, customers, operations, approvals, resources, and even your team.

      1. 7

        Thanks for this. I worked at AWS for 6 years and had never heard of this.

        But that’s the AWS approach. Let a thousand flowers bloom, and crush all the ones that don’t hockey stick within a year or so of GA.

      2. 2

        yet another failed no-code platform

        No snark, but: are any of those not snake oil?

      3. 1

        Appreciate the description! Based on the name I thought it was AWS expanding canary token support…

    5. 21

      I hate to say this as I know it’s been beaten to death here but I find it incredibly difficult to take anyone who still blogs on Medium seriously, and this article does nothing to change that.

      It’s visually appealing in certain respects but as @student said the author does very little to actually make their case and from my reading also contradicts themselves in several spots.

      I think many of us can get behind the UNIX philosophy of small narrowly scoped tools working together. I don’t see this article expanding on that point very much.

    6. 3

      Writing a (to most people, I’m sure) boring Django application for tracking all my health-related medical crap, because doing it all freeform in my notes is both a drag and not easily searchable / reusable.

      I’m really enjoying how easy it is to get going with Django. I’m sure there are problems at scale just like with anything else, but I was up and running with some custom tailored models and a simple CRUD interface in a couple of hours.
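      For the curious, a minimal sketch of the kind of model this involves (field names and the “health” app label are illustrative guesses, not the actual schema; the settings.configure() dance just makes the snippet standalone, since in a real project that configuration lives in settings.py):

      ```python
      # Illustrative sketch only: field names and the "health" app label
      # are guesses, not the commenter's actual schema.
      import django
      from django.conf import settings

      # Minimal standalone configuration; in a real project this is settings.py.
      settings.configure(
          INSTALLED_APPS=[],
          DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3",
                                 "NAME": ":memory:"}},
          USE_TZ=True,
      )
      django.setup()

      from django.db import models

      class Measurement(models.Model):
          """One health data point, e.g. a blood-pressure reading."""
          taken_at = models.DateTimeField(auto_now_add=True)
          kind = models.CharField(max_length=64)     # e.g. "blood pressure"
          value = models.CharField(max_length=128)
          notes = models.TextField(blank=True)

          class Meta:
              app_label = "health"  # assumed app name

          def __str__(self):
              return f"{self.kind} @ {self.taken_at:%Y-%m-%d}"
      ```

      Registering such a model with the Django admin (admin.site.register) is roughly what gets you the “simple CRUD interface” part nearly for free.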

    7. 1

      Trying and failing to NAT my USB to my wifi. Uuuuuugh.

      I don’t even know if I’ve done it correctly, the USB device might be configured incorrectly and be the real cause of the failure to connect.

      1. 1

        Would you be willing to explain the use case for this? What problem are you trying to solve?

        1. 1

          I installed Parabola Linux onto my Remarkable tablet, but it doesn’t have wifi firmware (Parabola being an FSF-approved distro and thus linux-libre), so the only means of getting internet access on the tablet is via USB. If I can get it to work, then NATing my USB to wifi (and setting the appropriate default gateway) should do that, at least long enough to pacman -S a compiler and whatever dependencies I need.

          I’m ignoring my alternative options of 1) flashing a custom OS image with the proprietary wifi firmware included and hoping I don’t brick the device by accident, and 2) spending $80 on the Technoethical libre-firmware wifi dongle (mainly because I suspect it won’t work, just like the USB-to-ethernet adapter didn’t work).
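          (For anyone curious, the host-side recipe for this kind of USB-to-wifi NAT is small; here’s a sketch assuming usb0 and wlan0 as the interface names, which you’d confirm with `ip link`:)

          ```shell
          # Sketch of NATing a tablet's USB network link out over wifi.
          # Interface names (usb0, wlan0) are assumptions; check with `ip link`.

          # On the laptop/host:
          sudo sysctl -w net.ipv4.ip_forward=1
          sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
          sudo iptables -A FORWARD -i usb0 -o wlan0 -j ACCEPT
          sudo iptables -A FORWARD -i wlan0 -o usb0 \
               -m state --state RELATED,ESTABLISHED -j ACCEPT

          # On the tablet: default route via the host's usb0 address, plus DNS.
          # ip route add default via <host-usb0-address>
          # echo 'nameserver 9.9.9.9' > /etc/resolv.conf
          ```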

          1. 3

            Wow! Your willingness to perform acrobatics in the name of principle is truly impressive. GOOD LUCK with all this!

            You should write it up when you’re done.

            1. 3

              It’s less principle, and more that Parabola was the only pre-made OS image for the RM. If there was an Arch or Debian port I’d have gladly used that. Also, I’m relying heavily on RCU to do the hard stuff for me.

    8. 5

      [Pardon the length. I have Opinions on this stuff :) ]

      Great article, and I know that many of us share your frustration.

      The problem is that, like DHH’s description of Ruby on Rails, “Apple is Omakase.”

      When you buy ANY Apple device, you are buying a black box that they control. They control the horizontal and the vertical. Any amount of agency you are given with the device is on THEIR terms.

      This rubs a lot of technical people the wrong way, and I get it.

      I think that in many of our minds, Capitalism and profit motive are in direct contradiction to the ideals that many of us hold VERY dear - that information and its darling child technology want to be free, and that anyone who locks us away from the ability to hack is committing a grievous moral affront.

      The problem is that this is a fundamentally naive view in light of the way we currently structure our society. Innovation is driven in terms of engineering hours which are funded by sales. Companies MUST protect their critical assets, and often that means not sharing everything.

      So my hopefully reasonable take is: Either drink the Kool-aid and enjoy it along with all the restrictions and barriers it imposes, or choose differently and buy open hardware that you can hack ’til the cows come home but may well lack the polish you might otherwise want.

      Until we immanentize the eschaton we really can’t have our cake and eat it too, much though we’d all love that :)

      As for myself, I drank the Kool-aid and paid the $99 for a developer program license. Whether I continue doing that year after year remains to be seen, but I knew what I was getting into when I bought in so from my perspective it’s either like it or lump it. Raging against the dying of the light feels like energy better spent contributing to open source.

    9. 6

      Congrats Ted! Honk is a beautifully opinionated piece of work.

      I don’t run it myself because it’s a bit too spartan for me but I really appreciate what it is and what it does. The world needs more alternative takes like this!

      Mastodon itself is such a B-E-A-S-T to set up and run, this lightweight alternative is great!

    10. 2

      I ran through the website but I couldn’t exactly understand what it’s supposed to do. Is it some kind of a CMS?

      1. 11

        Imagine if Mastodon were not six processes written in three programming languages, but were instead a single binary written in golang using SQLite for storage. That’s honk.

        It works quite well for me; I have a cheapo DigitalOcean server and I never have to do any maintenance, or worry about defederation drama making it hard to follow interesting weirdos.

        1. 6

          GoToSocial matches that description even better since it provides Mastodon-compatible client APIs.

          1. 1

            which part of the description does it match better?

            1. 2

              Single golang binary using SQLite for storage?

              Although, GoToSocial only provides the backend, you have to bring your own UI.

              Edit: it doesn’t match the description better. (Just realized what you said, sorry.)

              I assume donio meant to say GTS may be better since its API is Mastodon-compatible, so you can just use any available UI or any of the mobile clients directly.

              1. 2

                you can just use any available UI

                Well, mostly. Because it’s not 100% masto-compatible (IDs are strings, not ints; rate limit headers are epoch seconds, not ISO8601, etc.[1]) a whole bunch of stuff doesn’t work (Ivory, Mammoth, without tweaks, etc.)

                [1] Which are compliant with the API spec, but it seems a whole bunch of clients have (stupidly) ignored the spec in favour of copying what Mastodon does.
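                A tiny hypothetical illustration of that footnote (the ID value is made up): the spec types IDs as strings, so clients that cast them to int only work by accident on Mastodon’s numeric IDs:

                ```python
                # Hypothetical illustration (made-up ID): the Mastodon API spec says
                # IDs are strings. Mastodon's own IDs happen to be numeric, so clients
                # that cast them to int work there, then break on servers like
                # GoToSocial whose string IDs aren't numeric.
                import json

                status = json.loads('{"id": "01H8XGAB", "content": "hi"}')  # GoToSocial-style

                def portable_key(s):
                    return s["id"]  # treat the ID as an opaque string

                def fragile_key(s):
                    return int(s["id"])  # copies Mastodon's behaviour, not the spec

                assert portable_key(status) == "01H8XGAB"
                try:
                    fragile_key(status)
                except ValueError:
                    pass  # exactly the kind of client breakage described above
                else:
                    raise AssertionError("expected the int() cast to fail")
                ```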

        2. 1

          I think to @donio’s point, you’re missing a bit with this - it’s not just about the actual server infra being lightweight, it’s the interface as well.

          No stars, no bells or whistles, just a spartan interface to read and post to your fedi feeds :)

    11. 6

      Come to Linux, the water’s warm, although a bit less friendly (also less clingy and invasive and restrictive though)

      1. 10

        Respectfully, that’s a rather ableist stance.

        Linux’s accessibility features are not there for many people’s needs. This isn’t out of malice, a11y is a HUGE problem space and people need all kinds of affordances to help them get by.

        But when you have a platform where the predominant desktop broke screen zoom for the better part of two years, I don’t honestly think anyone could in good conscience say that people who need such features would be wrong to use commercial software.

        1. 2

          That’s an excellent point and it was remiss of me not to remember that.

          I’m clearly not alone in feeling that Apple’s approach to development has gotten less and less “open” over time, though… and this is a thing that can be felt, which matters.

      2. 6
        1. 1

          They’re working on that. A fully open-source phone that was actually usable would be nice.

          Remember how Android was forked from Linux? Yeah, funny how that ended up.

          1. 3

            A fully open-source phone that was actually usable, would be nice.

            I agree that’d be nice. I participated in the Librem 5 crowdfunding, but that ended in a useless phone. At least they upstreamed their work, so I’d like to think the money didn’t completely go to waste.

            The Pinephone is just hardware though, right? I mean, flashing a stock Android phone which uses a mainline kernel with another OS would be essentially the same as buying a Pinephone. You still need usable software to go with it.

            Remember how Android was forked from Linux? Yeah, funny how that ended up.

            It makes me sick.

          2. 2

            Will they make one with an OLED display? The power savings & color would be worth the upgrade cost.

    12. 22

      I really don’t get any FOSS enthusiasts who use Apple. It is the antithesis of freedom.

      1. 18

        As opposed to what? There are literally 2.5 choices for actually usable mobile devices.

        • You either get a privacy-respecting Apple with great hardware, which is a walled garden, but actually many things can be circumvented and hopefully the EU will bust it even more open soon;

        • Or go with mainstream Android with blatant privacy violations thanks to the biggest ad-company in the world;

        • Or you restrict your device choice to Pixels and go with GrapheneOS. LineageOS and similar may be another option, but don’t forget that most other Android devices don’t actually support swapping OSs and will wipe their proprietary firmware on doing so. I for example wouldn’t be okay with an expensive device that has its camera crippled due to no proper firmware support (Sonys are known for this).

        And no, PinePhone and similar is so far from daily driver ready that it is not even funny, it is a toy to tinker with. If you do use it daily, then I’m happy for you, but let’s agree that your “daily driver” definition is completely different to that of the general populace.

        1. 7

          I fully agree with your definitions, I bought a PinePhone fully aware of that.

          But the commenter above is talking about FOSS enthusiasts here, in reply to an article about hobby application development, so I think the comment is still relevant. This isn’t about daily-driving, it’s about people having fun with toys.

          1. 1

            Fair enough!

          2. 1

            What if we need things only the proprietary world delivers even when we’re playing with our toys? :)

        2. 2

          No, those three descriptions of what you get are not the whole picture.

          I prefer hardware and a system that let me control them, rather than the other way around.

          Where are all the “[name] OS on iPhone” projects?

          Privacy leaks can be completely remedied on Android devices. In fact, almost anything can. The same CANNOT be said of Apple devices.

          You’re telling me you’d rather have a device you cannot control than one you can.

          1. 2

            There is iSH, which can run (emulated) x86 binaries, including a whole Alpine Linux distro complete with package manager, readily available for free from the App Store.

            Apple is limited, but no longer as much as it used to be, and for many it is a more than okay tradeoff for a mobile device - I can live out my tinkering interests on a Raspberry Pi/PinePhone/desktop.

          2. 2

            You’re telling me you’d rather have a device you cannot control than one you can.

            If that device is awesome, then yes, literally yes.

        3. 1

          Removing Freedom zero (ability to run your choice of programs) is pretty terrible though.

          1. 1

            As mentioned, it can be circumvented (in a limited way): see AltStore.

      2. 4

        Then you’re not being mindful of the fact that people may have VERY different needs from you and that pragmatism sometimes dictates that we drink the Kool-Aid and take the good with the bad.

        I’m partially blind / fine and gross motor impaired and I LOVE open source.

        I use iOS and Mac because the accessibility affordances there are amazing and make it easier for me to do my job with less actual physical pain.

        Would I love it if proprietary hardware and software weren’t a thing? HECK YES! But in the world I inhabit, sometimes I have to do what I need to do to get by.

        (I dream of retiring one day and making bringing the accessibility features of Linux up to snuff my ‘day job’ :)

      3. 4

        Apple platforms provide more freedom than any other platform on the market today. Other than GNU/Linux. And I am not talking about Android. What you guys are referring to as Linux, is in fact, GNU/Linux, or as I’ve recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.

        But I digress. When I boot up my Mac, I am free to send an iMessage to my dad from my computer. I am free to run a C compiler within a Unix environment. I am free to have 10+ hours of battery life. I am free to sit on the toilet with my laptop in one hand, typing on the other, because it weighs 3 pounds. I am free to run GNU/Linux within QEMU. I am given the freedom to reasonably expect, when someone in the United States of America gives me their phone number, that they also have iOS and I will be able to use iMessage and FaceTime. I am free to use Apple Pay which gives me the freedom to go on MTA or BART without dealing with those shitty clipper card things. I am free to use my credit card wherever I want, without worrying about scammers stealing my credit card number. I am free to be part of a platform with the most social cohesion which opens up relationships and opportunities that would otherwise not be afforded to me if I used Android™.

        But most of all, it gives me freedom of thought. I got bicycles in my mind.

        1. 1

          Yo, I loved that.

    13. 28

      Apple has a straightforward reason to do this – they don’t care about the $99 fee, but by keeping out hobby apps, people are more likely to download very expensive commercial apps of which Apple gets a 30% cut. For example, I needed a GPS-based speedometer recently, and (seeing no free option) ended up with one that charges $10 a month! Probably thousands of people have done that. On Android these types of very simple hobbyist-level apps tend to be free.

      1. 20

        On Android these types of very simple hobbyist-level apps tend to be free.

        Though good luck finding one that isn’t riddled with ads and asks for a bunch of inappropriate permissions.

        1. 24

            The F-Droid app store caters specifically to this. (Yes, the Google store is revolting.)

          1. 1

            That’s not on Apple.

            1. 2

              Yes, that’s the point.

        2. 4

          Perhaps I’m lucky, but I’ve actually had pretty good luck finding them. Putting “open source” into the search bar helps, and if that fails there’s often a side-loadable one on GitHub.

      2. 12

        My guess is that the actual rationale is a bit less cynical. By keeping out hobby apps — which aren’t subject to review — Apple is (trying to) optimize for end-user experience. And it’s no secret that Apple optimizes for end-user experience over basically everything else.

        I can’t really blame them for taking this position. Apple devices are better and more “luxurious” than Android devices in the market, and I think this kind of stuff is a major reason why.

        1. 17

          I don’t understand. Who is the end user when a developer is trying to improve their own experience? There’s absolutely no distribution going on in OP.

          1. 11

            That’s true, but the number of iOS users that use privately-developed apps which aren’t available on the app store is statistically zero. Even among those users, the only ones that Apple cares about are those who will eventually publish their app to the app store, and for those users the relevant restrictions are non-issues. I guess?

            1. 2

              Don’t forget about enterprise users, but I think they’re kinda not what you’re actually referring to here :)

              (If you’re a BigCo, Apple will give you an enterprise profile your users can put on their phones to run privately built apps by BigCo. This is how they did things when I was at Amazon.)

              1. 2

                FYI: The definition of ‘BigCo’ is ‘more than 100 employees’ (from their docs). That puts it out of reach of small businesses, but you don’t need to be Amazon-big for it to work.

                Unfortunately, iOS is pretty bad for BYOD enterprise use because there’s no isolation mechanism between the work and personal worlds. On Android, you can set up a work profile that runs in a different namespace, has a different encryption key for the filesystem encryption, and is isolated from any personal data (similarly, personal apps can’t access work data). iOS doesn’t have any mechanism like that, so there’s no way for a user to prevent work apps from accessing personal data, and things like Intune run with sufficient privilege that they can access all of your personal data on iOS.

                  1. 1

                    Thanks. I vaguely remembered reading about that, but Intune didn’t support it and required full access. Has that improved?

                    1. 1

                      I’m investigating this myself (need to set up BYOD at a new job) and haven’t checked on Intune much beyond the acronym level yet (e.g., it knows about Apple Business Manager, which makes enrollment easyish).

                      The iOS and Android approaches are quite different—Android is kind of like having two phones in one box, whereas iOS separates the data but not the apps. Microsoft puts a layer on top that requires app support but gets you finer-grained control over data transfer across the boundary (like, can I copy/paste between personal and work apps).

        2. 3

          Whoa boy, folks with strong feelings are REALLY not gonna love this take :)

          But I agree with you, I do think uniformity of UX is a sizable reason for the $99 fee. It’s not so much “Hate hobbyists” as “Making it easy to sideload means Jane Sixpack will do so, brick her device, and flame Apple”.

          1. 2

            How many people have ever sued Google because a sideloaded Android app bricked their device?

            1. 2

              I’d be curious to see actual data on that.

      3. 7

        Open Google Maps and it will automatically show you your speed.

        1. 4

          The option mentioned in the support FAQ you linked doesn’t appear to exist in Google Maps on iOS.

      4. 5

        ended up with one that charges $10 a month

        You could’ve bought a cheap Android device instead and it would’ve paid for itself in a few months.

      5. 5

        I just searched the App Store for ‘Speedometer’ and about 5 out of the top ~15 results don’t show anything about costing money, though perhaps they show ads.

        This one looks simple and says it has no ads:

        Did I find something different from what you were looking for?

    14. 52

      This infuriates me to no end. I have some iOS devices lying around (which would otherwise be paperweights) and wanted to build little educational games to run on them for my young son, tailored to his interests.

      But no, after one week, I discovered (like the author of this article) that they would no longer run.

      Not only that, but I was also exploring embedding a few languages in iOS projects. But while deploying one of them to my iPad, I found out you are limited in how many apps you can deploy per week to each device.

      I got pissed off enough that I scrapped any kind of hobby development for Apple products. The fact that they pretend to care about environmental waste or education is a joke.

      1. 11

        You could use AltStore to sideload – it automates the app certificate refreshing business, so all you need to do every 7 days is plug the device into a laptop with iTunes installed and press the refresh button.

        1. 1

          One super downer with AltStore that caused me to lose interest: if I read the docs right, the AltStore server must STAY connected to your network so your AltStore sideloaded apps can validate their nubbins and remain runnable.

          I get why they need to do that, but as a reality for me anyway it kinda su-ucks :)

      2. 10

        I think you could still do PWAs (progressive web apps) which can look almost like native apps. They only ever have a subset of features available to them compared to native apps, but for little toys that might not matter. I’ve been wanting to play around with writing some PWAs myself for some time.

        1. 13

          Trying to do this years ago I discovered one interesting thing: the more your web app tries to look like a native app, the more users think it is “slow”. A less native looking web app which works at the exact same speed will be perceived as “fast”. It’s kind of the uncanny valley effect but for software.

          1. 6

            Yep, this definitely happens. Modern operating systems have so many subtle animations and micro-interactions that any attempts to copy them exactly are bound to fail. Web apps – or other types of cross-platform apps – need to have internally consistent design languages that work the same way across platforms. They shouldn’t try to blend in with their host platform, because that way lies madness. Anyone who has used a stock Qt Widgets app on macOS can attest to how frustrating the experience can be.

            1. 2

              You could almost be advocating for the Java Swing “Metal” look and feel here :)

        2. 3

          I’m with @jmtd. And in addition, new tech like WASM and WebGPU should theoretically enable a lot more classically native-only features.

      3. 6

        Yes, agreed. This is the reason I stopped caring about the Apple ecosystem as well. In my view, the important thing about computers is that they expand the human potential for creativity and discovery. It’s particularly upsetting for Apple to not give a shit about any of that, considering that their early success is entirely due to it, as is much of their marketing even today. The company’s stated values are at odds with its observed actions.

      4. 2

        That’s a very reasonable response.

        If Apple’s black box approach pisses you off, vote with your wallet and buy devices from companies that make hackability a priority.

      5. 1

        Can you make it work by side-stepping the official distribution channels via e.g. jail-breaking?

        1. 2

          You can, but have you ever actually jailbroken an iOS device?

          1. The jailbreaks are HIGHLY version dependent. You’re locked into an old OS version forever unless you want to fight the whole fight AGAIN.
          2. It’s a RAGING PITA. Like, in order to do this you need hair-trigger timing to put your phone into recovery mode or whatever AT EXACTLY THE RIGHT MOMENT or it won’t work.
          3. There are toolchain issues. Like, if you want to build programs that actually run on iOS and aren’t e.g. vanilla POSIX tools that you run from a shell, actually deploying to your jailbroken device can be tricky from what I’ve read. (Would love to hear from someone who jailbreaks, codes iOS apps, and loves it :)
          1. 2

            angelXwind has been keeping AppSync Unified updated for the past decade. Also, jailbroken phones de facto use dpkg/apt and have OpenSSH preinstalled. Plus tons of ported POSIX software on various repos. Not as much as MacPorts/Homebrew, but pretty damn close.

            1. 1

              That’s really great to hear! Has actually jailbreaking gotten any easier?

              I did it ONCE, like 10 years ago, and have never been able to again. The split-second timing of putting your phone into recovery mode to trigger the thing is something I just can’t get past.

              1. 2

                Generally speaking, no. In fact, it’s gotten far worse. The jailbreak scene is a trashfire. However, despite the wider scene being a trashfire, it’s a massive scene with its own sub-niches. There is a subset of software/developers/etc. that is actually good. You gotta know who to trust.

                If I were to make specific recommendations, use checkra1n or palera1n (do NOT use palera1n rootless). Those are permanent exploits for iPhone X and older by trustworthy devs (for the most part). And as far as developers to follow, I trust angelXwind and Limneos (all I can think of off the top of my head atm). And whatever you do, don’t listen to, or use any software written by saurik or Coolstar. Also, avoid /r/jailbreak (and anyone involved with that subcommunity).

                1. 2

                  And whatever you do, don’t listen to, or use any software written by saurik or Coolstar. Also, avoid /r/jailbreak (and anyone involved with that subcommunity).

                  Reddit is indeed misery, but what’s wrong with those two people? I don’t follow the jailbreak scene, but I know saurik was involved with it from the beginning.

                  1. 0

                    Nice try, Coolstar.

                    1. 1

                      I’m asking because I legitimately don’t know.

        2. 1

          I wouldn’t be surprised if you could; I haven’t looked into the process. I already had a jailbroken Nintendo Switch lying around, so I’ve been using that instead.

    15. 28

      It’s sad to see HURD still using Mach. Mach was the microkernel that showed everyone how not to build microkernels:

      • Don’t put heavy policy in your IPC mechanism. Provide enough that servers can implement capability mechanisms, but keep everything off the fast path unless every IPC message will need it. Mach does a load of permission checks that make it very slow.
      • Provide only synchronous messages. You can make these fast, and you can easily build asynchronous models on top of a synchronous message-sending primitive (especially if you have shared memory, so the synchronous message can serve purely as a wake event); you can’t do the converse without a lot of overhead.
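      That second point can be sketched in miniature. This toy uses Python threads and a Condition as stand-ins for tasks and a synchronous wake primitive (my simplification, not anything from Mach or L4): messages go into a shared buffer, and the “synchronous message” is only a wake event.

      ```python
      # Toy sketch of "async on top of sync": enqueue into shared memory,
      # then use a cheap synchronous wake event to notify the receiver.
      import threading
      from collections import deque

      queue = deque()                 # "shared memory" message buffer
      wake = threading.Condition()    # stands in for the synchronous wake message

      def send_async(msg):
          queue.append(msg)           # enqueue without blocking on the receiver...
          with wake:
              wake.notify()           # ...then a cheap synchronous wake

      def receive():
          with wake:
              while not queue:
                  wake.wait()
              return queue.popleft()

      results = []

      def consumer():
          for _ in range(3):
              results.append(receive())

      t = threading.Thread(target=consumer)
      t.start()
      for m in ("a", "b", "c"):
          send_async(m)
      t.join()
      assert results == ["a", "b", "c"]
      ```

      Going the other way (synchronous rendezvous built from an async-only primitive) needs an extra round trip per message, which is the overhead the bullet alludes to.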

      HURD was ported to the L4 and Coyotos microkernels, both of which solve these problems well (Coyotos also has a nice capability model), yet the Mach version seems to be the one that survives.

      The article mentions the lack of systemd at the end. I’m a bit surprised that Debian managed to work on HURD at all with this limitation, I was under the impression that Debian had taken a hard dependency on systemd and that was the reason for removing kFreeBSD.

      1. 8

        It’s interesting how perceptions change as we learn from past mistakes.

        I remember having been invited to the FSF’s office sometime in the early 90s when I’d just recently moved to Boston.

        The folks there were SO INCREDIBLY EXCITED about the Mach micro-kernel, and how Hurd was going to do the impossible by being a fingerprint-less client (I still don’t understand what that means :).

        I remember being kind of starstruck and awed at the time.

        Hard to imagine it’s been almost 30 years.

        1. 23

          The microkernel concept is still a good idea that keeps being reinvented in various forms (modern hypervisors are basically microkernels). The nicest thing about Mach was that your communication with the ambient environment was all via ports provided when your task was created. This meant that things like chroot or jail-like isolation are basically free: you don’t give a new process access to the server that manages the global filesystem or network namespace, you give it access to some proxy that manages a new namespace.
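A toy model of that idea, with ordinary Python objects standing in for Mach ports (the class names are invented for illustration, not real Mach API): a task confined to a subtree is simply handed a proxy instead of the global filesystem server.

```python
class FSServer:
    """Stands in for the server managing the global filesystem namespace."""
    def __init__(self, files):
        self.files = files

    def open(self, path):
        return self.files[path]

class JailProxy:
    """A proxy 'port' that confines a task to a subtree of the namespace."""
    def __init__(self, backend, root):
        self.backend = backend
        self.root = root

    def open(self, path):
        # Every request is rewritten relative to the jail root.
        return self.backend.open(self.root + path)

fs = FSServer({"/jail/etc/passwd": "root:x:0:0"})

# Instead of the global server, the new task is handed only the proxy,
# so isolation falls out of which objects it was given at creation time.
task_fs_port = JailProxy(fs, "/jail")
confined_view = task_fs_port.open("/etc/passwd")
```

The isolation requires no special kernel mechanism at all: the jailed task never held a reference to the global server in the first place.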

        2. 6

          It was clear at the time that Mach solved some problems whose pain we felt, because we had running code with those problems.

What was not clear was that Mach would bring some new problems, whose pain the GNU people didn’t feel, because they didn’t have the Hurd running and didn’t spend significant effort on gathering experience with other microkernels. So they didn’t really learn about the possible differences within the range of systems called microkernels, and didn’t learn to distinguish Mach from microkernels in general. They wanted the advantages of microkernels in general but got Mach specifically, and didn’t understand what particular pains Mach brings (and what it avoids that some other microkernels have).

          For example: They could have picked a few commercial microkernels and ported emacs to those in order to learn about the differences within the spectrum of microkernels, they didn’t. Supporting production use of emacs on QNX wasn’t exactly the top item on their priority list.

      2. 4

        If I ever have a couple of spare…years…I’m going to write a message passing microkernel that just uses a single logical address space for all processes and the kernel, with different page tables loaded in depending on which process is running and the state of its message queues.

        Basically a process can gift memory to another process to pass a message, without copying. Think Amiga Exec with read/write permissions (which IIRC they did add a bit of with the Grim Reaper stuff).

I have no idea how well it would work with multiple CPUs, how much more or less efficient it would be, etc. I just like Amiga Exec and like to imagine what it would look like if it existed today.

        (I remember reading a paper on Single Address Space Operating Systems and the idea of nested memory protection domains but I can’t remember it off the top of my head…)

EDIT: Never mind, I’m misremembering what they did in AmigaOS 4. They did briefly start using the MMU to protect static data, but have apparently since disabled it.

        1. 13

          If I ever have a couple of spare…years…I’m going to write a message passing microkernel that just uses a single logical address space for all processes and the kernel, with different page tables loaded in depending on which process is running and the state of its message queues.

          We actually wrote one of those in my current group. The goal was to have a compartmentalised unikernel system that would let us replace bits of Azure with something with a tiny TCB. The side goal was to demonstrate that moving to CHERI would make things much faster and use less memory. We wanted to be able to take existing bits of functionality and have zero- or one-copy communication between them without bringing them into the TCB. The system ran quite nicely in a few MiBs of RAM.

I have no idea how well it would work with multiple CPUs, how much more or less efficient it would be, etc. I just like Amiga Exec and like to imagine what it would look like if it existed today.

          This was quite expensive on x86 until AMD Milan because page table invalidations required a broadcast IPI and doing the invalidate on the other side. On Milan and on any AArch64 chip, it’s quite cheap (there are broadcast invalidates). RISC-V, of course, did the wrong thing, but I think there’s now an extension to do the right thing.

          1. 3

            moving to CHERI would make things much faster

How is this possible? I thought CHERI always made everything slower.

            1. 3

              No TLB flushes on switch, one copy of page tables. CHERI makes anything that involves sharing data between isolated regions faster.

              1. 1

So the overall performance of CHERI is better than an equivalent non-CHERI solution?

                1. 4

                  Depends on what you’re doing. CHERI may very well be faster than a traditional hardware memory protection scheme (I suspect it is workload-dependent). It will not be faster than a software-protected scheme which prevents pointer forgery by construction.

                  1. 1

If we port a full-blown Linux-based desktop OS with a browser and everything to CHERI, will its overall performance improve? Assuming this is a normal Linux port, not a completely different kernel.

                    1. 11

                      As with any other hardware feature, it depends on how you use it. Using an MMU makes everything slower (you need to manage page tables, you need to check TLBs on access, and so on) but the wins from copy-on-write and so on are much bigger than the costs.

CHERI can be used at fine and coarse granularity. At the fine granularity, you can make every pointer a CHERI capability. This increases pointer size, which has some performance and memory overhead. At a coarser granularity, you can put unmodified (LP64) binaries in a CHERI compartment and embed more than one in an address space. These two compose cleanly, so you can put two threads in the same address space and give them completely disjoint sets of reachable memory using CHERI at the object granularity, and you can also share objects simply by passing pointers between them.

                      The Linux port is fairly immature but FreeBSD is there and works with Weston and most of KDE. Chromium is getting there but is not finished yet. As a pure port that isn’t using CHERI features except to provide object-granularity memory safety in C/C++/assembly code, there’s overhead[1].

In addition to the basic port, there’s also some work on colocated processes (‘coprocesses’). These are created with vfork + coexec, much like the traditional UNIX model, but the new process does not get a new address space, it just gets a new set of root capabilities. IPC between coprocesses is much faster than between normal processes. For pipes, you do a system call, transition into the kernel, and then copy data into a buffer and copy it out again. For large copies, the kernel will pin the destination so that it can do a single copy via the direct map. This is worth doing only for large copies because the overhead of acquiring the locks to pin the page is bigger than the overhead of the copy for small copies. In contrast, stream-based IPC between coprocesses involves a lightweight transition to the address-space executive and then a memcpy. Because the source and destination are in the same process, you don’t need any additional locking. For applications that perform IPC, this speedup can easily dwarf the overhead from using CHERI. Beyond stream-based IPC, you have lightweight call gates between coprocesses and can share memory simply by passing pointers. This lets you build structures that are vastly more efficient than you can build with an MMU.
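A rough illustration of the copy-count difference, using a POSIX pipe and POSIX shared memory from Python as stand-ins (real coprocess IPC on CheriBSD works differently; this only shows why the shared-buffer path avoids the copy-in/copy-out pair):

```python
import os
from multiprocessing import shared_memory

payload = b"x" * 4096

# Pipe-style IPC: the payload is copied into a kernel buffer on write
# and copied back out on read, i.e. two copies per message.
r, w = os.pipe()
os.write(w, payload)
received = os.read(r, len(payload))
os.close(r)
os.close(w)

# Shared-buffer style: the producer writes in place and the consumer
# reads the very same pages, so no copy is needed to move the data.
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload
consumer_view = bytes(shm.buf[:len(payload)])  # consumer's view of the pages
shm.close()
shm.unlink()
```

What CHERI adds on top of this picture is that the "pages" can be passed as unforgeable pointers between compartments, without the setup and naming ceremony shared memory needs between full processes.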

                      This is what we’ve done with CHERIoT: build a set of compartmentalisation abstractions and an RTOS for tiny systems. We have a bit over 300 instructions in our core TCB for confidentiality and integrity (the thing that enforces the isolation abstractions for the rest of the system) and can fit a compartmentalised network stack and a JavaScript interpreter, along with some application code, in 256 KiB of RAM (code + data). We can do zero-copy I/O by passing pointers as function arguments between compartments. Oh, and we have complete spatial, and heap and cross-compartment temporal memory safety, for C/C++/assembly code. All of this works with a core that is a few percent larger than the same core with a 16-element MPU instead of CHERI.

                      [1] More on Morello than there should be. Arm has written up the things that were rushed in adapting the Neoverse N1 to CHERI and how they’d be fixed in later microarchitectures, but I don’t think the report is public yet.

            2. 1

              I don’t actually know anything about CHERI’s performance, but it does in hardware the sort of thing a capability-based OS has to do in software.

          2. 2

Years ago, I wrote a blog post about a problem that shows how devastating these IPIs can be to (real-time) performance in a microkernel. The solution was to avoid operations that cause TLB flushes. In other words, there was no solution other than working around the problem. Writing hard real-time code on an x86 platform gives you good job security - at least if you get it to work at all ;-)

        2. 7

          Memory protection as implemented in present-day cpus and kernels is a fundamentally flawed concept; because it makes it so that communication between trust domains is more costly (both performance-wise and expressiveness-wise) than communication within a trust domain, users are encouraged to make monolithic programs with a single point of failure. We can see this in unix, where deliberate use of privsep is rare (chrome, qmail?), laborious, error-prone, harmful to performance, and even for all that pretty coarse-grained. A well-designed computer system should rely on software protection and fine-grained object capabilities.

          1. 2

            Burroughs Large Systems did something like this IIRC.

            And of course there’s Java.

Dis and Inferno too. We should do something like that now with WASM or something.

            1. 7

              Burroughs Large Systems did something like this IIRC.

              This was one of our inspirations for CHERI (a few years ago I found some notes I’d made back in 2006 for a sketch of how to implement a B5500 architecture on a modern microarchitecture, CHERI contains a surprising amount of this).

            2. 4

Doesn’t wasm allow pointer forgery? If so, it’s useless here. (And even if not, it has plenty of other undesirable properties anyway.) You might want to make a compiler from it as an easy-ish compatibility stopgap for useful existing c/c++ code. Java might make more sense, though it has its own issues (e.g. both it and wasm are statically typed, whereas static typing is really something you want to layer on top; and anyway their type systems are pretty inexpressive). The likes of E are far closer to the mark, even when embedded in a unix. (I believe someone did a bare-metal Erlang, ‘hydros’ or so?)

              1. 5

                Doesn’t wasm allow pointer forgery?

                Yes. Of all the bad design decisions in WAsm, this is the one that annoys me the most since WAsm’s MVP came two years after I’d demonstrated that you could compile C for a memory-safe target and have good source compatibility. The difference in environment between WAsm and POSIX is more of a porting effort than proper pointer semantics would have been. MS-WAsm adds memory safety with a very CHERI-like model (they used the CHERI LLVM port).

                The biggest problem with WAsm for your use is that it would require sharing via explicit shared memories. And, because C to WAsm compilers lower pointers to an offset within a memory, you’d need a different (bigger) pointer type to express pointers to shared memory objects. Our work with the hybrid ABI for CHERI showed that this incurs orders of magnitude more porting effort than simply making all pointers bigger.
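As a toy model of why offset-style pointers are forgeable (plain Python standing in for linear memory; nothing here is actual WAsm, and the offsets are invented for illustration):

```python
# Toy model: WAsm-style "pointers" are plain integer offsets into a
# flat linear memory, so nothing stops a module from conjuring one.
linear_memory = bytearray(64)

def store(ptr, data):
    linear_memory[ptr:ptr + len(data)] = data

def load(ptr, n):
    return bytes(linear_memory[ptr:ptr + n])

SECRET_OFFSET = 32          # hypothetical location of another object
store(SECRET_OFFSET, b"secret")

# Code that was never handed a pointer to the secret can still forge
# one with ordinary arithmetic; there is no capability check to fail.
forged_ptr = 8 * 4
stolen = load(forged_ptr, 6)
```

Under a CHERI-style model the load would instead trap, because `forged_ptr` would be an integer rather than a valid tagged capability derived from one that authorised access to that object.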

                1. 1

                  MS-WAsm adds memory safety

                  Is that this?

                2. 1

                  I’d demonstrated that you could compile C for a memory-safe target and have good source compatibility

                  Did you do that on commodity hardware?

                  1. 1

                    No, for CHERI (CHERI MIPS back then). Commodity hardware is not a memory-safe target. A virtual environment on commodity hardware could be a memory-safe target if WASM had chosen to make it one but they didn’t.

        3. 3

          Basically a process can gift memory to another process to pass a message, without copying

You don’t need a single address space for that to work (although it can make it nicer, as the piece of memory you’re gifting can contain pointers that will still be valid in the callee process’s address space); you can have the kernel map the pages into the other process’s address space. IIRC one of the mechanisms L4 uses to pass larger messages is this.
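A rough illustration of the shared-mapping approach: an anonymous shared mmap plus fork stands in for the kernel mapping pages into another process’s address space. This is a POSIX-only sketch, not how L4 actually implements it.

```python
import mmap
import os

# An anonymous *shared* mapping: after fork, parent and child see the
# same physical pages, standing in for the kernel mapping a message
# region into another process's address space.
PAGE = 4096
buf = mmap.mmap(-1, PAGE)   # flags default to MAP_SHARED on Unix

pid = os.fork()
if pid == 0:
    buf[:5] = b"hello"      # "sender" writes the message in place
    os._exit(0)

os.waitpid(pid, 0)
message = bytes(buf[:5])    # "receiver" reads straight from the shared pages
buf.close()
```

The payload itself is never copied between the two processes; only the page-table entries differ, which is the property the grandparent comment wants from memory gifting.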

      3. 4

        $work runs all the servers on Debian. The upgrade cycle from bullseye to bookworm is starting; none of the bullseye machines use systemd as init, and testing has not shown any need to change that for bookworm.

      4. 3

        I’m a bit surprised that Debian managed to work on HURD at all with this limitation, I was under the impression that Debian had taken a hard dependency on systemd and that was the reason for removing kFreeBSD.

        Debian can still officially be run with sysvinit as of the last release, there was a blog post circulating on Lobsters recently that discussed it in the context of the installer. Debian has an architecture criteria policy which is orthogonal to the systemd debate.

    16. 4

      As a visually impaired person who struggles a bunch with reading and interpreting complex data structures with many fields and levels of nesting, this is incredibly cool and a welcome find.

    17. 5

      This had me thinking for a few weeks. While I do think HTTP/1.1 is good enough for most tasks, so that a completely new-from-scratch application layer such as Gemini might feel redundant, there are several aspects from HTTP that I do not particularly like:

I don’t really understand this logic. Gemini was conceived in part as a reaction to the fact that “the web” as it exists today is almost impossible to use in any drastically size-reduced form with actual websites because, as others have said, everyone requires their piece.

      To me, the idea of saying “You can’t have the 900 pound gorilla because this is a fundamentally different kind of animal” makes all the sense in the world.

      But I’m neither a web developer nor a mark-up expert so I’m probably missing something :)

    18. 5

      This one confused me until I saw the announcement was the inclusion of Rust in the Windows kernel.

      While this is a milestone, and good on the Rustaceans for another hill taken, I wish there were more meat on this bone :)

      What are some of the interesting aspects of implementing Rust in the Windows environment? Are Rust’s improved concurrency and memory safety being leveraged to improve Windows overall stability?

      Inquiring minds want to know :)

      1. 2

        David Weston’s BlueHat talk (slides) on Windows 11 security discusses this change. The Rust coverage is here.

        1. 2

          This is a really excellent video. Thank you for the pointer.

          In addition to the Rust stuff, the Win32 app containerization / sandboxing is fascinating as well.

          I just hope they work on making the manifest / sandbox interface configurations easy enough to manage that people get it right.

          I can’t tell you how many times I tried a Snap (Or Flatpak in the early days) only to find that the app was essentially busted because it needed access to the filesystem it couldn’t get.

    19. 2

He’s hitting a content limit of 60 comments. If you like this idea as I do, please consider going and giving it a thumbs up, as per his suggestion.

    20. 8

      My favorite bit is this:

      Finally, to IBM, here’s a big idea for you. You say that you don’t want to pay all those RHEL developers? Here’s how you can save money: just pull from us. Become a downstream distributor of Oracle Linux. We will happily take on the burden.

Unrelatedly - real talk for a minute: The tech industry is experiencing a rough time, especially in MegaCorp land. Companies are closing their doors, which means a LOT fewer customers for all kinds of business, including Red Hat.

This situation IMO cuts to the quick of why profit motive and open source can make for very uncomfortable bedfellows.

Long-standing traditions like RHEL source being freely available are built on (some) genuine goodwill and (a LOT of, IMO) business people betting that this will enhance their rep with the community and build mindshare that will ultimately translate into billable revenue.

But when belts start tightening and people start losing their jobs, that goodwill evaporates and the business people pop claws and fangs and play the Capitalism game with all the ferocity and barbarism inherent in its nature.