1. 3

    The webpage seems to be down, is there a working mirror?

    1. 2

      It’s back up. I need to move it over to my new server; will probably do that this weekend.

    1. 16

      On the one hand, I strongly agree with this. I use GCP and DigitalOcean often to outsource what I do.

      On the other hand, I’m watching an entire community of people put out fires because they built their IT on a managed service which Apple bought and effectively terminated yesterday, causing people to wake up to entire fleets of devices with broken policies.

      Like everything else in tech, there’s no right answer, rather it’s a set of tradeoffs someone has to make.

      1. 2

        MASSIVE DISCLAIMER: I WORK ON GOOGLE CLOUD

        I think there is definitely a difference between using AWS/Azure/GCP/AliCloud and a startup like Fleetsmith. I feel super sad for the people who got impacted, as that sunset is really bad (I know that GCP has a 1-year sunset for GA products). If you’re using, say, GKE for your k8s clusters, you can be confident that’s not going away.

        Yesterday I was trialing EKS (k8s) on AWS. I did not like the experience; I ended up abandoning the AWS-native method for a third-party tool called eksctl, and it still took ~30m to provision a 2-node cluster. I cannot begin to imagine how one would self-host a k8s cluster.
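
        For reference, the command I ended up running was roughly this shape (cluster name and region are placeholders):

        eksctl create cluster --name demo --region us-east-1 --nodes 2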

        So yes, there are trade-offs, but I think there are definitely ways to mitigate them.

        P.S. Given the Fleetsmith turn-off, one great service whose disappearance would keep me up at night is PagerDuty; there really is no product that I know of that is anywhere near as good.

        1. 3

          a difference between using AWS/Azure/GCP/AliCloud and a startup like Fleetsmith

          Is there though? So should you only use a big provider (AWS/GCP/Azure) for your startup project? No DigitalOcean/Vultr? Those are both fairly large shops with a lot of startups on them, but they’re also not too big to fail. DigitalOcean is offering more managed services (databases and k8s), but if they ever declared bankruptcy, your startup would be scrambling for another service (and could find itself with a much higher bill on the big three).

          I’d rather see more open source management solutions for things like full redundancy management for Postgres or MySQL. What I’ve found is that most shops that have this kind of tooling keep it under lock and key, and it’s proprietary/specific to their setup.

          I think managed services are bad due to cost and lock-in, and they also have the side effect of slowing innovation on better tooling that would let people self-host those same solutions.

          1. 2

            Yes, the loss of DigitalOcean in particular would be a huge blow to the ecosystem. Their documentation in particular is fabulous.

            I’m not sure I agree about lock-in, as long as you’re judicious if that’s a concern. E.g. Google Cloud SQL is just Postgres/MySQL with Google juju underneath to make it run on our infra; there’s nothing stopping you dumping your database at any time. The same goes for a service like Cloud Run, where you’re just deploying a Docker container: you can take that anywhere too. But if you go all in on GCP BigQuery, then yeah, you’re going to have a harder time finding somewhere to take a data dump of that to.
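
            To sketch what I mean (hostnames and database names are placeholders), getting your data out of Cloud SQL is just an ordinary Postgres dump and restore:

            pg_dump --host=10.1.2.3 --username=app --format=custom --file=app.dump appdb
            pg_restore --host=db.elsewhere.example --username=app --dbname=appdb app.dump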

            1. 1

              Is there though?

              I would say that the difference isn’t big provider vs startup but infrastructure-as-a-service vs software-as-a-service. Sure the major cloud providers have some software they offer as services but they all also have VMs that you can spin up and install whatever you want on. It’s not like you can install Fleetsmith on your own machines.

            2. 1

              Disclaimer: I work on containers for AWS, but not directly on EKS

              Just a note here that eksctl is the official and recommended tool for interacting with EKS. You can find it in the EKS user guide here.

          1. 4

            Some managed services are compatible with alternatives (kubernetes services are a good example)

            You can’t just move k8s deploys from one provider to another. K8s is incredibly complex, especially when you factor in the different ways kops, EKS, Rancher and others set up masters/nodes in your hosting infrastructure. Running k8s on bare metal also presents problems where you have to set up an ingress/egress system, not to mention that the networking landscape changes how tools like Istio would need to be deployed.

            In the best case, the deployment YAML/JSON needs minimal changes (and really the generation of those should be automated so they can all be updated at once, although Helm is the major tool used for that task and it’s pretty terrible). Realistically, migrating people from one k8s cluster to another is incredibly difficult. k8s was built for providers like GCP (and now AWS and others), and using it pretty much marries you to a hosting provider.
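
            To be concrete about that best case (context and manifest paths here are made up), the portable part is little more than re-applying your manifests against the new cluster’s context; it’s everything around them, like ingress, storage classes and IAM, that doesn’t move:

            kubectl config use-context new-cluster
            kubectl apply -f manifests/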

            1. 2

              Fair points. Perhaps I should have mentioned moving a mysql database from one provider to another, or switching out a standardized queueing system. I don’t have a ton of k8s experience, but from what I’ve read, there’s at least some promise of portability.

            1. 25

              I’m going to give a null option here. There just isn’t anything native that’s both good and portable.

              • I used to find Cocoa somewhat enjoyable, but it’s obviously totally non-portable. I feel like nowadays it’s getting more buggy and the documentation has degraded. There are no more Technical Notes getting published, and new APIs ship with “No Overview Available” instead of docs.
              • Qt is intimately tied to C++, and I just can’t stomach C++.
              • I’m currently using GTK, but on macOS it feels even slower and more alien than Electron.

              I haven’t seen anything close to web dev tools for native GUI development. Xcode can inspect the GUI, but it’s a toy compared to a browser’s inspector. I can edit CSS live without reloading my UI, and control every pixel with relative ease. I can build a fancy animated web UI in less time than it takes me to get a makefile to work across systems.

              1. 5

                You can use a Python binding with Qt. There’s PyQt5 and PySide2 (and QtPy, which is a wrapper around both, depending on what’s installed).

                1. 1

                  Interesting! I can’t recall seeing such apps in the wild. Do you know any popular apps that are built using that combo?

                    1. 2

                      I wrote this one:

                      https://gitlab.com/djsumdog/mpvbuddy

                      and have a few more I haven’t released yet.

                  1. 4

                    Yeah, I’ve yet to find a cross-platform GUI toolkit that makes for nice Mac apps; Cocoa is really the only option if you want to make something that feels polished and high quality. But often the apps using these cross-platform toolkits are apps that wouldn’t get ported to the Mac otherwise, so I’m willing to accept the tradeoff of a slightly clunky GUI in exchange for having access to a useful piece of software.

                    1. 2

                      What version of GTK do you use right now?

                    1. 24

                      I’m gonna go with Qt on this one. I learned it a long time ago (I think it was still at version 2!) and it never really let me down. It’s got very good documentation, and it’s pretty reliable for long-term development. I have projects that have been through three Qt versions (3.x, 4.x, 5.x) and the migration has been pretty painless each time. It’s not on the nimble end of the spectrum and it’s C++, but I found it to be the most productive, even though the widgets library hasn’t been as high on the parent company’s priority list. (They insist that’s not true but actions speak louder than words…). I’ve used it for huge projects (200 kLoC+) and it held up great.

                      I used GTK 2 back in the day, too, and while some bits weren’t exactly enjoyable, it was generally efficient, and it was a pretty safe bet for cross-platform development, and an especially safe bet for Linux and Unix development. I really wanted to like GTK 3. I don’t know if it’s because I’m getting grumpy and impatient, or if there really is something objectively wrong with it, but I didn’t manage to like it, and now I tend to avoid it, both when it comes to writing code that uses it and when it comes to using applications written against it. Also I’m not sure how its cross-platformness is doing these days.

                      I’ve played with Dear ImGui and I can definitely say I enjoy it. I’ve used it for some pretty small and special-purpose tools (and obviously you get about as much native integration with it as you get with Electron :P) but I definitely had fun with it. I’ve also

                      1. 6

                        I’m also a big fan of Qt, and in particular, Qt Quick is the single most productive rapid prototyping platform I’ve ever used (beating out even Visual Basic and Electron). The first app I ever wrote with it started out as an excuse to learn Qt Quick, and I had a working, polished app within two weeks.

                        1. 4

                          I really like Qt as well. I recently started building things with PyQt5 and it’s been pretty nice to work with:

                          https://gitlab.com/djsumdog/mpvbuddy

                          1. 2

                            +1 for Qt. I was surprised to see that Telegram’s desktop client doesn’t use Electron, when every other popular IM client does, and its UI seems much faster and more pleasant to work with. Another advantage is that Qt is available on more platforms than Electron, so if you want to be portable and don’t want to be limited to GNU/Linux, Windows, or macOS, then Qt is a good choice.

                            1. 1

                              I’ve also

                              Did you intend to continue?

                              1. 2

                                Did you intend to continue?

                                It looks like I did but whatever I wanted to say has long been swapped to the write-only section of my memory :)

                              2. 1

                                Happy with Qt too, but only when keeping the project up to date (and then it’s much easier with small projects). The least progress I’ve ever made as part of a software team was when we had a long-running Qt app where some parts were Qt5-ready, but we were mostly building with Qt4 and even then using 3-to-4 adapters in parts. Not that this isn’t true of other frameworks, but that sticks out as a raw nerve in my memory.

                                I’ve also used wxWidgets (but long enough ago that I don’t remember much specific; it seemed to work), GNUstep (OK if you don’t use any super-modern Cocoa APIs, where the approach to claiming 100% coverage has been to stub out all of the implementations), and Eclipse RCP, which is a real curate’s egg.

                              1. 5

                                WireGuard was easy to set up. But I’ve also done a lot of OpenVPN, and without that experience I would still have stumbled over routes and firewalls.

                                WireGuard is not good with DynDNS endpoints. There is an official script for forcing WireGuard to re-check DNS, but I still find that clunky: the OS doesn’t store the destination hostname, it’s discarded once resolved.
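
                                The workaround amounts to periodically re-resolving the name yourself and pushing the endpoint back in; a minimal sketch (interface, key and hostname are placeholders):

                                # run from cron or a timer; wg resolves the hostname again on each call
                                wg set wg0 peer 'PEER_PUBLIC_KEY' endpoint vpn.example.com:51820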

                                WireGuard cannot bind to a specific adapter or IP address. This might not seem like a big deal (there’s an open issue for it), because it doesn’t respond on a port without the correct key, but it can cause outbound packets to leave on a different IP, leading to asymmetric routing.

                                I should really do my own post on this.

                                1. 1

                                  https://battlepenguin.com - Originally in PHP, then Rails, then Wordpress and now Jekyll. I did a post on its history:

                                  https://battlepenguin.com/tech/a-history-of-personal-and-professional-websites/

                                  I cover tech, philosophy and (occasionally) politics.

                                  1. 2

                                    I’ve been subscribed to your blog for a few months – really enjoying your posts!

                                    1. 1

                                      Thank you! I appreciate it. I’ve added your feed to my RSS reader. :)

                                  1. 6

                                    This has a whole boatload of prerequisites, including client-side ones, and conveniently ignores that the vast majority of the issues it identifies with regular SSH keys are solved by just plugging SSH and PAM into LDAP.

                                    But I guess it wouldn’t be a sales blog if they highlighted a better alternative to their own service, would it?

                                    1. 3

                                      This came up last time we posted about SSH certificates. LDAP + GSSAPI or something similar is a viable option, but it has way more prerequisites. Including stuff like DNSSEC.

                                      The only client-side prerequisites here are that you have an OAuth OIDC identity provider (which most already have) and you’ve installed the open source step binary client-side. Then you need to run step-ca. It’s just barely non-trivial.

                                      LDAP doesn’t actually solve all of the problems that certificates solve, like eliminating trust on first use and host key verification failure. There are other mechanisms you can use alongside LDAP to achieve similar results, but they’re at least as complicated. In fact, if you already have LDAP, using it for user lifecycle management with certificates from step-ca would be a great combo. If you don’t already have LDAP I wouldn’t set it up just for this though.

                                      If we included this in our blog post then there’d have been nothing for truculent users on link-aggregation sites to complain about!

                                      1. 2

                                        Genuinely curious, does LDAP + PAM + ssh solve TOFU? That is, can it handle host keys and how?

                                        1. 2

                                          Caveat: am smallstep employee / am biased.

                                          No, LDAP + PAM can’t solve TOFU (or HKVF on host key / hostname reuse). Not directly anyways. You can use LDAP + PAM along with Kerberos/GSSAPI and DNSSEC to solve TOFU though, or so I’ve heard. I’m assuming that’s what OP is referring to.

                                          But you can also use certificates alongside LDAP + PAM.

                                          1. 1

                                            But you can also use certificates alongside LDAP + PAM.

                                            That seems like the simplest solution to start with. You can create Ansible (or Puppet or whatever) roles to auth your servers off your company’s AD or LDAP server, restrict access to certain LDAP groups, and have config management set up SSH to have its host keys signed by your CA.

                                            That takes care of authentication/authorization/identity.
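
                                            The host-key half of that is plain OpenSSH; roughly what config management ends up doing is something like this (CA file, hostnames and key paths are placeholders):

                                            # sign the host key with your CA and have sshd present the certificate
                                            ssh-keygen -s host_ca -I web01 -h -n web01.example.com /etc/ssh/ssh_host_ed25519_key.pub
                                            echo 'HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub' >> /etc/ssh/sshd_config
                                            # clients trust the CA once instead of TOFU-ing each host
                                            echo "@cert-authority *.example.com $(cat host_ca.pub)" >> ~/.ssh/known_hosts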

                                            Kerberos for AD->Linux is painful. Yast on OpenSUSE is really good at connecting to AD domains, but the last time I touched it, it wasn’t easy to automate that process (but that was in 2012 so I imagine it’s gotten better).

                                            1. 2

                                              That assumes one’s company has LDAP. Many smaller companies just rely on Google for SSO, which is perfectly OK. I’m quite wary of statements containing “just” and “LDAP” in the same sentence, as usually it’s not that simple to run and maintain it. In addition, it may work only for Linux, but in heterogeneous environments (Linux/Darwin/FreeBSD), it’s not so easy to have a consistent solution.

                                              1. 1

                                                Yep. You’re basically describing how our hosted product works, which builds on step and step-ca. Except instead of syncing via LDAP we sync users & groups using SCIM [RFC7644] and have some custom NSS & PAM glue to make user management and access control work. It also uses PAM for audit logging and restricting sudo access.

                                                To get certificates issued to hosts step-ca can be configured to accept instance identity documents on major clouds, which makes everything completely turnkey. For other environments we have a bunch of other mechanisms that offer different ease-of-use vs. security vs. generality tradeoffs. We’ve discussed adding kerberos enrollment, so you can get a certificate using a kerberos ticket, but that doesn’t exist yet. I think that’d be cool for folks who already have LDAP in place. It’d be good to hear whether that’s worth doing from people who run that sort of setup though.

                                        1. 2

                                          I had a Noppoo Choc Mini with NKRO, but the implementation was buggy and I’d get double letters in macOS (unusable) and occasional double letters in Linux. I used a blue cube adapter to force it into the boot protocol.

                                          Also, isn’t it a limitation of how you wire your keyboard?

                                          1. 2

                                            I had a Noppoo Choc Mini with NKRO, but the implementation was buggy and I’d get double letters in macOS (unusable) and occasional double letters in Linux. I used a blue cube adapter to force it into the boot protocol.

                                            Unfortunately, buggy firmware in USB devices is ridiculously common.

                                            HID stacks in OSes/windowing systems also don’t necessarily treat edge cases or rarely used report descriptor patterns equally, so you can end up with macOS, Linux/X11, and Windows doing slightly different things.

                                            It’s likely your issue could have been worked around on the software side too; I assume it worked “correctly” in Windows? I’m not aware of a generic HID driver for macOS which lets you arbitrarily rewrite report descriptors and reports into a format that WindowServer/Core Graphics deals with as intended. I’m guessing there might be some kind of built-in system for this in Linux or Xorg though.

                                            Also, isn’t it a limitation of how you wire your keyboard?

                                            Yes, definitely, though that’s not as simple as supporting a hard limit of N simultaneous key presses, but rather that certain combinations of key presses become ambiguous, depending on which keys are wired to the same matrix rows and columns.

                                            1. 2

                                              I hear some old USB NKRO keyboards used ridiculous hacks like enumerating as multiple keyboards behind a hub, with the first keyboard reporting the first six scancodes, the second reporting the next six, etc., or something. Of course, this is a completely ridiculous and unnecessary hack which implies that the people designing the keyboard don’t understand HID (or that the HID stacks of major OSes were too buggy at the time to work properly, perhaps?)

                                              As for keyboard wiring, that’s a separate matter. My post discusses the limitations of the USB protocol. What the keyboard microcontroller does to ascertain which keys are pressed is entirely up to it. In practice, to save cost keyboards use a key matrix, which creates key rollover limitations. More expensive NKRO keyboards tend to still use key matrices, as I understand it, but add some diodes to the matrix which facilitates NKRO if and only if the assumption that only one key will change between key scans is not violated (a fair assumption if the scan rate is high enough, due to the infeasibility of pressing two keys at exactly the same time.)

                                              FWIW, I also seem to recall that it’s common for modern “NKRO” keyboards to actually only be 10-key rollover, on the premise that humans only have 10 fingers (feels like dubious marketing to me.) I’m unsure as to whether this is to do with the key matrix, or whether they just decided to use a 10-element array as their reporting format rather than a bitfield.

                                              However, nothing stops you from making a keyboard which, for example, wires every key individually up to a microcontroller with hundreds of pins (and thus has the truest possible NKRO). It would simply be prohibitively expensive to do so, less because of the MCU, more because of the PCB layers it would require; I worked this out some time ago and suspect it would take about an 8-layer PCB.

                                              The Model F keyboard is known for supporting NKRO as an inherent benefit of its capacitive sensing, unlike its successor the Model M. Someone made an open hardware controller for existing Model F keyboards, enabling them to be retrofitted with USB, with full NKRO support.

                                              1. 1

                                                Can you explain why a hundred traces would require multiple PCB layers? In my mind, the MCU goes in the middle, with traces spidering out to each of the keys, and a ground belt surrounding the board. A second layer would be used to get the data and power into the MCU.

                                                1. 1

                                                  Maaaaaybe this would be feasible with a large QFP/QFN package? The chip I was looking at was only available as BGA with the necessary pin count; the escape routing seemed infeasible with a low number of layers, and the manufacturer recommended 6-8, IIRC.

                                                  1. 1

                                                    Oh yeah, pin arrays are dark magic as far as I’m concerned.

                                            1. 4

                                              This should satisfy all my requirements and is a cleaner setup than I had before.

                                              All of these requirements seem incredibly reasonable, yet I am absolutely horrified that it’s not far more straightforward.

                                              I guess the reason we accept hacks like this is because screensavers and securely locking the screen aren’t very interesting problems, and no one volunteers to do the work… OSS: “hacked together solutions that solve the broad strokes. Can you, yeah you, write some docs on how to fill in the details?”

                                              1. 4

                                                or maybe people see X11’s security as being so broken that Wayland is the answer?

                                                I’ve tried Sway a few times and there’s always something I hit that makes it feel not ready for prime time (at least for me). With Wayland, can there be Wayland-level solutions? In X11, I currently use xidlehook + i3lock to handle my screen locking. I use it with i3, but the combination should work with xmonad, dwm and other window managers.

                                                In Wayland, is swaylock specific to Sway? Does each window manager need its own set of tools, or can some of them be shared? Does it require that they use wlroots?

                                                1. 6

                                                  or maybe people see X11’s security as being so broken that Wayland is the answer?

                                                  Long before Wayland, this was true, and so were all of the screensaver problems. There’s also the problem of screensavers not getting updated in distributions when there are CVEs and such. Here, and then in jwz fashion: Previously, previously, previously, previously.

                                                  In X11, I currently use xidlehook + i3lock to handle my screen locking. I use it with i3, but the combination should work with xmonad, dwn and other window managers.

                                                  This is kind of the point. There are tools, and you’ve pieced them together in a way that makes it work for you. But, I’ll boldly say that I am positive your solution has quirks.

                                              1. 10

                                                Wouldn’t that be NNTP?

                                                USENET predates the Web by a good decade and a half or more, doesn’t it?

                                                1. 4

                                                  I talk about NNTP in the last section. Yes, NNTP is still in use (and even FidoNet), but they’re mostly used for distributing binaries.

                                                  I focus on RSS because it’s still used A LOT, and even though a lot of end users don’t use RSS readers, feeds are everywhere in every piece of software. They’re used by scrapers, robots and lots of other indexing software. Professors use feeds for their journal articles, news desk editors use them to see what all their competitors are reporting on, etc. So they’re an important part of the web ecosystem, and end users should take more advantage of them.

                                                1. 6

                                                  I have a Commodore 64 that’s in perfectly working order that I’ve been planning on gutting and stuffing a Raspberry Pi into… I had no idea they were worth anything now.

                                                  1. 6

                                                      Sell the board, or part out the chips at least! SIDs & VICs are getting scarce.

                                                    1. 2

                                                        I’m surprised people haven’t designed open source compatible replacements. There are a ton of custom parts in the enthusiast space, or at least that’s what I can tell from 8-Bit Guy, LGR and visiting local retro shows.

                                                      1. 3

                                                          There are a lot of different aftermarket PLAs (as there are a lot of different types of PLA in use, some not compatible with others). The SID seems very hard to replicate, as part of its unique sound is related (I’ve heard) to its now-obsolete fabrication method.

                                                    2. 4

                                                      My God, your comment reminded me of this relic from 2004.

                                                      1. 2

                                                        I thought about selling it, but my parents would be upset with me. After all, this was a very expensive gift and it meant a lot to them to give to me.

                                                        hehe

                                                        1. 1

                                                          To be fair, I have none of the accessories (including cartridges), and my intent was to run a C64 emulator on boot to get most of the same experience but with modern ports. The keyboard is garbage, and the C64 was discontinued 3 years before I was even born, so I don’t have any sense of nostalgia for it. I may be more inclined to sell it to someone who cares more about it though.

                                                        2. 2

                                                          Personally, as someone who lived through the 80s with no computer until I was old enough to have a job and make enough money to buy one myself in the 90s, I’d like a C64 to experience and learn about some of the software and games of the era, and I think many are in a similar boat.

                                                          However, with time, those devices are actually fully understandable. So, I think there is some demand just to learn about computer architecture basics, even if there has been 40 years of innovation beyond them.

                                                        1. 3

                                                          Wasn’t CoreOS the same thing?

                                                          1. 1

                                                            On a high level, yes. The README has some links to details.

                                                            1. 1

                                                              CoreOS was originally based on Gentoo, although I think they just used Gentoo/Portage to bootstrap their binary build system because there was no emerge. They had a package manager but I don’t recall what it was, except that it was minimal. The last shop I was at that used CoreOS was using fleetctl and flannel. We migrated off of it onto DC/OS before Red Hat got bought out by IBM.

                                                              By contrast, this looks more like a BSD where the entire base is one big install and gets patches as such.

                                                            1. 2

                                                              I can see having hotfix and stable branches with semantic versioning, but only if you have a released/shippable product that people can self-host. It’s important in those situations to be able to backport hotfixes/security fixes onto older releases.

                                                              That being said, semantic tagging for internal apps, internal customer-facing web apps or anything else is stupid. You only need two version numbers: major.minor. There is no purpose in a patch release (unless maybe you have different customers all running different versions of your app?). I’ve heard one developer even say you should just have one incrementing version number for internal apps. I can still see the argument for major/minor, but three numbers and the full git-flow branch pattern mentioned are only necessary for things you ship past your company boundary.
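
                                                              For the shippable case, the backport workflow is the part that actually needs the third number. A rough sketch, with made-up versions and commit id:

                                                              git checkout -b hotfix/1.4.1 v1.4.0   # branch from the shipped release tag
                                                              git cherry-pick abc1234               # the security fix from master (made-up sha)
                                                              git tag -a v1.4.1 -m 'backport security fix'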

                                                              1. 17

                                                                I don’t agree with the “If you really must use Google reCAPTCHA… There’s the Invisible reCAPTCHA.” part. The delays and privacy invasions are just the same there, you just don’t attribute the slowness to Google. It’s even worse, because when you block reCAPTCHA, as I do, it’s harder to notice why a site suddenly doesn’t work. So far I’ve complained to Bandcamp and GOG about their use of reCAPTCHA, but it hasn’t helped.

                                                                Related: https://lobste.rs/s/mqbre5/you_probably_don_t_need_recaptcha

                                                                1. 2

                                                                  What blocking rules do you use for reCAPTCHA? I’ve thought about doing this too, but had trouble finding lists of the domains I need to add to uBlock Origin.

                                                                  1. 5

                                                                    I block *.google.com, which works because reCAPTCHA is served directly from the Google domain. This is described in detail in the article I linked.
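
                                                                    In uBlock Origin’s “My filters” that is a single static rule in standard adblock syntax, which covers the subdomains too:

                                                                    ||google.com^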

                                                                    1. 1

                                                                      You should also block recaptcha.net. What happens when you block it — how do sites react?

                                                                  2. 2

                                                                    Do you have a better alternative?

                                                                    One of the web sites I manage (“manage” meaning I try to spend no time on those things) has some forms that attract bots, and if the bot volume increases much more I shall have to do something. The question is what I might do. Blocking Tor completely would help, but I’m not exactly enthusiastic about that. The self-hosted captchas, hidden input fields, etc. seem to be in an arms race with the bots, and to be losing. Right?

                                                                    1. 4

                                                                      Do you have a better alternative?

                                                                      There are alternatives to reCAPTCHA in the article, and in the article that I linked. My comment was about “If you really must use Google reCAPTCHA”; I’d say that if you really must use reCAPTCHA because your boss will fire you if you don’t, use reCAPTCHA, but don’t try to hide that you use it.

                                                                      1. 3

                                                                        I read the article; what it mentioned seems to be worse in my case and AFAICT also in the general case. Installing something that’s losing an arms race is a waste of time.

                                                                        I see your point about not trying to hide it.

                                                                    2. 2

                                                                      Thanks for sharing! Yeah, I agree about not using it at all, and I don’t use it myself. My intention with that section was to add something for those who insist on using a Google product, like I did for the Analytics and YouTube sections too. For me as a user, I’ve had fewer issues with the invisible captchas than the regular ones, but I’ll certainly look into it more. Thanks again!

                                                                    1. 12

                                                                      Screenshots would be helpful :P

                                                                      1. 17

                                                                        This project is being completed under a consulting contract with Migadu for their next-generation webmail. They’re also working on a theme:

                                                                        https://sr.ht/mcvO.png https://sr.ht/ml1l.png https://sr.ht/Yk6A.png

                                                                        If it’s not obvious yet from this, Amolith, and geocar’s comments: Koushin is themeable :)

                                                                        1. 3

                                                                          Thanks, that looks a lot more promising than the other 2 posts.

                                                                          I don’t have high standards and it doesn’t have to be beautiful, but unthemed, without even half an hour of CSS work, is a bit too little effort.

                                                                          1. 2

                                                                            It looks really good! I like it. It reminds me of gmail, back in 2005, before it turned into a bloated piece of shit.

                                                                            1. 1

                                                                              I recently built a video player front end in PyQt5. I would highly recommend it:

                                                                              https://gitlab.com/djsumdog/mpvbuddy

                                                                              1. 50

                                                                                Honestly I think that suckless page is a terrible criticism of systemd. It’s the kind of rantings that are easy to dismiss.

                                                                                A much better – and shorter – criticism of systemd is that for most people, it does a lot of stuff they just don’t need, adding to a lot of complexity. As a user, I use systemd, runit, and OpenRC, and I barely notice the difference: they all work well. Except when something goes wrong, in which case it’s so much harder to figure out systemd than runit or OpenRC.

                                                                                Things like “systemd does UNIX nice” are rather unimportant details.

                                                                                I’m a big suckless fan, but this is not suckless at their best.

                                                                                1. 10

                                                                                  A much better – and shorter – criticism of systemd is that for most people, it does a lot of stuff they just don’t need, adding to a lot of complexity.

                                                                                  How many things does the Linux kernel support that you don’t use or need, and how many lines of code in the kernel exist to support those things?

                                                                                  1. 3

                                                                                    a lot, and it’s also a criticism of linux. but sometimes people must use linux, and now sometimes people must use systemd.

                                                                                    linux’s extra features are also much better modularized and can be left out, unlike systemd’s.

                                                                                    1. 2

                                                                                      linux’s extra features are also much better modularized and can be left out, unlike systemd’s.

                                                                                      But they can be. The linked article describes how many of the features that people wrongly claim PID 1 now does are just modules. For example, you don’t have to use systemd-timesyncd, but you can, and it works way better on the desktop than the regular server-grade NTP implementations.
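
                                                                                      For what it’s worth, opting in or out is a one-liner on a stock setup; timesyncd is toggled through timedatectl:

                                                                                      timedatectl set-ntp true   # enable systemd-timesyncd
                                                                                      timedatectl status         # shows whether the clock is synchronized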

                                                                                      1. 1

                                                                                        I’m sorry but how does syncing time every once in a while get much improved by systemd-timesyncd? NTP is like the least of my worries.

                                                                                        1. 2

                                                                                          Somehow my computer was insisting on being 2 minutes off, and even if I synced manually and wrote to my BIOS RTC clock, ntpd and chrony kept messing it up (and then possibly giving up, since the jump was 2 minutes). Both these daemons feel like they aren’t good matches for a system that’s not on 24/7.

                                                                                          1. 2

                                                                                            sounds like a configuration issue and nothing to do with the program itself. what distro did you use ntpd and chrony with? what distro are you using systemd-timesyncd with?

                                                                                            by default, void linux starts ntpd with the -g option which allows the first time adjustment to be big.

                                                                                    2. 2

                                                                                      If we’re going there, we might as well mention that Linux supports a whole freaking lot of hardware I don’t need. That is most probably the biggest source of complexity in the kernel. Solving that alone would unlock many other things, but unfortunately, with the exception of CPUs, the interface to hardware isn’t an ISA, it’s an API.

                                                                                      1. 5

                                                                                        If we’re going there, we might as well mention that Linux supports a whole freaking lot of hardware I don’t need.

                                                                                        While, simultaneously, not supporting all the hardware that you want.

                                                                                        I think it’s a good example that the Linux model is culturally inclined to build monolithic software blocks.

                                                                                        1. 7

                                                                                          While, simultaneously, not supporting all the hardware that you want.

                                                                                          Ah, that old bias:

                                                                                          • Hardware does not work on Windows? It’s the hardware vendor’s fault.
                                                                                          • Hardware does not work on Linux? It’s Linux’s fault.

                                                                                          We could say the problem is Linux having a small market share. I think the deeper problem is the unbelievable, and now more and more unjustified, diversity in hardware interfaces. We should by now be able to specify sane, unified, yet efficient hardware interfaces for pretty much anything. We’ve done it for mice and keyboards; we can generalise. Even graphics cards, which are likely the hardest to deal with because of their unique performance constraints, are becoming uniform enough that standardising a hardware interface makes sense now.

                                                                                          Imagine the result: one CPU ISA (x86-64, though far from ideal, is currently it), one graphics card ISA, one sound card ISA, one hard drive ISA, one webcam ISA… You get the idea. Do that, and suddenly writing an OS from scratch is easy instead of utterly intractable. Games could take over the hardware. Hypervisors would no longer be limited to bog-standard server racks. Performance wouldn’t be wasted on a humongous pile of subsystems most single applications don’t need. Programs could sit on reliable bedrock again (expensive recalls made hardware vendors better at testing their stuff).

                                                                                          But first, we need hardware vendors to actually come up with a reasonable and open hardware interface. Just give buffers to write to, and a specification of the data format for those buffers. Should be no harder than to write an OpenGL driver with God knows how many game specific fixes.

                                                                                          1. 8

                                                                                            Nah, that’s not what I’m implying. It’s not Linux’s fault, but it’s still a major practical sore point from a user’s perspective. I’m well aware that this is mainly the hardware vendors’ fault in all cases.

                                                                                            Also, it should be noted that Linux kernel development is in huge part driven by exactly those vendors, so even if it were Linux’s fault, there’s a substantial overlap.

                                                                                            It’s still amazing how much hardware is supported in the kernel, at very varying quality, with a commitment to keep it maintained.

                                                                                            1. 3

                                                                                              It’s still amazing how much hardware is supported in the kernel, at very varying quality, with a commitment to keep it maintained.

                                                                                              One thing that amused me recently was the addition of SGI Octane support in Linux 5.5, hardware that has been basically extinct for two decades and was never particularly popular to begin with. But the quixotism of this is oddly endearing.

                                                                                              1. 2

                                                                                                was never particularly popular to begin with

                                                                                                Hey, popular isn’t always the best metric. SGI’s systems were used by smart folks to produce a lot of interesting stuff. Their graphics and NUMA architecture were forward-thinking, and I still want their NUMAlink on the cheap. The Octane’s been behind a lot of movies; I think the plane scene in Fight Club was SGI, too. My favorite was SGI’s Onyx2 being used for Final Fantasy, given how visually groundbreaking it was at the time. First time I saw someone mistake a CG guy for a real person.

                                                                                        2. 2

                                                                                          Device drivers are the most modular part of the kernel. Don’t compile them if you don’t want them

                                                                                          1. 1

                                                                                            True, but (i) pick & choose isn’t really the default, and (ii) implementing all those drivers is a mandatory, unavoidable part of writing an OS.

                                                                                            I don’t really care that the drivers are there, actually. My real problem is the fact they need to be there. There’s no way to trim that fat without collaboration from hardware vendors.

                                                                                            1. 1

                                                                                              Well mainstream distribution kernels are still built modularly and the device driver modules are only loaded if you actually have hardware that needs them, at least as far as I understand it.

                                                                                              I don’t really care that the drivers are there, actually. My real problem is the fact they need to be there. There’s no way to trim that fat without collaboration from hardware vendors.

                                                                                              Yeah that is a big PITA. It’s getting worse, too. It used to be that every mouse would work with basically one mouse driver. Now you need special drivers for every mouse because they all have a pile of proprietary interfaces for specifying LED colours and colour patterns, different special keys, etc.

                                                                                      2. 6

                                                                                        And there are no real alternatives to a full system layer. I like runit and openrc and I use them both (on my Void laptop and Gentoo desktop). When I use Debian or Ubuntu at work, for the most part I don’t have to worry about systemd, until I try to remember how to pull up a startup log.

                                                                                        systemctl/journalctl are poorly designed and I often feel like I’m fighting them to get the information I really need. I really just prefer a regular syslog + logrotate.

                                                                                        It’d be different if dbus had different role endpoints and you could assign a daemon to fulfill all network role messages, and people could use NetworkManager or systemd-networkd or …; same with systemd being another xinetd-type provider and everything getting funneled through a communication layer.

                                                                                        Systemd is everything, and when you start going down that route, it’s like AWS really. You get locked in and you can’t easily get out.

                                                                                        1. 5

                                                                                          As a note to those reading: there are murmurs of creating a slimmed-down systemd standard. I think it’d satisfy everyone. Look around and you’ll find the discussions.

                                                                                          1. 4

                                                                                            I can’t really find anything about that at a moment’s notice other than this Rust rewrite; is that what you mean?

                                                                                            Personally, I think a lot of fundamental design decisions of systemd make it complex (e.g. unit files are fundamentally a lot more complex than the shell script approach in runit), and I’m not sure how much a “systemd-light” would be an improvement.

                                                                                            1. 16

                                                                                              As someone who just writes very basic unit files (for running IRC bots, etc.), I find them a lot simpler than shell scripts. Everything is handled for me, including automatically restarting the thing after a timeout, logging, etc., without having to write shell scripts with all the associated bugs and shortcomings.
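
                                                                                              For illustration, the whole thing for a bot is roughly the following (paths, names and timings are made up):

                                                                                              cat > /etc/systemd/system/ircbot.service <<'EOF'
                                                                                              [Unit]
                                                                                              Description=IRC bot
                                                                                              After=network-online.target
                                                                                              [Service]
                                                                                              User=ircbot
                                                                                              ExecStart=/usr/local/bin/ircbot
                                                                                              Restart=on-failure
                                                                                              RestartSec=10
                                                                                              [Install]
                                                                                              WantedBy=multi-user.target
                                                                                              EOF
                                                                                              systemctl enable --now ircbot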

                                                                                              1. 12

                                                                                                Have you used runit? That does all of that as well. Don’t mistake “shell script approach” with “the SysV init system approach”. They both use shell scripts, but are fundamentally different in almost every other respect (in quite a few ways, runit is more similar to systemd than it is to SysV init).

                                                                                                As a simple example, here is the entire sshd script:

                                                                                                #!/bin/sh
                                                                                                ssh-keygen -A >/dev/null 2>&1 # Will generate host keys if they don't already exist
                                                                                                [ -r conf ] && . ./conf
                                                                                                exec /usr/bin/sshd -D $OPTS
                                                                                                

                                                                                                For your IRC bot, it would just be something like exec chpst -u user:group ircbot. Personally, I think it’s a lot easier than parsing and interpreting unit files (and more importantly, a lot easier to debug once things go wrong).

                                                                                                My aim here isn’t necessarily to convince anyone to use runit btw, just want to explain there are alternative approaches that bring many of the advantages that systemd gives, without all the complexity.

                                                                                                1. 2

                                                                                                  I have never tried it. But then, if it’s a toplevel command, not even in functions, how can you specify dependencies, restart after timeout, etc.? It seems suspiciously too simple :-)

                                                                                                  1. 3

                                                                                                    Most of the time I don’t bother with specifying dependencies, because if it fails then it will just try again and modern systems are so fast that it rarely fails in the first place.

                                                                                                    But you can just wait for a service:

                                                                                                    # if a dependency is not up yet, back off briefly and exit;
                                                                                                    # runit re-runs this script until the checks pass
                                                                                                    sv check dhcpcd || { sleep 5; exit 1; }
                                                                                                    sv check wpa_supplicant || { sleep 5; exit 1; }
                                                                                                    exec what_i_want_to_run
                                                                                                    

                                                                                                    It also exposes some interfaces via a supervise directory, where you can read the status, write to change the status, roughly similar to /proc. This provides a convenient platform-agnostic API in case you need to do advanced stuff or want to write your own tooling.
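
                                                                                                    For example (service name is a placeholder; on Void the service directories live under /var/service), the same files sv itself uses are right there:

                                                                                                    sv status ircbot                                # human-readable state
                                                                                                    cat /var/service/ircbot/supervise/pid           # pid of the supervised process
                                                                                                    echo d > /var/service/ircbot/supervise/control  # same channel 'sv down ircbot' uses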

                                                                                                    1. 23

                                                                                                      No offense, but this snippet alone convinces me that I’m better off using systemd’s declarative unit files (as I am doing currently, for similar uses to @c-cube’s). I’ve never been comfortable with shell semantics generally speaking, and overall this feels rather fiddly and hackish. I’d rather just not have to think about it, and have systemd (or anything similar) do it for me.

                                                                                                      1. 5

                                                                                                        Well, the problem with unit files is that you have to rely on a huge parser and interpreter from systemd to do what you want, which is hugely opaque, has a unique syntax, etc. The documentation for just systemd.unit(5) is almost 7,000 words. I don’t see how you can “not have to think” about it?

                                                                                                        Whereas composition from small tools in a shell script is very transparent, easy to debug, and in essence much easier to use. I don’t know what’s “fiddly and hackish” about it? What does “hackish” even mean in this context? What exactly is “fiddly”?

                                                                                                        Like I said before, systemd works great when it works; it’s when it doesn’t that the trouble starts. I’ve never been able to debug systemd issues without the help of The Internet, because it requires quite specific and deep knowledge, and you can never really be certain whether the behaviour is a bug or an error on your part.

                                                                                                        1. 12

                                                                                                          Well, the problem with unit files is that you have to rely on a huge parser and interpreter from systemd to do what you want, which is hugely opaque, has a unique syntax, etc. The documentation for just systemd.unit(5) is almost 7,000 words. I don’t see how you can “not have to think” about it?

                                                                                                          To me it seems a bit weird to complain about the unit file parser but then give the oddly unique and terrible Unix shell syntax a free pass. If I were to pick which is easier to parse, my money would be on unit files.

                                                                                                          Plus, each shell command has its own flags and some have rather intricate internal DSLs (find, dd or jq come to mind).
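
                                                                                                          For instance, even a mundane find invocation is a small language of its own (a made-up example):

                                                                                                          find /var/log -name '*.log' -mtime +30 -exec gzip {} +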

                                                                                                          1. 2

                                                                                                            The thing with shell scripts is that they’re a “universal” tool. I’d much rather learn one universal tool well instead of many superficially.

                                                                                                            I agree shell scripts aren’t perfect; I’m not sure what (mature) alternatives there are? People have talked about Oil here a few times, so perhaps that’s an option.

                                                                                                            1. 9

                                                                                                              The thing with shell scripts is that they’re a “universal” tool.

                                                                                                              But why do you even want that in an init system? The task is to launch processes, which is fairly mundane except for having lots of rough edges. With shell scripts you end up reinventing half of it badly and hand-waving away the issues that remain, because a nicer solution in shell would be thousands of lines and not readable at all.

                                                                                                              I would actually like my tools to be less Turing-complete and to give me more things I can reason about. With unit files it is easier to reason about the configuration and see that it is correct (since the majority of the functionality is implemented in the process launcher, and fixing a bug there fixes it for all unit files).
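
                                                                                                              To make this concrete, a unit for the kind of job in the snippet above can be as small as this (just a sketch; the names and paths are made up):

                                                                                                              [Unit]
                                                                                                              Description=Example daemon
                                                                                                              After=network-online.target
                                                                                                              Wants=network-online.target

                                                                                                              [Service]
                                                                                                              ExecStart=/usr/local/bin/what_i_want_to_run
                                                                                                              Restart=on-failure

                                                                                                              [Install]
                                                                                                              WantedBy=multi-user.target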

                                                                                                              I actually don’t get the sudden hate for configuration files, since sendmail, postfix, Apache, etc. all have their own configuration formats instead of launching scripts to handle HTTP, SMTP and whatnot. The only software I can think of in recent memory that you configure with code is xmonad.

                                                                                                              1. 1

                                                                                                                I wrote a somewhat lengthy reply to this this morning, but then my laptop ran out of battery (I’m stupid and forgot to plug it in) so I lost it :-(

                                                                                                                Briefly: to be honest, I think you’re thinking too much of SysV-init-style shell scripts. In systems like runit/daemontools you rarely implement logic in shell scripts; in practice the shell scripts tend to be just a one-liner which runs a program. Almost all of the details are handled by runit, not the shell script, just like with systemd.
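
                                                                                                                For example, a complete run script for a typical daemon often looks like this (the service, user and binary here are only hypothetical examples):

                                                                                                                #!/bin/sh
                                                                                                                # runit does the supervision, restarts and logging; the script only
                                                                                                                # execs the daemon in the foreground as an unprivileged user.
                                                                                                                exec chpst -u gitea /usr/bin/gitea web 2>&1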

                                                                                                                In runit, launching an external process – which doesn’t even need to be a shell script per se, but can be anything – is just a way to have some barrier/separation of concerns. It’s interesting you mention postfix, because that’s actually quite similar in how it calls a lot of external programs which you can replace with $anything (and in some complex setups, I have actually replaced this with some simple shell scripts!)

                                                                                                                I agree the SysV init system sucked for pretty much the same reasons as you said, and would generally prefer systemd over that, but runit is fundamentally different in almost every conceivable way.

                                                                                                              2. 4

                                                                                                                This is hardly unique to systemd unit files, though.

                                                                                                                /etc/fstab is a good example of something old. There’s nothing stopping it from being a shell script with a bunch of mount commands. Instead, it has its own file format that’s been ad-hoc extended multiple times, its own weird way of escaping spaces and tabs in filenames (I had to open the manpage to find this; it’s \040 for space and \011 for tab), and a bunch of things don’t wind up using it for various good reasons (you can’t use /etc/fstab to mount /etc, obviously).
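
                                                                                                                For example, a single made-up entry carries all of that structure, escaping included:

                                                                                                                # <device>                                  <mountpoint>      <type>  <options>  <dump>  <pass>
                                                                                                                UUID=0a3407de-014b-458b-b5c1-848e92a327a3   /mnt/my\040data   ext4    defaults   0       2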

                                                                                                                But the advantage? Since it doesn’t have things like variables and control flow, it’s easy to modify automatically, and basic parsing gives you plenty of useful information. You want to mount a bunch of different filesystems concurrently? Go ahead; there’s nothing stopping you (which is, of course, why systemd replaced all those shell scripts while leaving fstab as-is).

                                                                                                                In other words: banal argument in favour of declarative file formats instead of scripts.

                                                                                                            2. 3

                                                                                                              I don’t know what’s “fiddly and hackish” about it?

                                                                                                              It’s fiddly, because you can’t use any automatic tool to parse the list of dependencies, and it’s hackish, because the init system doesn’t know what it’s doing; it just retries starting random services until it arrives at the proper order. It’s nondeterministic, so it’s impossible to debug in case of any problems.

                                                                                                              1. 3

                                                                                                                You can pipe it through grep, c’mon dude. And framing it as “starting random services” is just wrong, that’s the opposite of what’s happening.
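
                                                                                                                e.g. something like this (assuming a Void-style /etc/sv layout) already gets you a rough dependency list:

                                                                                                                grep -H '^sv check' /etc/sv/*/run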

                                                                                                                1. 4

                                                                                                                  And framing it as “starting random services” is just wrong, that’s the opposite of what’s happening.

                                                                                                                  This doesn’t look very convincing ;)

                                                                                                                  Well, you can cat the startup script and see the list of dependencies if you’re only worried about one machine. But from the point of view of a developer, supporting automatic parsing of such startup scripts is impossible, because they’re written in a Turing-complete language.

                                                                                                                  Again, it’s still fine if you’re the administrator of just one machine (i.e. you’re the only user). But it doesn’t scale when you have farms of servers (physical or VMs), and that’s the majority of cases where UNIX systems are used.

                                                                                                                  It’s also easier to hide a rootkit inside shell scripts, because it’s impossible to reliably scan a shell script for malicious command injections.

                                                                                                              2. 2

                                                                                                                While I agree that systemd unit file syntax is sometimes weird, and I would much prefer it to use, for example, TOML instead, I do not think that shell syntax is any better. TBH it is sometimes even more confusing (as @Leonidas said).

                                                                                                              3. 6

                                                                                                                I’d rather just not have to think about it, and have systemd (or anything similar) do it for me.

                                                                                                                Don’t be surprised when you pay the price this thread speaks of for the privilege of thinking slightly less ;)

                                                                                                                1. 8

                                                                                                                  Sometimes abstractions are, in fact, good. I am glad I don’t have to think about how my CPU actually works. And starting services is such a run-of-the-mill job that I don’t want to write a program that will start my service; I just want to configure how to do it.

                                                                                                                2. 3

                                                                                                                  Dependencies in general are a mistake in init systems: restarting services means that your code needs to handle unavailability anyway – so use that to simplify the init system. As a bonus, you ensure that the code paths that deal with dependencies restarting actually get exercised.

                                                                                                      2. 2

                                                                                                        I really like systemd’s service files for the simple stuff I need to do with them (basically: execute the daemon command, set the user/group, the working dir, dependencies and the PID file location; that’s it). But there are other aspects of systemd I dislike. I wish someone would implement a service-file parser for something like OpenRC that supports at least those basic systemd service files. It would ease cooperation among init systems quite a bit, I think, and make switching easier. It would also ease the life of alternative init system makers, because many upstream projects already provide systemd service files.
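
                                                                                                        A rough sketch of the idea (everything below is hypothetical, and it glosses over real unit-file parsing rules such as sections, drop-ins and line continuations):

                                                                                                        #!/bin/sh
                                                                                                        # Hypothetical shim: pull the basic keys out of one unit file and start
                                                                                                        # the daemon without systemd.
                                                                                                        unit=/etc/systemd/system/mydaemon.service
                                                                                                        get() { sed -n "s/^$1=//p" "$unit" | tail -n 1; }

                                                                                                        wd=$(get WorkingDirectory)
                                                                                                        cd "${wd:-/}" || exit 1
                                                                                                        # Word-splitting of ExecStart is intentional here.
                                                                                                        exec setpriv --reuid "$(get User)" --init-groups -- $(get ExecStart)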

                                                                                                      3. 3

                                                                                                        A much better – and shorter – criticism of systemd is that for most people, it does a lot of stuff they just don’t need, adding to a lot of complexity.

                                                                                                        This sort of computing minimalism confuses me. Should we say the same about the computing platforms themselves? x86 has a lot of things we don’t need, so we should simply use a RISC chip until we need just the right parts of x86… That motherboard has too many PCI slots, so I’ll have to rule it out for one with precisely the right number of PCI slots… If you can accomplish the task with exactly a stick and a rock, why are you even using a hammer, you fool!

                                                                                                        1. 4

                                                                                                          It’s not ‘minimalism’ that makes me balk at systemd’s complexity. It’s that that complexity translates directly to security holes.

                                                                                                          1. 2

                                                                                                            It’s really a long-standing principle in engineering to make things as simple as feasible and reduce the number of moving parts. It’s cheaper to produce, less likely to break, easier to fix, etc. There is nothing unique about software in this regard really.

                                                                                                            I never claimed to be in favour of absolute minimalism over anything else.

                                                                                                        1. 4

                                                                                                          This makes me want to buy devices with Mali GPUs. Almost all the Pine devices use these, right? I’m sure the PostmarketOS team appreciates this as well.

                                                                                                          Most Android devices still use PowerVR chips though, don’t they? Are there open source drivers for those or are most of those still binary blobs?

                                                                                                          1. 4

                                                                                                            There are no FOSS drivers for PowerVR GPUs. Imagination is notorious for largely ignoring Linux and any demands for one, which is extremely unfortunate given how many devices have these GPUs.

                                                                                                          1. 9

                                                                                                            The crucial detail is this:

                                                                                                            To deploy drivers built with DriverKit, allow other developers to use your system extensions, or use the EndpointSecurity API, you’ll need an entitlement from Apple.

                                                                                                            If I understand this correctly, we can say goodbye to hackintoshes.

                                                                                                            1. 1

                                                                                                              That was my first thought too. I also wonder about tools like LittleSnitch.

                                                                                                              1. 3

                                                                                                                In theory, Apple have been expanding the Network Extensions API to be a sufficient substitute for NKEs. I’m not an expert on either, but if it’s anything like the EndpointSecurity framework as the supposed substitute for KAUTH listeners, there will be features that get killed off.

                                                                                                                I’m still in the process of porting some USB kexts to DriverKit, so I’ll see how good a substitute that is. I’m a little worried about the “magic” compiler-generated IPC glue being a debugging nightmare. I only recently started working on DriverKit though, largely because a lot of time went into miscellaneous immediate Catalina regressions and into working around the shortcomings of some of the braindead user consent implementations, which were more urgent because they immediately affected users.

                                                                                                                The 10.15.4 beta SDK has also added support for PCI/Thunderbolt drivers to DriverKit; it’ll definitely be interesting to see just how well that works out.

                                                                                                                I do think we’ll lose a bunch of software and hardware diversity on the Mac as a result of this. I’m finding that the effort spent treading water (keeping things running on the latest OS version, or jumping through hoops due to badly designed features/APIs), compared to the time left for actually building something that has intrinsic value, is becoming increasingly skewed. I fear lots of developers are going to decide it’s no longer worth it.