1. -1

    I don’t understand what this thing does at all. Podman can already generate a systemd config, so why use another tool to generate a systemd config?

    Wrappers wrapping other wrappers. Fuck this.

    1. 5

      Dynamic generation is important. The source of truth is the container unit file, not some random commands that produced a unit file once. What do you do when there is a new best practice? For example, I like this formulation better than previous generated unit files:

      ExecStopPost=-/usr/bin/podman rm -f -i --cidfile=%t/%N.cid

      I get this for free with every upgrade, without hard coding old versions of podman’s result.

      1. 2

        Dynamic generation is important.

        Important for what?

        The source of truth is the container unit file

        There is a single source of truth in any case.

        not some random commands that produced a unit file once

        It’s not some random command that happened to run for some reason, it’s a very specific command that I chose to run and deploy.

        What do you do when there is a new best practice?

        Why would there be a new best practice? What’s wrong with the old one? And if there’s a new one, it means the old one wasn’t the best. I deploy things I understand, not what some imaginary authority decides to call best this day of the week.

        For example, I like this formulation better than previous generated unit files:

        Then use that one instead! Also, if it’s just a formulation you like better that does the exact same thing as some other formulation, all your problems are syntactic, not semantic, so any claims of “solving” anything are dubious. In fact, if it changes at every upgrade, it means it can break at every upgrade.

        I get this for free with every upgrade, without hard coding old versions of podman’s result.

        You get this for free… and so what? Who cares? It’s an intermediary artefact of the code generation that doesn’t have any actual effect. In fact it can’t have any effect, else the code generation thing would not be forwards compatible.

        You’re just introducing another component in an already too complex system just because you like a different syntax. And this component can’t even have any effect by design. I’ll pass.

        1. 9

          I’ll take a stab at this, but the tone here is rather aggressive for what is essentially someone showing off something cool they did – whether or not this is upstreamed into Podman, this affects nothing for people that don’t want to use it. For me, this is not only cool but incredibly useful, and would simplify the CoreOS Home Server integration I’ve been working on by quite a bit. In any case:

          Dynamic generation is important.

          Important for what?

          As the blog itself alludes to, the integration between systemd and Podman is not static – the underlying mechanisms by which Podman operates evolve, which sometimes necessitates setting options on either the Podman invocations or the systemd units themselves. One example of this is how Type=notify services interact with conmon, and how this affects readiness signals for containers.

          Targeting higher-level semantics allows the underlying integration to change (as it relates to non-default options) without any change to the user-facing unit files themselves. The idea of systemd generators isn’t even that controversial itself – your system probably already uses some of these, as installed into /usr/lib/systemd/system-generators.
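
          To make this concrete, here is a minimal sketch of what a Quadlet-style .container file looks like (my own example, not from the article; the image, port, and description are placeholders, and the key names are recalled from the Quadlet docs):

          [Unit]
          Description=Example web container

          [Container]
          Image=docker.io/library/nginx:latest
          PublishPort=8080:80

          [Install]
          WantedBy=multi-user.target

          The generator expands this into a full .service unit at boot, so details like the ExecStopPost line quoted earlier come from the installed Podman/Quadlet version rather than from a unit file that was generated once and frozen.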

          The source of truth is the container unit file

          There is a single source of truth in any case.

          The difference between a systemd generator and podman generate systemd is, as mentioned before, the fact that the former evolves against user-defined semantics, whereas the latter is static and is generated based on Podman-defined semantics (there’s not much optionality in the Podman generator).

          not some random commands that produced a unit file once

          It’s not some random command that happened to run for some reason, it’s a very specific command that I chose to run and deploy.

          Indeed, and you’re still given a choice of providing your own additional options with PodmanArgs=. Again, the specific commands that pertain to the integration between Podman and systemd tend to change with both of these projects (especially in terms of new security capabilities), and are already quite convoluted.

          The examples of generated systemd units are quite gnarly, and it’s implied that these patterns need to be re-created time and time again unless podman generate systemd is used (which again, offers little optionality in the way things are generated).

          In fact, I’ll stop myself here, as all subsequent points are just variations on the above – the issue at play here (and what Quadlet is attempting to fix) is the integration between systemd and Podman, which evolves separately to the integration between Podman and the container. I’d probably go as far as saying that podman generate systemd was the wrong abstraction, as systemd unit files are meant to be human-readable and editable, and need to support a wide variety of options that cannot necessarily be derived from Podman container definitions. Quadlet seems like a much better approach to solving this issue.

          1. 1

            I agree with most of this. I think we all have an intuitive ‘correct’ layer of abstraction and yours can be very different from mine.

      1. 3

        This is awesome, and hope to see some source (with the hopes of porting this to some embedded device). There’s another, parallel effort which I hoped would’ve panned out by now, but seems to have stalled, sadly.

        PalmOS was my introduction to embedded computing (with the Palm V, an incredible piece of hardware in and of itself) and holds a special place in my heart; it remains one of the more well-thought-out and consistent user experiences on a hand-held device to date.

        1. 2

          My understanding is shortly after that post, Dmitry started a new job at Apple, which as you can imagine, might make it more difficult to work on reverse engineering side projects.

          [Edit 3 hours later: over on the HN thread Dmitry confirms they’re still working on the project.]

        1. 8

          Unrelated to the article itself (though a wonderful read), I really appreciate the amount of thought and effort that’s going into Oil and related documentation. It’s quite rare to see such concerted effort in understanding the problem domain from first principles and not presupposing anything about any solution.

          The ideas and methodology here will surely become a touchstone for future projects of the same ilk.

          I’m not sure if I’ve missed it or whether it’s in the pipeline, but a survey of interactive shell features might be interesting, especially around auto-complete, history-search-as-you-type, etc. Fish does incredibly well here, to the point where it feels like magic (“How does it know I want to run this command/script by just the first character?”), which probably comes down to a few pieces of metadata:

          • Which directory was the command run in?
          • Does the command point to a local file, and does the file still exist?
          • Which command was run before the one I’m completing now, and was it the same command as in this session?
          • How many times has the command run in total?
          • When was the command run last time?

          I haven’t read the code for Fish to know for sure, but it feels like the history suggestions are prioritized based on at least a few of these attributes, and I’m not aware of any other shell that does it as well.
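
          Purely to illustrate the kind of ranking I mean, here is a toy sketch; this is my own guess, and none of the attribute names or weights come from fish’s actual code:

          import os
          import time

          def score(entry, ctx):
              """Rank a history entry against the current prompt context (all attributes made up)."""
              s = 0.0
              if entry["cwd"] == ctx["cwd"]:            # run in the directory we are in now
                  s += 2.0
              if entry.get("prev") == ctx.get("prev"):  # usually follows the previous command
                  s += 1.5
              if entry.get("path") and not os.path.exists(entry["path"]):
                  s -= 3.0                              # points at a file that no longer exists
              s += 0.1 * entry["run_count"]             # frequency
              age_days = (time.time() - entry["last_run"]) / 86400
              s += 1.0 / (1.0 + age_days)               # recency
              return s

          def suggest(history, ctx, typed):
              candidates = [e for e in history if e["cmdline"].startswith(typed)]
              return max(candidates, key=lambda e: score(e, ctx), default=None)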

          1. 6

            Thanks! Oil has a very strong foundation for an interactive shell:

            Although I have cut the interactive shell out of the project proper in favor of a stable IPC interface that other people can use to make interactive shells (including GUIs and non-terminal based shell interfaces). This is called “headless mode”. It exists but I need people to bang on the other side of it:

            http://www.oilshell.org/blog/2021/06/hotos-shell-panel.html#oils-headless-mode-should-be-useful-for-ui-research

            So basically you’re not gonna get an interactive shell based on Oil until someone else makes a big effort :) And I would say that Oil is probably the only POSIX/Bourne shell that is tackling this. (e.g. busybox ash or dash don’t seem to have much ambition in this area; I talked with one zsh dev and he hopes Oil succeeds :) zsh appears to have significant tech debt in this area.)

            (As always, feel free to chat with me on Zulip, etc. – all links on the home page)

            It’s a huge project so I have to cut some things out – unfortunately the interactive shell is one of them. But the good news is I think things will go faster if people build separate projects, rather than needing to familiarize themselves with Oil source code. (I don’t think this is hard because it’s all plain Python essentially, but some people have found it hard.)

            It might be best to think of Oil as “a shell inside containers” or a “distributed shell”. That has always been the primary focus. I spent about a year on the interactive shell, and it wasn’t enough to attract contributors to that part, or users:

            I still use it interactively when I release Oil, so it works, but there’s not a hugely compelling feature and I don’t have time to work on it. The Oil language is more important!

            About fish: I wrote that “fish Oil is a good idea” last year:

            http://www.oilshell.org/blog/2020/02/recap.html#fish-oil-is-a-good-idea-link

            1. 4

              Oh wow, that’s quite incredible, and headless mode seems like a much better ecosystem investment rather than some parallel effort that is hampered by historical baggage.

              I’ll read through the links and see whether I can try my hand at some Emacs integration ala eshell, thanks for the comprehensive write-up!

              1. 4

                Some kind of Emacs support would be amazing! In fact I mentioned that here:

                https://lobste.rs/s/gbjp09/blurring_lines_between_shell_editor#c_elqwqo

                It solves the “where does this command’s output end?” problem that scraping bash -i has. And the isatty() issue.

                It is rough BUT one other person has written a client for the headless shell (Subhav). So it’s not just a random thing I came up with :) Please join us on https://oilshell.zulipchat.com/ if you have time for this!

          1. 4

            This was captured as the sweet-expressions SRFI (SRFI 110), but I believe that Wisp (SRFI 119) was created subsequent to this and elaborates on some of its designs.

            There’s a library for Guile which seems to be fairly well-maintained as well.

            1. 16

              How is this related to GNU.org? It’s hard to tell from the front-page and introductory blog-post whether this is a break-off organisation, or something else entirely.

              This sent me off searching as to whether GNU is actually trademarked in any way, but apparently it’s not? Perhaps not surprising, given Richard Stallman’s stance on intellectual property in general, but it’s interesting to see how “ownership” of a name can be very contentious – do people who toil under a name have cause to co-opt it?

              1. 5

                but it’s interesting to see how “ownership” of a name can be very contentious – do people who toil under a name have cause to co-opt it?

                Ironically, a thread was circulating on Twitter the other day in which a former executive director of GNOME pointed out that GNOME is not a GNU project and they’ve asked the FSF to stop listing it as one – without success.

                1. 10

                  Who would have thought that calling your project the GNU Network Object Model Environment would make people associate you with GNU.

                  1. 9

                    GNOME has not been an acronym for at least a decade.

                    As far as I understood it from only sort of paying attention at the time, the split away from GNU happened for a lot of reasons, not least of which was Stallman’s loud public denunciations of GNOME’s leadership, and a Stallman-endorsed attempt to impose a code of ideological censorship on the GNOME project’s blog aggregator.

                    1. 5

                      The GNU project has a habit of refusing to let go of projects when their maintainers wish it. I think I remember a similar issue with Libreboot. They of course leave open the door to forking the project, but they basically say that if this happens, they will search for a new maintainer on their end.

                      1. 1

                        The GNU project has a habit of refusing to let go of projects when their maintainers wish it. I think I remember a similar issue with Libreboot.

                        I wouldn’t touch the toxic tarpit around that project with a ten-mile pole; when the best defence you can come up with is “we were on drugs lol” [0], you know you’re in a special place. And the drama continues [1].

                        And people wonder why I want anonymity and privacy online.

                        [0] https://libreboot.org/news/unity.html

                        [1] https://libreboot.org/news/resignations.html

                        1. 2

                          I was not aware of that, tbh. Though, regarding the GNOME project, it looks like the same pattern arises (modulo the drama).

                          1. 3

                            I was not aware of that, tbh. Though, regarding the GNOME project, it looks like the same pattern arises (modulo the drama).

                            The people around GNOME are smarter and present themselves as a lot more photogenic, but if you want to see how hostile they are, try to get a patch accepted in GNOME. You will be drowned in bureaucratic red tape. In GNU land you might want to tear your hair out over terrible decisions, like not exporting the parse tree from GCC, but at least you feel like someone is listening to you.

                  2. 2

                    This is a “fork” of GNU, by people here.

                    1. 3

                      What are they referring to when they say “GNU Project”? The real thing or their fork? Never mind, they updated the page since I opened it the first time.

                      This post probably explains it better. Seems like a kind of union of GNU project maintainers, not a separate project in itself, as they are still linking to gnu.org and not hosting their own code and project tools.

                      Edit: Here some more info: https://lists.gnu.tools/hyperkitty/list/assembly@lists.gnu.tools/message/RASDB353K5ONC654JDXBQCE7PFADYBSX/

                      1. 2

                        It was my understanding too: it’s not a fork, it’s mostly a group of maintainers aiming to coordinate their efforts within the wider umbrella of the GNU project (I guess they also hope to be able to steer the project in a direction more aligned with their values this way).

                        It sounds like a sane thing to do, but I fear it won’t be welcomed well on the other end of GNU.

                    2. 1

                      From r/freesoftware,

                      This is merely a resurgence of the “gnu-tools” initiative by the usual suspects.

                      Ostensibly it was an initiative to introduce more influence on the whole GNU project by maintainers (maintainers already have full control over their own GNU projects apart from redefining software freedom, which is where RMS has final say).

                      When asked the hard questions, it quickly became clear that this self-appointed shadow government was really about ousting RMS from the GNU project with hardly a thought about how to continue after that.

                      Anyway, if you have several days, you can inform yourself. It’s all on display in the gnu-misc mailing list (from 2019-11 onwards; search for “social contract”).

                      In the end, most GNU maintainers weren’t on board and the discussion died down.

                    1. 1

                      I’m considering doing a similar thing to this but with a VM. Although, now that I have my PGP keys on a yubikey I’ll need to figure out how to get those passed through for commit signing and SSH. Anyone have any ideas on how that could work?

                      1. 2

                        If you don’t need those keys on the host, the simplest solution is to forward the whole yubikey USB device to the VM.

                        1. 1

                          Yes, you can forward the GPG Agent socket to the remote system over SSH. It works seamlessly even where touch confirmation on the Yubikey is concerned.
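
                          To sketch that approach concretely (socket paths vary per machine, so check gpgconf --list-dirs agent-socket in the VM and agent-extra-socket on the host; the host alias “devvm”, the paths, and the file names below are placeholders):

                          # ~/.ssh/config on the host
                          Host devvm
                              RemoteForward /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra

                          # sshd_config inside the VM, so a stale forwarded socket is replaced on reconnect
                          StreamLocalBindUnlink yes

                          # inside the VM: import only the public key, then signing talks to the forwarded agent
                          gpg --import pubkey.asc
                          git config --global commit.gpgsign true

                          For SSH authentication, gpg-agent also exposes an ssh-agent-compatible socket (S.gpg-agent.ssh when enable-ssh-support is set) that can be forwarded the same way.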

                        1. 21

                          If you mean side projects with the intention of making money: none. My side projects are for fun and for things that are useful to me.

                          Currently I am solving the last bugs in GTE: Getting Things Email. A todo-app/task management system based on IMAP. That is, all the tasks are stored as emails. Basically, I got fed up with the million todo-apps that are already out there, because exactly zero of them can meet the (I think) reasonable requirement that I can use it with clients that are native to my devices. If you work in the terminal and you have a phone that does not run Android or iOS, then… nothing is available. And that is just the first requirement I have.

                          So the idea is to have something that runs on email, because email is supported on every platform and I can immediately use it from everywhere, and later, if I want to, I can build dedicated clients on top of that. But it already works terrifically for me, so maybe I’ll never get to that last part.

                          1. 5

                            Are you intending on releasing that? Because it sounds very interesting, mostly because I was thinking something like that a while back (see https://lobste.rs/s/8aiw6g/what_software_do_you_dream_about_do_not#c_6bpbbx) but never got around to doing anything about it.

                            1. 4

                              Currently I have no plans to release it. Mainly because… I haven’t really thought about that yet. I am not sure if this is interesting for other people. It would need some serious polishing and idiot-proofing for that, I guess. With “solving the last bugs” I actually meant “fix the things that still annoy me on a weekly basis” :) The things you mention in your comment are possible, in theory. It is all very basic at the moment.

                              I did plan to write some posts about it, but I haven’t gotten around to that yet. If you want to have a look, here is the source code: https://git.sr.ht/~ewintr/gte

                              1. 3

                                This is absolutely interesting and very much in line with what I’ve been looking into as well – a collection of tools that utilize email for the heavy lifting and perhaps are able to work under any client (implying the tool is run as a service or in a recurring fashion). Things like:

                                • Read-me-Later/bookmarking functionality, where you send an email with a URL to a specific mailbox and the service replies with a self-contained version of the page pointed to.
                                • A simple CMS/publishing workflow, where emails in a specific IMAP folder are picked up and rendered out to Markdown, to be then picked up by whatever static-site generator pipeline is set up.

                                The code you’ve shared looks awesome, I’ll give it a try. But seriously, email infrastructure and semantics solve a lot of the issues inherent in these sorts of tasks, and there’s not much in terms of self-hosted tools that utilize email in solving them, so kudos to you.
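
                                For what it’s worth, the service side of something like the read-me-later idea can stay very small. A hedged sketch in Python (the host, folder name, and the convention of putting the URL in the subject are all made up here, and have nothing to do with GTE’s actual format):

                                import imaplib
                                import email

                                IMAP_HOST = "imap.example.org"  # placeholder
                                FOLDER = "ReadLater"            # hypothetical folder the service watches

                                def pending_urls(user, password):
                                    """Return the Subject of each unread message in the watched folder (assumed to hold the URL)."""
                                    conn = imaplib.IMAP4_SSL(IMAP_HOST)
                                    conn.login(user, password)
                                    conn.select(FOLDER)
                                    _, data = conn.search(None, "UNSEEN")
                                    urls = []
                                    for num in data[0].split():
                                        _, msg_data = conn.fetch(num, "(RFC822)")
                                        msg = email.message_from_bytes(msg_data[0][1])
                                        urls.append(msg["Subject"])
                                    conn.logout()
                                    return urls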

                                1. 2

                                  Thanks! If you (or anyone else) have questions or suggestions, feel free to send me a message.

                          1. 7

                            I haven’t seen very many good arguments against what Moxie has said with regards to competing on features. Comparing this with countries - even in a representative democracy like the United States, our military is a strict hierarchy. This is despite the democratic principles of the country. The reason? There is an evolutionary pressure on militaries to find the most competitive structure - those who have tried other things (as Orwell talks about in revolutionary Catalonia) failed in part due to the structure they adopted. Moxie has noticed the loss of open source projects to these closed companies due to their insistence on a more virtuous structure and has decided that the perfect is the enemy of the good. To me this seems analogous to the arguments between pure utopian anarchists and more pragmatic people (considering Moxie’s political orientation he is probably intimately aware about the tradeoffs here).

                            1. 7

                              Since reading Moxie’s polemic against federation (I assume you’re talking about the “Ecosystem is Moving” essay, though Moxie has repeated the idea elsewhere), I’ve slowly come to the understanding that the only way to win this competition on features is to not play at all. After all, there’s almost always someone out there with more money/determination than you, which I guess is especially true in terms of open/community-owned projects vs. more commercial endeavors.

                              Part of what I’ve come to dislike about these more boxed platforms (to borrow a term from the linked article) is how they’ve made online discussions more ephemeral, both in terms of owning one’s archive, and in terms of being able to stay part of a community regardless of which device/system I happen to be participating from. For most of these platforms, clients running on “legacy” systems (be it older devices or devices on non-mainstream operating systems) are routinely dropped or not developed for in the first place; some platforms place constraints on participating over multiple/alternative devices, such as a mobile device and a desktop device (ostensibly because of E2EE concerns).

                              This contributes to my nagging feeling that I’m somehow forced, as a user, to keep up with some notion of progress, when all I want is to communicate.

                              Conversely, platforms such as IRC and XMPP continue to be useful even on ancient or under-powered devices, albeit in a perhaps degraded capacity, and allow for the sort of mixing-and-matching of use-cases made harder by their boxed counterparts. This, to me, is a more user-friendly and inclusive approach to building communication tools; unfortunately, being more inclusive means taking the foot off the pedal, or at least taking a more mindful approach in rolling out features.

                              1. 9

                                This contributes to my nagging feeling that I’m somehow forced, as a user, to keep up with some notion of progress, when all I want is to communicate.

                                Communication (like most human endeavors) is constantly changing. Language changes, communication methods change, speakers change (including marginalized groups into conversations), and expectations around media change. This isn’t “progress” in the Baconian sense but it is change. IRC and XMPP (along with their technical issues, which others in this thread have covered in great detail) just haven’t been able to keep up with changing expectations of communication. Realistically, the general population sends pictures of computer monitors to their friends over Snapchat. Sticking to IRC just codifies a different set of social norms into place.

                                1. 5

                                  Disagree.

                                  Communication over platforms typically changes because the platforms themselves change. People started sending each other image file attachments instead of image links after image attachments were implemented by popular clients.

                                  Featuritis causes these changes more often than change causes featuritis. Language changes, but those changes can still be represented in plaintext as long as encodable written language exists.

                                  1. 10

                                    Communication over platforms typically changes because the platforms themselves change. People started sending each other image file attachments instead of image links after image attachments were implemented by popular clients.

                                    You’re taking too short of a view of this. Humanity used to send smoke signals and use oral storytelling to communicate. In the meantime, we seem to have invented writing, paper, printing presses, newspapers, telegraphs, radios, and all sorts of other manners of communication. Whenever I see folks say things like change is driven by featuritis, I invite folks to tell me where they draw the line between essential change and featuritis change and to justify why this line is valid.

                                    Language changes, but those changes can still be represented in plaintext as long as encodable written language exists.

                                    Right but what about folks that are hard of sight? Folks that are dyslexic? People that comprehend things through image better than text? Traditionally these folks have been excluded from the conversation. In fact, the history of communication has largely followed the democratization of communication to new actors. Why is plaintext encodable written language the point at which we must draw our line in the sand?

                                    EDIT: I also want to point out that there are people that enjoy change. They enjoy learning new slang, they love participating in new memes, trends, conversations, and ideas. This isn’t to say that new is inherently better, but nor is it to say that new is inherently worse. But there is a contingent of people out there that legitimately enjoy change. If you want to include folks in the conversation (which is the whole point of creating open communication platforms, right, to enable humanity to communicate), you need to account for folks that will change their colored contacts on a whim as much as the person who is fine using their 20 year old laptop to view plain text.

                                    1. 1

                                      Right but what about folks that are hard of sight? Folks that are dyslexic?

                                      Dictation does not require creating a new protocol; that’s a client-side feature.

                                      Why is plaintext encodable written language the point at which we must draw our line in the sand?

                                      Because that’s the minimum requirement for communicating language, and the most accessible form of communication in existence.

                                      there are people that enjoy change

                                      Client-side change is fine as long as that change doesn’t devolve into a boxed platform. Nobody’s stopping you from switching out themes or clients when you feel like it.

                                      1. 10

                                        Dictation does not require creating a new protocol; that’s a client-side feature.

                                        What about sending voice messages? Sending images? Am I not allowed to hear my parents’ voices because of protocol complexity? What about folks that are blind and deaf, or who don’t know a certain language?

                                        Because that’s the minimum requirement for communicating language, and the most accessible form of communication in existence.

                                        Citation needed. We communicated without written language for a very long time. Why do we need to use written language now? How is it the minimum of anything? We don’t even have a universal theory of human semantic processing, so how can we prove that written communication is the minimum?

                                        But why are you so invested in trying to limit my expression? Why must we embrace minima? Because it’s easier for the implementer? Why is the implementer more important than the user? Why is an open protocol more important than the act of communication itself? When I check out a new apartment to rent and I see it doesn’t have central heating, I don’t think to myself “ah of course, what a smart design decision, the builders decided to save on complexity” I think “oh they cut corners and either did not renovate in central heating or they don’t care to support central heating, well, not my sort of place”.

                                        This is the problem that folks bring up when talking about open source adoption. Open source often cares more about principles than usage. FOSS cares more about openness, customizability, and other values that end users care for less than core functionality. If you want to communicate with your fellow FOSS-natives on Freenode and grumble about the kids on TikTok, be my guest, but others will not be willing to make that choice. If FOSS actually wants to enable end users and not technical FOSS-natives, then FOSS needs to prioritize users just as much if not more than its principles. In the meantime, others will explore other tradeoffs. Like Moxie with Signal, Element with Matrix, and Gargron with Mastodon. I can tell you that IRC hasn’t changed much and FreeNode hasn’t gone anywhere in the last several decades, yet the mindshare has very much moved away from FreeNode, and that’s because users have put their “money” (or time or effort or whatever) where their mouth is and have voted with their feet.

                                        For me, a good metric of success with a communication protocol/product will be when you observe average teenagers in the developed and developing world organically using your product to communicate. They use things like Instagram and WhatsApp but they probably don’t use IRC. I did use IRC when I was a teenager, but there were fewer options then and IRC mirrored the cultural context of online communications at the time much more than it does now. I think you’d be hard-pressed to get any teenager these days to use IRC.

                                        1. 3

                                          What about sending voice messages? Sending images? Am I not allowed to hear my parents’ voices because of protocol complexity?

                                          Use the right tool for the job. A VOIP protocol would work well for sending voice. You could make a meta-client that combines a VOIP client and an instant-messaging client if you want to do both at the same time. There are better ways to do this than making the protocol more complex.

                                          What about folks that are blind and deaf, or who don’t know a certain language…why do we need to use written language now?

                                          People who are both blind and deaf can use braille readers. Plaintext allows people who are sighted, blind, and/or deaf to interpret language.

                                          But why are you so invested in trying to limit my expression? Why must we embrace minima? Because it’s easier for the implementer?

                                          Yes. I explained my rationale for simplicity in the previous post, Whatsapp and the domestication of users. When complexity grows past a certain point, implementers need to spend more working-hours and need more funds to create a viable implementation; requiring too much funding can encourage dark patterns and conflicts of interests (e.g., investor money). I’d recommend checking out the article if you’re interested in this topic.

                                          Why is an open protocol more important than the act of communication itself?

                                          Open protocols are important because the act of communication is so important. Nobody should be in control of the means of communication.

                                          When I check out a new apartment to rent and I see it doesn’t have central heating, I don’t think to myself “ah of course, what a smart design decision, the builders decided to save on complexity” I think “oh they cut corners and either did not renovate in central heating or they don’t care to support central heating, well, not my sort of place”.

                                          Agreed; however, if I don’t notice a golf course in the background I’d probably feel relief rather than anger since that’s a bit more than I bargained for. A golf course isn’t part of the “house spec”. A stable temperature, on the other hand, is part of the minimum requirements for a house and should be included in the “house spec”.

                                          This is the problem that folks bring up when talking about open source adoption. Open source often cares more about principles than usage. FOSS cares more about openness, customizability, and other values that end users care for less than core functionality.

                                          Correct. These are ideological movements. They support ideologies, and reject the notion of the “end justifying the means” (the way the phrase is commonly used).

                                          In the meantime, others will explore other tradeoffs. Like Moxie with Signal

                                          I’ve explained why I find Signal problematic in the previous post. I don’t think one org should own a communication platform.

                                          I think you’d be hardpressed to get any teenager these days to use IRC.

                                          I was a teen on IRC until last year when I turned 20. There are dozens of us!

                                          Also, a quote from this article:

                                          I’m not arguing that average users are doing something “wrong” by doing otherwise; expecting average users to change their behavior for the greater good is naive. This advice is targeted at the subset of users technical and willing enough to put some thought into the platforms they choose, and indirectly targeted towards the people they can influence.

                                          I’m not trying to get the aforementioned stereotypical “teens” to suddenly sign up for an FSF membership and install Libreboot (though that would be nice). I’m trying to get technical users to start caring, since they’re the ones who can influence their friends’ technical decisions, file bug reports, and make the software better. That needs to happen for the “teenagers” to sign up.

                                          1. 4

                                            People who are both blind and deaf can use braille readers. Plaintext allows people who are sighted, blind, and/or deaf to interpret language.

                                            Freedom of protocol and implementation is not worth enough for me to relegate impaired readers to second class citizens that someone has to think about. If anything, that sounds like prioritizing the freedom of the able over the freedom of anyone else.

                                            Open protocols are important because the act of communication is so important. Nobody should in control over the means of communication.

                                            For me communication is not worth gimping for the sake of being minimal or implementation-friendly. I am fine accepting complexity in client, protocol, and server to enable users to communicate in novel, ergonomic ways. Communication is much more important to me than implementer ease. I’ll go further and say, given a choice between implementer ease and rich communication, almost everyone would pick rich communication. Only a minority will be so motivated by the spectre of lock-in that they will reject attempts to broaden the platform.

                                            Agreed; however, if I don’t notice a golf course in the background I’d probably feel relief rather than anger since that’s a bit more than I bargained for. A golf course isn’t part of the “house spec”. A stable temperature, on the other hand, is part of the minimum requirements for a house and should be included in the “house spec”.

                                            This is part of your personal “house spec” of course. I think you’re going to have a really hard time finding everyone agree on a “house spec”, and in practice you see folks with very different types of living domiciles based on their preferences. In college, I had friends who lived without central heating and wore thick jackets to use the bathroom. This was their choice.

                                            Correct. These are ideological movements. They support ideologies, and reject the notion of the “end justifying the means” (the way the phrase is commonly used). I’m not trying to get the aforementioned stereotypical “teens” to suddently sign up for a FSF membership and install Libreboot (though that would be nice). I’m trying to get technical users to start caring, since they’re the ones who can influence their friends’ technical decisions, file bug reports, and make the software better. That needs to happen for the “teenagers” to sign up.

                                            For me and many other technologists, technology is primarily about the user and the human component, only secondarily about protocols, implementer ease, ideology, or anything similar. I view technology as slave to the human, not human as slave to the technology. I think you’re going to have a hard time convincing tech users like us otherwise. After all, we’ve had decades of IRC, and even among technologists IRC has lost ground, not gained it. I’m sad to say I don’t think this viewpoint has any bite except among a dedicated few, who will continue to stick to IRC and grumble about new protocols and their freedoms.

                                            “Man is born free, yet he is everywhere in chains”

                                            1. 1

                                              Freedom of protocol and implementation is not worth enough for me to relegate impaired readers to second class citizens that someone has to think about.

                                              I agree, and this is the reason why I think we should build everything we can from plaintext. Audio excludes the deaf, visual excludes the blind, but plain text includes the largest possible audience. Text is the most accessible format in existence, while other formats treat many disadvantaged users as second-class citizens.

                                              I view technology as slave to the human, not human as slave to the technology.

                                              Agreed. In order for people to be in control of their platforms (rather than the other way around), the platform should not be owned by anyone. For technology to be a slave to the user, the technology should be owned by none other than the users themselves.

                                              A lot of your concerns about UX are client issues rather than protocol issues. It’s perfectly possible to build a glossy, animation-rich client with nice colors and icons that can appeal to the masses. The benefit of open platforms is that you get that choice without excluding users on low-end devices who can’t afford to run a fancy Electron app. Open platforms are a means to include everyone and serve the human rather than the platform owner. If your use-case isn’t met, you can build an implementation that meets it.

                                              Minority users matter too.

                                              1. 7

                                                Text is the most accessible format in existence, while other formats treat many disadvantaged users as second-class citizens.

                                                I just don’t agree, and without studies to back this viewpoint up, I’m going to take this as an ideological viewpoint.

                                                Agreed. In order for people to be in control of their platforms (rather than the other way around), the platform should not be owned by anyone. For technology to be a slave to the user, the technology should be owned by none other than the users themselves.

                                                Indeed, but I think you and I have different definitions of ownership. Simplicity and ease of implementation are not prerequisites for ownership in my mind. That simply passes the buck to the technologists, which ideally we wouldn’t force the entire population to become, much in the same way the entire population does not fix cars or build houses, because they are not interested in those things.

                                                A lot of your concerns about UX are client issues rather than protocol issues. It’s perfectly possible to build a glossy, animation-rich client with nice colors and icons that can appeal to the masses.

                                                They aren’t. I want protocol-level affordances for things like emoji reactions, custom emojis, threads, and such. On top of that, I want the protocol to change as folks want to experiment with new things. Extensibility is a feature for me, not a bug. Also, I don’t really think a world where the UX is left in the hands of the “interested” is realistic. Gemini still doesn’t have a rich client because technologists aren’t interested in one, and no surprise, the users are almost all technologists or their close, nerdy friends.

                                                Regardless I think you and I won’t really see eye-to-eye on this issue, so I wish you the best.

                                  2. 3

                                    Whether communication itself changes is perhaps debatable – my understanding is that, at least in the technological realm, the tools we have for communicating (e.g. inline images/audio/video, emojis, and what-not) evolve and change, but the underlying needs for expression remain the same.

                                    Sometimes, the constraints of a system determine the patterns of communication, which are then codified into social norms, as you say (IRC favours shorter, plain-text messages, email has “etiquette”, etc.) As more people take part in a platform, it’s inevitable that more and more of these constraints, justifiable as they may be, will become issues that require solutions, features. If the platforms don’t evolve, people will look to move elsewhere, naturally.

                                    The issue here isn’t the features or the changing expectations or even boxed platforms themselves, but rather that extending a platform with no regard for backwards compatibility tends, in the long term, to exclude people from being able to participate as freely as possible, or at all.

                                    Even more so, these “moving” platforms impose their own constraints, which in turn become social norms; people don’t expect to have access to their archive, nor are they expected to be able to see out the lifetime of their devices (though this is an issue way beyond communication platforms).

                                    Federated or community-owned platforms are better in that regard since interoperability concerns will typically govern their evolution, and thus ensure at least some form of long-term viability and graceful degradation. Extending these platforms with additional features entails more effort, but it does happen – modern XMPP clients are quite feature-ful and pleasant to use, though still behind the various proprietary platforms in some ways. It still, however, remains possible to participate on clients operating against a reduced feature-set (not everyone has the resources to own a recent, or any, smartphone, and there’s workable XMPP and IRC clients all the way down to J2ME.)

                                    It basically comes down to this: for me, being able to communicate without fear of falling out of some compatibility cliff is more important than chasing new forms of expression; in lieu of not being able to say anything, I’d rather not be able to express myself in full colour.

                                    1. 4

                                      the tools we have for communicating (e.g. inline images/audio/video, emojis, and what-not) evolve and change, but the underlying needs for expression remain the same.

                                      I would like to push back on this but have run out of time to offer some examples. At the risk of being hasty, take a look at Sea Shanty riffs on TikTok. That form of cultural expression isn’t happening on IRC that’s for sure. (In fact, the image of someone trying to sing or rap on IRC reminds me of NWA and their entry into mainstream music, but that’s too off-topic to be more than an aside on Lobsters.)

                                      But I agree with the rest of your post. I also think that community owned communication platforms have a greater incentive to respond to and support members of the community because their incentives are not driven by investors or customers in the same sense. I’ve watched the Fediverse take shape in a very community-oriented way and have had my heart warmed by watching folks organize in a multitude of ways (from mods having fun, to co-ops, to corporations) to enable their users to communicate.

                                      1. 1

                                        modern XMPP clients are quite feature-ful and pleasant to use, though still behind the various proprietary platforms in some ways. It still, however, remains possible to participate on clients operating against a reduced feature-set

                                        imagine the following situation:

                                        • you use an xmpp client that is very simple, it only supports plaintext
                                        • you send a question to someone
                                        • they send their answer as an animated gif
                                        • your xmpp client does not support animated gif, you are not able to read the answer.
                                        1. 2

                                          This does, of course, happen all the time, as people participate with less featured clients, or clients that do not support much more than text communication (e.g. terminal clients).

                                          XMPP is actually a good example of how clients are able to fall back to at least a workable baseline, despite any new protocol-level features or changes. For example:

                                          • For P2P file-transfers (e.g. Streams, Jingle), feature negotiation takes place and the sender is required to ensure that the recipient supports the method used. If not the method will not be offered in the first place, which is arguably better than sending into the void.

                                          • For client-server file-transfers (e.g. HTTP upload), the recipient will generally see an incoming HTTP URI as a text message, if no other support exists (e.g. for inline display, etc.)

                                          • For things like threads, reactions/message attachments and the like, context is usually collapsed and the messages will appear as they would normally in the course of discussion.

                                          Ideally, all clients would be able to support all features and these sorts of ambiguities would not exist, but it’s clear that some people aren’t able or willing to participate in the same way as everyone else. And though part of communication is in its intent (after all, a thumbs-up emoji attached to a message is not the same as one posted inline ten messages below), at least these additional features don’t form an “invisible” layer of communication above the baseline.

                                  3. 3

                                    I think Matrix is a good attempt at pushing an open product first and an open protocol second. TFA notes that Element is wedded extremely closely to the Matrix spec, which makes it hard to implement, but I think this prioritization of product over protocol is necessary to deliver actual, sticky value to users. Open, federated software needs to enable users firstly; Matrix does document its decisions and open up its spec, but features are driven through the Element experience, which is where the real value for the user comes from.

                                    1. 2

                                      I think there is a lot of truth to what you’re saying, but only up to a point. It’s fine to go the Matrix route as long as there’s a point at which the organization hits the brakes, slows down rapid iteration, and focuses on helping the rest of the ecosystem catch up.

                                      I think that Matrix is at a good point for the feature-slowdown to begin.

                                      1. 2

                                        I’ve tried to onboard friends onto Matrix and they still would like to see more features. Threads come up frequently, and so do custom emojis. We are starting to see more care toward stability in the ecosystem, but I think it’s a bit early yet for feature development to stop. Protocol documentation is improving and alternate servers are being birthed (like Conduit). I think client complexity is not explicitly a concern of Matrix, so slowing down now would not help the project achieve its goals.

                                  1. 1

                                    If the author(s) are reading this:

                                    About the Skroutz Engineering Blog: Created with care by the Skroutz engineering team.

                                    I have no idea what Skroutz is and I don’t speak Greek. If you have a nice company tech blog, tell me more about your company! (Yes, the “We are hiring” link explains it a bit.)

                                    1. 5

                                      To be fair, the company’s services themselves are mostly aimed at the Greek market – Skroutz started out as (and still is) a price comparison website for Greek electronic storefronts, but has expanded to fulfilling orders and acting as a bona fide storefront for some companies that don’t wish to maintain their own. They’ve been a very good service whenever I’ve looked to buy something in Greece, and am glad to see this post make the rounds, as my impression is that you get very few of these sorts of engineering organizations in Greece.

                                      Their development setups are very interesting as well – copy-on-write against a shared development database (which itself is based on anonymized production data) sounds like an excellent solution to a lot of issues: performance tests are honest and issues with runaway SQL queries are caught in dev, developers all see the same base data, so there’s less chance of mismatches, rollbacks are made much simpler, etc. I wonder how CoW works in terms of updating the base snapshot when changes to the schema have been made in a developer’s workspace.
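
                                      The post doesn’t spell out the CoW mechanism, but as a generic illustration of the idea with ZFS (the dataset names are made up, and any snapshot-capable layer such as LVM thin pools would work similarly):

                                      # one shared, anonymized base dataset, snapshotted periodically
                                      zfs snapshot tank/devdb@base

                                      # each developer points their database instance at a cheap clone
                                      zfs clone tank/devdb@base tank/devdb-alice

                                      # resetting a workspace is just dropping and re-creating the clone
                                      zfs destroy tank/devdb-alice
                                      zfs clone tank/devdb@base tank/devdb-alice

                                      Presumably schema changes made in a workspace live only in the clone, which is exactly why folding them back into the base snapshot is the interesting part.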

                                      1. 1

                                        To be fair, the company’s services themselves are mostly aimed at the Greek market

                                        Sure, but I specifically meant the /tech blog/ part, and in this case most people who aren’t in Greece probably don’t care about the company per se, but it’s still nice to know what the company does, to put their solutions into perspective. Doesn’t take away from the content in any way, but “we mostly do e-shopping” is already a pretty good context which I had to grab from the job ad.

                                    1. 1

                                      I should’ve perhaps given more context about this – Snikket is a project aimed at packaging the Prosody XMPP server, and providing a more consistent user-facing experience. Part of this is in consolidating client efforts for various platform-specific clients under a single name (and in the future, UX). Currently, this means forks of Conversations for Android, and Siskin for iOS, which closely follow upstream with only light modifications.

                                      XMPP was and remains a viable choice of protocol for personal chats (as opposed to IRC, which is mostly aimed at public group chats), and has improved by leaps and bounds since it was last popular (almost a decade ago). That said, XMPP has a marketing problem, and the XSF (the foundation that oversees the development of the protocol) has thus far stayed away from pushing for some sort of better, user-facing brand. Snikket is a project that tries to do just that, as a community-owned project.

                                      I’m not directly involved with Snikket or the XMPP community, other than as a spectator, but moved my personal chats over to XMPP with a self-hosted instance of Prosody around a year ago, and have been very satisfied with how robust the system is (which, in part, is owed to how good Conversations is as a client, even on old hardware).

                                      1. 6

                                        Gave this a quick try – I’ve been looking to move off of a Hugo-based setup for my personal website as there’s too much friction between thinking of writing something, and actually pushing things out. Couple of thoughts:

                                        • Domain idea is pretty cool, especially where per-domain CSS customization comes into play.
                                        • Being able to customize the base CSS would be great (currently it’s embedded in the header.html template).
                                        • Having a mixed public/private wiki is awesome, but wondering if there’s a way of running this as a purely private CMS (there’s a -private switch, but I still seem to be able to write public notes and create domains; perhaps a bug?)

                                        I’ll give this a more thorough review, thanks for posting!

                                        1. 2

                                          Author here - the custom CSS will apply to everything (except the search, but that’s a bug).

                                          As to your other comments - I will say there are a lot of CMSes out there and this one has limitations. I think the lack of a custom domain is the biggest, and I don’t think I would use this to make a website that needs a custom domain.

                                          However, I’ve made a lot of CMSes - offlinenotepad, cowyo, i.rwtxt to name a few…but rwtxt is what I always come back to for just personal notes. I’ve used it every day for years now and it does a great job of storing all sorts of notes (and attachments!) that are instantly searchable. There’s something to be said for really good and stable personal online notes. The best thing about rwtxt is that if I ever need a new feature I can implement one right away (versus using something from big G etc).

                                        1. 7

                                          The amount of breaking changes in GNOME have traditionally been to a meme-worthy level, so I’m not sure what to make of this.

                                          A diminishing number of veterans is doing an increasing share of the work. Although recruitment is stable, newcomers don’t seem to be hitting their stride in terms of commits.

                                          So the same people that used to experiment and change everything all the time now don’t do this anymore?

                                          1. 13

                                            I think there are a number of different aspects to this – since the introduction of GNOME 3.0, the project as a whole has consolidated around the idea of a consistent desktop environment, rather than, say, a desktop shell plus a loose collection of apps, as GNOME 2.x was. This idea is driven by the Design Team and runs through everything from the HIG and the design of core parts such as GNOME Shell itself, down to the minutiae of how “core” GNOME applications are maintained (there was some recent drama around Gedit).

                                            That is to say, the scope of the project has increased dramatically, while autonomy has, in some ways, diminished. This has allowed for an increase in the rate of experimentation and iteration, especially around the UX of the Shell itself (which has changed quite a bit over the years, compared to GNOME 2.x). It has also led to compatibility issues, mostly around GNOME Shell extensions (that I’m aware of, anyways), which have caused perhaps unnecessary frustration on the part of external contributors.

                                            As a bystander and long-time user of GNOME, I think the situation here has led to a marked increase in quality of both core applications and the ecosystem as a whole – usable GUI applications that actually meshed well with the rest of the system, whatever that means, were pretty rare back in the day. Unfortunately, it also means the barrier to entry is higher for developers, especially if one is looking to make some sort of outstanding contribution.

                                            1. 4

                                              there was some recent drama around Gedit

                                              That was a fascinating if somewhat depressing read. I keep considering dipping my toes more in the GNOME world but some of the issues raised there kind of remind me why I don’t.

                                              1. 2

                                                I read through the thread, and other than some (admittedly abrasive) egos on both sides, I didn’t see much cause for sadness. What part of the interaction could have gone better, or were the abrasive egos the issue?

                                              2. 9

                                                Gnome is certainly a project I would never want to get involved in, based on the people alone.

                                                1. 3

                                                  Could you elaborate? I’ve never really looked, but am curious.

                                                  1. 5

                                                    I don’t know what soc was referring to, but what stuck with me was when Gnome went to the bug tracker of Transmission (a popular bittorrent client) and opened a bug asking them to remove support for notification area icons, because they would not be displayed in gnome 3, so there was no need to keep them around:

                                                    Transmission has an option in the Desktop tab of the preferences to “Show Transmission icon in the notification area”. This should probably be removed.

                                                    In response, it was brought up that removing this would only benefit gnome 3 users and would be removing useful functionality for users of gnome shell, unity, and XFCE - on top of the fact that GTK had made many breaking changes to this API in the past as well, requiring many compile-time flags and separate builds for different distributions. The response was:

                                                    I guess you have to decide if you are a GNOME app, an Ubuntu app, or an XFCE app unfortunately. I’m sorry that this is the case but it wasn’t GNOME’s fault that Ubuntu has started this fork. And I have no idea what XFCE is or does sorry. It is my hope that you are a GNOME app

                                                    1. 4

                                                      Thanks for bringing this issue up. I had never heard of this, but it seems like the Transmission/Gnome3 thing was a kerfuffle indeed.

                                                      I guess you have to decide if you are a GNOME app, an Ubuntu app, or an XFCE app unfortunately. I’m sorry that this is the case but it wasn’t GNOME’s fault that Ubuntu has started this fork. And I have no idea what XFCE is or does sorry. It is my hope that you are a GNOME app

                                                      Indeed, out of context that feels tactless and unsympathetic. But let’s look at the previous comment in the chain [1]:

                                                      So now we can have three builds of Transmission that decide at compile time whether to use AppIndicator, GtkStatusIcon, or nothing at all, over such a stupid feature? Removing it altogether, as you suggest, will hurt XFCE users. I wish GNOME, Canonical, and everyone else involved would settle on one consistent API for this and stop fucking the app developers over. In order for this ticket to move forward, I’d like you to tell me what change should be made to Transmission that will make it work properly, out of the box, on GNOME Shell, Unity, and XFCE.

                                                      In that light, we have to ask: what should Gnome do? Should Gnome consult other DEs before shipping its features? Then the Gnome project loses its autonomy. Should Transmission conform to the whims of the three DEs? Then Transmission loses its autonomy. The problem is that each of these is an independent project with different users, goals, and ideas. Gnome does not want to be beholden to Ubuntu/Unity, XFCE does not want to be beholden to Gnome, and Transmission does not want to have to dance around three DEs, in which case, who budges? Why should it be Gnome in this case? If anything, this thread just shows why “worse is better” is the eventual shakeout of loose coupling in the open-source world: the lowest common denominator wins when you have multiple actors with only occasionally concordant desires and goals.

                                                      And remember, if the authors of Transmission felt that this change in Gnome was not worth supporting, they could have simply closed the bug and not worked on it. In the Gedit case linked above, Gedit is considered part of “Gnome core”, so Gnome feels greater pressure to make it fall in line; Transmission, on the other hand, could just ignore this API change, wait for Gnome 3 to land, and then make the changes gradually (or not at all) without much pushback.

                                                      [1]: https://trac.transmissionbt.com/ticket/3685 for the entire contents of the thread

                                                      1. 4

                                                        That was a long time ago – yet it still gets brought up, because the Gnome project still regularly treats other developers, and users, with condescension and snark.

                                                        XFCE may not be the most famous project but I find it really hard to believe the person who posted that didn’t know what it was or what it did. Even assuming it were so, when a developer is voicing concerns about their users’ environment, you can’t just wave your hand and make them go away. Just because you don’t know what something is or does doesn’t make it go away for everyone else.

                                                        Should Gnome consult other DEs before shipping its features? Then the Gnome project loses its autonomy. Should Transmission conform to the whims of the 3 DEs? Then Transmission loses its autonomy

                                                        That was way too long ago for me to remember the technical details of that discussion (status icons and systrays have always been a bit of a tarpit on Linux DEs), but as far as I recall, a third option – consulting with application developers – would likely have helped…

                                                        If, for whatever reason, you come up with your own API or your own way of doing something, that’s great. But if you want other people to start using it, opening a bug asking for the removal of an application feature just because that feature doesn’t really work with your new thing – even though it works everywhere else, and it worked fine with the API you’re deprecating! – is probably not the most elegant way to go about it.

                                                  2. -9

                                                    +1000000

                                                    1. -10

                                                      -5 me-too?? C’mon bois!!

                                              1. 8

                                                I’ve gone the entirely opposite way (opposite to the author’s “Newsletter” section, at least) and have set up rss2email for the feeds I follow – setup was a breeze, adding new feeds is simple, and one could probably run this as a cron/systemd timer, if running on a server isn’t an option.
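
                                                For what it’s worth, here’s a minimal sketch of the user-level timer I had in mind – the unit names and schedule are my own, and it assumes rss2email’s r2e entry point is on the PATH:

                                                # ~/.config/systemd/user/rss2email.service
                                                [Unit]
                                                Description=Deliver new feed items via rss2email

                                                [Service]
                                                Type=oneshot
                                                ExecStart=/usr/bin/r2e run

                                                # ~/.config/systemd/user/rss2email.timer
                                                [Unit]
                                                Description=Fetch feeds a few times a day

                                                [Timer]
                                                # Run at 06:00 and 18:00; Persistent= catches up on runs missed while powered off
                                                OnCalendar=*-*-* 06,18:00:00
                                                Persistent=true

                                                [Install]
                                                WantedBy=timers.target

                                                Enable it with systemctl --user enable --now rss2email.timer; a plain cron entry invoking r2e run would work just as well.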

                                                Almost any platform I’ve used has an email client available that is at least bearable for long-form reading; you can read offline, use filters to automatically file new items, and so on. I’m sure clutter is a consideration, but subscribing to any mailing list is probably far worse in that respect. In any case, I tend to subscribe to lower-volume feeds, and so haven’t had to make any adjustments to my mail setup.

                                                Good writeup either way!

                                                1. 1

                                                  I’m also having RSS feeds sent to my inbox, using https://github.com/fgeller/feeder and a daily cron job. Feeds go into a YAML file and that’s about it.

                                                1. 3

                                                    I’m looking to migrate my home server from a hodge-podge Ubuntu/Minikube setup to a properly provisioned Fedora CoreOS setup, with Podman and systemd driving most of the functionality (both of which are easier to fit in my tiny head).

                                                    So far, I’ve set up a small Makefile that handles the provisioning aspects (onto a VM, for now). The plan is for containers to auto-update via a systemd timer pulling from a remote repository, with post-checkout hooks re-deploying any containers that have changed since the last pull.
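
                                                    As a rough sketch of the timer half (unit names, the repo path, and the branch are placeholders – and I’m using fetch + checkout rather than pull so the post-checkout hook actually fires):

                                                    # /etc/systemd/system/deploy-pull.service
                                                    [Unit]
                                                    Description=Sync the deployment repo; its post-checkout hook re-deploys changed containers
                                                    Wants=network-online.target
                                                    After=network-online.target

                                                    [Service]
                                                    Type=oneshot
                                                    WorkingDirectory=/var/srv/deploy
                                                    ExecStart=/usr/bin/git fetch origin
                                                    # Checking out the fetched ref triggers .git/hooks/post-checkout
                                                    ExecStart=/usr/bin/git checkout --force origin/main

                                                    # /etc/systemd/system/deploy-pull.timer
                                                    [Unit]
                                                    Description=Periodically sync the deployment repo

                                                    [Timer]
                                                    OnCalendar=hourly
                                                    RandomizedDelaySec=10m
                                                    Persistent=true

                                                    [Install]
                                                    WantedBy=timers.target

                                                    If I understand it correctly, Podman’s own auto-update mechanism could handle the image side on its own, but the git-driven approach keeps unit files and container definitions in one place.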

                                                  It’s pretty exciting to see it work!