Threads for mro

    1. 3

      is this a way of leveraging ‘make illegal state unrepresentable’ (Yaron Minsky)?

      1. 3

        I think so!

    2. 10

      yet still true: https://indieweb.org/manual_until_it_hurts. I try to do both at once, i.e. start manual, but once automating, do it completely and with the simplest tools at hand fit for the job.

      1. 8

        This weekend, I discovered some automation I’d been leaving off for too long. Typically, when I set up a Linux VPS, I have a two page long checklist that I run through manually. Think patching, creating a non-root user, adding them to the distro’s equivalent of the wheel group, making sure my public key is in the right authorized_keys file, configuring package repositories for docker, installing docker, configuring permissions, etc.

        None of it’s hard, and with my notes I usually fly through it in half an hour or so of wall clock time, start to finish. I do this maybe a half dozen times a year.

        This weekend, I was trying to document it for someone else (who would need more verbose notes than the terse points in my checklist) and in the process, I kept messing up the firewall configuration.

        The second time I needed to drop into a web shell to fix that, I decided it was time to stop avoiding ansible. I’m embarrassed that I waited so long to do that. I thought the mental load from learning enough ansible to do this was going to be high enough that it was worth the 3 hours or so a year I spend doing it manually, plus whatever I needed to spend documenting it better. I was wrong. Writing the ansible playbook took me less time than writing decent documentation would, and saves me at least 25 minutes per VPS setup assuming I don’t make any mistakes when I do it manually.
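        For illustration, the first few checklist items above might start out like this in a playbook (the host group, user name, and key path are my assumptions, not the actual playbook):

        ```yaml
        # Hypothetical sketch of the start of such a VPS-setup playbook.
        - hosts: new_vps
          become: true
          tasks:
            - name: Apply pending updates
              ansible.builtin.apt:
                update_cache: true
                upgrade: dist

            - name: Create a non-root admin user
              ansible.builtin.user:
                name: deploy
                groups: sudo
                append: true
                shell: /bin/bash

            - name: Install the public key for that user
              ansible.posix.authorized_key:
                user: deploy
                key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"
        ```

        Each task is idempotent, so re-running the playbook against an already-configured host is safe, which is exactly what the manual checklist can't guarantee.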

        I suppose I learned that I need to account both for the cost of documenting the manual process and the possibility for errors in the manual process a little bit better when I’m weighing whether or not to automate.

        1. 4

          it’s like “living documentation” to me, same thing for a build job (how to build the artifact(s) needed for the deploy step) or a deploy step (where the thing goes, which config, are permissions fine, is the stuff up and reachable)

        2. 1

          yes, I also often start documenting and end up with automation.

        3. 1

          Do you have any lessons learned with Ansible? I’ve been working with it for like a year and it feels absolutely god awful to configure anything in it. I don’t like the magic of jinja and the with blocks are painful.

      2. 4

        Problem is that when it starts “hurting” is very different for different people. Some people are fine doing fairly manual tasks for a couple hours, some get annoyed after a couple minutes. Sometimes that push comes quicker than a simple cost-benefit analysis would suggest it should, purely because such analysis doesn’t (and can’t) include the emotional/psychological cost of doing the task that one might experience.

        1. 2

          why is ‘different for different people’ a problem?

          1. 2

            Because if you’re working in a team, it might end up being more efficient to let another person do the manual task rather than automate it, because the person originally given the task gets extremely annoyed at it.

      3. 2

        This is a fantastic practice, and one I forget almost every time…

    3. 1

      We have half a dozen of these cluttering up our office and I wasn’t sure what to do with them

      1. 2

        we run the website of our hackers’ youth club on one of those, from SD card. The one from 2012 :-)

      2. 2

        I’ll take some off your hands! I’ll pay shipping!

    4. 2

      gentrifying the whole space into an anxiety-inducing corporate circus.

      a gem

    5. 15

      ocaml, musl, alpinelinux, busybox, ublock origin, curl, rsync, sshfs, nnn, fish, helix, xmllint, trang, rapper

      1. 3

        ublock for sure!

    6. 5

      adding jitter was new to me, nice!

    7. 11

      doesn’t the inline JS (onclick="this.classList.add('green')") harm/complicate CSP headers?

      1. 1

        Yes but those are only needed if you are including random JS anyway.

        1. 3

          if you relax CSP to allow onclick, your whole page/site has that protection disabled. Which means that all the rest of your page/site now has a potential XSS, even if you include an innocent user comment, for example.
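          For context: whether inline handlers run at all is decided by the script-src directive, so keeping onclick= working means weakening it for the whole page (illustrative header values):

          ```
          # Strict: same-origin external scripts only;
          # inline onclick= handlers are blocked.
          Content-Security-Policy: script-src 'self'

          # Relaxed so onclick= works, but any injected
          # inline script now runs too.
          Content-Security-Policy: script-src 'self' 'unsafe-inline'
          ```

          The usual fix is to move the handler into an external script via addEventListener; CSP3’s 'unsafe-hashes' is a narrower escape hatch for hashed event handlers.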

          1. 3

            You really have to whitelist user comment markup. I know it’s nice to have defense in depth, but not letting users inject JS has got to be step one.
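            The baseline for that step one can be as blunt as escaping everything (a minimal Python sketch; allowing a whitelisted markup subset needs a real sanitizer library):

            ```python
            # Escape user-supplied comment text before embedding it in HTML,
            # so injected tags render as inert text instead of executing.
            from html import escape

            comment = "<script>alert(1)</script> nice post!"
            print(escape(comment))  # &lt;script&gt;alert(1)&lt;/script&gt; nice post!
            ```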

    8. 1

      no date, no author, no reference. Looks fishy.

      1. 35

        This is legitimately from Mozilla.

        1. 7

          In future, if Mozilla is doing official things on domains unrelated to any existing project domain, it would be helpful to:

          • Link to that domain from one of the official domains
          • Have a link in the thing on that domain pointing to the place Mozilla links from it.

          Doing this would mean that, in two clicks, readers can validate that this really is Mozilla-endorsed and not someone impersonating Mozilla. Training Mozilla users that anyone who copies and pastes the Mozilla logo is a trusted source is probably not great for security, in the long term.

      2. 18

        There’s literally a date, references at the bottom, and it says Mozilla both at the top and bottom.

        1. 6

          date acknowledged, but a Mozilla logo is too easily faked.

          IMO would be ok on their own domain. But not on a vanity domain.

      3. 7

        I, too, question whether this page was really written by Mozilla, but I did confirm that Mozilla and other companies really do oppose Article 45 of eIDAS.

        This Mozilla URL hosts a 3-page open letter against Article 45 of eIDAS: https://blog.mozilla.org/netpolicy/files/2023/11/eIDAS-Industry-Letter.pdf. It’s a completely different letter from the 18-page letter linked by this story, though both letters are dated 2 November 2023. This story references Mozilla’s letter as if it’s by someone else:

        Their calls have also been echoed by companies that help build and secure the Internet including the Linux Foundation, Mullvad, DNS0.EU and Mozilla who have put out their own statement.

        Some other parties published blog posts against eIDAS Article 45 today:

      4. 2

        There’s a very big Mozilla logo at the top.

        1. 21

          And at the bottom, yet it’s not on a Mozilla domain, it doesn’t name any Mozilla folks as authors, and the domain it is hosted on has fully redacted WHOIS information and so could be registered to anyone. I can put up a web site with the Mozilla logo on it, that doesn’t make it a Mozilla-endorsed publication.

          1. 2

            fully redacted WHOIS information

            As is normal for any domain I order from inside the EU.

            Edit: And the open letters are all hosted on the https://www.mpi-sp.org/ domain. That doesn’t have to make it more credible, but at least that’s another institute.

            1. 9

              As is normal for any domain I order from inside the EU.

              It is for any I do as an individual. Corporate ones typically don’t redact this, to provide some accountability. Though I note that mozilla.org does redact theirs.

              1. 2

                Good to know. The company domains I dealt with all have this enabled. (Some providers don’t even give you the option to turn it off.)

              2. 1

                I’ve found this to be inconsistently administered. For instance, I believe it is Nominet (.uk) policy that domain registrant information may be redacted only for registrants acting as individuals. But registration information is redacted by default for all domain contact types at the registry level, and there is no enforcement of the written policy.

            2. 6

              This is the link that was shared by Stephen Murdoch, who is one of the authors of the open letter: https://nce.mpi-sp.org/index.php/s/cG88cptFdaDNyRr

              I’d trust his judgement on anything in this space.

    9. 4

      stateless … persistence? Like mute sermon? Like O(0)?

    10. 2

      Looks interesting but all my servers run variants of nginx which doesn’t support CGI. Hopefully there will be a FastCGI variant in the future.

      1. 2

        there are a bunch of wrappers that provide a FastCGI interface and call CGI scripts internally. E.g. in debian repos there is fcgiwrap

        1. 2

          That could be an option, although I’m loath to add two things just to run one thing.

          1. 1

            whoever chooses nginx chooses highly parallel server-side code over CGIs. For Seppo, being unprivileged and single-user, CGIs are an unmatched fit.

            (if I can choose, I often pick lighttpd)

      2. 1

        Hi, thanks for commenting. Running a FastCGI backend requires root access, right? So that’s not an option. Whoever has root access can, in that case, run an (additional, proxied) CGI-capable server like lighttpd or apache.

        The #Seppo use case is special and leverages the (unprivileged) single-user situation to achieve the goal of layperson operability. It’s a painful decision, but in doubt I favour the layperson’s needs. And laypersons aren’t root.

        1. 2

          Running a fastCGI backend requires root access right?

          In the sense of having it configured, I suppose it requires access to the server config, yeah. But that doesn’t necessarily mean root if you have a control panel like webmin or configuration scripts that people like Bytemark provide. Or even if you’re running nginx as an unprivileged user with port forwarding to avoid needing root access for port binds.
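          That last pattern needs root exactly once, to set up the redirect, and can then be left alone (an illustrative iptables rule; the high port 8080 is an assumption):

          ```shell
          # Redirect incoming port 80 to an unprivileged nginx listening
          # on 8080. Needs root once; afterwards nginx itself can run as
          # a normal user, since binding 8080 needs no privilege.
          iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
          ```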

    11. 2

      I love the idea of people being in control of their own ActivityPub presence, thank you for putting in the work to make this possible!

      The announcement didn’t include a link to a running instance to see what it looks like, though in clicking around I eventually found https://seppo.social/demo/o/p/ which I assume is a demo instance.

      To follow an ActivityPub account I normally look for something like @user@example.com which I paste into my Mastodon instance’s search box and then I can follow it. The demo instance says “@demo@seppo.social”, but if I search for that in Mastodon I get a 404 error. Sometimes searching for the permalink of a post is another way to get Mastodon to connect to a new instance, but the permalinks on these posts don’t seem to be the kind that Mastodon likes.

      Also, I’m a little confused by the installation instructions - doesn’t ActivityPub need a bunch of endpoints like /.well-known/webfinger? Does the CGI binary try to create those as symlinks to itself and just hope that it’s running in the root directory or something? What if the server requires CGI binaries to be in a /cgi-bin/ directory?

      1. 2

        looks like the cgi binary is a self-extracting archive of sorts

        1. 1

          indeed it unpacks its assets.

      2. 1

        thanks for trying, mentioning the https://seppo.social/demo is a thing for next time :-).

        Which mastodon gives you a 404? Webfinger used for discovery is in place:

        $ curl -L 'https://seppo.social/.well-known/webfinger?resource=acct:demo@seppo.social'

        And yes, seppo tries to create a symlink for webroot/.well-known/webfinger/.htaccess to the same location within its own dir. It will only touch symlinks. So if you have something there already that remains - as you may have your reasons for it.

        Concerning /cgi-bin/ - I don’t know yet.

        1. 2

          I was trying on the Mastodon instance a friend runs for me, but I just tried again now and it worked beautifully, so I guess it’s fixed?

          Thanks for looking into it!

      1. 4

        many others are less ActivityPub-compliant because they rely on implicitly mandatory additional idiosyncrasies (e.g. nodeinfo) or just don’t respond to proper AP requests, e.g. something as basic as accepting their own follow request. I am currently struggling with this, and with the lack of interest from most developers in being federated with, as I implement seppo.social.

        Other than mastodon, what about https://codeberg.org/streams?

        The problem of interop is also rooted in the standards not having compliance tests a la html/css/atom validator or ssltest.

        So neither visitor nor operator can check the compliance.

        edit: typo

      2. 3

        People seem to have a tendency to settle on one system / product. I imagine there are a lot of reasons, but I think the sorta cyberpunk-ish utopia of lots of custom or semi-custom systems talking to one another just isn’t what most people want. Like, if you’re going to be talking to a bunch of people on Mastodon, why not just use that instead of trying to make an alternative work?

        1. 2

          Short answer: freedom; why not let people use whatever they see fit? If you already have a WordPress blog, you can enter the Fediverse by clicking a toggle.

          1. 2

            Sure, but if it’s extra work, then most people are just going to support the one or two popular options. I mean, maybe they should have called this “Mastodon on Wordpress.com” since, presumably, any server that acts exactly like a Mastodon server will work.

            1. 1

              I see what you are saying; I feel it’s similar to the Linux distro situation, everyone wants their own.

        2. 1

          The nice thing about using a standard instead of a specific product is that the minority that does want to do something different than the people who settle on one system / product can.

          I have implemented my own ActivityPub server (which has an interface that is very unlike any other), and I have added rudimentary ActivityPub support to my own blog engine. People won’t have a tendency to use my implementations, for sure (I am the only user of both), but I still get interoperability - which is fun, for me.

          I made these servers for myself, not to be the most used or the one system / product that people choose, but because a) it’s fun, and b) I get to decide how they work, and can make them just the way I want them.

          I spent quite some time figuring out quirks of various ActivityPub-implementations to be able to talk to them, and sure, that was frustrating at times, but much better than having to use one system / product the majority chose, that doesn’t suit me.

          1. 2

            I definitely support that, but I do think that a company like Automattic (the Wordpress folks) just isn’t going to be more spec-compliant than the most common implementations. After all, Wordpress is a mass market product.

    12. 5

      non-native speaker here - is “eating the world” considered a good thing?

      1. 12

        It’s a bit context-dependent, but I don’t think it necessarily implies a value judgement in either direction.

      2. 11

        I mean it doesn’t sound like it?

        The term “… is eating the world” originates, I believe, from Marc Andreessen, the literally egg-headed VC whose claim to fame is to have been involved with starting Netscape. He wrote a blog post a few years back titled “Software is eating the world”. For some reason people took this to be a good thing.

      3. 4

        Doesn’t tend to be a good thing when I say it, but as others have mentioned there’s some weird context behind it.

    13. 5

      Escaping from GitHub and landing in GitLab, Codeberg, SourceHut, is no better to me ideologically.

      UX aside that’s neglecting the fact that the business models and scale are dramatically different. The first are billion dollar for-profit enterprises, one is a single-person company and one is a pro-bono association.

      Depending on your priorities one or the other may be better. Different they are for sure.

      1. 1

        I mentioned they are all great services and still a large step in the right direction, didn’t I? I don’t think I neglected anything?…

    14. 2

      I agree that using regex works fine in simple cases, as demonstrated in the post. However, in the general case you cannot parse HTML with regex: https://stackoverflow.com/a/1732454 :) So Python with BeautifulSoup is probably a more appropriate tool in the general case, and it lets you adapt the scraper more quickly if the website layout ever changes.
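      As a minimal sketch of that approach (the HTML snippet and the selector are invented for illustration; needs the third-party beautifulsoup4 package):

      ```python
      # Parse HTML with BeautifulSoup and pull out links via a CSS
      # selector, instead of pattern-matching the raw markup.
      from bs4 import BeautifulSoup

      html = """
      <ul class="posts">
        <li><a href="/a">First</a></li>
        <li><a href="/b">Second</a></li>
      </ul>
      """

      soup = BeautifulSoup(html, "html.parser")
      links = [(a.get_text(), a["href"]) for a in soup.select("ul.posts a")]
      print(links)  # [('First', '/a'), ('Second', '/b')]
      ```

      If the site layout changes, usually only the selector string needs updating.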

      1. 3

        I would agree as well. I wouldn’t scrape the web in bash since it can be hard.

        But I just came across this tool yesterday: https://github.com/ericchiang/pup

        It allows you to parse HTML using CSS selectors.

        1. 3

          +1 for pup - I use it for a bunch of things because it’s nice and quick for prototyping (and I’ll switch to goquery if required later.)

      2. 2

        usually when scraping you know what you’re looking for and choose the flexibility required. xmllint can pull small portions out of complex HTML documents where regex is left behind.

        Looking for a general solution can easily distract from the task at hand. Which often isn’t a general one. And python dependencies are a lot of effort to maintain over decades and OS changes.

        It boils down to “choose the right tool”.

    15. 4

      after a decade of scraping using Ruby, later Go, I finally arrived at dash (plus a tiny filter in statically compiled OCaml) as well. IMO unbeatable for long-term operation with next to no maintenance. Happily survives OS upgrades and changes. https://mro.name/privatkopie

      1. 2

        Nice,

        I’ll definitely try dash

    16. 3

      Sounds like a false dichotomy - a name can be both descriptive and short (KeePass, LibreOffice, Document Scanner), or even descriptive and cute (Files, xAutoClick, Cheese). Names which give no hint whatsoever what the thing does are bad. It’s exasperating rather than cute. Having to read a zillion descriptions to understand what should be scannable is the worst.

      Thankfully, others at GNU don’t agree with OP: Files used to be called Nautilus, and Passwords and Keys used to be called Seahorse. Imagine looking at the old names and trying to judge what they do, or whether they’re the main application for their use case or just another copy-cat.

      On a less serious note, some naming suggestions for GNU EMACS:

      • GNU’s Not (Just) EMACS (GNE, pronounced “genie”)
      • Hardcore Text Editor (HaTE)
      • You Ain’t Gonna Remember All The Shortcuts (YAGRATS)
      1. 0

        I find your nick and mine quite ok, despite not being descriptive. And they are names, right? Maybe names aren’t what you want for packages, but rather short descriptions.

        I find names to be human-handleable, practical, good-enough, convenient identifiers. Being descriptive is optional and sometimes whimsical.

        And about judging: looking at names without bothering about the thing itself may be judging a book by its cover.

        1. 2

          This discussion is about product names, not personal names.

    17. 1

      Hard to disagree here. But for those of us that work on web apps, how do you make network requests more lean? Web apps are tremendously IO bound.

      1. 1

        I’d say that, having gotten to the stage of web apps — which involves the equivalent of a modern operating system* running as an application on another modern operating system (*or a hypervisor if you think about tab isolation!) — there is probably not a very straightforward way to read this advice.

        But you can still think about each part by itself — frontend, backend, the API that connects them, storage, etc. — and think about what the most orthogonal/expressive-power-having decomposition of concepts is, constantly rally for reducing complexity over time (since it will creep up on its own, always), have a goal of polished code/design and not just something that works, etc. None of this changes that a webapp will be I/O bound in many respects, but that doesn’t detract from the worth of the effort in any way.

      2. 1

        do fewer of them with less payload.

    18. -1

      Why would I pay money for this?

      1. 12

        You’d probably primarily pay for the git and build service, not the pages one specifically

      2. 10

        You wouldn’t?

        You pay for the git hosting, and you get this included for free.

      3. 1

        to not be sold otherwise.

    19. 2

      I would be impressed if it didn’t talk to the internet and a LAN would suffice.