Threads for rod

  1. 11

    When I chose a server, I considered their federation policy, because I didn’t want to outsource deciding which accounts I should be allowed to follow.

    https://fosstodon.org/about and https://hachyderm.io/about/more both have long lists of suspended servers: “No data from these servers will be processed, stored or exchanged, making any interaction or communication with users from these servers impossible”.

    I prefer the federation policy of https://qoto.org/about/more, which doesn’t suspend any servers. There are a few others like that.

    1. 8

      The unfortunate reality of being on an instance like qoto.org is that other, “heavily moderated” instances will suspend/silence you because of its lax moderation policy.

      1. 6

        The qoto.org admin notes:

        Thankfully the servers blocking us are few and far between and are limited to only the most excessive and aggressive block lists. As I said, QOTO has one of the largest federation footprints on the fediverse,

        https://qoto.org/@freemo/109319817943835261

        1. 1

          Anecdotally, every other server I’ve seriously looked at joining has had QOTO completely blocked/suspended/filtered. There are some things about it I found attractive but it seems like I’d be cut off from a lot of the community I’m looking to find on the fediverse based on where my twitter follows/followers have migrated.

          Alright, I should have double-checked before posting. It looks like this is being corrected, as at least Hachyderm and infosec.exchange do allow it now. (It still appears blocked at Hachyderm, but the issue for removing it is closed.)

        2. 2

          It seems to have a lax federation policy, not a lax moderation policy. It doesn’t block other instances, but it moderates its members’ behavior.

        3. 3

          I can understand your line of thought, but oftentimes there are good reasons to defederate certain instances. For example, pawoo.net (a Japanese instance) allows content which is illegal in other countries. And since Mastodon caches content of remote servers, this makes defederation, or at least restrictions, almost a must.

          1. 3

            Yes, qoto.org’s policy is:

            We do not silence or block other Fediverse instances based on agenda, politics, or opinions held by their staff or users. We only require servers we federate with to follow one simple rule: respect a user’s right to disengage. Offending servers will only be silenced, not blocked, blocks will be reserved for technical assaults only such as DDoS attacks, or legal issues such as sexual abuse and child porn.

            qoto.org doesn’t currently block any servers, but is willing to if needed for the above technical/legal reasons.

            Other instances’ blocklists go beyond these technical/DDoS reasons. The advantage of a federated protocol is being able to pick.

            1. 1

              I was on mastodon.technology, but the whole time I just wanted my own instance. Now that it has shut down, I finally have one, and I can set my own policies.

            2. 2

              Wow, I didn’t know Mastodon instances are censoring each other already.

              I just tried to send a message from qoto.org to hachyderm.io and it did not arrive.

              No error message on the sending side.

              Then I sent a message from indiehackers.social to hachyderm.io and it arrived immediately.

              1. 5

                hachyderm.io has recently removed qoto.org from its blocklist: https://github.com/hachyderm/hack/issues/8

                1. 1

                  But the direct message never arrived.

                  1. 1

                    Why is it still listed on their /about/more page?

                    1. 2

                      Possibly a mistake and/or the lifted ban hasn’t taken effect yet.

                  2. 4

                    Instances have blocked/silenced other instances for a long time. It’s a core part of how the Fediverse views federation.

                    1. 3

                      One of the core ideas of Mastodon is that instances control who they federate with.

                      So you are free to create an account on any instance you like and post anything that stays within the instance’s rules. You just aren’t guaranteed an audience – other people may block you, or other instances may choose not to federate with the instance you’re posting on. This is freedom of speech in its purest form: you can say what you like, and other people can ignore you if they like. Or if they dislike their instance’s policies, they can move to another one or set up their own. But you can never, ever, a million billion times never, force another instance to federate with you or show your posts, or force another user to listen to you.

                  1. 1

                    There are lots of them, but I actually think for this use case distributed is less flexible than federated. With a federated protocol you can host your own, or not, and host it anywhere. With a distributed one you have to host your own, and often end up doing it on an end device like a mobile phone, where it eats the battery.

                    1. 1

                      Are there any examples of battery-friendly P2P protocols?

                      Perhaps protocols could have two tiers of P2P nodes: leaf nodes and supernodes (name stolen from Skype). Phones would be leaf nodes, and would not require constant connectivity. Perhaps the leaf-supernode trust relationship could be assured via Verifiable Data Structures ( https://security.googleblog.com/2021/07/verifiable-design-in-modern-systems.html ).

                      Federated protocols (Mastodon, XMPP, email) are just that, but without the cryptographic assurance between leaf and supernode. In practice that might be sufficient.
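
                      A toy sketch of the verifiable-log idea. Everything here is illustrative (the class and its shape are made up, not any real protocol); real systems such as Certificate Transparency use Merkle trees with compact consistency proofs rather than a plain hash chain:

                      ```python
                      import hashlib

                      def leaf_hash(entry: bytes) -> bytes:
                          # Domain-separate leaves from internal hashes with a prefix byte.
                          return hashlib.sha256(b"\x00" + entry).digest()

                      class HashChainLog:
                          """Toy append-only log with a running head hash. A leaf node could
                          keep only `head` and later check that the supernode replays the
                          same entries in the same order."""

                          def __init__(self):
                              self.head = b"\x00" * 32

                          def append(self, entry: bytes) -> bytes:
                              # New head commits to the old head and the new entry.
                              self.head = hashlib.sha256(self.head + leaf_hash(entry)).digest()
                              return self.head

                      log = HashChainLog()
                      h1 = log.append(b"post 1")
                      h2 = log.append(b"post 2")

                      # A leaf that remembers h2 detects a supernode rewriting history:
                      replay = HashChainLog()
                      assert replay.append(b"post 1") == h1
                      assert replay.append(b"tampered") != h2
                      ```

                      The hash chain only supports linear replay; the verifiable-data-structures post linked above is about making such checks efficient (logarithmic proofs instead of replaying everything).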

                      1. 1

                        Yeah, exactly. Before I got to your last bit I was thinking “you’re just describing federation” heh

                    1. 3

                      Maybe I’m missing something here, but why would you want to expose your hobbyist web server directly to the internet, without fronting it with something like nginx?

                      1. 22

                        You might just be exposing it to a local network.

                        You might prefer to expose a server written in a managed language, rather than nginx (C). The days of needing to be protected by Apache/Nginx are over.

                        You might want to run your server as a non-privileged user (without the complexity of a separate privileged process doing the listening).

                        1. 6
                          • Since I switched from Apache to Nginx, I lost the ability to properly manage the accept-language header.
                          • Configuring Apache or Nginx is a significant chore.
                          • I like to keep things simple, and the most popular web servers out there are not so simple.

                          The better question would be “why would you want to front your hobbyist web server with anything?”

                          1. 1

                            All about that .venv/bin/flask run life

                            1. 1

                              Can you decipher that for me? I have absolutely no clue what you’re hinting at.

                        1. 6

                          If you want a similar tool that also supports asymmetric encryption (air gapped decryption keys), then please also give my one a try:

                          https://github.com/andrewchambers/bupstash

                          1. 1

                            Nice! A state-of-the-art feature, as I understand it, is permitting multiple concurrent writers to the same archive. Does bupstash support this?

                            Duplicacy supports this via lock free techniques: https://github.com/gilbertchen/duplicacy/wiki/Lock-Free-Deduplication
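
                            For what it’s worth, the lock-free trick falls out of content-addressed storage fairly naturally: writing a chunk whose filename is its own hash is idempotent, so concurrent writers can’t corrupt each other. A minimal sketch (the `store_chunk` helper is hypothetical, not Duplicacy’s or bupstash’s actual code):

                            ```python
                            import hashlib
                            import os
                            import tempfile

                            def store_chunk(repo_dir: str, data: bytes) -> str:
                                """Store a chunk under its content hash; safe for concurrent writers.

                                Two writers storing the same bytes race to create the same file
                                with identical contents, so the outcome is the same either way."""
                                digest = hashlib.sha256(data).hexdigest()
                                path = os.path.join(repo_dir, digest)
                                if not os.path.exists(path):
                                    # Write to a temp file, then atomically rename into place.
                                    fd, tmp = tempfile.mkstemp(dir=repo_dir)
                                    with os.fdopen(fd, "wb") as f:
                                        f.write(data)
                                    os.replace(tmp, path)  # atomic; a losing racer overwrites with identical bytes
                                return digest

                            repo = tempfile.mkdtemp()
                            a = store_chunk(repo, b"hello world")
                            b = store_chunk(repo, b"hello world")
                            assert a == b  # same content, same chunk, no lock needed
                            ```

                            Deletion/pruning is the hard part, which is presumably why Duplicacy needed the dedicated lock-free GC scheme linked above.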

                            1. 1

                              It supports multiple concurrent writers with no problem. The lock-free removal is interesting though - maybe something I can add.

                              1. 1

                                Out of curiosity, would such a change sidestep the problem in that issue from a while back?

                                1. 1

                                  Probably not, however I am investigating ways to support filesystems that don’t support file locks. One example is how sqlite3 does it, with an option to use a fallback instead of a filesystem lock (like attempting to create a directory).
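
                                  The mkdir trick works because directory creation is atomic on essentially every filesystem, including many network filesystems that lack POSIX lock support. A rough sketch of the idea (the `DirLock` class is hypothetical, not sqlite3’s or bupstash’s implementation, and a real one would also need stale-lock recovery for crashed holders):

                                  ```python
                                  import os
                                  import tempfile
                                  import time

                                  class DirLock:
                                      """Fallback mutual exclusion via atomic mkdir, for filesystems
                                      without POSIX file locks. Illustrative only: no timeout and no
                                      recovery if a holder dies without releasing."""

                                      def __init__(self, path: str):
                                          self.path = path

                                      def __enter__(self):
                                          while True:
                                              try:
                                                  os.mkdir(self.path)  # atomic: fails if the dir already exists
                                                  return self
                                              except FileExistsError:
                                                  time.sleep(0.05)     # another process holds the lock

                                      def __exit__(self, *exc):
                                          os.rmdir(self.path)

                                  lock_path = os.path.join(tempfile.mkdtemp(), "repo.lock")
                                  with DirLock(lock_path):
                                      pass  # critical section: repository is "locked" here
                                  ```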

                                  1. 1

                                    Ok thanks for the detail. I am really excited to see what you come up with. Bupstash looks absolutely amazing to me.

                            2. 1

                              Another alternative for asymmetric encryption is rdedup, but I haven’t found (or bothered writing myself) the tooling around it to make it a worthwhile backup solution for me.

                              I’m currently using restic on my file server, backing up to Wasabi S3-compatible storage. Works great.

                            1. 7

                              restic is part of the current generation of backup tools that use rolling hashes to get snapshot-oriented block-based deduplication. (See also arqbackup.)

                              If you’re on a previous-gen tool such as duplicity, rsnapshot, Apple Time Machine, or rdiff-backup, then it’s worth a look.

                              1. 5

                                Not only rolling hashes, but also content-defined chunking https://github.com/restic/chunker which is just magic really: deduplicating segments not aligned at block boundaries.

                                Compression was what kept me on Borg, but I’m happy to give restic a try now. I hope they’ve improved their performance issues…

                                1. 3

                                  What sorts of datasets did you miss compression for? In my experience anything significant already has file-level compression (e.g. jpeg, mpeg, git packfiles, …)

                                  1. 2

                                    I haven’t looked into the details, but my laptop backup gets 20% smaller with compression.

                                    1. 1

                                      Plenty of things don’t - think SQLite databases (or any database really), most configuration files, some PDFs.

                                      In my experience even standard lz4 can squeeze 5% out of almost pure media file datasets. zstd, as restic uses, is a bit better. In more realistic applications, the compression ratio tends to be 20-50%. I’ve even got a single ZFS dataset that sits at 800% compression. An upside of zstd is also that decompression speed is not a function of the compression settings, so using zstd at its maximum (bearable) settings (I’ve already upgraded my restic repo) is well worth it for backups.
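
                                      This is easy to sanity-check yourself. A quick experiment with Python’s zlib standing in for zstd (the sample data is made up; zstd typically does somewhat better than zlib, so treat the numbers as a lower bound):

                                      ```python
                                      import os
                                      import zlib

                                      def ratio(data: bytes) -> float:
                                          """Compressed size / original size; lower is better."""
                                          return len(zlib.compress(data, 9)) / len(data)

                                      text = b"server { listen 80; }\n" * 1000  # repetitive config-like text
                                      rand = os.urandom(22000)                 # stands in for jpeg/mpeg payloads

                                      # Config-like text shrinks dramatically; already-compressed-looking
                                      # (random) data stays at roughly 1.0.
                                      print(f"text: {ratio(text):.3f}  random: {ratio(rand):.3f}")
                                      ```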

                                    2. 1

                                      I just read the article linked from that github (https://restic.net/blog/2015-09-12/restic-foundation1-cdc/) and it looks like “content-defined chunking” is just a new term for the same rolling hash concept used by borg (and bitbottle). Is there some reference that can explain the difference?

                                      1. 4

                                        I think the linked article already does, but doesn’t call it out explicitly. They’re completely different concepts, just used together in this case. You could have CDC without a rolling hash (for example, by calculating a SHA hash at each offset), and you can use a rolling hash over whole fixed-size blocks without doing chunking. restic and borg use them both to achieve the same thing.
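
                                        A toy example of the distinction: fixed-size chunking uses no hash at all to pick boundaries, while CDC lets a cheap rolling hash decide them, so an insertion only disturbs nearby chunks. (The hash and parameters below are illustrative, not restic’s actual chunker.)

                                        ```python
                                        import hashlib

                                        def fixed_chunks(data: bytes, size: int = 64):
                                            """Fixed-size chunking: boundaries at absolute offsets."""
                                            return [data[i:i + size] for i in range(0, len(data), size)]

                                        def cdc_chunks(data: bytes, window: int = 16, mask: int = 0x3F):
                                            """Content-defined chunking: a rolling hash over a sliding window
                                            picks boundaries, so chunk edges depend on content, not offsets.
                                            Toy parameters: average chunk around 64 bytes."""
                                            chunks, start, h = [], 0, 0
                                            for i, b in enumerate(data):
                                                h = (h << 1) + b                     # cheap rolling-style update
                                                if i >= window:
                                                    h -= data[i - window] << window  # drop byte leaving the window
                                                if (h & mask) == 0 and i + 1 - start >= window:
                                                    chunks.append(data[start:i + 1])
                                                    start = i + 1
                                            if start < len(data):
                                                chunks.append(data[start:])
                                            return chunks

                                        # Deterministic pseudo-random data, then insert 8 bytes near the front.
                                        base = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(64))
                                        edited = base[:100] + b"INSERTED" + base[100:]

                                        def shared(a, b):
                                            return len(set(a) & set(b))

                                        print(shared(fixed_chunks(base), fixed_chunks(edited)),
                                              shared(cdc_chunks(base), cdc_chunks(edited)))
                                        ```

                                        With fixed-size chunks, the 8-byte insertion shifts every later boundary, so almost nothing deduplicates; with CDC the boundaries resynchronize shortly after the edit and most chunks are shared again.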

                                      2. 1

                                        Do you know by chance if kopia also has “content defined chunking”?

                                    1. 4

                                      The Nix space seems ripe for better hosted tooling to:

                                      • evaluate your nix config on each commit
                                      • build ISOs for each of your machines
                                      • run custom NixOS tests
                                      • manage secrets
                                      • roll out NixOS changes across a fleet
                                      1. 3

                                        IIRC https://hercules-ci.com/ may be (in part) what you are describing.

                                        1. 2

                                          Also found this project https://github.com/cachix/cachix https://www.cachix.org/ when searching for Hercules CI. I wasn’t aware there were hosted binary cache services; I thought only NixOS had a cache service for user convenience. Pretty cool.

                                          1. 1

                                            You can also run a cache on S3 (or an S3-compatible service - I run mine on Cloudflare R2), though the documentation for getting that set up and configured right is atrocious.

                                        2. 2

                                          https://garnix.io/ does a few of these (disclaimer: I started garnix)

                                          1. 2

                                            It seems one of the big value drivers of this and other hosted binary cache services is the real cost savings. Running CI/CD operations on hosted providers such as GitHub isn’t necessarily free for heavy users.

                                        1. 9

                                          There is a lot of unfortunate overloading/confusion around different parts of the Nix ecosystem.

                                          On a more positive note, this problem is only getting attention due to the success of reusing the modular parts of the Nix ecosystem:

                                          • single user installation
                                          • NixOS
                                          • Guix, reusing the Nix store
                                          • using Nixlang to generate YAML

                                          Still, I’m glad to see work on improving the naming.

                                          1. 45

                                            While this post is correct, I think the ship has sailed here. If we need a post defining layers of the ecosystem and boundaries between different things with Nix in the name, then we have a problem. And the problem is not the lack of blog posts carefully defining the terminology.

                                            I’d rather we started explicitly calling the language Nixlang or Somethingelse, for example. Flakes are a great example of not being “Nix recipes” - we call them flakes and the boundaries are obvious.

                                            1. 21

                                              nixlang is such an obvious and good idea, I am going to start using that convention immediately.

                                              1. 3

                                                +1. It’s barely a rename at all, and removes the awkward overloading of “nix” (to refer to the language and the wider ecosystem).

                                                1. 3

                                                  Same here - I had been using “Nix the language” (along with “Nix the package manager”) but nixlang is so much better.

                                                2. 3

                                                  Could part of the problem be the audience? I’m not sure if the author is directing this at Nix developers, such as those building hosted solutions or tooling, or at the general developer the Nix community is trying to onboard. The blog site is called “Haskell for all”, so I’m torn between “Haskell” and “for all”, as there is some tension in that title, which is probably a good thing. What the author is saying makes a lot of sense and was actually helpful to me, but I care about Nix.

                                                  I mean, a lot of people use Nix tools at the margins. They may not fully rely on it because they haven’t had a chance to invest the time to learn nixlang. So many talk about Nix because it’s mind-blowing, but haven’t spent anywhere close to 1000 hours using it in general. Me included. Hard for it to sink in until then. Nix is marketed to all developers. It’s mentioned in all the major places where developers tend to congregate on social media, such as Hacker News, Twitter, and YouTube. So they may not really get it, because there are so many parts to Nix’s universe.

                                                  I don’t think these smart naming conventions, like calling Nix’s language nixlang or not calling Flakes something like “Nix recipes”, will help the general developer too much. They won’t be using Nix at all until it’s dead simple, like how GitHub made git, or Coinbase made blockchain. They may use it tangentially, or use it and then bail and say “this part is hard or no good”. Nix is not even a blip on the radar in absolute terms - I don’t think the StackOverflow Survey even mentioned it this year. It’s kind of cool to see Nix just crossing critical mass, at least from an infrastructure standpoint, where it’s going to be on the radar very soon in some way.

                                                  Any system with content-addressability and cryptographic concepts exposed to the user is inherently complex. So many moving parts. How many people does anyone know (outside of developer communities online) who are near-experts in git, IPFS, Nix, or the blockchain? And how many are near-experts in all 4 of those, even here? Most developers have no idea what git is beyond basic commits and basic feature-branch merging, let alone what a directed acyclic graph is. Or what a layer 2 (or 3, I can’t keep up) blockchain is, such as with Avalanche subnets.

                                                  1. 3

                                                    I think this is fair. I like the idea of Nix, but I tried it a while back and wasn’t ready to take on the learning curve it presented.

                                                    While I’m somewhat aware of the layers the author describes, I don’t think the “new user” experience separates them well (or at least it didn’t in ~2019). The overall impression I came away with was “Nix is complicated”.

                                                    1. 2

                                                      I see. If you have Node.js projects, check out node2nix. Two commands and you can launch an app with Nix. I develop a serious service with it, although we deploy with a Docker image to the cloud. It’s so much nicer than running docker-compose up and messing with Docker networks and port mapping. There are other projects, like rust2nix (haven’t tried it yet). Surely others exist or are on the way. I think these types of high-level projects are the future for devs like you and me. And AI copilot stuff that can generate flakes (or whatever they invent in the future) for you, or get you over 60% there.

                                                      I’d say Nix is complex - not to nitpick. So one day it will be super simple for end-users, like with node2nix. The dependency hell of npm or Python’s conda or pip is complicated. Complex things are made of simple coherent parts - that’s Nix. I helped a friend rebuild a dependency tree for a Windows Python training model for Linux in WSL. I had to install everything from scratch using conda and pip in a Docker environment. My PC almost blew up from calculating the dependency graph. Not to mention the complications with Docker recognizing the GPUs, which I had to research to solve.

                                                1. 5

                                                  I also love my Pinephone and Pinephone Pro.

                                                  They’re behind alternatives in lots of important ways, such as features and stability.

                                                  They’re ahead in others: privacy and control (cfgmgmt, UI choice, etc.)

                                                  I contribute to the community because I believe in the fundamentals, and that the flaws aren’t fundamental, and can be improved with time.

                                                  1. 11

                                                    Partnering with an OS to provide a decent out-of-box experience is a good idea (though controversial amongst people who want it to be a blank slate) - but Manjaro’s lack of professionalism and lack of mobile focus (versus, say, pmOS) seem questionable.

                                                    1. 14

                                                      The boot loader problems the author describes are also real, and not just an issue for distro maintainers/porters. My PBP has been soft-bricked a handful of times — and is now completely dead barring destructive SMT-soldering surgery I’m not prepared to do — because of the absence of reasonable firmware and boot loader management tools.

                                                      I love that PINE64 has pushed the envelope on affordable, hackable, non-x86 hardware. I’ve bought many of their products as fun hacking devices, and expect I’ll continue to do that.

                                                      What I won’t be doing is using their hardware for any projects where, “I bricked it during a routine OS update” isn’t an acceptable obstacle. So: home media player, sure; small office firewall or backup server, no way. Anything more critical or hands-off than that, forget it.

                                                      1. 10

                                                        An unfortunate symptom of Pine stuff (at least the stuff that runs Linux - the iron and PSU seem pretty OK from what I hear) being nothing more than tinker toys for engineers, and a self-reinforcing cycle of sloppiness that prevents it from achieving any more than that. I’ve heard so many stories about weird design quirks with power management or low quality NAND that really affect the experience.

                                                        I know my PinePhone is slightly infuriating - the default phosh loadout is annoying. On the software side, one example: You try swiping down from the top for notifications or settings? No, you have to tap - and the welcome app that appears on startup tells you about this and links to a two year old GitLab issue. (I want to try Plasma or god forbid, sxmo, but swapping DEs without cleaning it up with a reflash seems unfortunately annoying.)

                                                        1. 1

                                                          This is fixed in Phosh 0.20, which now supports gestures.

                                                    1. 8

                                                      The only problem with lots of custom aliases (or custom keybindings in other programs like editors) is that the muscle memory burns you every time you have to work on a remote machine. I used to go custom-to-the-max with my config, but I’ve gradually shifted back to fewer and fewer aliases, except for the most prevalent build/version-control commands I run dozens of times each day.

                                                      1. 9

                                                        When I need to remote into machines where I can’t set up my shell profile for whatever reason, I just configure ssh to run my preferred shell setup commands (aliases, etc.) as I’m connecting.

                                                        My tools work for me, I don’t work for my tools.

                                                        1. 5

                                                          You mean, for a single session only? Care to share that lifehack? I’m assuming something in ssh_config?

                                                          1. 2

                                                            Yeah, single session only. There are a bunch of different ways to skin this cat - LocalCommand and RemoteCommand along with RequestTTY in ssh_config can help.

                                                            Conceptually you want to do something like (syntax probably wrong, I’m on my phone)

                                                            scp .mypreferredremoterc me@remote:.tmprc; ssh -t me@remote 'bash --rcfile ~/.tmprc -l; rm .tmprc'

                                                            which you could parameterize with a shell function or set up via LocalCommand and RemoteCommand above, or skip the temp file entirely with clever use of an env variable to slurp the rc file in and feed it into the remote bash (with a heredoc or SendEnv/SetEnv)

                                                        2. 2

                                                          Every time I have to work on a remote machine I do the commands through ssh or write a script to do it for me.

                                                          1. 2

                                                            Naming a meta-archive-extractor “atool” doesn’t help either. OP used unzip for this, but that name is overloaded. uncompress is also taken.

                                                            What word would you guys use for aliasing it?

                                                            1. 3

                                                              I use extract as a function that just calls the right whatever based on the filename.
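
                                                              Python’s standard library happens to implement the same dispatch-on-filename idea, if you ever want it in a script rather than a shell function (the `extract` wrapper name here is just for illustration):

                                                              ```python
                                                              import os
                                                              import shutil
                                                              import tempfile

                                                              def extract(archive: str, dest: str = ".") -> None:
                                                                  """Pick the right extractor from the filename
                                                                  (zip, tar, tar.gz, ...), like the shell function
                                                                  described above."""
                                                                  shutil.unpack_archive(archive, dest)

                                                              # Round-trip demo: pack a directory, then extract it elsewhere.
                                                              src = tempfile.mkdtemp()
                                                              with open(os.path.join(src, "hello.txt"), "w") as f:
                                                                  f.write("hi")
                                                              archive = shutil.make_archive(
                                                                  os.path.join(tempfile.mkdtemp(), "demo"), "zip", src)
                                                              dest = tempfile.mkdtemp()
                                                              extract(archive, dest)
                                                              assert open(os.path.join(dest, "hello.txt")).read() == "hi"
                                                              ```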

                                                              1. 2

                                                                I think prezto comes with an x alias, and I like it a lot. It burns easily into the muscle memory.

                                                              2. 2

                                                                To defeat muscle memory when changing tools, I make sure the muscle memory command fails:

                                                                alias unzip='echo "use atool"'

                                                                It doesn’t take many times to break the muscle memory. Then I remove the alias.

                                                                1. 1

                                                                  Is atool there by default on Linux boxes?

                                                                  1. 1

                                                                    Nope. At least I’m not aware of any Linux distro installing it by default.

                                                                      But being installed by default is IMHO totally overrated. The main point is that it is available in many Linux distributions’ repos without having to add 3rd-party repos - at least in Debian and all derivatives like Devuan, Kali or Ubuntu.

                                                                    1. 2

                                                                        I understand, but it’s not the same. If I don’t regularly have a shell there, and don’t have my own dotfiles, I likely want to avoid installing and removing system packages on other people’s systems. When stuff breaks, I want the minimum amount of blame :)

                                                                      Not that this is not a useful tool.

                                                                      1. 1

                                                                          Ok, granted. Working as a system administrator, it’s usually me who has to fix things anyway. And it happens only very, very seldom that something breaks just because you install a command-line tool. (Saying this with about 25 years of Linux system administration experience.)

                                                                          Only zutils can theoretically have an impact, as it renames command-line system tools and replaces them with wrappers. But so far, in the past decade, I’ve never seen any system break due to zutils. (I’ve only seen things not working properly because it was not installed. But that was mostly because I’m so used to it that I take it as a given that zutils is installed. :-)

                                                                        1. 2

                                                                            Yep, different role. I did some freelance work long ago, and learned from (fortunately) my predecessor’s mistake: they hired me to do some work because, I guess, someone before me updated some stuff, and that broke… probably the PHP version? Anyway, their shop didn’t work any more and they were bleeding money till I fixed it. It was one of my early freelance jobs, so that confirmed the age-old BOFH mantra of “if it ain’t broke, don’t fix it”. So given time, I would always explicitly ask permission to do this or that or install the other, if needed.

                                                                          But I went a different route anyway, so even though I am still better than average, I think, I’m neither good nor professional. But I think old habits die hard, so that’s why I’m saying “if this stuff isn’t there by default, you’ll just have to learn your tar switches” :)

                                                                2. 2

                                                                  muscle memory burns you every time you have to work on a remote machine

                                                                  Note that this doesn’t apply to eshell, which the OP is using: if you cd to a remote machine in eshell, your aliases are still available.

                                                                  1. 1

                                                                    Command history and completion suggestions have really helped me avoid new aliases.

                                                                  1. 7

                                                                    I have been trying to package something with Nix for a couple of days now, and the experience is just horrible. The documentation gets worse and more esoteric the deeper you go. Thanks for this article - it will allow me to un-bodge a bodged part of CI (just to bodge it differently, but still).

                                                                    1. 11

                                                                      Yeah, Nix the language is fine overall. What’s most difficult are all the libraries that ship with nixpkgs. It’s impossible to use them without reading the code. And there are small variations that prevent knowledge from being transferable.

                                                                      Hopefully, the article helped bring back a bit more of a Dockerfile-like experience.

                                                                      1. 13

                                                                        Even reading the code is laborious due to dynamic typing and an utter lack of documentation. If some package function takes an argument and passes it to some other function, and you want to know what type that argument is, you need to find every call site for that function or else go down the call stack and find every usage of that parameter. It’s an absolute grind.

                                                                        1. 3

                                                                          +1 to the nixpkgs libraries being tricky to follow/understand.

                                                                          This was a major motivator for my in-progress switch to Guix (which has its own issues).

                                                                          I find the library aspect of Guix much easier to follow, since the libraries are written around records with decent levels of abstraction (see the operating-system record, for example).

                                                                        2. 4

                                                                          While the whole of nixpkgs is complex and lacks good docs, I’ve found it not that deep in the end. There are a few language-specific abstractions to learn, plus the Nix language itself, but then… in the end it’s functions generating shell scripts. The serious problems remain where they are with any packaging: getting a reliable, reproducible environment. So, for example, dealing with the Darwin SDK quickly becomes a bigger problem than Nix itself.

                                                                          This is not a disagreement with your experience; I also think it sucks. Just a note that “more esoteric the deeper you go” flattens out surprisingly quickly. I think I “got” nix quicker than deb/dh.

                                                                          1. 4

                                                                            Ah, then can you help me find out how to change the kernel in NixOS to L4Linux? I hit a cognitive wall around cross-compilation, especially trying to introduce a new architecture (L4Linux seemed to require this, or maybe it looked like a reasonable way forward when I tried doing this).

                                                                            1. 1

                                                                              I think that one would deserve a blog post rather than a lobsters comment :-)

                                                                              1. 1

                                                                                I’m fine whichever way you prefer!

                                                                            2. 1

                                                                              in the end it’s functions generating shell scripts

                                                                              This idea alone scares me

                                                                              1. 2

                                                                                Most of the packaging is running configure, make, yarn, cargo, install and others. It all reduces to firing up a few processes at some point. Why is it scary?

                                                                            3. 3

                                                                              The best documentation is the implementation, for better or for worse

                                                                              1. 2

                                                                                Nix documentation is bad

                                                                                Just how long should it stay like this? This is something I’ve been reading for, what, 2 years…

                                                                                1. 9

                                                                                  The documentation is awful and with the push for Flakes over regular derivations the situation is getting worse. It won’t be better for a while, although what exists is usable. But trying to do most non-trivial things requires finding someone who’s done something similar and using it as an example. I am trying to understand more of the ecosystem so I can start an alternative Nix/NixOS wiki. The current wiki is full of outdated and poorly explained disparate pieces of information and should really be more like ArchWiki.

                                                                                  1. 1

                                                                                    I don’t know how management of these projects work, but can’t they like, just assign 150k for two devs for two years to fix the documentation once and for all?

                                                                                    1. 10

                                                                                      You got 150k laying around and two people who know enough about Nix but don’t care to hack on it directly?

                                                                                      1. 4

                                                                                        You can certainly motivate people using pooled money. Nix macos is being supported through https://opencollective.com/nix-macos so maybe the documentation could be done the same way.

                                                                                        1. 2

                                                                                          Because that’s a technical problem. Documentation is a tougher nut to crack: technical people often don’t even know how to tackle it.

                                                                                  2. 4

                                                                                    Until a hero comes along.

                                                                                    The NixOS project is largely driven by volunteers. Volunteers work on the things they are interested in. And you get to benefit from it for free. That’s the deal.

                                                                                    It’s not some corporation where you allocate human resources to the docs. You need to actually get people interested in solving the problem, and that’s hard. Most people in open source are technical people, and they like to solve technical problems.

                                                                                1. 2

                                                                                  I agree with this effort: I pay for Sourcehut, mirror some repos to Codeberg, and run an internal Gitea instance.

                                                                                  That said, I have a hard time replacing the project discovery that GitHub provides. Many times when looking for a solution I can search GitHub and find an existing project. Search engines aren’t specialized enough for this type of searching, and most of them are so filled with SEO junk as to be useless.

                                                                                  Also do people really consider stars to be some sort of social thing? I star repos because it’s an easy way to bookmark projects I may be interested in later. Browser bookmarks don’t give me the same specialized filtering. I also consider stars useful when comparing similar projects, but only as one of multiple data points. I’ve never considered them to be some sort of status metric.

                                                                                  I’m really looking forward to federated forges, but I’m worried finding projects will be harder. I hope discovery is part of the federation implementation.

                                                                                  1. 4

                                                                                    +1.

                                                                                    I’m going to start querying across multiple sites, as a poor workaround:

                                                                                    “rust (site:github.com | site:gitlab.com | site:sr.ht | site:codeberg.org | site:gitlab.gnome.org)”

                                                                                    Is there a better alternative?

                                                                                    1. 4

                                                                                      We could dump a scrape of all those services to typesense.

                                                                                      1. 5

                                                                                        Or better yet, they could have a standard search endpoint and other parties could fan the queries out and aggregate results.

                                                                                        Start a new forge? Register it for inclusion in the lobsters search engine.
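Absent such a standard endpoint, the fan-out can be approximated client-side with the search APIs the big forges already expose. A sketch in Python; the endpoints and response fields for github.com, gitlab.com, and codeberg.org (a Gitea instance) are my assumptions about their current public APIs, so treat this as a prototype:

```python
import json
import urllib.request

# Per-forge search endpoint template plus a function that maps that
# forge's JSON payload to flat (name, url) pairs.
FORGES = {
    "github": (
        "https://api.github.com/search/repositories?q={q}",
        lambda doc: [(r["full_name"], r["html_url"]) for r in doc["items"]],
    ),
    "gitlab": (
        "https://gitlab.com/api/v4/projects?search={q}",
        lambda doc: [(r["path_with_namespace"], r["web_url"]) for r in doc],
    ),
    "codeberg": (
        "https://codeberg.org/api/v1/repos/search?q={q}",
        lambda doc: [(r["full_name"], r["html_url"]) for r in doc["data"]],
    ),
}

def normalize(forge: str, doc) -> list[tuple[str, str]]:
    """Map one forge's JSON payload to a flat (name, url) list."""
    _, extract = FORGES[forge]
    return extract(doc)

def search_all(query: str) -> list[tuple[str, str]]:
    """Fan the query out to every forge and aggregate the results."""
    results = []
    for forge, (url_tmpl, _) in FORGES.items():
        with urllib.request.urlopen(url_tmpl.format(q=query)) as resp:
            results.extend(normalize(forge, json.load(resp)))
    return results

# Usage (hits the network): search_all("rust")
```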

                                                                                  1. 15

                                                                                    The genie is out of the bottle with copilot-like software. If Github don’t do it, someone else will.

                                                                                    +1 to more Git forge diversity, however. I use Sourcehut.

                                                                                    1. 6

                                                                                      These last few days I’ve seen announcements from Amazon and Salesforce regarding “ML powered code generation”. It’s definitely not exclusive to GH.

                                                                                    1. 11

                                                                                      I really wish Guix wasn’t connected with GNU. It seems like it could give Nix some competition. Unfortunately, it doesn’t have a mainline kernel and is actively hostile towards its use with non-free software.

                                                                                      1. 4

                                                                                        Nonguix exists and has substitute servers that include the mainline kernel and other popular software like steam and Firefox. So what you want in this case is available given a little research and setup.

                                                                                        1. 3

                                                                                          I’ve done this research and found the offerings. They are small and from what I’ve read are actively disparaged in the official communication channels. This means a new user will effectively be told to follow GNU purity or get out. Nix is successful because of the strength of its community contributions, which are unwelcome in Guix. That a mainline kernel and Firefox are unavailable means that the community will forever be artificially limited by the GNU ideology.

                                                                                          I get it, utopia sounds amazing. I also get that utopians alienate almost everybody.

                                                                                          1. 7

                                                                                            This means a new user will effectively be told to follow GNU purity or get out.

                                                                                              Well, as a former “new user” I’ve never felt anything but welcomed by the community surrounding Guix. I’ve run it on hardware with binary blobs and even now run it on a desktop that uses a Radeon card and a CPU with microcode. Past that warning, Nonguix has detailed instructions on how to add their kernel package and a fairly active ticket system. There’s never a need to engage anyone about installing anything if a user has the patience to read the documentation, and usually the roadblocks aren’t related to the nonfree software in question anyway; people are happy to help.

                                                                                            I think ultimately it comes down to how much a person is willing to let the idea of GNU stop them from even trying. Because IMO the artificially limiting factor isn’t the stance of GNU, it’s the willingness of others to engage with them in spite of their prejudice against GNU.

                                                                                            1. 4

                                                                                              Citation?

                                                                                              My understanding is that the relevant guideline is “By contrast, to suggest that others run a nonfree program opposes the basic principles of GNU, so it is not allowed in GNU Project discussions.” https://www.gnu.org/philosophy/kind-communication.en.html

                                                                                              As a Guix user that happily uses nonguix, I’m okay with running non-free software as being considered out-of-scope of the project.

                                                                                              It’s as out-of-scope as discussing sports or mountains.

                                                                                              AIUI, it’s fine to discuss free alternatives to non-free software.

                                                                                              Nix is successful because of the strength of its community contributions, which are unwelcome in Guix. That a mainline kernel and Firefox unavailable means that the community will forever be artificially limited by the GNU ideology.

                                                                                              The kernel is mainline with patches applied (as most distros do). That said, the patches are far more invasive, removing blobs.

                                                                                              The difficulty of running Guix reflects the difficulty of running free software. It shows where the problems are.

                                                                                              Guix would be more popular if it allowed non-free software, but that would be in conflict with its goals.

                                                                                              1. 3

                                                                                                I’m unsure what you want a citation on, but I’ll cite the nonguix readme

                                                                                                Please do NOT promote this repository on any official Guix communication channels, such as their mailing lists or IRC channel, even in response to support requests! This is to show respect for the Guix project’s strict policy against recommending nonfree software, and to avoid any unnecessary hostility.

                                                                                                That last bit about hostility is exactly what I’m talking about. That non-free software is considered as off-topic as sports is precisely my point. It limits the project’s reach, approachability, and friendliness, and is counter to the expectations of the vast majority of users. Users want their hardware to work, and if they can’t access information about non-free software in official channels then they’ll likely just give up.

                                                                                                I started this thread saying I’d like Guix without the GNU ideology. I share many of the same goals, but I also realize that taken to the extreme they provide their own limitations on my “freedom”.

                                                                                                I’m honestly happy it works for you and meets your needs. I’m sure that given the effort it would meet mine. Maybe one day I’ll give it a try, but I’m admittedly turned off by supporting the project with my time.

                                                                                                1. 2

                                                                                                  Guix without GNU is just any of the not official channels (i.e. anywhere except the IRC and mailing list) so just join the matrix group and voila! Dream come true.

                                                                                          2. 3

                                                                                            You can use a mainline kernel trivially if you like, just enable the channel

                                                                                            Also you can use guix on any linuxy OS, it doesn’t have to be used to control the whole OS

                                                                                            1. 3

                                                                                              This is a point often lost when talking about nix or guix.

                                                                                            2. 3

                                                                                              I wonder how much effort it would require to build a similar system on top of an R7RS-compatible Scheme, with a minimal amount of extensions. We can’t divorce Guix from GNU. The Nix language is honestly the worst thing about the Nix package manager.

                                                                                              1. 4

                                                                                                Nix’s power is the package availability, contributed by the community. That it is successful despite the language is a testament to the power of its ideas. Imagine how much farther it could go with a more approachable language underneath.

                                                                                                1. 3

                                                                                                  The Nix language is honestly the worst thing about Nix package manager.

                                                                                                  Funny how opinions differ. I think the Nix language and the code written in it (nixpkgs) is the most important thing about Nix.

                                                                                                  1. 2

                                                                                                    If I understand you, it seems like this boils down to:

                                                                                                    1. How much time are individuals who are capable of reimplementing nix/nixpkgs (and motivated to do it) willing to spend doing so?
                                                                                                    2. Is this less time than it would take those individuals to learn nix?
                                                                                                    1. 2

                                                                                                      Would it be possible to write a mapper, that would simply parse nix package format and write another, more desirable one?

                                                                                                      I don’t know much about the tech underlying nix. With (somewhat-) centralised software packagers it should be relatively easy, but what about nix, how does it work?

                                                                                                      1. 2

                                                                                                        Yes of course, please don’t forget to post here once you’ve done it!

                                                                                                    2. 1

                                                                                                      We can’t divorce Guix from GNU.

                                                                                                      Why not? It’s the easiest project to work with if you really don’t want to deal with upstream, and channels mean you barely need to

                                                                                                  1. 4

                                                                                                    This is pretty neat. What I often want is the opposite of the “mini” mode here: give a command some data and it seeds the relevant torrent without worrying about downloads or ratios: just give the data out as fast as you can to whoever wants it.

                                                                                                    1. 2

                                                                                                      Interesting – if I understand correctly, the idea is that you would pass the location of the data on disk and the infohash to the client and it would proceed to just seed the torrent?

                                                                                                      I would note that if the intention is for direct file-sharing, there are probably better alternatives (like Magic Wormhole).

                                                                                                      1. 6

                                                                                                        Yes, that’s the idea. There are two reasons for this:

                                                                                                        1. To ensure the health of less popular torrents I care about or publish
                                                                                                        2. To saturate internet upstream already being paid for with data that helps torrents I care about rather than leaving it idle
                                                                                                        1. 1

                                                                                                          In the past I’ve done this by running aria2c in a systemd service unit.

                                                                                                    1. 20

                                                                                                      I imagine this means that if you had e.g. Disqus embedded on a bunch of sites, you’d need to log into Disqus in each one. Is that correct?

                                                                                                      (I think I’d be fine with that. Just curious what the user-visible effects are.)

                                                                                                      1. 15

                                                                                                        Yes. And Like-buttons will also break.

                                                                                                        1. 6

                                                                                                          Thankfully those seem to have gone out of fashion somewhat?

                                                                                                          It’s kind of ironic that the centralization / silo-ification of the web (“people just stay on facebook all the time and don’t care about interacting with facebook from embedded widgets on random articles”) is making this amazing privacy improvement palatable for mainstream users.

                                                                                                        2. 10

                                                                                                          i admittedly have a very limited understanding of browser technologies, but everything described in the section on what they’re doing was how i imagined cookies already working in my head. i’m kind of … used to being horrified by browsers, by now, but yeah, learning how things used to work was an eye-opening lesson in how awful most browsers are. holy shit.

                                                                                                          1. 10

                                                                                                            In a better world, the way “things used to work” is how you’d want them to work. Shareable cookies do add value, they’re just very easy to abuse. I also don’t think this technically limits the tracking, though it may require it to make more network requests; it’s hard to stop two cooperating websites from communicating in order to track you, and adtech tracking is hosted by cooperating websites.

                                                                                                            1. 1

                                                                                                              I don’t understand why it couldn’t be a permission. « xxx.com wants to access some of your data from yyy.com [Review][Allow][Block] »

                                                                                                              1. 1

                                                                                                                Well, it’s more that xxx.com wants to access your data from xxx.com, but one xxx.com is direct and one is embedded in yyy.com’s page. The point I’m making is that this is impossible to block if yyy.com and xxx.com are working together, which in the context of ads they always are. As one possible “total cookie protection” break, yyy.com could set a cookie with a unique tracking ID specific to yyy.com, redirect to xxx.com with the unique tracking ID as a URL parameter, and have xxx.com redirect back to it. Your xxx.com and yyy.com identities are now correlated, and neither site had to do anything browsers could reasonably block.
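That bounce can be sketched in a few lines; the domains and IDs here are made up, and the point is that each site only ever touches its own first-party state plus an ordinary redirect:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def yyy_redirect(yyy_user_id: str) -> str:
    """yyy.com redirects the browser to xxx.com, leaking its own ID in the URL."""
    return "https://xxx.com/sync?" + urlencode({"partner_id": yyy_user_id})

def xxx_receive(url: str, xxx_cookie_id: str) -> tuple[str, str]:
    """xxx.com reads the URL parameter and pairs it with its own cookie ID."""
    partner_id = parse_qs(urlparse(url).query)["partner_id"][0]
    return (xxx_cookie_id, partner_id)  # the two identities are now linked server-side

# yyy.com has set a first-party cookie "yyy-id-123" and bounces the browser:
url = yyy_redirect("yyy-id-123")
# xxx.com, which holds its own first-party cookie "xxx-id-456", receives it:
pair = xxx_receive(url, "xxx-id-456")
```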

                                                                                                          2. 9

                                                                                                            As a developer working for a company that makes an embedded video player that’s used across the internet: this semi-breaks some user preferences, like remembering volume and preferred caption language — now they have to be set per embedding domain instead of applying globally when they’ve been set once.

                                                                                                            And it thoroughly breaks our debug flags: during a tech support conversation, we can have users enable or disable certain features to track down where a bug is coming from. The UI for that is a page on our domain (the domain the embed is served from). Now users can set those flags, but they won’t actually do anything, because they won’t be readable on the domain where they’re really needed.

                                                                                                            We could possibly move the UI for that inside of the embed to make it work again, but A) it would look and feel bad, and B) it probably won’t happen for a browser with a 3% share.

                                                                                                            The Storage Access API offers very little help in this context: we can’t have the player pop up a permission request dialog for every user on every player load just to check whether they even have a debug flag set, so there would have to be some kind of hidden “make debugging work right” UI element that would trigger the request.

                                                                                                            1. 6

                                                                                                              Disclaimer: I trust you know this far better than I do, I’m just curious.

                                                                                                              I can see how this Firefox feature breaks that functionality, and it sounds like unfortunate collateral damage.

                                                                                                              For volume control, is that better handled by either the browser or the OS anyway?

                                                                                                              For their preferred caption language, can the browser’s language be inferred from headers?

                                                                                                              If a user wishes to override their browser’s language, it sounds plausible that this should be at the domain-level anyway. Perhaps I want native captions on one site, and foreign captions on a site I’m learning language from?

                                                                                                              And it thoroughly breaks our debug flags: during a tech support conversation, we can have users enable or disable certain features to track down where a bug is coming from. The UI for that is a page on our domain (the domain the embed is served from). Now users can set those flags, but they won’t actually do anything, because they won’t be readable on the domain where they’re really needed.

                                                                                                              How does Safari handle this?

                                                                                                              1. 2

                                                                                                                For volume control, is that better handled by either the browser or the OS anyway?

                                                                                                                Arguable. Browsers don’t do anything helpful that I know of, and the OS sees the browser as one application.

                                                                                                                For their preferred caption language, can the browser’s language be inferred from headers?

                                                                                                                We default to the browser language (which generally defaults to the OS language) but there are reasons why some users tend to select something different for captions. It’s not the end of the world, it’s just annoying.

                                                                                                                How does Safari handle this?

                                                                                                                I’m unsure, sorry. I don’t see a ticket about it, and I don’t have any Safari-capable devices on hand.

                                                                                                              2. 1

                                                                                                                Interesting, thank you. The caption and volume preferences thing sounds annoying. But on the other hand, it won’t be any worse for you than it is for your competitors which is… something, at least.

                                                                                                                You may want to take a look at how YouTube and Brightcove (off the top of my head) handle the debug part of this – right-clicking on a video provides all sorts of debug and troubleshooting information.

                                                                                                                1. 2

                                                                                                                  We have that too, but it’s a different feature. We didn’t put the controls in there because we can give them a nicer presentation if they’re not stuck inside of an iframe :)

                                                                                                            1. 13

                                                                                                              The section on granularity alludes to another principle: “reduce the scope and size of your state”. That is, push state to the leaves of your system, so your higher level processes have less opportunity to fall into a broken state.

                                                                                                              Containerisation is a form of this.

                                                                                                              https://grahamc.com/blog/erase-your-darlings is a realisation of this for a Linux distribution: each boot is from the exact same state, with optional allowlisted state to persist between reboots.

                                                                                                              1. 1

                                                                                                                The whole arm board/SoC thing is a mess now. I guess we need a powerhouse like IBM was in the early PC era.

                                                                                                                1. 5

                                                                                                                  https://tow-boot.org/ does that. The recommended installation approach is to install that to your board’s SPI, which can then boot generic arm64 images.

                                                                                                                  It’s used by ~most Linux mobile distributions, and allows the distributions to publish a single arm64 image.

                                                                                                                  1. 2

                                                                                                                    Oh tow-boot looks really good thank you for the link!

                                                                                                                    1. 2

                                                                                                                      It doesn’t support Rock64 yet, but the developer recently suggested it wouldn’t be much work, and they’d give it a go if others could help verify it: https://github.com/Tow-Boot/Tow-Boot/issues/151

                                                                                                                      Consider offering to verify, if you’re keen!

                                                                                                                1. 4

                                                                                                                  As an alternative to needing containers for sites, folks might want to consider using about:config’s privacy.firstparty.isolate, which runs each ~site in its own isolated container.

                                                                                                                  This provides the same isolation benefits as FF containers, but saves you having to create/manage/open containers. As an example, if you log into facebook.com, then visit messenger.com, messenger.com won’t be aware of your facebook.com session.

                                                                                                                  See also: privacy.resistFingerprinting.
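If you want both prefs to persist across profiles or reinstalls, one option is a `user.js` file in your Firefox profile directory (this is a hypothetical fragment; the pref names are the real about:config keys mentioned above):

```javascript
// user.js — applied on every Firefox start, overriding prefs.js
user_pref("privacy.firstparty.isolate", true);
user_pref("privacy.resistFingerprinting", true);
```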

                                                                                                                  1. 1

                                                                                                                    Thanks - privacy.firstparty.isolate looks interesting; I’ll try that out. I’m actually already using privacy.resistFingerprinting, and I’ve added that to the post.