Threads for NinjaTrappeur

  1. 4

    So if I’m reading this article correctly, the point of this is to remove more potential sources of nondeterminism from Nix. Have there been any demonstrated benefits so far, or is this still all theoretical/robustness/WIP?

    1. 14

      It’s mostly about running nix-built OpenGL/CUDA binaries on a foreign distribution (Ubuntu, Fedora, Debian…). You need a way to inject some sort of GPU driver into the Nix closure; you won’t be able to run a nix-built OpenGL program on a foreign distribution otherwise.

      NixGLHost is an alternative* approach to do this.

      * Alternative to NixGL. NixGLHost is in a very very alpha stage.

      1. 3

        One of my gripes with nixgl is that I have to run all my nix applications via nixgl. If I run a non-nix binary with nixgl it usually doesn’t go well, so I can’t run my whole user session with nixgl and have it propagate to child processes. Is there, for example, some NIX_LD_PRELOAD one could use that could be set system-wide and would be ignored by non-nix binaries?

        1. 2

          To be honest, that’s not a use case I had in mind when exploring this problem space. I’d probably need more than 5 minutes to correctly think about this, take what I’m about to say with a grain of salt.

          My gut instinct is that we probably don’t want to globally mix the GPU Nix closure with the host one. I guess an easy non-solution for this would be to skip the problem altogether by provisioning the Nix binaries through a Nix shell. In this Nix shell, you could safely discard the host library paths and inject the nix-specific GPU libraries directly through the LD_LIBRARY_PATH (via nix-gl or nix-gl-host).

          Now, if you think about it more, the use case you’re describing seems valid UX-wise. I’m not sure what the best way to tackle it would be. The main danger is getting your libraries mixed up. NIX_LD_PRELOAD could be a nice trick, but it’s kind of a shotgun approach: you end up preloading your shim for each and every Nix program, regardless of whether they depend on OpenGL.

          As long as you don’t plan to use CUDA, I think the best approach would be injecting the GPU DSOs through libglvnd. It already provides everything you need to point to your EGL DSOs via the __EGL_VENDOR_LIBRARY_DIRS env variable. There’s no handy way to do that for GLX, but I wrote a small patch you could re-use to do so.
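          To make the libglvnd route concrete, here’s a hedged sketch; the vendor JSON file name and the store path are purely illustrative:

```shell
# libglvnd discovers EGL vendor libraries through small ICD JSON files,
# so you can point it at a Nix-provided driver without touching the
# host's library path. The store path below is illustrative.
vendor_dir=$(mktemp -d)

cat > "$vendor_dir/50_nix_mesa.json" <<'EOF'
{
  "file_format_version": "1.0.0",
  "ICD": {
    "library_path": "/nix/store/...-mesa/lib/libEGL_mesa.so.0"
  }
}
EOF

# libglvnd scans these directories before the system-wide defaults.
export __EGL_VENDOR_LIBRARY_DIRS="$vendor_dir"
```

          An EGL client linked against libglvnd should then load the Nix-provided vendor library first.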

          I’ll try to think more about that, cool use case, thanks for the feedback.

    1. 7

      How would replacing bash with nushell play with bootstrapping of nix and nixpkgs? When comparing guix and nix, guix did quite a good job on that topic and there is really a minimal set of packages to build everything from scratch. I’m wondering if bringing Rust in, just to build nushell, just to build stdenv based on it, would make bootstrapping nearly impossible.

      1. 5

        100% agree, this article completely elides this central question. Bash is semi-trivial to bootstrap!

        Nixpkgs does not bootstrap rustc; we’re currently using a binary distribution:

        Adopting this as a default stdenv would require pushing this massive bindist into the Nixpkgs bootstrap seed. That seed is already massive compared to what Guix has; I don’t think we want to degrade this further.

        Rust is definitely source-bootstrappable; as a matter of fact, Guix manages to do it, so there’s no reason we can’t do the same. The bootstrap chain is pretty long though. On top of what we already bootstrap (gcc, openssl, etc.), we’d need to bootstrap llvm, mrustc, then rust 1.54 -> 55 -> 56 -> 57 -> 58 -> 60 -> 61 -> 62 -> 63 -> 64 -> 65.

        So yeah, pushing this to stdenv would considerably degrade the bootstrap story in any case.

        From my perspective, Bash is a local optimum; I personally wouldn’t change it. It strikes a good balance between being easy to bootstrap and being expressive enough to describe builds.

        If we really want to move to something more modern, Oil could be a more serious contender, they seem to take bootstrapping seriously. There’s a drafted RFC wrt. Oil adoption.

        [Edit]: The question is elided, but I don’t think the author expects this to replace stdenv; at least it’s not mentioned in the article. Don’t take this comment as an overwhelmingly negative “this project is worthless”. Cool hack!

        1. 4

          This made me realize that Rust is fundamentally non-bootstrappable. It’s going to keep producing a release every six weeks for quite a number of years, and Rust release N needs release N-1 to build, so the bootstrap chain, by design, grows linearly with time. So it seems that, in the limit, it is really a choice between:

          • using a fully bootstrapped system
          • using Rust
          1. 2

            Is there a reference interpreter, perhaps? I imagine that that can’t be a complete solution since LLVM is a dependency, but it would allow pure-Rust toolchains to periodically adjust their bootstraps, so to speak.

            1. 2

              There is mrustc, which is written in C++ and allows you to bootstrap Rust. There is also a GCC Rust implementation in the works that will allow bootstrapping.

            2. 1

              In my defense, I do use the term “experimental” several times and I don’t make any suggestion of replacing stdenv. I could be wrong, but I think that flakes are going to decentralize the Nix ecosystem quite a bit. While the Nixpkgs/stdenv relationship is seemingly ironclad, I don’t see why orgs or subsets of the Nix community couldn’t adopt alternative builders. Any given Nix closure can in principle have N builders involved; they’re all just producing filesystem state after all.

              As for bootstrapping, yes, the cost of something like Nushell/Nuenv is certainly higher than Bash, but it’s worth considering that (a) you don’t need things like curl, jq, and coreutils and (b) one could imagine using Rust feature flags to produce lighter-weight distributions of Nushell that cut out things that aren’t germane to a Nix build environment (like DataFrames support).
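              For what it’s worth, the feature-flag idea maps onto standard cargo flags; a hedged sketch, where the feature name is invented purely for illustration and not an actual Nushell feature:

```shell
# --no-default-features drops everything optional; --features re-enables
# only what a Nix build environment would need. "minimal-shell" is a
# hypothetical feature name, purely for illustration.
build_cmd='cargo build --release --no-default-features --features minimal-shell'
echo "$build_cmd"
```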

              1. 1

                Yes the new Oil C++ tarball is 375 kilobytes / 90K lines of compressed, readable C++ source :)


                The resulting binary is about 1.3 MB now. I seem to recall that the nushell binary is something like 10 MB or 50 MB, which is typical for Rust binaries. rustc is probably much larger.

                There was some debate about whether Oil’s code gen needs to be bootstrapped. That is possible, but lots of people didn’t seem to realize that the bash build in Nix is not.

                It includes yacc’s generated output, which is less readable than most of Oil’s generated code.

                1. 1

                  Nushell is certainly larger! Although I would be curious how much smaller Nushell could be made. Given that it’s Rust, you could in principle use feature flags to bake in only those features that a Nix builder would be likely to use and leave out things like DataFrames support (which is quite likely the “heaviest” part of Nushell).

              2. 1

                For sure it would make bootstrapping much harder on things like OpenBSD. Good luck if you are on an arch that doesn’t have Rust or LLVM. That said, I don’t think this would replace stdenv, for sure not any time soon!

                Also the article does mention exactly what you are pointing out:

                it’s nowhere near the range of systems that can run Bash.

                1. 1

                  My question is orthogonal to this, and maybe I should have specified what I mean by bootstrapping. It’s “how many things I have to build first, before I can have a working nixpkgs and build the things users ask for”. So if we assume that nushell runs wherever bash runs, how much more effort is it to build nushell (and Rust and LLVM) than bash? I would guess an order of magnitude more, thus really complicating the initial setup of nixpkgs (or at least getting it to install without any caches).

              1. 8

                Are there any plans for moving Nix flakes from experimental to stable? I see that Nix flakes are all the hype now. Even the linked guide states that:

                Anyone beginning their Nix journey today should center their learning and experimentation around Nix flakes.

                Do you agree with this?

                I’ve been in love with NixOS for some time now but I avoided experimental features so far.

                The thing is, Flakes FOMO starts to creep in but I’d rather avoid putting (even more) time into learning Nix/NixOS/nixpkgs/nix-the-command if flakes are going to be deprecated next spring.

                1. 10

                  The thing is, Flakes FOMO starts to creep in but I’d rather avoid putting (even more) time into learning Nix/NixOS/nixpkgs/nix-the-command if flakes are going to be deprecated next spring.

                  For what it is worth, I can’t fathom a universe where Nix actually removes Flakes. The data I’m seeing and the user interviews I’ve done show an overwhelming number of new Nix users are starting with Flakes because they’re easier and it creates a framework to learn inside.

                  If there is some sort of redesign or flakes2, it wouldn’t kill flakes1.

                  1. 3

                    I agree, it seems like flakes are here to stay.

                    However, they are still considered an unstable feature by Nix’s main developer. The flakes code you write is considered unstable and could break without notice after a Nix update.

                  2. 4

                    I think all the knowledge you get without flakes is still transferable to a flake-based setup. Flakes are nice for a few use cases, most importantly pinning and mixing packages from different sources. If you don’t have urgent needs related to these two things, you can postpone migrating to flakes.

                    Regarding experimental features, I find the new nix command something worth looking into. The CLI is much simpler and nicer.

                    1. 3

                      Part of the issue is that the RFC committee doesn’t want to rubber-stamp flakes just because a lot of people, led by a few influential people, are all using the same unofficial and experimental feature. It makes a mockery of the RFC process and skips over a thorough design review. If flakes are really the way forward, some of all this energy should go into getting the RFC officially pushed through.

                    1. 2

                      Great article!

                      That’s the first time I’ve heard work notes described as a form of “temporal documentation”. It makes a lot of sense.

                      However, I think the workflow described in this page creates a hard dependency on an online service by hard-linking internet URLs into your immutable and long-lived git history. These URLs tend to be very mutable by nature; it’s hard to predict whether they’ll still be available on a long time scale. Any migration would require you to rewrite your whole git history.

                      That’s fine for a small hobby project, but it’s likely to lead to a lot of context loss in the moderately distant future (say 15-20 years) for a semi-major project (like datasette).

                      The git notes UX is pretty bad, which is a shame: they’d be a great way to store this kind of temporal documentation attached to a commit.

                      [Edit]: Scratch that last paragraph, git notes wouldn’t work in a context where you want to store your design before starting to work on the actual implementation.

                      1. 6

                        I touched a bit more on that here:

                        If you’re going to use issues in the way I’m describing here you need to be VERY confident that your organization has a healthy enough respect for institutional knowledge that they’re not just going to throw it all away some day when they switch to a different issue tracking system!

                        One of the reasons I’ve gone all-in on GitHub Issues is that it has a really good API. I have code which exports my issues on a regular basis, both as a form of backup and as a way to run my own queries against them:

                        My plan for if I ever do work on a project that migrates to another issue tracker is to create a static HTML archive of the old issues and then run a script that rewrites the entire commit history so each commit links to the full URL to the archived issue.

                        I investigated exporting issue threads to git notes but was put off that idea when I learned that GitHub doesn’t display notes any more:

                      1. 7

                        Out of curiosity, did you write a package that uses the binary releases out of simplicity, or did you run into issues using the Go tooling in Nixpkgs to build from source? (If the latter, I’d love to hear about them.)

                        1. 4

                          I’m new to Nix. I took the easiest way possible. It would be great if you could show me how to build GtS from source.

                          1. 3
                            { buildGoModule, fetchFromGitHub }:
                            buildGoModule rec {
                              pname = "gotosocial";
                              version = "0.5.2";
                              src = fetchFromGitHub {
                                owner = "superseriousbusiness";
                                repo = "gotosocial";
                                rev = "v${version}";
                                sha256 = "sha256-fQDxU2+sj0QhGOQQRVjKzlyi1PEm/O0B8/V4cac4Kdo=";
                              };
                              vendorSha256 = null;
                            }

                            You can then pkgs.callPackage the above file somewhere in your nix config. Or better, if you feel like spending a bit more time on this: clean up your module and this derivation (add the relevant meta attributes), then open a PR in Nixpkgs adding this derivation and your module.
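                            For instance, assuming the derivation above is saved as ./gotosocial.nix (the file name is just an assumption), a hedged sketch of building it standalone before wiring it into a config:

```shell
# callPackage fills in the buildGoModule and fetchFromGitHub arguments
# from nixpkgs automatically. Requires a working Nix install, hence the
# guard; the file path is illustrative.
expr='with import <nixpkgs> {}; callPackage ./gotosocial.nix {}'

if command -v nix-build >/dev/null 2>&1; then
  nix-build -E "$expr"
fi
```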

                            [Edit]: there’s apparently somebody else working on this. You can potentially team up :)

                            In general, the entry point for finding out this kind of information for a language you’re not familiar with is the nixpkgs manual. In this particular case, here’s the relevant golang section:

                        1. 6

                          But wait, isn’t there that one nonguix project that allows you to install a normal kernel and Steam?

                          Yeah, but talk about that in the main #guix channel and you risk getting banned. GG. You just have to know that it exists and you can’t learn that it exists without knowing someone that tells you that it exists under the table.

                          Has this actually happened? Getting banned for talking about nonguix?

                          1. 9

                            Not sure about getting banned, per se, but it’s explicitly discouraged. The second paragraph of nonguix’s readme:

                            Please do NOT promote this repository on any official Guix communication channels, such as their mailing lists or IRC channel, even in response to support requests! This is to show respect for the Guix project’s strict policy against recommending nonfree software, and to avoid any unnecessary hostility.

                            1. 12

                              even in response to support requests

                              Holy shit, that’s extremely disrespectful to users.

                              1. 2

                                I would recommend actually reading the help-guix archives to see how often support issues are created and how many of users’ issues are ignored or declared out of place.

                              2. 12

                                I admit I fucked up and misunderstood the rules. My complaint now reads:

                                Yeah, but talk about that in the main #guix channel and you get told to not talk about it. You just have to know that it exists and you can’t learn that it exists without knowing someone that tells you that it exists under the table, like some kind of underground software drug dealer giving you a hit of wifi card firmware. This means that knowledge of the nonguix project (which may contain tools that make it possible to use Guix at all) is hidden from users that may need it because it allows users to install proprietary software. This limits user freedom from being able to use their computer how they want by making it a potentially untrustable underground software den instead of something that can be properly handled upstream without having to place trust in too many places.

                              3. 9

                                That’s made up. Like most of that article, it’s full of misconceptions; I can’t tell whether or not it was written in good faith.

                                But hey, outrage is good for attracting attention. Case in point: I’m commenting out of outrage.

                                1. 6

                                  But hey, outrage is good to attract attention.

                                  Hehe, yeah, the FSF and SFC use outrage constantly! I get emails all the time telling me that Microsoft and Apple are teaming up to murder babies or whatever. It’s pretty much all they have left at this point, and I say this as someone who donated and generally supported their mission for many, many years (which is why I still get the emails).

                                  1. 3

                                    Hyperbole and untruths are like pissing in your shoes to keep warm; they backfire once the initial heat is gone.

                                2. 6

                                  When I wrote that bit I made the assumption that violating the rules of the channel could get you banned. I admit that it looks wrong in hindsight, so I am pushing a commit to amend it.

                                  1. 2

                                    Not to my knowledge. No. I’ve seen it tut-tutted but I’ve yet to see someone get banned.

                                    1. 1

                                      That’s 100% messed up if true.

                                    1. 4

                                      I get a 403 Forbidden.

                                      1. 7

                                        Me too now. As they themselves say:

                                        As a Professional DevOps Engineer I am naturally incredibly haphazard about how I run my personal projects.


                                        1. 6

                                          Sorry, I think my Private Cloud has a bad power supply which is having a knock-on effect of upsetting the NFS mounts on the webserver. I’m acquiring a replacement right now, and in the meantime I am going to Infrastructure-as-Code it by adding a cronjob that fixes the NFS mount.

                                          1. 1

                                            Me too.

                                            You can use

                                          1. 18

                                            I’ve been reading the Gemini specification, as well as this post, and my conclusion is that it’s just a worse version of HTTP 1.1 and a worse version of Markdown.

                                            1. 6

                                              worse version of HTTP 1.1

                                              Strong disagree. TLS + HTTP 1.1 requires performing an upgrade dance involving quite a few extra steps. The specification is also pretty empty regarding SNI management. Properly implementing that RFC is pretty demanding, and there are a lot of blind spots left to the implementer’s better judgement.

                                              In comparison, the Gemini TLS connection establishing flow is more direct and simpler to implement.

                                              TLS aside, the fact that you can say

                                              I’ve been reading the Gemini specification

                                              sounds like a clear win to me. The baseline HTTP 1.1 RFC is already massive, let alone all the extensions required to work in a modern environment.
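                                              For comparison, the whole Gemini exchange fits in a few lines; a hedged sketch, with the hostname purely illustrative:

```shell
# A complete Gemini request is just the absolute URL terminated by CRLF,
# sent over a fresh TLS connection; the server responds once and closes.
url='gemini://example.org/'

# The exact bytes that go on the wire:
printf '%s\r\n' "$url" | od -c | head -n 2

# Sending it for real is a single TLS session on port 1965, e.g.:
#   printf '%s\r\n' "$url" | openssl s_client -quiet -connect example.org:1965
```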

                                              1. 7

                                                I agree that the specification is simple to read, but the protocol itself is too limited, and I don’t find it suitable for the real world.

                                                For example, I prefer HTTP’s optional end-to-end encryption: when working with internal routers within an infrastructure, dealing with certificates is a PITA and a completely unnecessary bump in complexity and performance overhead inside an already-secured network.

                                                I also disagree that “extensibility is generally a bad idea”, as the article says. Extensibility can work if you do it properly, like anything else in software engineering.

                                                EDIT: Or the requirement of closing the connection and re-opening it with every request, and all the handshakes that means.

                                                For clarity about what I think could be an actual improvement: I would prefer an alternative evolution of HTTP 1.0, with proper readable specs, test suites, clearer https upgrade paths, etc; instead of an evolution of Gopher.

                                                1. 4

                                                  TLS + HTTP just requires connecting to port 443 with TLS. I’ve worked on lots of things using HTTP for >20 years and I don’t think I’ve ever seen the upgrade protocol used in real life. Is it commonly used by corporate proxies or something like that?

                                                2. 6

                                                  When I looked at it (months ago), I got the same impression. I find this article irresponsible, as Gemini does not merit the support.

                                                  Gemini’s intentions are good. The specification isn’t. For instance, not knowing the size of an object before receiving it makes it a non-starter for many of its intended purposes.

                                                  This is an idea that should be developed properly and openly, allowing for input from several technically capable individuals. Not one person’s pet project.

                                                  I’ll stick to Gopher until we can actually do this right. Gemini doesn’t have the technical merit to be seen as a possible replacement to Gopher.

                                                  1. 3

                                                    It does accept input from individuals. I was able to convince the author to expand the number of status codes, to use client certificates (instead of username/password crap) and to use the full URL as a request.

                                                  2. 4

                                                    I prefer to think of Gemini as a better version of gopher with a more sane index page format.

                                                  1. 4

                                                    Interesting. I’ve been porting NixOS to the NanoPi M4 v2, and this looks quite a bit less painful in some ways. Added to my “try this out some day” list of stuff to look at.

                                                    1. 7

                                                      It is!

                                                      Conceptually speaking, Guix got a lot of things right. Then again, Nix predates Guix; the opposite would have been concerning.

                                                      The documentation is amazing, and they are very careful about avoiding tooling vendor lock-in. Their clean API around the toolchains and the abstractions around derivations are the selling point for me. The language is also a standard one and comes batteries-included tooling-wise.

                                                      There’s a catch however: the Guix package set is much smaller. You won’t have all the nice language-specific build systems you have with Nix, and overall you’re likely to miss some stuff packaging-wise. Also: no systemd (I guess that might be a selling point for some people though).

                                                      1. 1

                                                        Yep yep, this just intrigued me since I’ve gotten a bit deep into the guts of how NixOS SD images are built. It’s honestly not too big of a deal; the overall function for doing this stuff is probably just in need of a bit of a refactor. It strikes me as something that was evolved, not planned. Which is fine, just not as polished as it could be.

                                                        The Scheme bit made a lot more sense to me off the bat, versus having to do a fair amount of digging to figure out how to adjust the partition sizes and make sure that my custom u-boot and its SPL files etc. are getting put in the “blessed” right spot for this board (still not sure I am doing it right tbh, as it’s not booting).

                                                        And the systemd bit is water under the bridge to me; that ship has sailed. I’ve had to port/add custom derivations to nixpkgs a lot, so I’m not too averse to that if needed.

                                                        My real reason for all this is i’m building a little k8s cluster out of arm boards for shits, so nixops is my ultimate goal here.

                                                    1. 2

                                                      Nice post! Actually nice blog altogether, I started to binge read it tonight!

                                                      I couldn’t help but notice something a tiny bit ironic though:

                                                      ~ » nslookup -query=A                                                                                                                 
                                                      Non-authoritative answer:
                                                      ~ » nslookup -query=AAAA                                                                                                              
                                                      Non-authoritative answer:
                                                      *** Can't find No answer
                                                      1. 5

                                                        After using it for a while I started to consider the Nix expression language one of the best-designed syntaxes ever. It doesn’t have separators for records or lists, so it’s friendly to diff. The multiline strings are perfect. Writing nested records with “a.b.c” keys is super convenient. The lambda syntax is as simple as possible. Etc etc.

                                                        1. 9

                                                          It doesn’t have separators for records or lists so it’s friendly to diff.

                                                          Records are separated with the ; symbol.

                                                          As for lists, I beg to disagree. List elements are separated by whitespace, which is unfortunate since whitespace is also used to represent function application. It means you have to be careful to wrap your function applications in parentheses every time you use one inside a list.

                                                          That’s an easy trap to fall into, especially in multi-line statements. Add the lazy nature of the language on top of that and you can end up with stack traces that are pretty difficult to decipher, especially if this happens in a NixOS machine description :/.

                                                          I see a lot of newcomers falling into this trap on IRC.


                                                          let pkgs = [
                                                            import ./local-pkgs.nix
                                                          ]; in ...

                                                          Instead of

                                                          let pkgs = [
                                                            (import ./local-pkgs.nix)
                                                          ]; in ...
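                                                          The difference is easy to see from the command line; a hedged sketch (it needs a Nix install, hence the guard, and relies on list elements being lazy, so the imported file doesn’t even have to exist):

```shell
# builtins.length does not force list elements, so ./local-pkgs.nix never
# actually gets imported here; only the element count differs.
if command -v nix-instantiate >/dev/null 2>&1; then
  # Two elements: the `import` builtin itself, then the path.
  nix-instantiate --eval -E 'builtins.length [ import ./local-pkgs.nix ]'
  # One element: the parenthesized function application.
  nix-instantiate --eval -E 'builtins.length [ (import ./local-pkgs.nix) ]'
fi
```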
                                                          1. 2

                                                            Sorry, by separators I meant comma-like separators where the last item doesn’t end with one.

                                                            The issue you mentioned is real, yeah. I still love the syntax.

                                                        1. 14

                                                          While it might be true that 1500 bytes is now the de facto MTU standard on the Internet (minus whatever overhead you throw at it), all is not lost. The problem is not that we lack the link-layer capabilities to offer larger MTUs; the problem is that the transport protocol has to be AWARE of them. One mechanism for finding out what MTU size is supported by a path over the Internet is an algorithm called DPLPMTUD. It is currently being standardized by the IETF and is more or less complete. There are even plans for QUIC to implement this algorithm, so if we end up with a transport that is widely deployed and also supports detection of MTUs > 1500, we might actually have a chance to change the link-layer defaults.

                                                          Fun fact: all of the 4G networking gear actually supports jumbo frames; most of the providers just haven’t enabled support for it since they are not aware of the issue.

                                                          1. 6

                                                            Wow, it might even work.

                                                            I can hardly believe it… but if we were able to send jumbo frames and most users’ browsers supported receiving them, it might get deployed by ISPs as they look for benchmark karma. Amazing. I thought 1500 was as invariant as π.

                                                            1. 5

                                                              I was maintainer for an AS at a previous job and set up a few BGP peers with jumbo frames (4470). I would have made this available on the customer links as well, except none of them would have been able to receive the frames. They were all configured for 1500, as is the default in any OS then or today. Many of their NICs couldn’t handle 4470 either, though I suppose that has improved now.

                                                              Even if a customer had configured their NIC to handle jumbo frames, they would have had problems with the other equipment on their local network. How do you change the MTU of your smartphone, your media box or your printer? If you set the MTU on your Ethernet interface to 4470 then your network stack is going to think it can send such large frames to any node on the same link. Path MTU discovery doesn’t fix this because there is no router in between that can send ICMP packets back to you, only L2 switches.

                                                              It is easy to test. Try pinging your gateway with ping -s 4000 followed by your gateway’s address. Then change your MTU with something like ip link set eth0 mtu 4470 and see if you can still ping your gateway. Remember to run ip link set eth0 mtu 1500 afterwards (or reboot).
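                                                              A note on the arithmetic behind those ping sizes: `-s` sets only the ICMP payload, so the IP and ICMP headers must be added before comparing against the MTU. A quick sanity check (assuming IPv4 without options):

```python
IP_HEADER = 20    # IPv4 header, no options
ICMP_HEADER = 8   # ICMP echo header

def on_wire(payload):
    """IP packet size produced by `ping -s <payload>`."""
    return payload + ICMP_HEADER + IP_HEADER

print(on_wire(4000))  # 4028 bytes -- needs a jumbo-capable link
print(on_wire(1472))  # 1500 bytes -- the largest unfragmented ping
                      # on a standard Ethernet MTU
```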

                                                              I don’t think that DPLPMTUD will fix this situation and let everyone have jumbo frames. Reading the following paragraph as a former network administrator, they are basically saying that jumbo frames would break my network in subtle and hard-to-diagnose ways:

                                                                 A PL that does not acknowledge data reception (e.g., UDP and UDP-
                                                                 Lite) is unable itself to detect when the packets that it sends are
                                                                 discarded because their size is greater than the actual PMTU.  These
                                                                 PLs need to rely on an application protocol to detect this loss.

                                                              So you’re going to have people complain that their browser works, but nothing else does. I wouldn’t enable jumbo frames even if DPLPMTUD were everything that was promised as a fix. That said, it looks like DPLPMTUD will be good for the Internet as a whole, but it does not really help the argument for jumbo frames.

                                                              And I don’t know if it has changed recently, but the main argument for jumbo frames at the time was actually that they would lead to fewer interrupts per second. There is some overhead per processed packet, but this has mostly been fixed in hardware now. The big routers use custom hardware that handles routing at wire speed and even consumer network cards have UDP and TCP segmentation offloading, and the drivers are not limited to one packet per interrupt. So it’s not that much of a problem anymore.

                                                              Would have been cool though and I really wanted to use it, just like I wanted to get us on the Mbone. But at least we got IPv6. Sorta. :)

                                                              1. 3

                                                                If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                                                                Back when I tried running an email server on there, I actually did run into trouble with this, because some bank’s firewall blocked ICMP packets, so… I thought you’d like to know, neither of us used “jumbo” datagrams, but we still had MTU trouble, because their mail server tried to send 1500 octet packets and couldn’t detect that the DSL link couldn’t carry them. The connection timed out every time.

                                                                If your application can’t track a window of viable datagram sizes, then your application is simply wrong.

                                                                1. 2

                                                                  If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                                                                  It’s even worse: in the current situation[1], your system’s MTU won’t matter at all. Most network operators straight-up MSS-clamp your TCP packets downstream, effectively overriding your system’s MTU.

                                                                  I’m very excited by this draft! Not only will it fix the UDP situation we currently have, it will also make tunneling connections much easier. That said, it also means that if we want to benefit from it, network administrators will need to stop MSS clamping. I suspect this will take quite some time :(

                                                                  [1] PMTU won’t work in many cases. Currently, you need ICMP to perform a PMTU discovery, which is sadly filtered out by some poorly-configured endpoints. Try to ping for instance ;)
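                                                                  To make the MSS clamping mentioned above concrete: the middlebox rewrites the MSS option in your TCP SYN so that full-sized segments fit its link, whatever MTU the endpoints believe in. A rough sketch of the computation, assuming IPv4 with no IP or TCP options:

```python
IP_HEADER = 20
TCP_HEADER = 20

def clamp_mss(advertised_mss, link_mtu):
    """MSS a middlebox would rewrite into a SYN so that full-sized
    segments fit inside `link_mtu`."""
    link_mss = link_mtu - IP_HEADER - TCP_HEADER
    return min(advertised_mss, link_mss)

# A host on a 1500-MTU LAN advertises MSS 1460; a PPPoE link
# (MTU 1492) clamps it down to 1452:
print(clamp_mss(1460, 1492))  # → 1452
```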

                                                                  1. 2

                                                                    If your system is set up with an mtu of 1500, then you’re already going to have to perform link mtu discovery to talk with anyone using PPPoE. Like, for example, my home DSL service.

                                                                    Very true, one can’t assume an MTU of 1500 on the Internet. I disagree that it’s on the application to handle it:

                                                                    If your application can’t track a window of viable datagram sizes, then your application is simply wrong.

                                                                    The network stack is responsible for PMTUD, not the application. One can’t expect every application to track the datagram size on a TCP connection. Applications that use BSD sockets simply don’t do that: they send() and recv() and let the network stack figure out the segment sizes. There’s nothing wrong with that. For UDP the situation is a little different, but IP can actually fragment large UDP datagrams and PMTUD works there too (unless, again, broken by bad configurations, hence DPLPMTUD).
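                                                                    For a concrete picture of what IP fragmentation of a large UDP datagram looks like: every fragment gets its own IP header, fragment offsets are counted in 8-byte units, and only the first fragment carries the UDP header. A simplified sketch (ignoring flags and reassembly):

```python
IP_HEADER = 20

def fragment(datagram_len, mtu):
    """Split an IP payload of `datagram_len` bytes (UDP header included)
    into (offset, length) fragments for a link with the given MTU."""
    # Per-fragment payload must be a multiple of 8 bytes, since
    # fragment offsets are expressed in 8-byte units.
    max_payload = (mtu - IP_HEADER) // 8 * 8
    frags, offset = [], 0
    while offset < datagram_len:
        length = min(max_payload, datagram_len - offset)
        frags.append((offset, length))
        offset += length
    return frags

# A 4008-byte datagram (8-byte UDP header + 4000 bytes of data)
# over a 1500-MTU link becomes three fragments:
print(fragment(4008, 1500))  # [(0, 1480), (1480, 1480), (2960, 1048)]
```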

                                                                    1. 3

                                                                      I disagree that it’s on the application to handle it

                                                                      Sure, fine. It’s the transport layer’s job to handle it. Just as long as it’s detected at the endpoints.

                                                                      For UDP the situation is a little different, but IP can actually fragment large UDP datagrams and PTMUD works there too

                                                                      It doesn’t seem like anyone likes IP fragmentation.

                                                                      • If you’re doing a teleconferencing app, or something similarly latency-sensitive, then you cannot afford the overhead of reconstructing fragmented packets; your whole purpose in using UDP was to avoid overhead.

                                                                      • If you’re building your own reliable transport layer, like uTP or QUIC, then you already have a sliding size window facility; IP fragmentation is just a redundant mechanism that adds overhead.

                                                                      • Even DNS, which seems like it ought to be a perfect use case for packet fragmentation, doesn’t seem to work well with it in practice, and it’s being phased out in favour of just running it over TCP whenever the payload is too big. Something about it acting as a DDoS amplification mechanism, and being super-unreliable on top of that.

                                                                      If you’re using TCP, or any of its clones, of course this ought to be handled by the underlying stack. They promised reliable delivery with some overhead, they should deliver on it. I kind of assumed that the “subtle breakage” that @weinholt was talking about was specifically for applications that used raw packets (like the given example of ping).

                                                                      1. 1

                                                                        You list good reasons to avoid IP fragmentation with UDP, and in practice people don’t use or advocate IP fragmentation for UDP. Broken PMTUD affects everyone… ever had an SSH session that works fine until you try to list a large directory? Chances are the packets were small enough to fit in the MTU until you listed that directory. As breakages go, that one’s not too hard to figure out.

                                                                        The nice thing about the suggested MTU discovery method is that it will not rely on other types of packets than those already used by the application, so it should be immune to the kind of operator who filters everything he does not understand. But it does mean some applications will need to help the network layer prevent breakage, so IMHO it doesn’t make jumbo frames more likely to become a thing.

                                                                        It’s also a band-aid on an otherwise broken configuration, so I think we’ll see more broken configurations in the future, with fewer arguments to use on the operators, who can now point to how everything is “working”.

                                                              1. 6

                                                                According to zmap it takes 45min to scan all of IPv4 on a Gigabit connection. That could be a slow but interesting way to reliably bootstrap the network in case of an attack.

                                                                1. 1

                                                                  I like the idea.

                                                                  The 45-minute scan advertised on the homepage is probably the result of a TCP SYN scan, though. You’ll probably need to add an application-layer scanner on top of that (zgrab?). Not sure how that would affect the overall latency of the scan :/
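                                                                  A back-of-the-envelope check on that figure, assuming each minimum-size SYN probe occupies 84 bytes of wire time on Gigabit Ethernet (64-byte frame plus preamble and inter-frame gap):

```python
WIRE_BYTES = 84            # min Ethernet frame + preamble + inter-frame gap
LINK_BPS = 1_000_000_000   # Gigabit Ethernet

pps = LINK_BPS / (WIRE_BYTES * 8)     # ~1.49M probes per second
seconds = 2**32 / pps                 # one probe per IPv4 address
print(f"{seconds / 60:.0f} minutes")  # roughly 48 minutes, in the same
                                      # ballpark as the advertised 45
```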

                                                                1. 1

                                                                  There’s also a detailed writeup about the business card design on the very same blog.

                                                                  I’d be interested to understand the hardware design of the board. The post sadly doesn’t cover that part :(

                                                                  1. 31

                                                                    Nice ad. :|

                                                                      1. 3

                                                                        Also at the moment according to the pricing page, payment is optional.

                                                                      2. 16

                                                                        It’s advertising an open source project, Source Hut, but also Janet, Zig, Nim, Samurai, Sway and other open source projects I like. Projects that get very little payment or gratitude for the work they do.

                                                                        Yes, Sourcehut is a service too, a useful one at that. They support BSD well, unlike other companies; how else are they supposed to let people know this fact? Should they be paying largely unethical companies like Google for ad space? Or should they just be more subversive so people don’t complain?

                                                                        Let me put it this way: if every open source project was also a business, should we hate on every single one for advertising? This post didn’t game the upvotes to get on the front page; people upvoted it by themselves.

                                                                        I suppose there could be a tag ‘sponsored’ so people can ignore them. Not suggesting allowing lower quality from sponsored content either, probably the inverse.

                                                                        1. 21

                                                                          The issue is that I see a Sourcehut “ad” every few days: “Sourcehut supports OpenBSD”, “Sourcehut supports migrations from Bitbucket”, “Sourcehut supports ASCII”. Yeah … we got it … A lot of these posts don’t have a lot of meat to them and at this point, it’s just getting spammy.

                                                                          1. 16

                                                                            Yeah … we got it … A lot of these posts don’t have a lot of meat to them and at this point, it’s just getting spammy.

                                                                            They don’t always have a lot of “meat,” but posts about SourceHut represent a capitalist ideology I can actually get behind. A single proprietor, working their ass off to try to change the software world, which has gotten extremely out of hand with regards to complexity, and the marketing of products that fix the complex systems we don’t need, at all, to begin with.

                                                                            What’s the difference between a SourceHut post and a post that complains that, as an open source author, I am not compensated fairly? Hint: one should be inspiration, because the other is actually possible.

                                                                            1. 0

                                                                              SourceHut represent a capitalist ideology

                                                                              Payment for the service is optional, so no, it doesn’t. All the things that make Sourcehut great, in my opinion, are the ways in which it denies capitalist ideology: open source software, optional payments, etc.

                                                                              1. 3

                                                                                optional payments

                                                                                It’s optional, right now, while in alpha. It doesn’t seem like the plan is for that to last forever. Also, if it wasn’t clear, I’m extremely in favor of this model of charging people for a service but releasing your software under a permissive license.

                                                                            2. 10

                                                                              Just let me offer another data point here. It was thanks to the “migration from Bitbucket” post that I found out Sourcehut had a nifty script to help migrations from Bitbucket, and that saved hours of work as I migrated 20+ repos effortlessly. This current post made me realize that maybe I should be paying more attention to their CI system, as it looks much simpler than others I’ve used. So, in the end, I’m appreciating these blog posts a lot. Yes, they are related to a commercial venture, but so what? You can self-host it if you’re not into SaaS outside your control. If we set a hard line like this, then it becomes impossible to post about any commercial project at all. It is already hard to monetize FOSS projects to make them sustainable; now imagine if they are not even allowed blog posts…

                                                                              1. 4

                                                                                Same here. This string of posts made me aware of sourcehut and when I had to migrate from bitbucket, I then gave them a hard eval. I like their human, non-shitty business model of “I give them money and they give me services”, and that their products are professionally executed and no-frills.

                                                                                I don’t know how to reconcile it. These articles were very useful to me when most product ads weren’t, and I’d be disappointed if this site became a product advert platform. I think people are right to flag it as almost-an-ad, but in this one vendor’s case I’m glad I saw them and am now a happy Sourcehut customer.

                                                                              2. 2

                                                                                every few days

                                                                                A lot of these posts don’t have a lot of meat to them and at this point, it’s just getting spammy.

                                                                                That is fair I guess. I’ll have to check the guidelines on things like that.

                                                                              3. 6

                                                                                if every open source project was also a business, should we hate on every single one for advertising?

                                                                                Yes. I flag those too. Advertising is a mind killer.

                                                                                1. 6

                                                                                  But there is no other way to get large numbers of people to know about something, following your advice would be suicide.

                                                                                  I also hate advertising, I just don’t see a way around it. I won’t argue further against banishing advertising from Lobsters, at least.

                                                                                  1. 7

                                                                                    But there is no other way to get large numbers of people to know about something, following your advice would be suicide.

                                                                                    All these conversations are done like it’s all or nothing. We allow politics/marketing/etc on Lobsters or… it never happens anywhere with massive damage to individuals and society. Realistically, this is a small site with few monetary opportunities for a SaaS charging as little as he does. If the goal is spreading the word, it’s best done on sites and platforms with large numbers of potential users and (especially) paying customers. Each act of spreading the word should maximize the number of people they reach for both societal impact and profit for sustainability.

                                                                                    Multiple rounds on Lobsters means, aside from the first announcement with much fan fare, the author sacrificed each time opportunities to reach new, larger audiences to show the same message again to the same small crowd. Repeating it here is the opposite of spreading the word. Especially since most here that like Sourcehut are probably already following it. Maybe even buying it. He’s preaching to the choir here more than most places.

                                                                                    Mind-killer or not, anyone talking about large-scale adoption of software, ideology, etc should be using proven tactics in the kinds of places that get those results. That’s what you were talking about, though. I figured he was just trying to show latest BSD-related progress on one of his favorite tech forums. More noise than signal simply because he was sharing excitement more than doing technical posts or focused marketing.

                                                                                  2. 5

                                                                                    Every blog post is an ad for something. It may not be a product, directly, but it’s advertising an idea, the person, or persons the idea was thought by, the writing (which, btw can be a product) of the author, etc.

                                                                                    If you want to sincerely flag advertising, you might as well get offline—it’s pervasive.

                                                                                    1. 3

                                                                                      It may not be a product, directly, but it’s advertising an idea

                                                                                      Not a native English speaker here. I may be wrong, but after looking at the dictionary definition



                                                                                      A paid notice that tells people about a product or service.

                                                                                      it seems that an advertisement has a precise definition: an ad is directly related to a paid product, not an idea.

                                                                                      1. 1

                                                                                        it seems that an advertisement has a precise definition: an ad is directly related to a paid product, not an idea.

                                                                                        This is a fairly pedantic interpretation. A person promotes an idea to sell something, even if only themselves. That “sale” might only come later in the form of a job offer, or support through Patreon, etc. But to say that you can’t advertise an idea is wrong. The cigarette industry’s ad campaigns have always been about selling an image, an idea that if you smoke you become part of something bigger. Oh, and btw, you’ll probably remember the brand name and buy that kind instead of something else.

                                                                                        iPods were sold on the very basis of white headphones, TO THE POINT that people without iPods started wearing white headphones to be part of the “club.” Advertisements sell you the idea of a better life, and hopefully you’ll buy my product to get it.

                                                                                2. 20

                                                                                  You’re right, and how virtuous Sourcehut may or may not be doesn’t change that. The line between ad and article is a spectrum, but this seems to be pretty well into the ad side of things. I apologise, I’ll be more discerning in the future.

                                                                                  1. 4

                                                                                    If you crack some other good places to get the word out, I’d be interested in hearing. My online circle is pretty small (Lobsters and HN), but I’m working on something I want to ‘advertise’ the hell out of quite soon…

                                                                                    1. 5

                                                                                      I’ve been trying to engage more with Reddit for this reason. I don’t really like it as a platform or see it as doing a social good, but there are users there and I’d like to be there to answer their questions. I was going to make a Twitter account, too, but they wanted my phone number and a pic of my ID and a blood sample to verify my account so I abandoned that. Finding good ways to ethically grow Sourcehut’s audience is not an entirely solved problem.

                                                                                      1. 2

                                                                                        The reason Twitter – and many platforms – asks for phone numbers is because spam and trolls are a persistent problem. Ban one neo-Nazi troll tweeting obscenities at some black actor for DesTROyinG WhITe SocIEtY and they’ll create a new account faster than you can say “fuck off Nazi”.

                                                                                        Reddit is often toxic as hell by the way, so good luck with that.

                                                                                        1. 1

                                                                                          Huh…I have a twitter account and all I needed for it was an email. Maybe things have changed.

                                                                                          1. 1

                                                                                            Nowadays they let you in with just an email, but after some time “block” your account and only unblock it after you give your phone number.

                                                                                      2. 3

                                                                                        While I also see it as an ad, as a Sourcehut user I’m interested in what is being announced. But it seems you don’t have an RSS/Atom feed for the official blog… Or is there a mailing list I missed?

                                                                                        1. 2


                                                                                          I’ve been meaning to make this more visible… hold please done.

                                                                                      3. 3

                                                                                        Somewhat amusing that this post about an interesting, fully FOSS service is marked -29 spam, whereas an actual advertisement about Huawei making MacBook clones that run Linux has only -3 spam (one of which is mine).

                                                                                        1. 3

                                                                                          Said FOSS service has been on the Lobsters front page multiple times recently. I suspect the reaction is: “We get it, Sourcehut exists and SirCmpwn is apparently desperate to attract a paying customer base, but a clickbaity title for a blogspam ad on the usual suspect’s software is probably crossing the line.”

                                                                                      1. 3

                                                                                        This is already happening, with one specifically requiring mails to be sent from another Big Mailer Corp to hit the inbox, or requiring that senders be added to the contacts for others. Any other sender will hit the spambox unconditionally for a while before eventually being upgraded to the inbox.

                                                                                        Anybody knows which bigcorp player he’s talking about?

                                                                                        1. 14

                                                                                          My mailserver, for many months, could not send mails to outlook addresses. The outlook server replied “OK” but the mail was transparently discarded. Not inbox, not spam, not trash, nothing. As if the mail had never been sent.

                                                                                          I believe nowadays outlook “only” sends my mails to spam.

                                                                                          1. 9

                                                                                            I have had the same experience. With Gmail it was even more difficult to evade their hyper-aggressive spam filters.

                                                                                            I can’t call any of this “easy”, and I had to struggle and learn a lot of new concepts (like DKIM, which is a pain to set up). It’s also very tricky to verify; if it fails, it can fail silently: your mail is just dropped or goes to spam. I had that happen when my DNSSEC signatures weren’t renewed, for example, and also when I made a small mistake that rendered my DKIM invalid or unused (I don’t remember which).

                                                                                            You need to be an expert at mail before this stuff is “easy”. When you get redirected to the spam folder, those hosts don’t give any information about why it happened, so you’re left guessing. Also, you sometimes don’t even know unless you’re in contact with the recipient in some way other than e-mail (and sometimes people don’t bother to notify you that the mail got flagged as spam). There are tools out there that can help verify your technical setup (rDNS, SPF, DKIM, etc.), but it’s still annoying as fuck to get it set up. Once you’ve done the work, it basically runs itself, though.

                                                                                            So while I appreciate the article’s attempt to get more people to try hosting their own mail, I would say it’s quite one-sided and assumes a whole lot of technical sysadmin competency that the author has probably simply become blind to himself.
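                                                                                            For reference, the DNS side of the setup being discussed boils down to a handful of TXT records; the domain, selector, and key below are placeholders, not a working configuration:

```
; SPF: only this domain's MX hosts may send mail for example.org
example.org.                 IN TXT "v=spf1 mx -all"

; DKIM: public key published under <selector>._domainkey,
; here with a hypothetical selector "mail" and a truncated key
mail._domainkey.example.org. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."

; DMARC: tell receivers what to do when SPF/DKIM checks fail
_dmarc.example.org.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
```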

                                                                                            1. 1

                                                                                              I had a similar problem, and my solution was to route all mail to them via a separate, dedicated IP which didn’t suffer the same problem, a solution made possible by the flexibility of Exim. As much as these simpler MTAs seem attractive, I wonder how they would cope with such scenarios.

                                                                                            2. 4

                                                                                              I had this problem sending from my own mail server to Gmail addresses. After a couple of months I just gave up on my own mail server and went to

                                                                                              1. 8

                                                                                                They could have responsibly disclosed instead of being an asshat, stealing information, and posting a ton of GitHub issues from a fresh account.

                                                                                                1. 3

                                                                                                  stealing… information?

                                                                                                  1. 2

                                                                                                    I’m supportive of, and we participate in, the responsible disclosure process for Xen, even those times we don’t make the cut for pre-disclosure. I’m sad someone would go to the effort they have here in a criminal manner when there is more [market] demand for the skillset on display here than I have ever seen before.

                                                                                                  2. 7

                                                                                                    Why the hell did GitHub allow people to remove issues? This is annoying.

                                                                                                    1. 4

                                                                                                      It appears the issues were removed by GitHub when a third party reported the user that posted the issues.

                                                                                                      1. 2

                                                                                                        Unfortunate that GitHub was powerless to prevent nuking their account after being reported.

                                                                                                    2. 4

                                                                                                      I was telling a coworker about this and similar writeups and it turns out he wasn’t aware of the Hacking Team writeup from 2016. It’s detailed and very interesting. I would advise anyone to read it: .

                                                                                                      1. 1

                                                                                                        A 0day in an embedded device seemed like the easiest option, and after two weeks of work reverse engineering, I got a remote root exploit.

                                                                                                        Thanks a lot, the whole walkthrough is quite amazing and insightful, with a wide variety of tools used

                                                                                                      2. 3

                                                                                                        Did you get a copy of them? They’re deleted now :(

                                                                                                        1. 10

                                                                                                          They’ve been reposted here: (and this site has been archived here)

                                                                                                          1. 2


                                                                                                          2. 1

                                                                                                            I think the Web Archive has some of them, though maybe not every comment.

                                                                                                          3. 1

                                                                                                            Concerning #358, what is “Flywheel” in this context?

                                                                                                            Side-note: I hate locked threads on free software projects.

                                                                                                            Update: I think it’s a hostname of one of their machines?

                                                                                                            1. 1

                                                                                                              Seems like it’s the hostname of their Jenkins build slave

                                                                                                              1. 2

                                                                                                                yup, it was the hostname of the Jenkins build slave.

                                                                                                              2. 1
                                                                                                            1. 2

                                                                                                              I guess I should rather have linked this page; it’s a bit more descriptive:

                                                                                                              1. 4

                                                                                                                Neither link was very clear to me without close reading and hard thought. I think you could make both pages clearer by displaying some example execline scripts. For example, show a script that cds into a directory and then uses forx or forstdin to loop through all the ‘.wav’ files in that directory and convert them to MP3 with ffmpeg. I have written a similar script before for my preferred shell (Fish), so the differences would be instructive. An example would also make it easier to visualize execline’s compilation process.

                                                                                                                1. 3
                                                                                                                  #!/usr/bin/env execlineb
                                                                                                                  # Some commands look like their shell equivalents, but "cd" is its own binary
                                                                                                                  # and not a built-in. Note that the whole script could have been written on a
                                                                                                                  # single line without any ';'.
                                                                                                                  cd directory
                                                                                                                  # '*' has no special meaning in execline. The elglob program shipped with
                                                                                                                  # execline provides file name globbing. It immediately substitutes the pattern.
                                                                                                                  elglob g *.wav
                                                                                                                  # "$g" now expands to a list of file names. "forx" loops over the list,
                                                                                                                  # filling the "x" environment variable successively with each entry.
                                                                                                                  forx x { $g }
                                                                                                                  # Words starting with '$' are not automatically expanded into the content of
                                                                                                                  # the matching environment variable. This is the job of importas, which has
                                                                                                                  # some of the ${special:?features} of ${shell:-expansion}.
                                                                                                                  importas wav x
                                                                                                                  # "backtick" plays the role of x=$(sub shell expansion)
                                                                                                                  backtick x {
                                                                                                                          # heredoc replaces the bash-specific "sed 's///' <<<string" or the
                                                                                                                          # POSIX sh's "echo string | sed 's///'". The '$' anchors the match to
                                                                                                                          # the extension at the end of the name.
                                                                                                                          heredoc 0 $wav sed "s/.wav$/.opus/"
                                                                                                                  }
                                                                                                                  importas opus x
                                                                                                                  # Note that there is no problem with spaces in the file names: they are not
                                                                                                                  # split automatically (that requires a flag of importas, where you can also
                                                                                                                  # specify the IFS).
                                                                                                                  ffmpeg -i $wav $opus
                                                                                                              1. 6

                                                                                                                Here’s my 2019 take.

                                                                                                                Two big changes since last year:

                                                                                                                1. I bought a nice chair, a second-hand Herman Miller Aeron. Best 250€ I have invested in my setup. The benefits are radical: the back pain I used to get after long sessions has completely disappeared.
                                                                                                                2. A Kinesis Advantage 2 keyboard. I had been bad-mouthing fancy keyboards for the last few years, but after hearing some crazy wrist-pain stories from my friends and co-workers, I decided to bite the bullet and take a proactive approach to this whole mess. The thumb clusters and the key wells make it really comfortable. However, it’s all made of plastic; it clearly isn’t worth 375€, it’s damn overpriced. But hey, they’re the only ones selling this kind of keyboard, so I guess that’s kind of expected.

                                                                                                                Regarding the desk, I still use my DIY hand-crafted joined wooden desk. It is aging pretty well. I also still use my M-Audio 2x2 sound card together with a shotgun mic for sound/video calls. I store my music on my server and mount the music directory on my machines using FUSE and sshfs. A Raspberry Pi 3 is connected to my audio setup and streams the music from that same server using MPD.

                                                                                                                I have an Arduino Nano + some sensors + some custom scripts to display the temperature, humidity and CO2 concentration on i3bar.
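                                                                                                                Roughly, the script turns each serial line from the Arduino into an i3bar JSON block. A simplified sketch (the “temp humidity co2” line format and the CO2 threshold here are illustrative, my real script differs):

                                                                                                                ```python
                                                                                                                import json

                                                                                                                def sensor_to_i3bar(line: str) -> str:
                                                                                                                    """Format one "temp humidity co2" serial line as an i3bar block."""
                                                                                                                    temp, humidity, co2 = line.split()
                                                                                                                    block = {
                                                                                                                        "full_text": f"{temp}°C {humidity}% {co2}ppm",
                                                                                                                        # Turn the block red when the CO2 concentration gets high.
                                                                                                                        "color": "#ff0000" if int(co2) > 1200 else "#ffffff",
                                                                                                                    }
                                                                                                                    return json.dumps(block)
                                                                                                                ```

                                                                                                                The real thing reads the lines from the serial port (with pyserial, for instance) and emits a full i3bar status array.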

                                                                                                                Which leads us to software. At this point, I’m pretty much all in on NixOS. I try to set up everything declaratively. I merged all my various dotfiles and custom ~/.local/bin scripts into my NixOS configuration. Everything is in one repo, and the same configuration tree is shared across my machines.

                                                                                                                Other than that, I still use the classic i3 + neovim + ghcid + firefox combination.

                                                                                                                [edit]: I totally forgot to talk about my AMAZING green slide whistle. Great for venting during an annoying bug-fixing session and for creating a bit of comic relief during long video meetings. My neighbors hate it.

                                                                                                                1. 3

                                                                                                                  What CO2 sensor do you use?

                                                                                                                  1. 2

                                                                                                                    A Chinese module based on an MG811.

                                                                                                                  2. 3

                                                                                                                    Shout out to the MX518, I still use mine from over a decade ago

                                                                                                                    1. 2

                                                                                                                      Aeron is super worth it, even at full price. I have one that is (I think) 19 years old now. Had to replace a wheel one time, that’s it.

                                                                                                                      1. 2

                                                                                                                        “However, it’s all made of plastic; it clearly isn’t worth 375€, it’s damn overpriced. But hey, they’re the only ones selling this kind of keyboard”

                                                                                                                        Business opportunity is what I’m seeing in this.

                                                                                                                        1. 3

                                                                                                                          There seem to be quite a lot of custom keyboards brewing recently, esp. with the proliferation of 3D printers. As to ones that appear similar to a Kinesis Advantage, I’m interested in the Dactyl and Dactyl Manuform. Xah Lee seems rather impressed.

                                                                                                                        2. 2

                                                                                                                          I’m envious of your chair - where’d you get it that cheap?! :D

                                                                                                                          1. 4

                                                                                                                            I bought a used Aeron with a chrome base back in 2012 from London on eBay, and had it shipped to Sweden. I think the chair was around £300. Companies sell them for cheap all the time in London. I ended up selling it again, at a £50 profit, even after the shipping I paid!

                                                                                                                            1. 2

                                                                                                                              On a French local adverts website (similar to Craigslist in the US).

                                                                                                                              In my experience, you often get a better deal on these websites than on eBay for this kind of thing. Not only do you cut out the transaction/delivery fees, but the market also tends to be a bit less competitive for buyers.

                                                                                                                              If you’re not in a hurry and automate your search with some web scrapers, you can get some pretty good deals :)
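                                                                                                                              A minimal sketch of what I mean by a scraper, using only the Python standard library (the “/annonces/” URL pattern is invented for the example, real sites differ):

                                                                                                                              ```python
                                                                                                                              from html.parser import HTMLParser

                                                                                                                              class AdLinkParser(HTMLParser):
                                                                                                                                  """Collect advert links from a search results page."""
                                                                                                                                  def __init__(self):
                                                                                                                                      super().__init__()
                                                                                                                                      self.links = []

                                                                                                                                  def handle_starttag(self, tag, attrs):
                                                                                                                                      if tag == "a":
                                                                                                                                          href = dict(attrs).get("href", "")
                                                                                                                                          if "/annonces/" in href:  # hypothetical advert URL pattern
                                                                                                                                              self.links.append(href)

                                                                                                                              def new_ads(page_html, seen):
                                                                                                                                  """Return the advert links that are not in the already-seen set."""
                                                                                                                                  parser = AdLinkParser()
                                                                                                                                  parser.feed(page_html)
                                                                                                                                  return [link for link in parser.links if link not in seen]
                                                                                                                              ```

                                                                                                                              Run something like this from cron against the search page for the chair you want, persist the seen set on disk, and notify yourself whenever new_ads returns anything.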

                                                                                                                          1. 1

                                                                                                                            I posted this mostly for this paragraph:

                                                                                                            My third remark introduces you to the Buxton Index, so named after its inventor, Professor John Buxton, at the time at Warwick University. The Buxton Index of an entity, i.e. person or organization, is defined as the length of the period, measured in years, over which the entity makes its plans. For the little grocery shop around the corner it is about 1/2, for the true Christian it is infinity, and for most other entities it is in between: about 4 for the average politician who aims at his re-election, slightly more for most industries, but much less for the managers who have to write quarterly reports. The Buxton Index is an important concept because close co-operation between entities with very different Buxton Indices invariably fails and leads to moral complaints about the partner. The party with the smaller Buxton Index is accused of being superficial and short-sighted, while the party with the larger Buxton Index is accused of neglect of duty, of backing out of its responsibility, of freewheeling, etc. In addition, each party accuses the other one of being stupid. The great advantage of the Buxton Index is that, as a simple numerical notion, it is morally neutral and lifts the difference above the plane of moral concerns. The Buxton Index is important to bear in mind when considering academic/industrial co-operation.