Threads for ratsclub

  1. 1

    The code below probably won’t work as-is (rec { is required, and maybe more), but wouldn’t something like this work?:

    apps.publish = utils.lib.mkApp {
      drv = pkgs.writeShellScriptBin "publish" ''
        ${hut}/bin/hut pages publish ${packages.website}/site.tar.gz \
            --domain glorifiedgluer.com \
            --not-found 404.html
      '';
    };
    

    That will not publish the file you have in front of you but the file at the build path of ${packages.website}. Or do I miss something? :) It obviously has to build the publish script every time, which is weird - but I guess that’s kind of the joke of Nix.

    I could imagine other solutions could work better; e.g. in the publish script first run nix build .#website and publish that output.
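
    A rough sketch of that variant, reusing the names from the snippet above (untested; nix build --print-out-paths needs a reasonably recent Nix):

    apps.publish = utils.lib.mkApp {
      drv = pkgs.writeShellScriptBin "publish" ''
        # Build the checkout you currently have, then publish that output.
        out=$(nix build .#website --print-out-paths --no-link)
        ${hut}/bin/hut pages publish "$out"/site.tar.gz \
            --domain glorifiedgluer.com \
            --not-found 404.html
      '';
    };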

    1. 1

      Or do I miss something?

      No, you’re on point, this would publish the purely built version of the website.

      I could imagine other solutions could work better; e.g. in the publish script first run nix build .#website and publish that output.

      This would work too, but in the end it would have the same effect!

    1. 2

      You need to generate Markdown from Org files to run Hugo? There’s gotta be a simpler way here that retains more of the info lost in these translations.

      1. 2

        As far as I can tell, there’s no information lost during this translation. I’ve been really happy with this! Org makes it easy to write and format, Hugo makes it easy to add features to the website. Do you know of any limitations I might face?

        1. 1

          I’ve been wanting to try to give some feedback, but the site has been down for two days

          1. 1

            Hey, I’m sorry to hear that. I host my website on a low-powered machine at home and sometimes I get power outages/networking issues. It should be fine by now!

      1. 4

        Thank you for this article, I really appreciate your taking the time to write it all down! I hadn’t seen nix develop -c or nix run yet.

        It seems to me that we have a couple different “moments” in the process that are handled by these different tools for better or worse. Nix can create a development environment for you, and it can prepare the final result. It isn’t exactly perfect for the iterative development process though, for which Make is a better fit. I’m surprised it works for the “publish” stage as well; I would probably have assumed either that it wouldn’t or that one shouldn’t do that. So the tools do wind up being complementary.

        The case of non-Nix users is an interesting one. Since I’m thinking about Nix at work, I’m still unclear about whether other developers can or should be protected from having to learn much about Nix beyond “run this”. I don’t know if you have experience with that but if you do I’d like to know what you think.

        My expectation at work is that I would use Nix to create the development environment and prepare the final artifacts, and then use something like Ansible to deploy them. But that still leaves that void in the middle where iterative development is taking place, and either Make or a cadre of language-specific tools might fit in there.

        1. 5

          I agree that there is a gap that Nix currently doesn’t fill w.r.t. incremental builds, and while there are efforts to make it do that, those are far from complete and it’s not even clear yet that the approach will be viable enough to replace Make.

          What I do think you can replace today is Ansible. We use NixOS at work to define all of our Linux hosts (bare-metal, VM or container) with no Ansible or similar tools, and I like that so much better.

          1. 2

            Could you expand on how that RFC would help with incremental builds of a package?

            1. 3

              Yes. What you’d need for Make-style incremental builds is to have much more fine-grained derivations (i.e. at the source-file level, like Make, not at the package level like most current Nix derivations). You can already write these using Nix, nothing stops you. But it’s a ton of work and really not tractable for stuff like the Linux kernel.

              What this RFC proposes is to allow Nix derivations to emit further derivations. That would allow you to have one initial derivation that calls the first-party build tool (make, CMake, Cargo, whatever) to query the build graph and generate a big graph of derivations based on that.

              This is what the RFC’s motivation means by

              allowing us to have source-file-level derivations for most languages

              edit: It should be noted that this would likely still not unlock Cargo/rustc-style incremental compilation, where the granularity is even finer - the incremental compilation cache stores information about individual types and functions.

              1. 1

                But it’s a ton of work and really not tractable for stuff like the Linux kernel.

                This is a language-specific problem. If you have a toolchain which supports compilation of individual source units or modules, then you can write a Nix harness which incrementally compiles everything. Here is my support for doing this to Monte modules; lines 12-14 run the compiler on a single module, lines 17-31 recurse over an entire tree of modules, and lines 35-37 integrate the tree by running unit tests. While this is a hack, please note in line 15 that we support Zephyr ASDL as an input language too – the compiler doesn’t care what the module is written in, as long as there is a frontend for it; it will emit a single compiled bytecode file either way, and this lets us write chimeric applications.

                I agree with you that it’s not tractable for Linux yet, but that’s entirely due to faults in C’s compilation model and typical toolchains. We can see how fresh languages like Go and Rust also get this entirely wrong, by design, at the start. In general, when compilation isn’t homomorphic, we’re not able to parallelize or incrementalize compilation.

            2. 1

              We use NixOS at work to define all of our Linux hosts (bare-metal, VM or container) with no Ansible or similar tools

              Can you expand on this, or do you have a blog post about it? There are some strategies I’ve been thinking about for some time; would you share yours too? 😜

              1. Considering that my application is built with Nix, I can just expose it as a packages.<app-name> output. In my server configuration I add my application repository as a flake input. This way I can just run nix flake lock --update-input <app-name> and rebuild the system to get the new version (roughly like the sketch below).

              2. A less Nix-y way: build a container image and push it to a container registry, then update the image reference in my server configuration.

              Still, I don’t know which one is more viable.
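
              To make option 1 concrete, the server flake could look roughly like this (repository URL, hostname and package names are placeholders):

              {
                inputs = {
                  nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
                  my-app.url = "git+https://git.example.com/me/my-app"; # hypothetical app flake
                };
                outputs = { self, nixpkgs, my-app, ... }: {
                  nixosConfigurations.server = nixpkgs.lib.nixosSystem {
                    system = "x86_64-linux";
                    modules = [
                      ./configuration.nix
                      # the app comes straight from the input's packages output
                      { environment.systemPackages = [ my-app.packages.x86_64-linux.default ]; }
                    ];
                  };
                };
              }

              Updating the app would then be nix flake lock --update-input my-app followed by nixos-rebuild switch --flake .#server.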

              1. 1

                No blog post I can point you to, but the tl;dr is that we use colmena more or less by-the-book and deploy to NixOS hosts exclusively. We don’t really use flakes yet for most things because they’re still experimental, but we’re experimenting with them. To pull in other repos it’s mostly plain ol’ fetchGit and JSON files.
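
                The fetchGit-plus-JSON pinning pattern is roughly this (file and repository names are placeholders):

                let
                  # pin.json holds { "url": "...", "rev": "..." } and gets bumped by hand or a script
                  pin = builtins.fromJSON (builtins.readFile ./pin.json);
                  otherRepo = builtins.fetchGit { inherit (pin) url rev; };
                in
                  import otherRepo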

                Where we do deploy container images, we distribute them the same way - via colmena, as nix store paths. No registry needed. Not that having a registry in the middle is bad - the extra dynamism could be a benefit depending on your situation.

              2. 1

                So for my use case, I have a few oddities:

                • I have Java war files that need to get copied to a certain Tomcat server
                • I have Python programs that need to be packaged with pex and copied to various locations
                • I have Python REST servers (Pyramid, FWIW) that need to be deployed and restarted
                • I have some Java jars that need to get placed in certain locations
                • There’s a NodeJS application that needs to be placed in a certain content root

                At the moment, this is all being managed by a Python program using Fabric, which is a bit gross. It oversees the builds and then does the deployment steps. It’s complicated by the fact that we have several environments (development, production, etc).

                Moving to Nix, my thinking was all these artifacts would be manufactured by Nix and then I would use Ansible to copy them where they need to go. The main reason here being that Ansible is used by other parts of the company and it’s got to be more reliable/maintainable than bespoke Fabric. I don’t really see how I would get the artifacts where they need to go by Nix alone. Can you elaborate on how you’d do that?

                1. 4

                  The main reason here being that Ansible is used by other parts of the company and it’s got to be more reliable/maintainable than bespoke Fabric.

                  That is a totally valid reason to stick to Ansible. In our case, we had only a bit of Ansible code and knowledge in the team, and nobody was really enthusiastic about it, which made it much easier to rip out.

                  I don’t really see how I would get the artifacts where they need to go by Nix alone. Can you elaborate on how you’d do that?

                  The simplest way, and the one that’s built in, is a command called nix-copy-closure. It takes a store path (/nix/store/...) you’ve built and pushes it to another server’s Nix store via SSH. The name comes from the fact that it doesn’t just copy the single path, but actually the “closure”, that is all store paths that are transitively depended upon by the one you’re pushing. So if you’ve built a Python program and that has e.g. a native dependency like libssl, that will also be copied if it’s not there yet.

                  Then, on the target host, the only thing you need is a service that runs your app(s). You could do that yourself, or you could run NixOS, which can generate systemd unit files for you. You could also generate container images instead using pkgs.dockerTools if that fits your infrastructure better.
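
                  On NixOS the service part can be as small as a module like this (my-app is a placeholder for whatever package you pushed over):

                  { pkgs, ... }:
                  {
                    # NixOS writes and manages the systemd unit for you.
                    systemd.services.my-app = {
                      wantedBy = [ "multi-user.target" ];
                      serviceConfig.ExecStart = "${pkgs.my-app}/bin/my-app";
                    };
                  }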

                  We don’t use nix-copy-closure itself but a somewhat higher-level, more convenient tool called colmena, which can do nice stuff like push different apps/configurations to an entire fleet of servers in parallel. Unlike nix-copy-closure, however, this is tied closely to NixOS and cannot be used to deploy to non-NixOS hosts.

                  1. 1

                    Thank you for explaining this! We’re not running NixOS so I will have to experiment with nix-copy-closure, for the things that aren’t going to be in containers.

                  2. 4

                    there are tools such as nixops, disnix, and morph which can help with this (essentially replacements for Ansible), but they’re really meant to work for managing servers that run NixOS. if you are quite certain that these build artifacts do not depend on anything else in the nix store, you could write glue code to copy them to non-nixy paths on the destination, and run that with Ansible, but that’s something you’d have to do yourself, I’m not aware of any existing tooling for it. nix build products really want to be run out of the nix store.

                    1. 2

                      The non-Python artifacts I have are actually totally safe to run somewhere else. For the Python artifacts, the issue with pex comes down to the fact that it generates zip applications, and (at least according to the Nix thesis) Nix can’t really see the contents of the zip file to rewrite things. Maybe there is a trick for that, but I haven’t been able to get pex to run inside a Nix build anyway, yet. For the other service layer, it may be that I can convert them to a container and deploy that instead. But I’m not anticipating getting to run NixOS at work anytime soon.

                      1. 3

                        Maybe there is a trick for that

                        Haven’t tried it, nor do I know pex, but if you can separate the zipping stage of the build from the rest, you could postpone the generation of the zip archive to a phase that runs after fixupPhase, which is where all the paths are rewritten. I think distPhase makes the most sense there.

                        If you can’t separate that out, you could do hacks like unpacking the zip in preFixupHooks and then packing it again in postFixupHooks. Not great, but might do the trick.
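
                        An untested sketch of the first idea, with placeholder names; the point is just that distPhase only runs when doDist is set, and it runs after fixupPhase has already rewritten paths:

                        pkgs.stdenv.mkDerivation {
                          pname = "my-pex-app";   # hypothetical
                          version = "0.1.0";
                          src = ./.;
                          nativeBuildInputs = [ pkgs.zip ];
                          installPhase = ''
                            mkdir -p $out/lib/my-app
                            cp -r . $out/lib/my-app   # stand-in for the real install step
                          '';
                          doDist = true;
                          distPhase = ''
                            # only now produce the zipapp, from the already fixed-up files
                            mkdir -p $out/bin
                            (cd $out/lib/my-app && zip -r $out/bin/my-app.zip .)   # stand-in for the pex call
                          '';
                        }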

                        1. 2

                          I think it would be fairly benign to unpack and repack, I’ll give that a try, thank you!

                2. 4

                  I’m still unclear about whether other developers can or should be protected from having to learn much about Nix beyond “run this”

                  IMHO: Can? Yes. Should? No. There is such a variety of powerful things you can do once you have a handle on Nix (pinning/substituting dependencies, really precise caching, reproducible Docker images, …) that I recommend helping other people learn it. Helping them tweak shells is a good start, and home-manager is a pretty attractive next step.

                  1. 3

                    Note that I’m not a Nix purist (if such a thing exists), I just use it as a tool to make my computer life easier!

                    It isn’t exactly perfect for the iterative development process though, for which Make is a better fit.

                    Yes, I would frame it as a trade-off. You can use Nix as the only build tool, but I do think it feels awkward in this context. You can write apps as if they were targets, or have them represent something else entirely (you’re not using make!), but composing them together is still a bit weird. I hope someone shows a different approach here!

                    Since I’m thinking about Nix at work, I’m still unclear about whether other developers can or should be protected from having to learn much about Nix beyond “run this”.

                    I mostly write .NET code at work and the current state for .NET tooling on Nix is kind of bad (check this discussion here). It basically means that we can’t use the .NET ecosystem to the fullest with Nix right now and even building code is funky.

                    On the other hand, more than half my team uses Nix as a development tool and it has been a blessing, as we don’t need to worry about package dependencies throughout the day. We use the mix I presented in the blog post: a Makefile with build steps and Nix to manage package dependencies. This allows people to hop onto the project and use the Makefile with their local packages, or run nix develop -c make <target> to have Nix take care of it for them. Note that we are using devenv to abstract some things away from us and I can’t recommend it enough!
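
                    For reference, the devenv side of such a setup can be as small as a devenv.nix like this (package names are illustrative, the .NET specifics are elided):

                    { pkgs, ... }:
                    {
                      # tools the Makefile targets expect to find on PATH
                      packages = [ pkgs.gnumake pkgs.dotnet-sdk ];
                    }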

                    You can also use direnv (with editor plugins) to update your environment based on your devShells.default. This way you don’t even see Nix while writing code; you just have everything set up for you when you enter the directory. If you don’t use direnv, you can also run nix develop and then call your editor inside the shell.

                    I think it is more of a political issue than a technical one. It boils down to their willingness to learn a new tool and yours to fix future problems they have.

                    My expectation at work is that I would use Nix to create the development environment and prepare the final artifacts, and then use something like Ansible to deploy them.

                    Here we deploy containers to a Kubernetes cluster. If all you need is a bash script or something, do as shown in apps.publish. It should be simpler, because you have a single tool, and reproducible in some sense: at least your application was built by Nix in a sandboxed environment.

                    1. 1

                      I’m quite taken with Nix’s philosophy and I don’t mind being an advocate for it. I think about three other folks on the team are interested in it for its own sake, and I could certainly convince a few more of its merits. There will remain 1-3 skeptics. Editor support is a good point: we’re a JetBrains shop mostly using IntelliJ + the Python plugin. I’m using direnv myself and I’m not entirely sure how robust the editors’ direnv support is, but it will matter for gaining adoption. Otherwise I’ll need to make it a second track in addition to the existing approaches.

                      In my other comment I expanded a bit on the nature of our setup, which is a bit baroque, probably more than is necessary. I do think we are going to be containerizing the services and moving to Kubernetes eventually. But there will probably remain some products that simply have to be copied to various computers, and for that I don’t know precisely what the right approach is, although Ansible is widely used by other teams where I work.

                      1. 3

                        I’m a nix and jetbrains user (mostly intellij ultimate and clion) and I have to say that unfortunately nix and jetbrains hardly coexist at all; it seems like jetbrains doesn’t feel like you should change the environment out from under the IDE. So, to get around it I usually just launch the IDE from whichever shell session has the nix environment I need. It’s annoying but doable. Unfortunately the nature of my work is such that I have to be in a lot of things at once, so I also keep emacs around for that.

                        that said, I love jetbrains tools and will continue using them, I just hope they come around on this issue

                        1. 1

                          Yeah, IntelliJ really wants to be in charge of your developer tools, it seems. I found a half-dozen or so issues on their YouTrack about Nix, but most of them have to do with installing IntelliJ or related tools under Nix, which appears to work fine now (probably using the solutions in those tickets); only one is about picking up the Nix environment within IntelliJ. Maybe we should report this.

                        2. 2

                          While I haven’t used the JetBrains editors myself, my understanding is that they don’t work well with the nix-direnv environment. Occasionally people come to the unofficial Nix Discord to troubleshoot their problems, without a lot of happiness to follow.

                          1. 2

                            That’s unfortunate, but not necessarily a dealbreaker for me.

                    1. 5

                      Giving some context, this blog post was written based on a previous comment discussion and a brief conversation with a friend of mine.

                      1. 3

                        For the past few months I’ve been using make(1) with Nix to make dependencies easier to deal with. The way I’m doing it right now is to run nix develop locally to get a shell with my development dependencies, and nix develop .#ci -c make <target> on CI.

                        1. 2

                          I’ve just started using Nix and I don’t understand why Nix alone isn’t sufficient. Can you elaborate on your setup a little? (I am also a fan of Make.)

                          1. 3

                            I’m sorry if I didn’t make myself clear here. I’m not strictly talking about package dependencies, but about the dependencies between my projects’ build steps. I can give you my blog as an example. It has the following build steps:

                            1. Write posts in Org Mode
                            2. Convert them to Markdown with ox-hugo
                            3. Build the website with hugo
                            4. Create a tar.gz archive of the website
                            5. Publish it to sourcehut pages

                            Check out the steps mentioned above in its Makefile:

                            public:
                            	emacs $(CURDIR) --batch -load export.el
                            	hugo
                            
                            site.tar.gz: public
                            	tar -cvzf site.tar.gz -C public .
                            
                            .PHONY: publish
                            publish: site.tar.gz
                            	hut pages publish site.tar.gz \
                            		--domain glorifiedgluer.com \
                            		--not-found 404.html
                            
                            .PHONY: run
                            run: public
                            	hugo serve
                            

                            I can easily hop into any build step with this and still keep the biggest Nix advantage by calling nix develop -c make <target> to have all the package dependencies available for that target. I know I could write a derivation to do exactly what I do here, but the main problem I faced was that it was just… awkward? In the sense that I had to keep nix build-ing things until they worked, or write a bunch of apps in my flake file, at which point I might as well write a Makefile.

                            1. 2

                              Thank you, this is very informative.

                              My sense is that, if I wanted to go the maximally Nix route, there would be a devShell that would introduce hugo and emacs, and a package that uses hugo to generate site.tar.gz. That seems all fine and good for making the site. But I don’t yet see how one goes from having a derivation that creates site.tar.gz to actually deploying it somewhere, which you do here using hut.

                              1. 3

                                if I wanted to go the maximally Nix route, there would be a devShell that would introduce hugo and emacs

                                Yes, this is exactly what I have! A devShell with hugo, a custom emacs, gnumake and hut; this is what makes it possible to run nix develop -c make publish on CI.
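
                                Roughly like this, as a sketch (the emacs customization is elided and hut comes from nixpkgs):

                                devShells.default = pkgs.mkShell {
                                  packages = [ pkgs.hugo pkgs.emacs pkgs.gnumake pkgs.hut ];
                                };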

                                But I don’t yet see how one goes from having a derivation that creates site.tar.gz to actually deploying it somewhere

                                Well, you can probably do something like this in your flake.nix file:

                                # I'm not putting all the hut arguments here
                                apps.publish = flake-utils.lib.mkApp {
                                  drv = pkgs.writeShellScriptBin "publish" ''
                                    ${hut}/bin/hut pages publish ${self.packages.${system}.website}
                                  '';
                                };
                                

                                Then on CI you would just run nix run .#publish and it should work. However, you kind of lose the ability to run your blog locally with live reload unless you add another app for this, but at this point I would just go with a Makefile.
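
                                For completeness, such an extra app would look something like this (a sketch; it skips the Org export step and takes hugo from nixpkgs):

                                apps.serve = flake-utils.lib.mkApp {
                                  drv = pkgs.writeShellScriptBin "serve" ''
                                    ${pkgs.hugo}/bin/hugo serve
                                  '';
                                };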

                                Edit: I should probably write a blog post about this 😜

                                1. 1

                                  Thank you! I hope you do write a blog about it, I would enjoy reading it.

                              2. 1

                                there was a good argument that we shouldn’t do networking in a makefile. but separating this out into a separate script might feel pointless. might be good though? I’m not sure.

                                1. 2

                                  What was the argument about? Never heard about it, would be glad to read more!

                                  1. 1

                                    I think the primary driver for this was packaging for systems like Nix and Guix, where the build is performed inside a sandbox that is intentionally not given network access so that the builds are reproducible.

                                    1. 2

                                      Oh, this makes sense, yes! I usually have a Nix derivation to use without make too, but for personal projects I just use the nix develop with make trick.

                              3. 1

                                Nix primarily sets up environments. This includes stuff like language toolchains, system dependencies and all the rest. It doesn’t really build software directly; for that it relies on tools from the environments it can provide. So stdenv in nixpkgs contains the basic set of tools required to build a simple C/C++ project through gcc (or alternatively clang), make and so on. Then, the things used in/from nixpkgs also typically default to invoking things like make && make install unless specified otherwise.

                                Nix is quite useful for setting up environments that are suited for development, like the command above your comment, which drops you into the same environment for local development and CI. That can be quite ergonomic indeed. When packaging for Nix though you’d typically want something that evaluates purely with nix build (which could do the same make invocations too, but more heavily sandboxed).
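
                                As a tiny illustration of those defaults, for a hypothetical project that ships a plain Makefile:

                                pkgs.stdenv.mkDerivation {
                                  pname = "hello-make";   # hypothetical
                                  version = "0.1.0";
                                  src = ./.;
                                  # no buildPhase/installPhase given: the generic builder runs
                                  # ./configure (if present), make, and make install by default
                                }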

                                1. 1

                                  The parent comment was that they use make to make dependencies easier to deal with, so it’s that aspect I’m confused by, because it seems to me that Nix is very good at that. It could just be that make is really there to support the CI toolchain.

                              4. 2

                                what’s the difference between make and make(1)?

                                1. 3

                                  None, it’s just the manpage-style reference to it as a command in section 1 of the manual: make(1)

                              1. 3

                                I had the experience of migrating my Debian VPS to my own NixOS box under my table these past few weeks and I had a similar experience to the author’s, hitting almost exactly the same steps (didn’t migrate the cache directory and didn’t know how to set up ACME on the testing VM)!

                                However, I have some tips (for everyone) to avoid some of the problems they faced during the setup:

                                when I initialized the NixOS Mastodon module it starts an Nginx server because Mastodon requires TLS, it uses Let’s Encrypt for that and this requires the DNS record to point to the NixOS instance […] I decided to tell the Mastodon module to skip Nginx configuration for now setting services.mastodon.configureNginx=false

                                This kind of option on NixOS modules often bites me too. I don’t have an actual fix for this scenario, but what I usually do is go to search.nixos.org, take a look at the service options, and sometimes look at what the options do in the source code. Here’s the line that causes the trouble with SSL.

                                I know, having to read source code sucks, but at this point I just do it instead of reading the NixOS documentation. :(
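
                                For anyone hitting the same thing, the manual wiring looks roughly like this (hostname and proxy target are placeholders; with configureNginx = false the vhost is entirely yours):

                                services.mastodon = {
                                  enable = true;
                                  configureNginx = false;   # skip the module's own Nginx/ACME setup
                                };
                                services.nginx.virtualHosts."social.example.com" = {
                                  enableACME = true;        # turn on once DNS actually points here
                                  forceSSL = true;
                                  locations."/".proxyPass = "http://127.0.0.1:55001";   # placeholder backend
                                };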


                                How to manage password in NixOS is a question I don’t feel comfortable answering yet

                                I use agenix to manage my passwords and the workflow is the following:

                                1. Install a barebones configuration on the host
                                2. Copy the public ssh key generated through the install
                                3. Add it to the list of users in secrets.nix in the configuration repository (sketched below)
                                4. Rekey everything with agenix --rekey
                                5. Install the “complete” configuration

                                I’m not sure if this is the best way to do it but it works wonderfully for me with about 3 hosts I manage.
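
                                The secrets.nix from step 3 is just a Nix file mapping each secret to the public keys allowed to decrypt it, roughly like this (keys and names are made up):

                                let
                                  # public SSH keys copied from each freshly installed host, plus my own
                                  host1 = "ssh-ed25519 AAAA... root@host1";
                                  me = "ssh-ed25519 AAAA... me@laptop";
                                in
                                {
                                  "service-password.age".publicKeys = [ host1 me ];
                                }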


                                Now, I don’t have much experience with multi-node setups as I only have a single box and it is enough for my needs. Nonetheless, the experience of hosting NixOS has been a blast for me, and I wanted to share some things that blew my mind.

                                Monitoring

                                This has been much easier than the setup I had on my Debian machine; it’s so easy to set up Grafana, Loki and Prometheus together! I’m going to omit most of the configuration to keep this comment as brief as possible:

                                services.grafana.provision.datasources.settings.datasources = [
                                  {
                                    name = "Prometheus";
                                    type = "prometheus";
                                    url = "http://localhost:${toString config.services.prometheus.port}";
                                  }
                                ];
                                services.prometheus.scrapeConfigs = [
                                  {
                                    job_name = "${config.networking.hostName} - node";
                                    static_configs = [{
                                      targets = [ "127.0.0.1:${toString config.services.prometheus.exporters.node.port}" ];
                                    }];
                                  }
                                ];
                                

                                At first it didn’t click for me how much better this was than my bare-metal configuration on Debian, though after a lot of tweaking it became more apparent.

                                • If I change Prometheus’ port, Grafana will restart with the new configuration.
                                • config.networking.hostName is something declared in the configuration, right? So I can introduce my other hosts’ metrics declaratively through code and keep everything in sync, forever.

                                ZFS

                                NixOS has superb ZFS support; it was really easy to set up snapshots, scrubbing and monitoring for my RAID-Z2 pool.

                                services.zfs.autoSnapshot.enable = true; # this needs some manual work :(
                                services.zfs.autoScrub.enable = true;
                                services.prometheus = {
                                  exporters.zfs.enable = true;
                                  scrapeConfigs = [
                                    {
                                      job_name = "${config.networking.hostName} - zfs";
                                      static_configs = [{
                                        targets = [ "127.0.0.1:${toString config.services.prometheus.exporters.zfs.port}" ];
                                      }];
                                    }
                                  ];
                                };
                                
                                1. 2

                                  Re: ZFS on NixOS - check out the services.sanoid and services.syncoid options to handle snapshotting. No manual configuration required.
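
                                  A minimal sketch (dataset name and retention numbers are made up):

                                  services.sanoid = {
                                    enable = true;
                                    datasets."tank/data" = {   # placeholder dataset
                                      autosnap = true;
                                      autoprune = true;
                                      hourly = 24;
                                      daily = 7;
                                      monthly = 3;
                                    };
                                  };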