Threads for Foxboron

  1. 6

    I read this yesterday and wrote a new shell command later the same evening. Today I tried that command and my laptop subsequently froze after a few minutes.

    $ cat /home/fox/.local/bin/grobi
    #!/usr/bin/bash
    grobi -C "/home/fox/.config/grobi/$(uname -n).conf" $@
    

    You can probably guess the issue and how adding a comma to it would have helped :)
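    Spelled out: the wrapper is itself named grobi and sits on $PATH, so the grobi call inside resolves right back to the wrapper and forks until the machine dies. A comma-prefixed name can never shadow the real binary; a minimal sketch of the fixed wrapper:

    $ cat /home/fox/.local/bin/,grobi
    #!/usr/bin/bash
    # ",grobi" cannot shadow /usr/bin/grobi, so this call no longer recurses
    exec grobi -C "/home/fox/.config/grobi/$(uname -n).conf" "$@"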

    1. 1

      Took me a sec… oh no

      1. 1

        Been using Linux for 10+ years. Still can’t avoid the semi-annual fork bomb.

    1. 2

      For a practical use-case I’ll try out later: the Steam Deck actually mounts /etc as an overlay, with the A/B-installed /etc as the read-only lower layer and a writable upper layer for user configs.

      In theory you could probably install persistent packages into /etc/extensions/$pkgname, with the caveat of having to upgrade each installed package by hand. Maybe some weird edge cases here, but it sounds pretty neat.
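      That layering is easy to picture as a plain overlay mount. A minimal sketch of the described layout (paths are assumptions, not taken from an actual Steam Deck):

      mount -t overlay overlay \
          -o lowerdir=/sysroot/etc,upperdir=/var/overlays/etc/upper,workdir=/var/overlays/etc/work \
          /etc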

      1. 3

        All the polemics aside, the thing that actually surprised me was this last comment. What the actual, proper hell? Since when does anyone get to dictate how open source software is used?

        I can understand “we won’t try to make systemd work with musl because there will be no support from upstream”, but “we won’t try to make systemd work with musl because upstream doesn’t want us to”? That’s just wild.

        I’m not even a free software purist. I actually think that something like the license Redis adopted to try to prevent AWS from eating their lunch is pretty valid, and when that whole discussion happened about maybe including clauses in open source licenses to prevent “evil” usages, I thought that was an interesting idea to at least discuss. But this is not that. There’s no economic or even ethical reasoning. It’s just “I think your tech is ugly and I don’t want you playing with my toys”.

        Either don’t put your toys online, then, or shut up.

        1. 2

          It’s not dictating. It’s giving a recommendation, or stating a preference.

          This is quite common among us distro maintainers. If upstream tells us something is a bad idea, we generally listen. This is what being a community is all about. It goes both ways as well: upstream listens to downstream maintainers about packaging and proper release management when they do weird stuff.

          1. 1

            Maybe the person from the comment I linked is misrepresenting what systemd upstream said, but “this is not a good idea” is VERY different from “I don’t want you to do this”.

        1. 7

          Comparisons of systemd with illnesses and other rude comments aside, I think the maintainers of a project do have a right to reject the addition of the systemd package due to the potential far-reaching consequences, especially as most packages in Alpine’s repositories were built with the assumption that systemd will not be added:

          there are quite a few packages in aports that use build flags like --disable-systemd-integration, --no-systemd, --without-systemd, -Dsystemd=disabled, and so on.

          What happens when this is merged and folks want to enable systemd support in those packages?

          Will there be foo (what exists today in aports) and foo-systemd (with the build flag removed) variants of each one?

          I don’t have strong opinions about systemd, but Alpine was (in a way) designed with its exclusion in mind.

          That said… yes, the juvenile anti-systemd comparisons are getting tiresome.

          Just my two cents.

          No, OpenRC is not that option. It can be PART OF an option, but it is not a competitor by itself.

          I recall that one of the Alpine maintainers was beginning work on a full-fledged alternative to systemd, both as a process supervisor and an init system. I can’t find it now, though. Ty Foxboron!

          1. 11

            No, OpenRC is not that option. It can be PART OF an option, but it is not a competitor by itself.

            That part was particularly illuminating because it shows the insistence on vendor density coming from the systemd community. They insist that all competitors have an intended scope on par with systemd’s, rather than relying on distros to provide value via integration of independently-developed components. They insist that systemd is a collection of independently-adoptable components, and then, when you want to swap out part of it for a competitor, they point out that the competitor doesn’t have an analogous component for every other systemd component.

            1. 6

              I recall that one of the Alpine maintainers was beginning work on a full-fledged alternative to systemd, both as a process supervisor and an init system.

              You are talking about the s6 work done by Laurent Bercot, which was part of the discussion.

              https://skarnet.com/projects/service-manager.html

            1. 2

              I personally use vimwiki+vim+goyo. However, what I realized is that I need something that lets me get my notes up on the screen quickly, or else I simply don’t write anything down. What I wound up with is having the terminal in an i3 scratchpad.

              Whenever I use Mod+Space the terminal comes up and I can continue on my notes without disturbing any of the work I’m currently doing.

              https://github.com/Foxboron/home/blob/master/.config/i3/config#L30

              https://github.com/Foxboron/home/blob/master/.config/i3/config#L62

              https://github.com/Foxboron/home/blob/master/.config/rofi/bin/notes
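              The linked config boils down to a pattern like this (a sketch with assumed terminal and class names, not a copy of the linked lines):

              # start a dedicated notes terminal and park it in the scratchpad
              exec --no-startup-id alacritty --class notes -e vim -c VimwikiIndex
              for_window [class="notes"] move scratchpad
              # toggle it from anywhere
              bindsym $mod+space [class="notes"] scratchpad show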

              When it comes to vimwiki I have several spaces: one for my hugo blog, one for my mdbook wiki (mostly ad-hoc notes that might be useful for a public audience), and internal notes, which are just… sporadic and messy.

              1. 4

                Kees Cook, who works on the kernel security infrastructure, did some streaming at the end of 2020. It was cool to see how someone does real kernel work, reviewing patches and generally interacting with the kernel mailing list.

                The recordings of his streams are up on YouTube if you are interested.

                https://www.youtube.com/channel/UC6zmTkbgwe2q6l6TNjABSCg/videos

                1. 1

                  This is something the NVD/MITRE sorely miss and really need to implement.

                  This seems similar in scope to what http://osv.dev/ (from Google) is currently doing? How large is the overlap, and what are the future goals?

                  1. 1

                    Not super interesting, I reckon. But these are the ones I use the most.

                    ..='cd ..'
                    ip='ip -br -c'
                    home='git --work-tree=/home/fox --git-dir=/home/fox/.config/home.git'
                    
                    i3conf='vim ~/.config/i3/config'
                    zshrc='vim ~/.config/zsh/.zshrc && source ~/.config/zsh/.zshrc'
                    

                    I don’t really have a lot of super useful aliases. Most of the time is spent making git config aliases and vim stuff.

                    1. 1

                      Is that BSD or a Mac? My ip doesn’t know either flag.

                      1. 2

                        That is iproute2. The goal is to have ip be brief by default because I simply do not care about all the information.

                        λ ~ » /usr/bin/ip a
                        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
                            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
                            inet 127.0.0.1/8 scope host lo
                               valid_lft forever preferred_lft forever
                            inet6 ::1/128 scope host
                               valid_lft forever preferred_lft forever
                        2: wlp0s20f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
                            link/ether 10:3d:1c:e9:f5:cf brd ff:ff:ff:ff:ff:ff
                            inet 192.168.1.11/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp0s20f3
                               valid_lft 78751sec preferred_lft 78751sec
                            inet6 fe80::9f2c:5a98:d8ef:b06e/64 scope link noprefixroute
                               valid_lft forever preferred_lft forever
                        3: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
                            link/ether 90:2e:16:5e:1a:b5 brd ff:ff:ff:ff:ff:ff
                        14: enp36s0u1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
                            link/ether 00:50:b6:9f:fe:25 brd ff:ff:ff:ff:ff:ff
                        
                        λ ~ » ip a
                        lo               UNKNOWN        127.0.0.1/8 ::1/128
                        wlp0s20f3        UP             192.168.1.11/24 fe80::9f2c:5a98:d8ef:b06e/64
                        enp0s31f6        DOWN
                        enp36s0u1        DOWN
                        
                        1. 1

                          Thanks, but that’s why I was asking:

                          ii  iproute2                              5.5.0-1ubuntu1
                          

                          ip -br doesn’t work and neither does ip -c or ip -br -c.

                          And now I finally grasped that it would output the same as 'ip' - so it is -brief -color. Sorry, brainfart (ofc I searched before I asked that last question..) :P

                          Apparently I have never used either flag and didn’t notice them in the manual. – signed, someone growing up with ifconfig

                          1. 1

                            Yeah, I found the documentation for these commands a bit lacking when I first started using them.

                      2. 1

                        ..='cd ..'

                        I was going to post this as my most useful, because I use it all the time.

                        I also have ...="cd ../.." and so on for going up more levels. Probably not useful beyond four or five levels due to how quickly you lose track of how deep you are in the CWD.

                        Edit: Just to be clear, I’m talking about me visually counting up how many levels deep I am in the directory tree. Beyond three or four, I tend to just go up that much, and then look and see where I am and maybe do it again, with my finger hovering over the ‘.’ key. I don’t have a problem rapidly tapping out the ‘.’ nine times to go up 8 levels, the difficulty (for me) is determining that I want to go up 8, vs. 7 or 9 levels.

                        1. 1

                          Don’t want to keep posting it, so I’ll link to the reply I made to the parent:

                          https://lobste.rs/s/qgqssl/what_are_most_useful_aliases_your_bashrc#c_fqu7jd

                          You might like to use it too!

                        2. 1

                          You (and others) might be interested in the one from my post to this:

                          function up() {
                            local d=""
                            local limit=$1
                            # build "../../.." with one ".." per requested level
                            for ((i = 1; i <= limit; i++)); do
                              d=$d/..
                            done
                            d=${d#/}    # strip the leading slash (no sed needed)
                            if [ -z "$d" ]; then
                              d=..      # no or zero argument: go up one level
                            fi
                            cd "$d"
                          }
                          

                          Allows you to just do up 4 to get cd ../../../..

                          LIFE-saver.

                          1. 2

                            Even more fun (only works in zsh, as far as I know):

                            function rationalize-dot {
                                # typing "." right after "..." rewrites it to "../..",
                                # so every extra ".." you type climbs one more level
                                if [[ $LBUFFER = *... ]]; then
                                    LBUFFER=${LBUFFER[1,-2]}  # drop the final dot
                                    LBUFFER+=/..              # "..." becomes "../.."
                                else
                                    LBUFFER+=.
                                fi
                            }
                            zle -N rationalize-dot
                            bindkey . rationalize-dot
                            

                            You can make this even better by adding setopt auto_cd to your config, so that if you type a directory path zsh automatically changes to that directory.
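                            With auto_cd set, a bare path acts as an implicit cd, so the rewritten dots take you straight up the tree:

                            setopt auto_cd
                            # typing "...." expands (via rationalize-dot) to ../..
                            # and hitting enter changes into that directory
                            ../..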

                            1. 1

                              I tend to use https://github.com/wting/autojump for smart CDing, personally!

                            2. 2
                              alias .="cd .."
                              alias ..="cd ../.."
                              alias ...="cd ../../.."
                              alias ....="cd ../../../.."
                              
                              1. 1

                                Interesting. I’ve never tried to install / use those “smart” cd replacements, where you can type “cd foobar” and it looks at your recent working directories to find a “foobar” and go there.

                                I was thinking about a variant of your up function that does something like that, where I can type “up foo” in the current directory:

                                /home/username/foobar/one/two/three/four/
                                

                                And so it just looks into successive parent directories for anything matching “foo”, and the first one it finds is the destination.
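                                Something like this would do it (a sketch: upto is a made-up name, and the rule here is “nearest ancestor whose name contains the argument”):

                                function upto() {
                                  local dir=$PWD
                                  while [[ $dir != / ]]; do
                                    dir=${dir%/*}            # step up one level
                                    [[ -z $dir ]] && dir=/
                                    case ${dir##*/} in
                                      *"$1"*) cd "$dir" && return ;;  # first match wins
                                    esac
                                  done
                                  echo "upto: no parent matching '$1'" >&2
                                  return 1
                                }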

                                1. 2

                                  oh man – just use Autojump https://github.com/wting/autojump. I use it on every machine I own and it’s a GODSEND.

                                  1. 1

                                    That was what I was talking about. I’ll have to give it or maybe zoxide a try and see if I stick with it.

                            1. 2

                              Since this article was written, Rust has got mitigations for this attack:

                              • There’s the mrustc compiler, implemented in C++, which translates Rust to C11. It’s specifically designed to bootstrap the Rust compiler.
                              • The rustc compiler supports reproducible builds.
                              1. 1

                                Neither of these two points mitigates the trusting trust attack on its own.

                                There’s mrustc compiler implemented in C++ which translates Rust to C11. It’s specifically designed to bootstrap the Rust compiler.

                                But is mrustc reproducible, and does it produce a reproducible compiler across different environments? (See Diverse Double Compilation from Wheeler.)

                                The rustc compiler supports reproducible builds.

                                Is the rustc compiler reproducible or does it produce reproducible binaries?

                                The only real-world mitigation I know of is the GNU Mes C compiler, which can bootstrap different versions of gcc that can in turn reproduce the same GNU Mes C compiler.

                                https://reproducible-builds.org/news/2019/12/21/reproducible-bootstrap-of-mes-c-compiler/

                                1. 1

                                  mrustc can be built with a trusted C++ compiler and build rustc with a trusted C compiler. This changes the problem from bootstrapping Rust to bootstrapping C/C++, and that’s been solved.

                                  In practice people have bootstrapped Rust using both available paths: from the pre-1.0 OCaml compiler, and from mrustc.

                                  1. 1

                                    What is a “trusted C++ compiler”? The point of Trusting Trust is the fact that achieving one is the problem.

                                    1. 2

                                      It’s a bootstrapped C++ compiler: one that has been built from a bootstrapped C compiler, which has been built from a bootstrapped assembler, which you were able to verify by hand.

                                      Please note that for the purpose of reliably building a compiler binary that corresponds to its source code, the Trusting Trust problem has been solved using the Diverse Double Compilation method.
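                                      Schematically, DDC looks like this (a sketch: build_compiler is a hypothetical wrapper around the compiler-under-test’s build system, not mrustc’s actual interface):

                                      #!/bin/sh
                                      # build_compiler CC OUTDIR: hypothetical wrapper that builds the
                                      # compiler-under-test's sources with the given compiler
                                      build_compiler() { CC="$1" make -C compiler-src OUT="$2"; }

                                      build_compiler "$TRUSTED_CC" stage1-trusted  # built by an independent compiler
                                      build_compiler "$SUSPECT_CC" stage1-suspect  # built by the suspect binary

                                      # each stage-1 compiler rebuilds the same sources; given reproducible
                                      # builds, a self-propagating backdoor shows up as a binary difference
                                      build_compiler stage1-trusted/cc stage2-a
                                      build_compiler stage1-suspect/cc stage2-b
                                      cmp stage2-a/cc stage2-b/cc && echo "stage-2 binaries are bit-identical"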

                                      And if you mean to bring up “but what about unknowable hardware backdoors, how can you trust anything at all, ever?”: that is a problem with modern hardware, but it also makes the whole question entirely meaningless, and out of scope of being Rust’s problem, as it doesn’t distinguish (not) trusting rustc from (not) trusting any other software running on contemporary hardware.

                                      1. 1

                                        It’s a bootstrapped C++ compiler. One that has been built from a bootstrapped C compiler, which has been built from a bootstrapped assembly, which you were able to verify by hand. Please note that for the purpose of reliably building a compiler binary that corresponds to its source code, the Trusting Trust problem has been solved using the Diverse Double Compilation method.

                                        You are describing the process but don’t explain how it’s implemented with mrustc. The existence of some of these pieces does not imply it’s been achieved. Achieving a “bootstrapped C++ compiler” is not easy and can’t just be assumed.

                                        Diverse Double Compilation is a technique we haven’t actually managed to achieve, except for a weak proof with the GNU Mes C compiler. If you think mrustc has made any advances here, it would be great to see.

                                        And if you mean to bring “but what about unknowable hardware backdoors, how can you trust anything at all ever?”.

                                        No, I’m ignoring this problem as it’s not really relevant to the Trusting Trust problem.

                                        1. 1

                                          I’m describing the process, because the process is the definition. A bootstrapped compiler is one that has been built using the bootstrapping process.

                                          For the process to be possible we have to assume there exists a computer that can be trusted to execute binaries run on it (if that step can’t be satisfied you give up on computing, and go live in the woods). From there you can manually verify and build backdoor-free compilers of gradually increasing complexity until you build a trusted C++ compiler that builds mrustc that builds rustc.

                                          In other words, if a trusted C++ compiler can exist, then a trusted Rust compiler can exist too, and mrustc is the key to achieving it.

                              1. 9

                                Working on this I found myself jealous of GDB and Rust having source listings from debuginfod while delve didn’t have it. So obviously I wrote the patch for it. https://github.com/go-delve/delve/pull/2885

                                Terrible being jealous because of gdb.

                                1. 6

                                  Been working on debug packages for Arch Linux for probably around one and a half to two years now?

                                  Deployed the repository handling in December for Arch Linux and wrote up the debug package detection in our developer tools 2 weeks ago.

                                  Spent the weekend writing up some patches for pacman to enhance the debug package support. There is no pretty way to figure out whether a given package is a debug package or just a poorly named package, so I implemented pkgtype to make this easier for future tooling.

                                  Then I was told RPM split out its debug package handling into debugedit! So I swapped out the ugly AWK hack pacman had and made it use the new tool instead. This enables cool things like proper debug packages for Go, and fixes a few long-standing bugs which have been roadblocks for properly supporting debug packages in other languages (I hope; haven’t tested properly).

                                  In the end Arch Linux will have debuginfod in a couple of weeks hopefully.
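                                  Once the server is live, client setup should amount to a single environment variable (DEBUGINFOD_URLS is the standard client knob; the exact Arch URL below is my assumption):

                                  export DEBUGINFOD_URLS="https://debuginfod.archlinux.org"
                                  gdb /usr/bin/some-binary   # gdb fetches debug info and sources on demand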

                                  This relates to what I’m doing this week:

                                  • Play Wartales
                                  • Write up one or two blog posts covering the above work in more detail.
                                    • I’m lazy and might only get started on this over the week.
                                  • Probably cleaning up some of the above patch submissions.
                                  1. 1

                                    I’m so looking forward to this. It would help with development so much, even with optimized builds.

                                  1. 50

                                    I assume some people don’t like Facebook, so I reformatted the text and included it here:

                                    This is written by Jon “maddog” Hall

                                    This is the long-promised Christmas present to all those good little girls and
                                    boys who love GNU/Linux.
                                    
                                    It was November of 1993 when I received my first CD of what was advertised as "A
                                    complete Unix system with source code for 99 USD".   While I was dubious about
                                    this claim (since the USL vs BSDi lawsuit was in full swing) I said "What the
                                    heck" and sent away my 99 dollars, just to receive a thin booklet and a CD-ROM
                                    in the mail.   Since I did not have an Intel "PC" to run it on, all I could do
                                    was mount the CD on my MIPS/Ultrix workstation and read the man(1)ual pages.
                                    
                                    I was interested, but I put it away in my filing cabinet.
                                    
                                    About February of 1994 Kurt Reisler, Chair of the UNISIG of DECUS started
                                    sending emails (and copying me for some reason) about wanting to bring this
                                    person I had never heard about from FINLAND (of all places) to talk about a
                                    project that did not even run on Ultrix OR DEC/OSF1 to DECUS in New Orleans in
                                    May of 1994.
                                    
                                    After many emails and no luck in raising money for this trip I took mercy on
                                    Kurt and asked my management to fund the trip.   There is much more to this
                                    story, requiring me to also fund a stinking, weak, miserable Intel PC to run
                                    this project on, but that has been described elsewhere.
                                    
                                    Now I was at DECUS.  I had found Kurt trying to install this "project" on this
                                    stinking, weak, miserable Intel PC and not having much luck, when this nice
                                    young man with sandy brown hair, wire-rim glasses, wool socks and sandals came
                                    along.  In a lilting European accent, speaking perfect English he said "May I
                                    help you?" and ten minutes later GNU/Linux was running on that stinking, weak,
                                    miserable Intel PC.
                                    
                                    I sat down to use it, and was amazed. It was good. It was very, very good.
                                    
                                    I found out that later that day Linus (for of course it was Linus Torvalds) was
                                    going to give two talks that day.  One was "An Introduction to Linux" and the
                                    other was "Implementation Issues in Linux".
                                    
                                    Linus was very nervous about giving these talks.   This was the first time that
                                    he was giving a talk at a major conference (19,000 people attended that DECUS)
                                    to an English-speaking audience in English.   He kept feeling as if he was going
                                    to vomit.   I told him that he would be fine.
                                    
                                    He gave the talks.  Only forty people showed up to each one, but there was great
                                    applause.
                                    
                                    The rest of the story about steam driven river boats, strong alcoholic drinks
                                    named "Hurricanes", massive amounts of equipment and funding as well as
                                    engineering resources based only on good will and handshakes have been told
                                    before and in other places.
                                    
                                    Unfortunately the talks that Linus gave were lost.
                                    
                                    Until now.
                                    
                                    As I was cleaning my office I found some audio tapes made of Linus' talk, and
                                    which I purchased with my own money.  Now, to make your present, I had to buy a
                                    good audio tape playback machine and capture the audio in Audacity, then produce
                                    a digital copy of those tapes, which are listed here.  Unfortunately I do not
                                    have a copy of the slides, but I am not sure how many slides Linus had.  I do
                                    not think you will need them.
                                    
                                    Here is your Christmas present, from close to three decades ago.   "Happy
                                    Linuxing" to all, no matter what your religion or creed.
                                    
                                    And if you can not hear the talks, you are probably using the wrong browser:
                                    

                                    Introduction to Linux:

                                    https://drive.google.com/file/d/1H64KSduYIqLAqnzT7Q4oNux4aB2-89VE/view?usp=sharing

                                    Implementation Issues with Linux:

                                    https://drive.google.com/file/d/1Y3EgT3bmUyfaeA_hKkv4KDwIBCjFo0DS/view?usp=sharing

                                    1. 28

                                      Thanks!

                                      Also, I mirrored this on archive.org so people can find it after Google no doubt caps the downloads.

                                      https://archive.org/details/199405-decusnew-orleans

                                      1. 13

                                        Thanks! I really appreciate you posting the text.

                                        It’s not so much that I don’t like Facebook, as that I literally cannot read things that are posted there, because it requires login and I don’t have an account. In my professional opinion as a privacy expert, neither should anyone else, but I realize that most people feel there isn’t really a choice.

                                        1. 3

                                          I don’t have a Facebook account either (and agree that neither should anyone else), but this post is actually publicly available, so you should be able to read it without one. (I did; I got to the post via the RSS feed rather than the site, so I didn’t see this comment.)

                                          1. 1

                                            That’s very interesting and good to know. I wonder whether it checks referrer or something? I do definitely get a hard login wall when I click it here.

                                            (Sorry for the delayed reply!)

                                        2. 11

                                          Someone also linked the slides in the archive.org link :)

                                          http://blu.org/meetings/1994/08/

                                          1. 3

                                            Does anyone have links to the referenced anecdotes “described elsewhere”?

                                            1. 3

                                              This format on Lobsters is really bad on mobile with the x-overflow, weird.

                                              1. 5

                                                The parent put the quote in a code block instead of in a blockquote.

                                                1. 2

                                                  The link that @neozeed posted to archive.org has the same text and is much easier to read on a mobile device.

                                                2. 2

                                                Thumbs up @Foxboron. I usually go out of my way to isolate Facebook into a separate browser. I do have to say that this content was worth the Facebook tax.

                                                1. 4

                                                  The minimization part is cool. But I find running and experimenting with systemd inside of podman even cooler. I had no idea that one could do that.

                                                I mean, that makes sense now that I think about it: systemd is designed to run inside systemd-nspawn, which is technically almost the same as podman.
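                                                Trying it is a one-liner (a sketch: the --systemd flag is real podman, but the image choice is illustrative; Fedora’s base image ships systemd):

                                                podman run --rm -it --name sysd --systemd=always fedora /sbin/init
                                                # in another terminal, poke at the booted instance:
                                                podman exec -it sysd systemctl status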

                                                  1. 6

                                                  It also makes sense considering Docker never intended to have systemd running inside containers. This prompted Red Hat to more or less write podman.

                                                    https://lwn.net/Articles/676831/

                                                    1. 1

                                                    Wasn’t the article about the other side of the issue? I.e. Docker running on a systemd host?

                                                    Systemd always worked OK in any container runtime - Docker is not really aware of the container contents on that level anyway.

                                                  1. 6

                                                    I haven’t been to FOSDEM for ages and I really should. I gave up on the beer event the last couple of times because Delerium got so packed that you couldn’t hear people even if you were shouting in their ears, but if you went back on Sunday evening they were pretty empty and you could sample most of the beers and still have a good conversation.

                                                    A few of us used to go there a day or two early and hack in the lobby of the Novotel Grand Place. They seemed quite happy as long as we periodically bought coffee / beer (which were not more overpriced than anywhere else near that area). I don’t know how much of FOSDEM you can capture in an online event. For me, the talks were occasionally interesting (and I’ve given a few main-track talks and several devroom talks), but the people that you’d meet in between were amazing. The talks always felt more like an excuse for attending than the real reason - a chat over breakfast in the Novotel was often the highlight of the trips for me.

                                                    1. 4

                                                      I gave up on the beer event the last couple of times because Delerium got so packed that you couldn’t hear people even if you were shouting in their ears

                                                      The trick is to go to Floris Bar, which is right across the alley. They open a bit later and you can easily get a table. They accept the beer tokens too. :-)

                                                      I don’t know how much of FOSDEM you can capture in an online event. For me, the talks were occasionally interesting (and I’ve given a few main-track talks and several devroom talks), but the people that you’d meet in between were amazing.

                                                      Totally agree. There is the occasional interesting talk/devroom, but the “hallway track” is the best. Also the yearly bytenight party at Hackerspace Brussels is a great place to meet interesting folks.

                                                      1. 2

                                                        I haven’t been to FOSDEM for ages and I really should. I gave up on the beer event the last couple of times because Delerium got so packed that you couldn’t hear people even if you were shouting in their ears, but if you went back on Sunday evening they were pretty empty and you could sample most of the beers and still have a good conversation.

                                                        If you stand outside it’s nicer, frankly, but the queues are terrible :) In 2018 my phone got stolen there. The Wi-Fi password for the evening was, appropriately, “BeWarePickpockets”.

                                                      1. 1

                                                        It says that grub doesn’t verify secure boot signatures on the files it loads, but the last time I worked on it (2 years ago), the kernel had to be signed by the SB keys and all the files (initrd, configs, kernel, grub modules) had to be signed with GPG to work. Is this different now?

                                                        1. 1

                                                          There have been ~220 patches and around 30 (or so) CVEs for secure boot issues in GRUB, so it’s more complicated than that. When I was looking at this around the same time (2019), grub allowed you to boot unsigned kernels.

                                                          These days grub isn’t supposed to be used with secure boot without a shim.

                                                        1. 2

                                                          I think there is a typo in cyptdevice=

                                                          Not sure why sbctl is not mentioned and how it compares to the manual process, but otherwise great post! I’ll check out bgrt_disable.

                                                          1. 2

                                                            Yes, there is a typo! Thanks :)

                                                            Explaining secure boot signing felt out of scope for the blog post, which is more about simplifying the boot process with UEFI stubs than about trying to cram everything into one post about a reasonable boot setup. You’ll notice that I only briefly mention the systemd-boot features along with discoverable partitions, as everything at once would just be a huuuggee information overload.

                                                          1. 14

                                                            What’s going on here? How did this get to the top of lobste.rs with 26 upvotes? I’m happy for the OP that they could get their system to work, but as far as I can tell, the story here is “package manager used to manage packages.” We have been doing that for decades. Is there any way the community can get a lever to push back on thin stories like this one?

                                                            1. 25

                                                              Would it change your opinion if the article mentioned that the nix shell being used here is entirely disposable and this process leaves no mark in your OS setup? Also that even if this required some obscure versions of common system dependencies you could drop into such a shell without worrying about version conflicts or messing up your conventional package manager?

                                                              I agree that the article is thin on content, but I don’t think you can write this story off as “package manager used to manage packages”; I think nix shell is very magical in the package management world.

                                                              1. 6

                                                                I could do that with Docker too, and it would not leave a trace either.

                                                                1. 17

                                                                  Yes, but then you’d be inside a container, so you’d have to deal with the complexities of that, like mounting drives, routing network traffic, etc. With nix shell, you’re not really isolated; you’re just inside a shell session that has the necessary environment variables to provide just the packages you’ve asked for.

                                                                  Aside from the isolation, the nix shell is also much more composable. It can drop you into a shell that simultaneously has a strange Java, Python and Erlang environment, all compiled with your personal fork of GCC, and you’d just have to specify your GCC as an override for that to happen.
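                                                                  That kind of composition looks roughly like this (a sketch: overrideCC and mkShell are real nixpkgs machinery, but treat the exact attributes as assumptions rather than a tested expression):

                                                                  { pkgs ? import <nixpkgs> {} }:
                                                                  let
                                                                    myGcc = pkgs.gcc;  # stand-in for "your personal fork of GCC"
                                                                    myStdenv = pkgs.overrideCC pkgs.stdenv myGcc;
                                                                  in pkgs.mkShell {
                                                                    buildInputs = [
                                                                      pkgs.jdk
                                                                      pkgs.erlang
                                                                      (pkgs.python3.override { stdenv = myStdenv; })
                                                                    ];
                                                                  }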

                                                                  1. 4

                                                                    I get that, but I’d have to go through the learning curve of nix-shell, while I already know Docker, since I need it for my job anyway. I am saying that there are more ways to achieve what the article is talking about. It is fine that the author is happy with their choice of tools, but it is very unremarkable given the title and how many upvotes the article got.

                                                                    1. 5

                                                                      Why not learn nix and then use it at work as well :) Nix knows how to package up a nix-defined environment into a docker container and produce very small images, and you don’t even need docker itself to do that. That’s what we do at work. I’m happy because as far as I’m concerned Nix is all there is and the DevOps folks are also happy because they get their docker images.
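                                                                      For the curious, the image-from-Nix trick is only a few lines (a sketch: dockerTools.buildLayeredImage is the real nixpkgs function; the contents here are made up). nix-build on this produces a tarball you can feed to docker load, with no Docker daemon involved in the build:

                                                                      { pkgs ? import <nixpkgs> {} }:
                                                                      pkgs.dockerTools.buildLayeredImage {
                                                                        name = "my-service";
                                                                        config.Cmd = [ "${pkgs.hello}/bin/hello" ];  # whatever your service runs
                                                                      }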

                                                                      1. 3

                                                                        I work in a humongous company where the tools and things are less free to choose from atm, so even if I learned nix, it would be a very tough sell..

                                                                  2. 3

                                                                    As someone who hasn’t used Docker, it would be nice to see what that looks like. I’m curious how the two approaches compare.

                                                                    1. 6

                                                                      I think that the key takeaway is that with Docker, you’re actually running a container with a full-blown OS inside. I have a bias against it, which is basically just my opinion, so take it with a grain of salt.

                                                                      I think that once the way to solve the problem of “I need to run some specific version of X” becomes “let’s just virtualize a whole computer and OS, because dependency handling is broken anyway”, we, as a category, simply gave up. It is side-stepping the problem.

                                                                      Now, the approach with Nix is much more elegant. You have fully reproducible dependency graphs, and with nix-shell you can drop yourself in an environment that is suitable for whatever you need to run regardless of dependency conflicts. It is quite neat, and those shells are disposable. You’re not running in a container, you’re not virtualizing the OS, you’re just loading a different dependency graph in your context.

                                                                      See, I don’t use Nix at all because I don’t have these needs, but I played with it and was impressed. I dislike our current approach of “just run a container”; it feels clunky to me. I think Docker has its place, especially in DevOps and the like, but using it to solve “I need to run Python 2.x and it conflicts with my Python 3.x install” is not the way I’d like to see our ecosystem going.


                                                                      In the end, from a very high-level, almost stratospheric, point of view: both the docker and nix-shell workflows come down to the developer typing some commands in the terminal and having what they need running. So from the mechanical standpoint of needing to run something, they both solve the problem. I just don’t like that reaching for a whole virtualized environment is now the preferred solution.

                                                                      Just be aware that this is an opinion from someone heavily biased against containers. You should play with both of them and decide for yourself.

                                                                      1. 3

                                                                        This comment is a very good description of why I’ve never tried Docker (and – full disclosure – use Nix for things like this).

                                                                        But what I’m really asking – although I didn’t make this explicit – is a comparison of the ergonomics. The original post shows the shell.nix file that does this (although as I point out in another comment, there’s a shell one-liner that gets you the same thing). Is there an equivalent Dockerfile?
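                                                                        (For reference, the one-liner is along these lines; a sketch, with package names assumed from the article’s use case, not a quote from the other comment:)

                                                                        nix-shell -p 'python2.withPackages (ps: [ ps.psycopg2 ])' graphviz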

                                                                        I was surprised to see Docker brought up at all because my (uninformed) assumption is that making a Docker image would be prohibitively slow or difficult for a one-off like this. I assumed it would be clunky to start a VM just to run a single script with a couple dependencies. But the fact that that was offered as an alternative to nix-shell makes me think that I’m wrong, and that Docker might be appropriate for more ad-hoc things than I expected, which makes me curious what that looks like. It points out a gap in my understanding that I’d like to fill… with as little exertion of effort as possible. :)

                                                                        1. 4

                                                                          But the fact that that was offered as an alternative to nix-shell makes me think that I’m wrong, and that Docker might be appropriate for more ad-hoc things than I expected, which makes me curious what that looks like. It points out a gap in my understanding that I’d like to fill… with as little exertion of effort as possible. :)

                                                                          I think containers are a perfectly capable solution to this. The closest thing you can use would probably be toolbox.

                                                                          https://github.com/containers/toolbox

                                                                          It would even allow you to provide a standardized environment that is decoupled from the deployment itself (if that makes sense). It also mounts $HOME.

                                                                          1. 3

                                                                            I use Nix, but also have experience with Toolbox.

                                                                            I would recommend most people use Toolbox over nix-shell. With toolbox you can create one-off containers in literally seconds (it’s two commands). After entering the container you can just dnf install whatever you need. Your home directory gets mounted, so you do not have to juggle volumes, etc. If you need to create the same environment more often, you can create a Dockerfile and build your toolbox containers with podman. The upstream containers that Fedora provides are also just built using Dockerfiles.
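                                                                            Those two commands, concretely (the release flag is optional and defaults to the host’s):

                                                                            toolbox create --release f35
                                                                            toolbox enter
                                                                            # inside: sudo dnf install whatever-you-need; $HOME is already mounted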

                                                                            The post shows a simple use case, but if you want to do something less trivial, it often entails learning Nix the language and nixpkgs (and all its functions, idioms, etc.). And the Nix learning curve is steep (though it is much simpler if you are familiar with functional programming). This makes the toolbox approach orders of magnitude easier for most people - you basically need to know toolbox create and toolbox enter and you can use all the knowledge that you already have.

                                                                            However, a very large shortcoming of toolbox/Dockerfiles/etc. is reproducibility. Sure, you can pass around an image and someone else will have the same environment. But Nix allows you to pin all dependencies plus the derivations (e.g. as a git SHA). You can give someone your Nix flake and they will have exactly the same dependency graph and build environment guaranteed.

                                                                            Another difference is that once you know Nix, it is immensely powerful for defining packages. Nix is a Turing-complete functional language, so nixpkgs can provide a lot of powerful abstractions. I dread every time I have to create or modify an RPM spec file, because it is so primitive compared to writing a Nix derivation.

                                                                            tl;dr: most people will want to use something like Toolbox; it is familiar and provides many of the same benefits as e.g. nix-shell (isolated, throw-away environments, with your home directory available). However, if you want strong reproducibility across systems and a more powerful packaging/configuration language, learning Nix is worth it.

                                                                          2. 3

                                                                            A cool aspect of Docker is that it has a gazillion images already built and available for it. So depending on what you need, you’ll find a ready-made image you can put to good use with a single command. If there are no images that fill your exact need, then you’ll probably find an image that is close enough and can be customised. You don’t need to create images from scratch. You can remix what is already available. In terms of ergonomics, it is friendly and easy to use (for these simple cases).

                                                                            So, nixpkgs has a steeper learning curve compared to Dockerfiles. It might be simpler to just run Docker. What I don’t like is what is happening inside Docker, and how the solution to what looks like a simple problem involves running a whole OS.

                                                                            I’m aware that you can have containers without an OS, as described in this thread, but that is not something I often see people using in the wild.

                                                                          3. 1

                                                                            Nit-pick: AFAIK one doesn’t really need Alpine or any other distro inside the container. It’s “merely” for convenience. AFAICT it’s entirely possible to e.g. run a Go application in a container without any distro. See e.g. https://www.cloudbees.com/blog/building-minimal-docker-containers-for-go-applications
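                                                                            The usual shape of that is a two-stage build (a sketch; the linked article covers the details):

                                                                            FROM golang:1.17 AS build
                                                                            WORKDIR /src
                                                                            COPY . .
                                                                            RUN CGO_ENABLED=0 go build -o /app .

                                                                            FROM scratch
                                                                            COPY --from=build /app /app
                                                                            ENTRYPOINT ["/app"]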

                                                                      2. 3

                                                                        Let’s assume nix shell is actual magic — like sourcerer level, wave my hand and airplanes become dragons (or vice versa) magic — well this article just demonstrated that immense power by pulling a coin out of a deeply uncomfortable kid’s ear while pulling on her nose.

                                                                        I can’t speak for the previous comment’s author, but those extra details, or indeed any meat on the bones, would definitely help justify this article’s otherwise nonsensical ranking.

                                                                        1. 2

                                                                          Yeah, I agree with your assessment. This article could just as well have the title “MacOS is so fragile, I consider this simple thing to be an issue”. The trouble with demonstrating nix shell’s power is that for all the common cases, you have a variety of ad-hoc solutions. And the truly complex cases appear contrived out of context (see my other comment, which you may or may not consider to be turning airplanes into dragons).

                                                                      3. 19

                                                                        nix is not the first thing most devs would think of when faced with that particular problem, so it’s interesting to see reasons to add it to your toolbox.

                                                                        1. 9

                                                                          Good, as it is not supposed to be the first thing. Learning a fringe system with a new syntax just to do something trivial is not supposed to be the first thing at all.

                                                                        2. 4

                                                                          I also find it baffling that this story has more upvotes than the excellent and original code-visualization article that is currently also very high. Probably some nix upvote ring pushing this.

                                                                          1. 12

                                                                            Or folks just like Nix I guess? 🤷

                                                                            1. 11

                                                                              Nix is cool and people like it.

                                                                              1. 5

                                                                                I didn’t think this article was amazing, but I found it more interesting than the code visualization one, which lost me at the first, “From this picture, you can immediately see that X,” and I had to search around the picture for longer than it would have taken me to construct a find command to find the X it was talking about.

                                                                                This article, at least, caused me to say, “Oh, that’s kind of neat, wouldn’t have thought of using that.”

                                                                              2. 6

                                                                                This article is useless. It is way simpler (and the python way) to just create a 2.7 virtualenv and run “pip install psycopg2 graphwiz”. No need to write a nix file, and then write a blog post to convince yourself you didn’t waste your time!
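                                                                                (Spelled out, that alternative is the following sketch; it assumes a Python 2.7 interpreter is still installed, and that the intended PyPI package is graphviz:)

                                                                                virtualenv --python=python2.7 venv
                                                                                source venv/bin/activate
                                                                                pip install psycopg2 graphviz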

                                                                                Considering all nix posts get upvoted regardless of content, it’s about time we had a “nix” tag added to the site.

                                                                                1. 14

                                                                                  This article is not useless just because you don’t see its value.

                                                                                  I work mainly with Ruby and have to deal with old projects. There are multiple instances where the Ruby way (using a Ruby version manager) did not work because it was unable to install an old Ruby version or gem on my new development machine. Using a nix-shell did the job every time.

                                                                                  just create a 2.7 virtualenv and run “pip install psycopg2 graphwiz”

                                                                                  What do you do if this fails due to some obscure dependency problem?

                                                                                  1. 4

                                                                                    What do you do if this fails due to some obscure dependency problem?

                                                                                    Arguably you solve it by pinning dependency versions in the pip install invocation or requirements.txt, as any Python developer not already using Nix would do.

                                                                                    This article is not useless just because you don’t see its value.

                                                                                    No, but it is fairly useless because it doesn’t do anything to establish that value, except to the choir.

                                                                                    1. 2

                                                                                      In my experience there will be a point where your dependencies fail due to mismatched OpenSSL or glibc versions and so on. No amount of pinning dependencies will protect you against that. The only way out is to update your dependencies and the version of your language. But that would just detract from your goal of getting an old project to run, or is straight up impossible.

                                                                                      Enter Nix: You pin the entire environment in which your program will run. In addition you don’t pollute your development machine with different versions of libraries.

                                                                                      1. 3

                                                                                        Arguably that’s just shifting the burden of effort based on a value judgement. If your goal is to get an old project to run while emphasizing the value of incurring zero effort in updating it, then obviously Nix is a solution for you and you’ll instead put the effort into pinning its entire runtime environment. If, however, your value to emphasize is getting the project to run then it may well be a more fruitful choice to put the effort into updating the project.

                                                                                        The article doesn’t talk about any of the hairier details you’re speaking to, it just shows someone taking a slightly out of date Python project and not wanting to put any personal effort into updating it… but updating it by writing a (in this case relatively trivial) Python 3 version and making that publicly available to others would arguably be the “better” solution, at least in terms of the value of contributing back to the community whose work you’re using.

                                                                                        But ultimately my argument isn’t with the idea that Nix is a good solution to a specific problem, it’s that this particular article doesn’t really make that point, and certainly doesn’t convincingly demonstrate the value of adding another complex bit of tooling to the toolkit. All the points you’ve raised would certainly help make that argument, but they’re sadly not present in this particular article.

                                                                                    2. 1

                                                                                      Just out of curiosity: I’m also dealing with ancient Ruby versions and use Nix at work, but I couldn’t figure out how to get old enough versions. Is there something that helps with that?

                                                                                        1. 1

                                                                                          Thank you, very helpful!

                                                                                          1. 1

                                                                                            Do note this method will get you a ruby linked to dependencies from the same checkout. In many cases this is what you want.

                                                                                            If instead you want an older ruby but linked to newer libraries (eg, OpenSSL) there’s a few extra steps, but this is a great jumping off point to finding derivations to fork.

                                                                                            1. 1

                                                                                              Do note this method will get you a ruby linked to dependencies from the same checkout. In many cases this is what you want.

                                                                                              Plus glibc, OpenSSL and other dependencies with many known vulnerabilities. This is fine for local stuff, but definitely not something you’d want to do for anything that is publicly visible.

                                                                                              Also, note that mixing different nixpkgs versions does not work when an application uses OpenGL, Vulkan, or any GPU-related drivers/libraries. The graphics stack is global state in Nix/NixOS and mixing software with different glibc versions quickly goes awry.

                                                                                        2. 2

                                                                                          This comment mentions having done something similar with older versions by checking out an older version of the nixpkgs repo that had the version of the language that they needed.
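
                                                                                          A minimal sketch of that approach (the release tag and Ruby attribute below are assumptions; check the nixpkgs history for a revision that still carries the version you need):

                                                                                              nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/18.03.tar.gz \
                                                                                                -p ruby_2_3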

                                                                                          1. 2

                                                                                            Like others have already said, you can just pin nixpkgs. Sometimes there is more work involved. For example, this is the current shell.nix for a Ruby on Rails project that wasn’t touched for 5 years. I’m in the process of setting up a reproducible development environment to get development going again. As you can see, I have to jump through hoops to get Nokogiri to play nicely.

                                                                                            There is also a German blog post with shell.nix examples in case you need inspiration.

                                                                                        3. 4

                                                                                          This example, perhaps. I recently contributed to a Python 2 code base, and running it locally was very difficult due to C library dependencies. The best I could do at the time was a Dockerfile (which I contributed with my changes) to encapsulate the environment. However, even from the container standpoint, fetching dependencies is still just as nebulous as “just apt install xyz”. Changes to the base image, to an ambiently available dependency, or the distro simply turning off package repositories for unsupported versions will break the container build. In the Nix case, the user is more or less forced to spell out completely what the code needs; combine that with flakes and I have a lockfile not only for my Python dependencies, but effectively for the entire shell environment.
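
                                                                                          The flakes part of that is small (a sketch; flakes are still experimental, so the exact commands may shift):

                                                                                              nix flake init   # drop a flake.nix template into the repo
                                                                                              nix flake lock   # pin every input, nixpkgs included, in flake.lock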

                                                                                          More concretely, at work, the powers that be wanted to deploy Python to an old armv7 SoC running on a device. Some of the Python code requires C dependencies like OpenSSL, the protobuf runtime, and other things, and it was hard to cross compile this for the target. Yes, for development it works as you describe: you just use a venv, pip install (pipenv, poetry, or whatever as well) and everything is peachy. Then comes deployment:

                                                                                          1. First you need to make a cross-compiled Python interpreter, which involves first building the interpreter for your host triple, then rebuilding the same source for the target triple while telling the build process where the host build is. This also ignores that some important parts of the interpreter, like ctypes, may not build.
                                                                                          2. Learn every environment variable you need to expose to setup.py, or to the umpteenth build/packaging solution the Python project you want to deploy happens to use, and hope it generates a wheel. We will conveniently ignore that every package with C dependencies may use cmake, or make, or meson, etc., etc…
                                                                                          3. Make the wheels available to the image you actually ship.

                                                                                          I was able to crank out a proof of concept in a small Nix expression that gave me a shell running the Python interpreter I wanted, with the Python dependencies I needed, on both the host and the target, and I barely had to think. Nixpkgs even gives you cross-compiling capabilities; see the sketch below.
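
                                                                                          A minimal illustration of that capability, using nixpkgs’ pkgsCross set (the armv7 attribute below is one of the real pkgsCross targets, but pick whichever matches your SoC; whether every package cross-builds cleanly is another matter):

                                                                                              # Cross-build a Python interpreter for an armv7 hard-float target:
                                                                                              nix-build '<nixpkgs>' -A pkgsCross.armv7l-hf-multiplatform.python3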

                                                                                          1. 1

                                                                                            Your suggested plan is two years out of date, because CPython 2.7 is officially past its end of life and Python 2 packages are generally no longer supported by upstream developers. This is the power of Nix: Old software continues to be available, as if bitrot were extremely delayed.

                                                                                            1. 3

                                                                                              CPython 2.7 is available in Debian stable (even testing and sid!), CentOS, and RHEL. Even on macOS it is still the default Python that ships with the system. I don’t know why you think it is no longer available in any distro other than Nix.

                                                                                        1. 2

                                                                                          When will all their mirrors support HTTPS? Downloading something over HTTP or even FTP does not feel like 2021.

                                                                                          1. 12

                                                                                            If they do this right (signed packages and so on), then HTTPS will only help with privacy. That is important for sure, but leaking which packages you download is less horrible than leaking the contents of your conversations, or even just who you’ve been in contact with lately.

                                                                                            1. -1

                                                                                              HTTPS is more than just privacy. It also prevents JavaScript injection via ISPs, or any other MITM.

                                                                                              1. 21

                                                                                                It does that for web pages, not for packages. Packages are signed by the distro’s keys, so if anyone were to mess with your packages as you download them, your package manager would notice and prevent you from installing the package. The only real advantage of HTTPS for package distribution is that it helps conceal which packages you download (though even then, I’d guess an attacker could get a pretty good idea just by seeing which server you’re downloading from and how many bytes you’re downloading).
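
                                                                                                For illustration, this is roughly the check apt performs behind the scenes: verifying the archive’s clear-signed index against the distro keyring (the paths below are the stock Debian ones and may differ on your system):

                                                                                                    gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg \
                                                                                                      /var/lib/apt/lists/deb.debian.org_debian_dists_stable_InRelease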

                                                                                                1. 1

                                                                                                  It does that for web pages, not for packages

                                                                                                  Indeed; however, ISOs, USB installers, etc. can still be downloaded from the web site.

                                                                                                  Packages are signed by the distro’s keys, so if anyone were to mess with your packages as you download them, your package manager would notice and prevent you from installing the package.

                                                                                                  Yes, I’m familiar with cryptographic signatures.

                                                                                                  1. 9

                                                                                                    Indeed, however ISOs, USB installers, etc. can still downloaded from the web site.

                                                                                                    Yes. The Debian website uses HTTPS, and it looks like the images are distributed using HTTPS too. I thought we were talking about distributing packages using HTTP vs HTTPS. If your only point is that the ISOs should be distributed over HTTPS, then of course I agree, and the Debian project seems to as well.

                                                                                                    1. 0

                                                                                                      No, the point is that there is no need for HTTP when HTTPS is available. Regardless of traffic, all HTTP should redirect to HTTPS IMNSHO.

                                                                                                      1. 16

                                                                                                        But… why? Your argument for why HTTPS is better is that it prevents JavaScript injection and other forms of MITM. But MITM clearly isn’t a problem for package distribution. Is your argument that “HTTPS protects websites against MITM so packages should use HTTPS (even though HTTPS doesn’t do anything to protect packages from MITM)”?

                                                                                                        I truly don’t understand what your reasoning is. Would you be happier if apt used a custom TCP-based transport protocol instead of HTTP?

                                                                                                        1. 6

                                                                                                          I suspect that a big reason is cost.

                                                                                                          Debian mirrors will be serving an absurd amount of traffic, and will probably want to serve data as close to wire speed as possible (likely 10G). Adding a layer of TLS on top means you need to spend money on a powerful CPU or accelerator kit, instead of (mostly) shipping bytes from the disk to the network card.

                                                                                                          Debian isn’t made of money, and sponsors won’t want to spend more than they absolutely have to.

                                                                                                          1. 4

                                                                                                            But MITM clearly isn’t a problem for package distribution.

                                                                                                            It is though! Package managers still accept untrusted input data and usually do some parsing on it. apt has had vulnerabilities and pacman as well.

                                                                                                            https://justi.cz/security/2019/01/22/apt-rce.html

                                                                                                            https://xn--1xa.duncano.de/pacman-CVE-2019-18182-CVE-2019-18183.html

                                                                                                            TLS would not prevent malicious mirrors in either of these cases, but it would prevent MITM attacks exploiting these issues.

                                                                                                            1. 7

                                                                                                              Adding TLS implementations also brings bugs, including RCEs. And Debian is using GnuTLS for apt.

                                                                                                              1. 1

                                                                                                                Indeed. It was one of the reasons for OpenBSD to create signify, so I’m delighted to see Debian’s new approach might be based on it.

                                                                                                                From https://www.openbsd.org/papers/bsdcan-signify.html:

                                                                                                                … And if not CAs, then why use TLS? It takes more code for a TLS client just to negotiate hello than in all of signify.

                                                                                                                The first most likely option we might consider is PGP or GPG. I hear other operating systems do so. The concerns I had using an existing tool were complexity, quality, and complexity.
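
                                                                                                                For comparison, verifying an OpenBSD release set with signify is one small operation (the key and file names below are from a typical release directory):

                                                                                                                    # Verify the signed checksum list and check a file against it in one step:
                                                                                                                    signify -C -p /etc/signify/openbsd-70-base.pub -x SHA256.sig base70.tgz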

                                                                                                      2. 7

                                                                                                        @sandro originally said: “When will all their mirrors support HTTPS?” Emphasis on “mirrors”. To the best of my knowledge, “mirror” in this context does not refer to a web site, or a copy thereof, but to a package repository.

                                                                                                        I responded specifically in this context. I was not talking about web sites, which rely on the transport mechanism for all security. In the context I was responding to, each package is signed. Your talk of JavaScript injection and other MITM attacks is simply off topic.

                                                                                                2. 9

                                                                                                  ftp.XX.debian.org names are CNAMEs to servers that have agreed to host a mirror. These servers are handled by unrelated organisations, so it is not possible to provide a proper cert for them. This matches the level of trust: mirrors are trusted with neither the content nor your privacy. This is not the case for deb.debian.org, which is available over HTTPS if you want (ftp.debian.org is an alias for it).

                                                                                                  1. 2

                                                                                                    Offline mirrors, people without direct internet access, offline archives decades later, people in the future, local DVD sets.

                                                                                                    Why “trust” silent media?

                                                                                                  1. 3

                                                                                                    I have used NixOS as my daily driver for a couple of months now and I love it. However, I have a very superficial understanding of its architecture, so I struggle to make sense of this. I’ve read https://r13y.com/ already, but it left me with more questions:

                                                                                                    1. What is being compared to determine whether two builds are consistent with one another? (diffoscope?) Isn’t the output necessarily different due to hardware optimizations? Are they turned off for the purposes of these tests?
                                                                                                    2. Does reaching the 100% threshold unlock new capabilities or use-cases?
                                                                                                    3. Are there other 100% reproducible (non toy) operating systems? How non-reproducible are other OSes?
                                                                                                    4. Were there any particularly challenging non-reproducible components?
                                                                                                    1. 7

                                                                                                      Yocto Project (an embedded Linux system) also has a reproducibility status page:

                                                                                                      https://www.yoctoproject.org/reproducible-build-results/

                                                                                                      Here is their wiki page about the topic: https://wiki.yoctoproject.org/wiki/Reproducible_Builds

                                                                                                      1. 2

                                                                                                        Thank you. The documentation is very precise, which I find reassuring.

                                                                                                      2. 5

                                                                                                        What is being compared to determine whether two builds are consistent with one another? (diffoscope?) Isn’t the output necessarily different due to hardware optimizations? Are they turned off for the purposes of these tests?

                                                                                                        In my understanding, reproducible builds require that you target the same hardware, e.g. arm64 without any extended instruction sets. Non-deterministic optimizations need to be turned off for that. https://reproducible-builds.org/docs/ is a nice resource listing the things that make reproducible builds complicated in practice.
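
                                                                                                        As a concrete illustration of the comparison step (the ISO file names are hypothetical; nix-build and diffoscope are the real tools):

                                                                                                            nix-build '<nixpkgs>' -A hello          # first build, symlinked at ./result
                                                                                                            nix-build '<nixpkgs>' -A hello --check  # rebuild; fails if the output differs
                                                                                                            diffoscope first.iso second.iso         # detailed report of where two artifacts diverge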

                                                                                                        Does reaching the 100% threshold unlock new capabilities or use-cases?

                                                                                                        Yes, one can assert whether a given ISO image matches upstream sources and hasn’t had any backdoors or the like baked into the binary, without disassembling it. This ability is lost if you are <100%.

                                                                                                        Are there other 100% reproducible (non toy) operating systems? How non-reproducible are other OSes?

                                                                                                        None that I know of, but many are working on it, see https://reproducible-builds.org/projects/

                                                                                                        1. 2

                                                                                                          Yes, one can assert whether a given ISO image matches upstream sources

                                                                                                          The act of verifying removes the need for verification. When you build it yourself to check, you no longer need to check. Just use your build artifacts.

                                                                                                          Reproducible builds are nice for other reasons, e.g. caching by hash in distributed builds, but they’re security snake oil.

                                                                                                          Finally: if you’ve got a trusting trust attack, you can have a backdoor with no evidence in the code, which still builds reproducibly.

                                                                                                          1. 2

                                                                                                            If you’re building it yourself to check, you no longer need to check. Just use your build artifacts. This is security theater.

                                                                                                            It’s not. It would explicitly have prevented the Linux Mint ISO replacement attack we saw 6 years ago.

                                                                                                            https://blog.linuxmint.com/?p=2994

                                                                                                            (For context: the parent comment is just parroting talking points from Tavis.)

                                                                                                            1. 1

                                                                                                              It’s not. It would explicitly have prevented the Linux Mint ISO replacement attack we saw 6 years ago.

                                                                                                              Can you explain how anyone would have noticed without building the ISO from scratch?

                                                                                                              1. 3

                                                                                                                I think preventing it is hard because there are so many avenues to exploit, but reproducible builds can help you determine whether a build has been compromised. If you don’t know whether the attacker managed to alter your build artifacts, you can just rebuild them and do a byte-for-byte comparison. If your builds aren’t reproducible, you have to look at what the differences are: are they changed timestamps? optimization levels? reordered files? etc
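
                                                                                                                With reproducible builds, that comparison collapses to a hash check (the file names here are hypothetical):

                                                                                                                    sha256sum downloaded.iso my-rebuild.iso   # identical hashes mean an untampered artifact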

                                                                                                                1. 2

                                                                                                                  You need to build it yourself to check, but you could notify others if the hashes mismatch, as that would be much more suspect than it would be for non-reproducible software. Independent third parties could build other people’s ISOs on a regular basis to check.

                                                                                                                  1. 2

                                                                                                                    Also, I forgot the obvious circumvention (beyond the trusting trust attack): being lazy and putting the exploit into the distributed source code. Since in practice nobody actually audits the code they run, this would effectively sidestep any benefits from reproducible builds. Signing the ISO gets you a hell of a lot more bang for the buck.

                                                                                                                    1. 2

                                                                                                                      Reproducible Builds only concerns itself with the distribution network and the build server. It can’t solve compromises of the input, because that is not the goal; we need other initiatives to solve that part. Reproducible Builds is only part of the puzzle, and people like you and Tavis really struggle to see that. I don’t know why.

                                                                                                                      This is very much like claiming memory safety issues are pointless to mitigate since logic bugs will still exist. But wouldn’t eliminating memory safety issues remove a good chunk of the attack surface? Isn’t that a net gain?

                                                                                                                      1. 1

                                                                                                                        I can get the claimed benefits of reproducible builds by taking the exact steps I’d need to verify them – running a compiler and deploying the output.

                                                                                                                        If you can tell me how to get memory safety by running a compiler once over existing code, with no changes and no runtime costs, I’d also call any existing memory safety efforts snake oil.

                                                                                                                        Again, if you’re concerned about the security problems that reproducible builds claim to solve, you can solve them today with no code changes. Just run the builds.

                                                                                                                        1. 1

                                                                                                                          Again, if you’re concerned about the security problems that reproducible builds claim to solve, you can solve them today with no code changes. Just run the builds.

                                                                                                                          I have better things to do than swap out my distribution for Gentoo and pretend it solves the problem.

                                                                                                                      2. 1

                                                                                                                        Yes of course, my claim was that one could check whether the binary matches the sources, not that it magically solves all security issues. You are right that people need to be able to trust their toolchains in the first place (trusting trust), but this is true for all software, reproducible or not.

                                                                                                                        Another initiative in this direction are “bootstrappable builds”, https://www.bootstrappable.org/

                                                                                                                    2. 2

                                                                                                                      Nobody! And even fewer people if the ISO build is not reproducible. Which is the point of reproducible builds.

                                                                                                                      Ensuring we can reproduce a bit-for-bit identical artifact means we can validate the work even after a signing key compromise. Without reproducible builds we are left to our own devices and have no way to even start validating it.

                                                                                                            1. 4

                                                                                                              Since your original secure boot posts I have released a version of sbctl, which should make secure boot key enrollment and signing easier than it is today.

                                                                                                              https://github.com/Foxboron/sbctl/

                                                                                                              KeyTool.efi shouldn’t be needed with either sbkeysync (which sbctl uses) or efi-updatevar. bootctl is also getting secure boot key enrollment soon. https://github.com/systemd/systemd/pull/18716

                                                                                                              Hopefully securing the boot chain will get easier with systemd-cryptenroll and easier access to secure boot tooling :) I have been working on a similar blog post to showcase the unified kernel image support I’ve been implementing in mkinitcpio, along with the aforementioned tools. A sketch of the sbctl workflow is below.
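
                                                                                                              A sketch of that sbctl workflow, going by the project README (subcommands may of course change between versions):

                                                                                                                  sbctl create-keys                    # generate your own secure boot keys
                                                                                                                  sbctl enroll-keys                    # enroll them while the firmware is in setup mode
                                                                                                                  sbctl sign -s /boot/vmlinuz-linux    # sign the kernel; -s saves it for automatic re-signing
                                                                                                                  sbctl verify                         # check that everything in the boot chain is signed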

                                                                                                              1. 1

                                                                                                                Hi Foxboron. Nice, I starred your repository. Meanwhile, I have been developing a dracut UEFI hook for Arch Linux that is simple enough to just back up the main unified image and create a new one. Just for personal use so far.

                                                                                                                I was initially driven by that old thread about mkinitcpio being deprecated, but now I’m using dracut because of the early network modules and the possibility of further integration with tang+clevis, using something like an RPi at home as the Tang server for automatic disk decryption.

                                                                                                                1. 2

                                                                                                                  Nice, thanks :)

                                                                                                                  I also have started on some UEFI stub implementation for mkinitcpio which is going to make things easier on that end as well. https://github.com/archlinux/mkinitcpio/pull/53

                                                                                                                  I don’t think clevis is necessarily going to be a thing much longer if the usability of the systemd tooling improves (at least on systemd distros).