Threads for gwoplock

    1. 2

      Well, my router decided to lose its management interface and most of the controller software. So I get to factory reset it, hope that fixes it, and rebuild my network. Hopefully I can get it back up and running so that I can get it to cooperate with my OpenThread border router…

    2. 2

      My Raspberry Pi finally arrived, so I’ll be diving headfirst into OpenThread and hopefully finishing up my IoT project.

    3. 1

      Saturday, I’m participating in an orienteering competition in town. Sunday, between chores, I think I’m going to start writing the first part of a bunch of posts on my blog about building an OpenThread-based sensor module with Zephyr.

    4. 46

      I’d rather construct a build graph out of my own intestines than use make

      1. 39

        Even that would be an improvement over webpack and friends

        1. 8

          I’ll admit I don’t know webpack very well since I’m not primarily a front-end developer. However, every single time I touch webpack, and sometimes even when I don’t, the whole thing explodes into an unintelligible spaghetti monster. I’d rather deal with C++ template errors than anything touching webpack.

      2. 10

        Agree

        Reminds me of what @david_chisnall said a few weeks ago:

        “Why does everyone hate CMake so much? I find it far easier to understand than Makefiles and automake.”

        Why does everyone hate being punched in the face? I find it far more pleasant than being ritually disemboweled.

        https://lobste.rs/s/fdcpy3/fish_shell_be_rewritten_rust#c_bpiwtu

        To be fair, we’re all talking about C/C++ builds, and the OP isn’t. However I believe Ninja would also be better for web builds, as my other comment here explains. (You’ll be reinventing stuff that is “canned” in typical JS tools, but you’re also doing that with Make.)

        1. 13

          Makefiles get a LOT of guilt-by-association from autotools. For years I avoided learning about them because I didn’t want to get sucked into a black hole of insanity, but it turns out that if you use Make on its own (not for C or C++), it’s great!

          1. 4

            The FreeBSD build system is pure bmake, no autotools or anything else. One time, I wanted to rewrite, in C++, a file that the yacc output included, which required changing the rule that compiled the yacc output so that it would compile as C++. After half a day of trying, I gave up.

            GNUstep Make uses some autotools stuff when you install it but is then a pure gmake environment. I suffered with it for years because it makes a bunch of assumptions that are never quite what I want and is a huge pain to use for anything that isn’t exactly what it was intended for.

            I have used large build systems in pure make, in various dialects, and they have always been the ones where I have had to deal with the most fragility (impossible to change, parallel builds subtly broken, and so on). CMake and friends are bad, but I’ve never gone beyond the level of mild dislike with a CMake build system for a project with thousands of build steps. I’ve gone to utter frustration and despair with pure make build systems at under a tenth of that complexity.

          2. 3

            I agree with guilt-by-association, but I disagree that Make is great on its own, even for the non C/C++ use cases.

            All three Makefiles I wrote were without autotools or anything like m4/CMake/etc. It was all just plain Make, and it had big problems too.

            One issue is that I don’t want to write a new build rule for every single blog post I write, and for every new log file I get, which looks like 2023-02-01.access.log.

            So I used pattern rules with % in Make, like

            _tmp/%_meta.json : %.md
                    ./build.sh split-doc $< $(patsubst %_meta.json,%,$@)
            _tmp/%_content.md : %.md
                    ./build.sh split-doc $< $(patsubst %_content.md,%,$@)
            _tmp/%_body.html: _tmp/%_content.md
                    ./build.sh snip-and-markdown $< $@
            _site/%.html: _tmp/%_meta.json _tmp/%_body.html
                    ./blog.py header-footer $^ > $@
            

            Turns out this feature interacts very poorly with Make’s other features. Make is a crappy programming language with only one level of looping. I spent forever fixing bugs in stuff like this, and it’s probably still not right.

            The alternative is I just write a simple for loop in Python that generates Ninja, and I’m done in 5 minutes.

            I can even write NESTED loops! Which I do for (gcc, clang) x (dbg, release, asan). Build variants are trivial with this code gen pattern, but tortured in pure Make.
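
            Concretely, here is a minimal sketch of that kind of generator (the source list, output layout, and flags below are made up for illustration, not taken from a real project):

            #!/usr/bin/env python3
            # Sketch: generate build.ninja for every (compiler, variant) pair.
            # Hypothetical inputs; adjust SOURCES and the flags to taste.

            SOURCES = ["main.c", "util.c"]
            COMPILERS = ["gcc", "clang"]
            VARIANTS = {
                "dbg":     "-O0 -g",
                "release": "-O2",
                "asan":    "-O1 -g -fsanitize=address",
            }

            # Two generic rules; per-target variables fill in $cc and $cflags.
            lines = [
                "rule compile",
                "  command = $cc $cflags -c $in -o $out",
                "rule link",
                "  command = $cc $cflags $in -o $out",
                "",
            ]

            # The nested loops that are trivial here but tortured in pure Make.
            for cc in COMPILERS:
                for variant, cflags in VARIANTS.items():
                    outdir = f"_build/{cc}-{variant}"
                    objs = []
                    for src in SOURCES:
                        obj = f"{outdir}/{src.removesuffix('.c')}.o"
                        objs.append(obj)
                        lines += [f"build {obj}: compile {src}",
                                  f"  cc = {cc}", f"  cflags = {cflags}"]
                    lines += [f"build {outdir}/app: link {' '.join(objs)}",
                              f"  cc = {cc}", f"  cflags = {cflags}"]

            with open("build.ninja", "w") as f:
                f.write("\n".join(lines) + "\n")

            Rerun that script whenever the file list changes; then ninja (or ninja _build/clang-asan/app) builds any of the six variants incrementally and in parallel.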


            You might ask: why not generate Make from Python? You could, and that’s essentially what CMake did, what the Android platform did, and what many other build systems did too.

            But CMake now generates Ninja, and so does the Android platform (and Chrome too).

            Ninja basically has all the parts of Make that you would generate – that is its purpose.

            It doesn’t have all the gunk of implicit rules.

            One way to see this is that your 5-line Makefile is actually 1305 lines long, your performance suffers because of it (extra stat() calls), and you have to debug it occasionally:

            $ make --print-data-base | wc -l
            ...
            1300
            

            Another way to see this is by strace:

            $ strace -e stat ninja
            ninja: no work to do.
            +++ exited with 0 +++
            
            $ strace -e stat make
            stat("/dev/pts/11", {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 11), ...}) = 0
            stat("/dev/pts/11", {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 11), ...}) = 0
            stat("/usr/include", {st_mode=S_IFDIR|0755, st_size=20480, ...}) = 0
            stat("/usr/gnu/include", 0x7ffd60f40e60) = -1 ENOENT (No such file or directory)
            stat("/usr/local/include", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
            stat("/usr/include", {st_mode=S_IFDIR|0755, st_size=20480, ...}) = 0
            stat(".", {st_mode=S_IFDIR|0775, st_size=4096, ...}) = 0
            stat("RCS", 0x7ffd60f40e00)             = -1 ENOENT (No such file or directory)
            stat("SCCS", 0x7ffd60f40e00)            = -1 ENOENT (No such file or directory)
            stat("GNUmakefile", 0x7ffd60f3ed40)     = -1 ENOENT (No such file or directory)
            stat("makefile", 0x7ffd60f3ed40)        = -1 ENOENT (No such file or directory)
            stat("Makefile", 0x7ffd60f3ed40)        = -1 ENOENT (No such file or directory)
            make: *** No targets specified and no makefile found.  Stop.
            +++ exited with 2 +++
            
            1. 3

              OK, but I don’t understand why you did it that way. I use Make for my blog, and I don’t write a new build rule for every blog post I make; none of that stuff is necessary for me.

              I mean, yeah, don’t use it for something it’s not suited for, but I can’t tell from your explanation why your blog is badly suited for it and mine isn’t. It seems like you have some unspoken requirement beyond “build a blog” that is forcing you to complicate things, but without knowing what it is, I can’t comment further.

              Another way to see this is by strace

              I author my blog on a thinkpad from 2008. Running make takes under a hundred milliseconds. Why would I care if ninja is faster?

              1. 1

                How do you use Make for your blog without pattern rules? I’d be interested to see what it looks like.

                My blog still uses Make, so I’m not that surprised if other people use it successfully … certainly it seems better than non-incremental and non-parallel “static site generators”, which seem to be written in Go because Ruby is too slow (???)

                I just really wish I had used Ninja from the beginning.

                That isn’t the only problem I ran into. Looking at my Makefile, I also have .SECONDARY, which I think fixed a bug (a wrong default). And of course many people forget .PHONY.

                The tagging and TOC were a bit hard to get right, IIRC.

                Also, I seem to be scared to actually build the blog in parallel, though I don’t know if that fear is justified :) Make doesn’t help you get your dependencies right.

                Ninja doesn’t either, but in practice, since it builds in parallel by default, the builds seem to be more correct. (I want to add some lightweight sandboxing to my Ninja wrapper to fix this)

                1. 1

                  I didn’t mean to say I didn’t use pattern rules; just that I didn’t need any weird rules.

                  I’ve been using this 15-line Makefile for the last 5 years:

                  LATEST=199
                  SRC := $(shell ls *.m4 | grep -v feed.m4)
                  OUTPUTS := $(patsubst %.m4,out/%.html,$(SRC))
                  
                  all: $(OUTPUTS) out/atom.xml out/style.css
                  
                  out/list.html: Makefile $(SRC)
                  out/index.html: Makefile $(LATEST).m4
                  out/%.html: %.m4 header.html footer.html ; m4 -D__latest=$(LATEST) $< > $@
                  out/atom.xml: feed.m4 $(LATEST).m4 Makefile ; m4 -D__latest=$(LATEST) $< > $@
                  out/style.css: static/*.css ; cat $^ > $@
                  out/assets/%: static/assets/%; cp $^ $@
                  
                  clean: ; rm out/*
                  
                  watch: ; echo $(SRC) | tr " " "\n" | entr make
                  server: all ; cd out; python3 -m http.server 3001
                  
                  upload: all; rsync -azPL out/ p:technomancy.us/new/
                  
                  .PHONY: all prepublish clean watch server upload
                  

                  My table of contents generation is also terrible, but that’s because m4 is bad, not because Make is bad.

                  On the other hand, having admitted to willingly using m4 I guess I’ve basically lost whatever credibility I had, so let me clarify that I don’t actually endorse m4, but I do use it, and it’s bad in ways that don’t actively cause problems for me.

      3. 6

        Why? I find makefiles without implicit rules to be literally the only sane mechanism for automating builds.

      4. 4

        what would that even look like?

        1. 2

          Would you like to see?

          1. 3

            I’m afraid to see what a build graph made out of intestines would look like lol

      5. 2

        Every build system I’ve encountered eventually feels like that kind of intestinal distress. Make is just the one where I know that if I take a little antacid beforehand, I’ll be OK. Rake is a really good experience, too.

    5. 3

      My home lab’s DNS and VPN setup decided to explode, so I imagine a good chunk of my weekend will be spent mopping that mess up. Less technology-related, but I also need to get ready for Thanksgiving this week.

    6. 3

      Work: I, an embedded developer, try to fight with AWS and Terraform to stand up the MQTT ingestion, API, backend, and frontend infrastructure for a new product.

      Personal: I have no idea. Maybe work on my homelab some.

    7. 3

      It’s been a busy few weeks at work and personally. I’m trying to have a nice relaxing weekend. I might write a C++ implementation of that transit language comparison repo posted here a while ago since I’ve been missing just writing code.

    8. 14

      the future needs to start with this coming to an end:

      the public key database SecureBoot maintains (which is under control from Microsoft)

      1. 12

        Microsoft:

        • controls the SecureBoot key database
        • owns GitHub, the world’s largest open source software forge
        • owns VS Code, a very popular “open source” editor (but see https://lobste.rs/s/ka3anc/visual_studio_code_is_designed_fracture)
        • has introduced Linux compatibility into Windows through WSL
        • helped push EME through the W3C, baking proprietary DRM into the Web and ensuring that non-“mainstream” OSs (e.g. FreeBSD, Plan 9) get left out

        Perhaps this is all coincidence, but it’s starting to feel like a plan is coming together. Very A-Team, only with Nadella chewing on the cigar.

        1. 3

          I’d add WSL2 to your list of interesting Microsoft moves. The fact that it allows Docker to run natively on Windows and lets VS Code use Linux tools feels like classic EEE.

        2. 1

          EME stands for “Encrypted Media Extensions”, for those who don’t know.

      2. 6

        For what it’s worth, Microsoft’s own requirements for SecureBoot say that the owner must be able to add their own root keys to the firmware’s truststore. If you wish, you can remove all Microsoft keys from the firmware truststore and replace them with your own.

        The problem is that now you need to manage that root certificate and keep it secure. That’s not trivial for individuals.

        1. 3

          I’d love to see a good F/OSS solution to this that makes it easy for an org to not just blindly trust Red Hat or Canonical (for example), but to trust only the specifically blessed kernel and boot environment settings that it wants.

          1. 1

            Me too, but it is a multifaceted problem. Not only do you have to maintain your own root certificate, but you also need remote attestation to ensure that everyone is in fact running the blessed kernel and boot environment. And honestly, I think the remote attestation part is the harder one.

            1. 1

              That’s less vital as a starting point. If you install the machines with full-disk encryption where the key is tied to the PCR that contains your org’s certificate, then anyone who boots an unauthorized kernel will lose access to the disk. If your threat model is an employee who wants to run a custom Linux distro that doesn’t meet your security policy, that’s a problem. If your threat model is someone stealing an employee’s laptop and using an old version of some signed Linux kernel with a known vulnerability to boot, unlock the disk, and then break in and steal your org’s data, then it’s fine. The second use case is the one that I care more about.

        2. 2

          Implementations sometimes have trouble with allowing users to define their own trust store.

          1. 2

            I have not seen a case where you actually can’t. It might be hard to find (there’s no standardized naming whatsoever), and it might break some drivers built into devices, but in my experience the ability is always there.

        3. 1

          And yet there’s hardware made by Microsoft themselves that doesn’t comply with this requirement. For example, as far as I can tell, the Surface Pro 8 allows you to disable Secure Boot but doesn’t let you add your own keys: you can only use Microsoft’s keys or disable Secure Boot entirely.

          1. 1

            AFAICT, you can use your own keys, but you still need to use ones from Microsoft too (ref). That makes some sense, since SecureBoot is also used for device drivers, for which Microsoft is responsible on a Surface Pro 8.

            1. 2

              I’ve read that page and I don’t see how it says you can use your own keys. See the dialog: https://learn.microsoft.com/en-us/surface/images/manage-surface-uefi-fig3.png

              You can disable Secure Boot entirely, or use Microsoft’s keys. A proper implementation of Secure Boot should allow you to enter Setup Mode and enroll your own PK and KEKs.

              Secure Boot isn’t used for device drivers, unless you mean UEFI option ROMs. I’m not sure option ROMs are relevant here, as this is an Intel platform with integrated graphics and I doubt any are needed. (Even if they were, that doesn’t mean you should be forced to trust the key that signs them; UEFI can also whitelist hashes of specific binaries, IIRC, so that’s always an option.)

    9. 27

      The assertion in that email from CloudFlare support that “.txt files aren’t web content” is inane. Anything served over HTTP is web content. I hope that’s just one person’s opinion and not CloudFlare policy.

      1. 4

        That struck me as weird too. To me it’s hard to define “web content”; I think the point of that bit of the ToS is to prevent using Cloudflare as file storage. But in this case, the .txt is web content, not because it’s served over HTTP but because it is a downloadable artifact from a website, just like a .iso or .exe would be.

        As an aside, I don’t see where in the ToS they call out web content restrictions at all, but I only took a very quick look.