Ah, synchronicity! What surfaces this now?
Just six hours ago (and about as long past bedtime), I was learning from sparse Reddit threads where to get OVMF in GNU Guix. Like the author, I've been drawn towards increasing amounts of both immutability and personalization (i.e. exoticism), one locus being Erase Your Darlings. Just looking at their config I can feel the hours of persistent, stubborn wrangling.
Eventually, many hours later, […]
I had my own run-ins with EFI while setting up Secure Boot.
Something (some daemon, utility, firmware, or package-script) keeps flagging efivars as immutable, and the extended attribute has escaped me, for several hours, on multiple occasions.
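For what it's worth, the flag in question is the one `lsattr` shows as `i`. A minimal sketch — the attribute string and the efivar path below are illustrative stand-ins, not read from real firmware:

```shell
# Sketch only: an 'i' in an lsattr attribute string marks a file immutable.
# Both the string and the efivar path here are illustrative stand-ins.
attrs='----i---------e-------'
var=/sys/firmware/efi/efivars/BootOrder-8be4df61-93ca-11d2-aa0d-00e098032b8c
case $attrs in
  *i*) echo "immutable: clear with 'chattr -i $var' before editing" ;;
  *)   echo "writable" ;;
esac
```

On a real system you would run `lsattr -d` on the efivar file first, then `chattr -i` to clear the flag before writing.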
What I wish I had tried earlier was to just boot into the EFI shell, because you can edit EFI vars much faster there.
Does somebody expect me to remember those magic [GRUB] spells? What [should you] do next to boot into Linux?
I hardly remember commands for the GRUB & EFI shells. For a long time I didn't know you could scroll in EFI shells, and the output of help would scroll off-screen. Using my phone sucked, a VM wasn't always sufficient (or simple: see TFA), and I really appreciate my one KVM.
Although it's later in the boot process, I want to give props to the Guix team here: their Guile Scheme initramfs scripts error out into a REPL, where you can import all the same (gnu build [...]) utilities the script was using. Don't think you can scroll, but it's refreshing to have all those tools and a backtrace at hand as soon as something goes wrong.
I don't even want to think how many hours I lost because of this. My actual problem was more nuanced […]
Rant: Guix doesn't yet support key-files for LUKS partitions, so I made my own mapped-device-kind that does. Other code filters mapped-devices for LUKS partitions by checking their types, and proceeds to miss mine. To avoid forking Guix (which I guess I could do) or subverting its device management entirely, I had to mutate the existing type to add special cases for my devices.
Another pain-point: Having achieved darling erasure with BTRFS, I'm now pursuing root on ZFS, which has a… tumultuous history with Guix. I've done all the necessary wrangling to mount the datasets in the initramfs, but Guix really wants a path to a mountable root block-device that it can watch for. I don't want to write (and maintain) my own fork of the initramfs, or stoop to putting root on a ZVOL just to satisfy that requirement, so I'm working on whatever cheap hacks are necessary to get around the existing code.
Which is somehow to say: it's always like this. I can't suitably articulate right now why I persist in having everything just so, but I learn a heck of a lot about both the underlying systems and the towers built atop them by being so stubborn. Software infrastructure was how I got into programming in the first place, and will always be a blessing and a curse. Heaven help those who rely on my homelab.
I find this post valuable informationally and personally. Thanks joonas for taking the time to write it, and to bsandro for sharing it.
Edit: Ugh, going to need to mutate / advise more functions.
"I didn't identify with it for a long time; not until everyone else had been getting an earful for years. I was just trying to get my computer to work, and guess I picked it up along the way. Couldn't get everything just right without a lil' scripting. I thought, does this (i.e. Bash) really count? How do people use their computers (ahem, Linux) without programming? But I'm well past any plausible stage of denial now :p"
A few weeks ago on macOS I tried installing Nix. I saw it created its own volume. "Oh gosh" I thought, "is this going to permanently allocate some part of my small 250GB SSD to itself?" Imagine my surprise when I looked at the volume manager and saw both the main partition & Nix volume had the same max size of 250GB. It was at that moment I realized filesystems had in fact advanced since the early 2000s and statically-allocated exclusive contiguous partitions weren't actually the way things had to be done anymore. Logical volumes can coexist in the same partition, using only however much space they need to use! This led me to discover the FOSS filesystem that has this feature (and is included in the Linux kernel), BTRFS.
I asked on Unix SE about installing different distros as subvolumes of a single BTRFS partition so they only take up as much space as they actually need, and you can do it but a lot of distro kernel upgrade workflows don't account for it (as the author mentions, Windows updates might also have trouble with this). So I ended up using logical volumes instead, which are very well-supported and make partitions easy to manually grow/shrink & ensure you don't have to worry about contiguous or empty space. So that got me most of the way there. Still, I look forward to a future where you can just set your entire disk (or multiple disks, using logical volumes) as one giant BTRFS partition and install everything into subvolumes so we never again have to worry about partition juggling.
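For concreteness, a sketch of that single-partition layout; the device and subvolume names are made up, and the destructive commands are left as comments:

```shell
# Illustrative only: one btrfs filesystem, one subvolume per distro root.
# Destructive commands stay commented; device/subvolume names are made up.
#   mkfs.btrfs /dev/sda2
#   mount /dev/sda2 /mnt
#   btrfs subvolume create /mnt/@debian
#   btrfs subvolume create /mnt/@fedora
# Each install then mounts only its own subvolume as /, e.g. an fstab entry:
printf '%s\n' 'UUID=<fs-uuid> /  btrfs  subvol=@debian,compress=zstd  0 0'
```

Each distro sees only its own `subvol=` as root, while free space is shared across the whole filesystem.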
The boot menu of Quibble looks as if you took GRUB, made it HiDPI-aware, and added nicer fonts.
Underrated feature; I love when boot code acknowledges that monitors have been manufactured after 1990. I use systemd-boot, which I don't think has this.
The 13-in-1 multiboot image for rapid distro-hopping on the PinePhone is such a BTRFS partition, with a subvolume for each distro's root and (IIRC) a shared kernel and initramfs.
Neat! Love the return of a painted spritely character (the classic site had so much charm), and this debugger puts others I've endured to shame.
As an aside, E keeps popping up as a spring of inspirations, à la… I'm blanking on it, that influential hypothetical language; I'll comment back when it comes to me. Let's go with T for now.
Glad you liked the painted Spritely characters. They've been making their way back slowly into the new site, but yes, not as front and center as before. But I too really enjoy them. :)
E is definitely cool, and has been a huge influence on Spritely, as is probably obvious. It's funny you should mention T: yes, the T scheme/lisp indirectly has had a big influence on Spritely also, because Jonathan Rees worked on it, and it both heavily influenced Mark Miller and company's approach towards treating lexical scope as the foundation for ocaps (fun fact: Jonathan A. Rees and Mark S. Miller went to college together at Yale, and years later went on to work on ocaps independently and came to many of the same technical conclusions without talking to each other!), and also was the predecessor to Jonathan A. Rees' later Scheme, Scheme48. Jonathan Rees' "security kernel" dissertation, which showed that a pure scheme could be an ocap-safe programming language environment, directly enabled Goblins to happen. (Speaking of weird short programming language names, the code for that security kernel, W7, is available, but few people know about it. It's amazing how compact and beautiful it is, because Scheme48 already enabled it to be so.)
It occurred to me while reading this that equality saturation (https://egraphs-good.github.io/) might be the missing piece that allows generalized macros to compose. A macro implemented with equality saturation could see every expansion step of its neighbors and rewrite based on the specific one(s) itâs looking for.
Whoa, that's a really interesting idea! I'm not really sure how you'd decide on the "optimal" rewrite - I guess macros would include, in their expansions, how "specific" that expansion is? Or something like that? Definitely something to think about.
When writing macros within Scheme's syntax-case model, they're expressed as case-style pattern-matchers over syntax-objects (which themselves appear ideal for translation into e-nodes). In that context, I would posit that optimal extractions from saturated graphs are those which fulfill the earliest possible matches. Hence a match on a left-most set literal would 1) take precedence over the no-match case, 2) could be applied after the test macro expands, and 3) might even propagate transformations of sub-nodes into equivalent expansions where those literals have been eliminated (or not yet expanded into being).
Implementing such a system would be difficult (let alone in Janet, without an existing syntax-case to fork), and although I think it addresses the settable example as given, it's a rough model. There are still ambiguous cases where one would presumably fall back into depth-first expansion.
The biggest problem is with that 3rd part, which is kinda out-there. Macarons are effectively expansions of their parent expressions, so they can't actually contribute transformations of themselves or their sibling arguments that are separable from those parent expansions. Putting that aside (maybe by annotating with source syntax objects / equiv. e-nodes when preserving the transformation would be valid), it would feel kinda cursed to allow a match on a literal which might only exist in superposition (don't let reason stop you :p).
I guess 2/3 with the fall-back caveat ain't too bad, but disclaimer: this is way over my head; I hardly grok nondeterminism and look at this with the same awestruck unfamiliarity as µKanren, which I also don't know nothin' about.
I've had an idea kicking around my brain for a while now of a way to implement a more powerful and flexible macro system than defmacro for languages with lots of parentheses, but I've been too busy working on a book to actually try to implement it. But the book is out now! So I'm going to try to mock it up in Janet and see how it feels in practice, and then (hopefully) write a blog post if it goes well.
That sounds awesome! I read a lot of the literature on syntax-case last month[1], have been loving reading Janet for Mortals in my downtime, and haven't reached your chapter on macros yet, but I think it's a particularly interesting language for prototyping your idea because of e.g. the behavior you discovered in "Making a Game Pt. 3" (which isn't necessarily portable). I'd be interested in any ideas you have in this area (even if they're not merit-ful or focused on hygiene), and will be looking forward to the post c:
[1]: Not all of which was correct: there is a false ambiguity on the surface, and true undefined "implementation-dependent" behavior deep in the bowels of the spec.
Hey thanks! Glad you're liking the book. Here's a quick sketch of my macro idea: https://github.com/ianthehenry/macaroni
I canât find any prior art for this but I have no idea what to search for or call it.
Spent some time considering prior art, and the closest I could get was what Guile calls Variable Transformers.
In e.g. Common Lisp and Elisp, Generalized Variables can be extended by defining a dedicated macro which set! or setf finds and invokes (via a global map, symbol properties, etc).
In Guile, you can create a macro which pattern-matches on the semantics of its call-site: a plain reference, a call, or a set! form. Because it needs to be used as an identifier it can't define set!-able sexps like Common Lisp or Elisp would allow, but neither can macaroni. It's not a first-class object, short of being a normal function under the hood. Finally, it's handicapped by only being passed its parent's form in the third situation, essentially still at set!'s discretion (not sure about the exact mechanism in use). Definitely the only other example I could find of an identifier-bound macro receiving the form it is invoked within.
Stayed up too late to think any more, but I love the idea; that's awesome.
Hey thanks! I had seen something very similar to this in Racket before - I guess it's a general Scheme thing.
You actually can make settable sexps with the macaroni approach, by returning a first-class macaron that checks the context it appears in - https://github.com/ianthehenry/macaroni/blob/master/test/settable.janet
(The downside explained there tells me that I should spend some more time thinking about controlling the order of expansion… which would also make it easier to define infix operators with custom precedence…)
Ooo, I can see how I'd have missed that on the way out, nice! Found Racket's Assignment Transformers, and they (bless the docs!) explain that they are indeed just sugar over a variant of the "set! + symbol-props" approach. I wonder if this approach (i.e. returning an anonymous or gensym'ed identifier macro) could be retrofitted into that model, but it feels clear to me that macarons more cleanly solve and generalize what has always been a messy situation in Lisp implementations.
As another exploratory question, are we limited (in practice or theory) to the immediate context of the parent form? Aside from that, dispatching on grandparent or cousin forms feels kinda cursed. I wonder what use cases pull sibling sexps into play.
Funny how having to dispatch on the set literal kinda resembles the limitations of a system invoked by set itself, but it's progress! Re: expansion order, my gut feeling is that they ought to be compatible with other extensions that don't explicitly macro-expand their arguments (i.e. until the set form is established), but I haven't really dug into how Janet / this all works and need more coffee first.
Theoretically you can rewrite forms anywhere, but I'm having a hard time coming up with a situation where you'd want to. But here's a nonsensical "grandparent" macaron:
(defmacaron sploot [& lefts1] [& rights1]
(macaron [& lefts2] [& rights2]
(macaron [& lefts3] [& rights3]
~(,;lefts1 ,;lefts2 ,;lefts3 ,;rights1 ,;rights2 ,;rights3))))
(test-macaron (foo (bar (sploot tab) 1) 2)
(bar foo tab 1 2))
You could also write a recursive macaron that rewrites an arbitrarily distant ancestor.
Here's an example of a potentially interesting "cousin" macaron:
(defmacaron ditto [& lefts] [& rights]
(def index (length lefts))
(macaron [& parent-lefts] [& parent-rights]
(def previous-sibling (last parent-lefts))
(def referent (in previous-sibling index))
[;parent-lefts [;lefts referent ;rights] ;parent-rights]))
(test-macaron
(do
(print "hello")
(print ditto))
(do
(print "hello")
(print "hello")))
Is that useful? I dunno, maybe? It's like !$ in shell, but copies the element at the exact previous position.
Actually !$ is even easier to write:
(defmacaron !$ [& lefts] [& rights]
(macaron [& parent-lefts] [& parent-rights]
[;parent-lefts [;lefts (last (last parent-lefts)) ;rights] ;parent-rights]))
(test-macaron
(do
(print "hello" "there")
(print !$))
(do
(print "hello" "there")
(print "there")))
You could also rewrite both expressions into a single gensym'd let expression so that the argument only actually gets evaluated once.
(defn drop-last [list]
(take (- (length list) 1) list))
(defmacaron !$ [& lefts] [& rights]
(macaron [& parent-lefts] [& parent-rights]
(def previous-sibling (last parent-lefts))
(def up-to-previous-sibling (drop-last parent-lefts))
(def referent (last previous-sibling))
(with-syms [$x]
~(,;up-to-previous-sibling
(let [,$x ,referent]
(,;(drop-last previous-sibling) ,$x)
(,;lefts ,$x ,;rights))
,;parent-rights))))
(test-macaron
(do
(print "hello" "there")
(print !$))
(do
(let
[<1> "there"]
(print "hello" <1>)
(print <1>))))
I think this is pretty interesting? Maybe possibly even useful, to add debugging output or something?
At the repl Janet assigns _ to the result of the previous expression - you could do that in arbitrary code, implicitly surrounding the previous expression with (def _ ...) if you use _ in an expression. Hmm. Not super useful…
Funny how having to dispatch on the set literal kinda resembles the limitations of a system invoked by set itself, but it's progress
Yeah, but it allows you to "extend" the behavior of set without set having to know anything about your custom extension (or even knowing that it is itself extensible!). But the evaluation order is problematic. Hmm.
I find that the header file problem is one that tup solves incredibly elegantly. It intercepts filesystem calls, and makes any rule depend on all the files that the subprocess accesses. Solves headers in an incredibly generic way, and works without requiring hacks like -MMD.
Not sure if the author is here, but if you are, any plans to support something like that?
It intercepts filesystem calls, and makes any rule depend on all the files that the subprocess accesses. Solves headers in an incredibly generic way, and works without requiring hacks like -MMD.
So the "proper" way is to intercept the filesystem calls in a non-portable manner and depend on anything the program opens without regard for whether it affects the output or not (like, say, translations of messages for diagnostics). While explicitly asking the preprocessor for an accurate list of headers that it reads is a hack?
The problem with the second option is that it isnât portable between languages or even compilers. Sure, both GCC and clang implement it, but there isnât really a standard output format other than a makefile, which isnât really ideal if you want to use anything that isnât make.
It's an unfortunate format, but it's set in stone by now, and won't break. It has become a de facto narrow waist with at least 2 emitters (GCC and Clang) and at least 2 consumers (GNU Make and Ninja).
Basically it's an economic fact that this format will persist, and it certainly works. I never liked doing anything with it in GNU make because it composes poorly with other Make features, but in Ninja it's just fine. I'm sure there are many other non-Make systems that parse it by now too.
That's a fair point; I also didn't know Ninja supported it, but it makes sense. I wonder if other languages support something similar to allow for this kind of thing, though many modern languages just sidestep the issue altogether by making the compiler take care of incremental compilation.
Most tools could probably read the -M output format and understand it quite easily. It doesn't use most of what could show up in a Makefile - it only uses single-line "target: source1 source2" rules with no commands, no variables, etc. I imagine if someone wanted to come up with a universal format, it wouldn't be far off from what's already there.
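To make that concrete, here's a sketch of what a `-MMD`-style depfile line looks like and how little parsing it needs; the file names are invented, and real output may wrap long rules with backslash continuations:

```shell
# A hypothetical single-line depfile, as gcc/clang -MMD might emit for main.c:
depfile='main.o: main.c foo.h bar.h'
# Parsing needs no make machinery: split on ': ' to separate target from deps.
target=${depfile%%:*}   # everything before the colon -> "main.o"
deps=${depfile#*: }     # everything after "…: "      -> prerequisite list
echo "$target depends on: $deps"
```

A consumer that only handles this subset (no recipes, no variables) already covers what the compilers emit.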
But… don't you want to update your program when diagnostic messages are changed? The FUSE mount doesn't grab e.g. library and system locales from outside the project root, so it only affects the resources of the project being built[1]. Heaven forbid you're bisecting a branch for a change that is, for reasonable or cursed reasons alike, descended from one of those files…
For those interested, I've pitched tup and mused about this in a previous comment here.
[1]: Provided you don't vendor all your dependencies into the repo, which I guess applies to node_modules! Idk off the top of my head if there's a way to exclude a subdirectory for this specific situation, or whether symlinks would work for controlling the mechanism.
Edit: Oh, it's u/borisk again! I really appreciated your response last time this came up and hope you're doin' great c:
Edit 2: Oh, and you work on a build system! I'll check it out sometime ^u^
I originally started Knit with the intention of supporting automatic dependency discovery using ptrace. I experimented with this with a tool called xkvt, which uses ptrace to run a list of commands and can generate a Knitfile that expresses the dependencies. However, I think this method is unfortunately more of a hack compared to -MMD because ptrace is non-portable (not well supported/documented on macOS and non-existent on Windows) and has a lot of complexity for tracing multithreaded processes. A Fuse-based approach like the one used by Tup is similar (maybe more reliable), but requires Fuse (a kernel extension), and also has the negative that automatic dependency discovery can sometimes include dependencies that you don't really want. When I tried to use Tup for a Chisel project I ran into problems because I was invoking the Scala build tool, which generated a bunch of temporary files that Tup then required to be explicitly listed.
I think if Knit ever has decent support for an automatic dependency approach, it would be via a separate tool or extension rather than directly baked into Knit by default.
Cool! I've always thought about running a dynamic site based on Haunt, which doesn't quite fit into this subset of Scheme, but the example has a very similar structure. Love the idea, and the sleek deployment method; I haven't got similar ergonomics for my own deploys yet…
Haven't actually posted anything on my site (so I haven't crafted CSS for it or anything), but I've collected a short list of homages and related posts (including a link to commentary on inspirations) here:
https://www.illucid.net/posts/homages-to-aphyrs-technical-interview-series.html
Left out a repo that translates the same Queen's post into Rust because it wasn't in narrative form, which felt important to me at the time, but idk, that's cool too and available here:
https://github.com/insou22/typing-the-technical-interview-rust
The format is fun; I like how people adapt the themes from Aphyr's original blogs. It's a bit of a colorful show and tell without being too dry about the subject matter. Props for collecting all these into a repository!
Without paying too much attention to it, I chalked the recent arguments up (as u/scraps does) to the implicit / missing context of Casey's e.g. game-dev background (where most code really is performance-critical).
This conversation pulls the argument out of that framework, recognizing that there is a place in practically all software for a performance-aware approach, while tactfully digging back at an equally dogmatic dismissal of other concerns (those which, as Casey may justifiably say, "are beyond the scope of this course").
Loving said course, and glad to see these two tribal icons able to exchange ideas and reconcile these tensions into conscious tradeoffs for their audiences (and those who will inherit future tribal knowledge) to consider.
I often think back to this moment in time when my computer felt just right, everything under my control and ./configure'd just for my use-cases; with Gentoo (DWM). It probably didn't feel "finished" at the time (does it ever?), but I had a lot of time to spend on it then and the local maximum stayed with me. I learned real problem-solving skills, and whatever field or eldritch nature I encounter software "issues" in today, it's those foundational experiences which equipped me with the confidence to grope into the dark, an intuition for where to find light, and a tactile familiarity with the properties of the lens through which we focus, particularly when mapping a system's behavior. I'm still learning how best to solve problems once I understand them, and wouldn't be here if it weren't for the tools that embracing Gentoo gave me. I occasionally donated when I was daily-driving it and, aware that the unusually high spend this year was intentional, will donate again soon. Even when I don't use the distro, the wiki has been priceless, and is my go-to.
On a more reified note, Portage's USE flags are great and I hope to see functional package managers (not sure about nixpkgs?) taking wider advantage of a restricted dictionary of keyword arguments to allow users to tailor their software and its dependencies (incl. optional deps!). Such flags do exist, but do not have the same level of standardized vocabularies, application and discovery mechanisms, flag-based dependency-resolution, and of course, widespread implementation. I can write manual package transformations as an end user, but that introduces a dependency on the package's build phases and is far more verbose. While both systems theoretically empower end users, Gentoo has nurtured a packaging model and culture which actively supports those powers.
Well, as supported as combinatorial explosions can get; I didn't learn those skills fixing nothing :p. Gentoo will always have a place in my heart, both Portage and the community. Huge shoutout to the GentooLTO project.
Not quite through, but I was expecting a quicker read and was pleasantly surprised to find something so thorough! Won't have time to finish it tonight :p
Some thoughts, like, "The Final Word" might be overstating things:
I point these out not to downplay ZFS's own lovable quirks and the revolutionary impact they've had (notably on the lineage of these very systems), but to highlight these and future projects. It's too soon to underscore "The Final Word". That said, ZFS is still so worthy of our attention and appreciation! The author has clearly built a fantastic model and understanding of how the system works; I learned much more than I was ready for c:
One particular section caught my eye:
If you have a 5TiB pool, nothing will stop you from creating 10x 1TiB sparse zvols. Obviously, once all those zvols are half full, the underlying pool will be totally full. If the clients connected to the zvols try to write more data (which they might very well do because they think their storage is only half full) it will cause critical errors on the TrueNAS side and will most likely lead to extensive pool corruption. Even if you don't over-commit the pool (i.e., 5TiB pool with 5x 1TiB zvols), snapshots can push you above the 100% full mark.
I thought to myself, "ZFS wouldn't hit me with a footgun like that, it knows data-loss is code RED". While "extensive pool corruption" might be overzealous, it is a sticky situation. The clients in this case are the filesystems populating the zvols, which are prepared to run out of space at some point, but not for the disk to fail out from under them. Snapshots do provide protection/recovery-paths from this, but also lower the "over-provisioning threshold". This isn't the sort of pool corruption that made me concerned; I couldn't find any evidence of that. It would obviously still be a disruptive failure of monitoring in production, and might best be avoided precautionarily by under-provisioning, which is a shame, but then even thick provisioning has a messy relationship with snapshots in the following section.
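The quoted scenario is just arithmetic; a quick sketch with the numbers from the quote:

```shell
# Over-commit arithmetic from the quoted scenario: ten sparse 1TiB zvols
# on a 5TiB pool exhaust the pool once each zvol is only half full.
pool_tib=5
zvol_count=10
zvol_tib=1
committed=$((zvol_count * zvol_tib))   # space promised to clients: 10 TiB
used=$((committed / 2))                # actual usage at 50% full: 5 TiB
echo "committed=${committed}TiB used=${used}TiB pool=${pool_tib}TiB"
[ "$used" -ge "$pool_tib" ] && echo "pool exhausted while clients think they are half empty"
```

The pool runs out exactly when every client still believes it has half its space left, which is why the writes fail unexpectedly on the client side.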
I'm not sure any known system addresses this short of monitoring/vertical-integration. I guess it's great when BTRFS fills up and you can just, like, plug in a USB drive to fluidly expand available space, halt further corruption, and carry on in degraded performance gracefully. Not that BTRFS's own relationship with free space is unblemished, but this does work!
Probably a viable approach in ZFS too (single-device vdev?), but BTRFS really shines in its UI, responsiveness, and polish during exactly these sorts of migrations, which I'd find relieving in an emergency. ZFS has a lot of notorious pool-expansion pitfalls, addressed here, which I also wouldn't have to think about (even if just to dismiss them as inapplicable under the circumstances because they relate to vdevs). It matters that I think ZFS can do it, and know that BTRFS can; its flexibility is reassuring. (Again, not a dig; I'm still going to great lengths to use ZFS everywhere; for all this I don't run butter right now :p)
Thinking about it more, this is probably because I've recreated BTRFS pools dozens of times, whereas ZFS pools are more static and recreating them is often kinda intense. It's like BTRFS is declarative, like Nix, allowing me to erase my darlings and become comfortable with a broader range of configurations by being less attached to the specific setup I have at any given time.
Live replication and sharing are both definitely missing from ZFS, though I can see how they could be added (to the FS, if not to the code). Offline deduplication is the other big omission and that's hard to add as well.
For cloud scenarios, I wish ZFS had stronger confidentiality and integrity properties. The encryption, last time I looked, left some fairly big side channels open and leaked a lot of metadata. Given that the core data structure is basically a Merkle tree, it's unfortunate that ZFS doesn't provide cryptographic integrity checks on a per-pool and per-dataset basis. For secure boot, I'd love to be able to embed my boot environment's root hash in the loader and have the kernel just check the head of the Merkle tree.
Yeah, the hubris of the "last word in filesystems" self-anointment always struck me as fairly staggering. While perhaps not exceeded so quickly and dramatically, it's a close cousin of "640K ought to be enough for anybody". Has any broad category of non-trivial software ever been declared finished, with some flawless shining embodiment never to be improved upon again? Hell, even (comparatively speaking) laughably simple things like sorting aren't solved problems.
Yeah, the hubris of the "last word in filesystems" self-anointment always struck me as fairly staggering.
It's just because the name starts with Z, so it's always alphabetically last in a list of filesystems.
Yeah, the hubris of the "last word in filesystems" self-anointment always struck me as fairly staggering. While perhaps not exceeded so quickly and dramatically, it's a close cousin of "640K ought to be enough for anybody".
Besides the name starting with Z, which @jaculabilis mentioned, I suspect it was also in reference to ZFS being able to (theoretically) store 2^137 bytes worth of data, which ought to be enough for anybody.
Because storing 2^137 bytes worth of data would necessarily require more energy than that needed to boil the oceans, according to one of the ZFS creators [1].
[1] https://hbfs.wordpress.com/2009/02/10/to-boil-the-oceans/
AFAIK the RAID5/6 write hole still exists, so that's the notorious no-go; I've always preferred RAID10 myself. Does kinda put a damper on that declarative-freedom aspect if mirrors are the only viably stable configuration, but the spirit is still there in the workflow and utilities.
Article author here, I made an account so I could reply. I appreciate the kind words, I put a lot of effort into getting this information together.
After talking to some of the ZFS devs, I'm going to clarify that section about overprovisioning your zvols. I misunderstood some technical information and it turns out it's not as bad as I made it out to be (but it's still something you want to avoid).
The claim that ZFS is the last word in file systems comes from the original developers. I added a note to that effect in the first paragraph of the article. I have more info about what (I believe) they were getting at in one of the sections towards the end of the article: https://jro.io/truenas/openzfs/#final_word
I'm obviously a huge fan of ZFS but I'll admit that it's not the best choice for every application. It's not super lean and high-performance like ext4, it doesn't (easily) scale out like Ceph, and it doesn't offer the flexible expansion and pool modification of UNRAID. Despite all that, it's a great all-round filesystem for many purposes and is far more mature than something like BTRFS. (Really, I just needed a catchy title for my article :) )
I read this, and immediately followed it with "I feel for the NetBSD community"; there, on Rubenerd's home page: https://rubenerd.com/the-beauty-of-cgi-and-simple-design-by-hales/
In which he links to Halestrom: https://halestrom.net/darksleep/blog/046_cgi/ (Originally written here! https://lobste.rs/s/pdynxz/long_death_cgi_pm#c_lw4zci)
For formal folks, there is an RFC: https://www.rfc-editor.org/rfc/rfc3875
For those of us in the lucky 10,000 who want to know more, I do believe the rabbit hole is this way; but remember that there's no better way to learn than to create! See implementations of the spec and the resources above.
I'd like to work through Let Over Lambda, a book that explains all sorts of macro shenanigans!
I've been meaning to write up some notes on my journey with Lisp this year, having read On Lisp, Let Over Lambda, and The Little Schemer. I can briefly attest that, while Graham's prose drew me in gently and The Little Schemer had concrete exercises, Let Over Lambda was my favorite read.
At first he seemed to proselytize a bit heavily (even more than PG), but his evident passion for macro-ology soon gave weight to his perspective, culminating in a broad appreciation of any language featuring those uniquely elegant philosophies which redefine solutions and problems alike. The zoo of macros was fascinating, I found his defense of anaphora compelling, and I'm still chewing on the later chapters even as I've moved on to other works.
The only factor dampening my enjoyment is that I've never used Common Lisp, writing mostly in Scheme and Elisp, the latter now feeling all the more inadequate. I can't recommend it enough to anyone who has grasped the basics of Lisp and would like to see what's possible with macros; it had an immediate impact on the ways that I use and write them.
Xmonad will go with it :( But until Wayland has color management, it's not a suitable display server for my workflows.
Off the top of my head, I presume they're referring to an ICC profile: a file, produced by a calibration procedure, that maps output colors from the OS to outputs that reproduce the intended color accurately on your specific display (be it by model online, or for your individual unit via an at-home calibration tool).
Yep. It deals with the color accuracy of your display. One usage is to normalize to the "standard" colors, so my red is the same red as your red as measured by the colorimeter. After that, another usage is to simulate how an output will look: say you were going to a printer, you can get the profile of that printer so you can work on a project with the same colors that printer can produce. I don't print stuff that often, but I do look at and reproduce designs for the web, and I have a lot more confidence discussing the design knowing we are on the same page as far as color (a code monkey only copies color from a design, but it can be a collaborative process for others).
Functional package managers don't get a mention here, but it is interesting to consider them in this article's context.
On one hand, they're much like traditional package managers, in the sense that the burden of packaging is picked up by a third party who might not know the project intimately. This sort of burden still requires a great number of volunteer hours.
On the other, they side-step many of the messier aspects of dependency management and installation complexity through their isolation of packages and outputs. In a way, one could see them as entirely ancillary to the article's focus, because they can output either their own native package objects or another format entirely (i.e. a Docker image in practice, but hypothetically, why not an AppImage or Flatpak?).
Returning to volunteer hours, I've also seen many a project that isn't packaged by a third party but instead comes with a Nix or Guix manifest or package in the source tree. Even if this article is totally right, there could still be arguments for functional systems in the packaging of flatpaks by app devs, with the added benefit of functional tooling for dev environments in day-to-day work and onboarding c:
A lot of attention has fallen on symlinking files out of a Store a la Nix. In the last thread I mentioned git-annex, whose index essentially provides this as a content-addressed store. It allows you to restructure your worktree into arbitrary hierarchies based on your tags with one-liners, as easily as if you were checking out a branch, and then to manipulate a file's tags by just moving it around in these ephemeral tag trees! It fixes broken symlinks automatically, and even comes with a file watcher that can add, commit, and push in the background.
I mentioned some issues I had with storing large numbers of files in git, but also that there are config settings to address that and broader techniques that can be applied (it'd be kinda nice to represent directories as single store items without zipping them, accepting that their contents wouldn't be tagged or indexed in the same ways). One may also like to e.g. ensure the daemon is always running and seeing everything, perhaps by pulling git-annex further down the software stack, towards the FUSE level that many existing options are operating at; I'm not sure what other benefits could be gained from that. I think git-annex offers a compelling model and existing implementation that could be leveraged and adapted towards these use-cases, and I do intend to revisit annexing my entire array's worth of storage :p
The `out` macro described is basically how Clojure's `str` works. It takes any number of forms, discards the nils, calls `toString` on the rest, and then joins them.

By relying on nil-punning, you can embed an expression and it will skip any result that evaluates to nil: `(str "Hello" (when some-val " World!"))` returns "Hello World!" when `some-val` is truthy, and "Hello" when it is not.
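For anyone without a Clojure REPL handy, here is a rough Python analogue of that nil-punning join; the function name `cstr` is made up for illustration:

```python
def cstr(*forms):
    """Join stringified forms, skipping None (a la Clojure's nil-punning `str`)."""
    return "".join(str(f) for f in forms if f is not None)

some_val = True
print(cstr("Hello", " World!" if some_val else None))  # Hello World!
some_val = False
print(cstr("Hello", " World!" if some_val else None))  # Hello
```

The conditional expression plays the role of `(when some-val " World!")`: when the test fails it yields `None`, which the join silently drops.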
Haven't used Clojure (yet!), thanks for sharing! In similar situations I often find myself pruning via `apply` and `,@`:

```scheme
(apply str `("Hello" ,@(when some-val " world!")))

;; Equivalent to:
(apply str (append '("Hello") (when some-val " world!")))

;; Of course, this only works if a no-op `when`
;; returns a value equivalent to the empty list
;; (ie. nil/false = '());
;; otherwise we need to use an `if`, which doesn't feel as elegant...
(apply str `("Hello" ,@(if some-val " world!" '())))
```

PS: In a Lisp-2 the syntax would be `(apply #'str ...)`. And, can I use backquotes in in-line monospace spans?
The suggestion to filter warnings based on reachability analysis reminds me of similar consideration given to conditional compiler warnings in a recent discussion on changing Go's for-loop syntax (an incredibly considerate discussion in general).
🤯 Very cool! I don't need it, but I'll try to find an excuse to use the idea somewhere!
FWIW, I've recently reached the next stage of my laptop's keyboard bliss when I remapped:
That is, just move the modifiers one row up (key acts as a modifier when held, and as a key when pressed). I use kanata to do the remapping.
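For a sense of what that tap/hold dual role looks like, here is a minimal, hypothetical kanata sketch using its `defsrc`/`defalias`/`deflayer` forms; the four keys, the chosen modifiers, and the 200 ms timings are my assumptions for illustration, not the commenter's actual mapping:

```lisp
;; Hypothetical sketch: tap = letter, hold = modifier.
;; Keys, modifiers, and timings are illustrative assumptions.
(defsrc
  a s d f
)

(defalias
  a (tap-hold 200 200 a lmet)  ;; tap a, hold for Super
  s (tap-hold 200 200 s lalt)  ;; tap s, hold for Alt
  d (tap-hold 200 200 d lctl)  ;; tap d, hold for Ctrl
  f (tap-hold 200 200 f lsft)  ;; tap f, hold for Shift
)

(deflayer base
  @a @s @d @f
)
```

`tap-hold` takes a tap timeout, a hold timeout, the tap action, and the hold action; shifting which physical row carries the aliases is then just a matter of editing `defsrc` and the layer.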
One thing that surprised me when playing with using normal keys as modifiers is that on some keyboards it is impossible. On some keyboards, modifier keys are electronically different from normal keys, and the keyboard won't emit key-down events if you're already holding down another normal key. I really like the idea, with Spacemacs and devil-mode, that you don't have to hold two keys at once at all; the entire interaction is just a linear stream of keypresses. It's just very easy to reason about.
This is called key rollover: how many keys you can press and release in order and get all the events properly recognised. Modifiers usually have to support pretty large rollover; alphanumerics are numerous and often put into a matrix of connections where you cannot distinguish all combos, but usually two keys at once work anyway (fast typing requires pretty high rollover support for at least some letter sequences). But three letter keys not adjacent in any normal word… low-end keyboards might be unhappy with this.
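The "can't distinguish all combos" part can be sketched in a few lines. This simulates the textbook simplification of a passive scan matrix without per-key diodes, where holding three corners of a rectangle makes the fourth corner read as pressed (a "ghost"); it is an idealized model, not any particular keyboard:

```python
def matrix_scan(pressed):
    """Keys a diode-less matrix reports when `pressed` (a set of
    (row, col) pairs) are physically held.  Current flows back
    through held keys, so three corners of a rectangle of held
    keys make the fourth corner appear pressed too."""
    seen = set(pressed)
    changed = True
    while changed:
        changed = False
        rows = {r for r, _ in seen}
        cols = {c for _, c in seen}
        for r in rows:
            for c in cols:
                if (r, c) in seen:
                    continue
                # Ghost if (r, c) completes a rectangle of held keys.
                if any((r, c2) in seen and (r2, c2) in seen and (r2, c) in seen
                       for r2 in rows for c2 in cols):
                    seen.add((r, c))
                    changed = True
    return seen

# Any two keys are always reported correctly...
print(sorted(matrix_scan({(0, 0), (1, 1)})))          # [(0, 0), (1, 1)]
# ...but three keys sharing rows/columns conjure a ghost fourth.
print(sorted(matrix_scan({(0, 0), (0, 1), (1, 0)})))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

This is exactly why two simultaneous keys usually work anywhere, while certain three-key combos fail on keyboards that don't pay for diodes or smarter wiring.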
It used to be a big problem for games, where you'd have two people using opposite ends of the keyboard in a split-screen multiplayer (I guess computers are cheap enough now that this doesn't happen so much?). I remember having a keyboard where the scanning was clearly left-to-right, because the person using the left side of the keyboard could prevent the person using the right from performing critical actions by pressing too many keys.
Ok, I am test driving this in VS Code and I think I love this very much, especially in combination with https://marketplace.visualstudio.com/items?itemName=VSpaceCode.whichkey.
However, the devil can do this:
Is there some VS Code extension which allows such repeatable keys?
Thanks for sharing Kanata! I've been holding off on investing in KMonad because several of my devices have unusual architectures and I didn't find GHC easy to bootstrap, but Rust will be easier to go all in on: the config is more valuable the more consistently I can use it.