Ah, synchronicity! What surfaces this now?
Just six hours ago (and about as long past bedtime), I was learning from sparse Reddit threads where to get OVMF in GNU Guix. Like the author, I’ve been drawn towards increasing amounts of both immutability and personalization (i.e. exoticism), one locus being Erase Your Darlings. Just looking at their config I can feel the hours of persistent, stubborn wrangling.
Eventually, many hours later, […]
I had my own run-ins with EFI while setting up Secure Boot.
Something (some daemon, utility, firmware, or package-script) keeps marking efivars immutable, and that attribute escaped my notice for several hours, on multiple occasions.
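In case it saves someone the same hours: the attribute in question turned out to be the immutable inode flag, which efivarfs applies to most variables by default (since Linux 4.5, as I understand it) to guard against bricking; chattr -i /sys/firmware/efi/efivars/&lt;var&gt; clears it. A sketch of the same mechanism from Python, with ioctl numbers that are my assumption for x86-64 Linux:

```python
import fcntl
import struct

# Inode-flag ioctls from linux/fs.h; values are for x86-64 Linux and
# worth double-checking against your own headers.
FS_IOC_GETFLAGS = 0x80086601
FS_IOC_SETFLAGS = 0x40086602
FS_IMMUTABLE_FL = 0x00000010  # the `i` bit that lsattr/chattr show

def get_flags(path):
    """Return the inode attribute flags for path (what lsattr prints)."""
    with open(path, "rb") as f:
        buf = fcntl.ioctl(f, FS_IOC_GETFLAGS, struct.pack("l", 0))
    return struct.unpack("l", buf)[0]

def clear_immutable(path):
    """Rough equivalent of `chattr -i path`; needs CAP_LINUX_IMMUTABLE."""
    with open(path, "rb") as f:
        flags = struct.unpack(
            "l", fcntl.ioctl(f, FS_IOC_GETFLAGS, struct.pack("l", 0)))[0]
        fcntl.ioctl(f, FS_IOC_SETFLAGS,
                    struct.pack("l", flags & ~FS_IMMUTABLE_FL))
```

Same idea as the shell one-liner, just spelled out; lsattr on the variable file is the quick way to confirm the flag is what’s biting you.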
What I wish I’d tried earlier was just booting into the EFI shell, because you can edit EFI variables much faster there.
Does somebody expect me to remember those magic [GRUB] spells? What [should you] do next to boot into Linux?
I hardly remember commands for the GRUB & EFI shells. For a long time I didn’t know you could scroll in EFI shells, and the output of help would scroll off-screen. Using my phone sucked, a VM wasn’t always sufficient (or simple: see TFA), and I really appreciate my one […]
Although it’s later in the boot process, I want to give props to the Guix team here: their Guile Scheme initramfs scripts error out into a REPL, where you can import all the same (gnu build [...]) modules the script was using. I don’t think you can scroll, but it’s refreshing to have all those tools and a backtrace at hand as soon as something goes wrong.
I don’t even want to think how many hours I lost because of this. My actual problem was more nuanced […]
Rant: Guix doesn’t yet support key-files for LUKS partitions, so I made a mapped-device-kind that does. Other code filters mapped-devices for LUKS partitions by checking their types, and proceeds to miss mine. To avoid forking Guix (which I guess I could do) or subverting its device management entirely, I had to mutate the existing type to add special cases for my devices.
Another pain-point: having achieved darling erasure with BTRFS, I’m now pursuing root on ZFS, which has a… tumultuous history with Guix. I’ve done all the necessary wrangling to mount the datasets in the initramfs, but Guix really wants a path to a block device that it can watch for. I don’t want to write (and maintain) my own fork of the initramfs, or stoop to putting root on a ZVOL just to satisfy that requirement, so I’m working on whatever cheap hacks are necessary to get around the existing code.
Which is somehow to say: it’s always like this. I can’t suitably articulate right now why I persist in having everything just so, but I learn a heck of a lot about both the underlying systems and the towers built atop them by being so stubborn. Software infrastructure was how I got into programming in the first place, and will always be a blessing and a curse. Heaven help those who rely on my homelab.
I find this post valuable informationally and personally. Thanks joonas for taking the time to write it, and to bsandro for sharing it.
Edit: Ugh, going to need to mutate / advise more functions.
“I didn’t identify with it for a long time; not until everyone else had been getting an earful for years. I was just trying to get my computer to work, and I guess I picked it up along the way. Couldn’t get everything just right without a lil’ scripting. I thought, does this (i.e. Bash) really count? How do people use their computers (ahem, Linux) without programming? But I’m well past any plausible stage of denial now :p”
A few weeks ago on macOS I tried installing Nix. I saw it created its own volume. “Oh gosh” I thought “is this going to permanently allocate some part of my small 250GB SSD to itself?” Imagine my surprise when I looked at the volume manager and saw both the main partition & Nix volume had the same max size of 250GB. It was at that moment I realized filesystems had in fact advanced since the early 2000s and statically-allocated exclusive contiguous partitions weren’t actually the way things had to be done anymore. Logical volumes can coexist in the same partition, using only however much space they need to use! This led me to discover the FOSS filesystem that has this feature (and is included in the Linux kernel), BTRFS.
I asked on Unix SE about installing different distros as subvolumes of a single BTRFS partition so they only take up as much space as they actually need. You can do it, but a lot of distro kernel-upgrade workflows don’t account for it (as the author mentions, Windows updates might also have trouble with this). So I ended up using logical volumes instead, which are very well-supported, make partitions easy to manually grow/shrink, and ensure you don’t have to worry about contiguous or empty space. That got me most of the way there. Still, I look forward to a future where you can just set your entire disk (or multiple disks, using logical volumes) as one giant BTRFS partition and install everything into subvolumes, so we never again have to worry about partition juggling.
The boot menu of Quibble looks like if you took grub, made it HiDPI aware, and added nicer fonts.
Underrated feature, I love when boot code acknowledges that monitors have been manufactured after 1990. I use systemd-boot which I don’t think has this.
The 13-in-1 multiboot image for rapid distro-hopping on the PinePhone is such a BTRFS partition, with a subvolume for each distro’s root and (IIRC) a shared kernel and initramfs.
Neat! Love the return of a painted Spritely character (the classic site had so much charm), and this debugger puts others I’ve endured to shame.
As an aside, E keeps popping up as a spring of inspirations, a la… I’m blanking on it, that influential hypothetical language; I’ll comment back when it comes to me. Let’s go with T for now.
Glad you liked the painted Spritely characters. They’ve been making their way back slowly into the new site, but yes, not as front and center as before. But I too really enjoy them. :)
E is definitely cool, and has been a huge influence on Spritely, as is probably obvious. It’s funny you should mention T: yes, the T scheme/lisp indirectly has had a big influence on Spritely also, because Jonathan Rees worked on it, and it both heavily influenced Mark Miller and company’s approach towards treating lexical scope as the foundation for ocaps (fun fact: Jonathan A. Rees and Mark S. Miller went to college together at Yale, and years later went on to work on ocaps independently and came to many of the same technical conclusions without talking to each other!), and also was the predecessor to Jonathan A. Rees’ later Scheme, Scheme48. Jonathan Rees’ “security kernel” dissertation, which showed that a pure scheme could be an ocap-safe programming language environment, directly enabled Goblins to happen. (Speaking of weird short programming language names, the code for that security kernel, W7, is available, but few people know about it. It’s amazing how compact and beautiful it is, because Scheme48 already enabled it to be so.)
It occurred to me while reading this that equality saturation (https://egraphs-good.github.io/) might be the missing piece that allows generalized macros to compose. A macro implemented with equality saturation could see every expansion step of its neighbors and rewrite based on the specific one(s) it’s looking for.
Whoa, that’s a really interesting idea! I’m not really sure how you’d decide on the “optimal” rewrite – I guess macros would include, in their expansions, how “specific” that expansion is? Or something like that? Definitely something to think about.
When writing macros within Scheme’s syntax-case model, they’re expressed as case-style pattern-matchers over syntax-objects (which themselves appear ideal for translation into e-nodes). In that context, I would posit that optimal extractions from saturated graphs are those which fulfill the earliest possible matches. Hence a match on a left-most set literal would 1) take precedence over the no-match case, 2) could be applied after the test macro expands, and 3) could maybe even propagate transformations of sub-nodes into equivalent expansions where those literals have been eliminated (or not yet expanded into being).
Implementing such a system would be difficult (let alone in Janet, without an existing syntax-case to fork), and although I think it addresses the settable example as given, it’s a rough model. There are still ambiguous cases where one would presumably fall back into depth-first expansion.
The biggest problem is with that 3rd part, which is kinda out-there. Macarons are effectively expansions of their parent expressions, so they can’t actually contribute transformations of themselves or their sibling arguments that are separable from those parent expansions. Putting that aside (maybe by annotating with source syntax objects / equivalent e-nodes when preserving the transformation would be valid), it would feel kinda cursed to allow a match on a literal which might only exist in superposition (don’t let reason stop you :p).
I guess 2/3 with the fall-back caveat ain’t too bad, but disclaimer: this is way over my head; i hardly grok nondeterminism and look at this with the same awestruck unfamiliarity as µKanren, which i also don’t know nothin’ about
I’ve had an idea kicking around my brain for a while now of a way to implement a more powerful and flexible macro system than defmacro for languages with lots of parentheses, but I’ve been too busy working on a book to actually try to implement it. But the book is out now! So I’m going to try to mock it up in Janet and see how it feels in practice, and then (hopefully) write a blog post if it goes well.
That sounds awesome! I read a lot of the literature on syntax-case last month, have been loving reading Janet for Mortals in my downtime, and haven’t reached your chapter on macros yet, but I think it’s a particularly interesting language for prototyping your idea because of e.g. the behavior you discovered in “Making a Game Pt. 3” (which isn’t necessarily portable). I’d be interested in any ideas you have in this area (even if they’re not merit-ful or focused on hygiene), and will be looking forward to the post c:
: Not all of which was correct: there is a false ambiguity on the surface, and true undefined “implementation-dependent” behavior deep in the bowels of the spec.
Hey thanks! Glad you’re liking the book. Here’s a quick sketch of my macro idea: https://github.com/ianthehenry/macaroni
I can’t find any prior art for this but I have no idea what to search for or call it.
Spent some time considering prior art, and the closest I could get was what Guile calls Variable Transformers.
In e.g. Common Lisp and Elisp, Generalized Variables can be extended by defining a dedicated macro which setf finds and invokes (via a global map, symbol properties, etc).
In Guile, you can create a macro which pattern-matches on the semantics of its call-site:
Because it needs to be used as an identifier, it can’t define set!-able sexps like Common Lisp or Elisp would allow, but neither can macaroni. It’s not a first-class object, short of being a normal function under the hood. Finally, it’s handicapped by only being passed its parent’s form in the third situation, essentially still at set!’s discretion (not sure about the exact mechanism in use). Definitely the only other example I could find of an identifier-bound macro receiving the form it is invoked within.
Stayed up too late to think any more, but love the idea, that’s awesome
Hey thanks! I had seen something very similar to this in Racket before – I guess it’s a general scheme thing.
You actually can make settable sexps with the macaroni approach, by returning a first-class macaron that checks the context it appears in – https://github.com/ianthehenry/macaroni/blob/master/test/settable.janet
(The downside explained there tells me that I should spend some more time thinking about controlling the order of expansion… which would also make it easier to define infix operators with custom precedence…)
Ooo, I can see how I’d have missed that on the way out, nice! Found Racket’s Assignment Transformers, and they (bless the docs!) explain that they are indeed just sugar over a variant of the “symbol-props” approach. I wonder if this approach (i.e. returning an anonymous or gensym’ed identifier macro) could be retrofitted into that model, but it feels clear to me that macarons more cleanly solve and generalize what has always been a messy situation in Lisp implementations.
As another exploratory question, are we limited (in practice or theory) to the immediate context of the parent form? Aside from that, dispatching on grandparent or cousin forms feels kinda cursed. I wonder what use cases pull sibling sexps into play.
Funny how having to dispatch on the set literal kinda resembles the limitations of a system invoked by set itself, but it’s progress! Re: expansion order, my gut feeling is that they ought to be compatible with other extensions that don’t explicitly macro-expand their arguments (i.e. until the set form is established), but I haven’t really dug into how Janet / this all works and need more coffee first
Theoretically you can rewrite forms anywhere, but I’m having a hard time coming up with a situation where you’d want to. But here’s a nonsensical “grandparent” macaron:
(defmacaron sploot [& lefts1] [& rights1]
  (macaron [& lefts2] [& rights2]
    (macaron [& lefts3] [& rights3]
      ~(,;lefts1 ,;lefts2 ,;lefts3 ,;rights1 ,;rights2 ,;rights3))))

(test-macaron
  (foo (bar (sploot tab) 1) 2)
  (bar foo tab 1 2))
You could also write a recursive macaron that rewrites an arbitrarily distant ancestor.
Here’s an example of a potentially interesting “cousin” macaron:
(defmacaron ditto [& lefts] [& rights]
  (def index (length lefts))
  (macaron [& parent-lefts] [& parent-rights]
    (def previous-sibling (last parent-lefts))
    (def referent (in previous-sibling index))
    [;parent-lefts [;lefts referent ;rights] ;parent-rights]))

(test-macaron
  (do (print "hello") (print ditto))
  (do (print "hello") (print "hello")))
Is that useful? I dunno, maybe? It’s like !$ in shell, but copies the element at the exact previous position. An actual !$ is even easier to write:
(defmacaron !$ [& lefts] [& rights]
  (macaron [& parent-lefts] [& parent-rights]
    [;parent-lefts
     [;lefts (last (last parent-lefts)) ;rights]
     ;parent-rights]))

(test-macaron
  (do (print "hello" "there") (print !$))
  (do (print "hello" "there") (print "there")))
You could also rewrite both expressions into a single gensym’d let expression so that the argument only actually gets evaluated once.
(defn drop-last [list]
  (take (- (length list) 1) list))

(defmacaron !$ [& lefts] [& rights]
  (macaron [& parent-lefts] [& parent-rights]
    (def previous-sibling (last parent-lefts))
    (def up-to-previous-sibling (drop-last parent-lefts))
    (def referent (last previous-sibling))
    (with-syms [$x]
      ~(,;up-to-previous-sibling
        (let [,$x ,referent]
          (,;(drop-last previous-sibling) ,$x)
          (,;lefts ,$x ,;rights))
        ,;parent-rights))))

(test-macaroni
  (do (print "hello" "there") (print !$))
  (do (let [<1> "there"]
        (print "hello" <1>)
        (print <1>))))
I think this is pretty interesting? Maybe possibly even useful, to add debugging output or something?
At the repl Janet assigns _ to the result of the previous expression – you could do that in arbitrary code; implicitly surrounding the previous expression with (def _ ...) if you use _ in an expression. Hmm. Not super useful…
Funny how having to dispatch on the set literal kinda resembles the limitations of a system invoked by set itself, but it’s progress
Yeah, but it allows you to “extend” the behavior of set without set having to know anything about your custom extension (or even knowing that it is itself extensible!). But the evaluation order is problematic. Hmm.
I find that the header file problem is one that tup solves incredibly elegantly. It intercepts filesystem calls, and makes any rule depend on all the files that the subprocess accesses. Solves headers in an incredibly generic way, and works without requiring hacks like -MMD.
Not sure if the author is here, but if you are, any plans to support something like that?
It intercepts filesystem calls, and makes any rule depend on all the files that the subprocess accesses. Solves headers in an incredibly generic way, and works without requiring hacks like -MMD.
So the “proper” way is to intercept the filesystem calls in a non-portable manner and depend on anything the program opens without regard for whether it affects the output or not (like, say, translations of messages for diagnostics). While explicitly asking the preprocessor for an accurate list of headers that it reads is a hack?
The problem with the second option is that it isn’t portable between languages or even compilers. Sure, both GCC and clang implement it, but there isn’t really a standard output format other than a makefile, which isn’t really ideal if you want to use anything that isn’t make.
It’s an unfortunate format, but it’s set in stone by now, and won’t break. It has become a de facto narrow waist with at least 2 emitters:
and 2 consumers:
Basically it’s an economic fact that this format will persist, and it certainly works. I never liked doing anything with it in GNU make because it composes poorly with other Make features, but in Ninja it’s just fine. I’m sure there are many other non-Make systems that parse it by now too.
That’s a fair point; I also didn’t know Ninja supported it, but it makes sense. I wonder if other languages support something similar to allow for this kind of thing, though many modern languages just sidestep the issue altogether by making the compiler take care of incremental compilation.
Most tools could probably read the -M output format and understand it quite easily. It doesn’t use most of what could show up in a Makefile - it only uses single-line “target: source1 source2” rules with no commands, no variables, etc. I imagine if someone wanted to come up with a universal format, it wouldn’t be far off from what’s already there.
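That subset really is tiny, and a parser for it fits in a dozen lines. A sketch (the function name is mine) that handles bare `target: prereqs` rules and backslash line-continuations, while ignoring corner cases like escaped spaces in paths and Windows drive-letter colons:

```python
def parse_depfile(text):
    """Parse the restricted Makefile subset that `gcc -M` / `-MMD`
    emits: bare `target: prereq prereq ...` rules with backslash
    line-continuations, no recipes, no variables. Returns a dict
    mapping each target to its list of prerequisites."""
    # Splice continuation lines back together first.
    text = text.replace("\\\n", " ")
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue  # blank lines; nothing else should appear
        target, _, prereqs = line.partition(":")
        deps[target.strip()] = prereqs.split()
    return deps
```

The empty-prerequisite rules that `-MP` adds for each header parse fine too; they just come out as targets with empty lists.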
But… don’t you want to update your program when diagnostic messages are changed? The FUSE mount doesn’t grab e.g. library and system locales from outside the project root, so it only affects the resources of the project being built. Heaven forbid you’re bisecting a branch for a change that is, for reasonable or cursed reasons alike, descended from one of those files…
For those interested, I’ve pitched tup and mused about this in a previous comment here.
: Provided you don’t vendor all your dependencies into the repo, which I guess applies to node_modules! Idk off the top of my head if there’s a way to exclude a subdirectory for this specific situation, or whether symlinks would work for controlling the mechanism.
Edit: Oh, it’s u/borisk again! I really appreciated your response last time this came up and hope you’re doin’ great c:
Edit 2: Oh, and you work on a build system! I’ll check it out sometime ^u^
I originally started Knit with the intention of supporting automatic dependency discovery using ptrace. I experimented with this with a tool called xkvt, which uses ptrace to run a list of commands and can generate a Knitfile that expresses the dependencies. However, I think this method is unfortunately more of a hack compared to -MMD because ptrace is non-portable (not well supported/documented on macOS and non-existent on Windows) and has a lot of complexity for tracing multithreaded processes. A FUSE-based approach like the one used by Tup is similar (maybe more reliable), but requires FUSE (a kernel extension), and also has the downside that automatic dependency discovery can sometimes include dependencies that you don’t really want. As a result, when I tried to use Tup for a Chisel project, I ran into problems because I was invoking the Scala build tool, which generated a bunch of temporary files that Tup required to be explicitly listed.
I think if Knit ever has decent support for an automatic dependency approach, it would be via a separate tool or extension rather than directly baked into Knit by default.
Cool! I’ve always thought about running a dynamic site based on Haunt, which doesn’t quite fit into this subset of Scheme, but the example has a very similar structure. Love the idea, and the sleek deployment method; I haven’t got similar ergonomics for my own deploys yet…
Haven’t actually posted anything on my site (so I haven’t crafted CSS for it or anything), but I’ve collected a short list of homages and related posts (including a link to commentary on inspirations) here:
Left out a repo that translates the same Queen’s post into Rust because it wasn’t in narrative form, which felt important to me at the time, but idk, that’s cool too and available here:
The format is fun, I like how people adapt the themes from Aphyr’s original blogs. It’s a bit of a colorful show and tell without being too dry about the subject matter. Props to collecting all these formats into a repository!
Without paying too much attention to it, I chalked the recent arguments up (as u/scraps does) to the implicit / missing context of Casey’s e.g. game dev background (where most code really is performance critical).
This conversation pulls the argument out of that framework, recognizing that there is a place in practically all software for a performance-aware approach, while tactfully digging back at an equally dogmatic dismissal of other concerns (those which, as Casey may justifiably say, “are beyond the scope of this course”).
Loving said course, and glad to see these two tribal icons able to exchange ideas and reconcile these tensions into conscious tradeoffs for their audiences (and those who will inherit future tribal knowledge) to consider.
I often think back to this moment in time when my computer felt just right, everything under my control and ./configure‘d just for my use-cases; with Gentoo (DWM). It probably didn’t feel “finished” at the time (does it ever?), but I had a lot of time to spend on it then and the local maximum stayed with me. I learned real problem solving skills, and whatever field or eldritch nature I encounter software “issues” in today, it’s those foundational experiences which equipped me with the confidence to grope into the dark, an intuition for where to find light, and a tactile familiarity with the properties of the lens through which we focus, particularly when mapping a system’s behavior. I’m still learning how best to solve problems once I understand them, and wouldn’t be here if it weren’t for the tools that embracing Gentoo gave me. I occasionally donated when I was daily driving it and, aware that the unusually high spend this year was intentional, will donate again soon. Even when I don’t use the distro, the wiki has been priceless, and is my go-to.
On a more reified note, Portage’s USE flags are great and I hope to see functional package managers (not sure about nixpkgs?) taking wider advantage of a Restricted Dictionary of Keyword Arguments to allow users to tailor their software and its dependencies (inc. optional deps!). Such flags do exist, but do not have the same level of standardized vocabularies, application and discovery mechanisms, flag-based dependency-resolution, and of course, widespread implementation. I can write manual package transformations as an end user, but that introduces a dependency on the package’s build phases and is far more verbose. While both systems theoretically empower end users, Gentoo has nurtured a packaging model and culture which actively supports those powers.
Well, as supported as combinatorial explosions can get; I didn’t learn those skills fixing nothing :p. Will always have a place in my heart for Gentoo, both Portage and the community. Huge shoutout to the GentooLTO project.
Not quite through, but I was expecting a quicker read and was pleasantly surprised to find something so thorough! Won’t have time to finish it tonight :p
Some thoughts, like, “The Final Word” might be overstating things:
I point these out not to downplay ZFS’s own lovable quirks and the revolutionary impact they’ve had (notably on the lineage of these very systems), but to highlight these and future projects. It’s too soon to underscore “The Final Word”. That said, ZFS is still so worthy of our attention and appreciation! The author has clearly built a fantastic model and understanding of how the system works; I learned much more than I was ready for c:
One particular section caught my eye:
If you have a 5TiB pool, nothing will stop you from creating 10x 1TiB sparse zvols. Obviously, once all those zvols are half full, the underlying pool will be totally full. If the clients connected to the zvols try to write more data (which they might very well do because they think their storage is only half full) it will cause critical errors on the TrueNAS side and will most likely lead to extensive pool corruption. Even if you don’t over-commit the pool (i.e., 5TiB pool with 5x 1TiB zvols), snapshots can push you above the 100% full mark.
I thought to myself, “ZFS wouldn’t hit me with a footgun like that, it knows data-loss is code RED”. While “extensive pool corruption” might be overzealous, it is a sticky situation. The clients in this case are the filesystems populating the ZVOLs, which are prepared to run out of space at some point, but not for the disk to fail out from under them. Snapshots do provide protection/recovery-paths from this, but also lower the “over-provisioning threshold”. This isn’t the sort of pool corruption that made me concerned; I couldn’t find any evidence of that. It would obviously still be a disruptive failure of monitoring in production, and might best be avoided precautionarily by under-provisioning, which is a shame, but then even thick provisioning has a messy relationship with snapshots in the following section.
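The quoted scenario is plain arithmetic, and a tiny helper (entirely my own naming) makes the over-commit threshold easy to poke at:

```python
def pool_fill_fraction(pool_tib, zvol_count, zvol_tib, zvol_fill):
    """Fraction of the backing pool consumed when each of zvol_count
    sparse zvols of zvol_tib TiB is zvol_fill full on the client side.
    Ignores metadata, compression, and snapshot overhead, so it is
    an optimistic lower bound."""
    return (zvol_count * zvol_tib * zvol_fill) / pool_tib

# The article's case: 10x 1 TiB sparse zvols over a 5 TiB pool hit
# 100% pool utilization while every client still reports 50% free.
```

At pool_fill_fraction(5, 10, 1, 0.5) == 1.0 the clients still believe they have half their space left, which is exactly the mismatch being warned about; snapshots only move that threshold lower.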
I’m not sure any known system addresses this short of monitoring/vertical-integration. I guess it’s great when BTRFS fills up and you can just plug in a USB drive to fluidly expand available space, halt further corruption, and carry it into degraded performance gracefully. Not that BTRFS’s own relationship with free space is unblemished, but this does work!
Probably a viable approach in ZFS too (single-device vdev?), but BTRFS really shines in its UI, responsiveness, and polish during exactly these sorts of migrations, which I’d find relieving in an emergency. ZFS has a lot of notorious pool expansion pitfalls, addressed here, which I also wouldn’t have to think about (even if just to dismiss them as inapplicable under the circumstances because they relate to vdevs). It matters that I think ZFS can do it, and know that BTRFS can; its flexibility is reassuring. (Again, not a dig; I’m still going to great lengths to use ZFS everywhere, and for all this I don’t run butter right now :p)
Thinking about it more, this is probably because I’ve recreated BTRFS pools dozens of times, whereas ZFS pools are more static and recreating them is often kinda intense. It’s like BTRFS is declarative, like Nix, allowing me to erase my darlings and become comfortable with a broader range of configurations by being less attached to the specific setup I have at any given time.
Live replication and sharing are both definitely missing from ZFS, though I can see how they could be added (to the FS, if not to the code). Offline deduplication is the other big omission and that’s hard to add as well.
For cloud scenarios, I wish ZFS had stronger confidentiality and integrity properties. The encryption, last time I looked, left some fairly big side channels open and leaked a lot of metadata. Given that the core data structure is basically a Merkle tree, it’s unfortunate that ZFS doesn’t provide cryptographic integrity checks on a per-pool and per-dataset basis. For secure boot, I’d love to be able to embed my boot environment’s root hash in the loader and have the kernel just check the head of the Merkle tree.
Yeah, the hubris of the “last word in filesystems” self-anointment always struck me as fairly staggering. While perhaps not exceeded so quickly and dramatically, it’s a close cousin of “640K ought to be enough for anybody”. Has any broad category of non-trivial software ever been declared finished, with some flawless shining embodiment never to be improved upon again? Hell, even (comparatively speaking) laughably simple things like sorting aren’t solved problems.
Yeah, the hubris of the “last word in filesystems” self-anointment always struck me as fairly staggering.
It’s just because the name starts with Z, so it’s always alphabetically last in a list of filesystems.
Yeah, the hubris of the “last word in filesystems” self-anointment always struck me as fairly staggering. While perhaps not exceeded so quickly and dramatically, it’s a close cousin of “640K ought to be enough for anybody”.
Besides the name starting with Z which @jaculabilis mentioned, I suspect it was also in reference to ZFS being able to (theoretically) store 2^137 bytes worth of data, which really ought to be enough for anybody.
Because storing 2^137 bytes worth of data would necessarily require more energy than that needed to boil the oceans, according to one of the ZFS creators.
AFAIK the RAID5/6 write hole still exists, so that’s the notorious no-go; I’ve always preferred RAID10 myself. Does kinda put a damper on that declarative freedom aspect if mirrors are the only viably stable configuration, but the spirit is still there in the workflow and utilities.
Article author here, I made an account so I could reply. I appreciate the kind words, I put a lot of effort into getting this information together.
After talking to some of the ZFS devs, I’m going to clarify that section about overprovisioning your zvols. I misunderstood some technical information and it turns out it’s not as bad as I made it out to be (but it’s still something you want to avoid).
The claim that ZFS is the last word in file systems comes from the original developers. I added a note to that effect in the first paragraph of the article. I have more info about what (I believe) they were getting at in one of the sections towards the end of the article: https://jro.io/truenas/openzfs/#final_word
I’m obviously a huge fan of ZFS but I’ll admit that it’s not the best choice for every application. It’s not super lean and high-performance like ext4, it doesn’t (easily) scale out like ceph, and it doesn’t offer the flexible expansion and pool modification like UNRAID. Despite all that, it’s a great all-round filesystem for many purposes and is far more mature than something like BTRFS. (Really, I just needed a catchy title for my article :) )
I read this, and immediately followed it with “I feel for the NetBSD community”; there, on Rubenerd’s home page: https://rubenerd.com/the-beauty-of-cgi-and-simple-design-by-hales/
In which he links to Halestrom: https://halestrom.net/darksleep/blog/046_cgi/ (Originally written here! https://lobste.rs/s/pdynxz/long_death_cgi_pm#c_lw4zci)
For formal folks, there is an RFC: https://www.rfc-editor.org/rfc/rfc3875
For those of us in the lucky 10,000 who want to know more, I do believe the rabbit hole is this way; but remember that there’s no better way to learn than to create! See implementations of the spec and the resources above.
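To that end, the whole contract in RFC 3875 is small enough to sketch in a few lines: the server hands the script request metadata in environment variables (QUERY_STRING, PATH_INFO, and friends), and the script answers on stdout with header lines, a blank line, then the body. The names and greeting below are mine, not from any framework:

```python
import os
import sys
from urllib.parse import parse_qs

def cgi_response(environ):
    """Build one CGI response. Per RFC 3875, request metadata arrives
    as environment variables; the reply is header lines, then a blank
    line, then the body."""
    query = parse_qs(environ.get("QUERY_STRING", ""))
    name = query.get("name", ["world"])[0]
    body = "hello, %s\n" % name
    return ("Content-Type: text/plain\r\n"
            "Status: 200 OK\r\n"
            "\r\n" + body)

if __name__ == "__main__":
    # Under a real web server, os.environ carries QUERY_STRING et al.
    # and stdout is wired back through the gateway to the client.
    sys.stdout.write(cgi_response(os.environ))
```

Drop something like this in cgi-bin with the executable bit set and any spec-conforming server will run it; the fact that it’s trivially testable without a server is half the beauty.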
I’d like to work through Let Over Lambda, a book that explains all sorts of macro shenanigans!
I’ve been meaning to write up some notes on my journey with Lisp this year, having read On Lisp, Let Over Lambda, and The Little Schemer. I can briefly attest that, while Graham’s prose drew me in gently and The Little Schemer had concrete exercises, Let Over Lambda was my favorite read.
At first he seemed to proselytize a bit heavily (more even than PG), but his evident passion for macro-ology soon gave weight to his perspective, culminating in a broad appreciation of any language featuring those uniquely elegant philosophies which redefine solutions and problems alike. The zoo of macros was fascinating, I found his defense of anaphora compelling, and am still chewing on later chapters even as I’ve moved on to other works.
The only factor dampening my enjoyment is that I’ve never used Common Lisp, writing mostly in Scheme and Elisp, the latter now feeling all the more inadequate. Can’t recommend it enough to anyone who has grasped the basics of Lisp and would like to see what’s possible with macros; it had an immediate impact on the ways that I use and write them.
Xmonad will go with it :( But until Wayland has color management, it’s not a suitable display server for my workflows.
Off the top of my head I presume they’re referring to an ICC profile, which is a file produced by a calibration procedure, mapping colors output by the OS to values that reproduce the intended color accurately on your specific display (be it by model online, or for your individual unit via an at-home calibration tool).
Yep. It deals with the color accuracy of your display. One usage is to normalize to the ‘standard’ colors, so my red is the same red as your red as measured by the colorimeter. After that, another usage is to simulate how an output will look: say you were sending work to a printer, you can get the profile of that printer so you can work on a project with the same colors that that printer can produce. I don’t print stuff that often, but I do look at and reproduce designs for the web, and I have a lot more confidence discussing the design knowing we are on the same page as far as color (code monkey only copies color from a design, but it can be a collaborative process for others).
Functional package managers don’t get a mention here, but it is interesting to consider them in this article’s context.
On one hand they’re much like traditional package managers, in the sense that the burden of packaging is picked up by a third party who might not know the project intimately. This sort of burden still requires a great number of volunteer hours.
On the other, they side-step many of the messier aspects of dependency management and installation complexity through their isolation of packages and outputs. In a way one could see them as entirely ancillary to the article’s focus, because they can output either their own native package objects, or another format entirely (ie. a docker image in practice, but hypothetically, why not an appimage or flatpak?).
Returning to volunteer hours, I’ve also seen many a project that isn’t packaged by a third party but instead comes with a nix or guix manifest or package in the source tree. Even if this article is totally right, there could still be arguments for functional systems in the packaging of flatpaks by app devs, with the added benefit of functional tooling for dev environments in day-to-day work and onboarding c:
A lot of attention has fallen on symlinking files out of a Store a la Nix. In the last thread I mentioned git-annex, whose index essentially provides this as a Content Addressed Store. It allows you to restructure your worktree into arbitrary hierarchies based on your tags with one-liners, as easily as if you were checking out a branch, and then to manipulate a file’s tags by just moving it around in these ephemeral tag trees! It fixes broken symlinks automatically, and even comes with a file watcher that can add, commit, and push in the background.
I mentioned some issues I had with storing large numbers of files in git, but also that there are config settings to address that, and broader techniques that can be applied (it’d be kinda nice to represent directories as single store items without zipping them, accepting that their contents wouldn’t be tagged or indexed in the same ways). One may also like to eg. ensure the daemon is always running and seeing everything, perhaps by pulling git-annex further down the software stack, towards the FUSE level that many existing options are operating at; I’m not sure what other benefits could be gained from that. I think git-annex offers a compelling model and existing implementation that could be leveraged and adapted towards these use-cases, and I do intend to revisit annexing my entire array’s worth of storage :p
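The store-plus-symlink model is simple enough to sketch. Here’s a toy version in Python (the function name and flat store layout are mine for illustration; git-annex’s real key scheme and directory layout differ):

```python
import hashlib
import os
import tempfile

def annex_add(store_dir: str, path: str) -> str:
    """Move a file into a content-addressed store and symlink it back,
    roughly the shape of git-annex's symlink mode (toy layout)."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    key_path = os.path.join(store_dir, digest)
    if not os.path.exists(key_path):
        os.replace(path, key_path)   # first copy moves into the store
    else:
        os.remove(path)              # duplicate content: drop it, reuse the key
    os.symlink(key_path, path)       # the worktree entry is now just a symlink
    return digest

# Two files with identical content end up as one store object:
tmp = tempfile.mkdtemp()
store = os.path.join(tmp, ".store")
os.mkdir(store)
for name in ("a.txt", "b.txt"):
    with open(os.path.join(tmp, name), "w") as f:
        f.write("same bytes")
k1 = annex_add(store, os.path.join(tmp, "a.txt"))
k2 = annex_add(store, os.path.join(tmp, "b.txt"))
assert k1 == k2 and len(os.listdir(store)) == 1
```

Once the worktree is only symlinks into the store, a tag-based “view” reduces to rebuilding a directory tree of fresh symlinks, which is why rearranging it is as cheap as a checkout.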
The `out` macro described is basically how Clojure’s `str` works. It takes any number of forms, discards the nils, calls `toString` on the rest, and then joins them. By relying on nil-punning, you can embed an expression and it will skip any result that evaluates to nil: `(str "Hello" (when some-val " World!"))` returns “Hello World!” when `some-val` is truthy, and “Hello” when it is not.
Haven’t used Clojure (yet!), thanks for sharing! In similar situations I often find myself pruning via
```
(apply str `("Hello" ,@(when some-val " world!")))
;; Equivalent to:
(apply str (append '("Hello") (when some-val " world!")))
;; Of course, this only works if a no-op `when`
;; returns a value equivalent to the empty list
;; ( ie. nil/false = '() );
;; Otherwise we need to use an `if`, which doesn't feel as elegant...
(apply str `("Hello" ,@(if some-val " world!" '())))
```
PS: In a Lisp-2 the syntax would be `(apply #'str ...)`, and, can I use back quotes in in-line monospace spans?
The suggestion to filter warnings based on reachability analysis reminds me of similar consideration given to conditional compiler warnings in a recent discussion on changing Go’s for-loop syntax (an incredibly considerate discussion in general).
🤯 very cool! I don’t need it, but I’ll try to find an excuse to use the idea somewhere!
FWIW, I’ve recently reached the next stage of my laptop’s keyboard bliss when I remapped:
That is, just move the modifiers one row up (key acts as a modifier when held, and as a key when pressed). I use kanata to do the remapping.
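For anyone curious, the hold-to-modify part is a few lines of kanata config using its `tap-hold`; a minimal sketch (the keys, timings, and alias names here are my guesses at “one row up”, not the author’s actual mapping):

```lisp
(defcfg
  process-unmapped-keys yes
)

(defsrc
  z x c /
)

(defalias
  mz (tap-hold 200 200 z lsft) ;; z: tap = z, hold = left shift
  mx (tap-hold 200 200 x lctl) ;; x: tap = x, hold = left ctrl
  mc (tap-hold 200 200 c lalt) ;; c: tap = c, hold = left alt
  ms (tap-hold 200 200 / rsft) ;; /: tap = /, hold = right shift
)

(deflayer base
  @mz @mx @mc @ms
)
```

The two numbers are the tap and hold timeouts in milliseconds; tuning them is most of the work of making such a layout comfortable.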
One thing that surprised me when playing with using normal keys as modifiers is that on some keyboards that is impossible. On some keyboards modifier keys are electronically different from normal keys and the keyboard won’t emit key down events if you’re already holding down another normal key. I really like the idea with spacemacs and devilmode that you don’t have to hold two keys at once at all. That the entire interaction is just a linear stream of keypresses. It’s just very easy to reason about.
This is called key rollover, how many keys you can press and release in order and get all the events properly recognised. Modifiers usually have to support pretty large rollover; alphanumerics are numerous and often put into a matrix of connections where you cannot distinguish all combos, but usually two keys at once work anyway (fast typing requires pretty high rollover support for at least some letter sequences). But three letter keys not adjacent in any normal word… low-end keyboards might be unhappy with this.
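The matrix limitation can be simulated in a few lines: in a diode-less matrix, each pressed key electrically bridges its row and column wire, and a key *reads* as pressed whenever its row and column wires end up connected through any chain of pressed keys. A toy union-find sketch (hypothetical 2×2 matrix, not any specific keyboard’s wiring):

```python
def scan(pressed, rows=2, cols=2):
    """Simulate scanning a diode-less key matrix. Each pressed key (r, c)
    bridges row wire r and column wire c; a key reads as pressed whenever
    its row and column wires are in the same connected component."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for r, c in pressed:
        parent[find(("row", r))] = find(("col", c))

    return {(r, c)
            for r in range(rows) for c in range(cols)
            if find(("row", r)) == find(("col", c))}

# Two keys on distinct rows and columns are fine...
assert scan({(0, 0), (1, 1)}) == {(0, 0), (1, 1)}
# ...but three corners of a rectangle conjure a fourth "ghost" key:
assert (1, 1) in scan({(0, 0), (0, 1), (1, 0)})
```

This ghosting is exactly why real keyboards either add a diode per key (full NKRO) or have the controller suppress ambiguous combinations, which shows up to the user as a rollover limit.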
It used to be a big problem for games, where you’d have two people using opposite ends of the keyboard in a split-screen multiplayer (I guess computers are cheap enough now that this doesn’t happen so much?). I remember having a keyboard where the scanning was clearly left-to-right, because the person using the left side of the keyboard could prevent the person using the right from performing critical actions by pressing too many keys.
Ok, I am test driving this in VS Code and I think I love this very much, especially in combination with https://marketplace.visualstudio.com/items?itemName=VSpaceCode.whichkey.
However, the devil can do this:
Is there some VS Code extension which allows such repeatable keys?
Thanks for sharing Kanata! I’ve been holding off on investing in KMonad because several of my devices have unusual architectures and I didn’t find GHC easy to bootstrap, but Rust will be easier to go all in on; the config is more valuable the more consistently I can use it.