I tried to run this locally[1] by self-signing an OpenAI proxy, but eventually ran into Node errors that convinced me I was working with broken sources. At that point I pulled the prompt out, started feeding pages through html2txt, and got great results! I think this has a lot of potential.
Unfortunately many of my bookmarks caused issues with reliable JSON output. This was mostly pages containing snippets of JSON, like lobste.rs threads and hosted source trees. Adding clear delimiters and a “leading” suffix to the end of the prompt helped a little, and I added a validation gate via the openapi_schema_validator Python module, but some pages consistently confused the model. I’m aware that this is an area of active research, but wonder if the OpenAI API performs better (ie. at all) for pages like those I encountered.
1: Mistral 7B (OpenOrca, GGUF), Ryzen CPU.
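For anyone wanting to reproduce the gate without pulling in Python, jq alone gets you most of the way; something like this, where the required keys are just placeholders for whatever your schema expects:
$ printf '%s\n' "$model_output" | jq empty || echo "retry: model emitted invalid JSON" >&2
$ printf '%s\n' "$model_output" | jq -e 'has("title") and has("tags")' > /dev/null || echo "retry: missing required keys" >&2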
P.S: I should make it clear that I was building the potentially broken sources from, uh, source– the implication being that they might be out of sync with the version distributed on the Chrome webstore, and I just couldn’t be bothered to learn enough about Node or Chrome at the time to find out if/why/how to fix it.
It occurs to me only now that I could have tried running the distributed extension behind the proxy too, but I’m pretty happy with what I hacked up anyway c:
Did you run my extension, or some other code? I mainly tested it with gpt-4, which is good at generating correct JSON. Grammar-based sampling should probably help with this; I need to add it in the next version.
I’d have been running some other code :p
Still excited to hear that there are clear leads in the problem-space.
Thank you for the quick little tutorial. I have been wading through the eternal double-quote swamp every time I had to use JSON via bash.
Btw: The bouncing pufferfish is really cute.
I also love the pufferfish! Figuring out how you did it was a nice little puzzle.
thx! i was inspired by the many dvd logo screensavers i watched growing up 😌
My current approach is to use the chonky block of “Alternating Delimiters” for interpolation:
$ echo '{"name": "'"$name"'", "sign": "'"$sign"'"}'
This is especially egregious when used to include single quotes in single-quoted words (could probably be escaped cleanly instead, but idk):
$ echo 'it'"'"'s a small world'
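For what it’s worth, the apostrophe can be escaped without the chonk by dropping out of the quotes for just that one character, and when jq is around it will happily do the whole JSON interpolation for you:
$ echo 'it'\''s a small world'
$ jq -cn --arg name "$name" --arg sign "$sign" '{name: $name, sign: $sign}'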
As for indented HEREDOCS, another commenter mentioned you can have tabs stripped but i indent exclusively with spaces because conventional Lisp indentation doesn’t follow regular tab-stops (aside: is this the only POSIX-defined use of tabs outside of Make?).
I tend to do something like this:
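# roughly: underscore “indentation”, stripped back off by cut before the text goes anywhere
$ cut -c5- <<EOF
____{"name": "$name",
____ "sign": "$sign"}
EOF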
Sometimes replacing the underscores with spaces if i don’t need interpolation, omitting the pipe to cut if the destination isn’t whitespace-sensitive, or indenting the HEREDOC into its own block by increasing cut’s offset.
Waited to see the pufferfish hit a corner before reading TFA, and also appreciate the use of whitespace: pre; overflow-x: auto;.
What a solid call to action! The writing so successfully evokes the unflinching conviction of inevitability, making it clear that inertia still holds stable platforms while simultaneously grasping the chance to present itself as The Moment– and indeed, so much is happening!
The XFCE devs are discussing Wayland support as well, potentially hitting the GNOME2 market and catching some of the customizability-minded folks who bounced off of Plasma. Seems like tiling folks could hardly be better served than by Sway (not that I don’t love the alts :p), and GNOME/mutter support is old news. What an exciting time!
Anyone interested in ergo keyboards should consider the insane variety of DIY options, provided they have the hobbyist perspective necessary to justify a lil’ time investment:
https://ianthehenry.com/posts/kyria-build/a-wireless-ergonomic-keyboard/
https://aposymbiont.github.io/split-keyboards/
thanks for the links, the first one is a really nice guide of what to expect when assembling these yourself
I came here just to suggest a see-also llama. While looking it up, I realised you are the author of llama and it’s been now renamed to walk… right?
Yep, three weeks back: Rename llama to walk. I hadn’t heard of it before, and do like walk/lk slightly better because lk is such a unique and convenient shortcut in QWERTY (ie. it’s an inward roll with strong fingers on the home row, and not typically used for existing ls aliases).
I like lk myself too) Yes) llama right now is LLaMA by facebook.
I think walk is a better name for llama))
There’s a potential cartoon here: a cute llama playing in a field, with a looming approaching shadow, cast by the hoof of a gargantuan llama wearing a blue facebook vest and a nametag “LLaMA”.
Some really neat alternatives to Make:
https://github.com/go-task/task
https://github.com/casey/just
Those interested in Build System Theory can find my previous comment on Tup here, which TL;DR enforces its opinionated approach to specification ambiguity via a custom FUSE driver.
I’ve got a number of classic Build System papers in a binder somewhere (went through a phase :p), but recall only Build Systems à la Carte off the top of my head, which IMO would be a heavy place to start.
Some more digestible entrances to the rabbit-hole include:
I’ve used task myself in the past and it is a pretty good alternative, though requiring users to install an extra dependency manually isn’t ideal.
Just’s lack of tracking whether tasks are up to date based on file timestamps (or other mechanisms, like task has) is quite limiting for building performant builds.
I decided to avoid the debate of which tools are the best as there’s no right answer. What I hope it laid out was how make can be used in a straightforward way to get good results.
You may already know this, but Just isn’t a build system, it’s a task runner, so I think doing the kind of tracking you’re pointing out is explicitly a non-goal.
I’ve successfully used mk[1] in several projects.
[1] https://9fans.github.io/plan9port/man/man1/mk.html
Visual representations of hashes are called identicons: https://en.wikipedia.org/wiki/Identicon
GitHub uses one such algorithm to generate the default profile pictures.
One of my favorite demos for this is from francoisbest[1], specifically the stagger variant because it’s just so much fun to watch.
1: https://francoisbest.com/hashvatar?text=f7f05111ddb22b58fdad8bee63a3cd2bcea43398&variant=stagger
Thank you, I opened the article expecting something along the lines of the OpenSSH fingerprint visualization algorithm (see https://www.jfurness.uk/the-drunken-bishop-algorithm/ ) and it didn’t even come close :/
First time I hear the name “Identicons”.
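For reference, that’s the picture ssh-keygen prints when you ask for a fingerprint with -v (any public key file will do):
$ ssh-keygen -lv -f ~/.ssh/id_ed25519.pub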
The others have varying degrees of being specific to a use case, but:
You can actually use g<C-g> for that!
I was going to suggest :w !wc -w, which is still useful bc g<C-g> isn’t bound in Evil’s defaults. The :w !<Cmd> syntax (which TIL doesn’t save before running the command, but diverts the write into the pipe) also generalizes well into other use-cases.
I was about to comment that one could also use M-x count-words when using evil, and saw that it’s bound to g C-g in doom emacs :)
TIL :w !<cmd>
I’m gonna look this up in the help for myself after asking, but what’s the difference between :%!foo and w !<cmd>?
:%!foo replaces the entire buffer with the output of piping the buffer contents to foo. :w !foo just pipes the buffer contents to foo and displays the output, leaving the buffer unchanged.
Occasionally there’s an email or reply that leads to an interesting discussion (on here, HN, or some issue tracker), but it’s a rare thing. I engage in a parasocial sort of conversation as I read books, in that they’ve filled a gap for me…
Most back and forths happen AFK. I work in public, so I meet a lot of people and my co-workers aren’t in our bubble. Between the two groups, there are a few folks who ask what I’ve been up to[1]. I’ve found that enthusiasm is contagious, and that anyone with an open ear can share in it. Their background and faculties are less relevant than their willingness to riff on ideas and their interpretations, to ask solid questions, and to deepen their understanding over time. It’s great to realize I’ve met someone experienced, but all the knowledge in the world doesn’t make up for chemistry, and really, it’s a pleasure to be earnestly engaged with on any level. I can only hope to have cultivated habits that give back to these people[2].
1: I haven’t kept up with everyone like I should.
2: I can “only hope” as I am (broadly) oblivious to my own manners, and ignorant of the desires of others; not everyone wants their attention back, and some may need repaying in other ways or at other times[3].
3: Boy do these footnotes contrast!
I had no idea that there was a FAQ for the 500-mile email story. Fascinating extra background for that story.
Thanks for sharing! I was just explaining the folklore to a friend yesterday and, unable to recall the details, said I ought to have a refresher– what a coincidence that you’ve turned this out of the pile >u<
🤯 very cool! I don’t need it, but I’ll try to find an excuse to use the idea somewhere!
I am not particularly fond of remapping caps lock to behave like ctrl because that still suffers from the problem that key combinations like C-x, C-a require pressing both the modifier key and the modified key with the left hand fingers.
FWIW, I’ve recently reached the next stage of my laptop’s keyboard bliss when I remapped:
CapsLock -> Esc
z -> Ctrl
x -> Meta
c -> Alt
, -> Alt
. -> Meta
/ -> Ctrl
That is, just move the modifiers one row up (key acts as a modifier when held, and as a key when pressed). I use kanata to do the remapping.
One thing that surprised me when playing with using normal keys as modifiers is that on some keyboards that is impossible. On some keyboards modifier keys are electronically different from normal keys and the keyboard won’t emit key down events if you’re already holding down another normal key. I really like the idea with spacemacs and devilmode that you don’t have to hold two keys at once at all: the entire interaction is just a linear stream of keypresses. It’s just very easy to reason about.
This is called key rollover, how many keys you can press and release in order and get all the events properly recognised. Modifiers usually have to support pretty large rollover; alphanumerics are numerous and often put into a matrix of connections where you cannot distinguish all combos, but usually two keys at once work anyway (fast typing requires pretty high rollover support for at least some letter sequences). But three letter keys not adjacent in any normal word… low-end keyboards might be unhappy with this.
It used to be a big problem for games, where you’d have two people using opposite ends of the keyboard in a split-screen multiplayer (I guess computers are cheap enough now that this doesn’t happen so much?). I remember having a keyboard where the scanning was clearly left-to-right, because the person using the left side of the keyboard could prevent the person using the right from performing critical actions by pressing too many keys.
Ok, I am test driving this in VS Code and I think I love this very much, especially in combination with https://marketplace.visualstudio.com/items?itemName=VSpaceCode.whichkey.
However, the devil can do this:
Is there some VS Code extension which allows such repeatable keys?
Thanks for sharing Kanata! I’ve been holding off on investing in KMonad because several of my devices have unusual architectures and I didn’t find GHC easy to bootstrap, but Rust will be easier to go all in on– the config is more valuable the more consistently I can use it.
Just six hours ago (and about as long past bedtime), I was learning from sparse Reddit threads where to get OVMF in GNU Guix. Like the author, I’ve been drawn towards increasing amounts of both immutability and personalization (ie. exoticism), one locus being Erase Your Darlings. Just looking at their config I can feel the hours of persistent, stubborn wrangling.
Eventually, many hours later, […]
I had my own run-ins with EFI while setting up Secure Boot. Something (some daemon, utility, firmware, or package-script) keeps flagging efivars as immutable, and the extended attribute has escaped me, for several hours, on multiple occasions.
What I wish I had tried earlier was to just boot into EFI shell because you can edit EFI vars much faster there
Does somebody expect me to remember those magic [GRUB] spells? What [should you] do next to boot into Linux?
I hardly remember commands for the GRUB & EFI shells. For a long time I didn’t know you could scroll in EFI shells, and the output of help would scroll off-screen. Using my phone sucked, a VM wasn’t always sufficient (or simple: see TFA), and I really appreciate my one KVM.
Although it’s later in the boot process, I want to give props to the Guix team here: their Guile Scheme initramfs scripts error out into a REPL, where you can import all the same (gnu build [...]) utilities the script was using. Don’t think you can scroll, but it’s refreshing to have all those tools and a backtrace at hand as soon as something goes wrong.
I don’t even want to think how many hours I lost because of this. My actual problem was more nuanced […]
Rant: Guix doesn’t yet support key-files for LUKS partitions so I made my own mapped-device-kind that does. Other code filters mapped-devices for LUKS partitions by checking their types, and proceeded to miss mine. To avoid forking Guix (which I guess I could do) or subverting its device management entirely, I had to mutate the existing type to add special cases for my devices.
Another pain-point: Having achieved darling erasure with BTRFS I’m now pursuing root on ZFS, which has a… tumultuous history with Guix. I’ve done all the necessary wrangling to mount the datasets in the initramfs, but Guix really wants a path to a mount-able root block-device that it can watch for. I don’t want to write (and maintain) my own fork of the initramfs, or stoop to putting root on a ZVOL just to satisfy that requirement, so I’m working on whatever cheap hacks are necessary to get around the existing code.
Which is somehow to say: it’s always like this. I can’t suitably articulate right now why I persist in having everything just so, but I learn a heck of a lot about both the underlying systems and the towers built atop them by being so stubborn. Software infrastructure was how I got into programming in the first place, and will always be a blessing and a curse. Heaven help those who rely on my homelab.
I find this post valuable informationally and personally. Thanks joonas for taking the time to write it, and to bsandro for sharing it.
Edit: Ugh, going to need to mutate / advise more functions.
“I didn’t identify with it for a long time; not until everyone else had been getting an earful for years. I was just trying to get my computer to work, and guess I picked it up along the way. Couldn’t get everything just right without a lil’ scripting. I thought, does this (ie. Bash) really count? How do people use their computers (ahem, Linux) without programming? But I’m well past any plausible stage of denial now :p”
A few weeks ago on macOS I tried installing Nix. I saw it created its own volume. “Oh gosh” I thought “is this going to permanently allocate some part of my small 250GB SSD to itself?” Imagine my surprise when I looked at the volume manager and saw both the main partition & Nix volume had the same max size of 250GB. It was at that moment I realized filesystems had in fact advanced since the early 2000s and statically-allocated exclusive contiguous partitions weren’t actually the way things had to be done anymore. Logical volumes can coexist in the same partition, using only however much space they need to use! This led me to discover the FOSS filesystem that has this feature (and is included in the Linux kernel), BTRFS.
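On the macOS side the sharing is visible directly: every APFS volume in a container reports the container’s free space as its own, and the layout shows up with
$ diskutil apfs list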
I asked on Unix SE about installing different distros as subvolumes of a single BTRFS partition so they only take up as much space as they actually need, and you can do it but a lot of distro kernel upgrade workflows don’t account for it (as the author mentions, Windows updates might also have trouble with this). So I ended up using logical volumes instead, which are very well-supported and make partitions easy to manually grow/shrink & ensure you don’t have to worry about contiguous or empty space. So that got me most of the way there. Still, I look forward to a future where you can just set your entire disk (or multiple disks, using logical volumes) as one giant BTRFS partition and install everything into subvolumes so we never again have to worry about partition juggling.
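A minimal sketch of what that “one giant BTRFS partition, everything as subvolumes” setup looks like today (device and subvolume names are placeholders):
$ mkfs.btrfs /dev/sda2
$ mount /dev/sda2 /mnt
$ btrfs subvolume create /mnt/@debian
$ btrfs subvolume create /mnt/@fedora
$ umount /mnt
$ mount -o subvol=@debian /dev/sda2 /mnt    # what each distro's fstab would point at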
The boot menu of Quibble looks as if you took grub, made it HiDPI aware, and added nicer fonts.
Underrated feature, I love when boot code acknowledges that monitors have been manufactured after 1990. I use systemd-boot which I don’t think has this.
I know it’s amazing when I can see my bootloader
The 13-in-1 multiboot image for rapid distro-hopping on the PinePhone is such a BTRFS partition, with a subvolume for each distro’s root and (IIRC) a shared kernel and initramfs.
https://xnux.eu/p-boot-demo/ https://xnux.eu/log/#046
It occurred to me while reading this that equality saturation (https://egraphs-good.github.io/) might be the missing piece that allows generalized macros to compose. A macro implemented with equality saturation could see every expansion step of its neighbors and rewrite based on the specific one(s) it’s looking for.
Whoa, that’s a really interesting idea! I’m not really sure how you’d decide on the “optimal” rewrite – I guess macros would include, in their expansions, how “specific” that expansion is? Or something like that? Definitely something to think about.
When writing macros within Scheme’s syntax-case model, they’re expressed as case-style pattern-matchers over syntax-objects (which themselves appear ideal for translation into e-nodes). In that context, I would posit that optimal extractions from saturated graphs are those which fulfill the earliest possible matches. Hence a match on a left-most set literal would 1) take precedence over the no-match case, 2) can be applied after the test macro expands, and 3) could maybe even propagate transformations of sub-nodes into equivalent expansions where those literals have been eliminated (or not yet expanded into being).
Implementing such a system would be difficult (let alone in Janet without an existing syntax-case to fork), and although I think it addresses the settable example as-given, it’s a rough model. There are still ambiguous cases where one would presumably fall back into depth-first expansion.
The biggest problem is with that 3rd part, which is kinda out-there. Macarons are effectively expansions of their parent expressions, so they can’t actually contribute transformations of themselves or their sibling arguments that are separable from those parent expansions. Putting that aside (maybe by annotating with source syntax objects / equiv. e-nodes when preserving the transformation would be valid), it would feel kinda cursed to allow a match on a literal which might only exist in superposition (don’t let reason stop you :p).
I guess 2/3 with the fall-back caveat ain’t too bad, but disclaimer: this is way over my head, i hardly grok nondeterminism and look at this with the same awestruck unfamiliarity as µKanren, which i also don’t know nothin about
Neat! Love the return of a painted spritely character (the classic site had so much charm), and this debugger puts others I’ve endured to shame.
As an aside, E keeps popping up as a spring of inspirations, a la… I’m blanking on it, that influential hypothetical language; I’ll comment back when it comes to me. Let’s go with T for now.
Glad you liked the painted Spritely characters. They’ve been making their way back slowly into the new site, but yes, not as front and center as before. But I too really enjoy them. :)
E is definitely cool, and has been a huge influence on Spritely, as is probably obvious. It’s funny you should mention T: yes, the T scheme/lisp indirectly has had a big influence on Spritely also, because Jonathan Rees worked on it, and it both heavily influenced Mark Miller and company’s approach towards treating lexical scope as the foundation for ocaps (fun fact: Jonathan A. Rees and Mark S. Miller went to college together at Yale, and years later went on to work on ocaps independently and came to many of the same technical conclusions without talking to each other!), and also was the predecessor to Jonathan A. Rees’ later Scheme, Scheme48. Jonathan Rees’ “security kernel” dissertation, which showed that a pure scheme could be an ocap-safe programming language environment, directly enabled Goblins to happen. (Speaking of weird short programming language names, the code for that security kernel, W7, is available, but few people know about it. It’s amazing how compact and beautiful it is, because Scheme48 already enabled it to be so.)
Addendum: was probably thinking of ISWIM.
I’ve had an idea kicking around my brain for a while now of a way to implement a more powerful and flexible macro system than defmacro for languages with lots of parentheses, but I’ve been too busy working on a book to actually try to implement it. But the book is out now! So I’m going to try to mock it up in Janet and see how it feels in practice, and then (hopefully) write a blog post if it goes well.
That sounds awesome! I read a lot of the literature on syntax-case last month[1], have been loving reading Janet for Mortals in my downtime, and haven’t reached your chapter on macros yet but think it’s a particularly interesting language for prototyping your idea because of eg. the behavior you discovered in “Making a Game Pt. 3” (which isn’t necessarily portable). I’d be interested in any ideas you have in this area (even if they’re not merit-ful or focused on hygiene), and will be looking forward to the post c:
[1]: Not all of which was correct: there is a false ambiguity on the surface, and true undefined “implementation-dependant” behavior deep in the bowels of the spec.
Hey thanks! Glad you’re liking the book. Here’s a quick sketch of my macro idea: https://github.com/ianthehenry/macaroni
I can’t find any prior art for this but I have no idea what to search for or call it.
Spent some time considering prior art, and the closest I could get was what Guile calls Variable Transformers.
In eg. Common Lisp and Elisp, Generalized Variables can be extended by defining a dedicated macro which set! or setf finds and invokes (via a global map, symbol properties, etc).
In Guile, you can create a macro which pattern-matches on the semantics of its call-site:
use as an applicative operator
use as an identifier
use as the second argument of a set! form
Because it needs to be used as an identifier it can’t define set!-able sexps like Common Lisp or Elisp would allow, but neither can macaroni. It’s not a first class object, short of being a normal function under the hood. Finally it’s handicapped by only being passed its parent’s form in the third situation, essentially still at set!’s discretion (not sure about the exact mechanism in use). Definitely the only other example I could find of an identifier-bound macro receiving the form it is invoked within.
Stayed up too late to think any more, but love the idea, that’s awesome
Hey thanks! I had seen something very similar to this in Racket before – I guess it’s a general scheme thing.
You actually can make settable sexps with the macaroni approach, by returning a first-class macaron that checks the context it appears in – https://github.com/ianthehenry/macaroni/blob/master/test/settable.janet
(The downside explained there tells me that I should spend some more time thinking about controlling the order of expansion… which would also make it easier to define infix operators with custom precedence…)
Ooo, I can see how I’d have missed that on the way out, nice! Found Racket’s Assignment Transformers, and they (bless the docs!) explain that they are indeed just sugar over a variant of the “set! + symbol-props” approach. I wonder if this approach (ie. returning an anonymous or gensym’ed identifier macro) could be retrofitted into that model, but it feels clear to me that macarons more cleanly solve and generalize what has always been a messy situation in Lisp implementations.
As another exploratory question, are we limited (in practice or theory) to the immediate context of the parent form? Aside from that, dispatching on grandparent or cousin forms feels kinda cursed. I wonder what use cases pull sibling sexps into play.
Funny how having to dispatch on the set literal kinda resembles the limitations of a system invoked by set itself, but it’s progress! Re: expansion order, my gut feeling is that they ought to be compatible with other extensions that don’t explicitly macro expand their arguments (ie. until the set form is established), but haven’t really dug into how janet/this all works and need more coffee first
Theoretically you can rewrite forms anywhere, but I’m having a hard time coming up with a situation where you’d want to. But here’s a nonsensical “grandparent” macaron:
You could also write a recursive macaron that rewrites an arbitrarily distant ancestor.
Here’s an example of a potentially interesting “cousin” macaron:
Is that useful? I dunno, maybe? It’s like !$ in shell, but copies the element at the exact previous position.
Actually $! is even easier to write:
You could also rewrite both expressions into a single gensym’d let expression so that the argument only actually gets evaluated once.
I think this is pretty interesting? Maybe possibly even useful, to add debugging output or something?
At the repl Janet assigns _ to the result of the previous expression – you could do that in arbitrary code; implicitly surrounding the previous expression with (def _ ...) if you use _ in an expression. Hmm. Not super useful…
Yeah, but it allows you to “extend” the behavior of set without set having to know anything about your custom extension (or even knowing that it is itself extensible!). But the evaluation order is problematic. Hmm.
I find that the header file problem is one that tup solves incredibly elegantly. It intercepts filesystem calls, and makes any rule depend on all the files that the subprocess accesses. Solves headers in an incredibly generic way, and works without requiring hacks like -MMD.
Not sure if the author is here, but if you are, any plans to support something like that?
So the “proper” way is to intercept the filesystem calls in a non-portable manner and depend on anything the program opens without regard for whether it affects the output or not (like, say, translations of messages for diagnostics). While explicitly asking the preprocessor for an accurate list of headers that it reads is a hack?
The problem with the second option is that it isn’t portable between languages or even compilers. Sure, both GCC and clang implement it, but there isn’t really a standard output format other than a makefile, which isn’t really ideal if you want to use anything that isn’t make.
It’s an unfortunate format, but it’s set in stone by now, and won’t break. It has become a de facto narrow waist with at least 2 emitters:
Clang
GCC
and 2 consumers:
Make itself
Ninja has a very nice and efficient gcc -M parser
Basically it’s an economic fact that this format will persist, and it certainly works. I never liked doing anything with it in GNU make because it composes poorly with other Make features, but in Ninja it’s just fine. I’m sure there are many other non-Make systems that parse it by now too.
That’s a fair point, also didn’t know Ninja supported it but it makes sense. I wonder if other languages support something similar to allow for this kind of thing, though many modern languages just sidestep the issue altogether by making the compiler take care of incremental compilation.
Most tools could probably read the -M output format and understand it quite easily. It doesn’t use most of what could show up in a Makefile - it only uses single-line “target: source1 source2” rules with no commands, no variables, etc. I imagine if someone wanted to come up with a universal format, it wouldn’t be far off from what’s already there.
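Concretely, the whole format is one rule per object file, which GCC and Clang will emit as a side effect of compilation (the headers listed are just whatever the source happens to include):
$ cc -MMD -c main.c    # writes main.o, plus main.d next to it
$ cat main.d
main.o: main.c util.h config.h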
But.. don’t you want to update your program when diagnostic messages are changed? The FUSE mount doesn’t grab eg. library and system locales from outside the project root, so it only affects the resources of the project being built[1]. Heaven forbid you’re bisecting a branch for a change that is, for reasonable or cursed reasons alike, descended from one of those files..
For those interested, I’ve pitched tup and mused about this in a previous comment here.
[1]: Provided you don’t vendor all your dependencies into the repo, which I guess applies to node_modules! Idk off the top of my head if there’s a way to exclude a subdirectory for this specific situation, or whether symlinks would work for controlling the mechanism.
Edit: Oh, it’s u/borisk again! I really appreciated your response last time this came up and hope you’re doin’ great c:
Edit 2: Oh, and you work on a build system! I’ll check it out sometime ^u^
I originally started Knit with the intention of supporting automatic dependency discovery using ptrace. I experimented with this with a tool called xkvt, which uses ptrace to run a list of commands and can generate a Knitfile that expresses the dependencies. However, I think this method is unfortunately more of a hack compared to -MMD because ptrace is non-portable (not well supported/documented on macOS and non-existent on Windows) and has a lot of complexity for tracing multithreaded processes. A Fuse-based approach like the one used by Tup is similar (maybe more reliable), but requires Fuse (a kernel extension), and also has the negative that automatic dependency discovery can sometimes include dependencies that you don’t really want. When I tried to use Tup for a Chisel project I ran into problems because I was invoking the Scala build tool which generated a bunch of temporary files that Tup required to be explicitly listed as a result.
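For a taste of what that discovery looks like by hand (Linux-only, and without any of xkvt’s bookkeeping), strace will happily dump every file a compile touches:
$ strace -f -e trace=openat -o trace.log -- cc -c main.c
$ grep -o '"[^"]*\.h"' trace.log | sort -u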
I think if Knit ever has decent support for an automatic dependency approach, it would be via a separate tool or extension rather than directly baked into Knit by default.
Cool! I’ve always thought about running a dynamic site based on Haunt, which doesn’t quite fit into this subset of Scheme, but the example has a very similar structure. Love the idea, and the sleek deployment method; I haven’t got similar ergonomics for my own deploys yet….
Haven’t actually posted anything on my site (so I haven’t crafted CSS for it or anything), but I’ve collected a short list of homages and related posts (including a link to commentary on inspirations) here:
https://www.illucid.net/posts/homages-to-aphyrs-technical-interview-series.html
Left out a repo that translates the same Queen’s post into Rust because it wasn’t in narrative form, which felt important to me at the time, but idk, that’s cool too and available here:
https://github.com/insou22/typing-the-technical-interview-rust
The format is fun, I like how people adapt the themes from Aphyr’s original blogs. It’s a bit of a colorful show and tell without being too dry about the subject matter. Props to collecting all these formats into a repository!
Without paying too much attention to it, I chalked the recent arguments up (as u/scraps does) to the implicit / missing context of Casey’s eg. game dev background (where most code really is performance critical).
This conversation pulls the argument out of that framework, recognizing that there is a place in practically all software for a performance-aware approach, while tactfully digging back at an equally dogmatic dismissal of other concerns (those which, as Casey may justifiably say, “are beyond the scope of this course”).
Loving said course, and glad to see these two tribal icons able to exchange ideas and reconcile these tensions into conscious tradeoffs for their audiences (and those who will inherit future tribal knowledge) to consider.