Somewhat confusingly, it would seem that conservative stack scanning has garnered the acronym “CSS” inside V8. What does CSS have to do with GC? I ask. I know the answer, but my brain keeps asking the question.
What a stupid acronym, inside a browser component of all things! They should have called it Heuristic Tracing of Memory Links, or Heuristic Tracing of Transient Pointers.
Reminds me of how compilers often talk about a “scheduler” that has nothing to do with the runtime notion of scheduling threads/tasks/etc. Pretty sure I’ve run into other examples of GC terminology overloading acronyms or terms as well, but I can’t recall specifics off hand.
Yes, you’re right. Brain not working first thing in the morning. There is some interesting work unifying instruction scheduling, register allocation, and instruction selection, but it’s not computationally feasible yet.
https://unison-code.github.io/ is the register allocation and instruction scheduling. Instruction selection I’m not sure is feasible beyond single-digit instruction sequences because I think equality saturation is a more promising overall approach to instruction selection, and extraction is already hard.
I wouldn’t be surprised if instruction scheduling would predate the other usage. But I think this is not the same as the CSS example. It’s a common word and both usages are correct in their respective niche. CSS as an abbreviation is somewhat arbitrary, they could have chosen slightly different words to avoid it, but I guess it was deliberate.
I’m doing Uiua this year, did BQN last year and J the year before that. I’d recommend BQN or Uiua over others since they’re more modern and easier to get into. The most difficult thing I found with J was understanding trains (hook and fork) and combinators. BQN has those too but IMO the design is much cleaner and they’re really well documented. Uiua sidesteps this completely with the stack model: there are no explicit combinators, just different ways of manipulating the stack. So far I’m really enjoying Uiua.
I apparently lucked into doing it the way that doesn’t run into any of the problems other people had, on a whim. (Spoiler below, stop if you don’t want them, but it’s day 1 so…)
For part 1, rather than doing a multi-match and extracting the first and last matches in the list, I did a match against the input and a match against the reversed input, which is an old trick.
For part 2, I kept the same structure rather than rewrite, which meant that I matched the reversed string against /\d|eno|owt|eerht|ruof|evif|xis|neves|thgie|enin/, and then re-reversed the capture before passing it through a string-to-num map.
And it turns out that that totally sidesteps the problem of “wait, how am I supposed to get 21 out of xxtwonexx?”
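A Python sketch of that trick (the original poster’s language isn’t stated, and the helper names here are mine): match forward for the first digit, then match the reversed line against a pattern of reversed spellings and re-reverse the capture for the last digit.

```python
import re

WORDS = ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
FWD = re.compile(r"\d|" + "|".join(WORDS))
# Match against the reversed string with reversed word spellings,
# then re-reverse the capture before mapping it to a digit.
REV = re.compile(r"\d|" + "|".join(w[::-1] for w in WORDS))
TO_NUM = {w: str(i) for i, w in enumerate(WORDS, 1)}

def calibration(line: str) -> int:
    first = FWD.search(line).group()
    last = REV.search(line[::-1]).group()[::-1]
    digit = lambda m: m if m.isdigit() else TO_NUM[m]
    return int(digit(first) + digit(last))
```

Because the reverse pass scans right-to-left, overlaps like “twone” resolve themselves: the forward pass sees “two” first, the reverse pass sees “one” first.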
For part 1, rather than doing a multi-match and extracting the first and last matches in the list, I did a match against the input and a match against the reversed input, which is an old trick.
I took a similar approach. I’m using C++, so the natural solution was to use reverse iterators .rbegin() and .rend() that iterate through the elements of a container in reverse order. Rather than use a regex—which seemed like overkill—for part two, I just had an array of digit names I looped through and performed the appropriate search and chose the earliest one:
for (int i = 0; i < 10; i++) {
    auto it = std::search(line.begin(), line.end(), digit_names[i].begin(), digit_names[i].end());
    if (it <= first_digit_name_iter) {
        first_digit_name_iter = it;
        // ...
And in reverse:
for (int i = 0; i < 10; i++) {
    auto it = std::search(line.rbegin(), line.rend(), digit_names[i].rbegin(), digit_names[i].rend());
    if (it <= last_digit_name_iter) {
        last_digit_name_iter = it;
        // ...
My figuring on what is and isn’t “overkill” is: AoC is ranked by when you submit your solution, so that’s time to write the code plus time to run it. If something is really the wrong tool, the challenge will prove it by making your solution take an hour, or a terabyte of RAM, to run. But if I’m using a language where regexes are “right there” and they make my solution take 100ms instead of 10ms, I’m not bothered.
I like AOC because everyone can have their own goal! I’m impressed by people who can chase the leaderboard. I always personally aim for the lowest possible latency. Managed to get both parts today in under 120 microseconds including file reading and parsing.
Maybe a little bit, but it’s a recurring theme in AoC that you have to implement the spec as written, but not the spec as you think it means on a first read.
I think there was a rash of solutions in the early days of Dec 2022 where people were oohing and aahing over that current generation of LLMs solving the problems instantly.
It died down quite a bit as the difficulty ramped up.
Oh yeah, I got one very tedious bit of slice manipulation handed to me by Copilot, but for the rest it’s been mostly saving me from typing debug output and the like.
Guilty as charged. I messed around with ChatGPT on the first few problems last year. That was right after it came out, and it was pretty amazing how fast it could come up with a (typically slightly buggy) solution.
The difficulty, IMO, is that the problematic lines aren’t in the sample.
I did overlapping regexp matches. It was easy once I caught why my first attempt didn’t work. Another solution would be to just do a search with indexOf and lastIndexOf for each expected word, but you have to be careful to sort the results.
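The overlapping-matches approach can be sketched in Python with a zero-width lookahead group, so matches like the ones hidden in “twone” don’t consume each other (names here are mine):

```python
import re

WORDS = {"one": "1", "two": "2", "three": "3", "four": "4", "five": "5",
         "six": "6", "seven": "7", "eight": "8", "nine": "9"}
# The lookahead (?=...) matches at every position without consuming
# characters, so "twone" yields both "two" and "one".
PAT = re.compile(r"(?=(\d|" + "|".join(WORDS) + r"))")

def digits(line: str) -> list[str]:
    return [WORDS.get(m, m) for m in PAT.findall(line)]
```

Take the first and last elements of the result for the two digits; the list is already in left-to-right order, so no extra sorting is needed.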
There’s a subtle hint, because the first sample does include a case where there’s only one digit (the “first” and “last” match are the same, or you could say they overlap completely). When you get the part 2 spec you have an opportunity to ask yourself “hmm, what changes about overlaps now that we’re matching strings of more than one character?”. Or at least it gives you a good first place to look when things go wrong.
Apparently some people tried to solve part 2 using substitution (replace words with digits and then feed that into the solution for part 1), which also suffers from problems with overlaps, but in a way that’s harder to dig yourself out of.
My solution (which wasn’t using regex) passed all of the sample inputs and some other inputs I had come up with, but failed to produce the correct solution. I was at the point of going through the input file line-by-line in search of what I presumed to be an edge case I wasn’t handling.
Thankfully I had my friend run my input against his program and show me the results for each line, and that helped me figure out that I had misunderstood how the replacements worked when looking for the last digit.
Once I actually understood the problem it was quite simple to write a correct solution, but it took a while to get there.
I know NixOS has its problems, but it’s a real treat to see an OS release announcement and know there’s no risk in upgrading. OS rollbacks really change the upgrade risk calculus. Thanks, NixOS contributors!
I was going to reply that there is no risk at all, for all practical purposes, because of how easy it is to boot into previous generations. Then I upgraded and found my UEFI boot config was borked! At least I got a good story out of it, and will never be tempted to post any foolish replies for the rest of my life.
There is currently a bug-fix patch for ZFS you should definitely use, though I don’t think it’ll murder old data; it just messes with new data in some edge cases (there is a very good writeup on the ZFS release page on GitHub).
Out of interest, can you mention or link to some of these problems you’re referring to? I’m getting into NixOS myself and I wonder if there are obvious issues with it I might be missing.
I meant in the general sense that nothing is perfect. If you’re interested in NixOS, I strongly recommend just taking the plunge, even if your first foray is in a VM.
I use DBeaver (Community Edition) every day to interact with multiple PgSQL DBs: running queries, inspecting schemas, saving queries. The transactional mode is handy when I have to operate on a production DB. It is such a handy tool. (Not specific to PgSQL, you can use other supported databases as well)
Annoying DBeaver quirk: it automatically converts dates to the local time zone. If you are working on a distributed team this can lead to people freaking out. I recommend turning that setting off.
I just downloaded it to check it out, and I honestly cannot see what’s wrong with it. It’s your bog-standard IDE-type application. I’m using it on Gnome and it looks fine.
I think this is the crux of a problem that gets mentioned in passing a lot on LWN: The “old guard” are mostly people who enjoy(ed) doing the work, being a part of something cool, solving difficult problems, and so on. The “new generation” are mostly paid to submit patches, either literally with money, or indirectly with a resume line that will get them a better job. The two groups have fundamentally different philosophies about kernel work, don’t seem to really understand each other’s motivations, and frequently hurt each other’s feelings.
All the different developers getting paid to write Linux drivers and whatnot presumably adds up to millions of dollars, but no one gets paid to spend lots of time doing proper maintenance.
Obsidian. Even though I don’t use it (entirely) as its authors intended, it’s been my go-to personal writing tool for about two years now. It’s the last in a long series of tools, some of them homebrew, that spanned twenty years or so. It doesn’t do everything I want, but it does everything I need, and it’s become indispensable to me.
tmux. It’s just not the same without it :-).
Emacs. I no longer use it as my sole programmer’s editor, but it’s still the one I use most often and find most helpful; the more I use VS Code, the more I think Emacs is the sanest and friendliest of them all.
Syncthing. I switch between computers a lot, and sometimes I’m gone from home for days at a time. It’s really useful to me.
Audacious, because I have a huge music library from back when I came dangerously close to disappointing my parents and becoming a musician, and none of it sounds good without those beautiful Winamp skins. Also it’s unbrowsable on UIs written for the Spotify age :-).
clang because it makes compilers just sliiiightly less insane.
Honorable mentions go to FreeBSD, Slackware, CCS64, and WindowMaker. I no longer use any of them on a daily basis but they were my gateway to programming. I wouldn’t be where I am without them.
Oh and the lobste.rs backend, without which I couldn’t bore any of you nerds!
+1 for Syncthing! I have it on all my computers at home and it’s fantastic to capture ideas and do research in my laptop, and then just move to the desktop for the “serious programming” and it’s all there!
This is such an absurd workhorse for me at work. Being able to take notes quickly and finding them again is such a huge feature and it’s amazing that it took so long for us to get to something that really works.
I have a daily-notes setup that, on creation, sucks my calendar in and templates it into a nice running order for the day, together with due and scheduled tasks from my todo plugin.
Honestly, Homebrew may have been the biggest revolution in how I use my computer professionally, getting rid of the brokenness that was MacPorts, Fink, and the like. It’s still unsurpassed in how pragmatically it gets rid of such a huge class of problems. (No, Nix doesn’t come close.)
Ah, the post of love and appreciation 😌️ Besides many rather popular items (Firefox, Mutt, Vim, Perl, GIMP, OpenBSD, …) that we might be taking for granted, alas, I want to point out a few smaller, less known, or less mentioned software: Bound, motî, notmuch, Anki, mpv and yt-dlp, mbsync, fzf.vim and Ctags, ShellCheck, Open GPX Tracker. I’ll stop now.
In the past I sent personal emails to the authors of some of this software to thank them for their work and to tell them how exactly their software made my life better. I encourage you to do so too. You probably know that owners of services and maintainers of software hear from users rather more often when things break and don’t work as wanted (especially DNS admins /jk) and less often when things just work as expected. Let’s tilt the scales and send a quick note of appreciation to your software or web-service authors, shall we?
Let us know how the orange site responses compare to the lobste.rs ones!
I totally share the sentiment. It’s essentially a beautiful and customised offline version of Wiktionary on your iPhone. I’m glad you’ve found it useful.
The author has been very nice in communication too. We exchanged some bug reports and ideas on several occasions in the past.
Roc isn’t even at a numbered version yet (i.e. it’s pre v0.0.1), so it’s still in the experimentation phase. The biggest area of interest at the moment seems to be backend web development, but there’s also discussion around gamedev and scientific computing uses.
Recently I saw something called Garnix, and I think that, just as CDK is a joy to use (not sure if this is controversial, but it’s been pretty great for us), there could be something to this approach:
Use one of the most popular programming languages in the world
Type everything strongly
Use imperative statements to derive a declarative setup
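The three points above can be sketched even in Python (all names here are invented for illustration): ordinary typed, imperative code whose output is the declarative setup, which is roughly the trick CDK-style tools pull.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical example: imperative, type-checked code that *derives*
# a declarative config, instead of hand-writing JSON/YAML.
@dataclass
class Service:
    name: str
    port: int

def make_stack(env: str) -> dict:
    services = [Service("web", 8080)]
    if env == "prod":  # ordinary control flow decides what gets declared
        services.append(Service("metrics", 9090))
    return {"services": [asdict(s) for s in services]}

print(json.dumps(make_stack("prod"), indent=2))
```

The type checker catches mistakes at authoring time, while the consumer of the emitted document still sees a plain declarative artifact.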
It’s an interesting idea, but if you have “Unavailable commands:” lying there without any context, that’s not exactly good usability because it’s impossible to discover the state machine behind it without trying it out.
The shell has poor discoverability as it is, which means many people coming to the command will have read some documentation about it. But still, from a pure UX perspective, it’s bad practice to put people into a state without any affordance for how to get to a different state.
Since your comment is overall negative in tone, I want to emphasize that moving some commands to “Unavailable commands:” without further explanation still makes for more usable --help output than otherwise. If it were all you had time to implement, it would be better than the common practice of listing available and unavailable commands together indiscriminately.
But yes, the problem you describe of being unsure why a command is unavailable does imply the possibility of further improvements to --help output.
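One such improvement would be attaching the missing precondition to each unavailable command inline. A minimal sketch, with an invented state model and names (not any real CLI’s API):

```python
def render_help(commands: dict[str, dict], state: set[str]) -> str:
    """commands maps name -> {"summary": str, "needs": set of state flags}."""
    avail, unavail = [], []
    for name, meta in commands.items():
        missing = meta["needs"] - state
        if missing:
            # Say *why* the command is unavailable, not just that it is.
            unavail.append(f"  {name:8} (requires: {', '.join(sorted(missing))})")
        else:
            avail.append(f"  {name:8} {meta['summary']}")
    out = ["Available commands:", *avail]
    if unavail:
        out += ["", "Unavailable commands:", *unavail]
    return "\n".join(out)
```

That keeps the grouping benefit while giving the user a hint about how to reach the state where the command unlocks.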
I was thinking the same thing.. I mean, you could dump a state transition diagram/dependency graph in --help output, but I’m not sure if that’s.. helpful enough. ;-) One does not usually see such even in more detailed documentation like man pages.
Similarly, even his guiding examples of fd/rg have pros & cons. Users must learn that honoring .gitignore is the default. In the case of rg this is right at the top of a 1000-line --help output (which also has pros & cons: right at the top, but so long it’s easy to miss; inverse text or color embellishment might help, such as user-config-driven color choice for markup, which is a different kind of “missed opportunity based on dynamic state”).
In the case of fd, since finds, unlike greps, mostly operate on file names/FS metadata rather than file contents, it’s a more dubious and/or surprising default. (So, while hex-coded, hash-named .pack files might make little sense to search for, a substring of “HEAD”, finding permission problems inside .git, or executable files explicitly gitignored might make a lot of sense.) I say this bit only really as a teaser for how much this kind of context-driven stuff could be ripe for abuse, leading to the issues you highlight, from a use-case-variability perspective.
FWIW, I would say, as a document usability feature, that hovernote-only text is tricky. I didn’t even realize there was hidden text to hover over until you just said something. You have a lot of good text there - I particularly liked [3].
Okay, I think I get it. This is for IDEs – integrated development environments – specifically, where various background services require orchestration.
And, to be honest, that sort of environment is inherently more complex than one where the code editor stands alone and the code’s functionality is hermetically testable. I don’t think Organist is bad, but I’m skeptical of a setup which requires it.
At the end of the day, if it fits into a Nix flake, then I suppose I don’t care; I don’t have to actually know what’s inside a flake in order to use it. But I can’t imagine using this instead of a ten-line bash script which runs git pull, hooks direnv, and loads my SSH keys.
specifically, where various background services require orchestration
I’ve had to read comments going in all directions on the Nix services RFC, so to see something like this just pull that out of a hat (admittedly in a butchered way) is kind of amazing.
It’s not just IDEs. Any serious development is going to need a database etc., for which people are now told to “use a container”.
What is necessary is to have all common services defined somewhere because I’m not going to figure out the correct commands to initialize and start Postgres every time from scratch.
I read the entire thing and I’m not sure what it is doing or what we should be doing.
The only command in there is nix flake init -t github:nickel-lang/organist, but that’s, I guess, how you set up an Organist project, not how you use it? Then you use it regularly with nix develop?
Update: I think if you read the README here, it becomes clear: https://github.com/nickel-lang/organist Still not really clear whether or how it’ll fill many of my development needs.
Ncl
I browsed Nickel documentation previously but still, constructs like this leave me rather mystified:
services.minio = nix-s%"
%{organist.import_nix "nixpkgs#minio"}/bin/minio server --address :9000 ./.minio-data
"%
What is happening here with the %s?
I’d say in general that Nickel may be a great idea, and it looks less off-putting than Nixlang, but it’s still very far from something a large audience of people can use.
Recently I saw Garn, which is a “TypeScript eats the entire world” approach to this problem. I’m also very sceptical of it as an abstraction layer, but the choice of language does look like it could be a winner. It reminds me a bit of CDK/TypeScript, which is a weird imperative/declarative hybrid alternative to the standard terrible DevOps ways of defining infrastructure.
My impressions as well. I’m not sure if this competes with devenv, devbox, and others, or is some completely different thing. If the former, what does it bring over the other tools?
Similar thoughts. Even as a Nix user I’m confused about some of the syntax I’m unfamiliar with, and generally about what Organist is trying to be.
If it’s a layer above Nix flakes’ dev-shell configuration, like some of the other projects, it seems like a hard sell: if you can do Nickel, you probably can do Nix already, and introducing an extra layer is neither here nor there. If you go JSON/YAML, it will be dumbed down but easier to consume for non-Nixers; if you go Nix, you are 100% seamless with Nix.
BTW, I’m casually lurking around Nickel and I’m still confused w.r.t. the level of interoperability with Nix. Nickel presents itself as a Nix-like “configuration language”, which means … it can’t really do some of the things Nix does? Or can it? Can it transpile to Nix or something?
My take is that yes, it’s competing with those tools, but in a (nearly) native Nix way; nearly, because it depends on Nickel tooling, but the generated flake pulls that in automatically so there’s nothing else to install.
At work I am using Devenv mostly for process support (which ironically I don’t need any more) and it fits the bill, but IS two things to install before team members can start developing (plus direnv). This would only be one thing to install.
At home I run NixOS and just use a flake for my dependencies but that doesn’t launch any services so I am kind of keen on using organist if I ever need that.
My take is that yes, it’s competing with those tools, but in a (nearly) native Nix way; nearly, because it depends on Nickel tooling, but the generated flake pulls that in automatically so there’s nothing else to install.
It’s very cool that this works so you can have your flake contents defined in some other language entirely and don’t have to think about it (if it works).
Got sucked into the whole pino thing and just wow: https://100r.co/site/philosophy.html
Yeah, I love the monochrome aesthetic and the playful communication style.
I’ve usually heard that qualified as “instruction scheduler”…
…unless you’re referring to some entirely different scheduler, of course.
No, it’s the instruction scheduler, but I’ve not often heard it qualified as such, especially not in the code base for the Go compiler IIRC.
In LLVM, it’s usually referred to as ISel.
Instruction selection and instruction scheduling are generally taken to be different tasks.
Naming is hard but only if you’re bad at it.
Yo, what’s an array programming language I should try for this? BQN? Uiua? Something else entirely?
How about Rob Pike’s Ivy?
I thought I could come up with many better names for this but now that I think of it bfcoq is also amazing.
Can someone explain why the so-called “Advent” is starting two days early?
The first Advent is on a Sunday, but Advent calendars commonly start on December 1st.
Yes. This comes from physical Advent calendars, which have one door per December day up to Christmas, with something nice behind each door.
Because it’s easier to implement in cron if we assume it starts at the beginning of December.
A particularly nasty one to start with today!
I just used a regular expression, with the leading group as optional. Means you always pick up the trailing “one” in “twone” first.
I looked for non-overlapping matches and got the right solution. Maybe my input never hit this “twone” edge case, by luck!
A part of me wonders whether the creator went out of his way to guard against common LLM usage.
I can barely read the trite stuff about Elves as it is and I habitually skim all the text. I think that might just be enough obfuscation against LLMs.
SPOILER…
Yes. My implementation passed all the sample tests but returned an incorrect value. Not easy at all.
Yeah, it’s pretty nasty for a day 1.
I feel like it’s only nasty if people approach it trying to fit every problem into a regex-shaped hole.
The way I wrote it was pretty simple.
For a given input string:
And for the last digit, you just do it in reverse, starting from the last character and using ends-with.
Sure it’s probably less performant but it still only took a fraction of a second even with an interpreted language
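Assuming the steps elided above are the usual starts-with checks, that approach might look like this in Python (function names are mine):

```python
WORDS = ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]

def first_digit(s: str) -> int:
    # Walk the string; at each position check for a literal digit or a
    # spelled-out digit word starting there.
    for i, ch in enumerate(s):
        if ch.isdigit():
            return int(ch)
        for n, w in enumerate(WORDS, 1):
            if s.startswith(w, i):
                return n
    raise ValueError("no digit found")

def last_digit(s: str) -> int:
    # Same idea in reverse: shrink the prefix from the right and use
    # endswith instead of startswith.
    for i in range(len(s), 0, -1):
        if s[i - 1].isdigit():
            return int(s[i - 1])
        for n, w in enumerate(WORDS, 1):
            if s.endswith(w, 0, i):
                return n
    raise ValueError("no digit found")
```

Since each position is checked independently, overlaps like “twone” fall out naturally: the forward walk hits “two” first, the backward walk hits “one” first.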
I didn’t touch a regex. There’s a funny edge case that’s not in the text or in the sample input.
I had a very rough time with part 2.
Realistically, there’s no risk in upgrading until there is risk in upgrading and your system gets bricked one way or another.
Using both NixOS and ZFS (for which NixOS has good support) multiplies one’s defense-in-depth, but ultimately I agree there’s always some risk.
I was around some pro-Nix heads and I remember their systems being borked all the time.
How were you writing LaTeX in Obsidian?
https://help.obsidian.md/Editing+and+formatting/Advanced+formatting+syntax#Math
I’ll second DBeaver; I use it daily with both Postgres and Greenplum. My other go-tos are psql on the CLI and psycopg in Python.
From a UI perspective this is one of the worst things I have ever had to see. It’s incredibly off-putting to use.
I counted some five levels of tab bars going on the same screen.
Looks promising! Do you know how it compares to TablePlus?
I don’t know how it compares, but you can simply download the Community Edition and try it for yourself. I highly recommend it; it’s great software.
I don’t get it. Isn’t Linux kernel development a multi-million dollar business?
Obsidian. Even though I don’t use it (entirely) as its authors intended, it’s been my goto personal writing tool for about two years now. It’s the last in a long series of tools, some of them homebrew, that spanned over twenty years or so. It doesn’t do everything I want but it does everything I need and it’s become indispensable to me.
tmux. It’s just not the same without it :-).
Emacs. I no longer use it as my sole programmer’s editor but it’s still the one I use the most often, the most helpful and, the more I use VS Code, the more I think it’s the sanest and friendliest of them all.
Syncthing. I switch between computers a lot, and sometimes I’m gone from home for days at a time. It’s really useful to me.
Audacious, because I have a huge music library from back when I came dangerously close to disappointing my parents and becoming a musician, and none of it sounds good without those beautiful Winamp skins. Also it’s unbrowsable on UIs written for the Spotify age :-).
clang because it makes compilers just sliiiightly less insane.
Honorable mentions go to FreeBSD, Slackware, CCS64, and WindowMaker. I no longer use any of them on a daily basis but they were my gateway to programming. I wouldn’t be where I am without them.
Oh and the lobste.rs backend, without which I couldn’t bore any of you nerds!
+1 for Syncthing! I have it on all my computers at home and it’s fantastic to capture ideas and do research in my laptop, and then just move to the desktop for the “serious programming” and it’s all there!
This is such an absurd workhorse for me at work. Being able to take notes quickly and finding them again is such a huge feature and it’s amazing that it took so long for us to get to something that really works.
I have a daily-notes setup where, on creation, each note pulls in my calendar and templates it into a nice running order for the day, together with due and scheduled tasks from my todo plugin.
Honestly, Homebrew may have been the biggest revolution for how I use my computer professionally, getting rid of the brokenness that was MacPorts, Fink and the likes. It’s still unsurpassed for how pragmatically it gets rid of such a huge class of problems. (No, Nix doesn’t come close.)
Ah, the post of love and appreciation 😌️ Besides many rather popular items (Firefox, Mutt, Vim, Perl, GIMP, OpenBSD, …) that we might be taking for granted, alas, I want to point out a few smaller, less known, or less mentioned software: Bound, motî, notmuch, Anki, mpv and yt-dlp, mbsync, fzf.vim and Ctags, ShellCheck, Open GPX Tracker. I’ll stop now.
In the past I sent personal emails to authors of some of this software to thank them for their work and to tell them how exactly their software made my life better. I encourage you to do so too. You probably know that owners of services and maintainers of software hear from users rather more often when things break and don’t work as wanted (especially DNS admins /jk) and less often when things just work as expected. Let’s tilt the scales and send a quick note of appreciation to your software or web-service author, shall we?
Let us know how the orange site responses compare to Lobsters’!
I love the idea of thanking the maintainers! I just did for one of the projects I love! :-)
Thank you 😌
motî looks stunningly well designed. I have a feeling I’ll be using it a lot, especially as a long-time anki user. Thanks for the recommendation!
I totally share the sentiment. It’s essentially a beautiful and customised offline version of Wiktionary on your iPhone. I’m glad you’ve found it useful.
The author has been very nice in communication too. We exchanged some bug reports and ideas on several occasions in the past.
I loved it when I had an ebook reader with Wiktionary lookup built in (Marvin). That made it possible for me to read books in French and Portuguese.
Marvin has unfortunately gone unmaintained, and I’m not sure yet what is going to replace it.
tl;dw?
I remember really liking Roc when I looked at it a couple of years ago. Is there a killer app or niche yet for the language?
Roc isn’t even at a numbered version yet (i.e. it’s pre v0.0.1), so it’s still in the experimentation phase. The biggest area of interest at the moment seems to be backend web development, but there’s also discussion around gamedev and scientific computing uses.
Recently I saw something called Garnix and I think just like CDK is a joy to use (not sure if this is controversial, but it’s been pretty great for us) there could be something to the approach of:
The website is a little light on information. Can you say more on what it is?
It’s an interesting idea, but if you have “Unavailable commands:” sitting there without any context, that’s not exactly good usability, because it’s impossible to discover the state machine behind it without trying things out.
The shell has poor discoverability as it is, which means many people coming to the command will already have read some documentation about it. Still, from a pure UX perspective, it’s bad practice to put people into a state without any affordance for how to get to a different state.
Who said you can’t have the reason why the command is unavailable printed right next to it?
Well, that’s the least you should be doing here.
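A minimal sketch of what printing the reason next to each unavailable command could look like (hypothetical tool, command names, and reasons; nothing from the article):

```python
# Hypothetical --help renderer that groups unavailable commands
# together with the reason each one is unavailable right now.
def render_help(available, unavailable):
    lines = ["Available commands:"]
    lines += [f"  {name:<10} {desc}" for name, desc in available]
    lines.append("Unavailable commands:")
    lines += [f"  {name:<10} ({reason})" for name, reason in unavailable]
    return "\n".join(lines)

print(render_help(
    available=[("init", "create a new project")],
    unavailable=[("deploy", "run init first"),
                 ("status", "no project found in this directory")],
))
```

The reason string doubles as the affordance: it tells the user which state transition would make the command available.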
Since your comment is overall negative in tone, I want to emphasize that moving some commands to “Unavailable commands:” without further explanation still makes for more usable --help output than otherwise. If that were all you had time to implement, it would be better than the common practice of listing available and unavailable commands together indiscriminately. But yes, the problem you describe of being unsure why a command is unavailable does imply the possibility of further improvements to --help output.
I was thinking the same thing.. I mean, you could dump a state transition diagram/dependency graph in --help output, but I’m not sure if that’s.. helpful enough. ;-) One does not usually see such a thing even in more detailed documentation like man pages. Similarly, even his guiding examples of fd/rg have pros & cons. Users must learn that honoring .gitignore is the default. In the case of rg this is right at the top of a 1000-line --help output (which also has pros & cons: right at the top, but so long it’s easy to miss; inverse text or color embellishment might help, such as user-config-driven color choice for markup, which is a different kind of “missed opportunity based on dynamic state”). In the case of fd, since, unlike greps, finds mostly operate on file names/FS metadata rather than file contents, it’s a more dubious &| surprising default. (So, while hex-coded hash-named .pack files might make little sense to search for, a substring of “HEAD”, finding permission problems inside .git, or executable files explicitly gitignored might make a lot of sense.) I say this bit only really as a teaser for how much this kind of context-driven stuff could be ripe for abuse, leading to the issues you highlight from a use-case variability perspective.
I’m looking forward to seeing the first command line tool sprinkle AI in there to try to learn what it is that you actually want to do. (Please no!)
Yeah, as is I agree information is missing (I mention that on the hovernote 6).
FWIW, I would say, as a document usability feature, that hovernote-only text is tricky. I didn’t even realize there was hidden text to hover over until you just said something. You have a lot of good text there - I particularly liked [3].
Thanks! Yeah, maybe Bringhurst-style margin notes would be better. Or at least also listing hovernotes at the end of the text.
Okay, I think I get it. This is for IDEs – integrated development environments – specifically, where various background services require orchestration.
And, to be honest, that sort of environment is inherently more complex than one where the code editor stands alone and the code’s functionality is hermetically testable. I don’t think Organist is bad, but I’m skeptical of a setup which requires it.
At the end of the day, if it fits into a Nix flake, then I suppose I don’t care; I don’t have to actually know what’s inside a flake in order to use it. But I can’t imagine using this instead of a ten-line bash script which runs git pull, hooks direnv, and loads my SSH keys.
I’ve had to read comments going in all directions with the Nix services RFC, and then to see something like this just pull that out of a hat (admittedly in a butchered way) is kinda amazing.
It’s not just IDEs. Any serious development is going to need a database etc., for which people are now told to “use a container”. What is necessary is to have all common services defined somewhere, because I’m not going to figure out the correct commands to initialize and start Postgres from scratch every time.
Going on a survival trip in the woods with my kids, cycling, and cramming for JLPT N4.
がんばれ! (Good luck!) I’ll also be doing some JLPT practice this weekend. I’ve been working on improving my reading speed in particular.
I read the entire thing and I’m not sure what it is doing or what we should be doing. The only command that’s in there is:
nix flake init -t github:nickel-lang/organist
but that’s, I guess, how you set up an organist project, not how you use it? Then you use it regularly with nix develop?
Update: I think if you read the README here, it becomes clear: https://github.com/nickel-lang/organist Still not really clear whether or how it’ll fill many of my development needs.
I browsed the Nickel documentation previously but still, constructs like this leave me rather mystified: what is happening here with the %s? I’d say in general that Nickel may be a great idea, and it looks less off-putting than Nixlang, but it’s still very far from something a large audience of people can use.
Recently I saw Garn which is a “Typescript eats the entire world” approach to this problem. I’m also very sceptical of it as an abstraction layer, but the choice of language does look like it could be a winner. It reminds me a bit of CDK/Typescript which is a weird imperative/declarative hybrid alternative to the standard terrible Devops ways of defining infrastructure.
My impressions as well. I’m not sure if this competes with devenv, devbox, and others, or is some completely different thing. If the former, what does it bring over those tools?
Similar thoughts. Even as a Nix user I’m confused about some of the syntax I’m unfamiliar with, and generally about what Organist is trying to be.
If it’s a layer above Nix flakes dev shell configuration like some of the other projects, it seems like a hard sell: if you can do Nickel, you probably can do Nix already, and introducing an extra layer is neither here nor there. If you go JSON/YAML, it will be dumbed down but easier to consume for non-Nixers, and if you go Nix, you are 100% seamless with Nix.
BTW, I’m casually looking into Nickel and I’m still confused about its level of interoperability with Nix. Nickel presents itself as a Nix-like “configuration language”, which means… it can’t really do some of the things Nix does? Or can it? Can it transpile to Nix or something?
My take is that yes, it’s competing with those tools, but in a (nearly) native Nix way: “nearly” because it depends on Nickel tooling, but the generated flake pulls that in automatically, so there’s nothing else to install.
At work I am using Devenv mostly for process support (which, ironically, I don’t need any more) and it fits the bill, but it IS two things to install before team members can start developing (plus direnv). This would only be one thing to install.
At home I run NixOS and just use a flake for my dependencies but that doesn’t launch any services so I am kind of keen on using organist if I ever need that.
It’s very cool that this works so you can have your flake contents defined in some other language entirely and don’t have to think about it (if it works).
You can use Devenv as a mkShell replacement when working with Nix Flakes, so you do not need to install anything manually.
One of the article’s links is to Organist’s README: “How does this differ from {insert your favorite tool}?”. In summary, Organist’s closest competitor is Devenv, and its main advantage is the consistency and power of the Nickel language.