The original announcement at 2025-03-09 02:29:38Z:

It doesn’t yet have all the features of the currently deployed Nest, but it uses a modernised architecture built on state-of-the-art frameworks (Axum in the back, Svelte in the front, still Thrussh for SSH).
Also, Diesel instead of my own thing for asynchronous Postgres, which I had written before “Async Postgres” even existed in Rust.
Let me know if you can run it.
Also, Tailwind instead of Bootstrap, which I hope will also appeal to people wanting to contribute graphic design rather than backend/algorithms code.
My goal with moving to Svelte was also to keep the same super fast backend algorithms while delegating the JS updates and layout to the front, which I think reduces latency. The architecture for that in the backend is still a little fragile, as it involves proxying a NodeJS server, so cold renderings (the first time you load any page from nest.pijul.com) are probably going to be much slower than the current Nest, while I would expect subsequent renders to be dramatically faster (since NodeJS isn’t involved at all and we save around 95% of the payload size).
I tried to write some things about AI stagnation and NixOS dominance, but it’s too hard to predict anything very useful in the current world. Companies will pursue the AI gold rush until the money runs out or a sudden disillusionment wave overwhelms them, and while NixOS is brilliant for servers there’s too much inertia and too much of a learning curve.
The most likely scenario of AI stagnation I could imagine is iPhone-ification.
The promising multi-paradigm pilot studies (like an LLM for context ingestion plus an AlphaZero-like component for getting an inference checker to approve the logic part) keep not getting followed up; easy-to-use from the get-go trumps the possibility of developing deep user skills, and stuff like LLaMA and DeepSeek makes sure that the remaining money is in superficial niceness.
Transcription to text, style adjustment, straightforward (sometimes large-block for tasks with more boilerplate than business logic) translation to code, and summarisation fully and undeniably catch up to translation (which was already there pre-LLM with DeepL).
Reasoning collides with safety and ends up in a niche similar to physical-keyboard handheld devices today: you can get that stuff to run locally, but you need to have a combination of interests and competences, and then you get a small nice boost to your workflows.
If only GitLab introduced seamless Pijul support! That is, make sure that both Pijul and Git clients can interact with repos in a sensible way. Then I could learn the ins and outs of Pijul with an easy fallback in case of catastrophic usability failure.

(I just learned that Nest (the Pijul hosting solution) is not yet source-available, which means self-hosting is not feasible for the foreseeable. 😢)
Yeah I guess as long as there is no open hosting solution the gate is kinda closed for wide adoption :/

tbh I haven’t looked into its ecosystem for a long time now. I wonder how pijul vs jujutsu would compare, or if a pijul “backend” could be supported by jujutsu.

Does Radicle do any kind of CI? I couldn’t find anything about that in their FAQ.

Uh, good point. I know there was some discussion in the zulip when I looked into it, but it seems like there’s currently only a PoC.
ripgrep - by sheer volume of use throughout the day. I’m generally not a fan of the “rewrite it in Rust” meme, but it’s great when it leads to someone critically re-examining their usability gripes with what came before.
oxipng - I’m really not beating the RIIR accusations but this one finds a lot of use for me as well. I used to use optipng in the same role before oxipng existed, but oxipng multithreads its trials and also is not of questionable maintenance status.
KDE’s ark specifically with the -b and -a switches, which enable non-dialog batch mode and auto-subfolder respectively, because the last thing I want is to extract foo and have it vomit an unknown quantity of files all over the current directory, or pessimistically extract foo into foo and end up with foo/foo/.
KDE’s filelight, to answer the question “where did all my space go?” with a resounding ~/.cache.
mpv plays video. With its yt-dlp integration, it’s the main way I watch videos.
Update - found it: mpv --ytdl-raw-options=cookies-from-browser=firefox URL. Three of my favourite programs in one command!
How do you use the mpv yt-dlp integration? The manual isn’t very clear, but seems to indicate that I have to manually export cookies from the browser every time I log in, and somehow pass them to mpv every time I run it, to make it work. I tested with yt-dlp --cookies="${XDG_CACHE_HOME}/firefox-cookies.txt" --cookies-from-browser=firefox and mpv --cookies-file="${XDG_CACHE_HOME}/firefox-cookies.txt" https://www.youtube.com/watch?v=[omitted], but got the same error message as just passing a YouTube URL.
Research software quality assurance is not yet a thing in many (most, it seems) places, but there have been some small steps towards it in the last couple of decades. Using open source software, publishing the software and data with the research, and adding research software engineers (that is, software engineering specialists) to the team all help, but more should be done:
Reproducible builds would make the result bit-for-bit reproducible, as opposed to “eh, looks about the same” reproducible. A big part of this is locking at least direct dependencies to exact versions, using things like Nix.
Automated tests, including “normal” and various types of abnormal data. Huge/tiny numbers test the range for which the algorithm is applicable. Invalid inputs test that the algorithm doesn’t do any numeric shenanigans.
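As a rough illustration (my own sketch, not from the thread; mean_ratio is a made-up stand-in for whatever numeric routine the research code actually provides), such tests might look like this with pytest:

```
# Hypothetical example: "normal", extreme, and invalid inputs for a toy routine.
import math
import pytest

def mean_ratio(numerators, denominators):
    """Toy implementation, included only so the test file is self-contained."""
    if len(numerators) != len(denominators) or not numerators:
        raise ValueError("inputs must be non-empty and of equal length")
    return sum(n / d for n, d in zip(numerators, denominators)) / len(numerators)

def test_normal_data():
    assert mean_ratio([2.0, 4.0], [1.0, 2.0]) == pytest.approx(2.0)

def test_huge_and_tiny_numbers_stay_finite():
    # Probes the range for which the algorithm is applicable.
    result = mean_ratio([1e308, 1e-308], [1e300, 1e-300])
    assert math.isfinite(result)

def test_invalid_input_is_rejected():
    # Checks that bad input fails loudly instead of producing numeric shenanigans.
    with pytest.raises(ValueError):
        mean_ratio([], [])
```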
Surprisingly, the largest single gain is to put the original data into source control. Researchers, like other humans, have difficulty managing versions and stepping back to the “after we fixed that sign problem on that Geiger counter but before we trimmed the outliers” version.
I agree with the spirit of your idea, but it isn’t very realistic for datasets that are gigabytes or terabytes. There are solutions, of course. In geospatial fields, STAC catalogs are increasingly expected as the norm.
Perhaps we are talking past each other. I am using “source control” as the idea of carefully keeping the original data, and the transformations to it. Git, designed for software code, does a good job for software code and a passable job at small data and configuration files.
My understanding of STAC catalogs is that they are a hierarchy of descriptions and links to assets. What is often missing is the idea that “this asset existed, was useful, had this hash, and was transformed from those assets using this process”. Am I misunderstanding the use of STAC in the field?
Ah. I thought you meant source control as in a VCS like Git. There’s Git LFS, but I don’t really think that’s appropriate for terabytes of data. Hashing that much data is also complicated. I mentioned STAC in the spirit of the original comment, which is about reproducibility (in particular, by other researchers). There are no explicit conventions in STAC for versioning data, but there are examples of informal, name-based conventions for publishing multiple versions of an asset in a STAC item.
There should be explicit conventions, perhaps like Rails database migration forms. For example:
1: “url://…”, original genome data as read from machine
2: Slice only interesting gene using code (name and version of software)
3: Remove poor lengths (name and version of software)
4: Run CRISPR simulator on …..
In Rails, database migration recognizes the impossibility of versioning the entirety of even the structure of the data. The idea is to keep the delta migrations needed to get to the current final version; a rough sketch of that idea in code follows below.
This is only an idea of a direction. Coming up with reasonable curation plans would be beyond Lobste.rs and cause ire among those that like shortcuts.
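A rough sketch of what such a migration-style provenance log could look like (my own illustration, with hypothetical file and tool names, not an established convention):

```
# Illustrative only: append one record per derivation step, with the hash of the
# step's output, so "which version of the data is this?" has a checkable answer.
import hashlib
import json

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_step(log_path, step, description, tool, output_file):
    entry = {
        "step": step,
        "description": description,
        "tool": tool,  # name and version of the software used
        "output": output_file,
        "sha256": sha256_of(output_file),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")

# Hypothetical usage, mirroring the numbered steps above (file names are made up):
# record_step("provenance.jsonl", 1, "original genome data as read from machine",
#             "sequencer export vX.Y", "raw/genome.fastq")
# record_step("provenance.jsonl", 2, "slice only the interesting gene",
#             "slicer vX.Y", "derived/gene_of_interest.fastq")
```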
I’d say in 90% of the cases it’s a matter of budget and hiring and nothing else.
Adding “software engineering specialists” is not going to help. Or rather: if management is set on quality, just hire more senior developers, give them more time to do their work and allow them to veto features and deadlines. Comes at the same price and better results in most cases.
Adding “software engineering specialists” is not going to help. Or rather: if management is set on quality, just hire more senior developers
It sounds to me like you’re talking about software development by professional software developers in industry. The submission is talking about software development by scientists in academia. In this case, hiring “senior developers” would be adding “software engineering specialists”.

Right, my bad!
I feel like this excludes a third option, the explicit empty check (explicit is better than implicit, after all 😁). I’m not aware of language support, but is_empty(my_list) or my_list.empty would be more explicit. For one thing, there would be no confusion about whether my_list is a list or actually some other type where empty is meaningless. And I imagine at least one person has been confused about why a list containing only falsy values (assert not any(my_list)) is truthy (assert my_list).
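A minimal, self-contained illustration of that confusion (my own example, not from the comment):

```
# A list of falsy values is still a non-empty list, so the list itself is truthy.
my_list = [0, "", None]

assert not any(my_list)   # every element is falsy
assert my_list            # ...but the list is truthy, because it is non-empty
assert len(my_list) > 0   # the explicit check says what we actually mean

# An explicit helper avoids relying on truthiness at all (is_empty is a
# hypothetical spelling from the comment, not standard Python):
def is_empty(collection):
    return len(collection) == 0

assert not is_empty(my_list)
```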
Nixpkgs is a great project for this IMO - it’s the first project in 20 years as a developer where I just feel like I could work on it full time and not get bored. There’s packages to … package, reproducibility issues to quash, Python tests to write, refactoring to improve all sorts of things, best practices to work out and enforce, and so much more.
A prerequisite for this is trying a bunch of technologies. Without doing so, you only have other people’s testimony to rely on when comparing known technology X to unknown technology Y. Those writers will often be biased themselves, or will try to compare a technology they know really well to others they don’t know well at all, resulting in a false signal which can then creep into the decision process of anyone wanting to choose between X and Y.
For example, I used PHP back in the mid-2000s for a few years for personal projects, and have used Python for many years professionally up until now. So I can only compare PHP to Python with three massive caveats: I don’t remember my experience with PHP very well any more, I was pretty junior when using PHP, and both languages have changed a lot since then.
Very true. But you should probably not be selecting an unknown technology for a high-stakes project. This also means that if you don’t have a lot of experience with other technologies, you’ll end up selecting a “technically” suboptimal technology; but if you are very skilled at the suboptimal technology, for you it might actually be the optimal choice for the project.
Of course, the curmudgeon who succeeds you will then start whining about it, even though they wouldn’t be inheriting the project in the first place if it hadn’t been successful.
On the other hand, I’ve also seen “senior” engineers who thought PHP was the best language ever (“because it has such nice array handling”), and when asked further, said they didn’t have any serious experience with other languages. That’s the other extreme.
I guess it just means that one should keep trying out new things, but on lower stakes projects or smaller projects where a rewrite wouldn’t take too long. But then, “shiny new thing burnout” is also definitely a thing.
I can’t take this seriously with that table. Most of the values aren’t quantified, and don’t have any information on how they were measured, so either they’re subjective or they are not showing the numbers they have to back up these claims. This just reads like an early 2000s FUD article.
How this could be taken seriously:
Publish the numbers. If you don’t have numbers, just admit it’s subjective.
This article sounds like someone with a pre-made opinion made a rant post about the things they didn’t like and wrote some “objective” notes to justify their opinion. I don’t think I learned anything from that article other than that the author prefers X11.
Take performance: I’ve not experienced screen tearing on Wayland, which for me was a big issue on X11. So if I were to write the table I’d write Performance: X11 bad, Wayland better, but this is all just entirely anecdotal.
vCard for the 21st century. Basically a structured format for information about people and organisations, like various names, addresses, phone numbers, photos, etc. I’ve got the barest shell of this in acard, but it would need a whole lot of work to be usable.
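As a toy illustration of the kind of structure meant here (my own sketch; the field names are made up and are not acard’s actual data model):

```
# Toy sketch of a "structured contact" record; field names are invented and are
# not taken from vCard or from acard.
from dataclasses import dataclass, field

@dataclass
class Contact:
    formatted_name: str
    other_names: list[str] = field(default_factory=list)
    addresses: list[str] = field(default_factory=list)
    phone_numbers: list[str] = field(default_factory=list)
    photo_urls: list[str] = field(default_factory=list)

alice = Contact(formatted_name="Alice Example", phone_numbers=["+1 555 0100"])
```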
I use miniflux, it’s nice and the PWA on android feels amazing
Me too. I chose it because they provide an RPM repository and it can use PostgreSQL as storage. I stayed because it just works and it’s minimalistic.
I’m toying with my own terminal client: https://github.com/alexpdp7/termflux
I would like to have an android native app that could sync stuff for offline viewing. There used to be something like this (microflux?) but I cannot install it on recent android.
I use Miniflux with the FeedMe app on Android via the Fever API. It’s not very good looking but it works fine.
Thank you!
In theory miniflux offers APIs that can be used by Android apps, but every Android app I tested is unusable with my RSS feeds. Best app experience I had was the old google rss experience or newsblur. Wanted to write a newsblur API for miniflux, but stopped midway through…
Miniflux also has support for the Fever API.
Same, really easy to set up using NixOS.
Same - I wanted something I could access from all my machines, and I didn’t like any of the public offerings: they were either wildly different from my preferences, or I didn’t trust them not to disappear or behave badly in the future. Been very happy with miniflux, runs on a Raspberry Pi and has been pretty much hassle-free :)
FYI, right now neither Kagi nor Google know about the mentioned bug report, “GHSA-h4vv-h3jq-v493”, and the relevant link (https://github.com/advisories/GHSA-h4vv-h3jq-v493) is a 404. So presumably the report is still considered to be in quarantine?
it’s now open https://github.com/NixOS/nix/security/advisories/GHSA-h4vv-h3jq-v493
I assume so.
Neat, but where does the .\#appliance_17_image string come from? Is the string after the hash completely arbitrary?

https://github.com/blitz/sysupdate-playground/blob/1a0938cb72b58feeb7d79bc15a4c316a8eac401c/flake.nix#L55
I think you should not assume this link will be resolvable in a few years from now, considering that repository is a “playground” and could get deleted any time.
So then all the links in the blog post won’t resolve.
In Nix convention, a top-level flake.nix file (linked below by puffnfresh) holds the entry points. In this case, the name of the package is appliance_17_image. When building with Nix, you can target that package directly. The . stands for “the flake in the current directory” and # is the access mechanism for the target package.

You don’t need to escape # if it’s not preceded by whitespace (the actual rule might be a bit more complex; that’s my mental model).

I imagine the part after the sharp to be the segment selector, as in a URL segment.
It’s a build target in the repository / flake.
Trying to ignore all the Crowdstrike news.
Cat on lap, Nix, going to see friends and family. Life is good.
Features like time zones, deleting some of a repeating event, changing the alert on an already alerted event, caching remote calendars, and so on, are genuinely difficult to get right. And if you get any of these wrong, it is going to have a real life impact, possibly a devastating one.
Another complicating issue is that interoperability probably needs to be per-app, not per-protocol, since commercial product developers are incentivised to make interop difficult.
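To make the repeating-event point above concrete, here is a tiny sketch (my own, using the third-party python-dateutil package) of what “deleting some of a repeating event” amounts to in data terms: an exception record that every client, cache and sync path then has to preserve.

```
from datetime import datetime
from dateutil.rrule import WEEKLY, rrule, rruleset

# A weekly 10:00 meeting, four occurrences, with one occurrence cancelled.
meetings = rruleset()
meetings.rrule(rrule(WEEKLY, dtstart=datetime(2024, 7, 1, 10, 0), count=4))
meetings.exdate(datetime(2024, 7, 8, 10, 0))  # the "deleted" occurrence

# Three occurrences remain; losing the exdate silently resurrects the cancelled one.
print(list(meetings))
```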
Google Calendar is often laughably bad at timezones, and its functionality for sending updates is terrible: it tends to either notify people about non-events (e.g., send an invitation to everyone when you add a new person) or fail to send them when there’s an actual change. But if you are a monopoly, you don’t need to make good software anymore, ya know. ;)
Then again, I’m not claiming that if I could take unlimited time off to write a perfect calendar application, I’d get everything right.
We’re not disagreeing. The “genuinely difficult” part was about how even the big players get this wrong, again and again.
Nice, I’ll have to steal some of this for my Nix setup. Being able to disable most of the bad defaults is great; the only downside is having to use Firefox ESR for some of the settings to take effect.