I have not used Determinate Nix, but I think you can infer their main selling points from their marketing copy: they provide some minor UX and DX improvements alongside a lot of enterprise integrations and support guarantees. Upstream Nix is not focused on catering to things like SOC2 compliance or MDM integration, so it makes sense for a private company to focus their efforts there.
Looking for the things I wanted, it seems like it still only supports paths, tarballs, Git, & Mercurial, with no support for mirrors to fall back on when servers inevitably go down. I was hoping these would have been addressed, given that Nixpkgs’ fetcher support already goes beyond these two limitations.
The main argument against is that even if you assume good intentions, it won’t be as close to production as a hosted CI (e.g. database version, OS type and version, etc.).
Lots of developers develop on macOS and deploy on Linux, and there are tons of subtle differences between the two systems, such as filesystem case sensitivity and default sort ordering, just to give a couple of examples.
To me the point of CI isn’t to ensure devs ran the test suite before merging. It’s to provide an environment that will catch as many things as possible that a local run wouldn’t be able to catch.
To me the point of CI isn’t to ensure devs ran the test suite before merging.
I’m basically repeating my other comment but I’m amped up about how much I dislike this idea, probably because it would tank my productivity, and this was too good an example to pass up: the point of CI isn’t (just) to ensure I ran the test suite before merging - although that’s part of it, because what if I forgot? The bigger point, though, is to run the test suite so that I don’t have to.
I have a very, very low threshold for what’s acceptably fast for a test suite. Probably 5-10 seconds or less. If it’s slower than that, I’m simply not going to run the entire thing locally, basically ever. I’m gonna run the tests I care about, and then I’m going to push my changes and let CI either trigger auto-merge, or tell me if there’s other tests I should have cared about (oops!). In the meantime, I’m fully context switched away not even thinking about that PR, because the work is being done for me.
You’re definitely correct here but I think there are plenty of applications where you can like… just trust the intersection between app and os/arch is gonna work.
But now that I think about it, this is such a GH-bound project and like… any such app small enough in scope or value for this to be worth using can just use the free Actions minutes. Doubt they’d go over.
any such app small enough in scope or value for this to be worth using can just use the free Actions minutes.
Yes, that’s the biggest thing that doesn’t make sense to me.
I get the argument that hosted runners are quite weak compared to many developer machines, but if your test suite is small enough to be run on a single machine, it can probably run about as fast if you parallelize your CI just a tiny bit.
With a fully containerized dev environment, yes, that pretty much abolishes the divergence in software configuration.
But there are more concerns than just that. Does your app rely on some caches? Dependencies?
Were they in a clean state?
I know it’s a bit of an extreme example, but I spend a lot of time using bundle open and editing my gems to debug stuff, and it’s not rare that I forget to run gem pristine after an investigation.
This can lead me to have tests that pass on my machine, and will never work elsewhere. There are millions of scenarios like this one.
I was once rejected from a job (partly) because the Dockerfile I wrote for my code assignment didn’t build on the assessor’s Apple Silicon Mac. I had developed and tested on my x86-64 Linux device. Considering how much server software is built with the same pair of configurations just with the roles switched around, I’d say they aren’t diminished enough.
Was just about to point this out. I’ve seen a lot of bugs in aarch64 Linux software that don’t exist in x86-64 Linux software. You can run a container built for a non-native architecture through Docker’s compatibility layer, but it’s a pretty noticeable performance hit.
One of the things that I like about having CI is the fact that it forces you to declare your dev environment programmatically. It means that you avoid the famous “works on my machine” issue, because if tests work on your machine but not in CI, something is missing.
There are of course ways to avoid this issue, for example by enforcing that all dev tests also run in a controlled environment (either via Docker or maybe something like Testcontainers), but that requires more discipline.
This is by far the biggest plus side to CI. Missing external dependencies have bitten me before, but without CI, they’d bite me during deploy, rather than as a failed CI run. I’ve also run into issues specifically with native dependencies on Node, where it’d fetch the correct native dependency on my local machine, but fail to fetch it on CI, which likely means it would’ve failed in prod.
This is something “local CI” can check for. I’ve wanted this, so I added it to my build server tool (which normally runs on a remote machine) called ding. I’ll run something like “ding build make build”, where “ding build” is the CI command and “make build” is what it runs. It clones the current git repo into a temporary directory and runs the command “make build” in it, sandboxed with bubblewrap.
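For readers who want the flavor of this, here is a minimal sketch in Python of the same idea (this is not the actual ding tool, just an illustration under the assumptions described above): clone whatever is committed in the current repo into a throwaway directory and run the build there, so untracked files and local state can’t leak into the build. The sandbox step is left as a comment, since the exact bubblewrap flags depend on your setup.

```python
#!/usr/bin/env python3
"""Minimal sketch of a 'local CI' runner (not the actual ding tool):
clone only what is committed into a throwaway directory and run the
build command there, so untracked files and local state cannot leak in."""
import subprocess
import sys
import tempfile


def local_ci(build_cmd: list[str]) -> int:
    with tempfile.TemporaryDirectory(prefix="local-ci-") as workdir:
        # Clone the current repo; uncommitted changes are deliberately excluded.
        subprocess.run(["git", "clone", "--quiet", ".", workdir], check=True)
        # A sandbox such as bubblewrap could be prepended here, e.g.
        #   build_cmd = ["bwrap", "--bind", workdir, workdir, ...] + build_cmd
        # (flags depend on your setup, so this sketch runs the command directly).
        return subprocess.run(build_cmd, cwd=workdir).returncode


if __name__ == "__main__":
    sys.exit(local_ci(sys.argv[1:] or ["make", "build"]))
```

Run it from the repo root, e.g. python local_ci.py make test, and the exit code tells you whether the committed tree builds on its own.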
The point still stands that you can forget to run the local CI.
It’s been rumored that the source code for Red Alert 2 and Tiberian Sun has been lost. Its absence here would seem to all but confirm that, unless EA announces a remaster soon.
That’s also one of the most annoying because, while the game works under WINE, the installer doesn’t. I managed to get a fan-adapted version working but it crashes in the campaign. If I had a Windows machine lying around, I’d do a fresh install: the campaign worked fine in WINE last time I tried it but not with the patched version.
The biggest issue I’ve had running Nix in a VM on my MacBook is that NixOS AArch64 doesn’t seem to be quite as battle tested as Nix-Darwin or NixOS x86-64. Occasionally I’ll run into packages that just don’t seem to work on that architecture yet.
aarch64-linux is one of the four supported architectures. I’ve had pretty good luck with it myself. Though it’s rare for nixpkgs maintainers to run all variations of the systems, so things do get missed. I’d encourage creating issues and pinging maintainers if one of those four systems doesn’t work, or if you know how to fix it then opening a PR.
The wording of the blog post confused me, because in my mind “FFI” (Foreign Function Interface) usually means “whatever the language provides to call C code”, so in particular the comparison between “the function written as a C extension” and “the function written using the FFI” is confusing as they sound like they are talking about the exact same thing.
The author is talking specifically about a Ruby library called FFI, where people write Ruby code that describes C functions and then the library is able to call them. I would guess that it interprets foreign calls, in the sense that the FFI library (maybe?) inspects the Ruby-side data describing their interface on each call, to run appropriate datatype-conversion logic before and after the call. I suppose that JIT-ing is meant to remove this interpretation overhead – which is probably costly for very fast functions, but not that noticeable for longer-running C functions.
Details about this would have helped me follow the blog post, and probably other people unfamiliar with the specific Ruby library called FFI.
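For anyone who hasn’t seen this pattern before, Python’s ctypes is a rough cross-language analogy for what such FFI libraries look like (this is not the Ruby FFI gem itself, just a sketch of the general idea): the foreign function is described from the host language, and the library performs the argument and return-value conversions at call time.

```python
# Rough analogy only: Python's ctypes, illustrating the general "FFI library"
# pattern discussed above (describe a C function from the host language and
# let the library convert values on every call). Not the Ruby FFI gem.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))       # load the math library
libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]  # describe the C signature
libm.pow.restype = ctypes.c_double

# Conversion between host-language floats and C doubles happens on each call;
# that per-call overhead is what a compiled wrapper could remove.
print(libm.pow(2.0, 10.0))  # 1024.0
```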
Replying to myself: I wonder why the author needs to generate assembly code for this. I would assume that it should be possible, on the first call to this function, to output C code (using the usual Ruby runtime libraries that people use for C extensions) to a file, call the C compiler to produce a dynamic library, dlopen the result, and then (on this first call and all future calls) just call the library code. This would probably get similar performance benefits and be much more portable.
I would guess because that would require shipping a C compiler? Ruby FFI/C extensions are compiled before runtime; the only thing you need to ship to prod is your code, Ruby, and the build artifacts.
Yeah, that’s why I wrote “predominantly”. Also, for such a localised JIT, a second port to aarch64 is not that hard. You just won’t have an eBPF port falling out of your compiler (this is absurd for effect, I know this isn’t a reasonable thing).
Note that the point here is not to JIT arbitrary Ruby code, which is probably quite hard, but a “tiny JIT” to use compilation rather than interpretation for the FFI wrappers around external calls. (In fact it’s probably feasible, if a bit less convenient, to set things up to compile these wrappers ahead-of-time.)
A text editor with the flexibility of Emacs but designed to support GUI applications really well, to the extent that most apps could be rewritten to be embedded inside it. The benefit to this would be allowing programmable workflows and easier inter program communication.
This was more or less Atom, the granddaddy of Electron applications. The primary disadvantage of its wild west plug-in model was that it was possible for extensions to do inefficient things on the main editing thread, making the whole application sluggish. It’d be cool to see a more sandboxed approach, with clear channels for inter-extension communication.
A way to easily distribute Ruby applications like CLI tools. Bonus points if it’s a single self-contained binary that I can grab with curl in a setup script. I work on several projects that use different versions of Ruby, and any tools distributed as gems need to be reinstalled for every different Ruby version; it’s awful.
A modern open source desktop RSS reading application for Windows. Everything I’ve found is unmaintained, or closed source with a subscription, or doesn’t sync with services like Feedly.
A “fantasy console” with mobile development support. I’d love to tinker with tiny games on my tiny phone screen at odd times. This is on the PICO-8 roadmap, and I believe there’s an Android version of TIC-80. Pythonista might also be an option, except I don’t think the game APIs it exposes work on desktop.
A terminal editor like vim (which I use) but completely redesigned to be more 2025 aware, or even more 1990s aware :D With text menus for certain things that are hard to remember, integration with LLMs, tabs and so forth. I don’t much like to go outside the terminal app, nor IDE-alike stuff.
Helix is never getting vim keybindings, unfortunately. You can get Neovim to a comparable place, but it takes quite a bit of work. telescope.nvim for improved UI elements, which-key.nvim to display hints for key sequences, coc.nvim for code completion… and you may still need to mess around with LSP configs.
I’m a fairly happy neovim user; I have all the functionality I need and none of what I don’t, in 300 or so lines of lua and a few plugins.
But I have to admit that interacting with the helix “shell” (I don’t know if that’s the term; the colon command stuff) is much nicer than the vi legacy stuff. They’ve thought it through very nicely.
Why won’t Helix get vim keybindings? Last I heard they were going to embed a scheme in it, I thought they were going full bore on configurability.
The Helix project’s official stance is that they’re uninterested in enabling alternative editing paradigms. I would assume this philosophy will extend to the functionality exposed to the plugin system, although we won’t know for sure until said plugin system actually exists.
There’s a vim-keybinding plugin for VS code, ever try it out? I found it not perfect, but quite satisfying. Doesn’t necessarily integrate well with menus and stuff though.
I missed that bit! Though it does better than most graphical editors, since you can tunnel it over SSH to work on a remote system. Not perfect, but works pretty well.
I feel like helix tends a bit toward the IDE-alike direction. But OP also asks for “integration with LLMs” which is another thing I’d say tends toward the IDE-alike direction, so I can’t say I’m sure what @antirez means by that criterion.
emacs definitely feels like an IDE to me when I’ve tried it. Even without a pile of plugins it still loads noticeably slower than the other terminal editors I’ve tried like vim, and then when I’ve asked about this, people tell me I need to learn “emacs server” which seems even more IDE-tastic. Lisp is cool and undoubtedly there’s a lot of power & depth there, but like Nix I think I need to make it part of my personality to realize the benefits.
I don’t even think you need to run emacs as a server, just have a little patience. Even with a huge framework’s worth of plugins my Emacs is useful in a couple of seconds.
For my habits and workflow, 2 seconds to wait for a text file is completely unacceptable. What’s next, 2 seconds to open a new shell or change directories or wait for ls? Please do not advise me to “just” lower my standards and change my workflow to excuse poor software performance.
For me, Emacs is started in daemon-mode when I log in (with systemd, but could be done perfectly well with one line in .xinitrc or even .profile). Then if I’m invoking it from the shell, I have a script that pops up a terminal frame instantly, and is suitable for using as $VISUAL for mail clients and such. I’m absolutely not trying to say you should do this, just that it’s the standard way for using a terminal Emacs in the “run editor, edit, quit” workflow (as opposed to the “never leave Emacs” workflow).
It’s ridiculous to call loading an editing environment like Emacs “poor performance” given its features versus, say, Helix. Sometimes I use Helix for command line edits, usually I use Emacs for file editing. If you’re going to need all the fancy features of an editor you can wait <2 seconds like an adult, or use vim like an adult, but if you need LLM integration and can’t wait shorter than a breath for your editor to load, that is a you problem, not a problem with your editor.
You really don’t need to make it part of your personality, but you do need to set it up so you’ve got a server, and make sure you’re running a version with AOT compilation.
In fairness, I don’t think many people use stock vim or vs code as their daily driver either, because there are rich returns to mildly customizing your editor and investing in knowing it well.
I don’t think many people use stock vim or vs code as their daily driver
I’m actually pretty curious about this. Does anyone know if there’s data on this?
I think my only Vim config for a long time was “set nocompatible” and “syntax on” (both of which I don’t seem to need these days). Oh, and “set nobackup”. VS Code, I think I only disabled wordwrap. Emacs required a bit of customization (cua-mode, SLIME, selecting a non-bright-white theme), but I’ve used it a lot less than Vim and VS Code. I used to be big on customization, but I guess I never got into customizing editors–not sure why…
I’ve been plugging lazyvim hard of late, and I think it’d fit your needs without leaving vim or spending an eternity configuring stuff:
text menus for certain things that are hard to remember
which-key + most things being bound to leader (i.e. space) make it super easy to find how to do things, and autocomplete on the command line takes you the rest of the way
integration with LLMs
Comes with one-button setup for copilot, supermaven and a couple others. IME it’s not as good as VS code but gets pretty close; I’ve definitely not felt IDE AI features give me enough value to switch.
I know about terminal split, but I don’t believe that’s the way. It was much better with 1990s MS-DOS editors… where they tried to bring you some decent terminal UI. Btw I see that a few projects were suggested for me to look at; I will! Thanks.
Of course — the terminal part was just «if I am listing features that can be used nowadays without relearning the editor, I can as well mention that». But overall the situation with new features in Vim or NeoVim is better than you described, so in case you find some dealbreakers with mentioned editors — hopefully not but who knows (I do find dealbreakers there) — you can get at least some of the way from Vim.
I remember John Carmack describing in one of his Doom 3 talks how he was shocked to discover that he made a mistake in the game loop that caused one needless frame of input latency. To his great relief, he discovered it just in time to fix it before the game shipped. He cares about every single millisecond. Meanwhile, the display server and compositing window manager introduce latency left and right. It’s painful to see how the computing world is devolving in many areas, particularly in usability and performance.
He cares about every single millisecond. Meanwhile, the display server and compositing window manager introduce latency left and right.
I will say the unpopular-but-true thing here: Carmack probably was wrong to do that, and you would be just as wrong to adopt that philosophy today. The bookkeeping counting-bytes-and-cycles side of programming is, in the truest Brooksian sense, accidental complexity which we ought to try to vanquish in order to better attack the essential complexity of the problems we work on.
There are still, occasionally, times and places when being a Scrooge, sitting in his counting-house and begrudging every last ha’penny of expenditure, is forced on a programmer, but they are not as common as is commonly thought. Even in game programming – always brought up as the last bastion of Performance-Carers who Care About Performance™ – the overwhelming majority very obviously don’t actually care about performance the way Carmack or Muratori do, and don’t have to care and haven’t had to for years. “Yeah, but will it run Crysis?” reached meme status nearly 20 years ago!
The point of advances in hardware has not been to cause us to become ever more Scrooge-like, but to free us from having to be Scrooges in the first place. Much as Scrooge himself became a kindly and generous man after the visitation of the spirits, we too can become kinder and have more generous performance budgets after being visited by even moderately modern hardware.
(and the examples of old software so often held up as paragons of Caring About Performance are basically just survivorship bias anyway – the average piece of software always had average performance for and in its era, and we forget how much mediocre stuff was out there while holding up only one or two extreme outliers which were in no way representative of programming practice at the time of their creation)
There is certainly a version of performance optimization where the juice is not worth the squeeze, but is there any indication that Carmack’s approach fell into that category? The given example of “a mistake in the game loop that caused one needless frame of input latency” seems like a bug that definitely should have been fixed.
I’m having a hard time following your reasons for saying Carmack was “wrong” to care so much about performance. Is there some way in which the world would be better if he didn’t? Are you saying he should have cared about something else more?
There are different kinds of complexity. Everything in engineering is about compromises. If you decide to trade some latency for some other benefit, that’s fine. If you introduce latency because you weren’t modelling it in your trade-off space, that’s quite another matter.
the overwhelming majority very obviously don’t actually care about performance the way Carmack or Muratori do, and don’t have to care and haven’t had to for years. “Yeah, but will it run Crysis?” reached meme status nearly 20 years ago!
the number of people complaining about game performance in literally any game forum, Steam reviews / comments / whatnot obviously shows that to be wrong. Businesses don’t care about performance, but actual human beings do care; the problem is the constantly increasing disconnect between business and people.
Minecraft – the best-selling video game of all time – is known for both its horrid performance and for being almost universally beloved by players.
The idea that “business” is somehow forcing this onto people (especially when Minecraft started out and initially exploded in popularity as an indie game with even worse performance than it has today) is just not supported by empirical reality, sorry.
But the success is despite the game’s terrible performance, not thanks to it. Or do you think that if you asked people whether they would prefer Minecraft to be faster, they would say no?
If it were not a problem, then a mod that provides a marginal performance improvement certainly would not have 10M downloads: https://modrinth.com/mod/moreculling. So people definitely do care; they just don’t have a choice, because if you want to play “Minecraft” with your friends this is your only option. Just like, for instance, Slack, GitLab or Jira are absolutely terrible, but you don’t have a choice but to use them because that’s where your coworkers are.
I don’t know of any game that succeeded because of its great performance, but I know of plenty that have succeeded despite their horrible performance. While performance can improve player satisfaction, for games it’s a secondary measure of success, and it’s foolish to focus on it without the rest of the game being good to play. It’s the case for most other software as well - most of the time, it’s “do the job well, in a convenient-to-use way, and preferably fast”. There are fairly few problems where the main factor in software solving them is speed.
Bad performance can kill a decent game. Good performance cannot bring success to an otherwise mediocre game. If it worked that way, my simple games that run at ~1000FPS would have taken over the world already.
Or do you think that if you asked people whether they would prefer Minecraft to be faster, they would say no?
Even if a game was written by an entire army of Carmacks and Muratoris squeezing every last bit of performance they could get, people would almost certainly answer “yes” to “would you prefer it to be faster”. It’s a meaningless question, because nobody says no to it even when the performance is already very good.
And the fact that Minecraft succeeded as an indie game based on people loving its gameplay even though it had terrible performance really and truly does put the lie to the notion that game dev is somehow a unique performance-carer industry or that people who play games are somehow super uniquely sensitive to performance. Gamers routinely accept things that are way worse than the sins of your least favorite Electron app or React SPA.
I think a more generous interpretation of the hypothetical would be to phrase the question as: “Do you think the performance of Minecraft is a problem?”
In that scenario, I would imagine that even people who love the game would likely say yes. At the same time, if you asked that question about some Carmack-ified game, you might get mostly “no” responses.
Can you clarify the claim that you are making, and why the chosen example has any bearing on it? Obviously gaming is different from other industries in some ways and the same in other ways.
I think the Scrooge analogy only works in some cases. Scrooge was free to become more generous because he was dealing with his own money. In the same way, when writing programs that run on our own servers, we should feel free to trade efficiency for other things if we wish. But when writing programs that run on our users’ machines, the resources, whether RAM or battery life, aren’t ours to take, so we should be as sparing with them as possible while still doing what we need to do.
Unfortunately, that last phrase, “while still doing what we need to do”, is doing a lot of work there. I have myself shipped a desktop app that uses Electron, because there was a need to get it out quickly, both to make money for my (small, bootstrapped) company and to solve a problem which no other product has solved. But I’ve still put in some small efforts here and there to make the app frugal for an Electron app, while not nearly as frugal as it would be if it were fully native.
I used to be passionate about this too, but I really think villainizing accidental complexity is a false idol. Accidental complexity is the domain of the programmer. We will always have to translate some idealized functionality into a physically executable system. And that system should be fast. And that will always mean reorganizing the data structures and algorithms to be more performant.
My point of view today is that implementation details should be completely embraced, and we should build software that takes advantage of its environment to the fullest. The best way to do this while also retaining the essential complexity of the domain is by completely separating specification from implementation. I believe we should be writing executable specifications and using them in model-based tests on the real implementation. The specifications disregard implementation details, making them much smaller and more comprehensible.
I have working examples of doing this if this sounds interesting, or even farfetched.
I agree with this view. I used to be enamored by the ideas of Domain Driven Design (referring to the code implementation aspects here and not the human aspects) and Clean/Hexagonal Architecture and whatever other similar design philosophies where the shape of your actual code is supposed to mirror the shape of the domain concepts.
One of the easiest ways to break that spell is to try to work on a system with a SQL database where there are a lot of tables with a lot of relations, where ACID matters (e.g., you actually understand and leverage your transaction isolation settings), and where performance matters (e.g., many records, can’t just SELECT * from every JOINed table, etc).
I don’t know where I first heard the term, but I really like to refer to “mechanical sympathy”. Don’t write code that exactly mirrors your business logic; your job as a programmer is to translate the business logic into machine instructions, not to translate business logic into business logic. So, write instructions that will run well on the machine.
Everything is a tradeoff. For example, in C++, when you create a vector and grow it, it is automatically zeroed. You could improve performance by using a plain array that you allocate yourself. I usually forgo this optimization because it costs time and often makes the code more unpleasant to work with. I also don’t go and optimize the assembly by hand, unless there is no other way to achieve what I want. With that being said, performance is a killer feature and lack of performance can kill a product. We absolutely need developers who are more educated in performance matters. Performance problems don’t just cripple our own industry, they cripple the whole world which relies on software. I think the mindset you described here is defeatist and, if it proliferates, will lead to worse software.
You could improve performance by using a plain array that you allocate yourself.
This one isn’t actually clear cut. Most modern CPUs do store allocate in L1. If you write an entire L1 line in the window of the store buffer, it will materialise the line in L1 without fetching from memory or a remote cache (just sending out some broadcast invalidates if the line is in someone else’s cache). If you zero, this will definitely happen. If you don’t and initialise piecemeal, you may hit the same optimisation, but you may end up pulling in data from memory and then overwriting it.
If the array is big and you do this, you may find that it’s triggering some page faults eagerly to allocate the underlying storage. If you were going to use only a small amount of the total space, this will increase memory usage and hurt your cache. If you use all of it, then the kernel may see that you’ve rapidly faulted on two adjacent pages and handle a bit more eagerly in the page-fault handler. This pre-faulting may also move page faults off some later path and reduce jitter.
Ah, you must be one of those “Performance-Carers who Care About Performance™” ;)
Both approaches will be faster in some settings.
This is so often the case, and it always worries me that attitudes like the GP’s lead to people not even knowing how to properly benchmark and analyse performance anymore. Not too long ago I showed somebody who was an L4 SWE-SRE at Google a flamegraph - and he had never seen one before!
Ah, you must be one of those “Performance-Carers who Care About Performance™” ;)
Sometimes, and that’s the important bit. Performance is one of the things that I can optimise for, sometimes it’s not the right thing. I recently wrote a document processing framework for my next book. It runs all of its passes in Lua. It simplifies memory management by doing a load of copies of std::string. For a 200+ page book, well under one second of execution time is spent in all of that code, the vast majority is spent in libclang parsing all of the C++ examples and building semantic markup from them. The code is optimised for me to be able to easily add lowerings from new kinds of semantic markup to semantic HTML or typeset PDF, not for performance.
Similarly, a lot of what I work on now is an embedded platform. Microcontrollers are insanely fast relative to memory sizes these days. The computers I learned to program on had a bit less memory, but CPUs that were two orders of magnitude slower. So the main thing I care about is code and data size. If an O(n) algorithm is smaller than an O(log(n)) one, I may still prefer it because I know n is probably 1, and never more than 8 in a lot of cases.
But when I do want to optimise for performance, I want to understand why things are slow and how to fix it. I learned this lesson as a PhD student, where my PhD supervisor gave me some code that avoided passing things in parameters down deep function calls and stored them in globals instead. On the old machine he’d written it for, that was a speedup. Parameters were all passed on the stack and globals were fast to access (no PIC, load a global was just load from a hard-coded address). On the newer machines, it meant things had to go via a slower sequence for PC-relative loads and the accesses to globals impeded SSA construction and so inhibited a load of optimisation. Passing the state down as parameters kept it in registers and enabled local reasoning in the compiler. Undoing his optimisation gave me a 20% speedup. Introducing his optimisation gave him a greater speedup on the hardware that he originally used.
This is so often the case, and it always worries me that attitudes like the GP’s lead to people not even knowing how to properly benchmark and analyse performance anymore.
I know how to and I teach it to people I work with. Just recently at work I rebuilt a major service, cut the DB queries it was doing by a factor of about 4 in the process, and it went from multi-second to single-digit-millisecond p95 response times.
But I also don’t pull constant all-nighters worrying that there might be some tiny bit of performance I left on the table, or switching from “slow” to “faster” programming languages, or really any of the stuff people always allege I ought to be doing if I really “care about performance”. I approach a project with a reasonable baseline performance budget, and if I’m within that then I leave it alone and move on to the next thing. I’m not going to wake up in a cold sweat wondering if maybe I could have shaved another picosecond somewhere.
And the fact that you can’t really respond to or engage with criticism of hyper-obsession with performance (or, you can but only through sneering strawmen) isn’t really helpful, y’know?
And the fact that you can’t really respond to or engage with criticism of hyper-obsession with performance (or, you can but only through sneering strawmen) isn’t really helpful, y’know?
How were we supposed to know that you were criticizing “hyper-obsession” that leads to all-nighters, worry, and loss of sleep over shaving off picoseconds? From your other post it sounded like you were criticizing Carmack’s approach, and I haven’t seen any indication that it corresponds to the “hyper-obsession” you describe.
I did a consulting gig a few years ago where just switching from zeroing with std::vector to pre-zeroed with calloc was a double-digit % improvement on Linux.
I think the answer is somewhere in the middle: should game programmers in general care? Maybe that’s too broad a statement. Does id Software, producer of top-of-the-class, extremely fast shooters, benefit from someone who cares so deeply about making sure their games are super snappy? Probably yes.
You think that’s bad? Consider the advent of “web-apps” for everything.
On anything other than an M-series Apple computer they feel sluggish, even with absurd computer specifications. The largest improvement I felt going from an i9-9900K to an M1 was that Slack suddenly felt like a native app; going back to my old PC felt like going back to the ’90s.
There is a wide spectrum of performance in Electron apps. Although it’s mostly VS Code versus everyone else. VS Code is not particularly snappy, but it’s decent. Discord also feels faster than other messengers. The rest of the highly interactive webapps I use are unbearably sluggish.
So I think these measured 6ms are irrelevant. I’m on Wayland Gnome and everything feels snappy except highly interactive webapps. Even my 10-year-old laptop felt great, but I retired it because some webapps were too painful (while compiling Rust felt… OK? Also browsing non-JS content sites was great).
Heck, my favorite comparison is to run Q2 on WASM. How can that feel so much snappier than a chat application like Slack?
I got so accustomed to the latency, when I use something with nearly zero latency (e.g. an 80’s computer with CRT), I get the surreal impression that the character appeared before I pressed the button.
I had the same feeling recently with a Commodore 64.
It really was striking how a computer with less power than the microcontroller in my keyboard could feel so fast, but obviously when you actually give it an instruction to think about, the limitations of the computer are clear.
As a user on smaller platforms without native apps, I will gladly take a web app or PWA over no access.
In the ’90s almost everything was running Microsoft Windows on x86 for personal computers, with almost everyone running at one of 5 different screen resolutions, so it was more reasonable to make a singular app for a singular CPU architecture & call it a day. Also, security was an afterthought. To support all of these newer platforms, architectures, and device types, & have the code in a sandbox, going the HTML + CSS + JavaScript route is a tradeoff many are willing to take for portability, since browsers are ubiquitous. The weird thing is that a web app doesn’t have to be slow, & not every application has the same demands to warrant a native release.
Having been around the BSD and Linux block 20+ years ago, I share the sentiment. Quirky and/or slow apps are annoying, but still more efficient than no apps.
Besides, as far as UIs go, “native” is just… a moderately useful description at this point. macOS is the only one that’s sort of there, but that wasn’t always the case in all this time, either (remember when it shipped with two toolkits and three themes?). Windows has like three generations of UI toolkits, and one of the two on which the *nix world has mostly converged is frequently used along with things like Kirigami, making it native in the sense that it all eventually goes through some low-level Qt drawing code and color schemes kind of work, but that’s about it.
Don’t get me wrong, I definitely prefer a unified “native” experience; even having several native options was tolerable, like back when you could tell a Windows 3.x-era application from other Windows 98 applications because the Open file… dialog looked different and whatnot, but keybindings were generally the same, widgets mostly worked the same, etc.
But that’s a lost cause, this is not how applications are developed anymore – both because developers have lost interest in it and because most platform developers (in the wide sense – e.g. Microsoft) have lost interest in it. A rich, native framework is one of the most complex types of software to maintain, with some of the highest validation and maintenance costs. Building one, only to find almost everyone avoids it due to portability or vendor lock-in concerns unless they literally don’t have a choice, and that even then they try to use as little of it as humanly possible, is not a very good use of already scant resources in an age where most of the profit is in mobile and services, not desktop.
You can focus on the bad and point out that the vast majority of Electron applications out there are slow, inconsistent, and their UIs suck. Which is true, but you can also focus on the good and point out that the corpus of Electron applications we have now is a lot wider and more capable than their Xaw/Motif/Wx/Xforms/GTK/Qt/a million others – such consistency, much wow! – equivalents from 25 years ago, whose UIs also sucked.
I loved my M1 Air. I swapped it out for a (lighter) HP Aero which is more repairable and upgradable (and plays more video games). But the build quality, screen, speakers, battery, and finger print sensor don’t have anything on the Air. If Apple ever offers an 11 inch laptop again, I’ll probably jump on it.
Still rocking an iPhone 12 mini. The modern web can be pretty brutal on it at times: pages crashing, browser freezing for 10s at a time. It has honestly curtailed my web use on the go significantly, so I’m mostly okay with it on the whole.
Most things I absolutely need I can get an app for that will run better usually. The only real issue is the small screen size is obviously not being designed for anymore, and that’s becoming more of an issue.
I didn’t realize how many apps were essentially web applications until I enabled iOS lockdown mode. Suddenly I was having to add exceptions left and right for chat apps, my notes app, my Bible app, etc.
But even web-powered apps do seem snappier than most websites. Maybe they’re loading less advertising/analytics code on the fly?
Most things I absolutely need I can get an app for that will run better usually. The only real issue is the small screen size is obviously not being designed for anymore, and that’s becoming more of an issue.
I’m on a 2022 iPhone SE, and feel the same way. (My screen may be a bit smaller than yours?) The device is plenty fast, but it’s becoming increasingly clear that neither web designers nor app developers test much if at all on the screen size, and it can be impossible to access important controls.
TBH, I would cheerfully carry a flip phone with the ability to let other devices tether to it for data connectivity. Almost any time I really care about using the web, I have a tablet or a laptop in a bag nearby. A thing that I could talk on as needed and that could supply GPS and data to another thing in my bag would really be a sweet spot for me.
That is exactly the kind of thing I’d like. I’d probably need to wait for the 5G version, given the 4G signal strength in a few of the places I tend to use data.
We are going to be increasingly inclined to not accept new proposed features in the library core.
Alex/Carson – you’ve been implicitly pushing the idea that “complete” is a worthy goal, and this finally makes it explicit. Software driven by a cohesive philosophy always shines, and htmx is no exception. I’m very appreciative of what you are doing, along with the rest of the htmx contributors.
I don’t know if it was ever made formal policy, but I seem to remember that at one point a lead maintainer of RSpec opined that version 3 was basically the final form of the project. Upgrades have been pleasantly unexciting for about a decade now.
It’s not that they never add features. It’s that, at least since sqlite 3 (so for the past 20 years):
I never need to upgrade sqlite unless I’m feeling pain from a specific bug or want a specific new feature.
Code that I wrote against sqlite in 2004/2005 works just as well against sqlite now as it did against sqlite then.
I have no worries about reading a database now that I wrote in 2005. And I’m confident that I could write against 2025 sqlite and, if I’m careful about what features I use, read it with code that I wrote in 2005.
I think sqlite is very much an example of software driven by the cohesive philosophy that @jjude asked about. As @adriano said, it’s not necessarily feature complete, but features are added very carefully and deliberately. There aren’t many things I’m as confident using as I am in sqlite. It makes me happy that htmx (another thing I like a lot) aspires to that, but it’s got to keep going a long time to prove it’s in the same league. (I suspect it will.)
I’m not sure whether sqlite has a cohesive philosophy, but note that @jjude’s question, as I understand it, is about software with a cohesive philosophy; not necessarily software with feature completeness as its philosophy.
If I were to guess what the sqlite authors’ philosophy might be, it’s that the world needs a high quality SQL implementation that remains in the public domain.
Thus highlighting the issues with “feature-complete for stability’s sake”.
Things change. It is the one constant. If the software is static in a sense, then rolling a new major version or forking with this kind of “fix” is both reasonable and necessary for the long term needs.
I also have rosy memories of the last good Python in my mind (2.5), but realistically, it was always accreting features at a pretty fast rate. It’s just that a lot of people got stuck on 2.7 for a long time and didn’t observe it.
You have to go back further than that IMO. Pre-2.0 was when you could argue that Python was more or less adhering to its own Zen. To me the addition of list comprehensions marks the time when that ship sailed.
The biggest disadvantage of VSCode remote is that it needs to install a server on the target machine. If you’re remoting to a slightly weird OS (non-glibc based or some ARM platforms), the solution described in the article may be a better experience.
I’m in the Mac ecosystem (MacBook Pro, iPhone) so I use and love NetNewsWire - it syncs read state perfectly across my devices via iCloud and just works.
I was a huge Reeder fan until the author launched a new version that dropped most of the RSS sync options due to a new focus on subscribing to non-RSS sources. The old app is still around as “Reeder Classic,” but it’s effectively in maintenance mode.
NetNewsWire is slightly clunkier than Reeder Classic, but the author seems more invested in the types of functionality I want to have in an RSS reader.
Note that iCloud is only one of NetNewsWire’s subscription/read-state sync options: You can also sync with BazQux, Feedbin, Feedly, Inoreader, NewsBlur, The Old Reader, or self-hosted FreshRSS, or just keep the data locally. And like any good RSS app it can export and import subscriptions for a quick tryout.
I’m also on the Apple/Mac ecosystem, and I do love NetNewsWire (using iCloud to handle subscriptions and read state across my devices).
I also love that I can leave a Quick Note on articles on my iPad using the Apple Pencil (swiping up from the bottom-right corner). I use this feature in addition to starred articles whenever I want to add some personal thoughts to what I’m reading.
Interesting. Never knew that. Is there any equivalent for iPhone? I now star articles and then batch-export the entire articles to Obsidian to take notes.
I should evaluate XMPP again for my private messaging needs. I’m running a Matrix server right now, but XMPP seems a bit more “settled”, for lack of a better word. I wonder whether there’s a significant difference in security levels.
XMPP servers are designed with sensible defaults to discard data after a certain amount of time, and the data that’s sent to the client is offloaded to the client itself. The only downside is that if you migrate data, with the current client implementations you do it for the client and not for the server. Many people I know are skeptical about hosting Matrix servers because the server will mirror and/or cache a lot of data and you can’t remove it cleanly.
Sad to see the Matrix team repeating the lie that Bluesky is decentralized. When you look at the distinction between the two, it’s clear as day that we either need to stop calling Bluesky decentralized, or choose a new word for things that actually promote a network without megainstances and centralization.
Running a single Synapse server is enough to chat with your friends, completely in isolation. Two groups running Synapse can talk to each other without any interference from a third party, and you can self-host sydent or ma1sd or what have you for the identity API, too. You store and transmit the data required for conversations you participate in.
Running a Bluesky PDS, on the other hand, gives you control over your own data, to an extent, but Bluesky-the-company, or some other large entity, must be involved in order for you to talk to anyone else, because running a relay is expensive and legally risky. While you can argue this is technically “decentralized”, it’s qualitatively different from the way that things like Matrix and ActivityPub work.
Twitter has a single center. ATProto is designed to facilitate a network with a few centers rather than one, and views megarelays like bluesky.network as a success; a relay that isn’t enormous is a failure. Matrix, ActivityPub, and so forth are designed for a network with thousands of small “centers”, none of which need a complete view of the network, and the community tends to view mega-instances like mastodon.social, matrix.org, etc. as failures of the system.
IMO, there is a line drawn between ideologically pure decentralisation and mass adoption, and Bluesky falls right in the middle of this line.
Onboarding my friend onto Mastodon was a nightmare, explaining things like
pick an instance (don’t pick mastodon.social or any of the weird ones, because some instances refuse to federate with mastodon.social and a few other instances (the “you can move your account to another instance” argument does not work because most people just aren’t going to bother doing that))
if someone shares a link to a post while they are on a different instance than you, and you open the link, your login does not carry over.
mostly incompatible ActivityPub things can be viewed from the same place (PeerTube, Lemmy, and Mastodon feeds all together), but no app properly supports this.
to someone who just wants to “use something like twitter that is not twitter” is a fool’s errand.
Bluesky has managed to clear this hurdle (you download the Bluesky app and make an account; the same can also be said about threads.net, btw) by centralising a lot of its infrastructure into a cohesive brand, and it has therefore done something that neither Mastodon nor Matrix nor PGP has been able to do: actually get used by people.
This may be a fine argument for building centralized systems, but it’s not at all an argument for calling Bluesky “decentralized,” which is what the parent comment was arguing against.
Sure. But if it’s not actually decentralized, what’s the point? Just use a centralized service, which will have an even better user experience, such as not exposing private info like likes and blocks.
I am not really sure what you’re saying here. If people care about the benefits of decentralization, they need to use something actually decentralized; if people don’t care about the benefits of decentralization, they should use a centralized solution that doesn’t have the tradeoffs of something like Bluesky.
I guess I didn’t word my comment properly, I am sorry.
What I meant to say is that Bluesky, with all of its flaws right now, is a pretty good middle ground between ideologically pure decentralisation and mass-market appeal.
That’s the point: making a product technically superior doesn’t guarantee success. Matrix developers frequently have to explain why joining a channel can be slow - even slower than on the decades-old IRC network. However, they rarely acknowledge that this performance issue is one of several key barriers holding back wider adoption of Matrix.
Software that is limited, but simple to use, may be more appealing to the user and market than the reverse.
source
The OP explicitly cites faster room joins as something for the post-2.0 roadmap.
I’ll evaluate Matrix’s usability after version 2.0 is released - though I wouldn’t hold my breath for that. Perhaps Matrix should redesign their protocol specifications to require fewer dollars to implement or operate while matching the performance of contemporary chat platforms.
I am still on the side of thinking of Matrix as alpha-stage software in 2024.
Running a Bluesky PDS, on the other hand, gives you control over your own data, to an extent, but Bluesky-the-company, or some other large entity, must be involved in order for you to talk to anyone else, because running a relay is expensive and legally risky. While you can argue this is technically “decentralized”, it’s qualitatively different from the way that things like Matrix and ActivityPub work.
From your link, I gather that running a relay the size of the largest relay costs somewhere less than $200/month. I agree that would be expensive if I were doing it for myself and my friends group, but it seems rather inexpensive at the scale they’re talking about. I haven’t looked into running one other than reading the post you linked; are you arguing that it’d be impossible (or not useful) to run a small one?
And can you elaborate on how it’s more legally risky than running a synapse server or a mastodon instance?
Sure - but $200/mo is both an order of magnitude more than it costs me to run four GoToSocial instances, and as the author says,
my guess is that it could do an order of magnitude more event rate, but will run out of disk before too long (eg, in the next year).
can you elaborate on how it’s more legally risky than running a synapse server or a mastodon instance?
Not to put too fine a point on it, but, I trust my friends and the people we talk to not to post CSAM; I don’t have that trust relationship with everyone on the entire Bluesky network, which is an open-signup set.
Sure - but $200/mo is both an order of magnitude more than it costs me to run four GoToSocial instances, and as the author says,
my guess is that it could do an order of magnitude more event rate, but will run out of disk before too long (eg, in the next year).
I wasn’t disagreeing that it’s too expensive for a friends/family instance.
My question was really (because I don’t understand the setup well enough) whether it’s possible to run an instance 10% the size for 10% the cost. Do you know if small instances are possible given the protocol/federation arrangement?
Not to put too fine a point on it, but, I trust my friends and the people we talk to not to post CSAM; I don’t have that trust relationship with everyone on the entire Bluesky network, which is an open-signup set.
Does a federated Mastodon instance carry that same risk? That’s the part I’m trying to get my head around… I had kind of understood this to be about the same lift as a Mastodon instance, in most ways. Now that I hear it’s not, I’m trying to get my head around where the differences lie.
I like your characterization that “the community tends to view mega-instances like mastodon.social, matrix.org, etc. as failures of the system” when it comes to the community differences. Now I’m just trying to update my mental model of legal/tech differences.
Ah yeah, this is exactly the difference I’m trying to get at. The Bluesky architecture is designed around what they call a “big world” concept; each relay is supposed to have a complete view of the whole network, including from accounts and instances you’re not following, have never seen, have never interacted with you, etc. That’s what makes the cost explode so much for a relay; there’s not really such a thing as a “friends and family” instance. They pay some lip service to the idea of this in the protocol documentation, but there’s essentially no actual support for it in practice.
On a fedi instance, you only store posts (and media) from people you follow, including things boosted by those people, and posts fetched as parts of threads and so forth, so your exposure - both in terms of storage/processing cost and in terms of liability - is dramatically lessened. On a Matrix server, it’s even more so; you store only messages in rooms one of your users is in.
To elaborate a bit, the bluesky protocol (as currently implemented) makes it impossible to do a lot of important things without a big world relay - for example there’s no protocol-level notification mechanism for likes/replies/follows/etc, users are just expected to have a relay crawling the entire network for actions which concern them.
(That being said it’s just about possible to build a bare-minimum small-world “see what my friends are posting” thing, and I intend to do this at some point.)
For people only subscribed through RSS, I don’t see that anything more than a redirect is necessary for account migration.
As for signed posts… can’t we simply rely on HTTPS to your own domain to prevent man-in-the-middle modification of posts? If Bluesky requires you to have your own domain anyway…
Running a relay that subscribes to the entire global network is relatively expensive to run, but there’s no reason you couldn’t subscribe to and relay a smaller sub-graph of the network featuring only your friends.
This seems entirely tangential to the release of Matrix 2.0, but it’s perpetuating a big misconception about how AT Proto works.
I subscribe to the megarelay and it’s about 12 megabits per second at peak times. The burden of running a megarelay is relaying to all subscribers (which is NOT all PDSes – just appviews, feeds, and labelers) and storing recently broadcast (or all ever-broadcast) posts. The relay is not integral to the network (PDSes could & can just subscribe to each other); it’s a bolt-on for performance.
Plus there’s no reason you couldn’t be connected to a smaller relay that forwards your stuff upstream to the large firehose (so that the Bluesky AppView can see it) or not at all (so only subscribers to your alt relay can see it). Very differently from ActivityPub, the auth is at the data layer and not the request layer, so relays can feed into each other and rebroadcast without compromising message integrity.
Every time I try KDE, I hit showstopper bugs, usually random crashes or weird UI hangs that require restarting the desktop session. It doesn’t really matter how polished and featureful the thing is if it isn’t usable.
This sounds a bit to me like graphics driver problems. I had all of that and more with an Nvidia card (different but equally critical issues on both X and Wayland), but with an AMD one for the last ~6 months I’ve not had one glitch.
edit: plasma6 on NixOS in both cases; Wayland working fine for me with AMD so haven’t had to try X.
I haven’t hit any major crashes, but every time I’ve used it I could rack up a laundry list of papercut bugs in fifteen minutes of attempting to customize the desktop. In mid 2022 on openSUSE Tumbleweed I was able to get the taskbar to resize a few pixels just by opening a customization menu, get widgets stuck under other widgets, spawn secondary bars that couldn’t be moved, etc.
I find it really depends on the combination of distribution, X11 vs Wayland, and KDE version. I’ve had good luck with Debian – an older version of KDE (5.27) but quite stable. I tried Plasma 6 on both Fedora and Alpine and found it still a bit buggy.
I’m looking forward to trying out cosmic desktop once it is stable.
Behind the scenes, devenv now parses Nix’s internal logs to determine which files and directories were accessed during evaluation.
I’m curious if this would work with my Lix system. Logging format seems a little bit fragile. It’s not really a formally specified API with guarantees about backwards compatibility, right?
Yes please. After my adolescent phase of super obsessing over configurable dev tools (desktop Linux environments, text editors) I’ve concluded that 95% of the time, customizability is a farce for poor design. macOS, vscode/zed are wonderfully functional and I’m so much happier.
My vscode configuration is extensive; I’ve been using it for years. But I’ve spent very little time actually tweaking compared to Neovim. I go into the config to fix a specific niggle, and that’s it. It seems like having a turing complete configuration blows out the amount of time I end up thinking about config exponentially.
As one of the Fish developers said (in an issue discussion, sorry I can’t find the quote now):
Each need for a config option is a result of failing to do what the user wants for a given situation.
I’ve taken this to heart. Not least because you can’t remove it (without breaking people’s configs) when you later figure out that it ought to function completely differently.
I strongly agree with you, I like configurability as in “i want my editor to do X for me”, not “I need 7 plugins to get tree-sitter and lsp going” type of configurability. Tools should have good defaults.
I agree here, but also part of my motivation for sticking close to defaults is as a consultant, I’m often on other people’s systems. I’m sometimes lucky enough to install my preferred editor, configuring it on their system feels like a bridge too far.
Regardless of the governance politics, is determinate nix useful or pleasant compared to a plain old flake? Are people using this and flakehub much?
I have not used Determinate Nix, but I think you can infer their main selling points from their marketing copy: they provide some minor UX and DX improvements alongside a lot of enterprise integrations and support guarantees. Upstream Nix is not focused on catering to things like SOC2 compliance or MDM integration, so it makes sense for a private company to focus their efforts there.
Looking for the things I wanted, it seems like it still only supports paths, tarballs, Git, & Mercurial—with no support for mirrors to fall back to when servers inevitably go down. I was hoping these would have been addressed, given that Nixpkgs fetchers already go beyond these two limitations.
I don’t know about Linux, but it’s the best option on macOS. The uninstaller and ability to persist across major OS upgrades are really nice.
I think this is a great idea, but I am anticipating folks explaining why it isn’t.
The main argument against is that even if you assume good intentions, it won’t be as close to production as an hosted CI (e.g. database version, OS type and version, etc).
Lots of developers develop on macOS and deploy on Linux, and there’s tons of subtle difference between the two systems, such as case sensitivity of the filesystem, as well as default ordering just to give an example.
To me the point of CI isn’t to ensure devs ran the test suite before merging. It’s to provide an environment that will catch as many things as possible that a local run wouldn’t be able to catch.
I’m basically repeating my other comment but I’m amped up about how much I dislike this idea, probably because it would tank my productivity, and this was too good as example to pass up: the point of CI isn’t (just) to ensure I ran the test suite before merging - although that’s part of it, because what if I forgot? The bigger point, though, is to run the test suite so that I don’t have to.
I have a very, very low threshold for what’s acceptably fast for a test suite. Probably 5-10 seconds or less. If it’s slower than that, I’m simply not going to run the entire thing locally, basically ever. I’m gonna run the tests I care about, and then I’m going to push my changes and let CI either trigger auto-merge, or tell me if there’s other tests I should have cared about (oops!). In the meantime, I’m fully context switched away not even thinking about that PR, because the work is being done for me.
You’re definitely correct here but I think there are plenty of applications where you can like… just trust the intersection between app and os/arch is gonna work.
But now that I think about it, this is such a GH-bound project and like… any such app small enough in scope or value for this to be worth using can just use the free Actions minutes. Doubt they’d go over.
Yes, that’s the biggest thing that doesn’t make sense to me.
I get the argument that hosted runners are quite weak compared to many developer machines, but if your test suite is small enough to be run on a single machine, it can probably run about as fast if you parallelize your CI just a tiny bit.
I wonder if those differences are diminished if everything runs on Docker
With a fully containerized dev environment, yes, that pretty much abolishes the divergence in software configuration.
But there are more concerns than just that. Does your app rely on some caches? Dependencies?
Were they in a clean state?
I know it’s a bit of an extreme example, but I spend a lot of time using bundle open and editing my gems to debug stuff, it’s not rare I forget to gem pristine after an investigation.
This can lead me to have tests that pass on my machine, and will never work elsewhere. There are millions of scenarios like this one.
I was once rejected from a job (partly) because the Dockerfile I wrote for my code assignment didn’t build on the assessor’s Apple Silicon Mac. I had developed and tested on my x86-64 Linux device. Considering how much server software is built with the same pair of configurations just with the roles switched around, I’d say they aren’t diminished enough.
Was just about to point this out. I’ve seen a lot of bugs in aarch64 Linux software that don’t exist in x86-64 Linux software. You can run a container built for a non-native architecture through Docker’s compatibility layer, but it’s a pretty noticeable performance hit.
One of the things that I like about having a CI is the fact that it forces you to declare your dev environment programmatically. It means that you avoid the famous “works on my machine” issue, because if tests work on your machine but not in CI, something is missing.
There are of course ways to avoid this issue, maybe by enforcing that all dev tests also run in a controlled environment (either via Docker or maybe something like testcontainers), but it needs more discipline.
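As a sketch of what “declare the dependency inside the test itself” can look like, here is what that testcontainers idea might be, roughly, in Python. The module and method names are from memory and should be treated as approximate rather than as the package’s guaranteed API:

```python
# Hedged sketch: spin up a throwaway Postgres for the test itself, so the same
# test runs identically on a laptop and in CI. Names are approximate.
from testcontainers.postgres import PostgresContainer

def test_with_throwaway_postgres():
    # A disposable Postgres in Docker; a stale or hand-edited local database
    # can no longer make the tests lie.
    with PostgresContainer("postgres:16") as pg:
        database_url = pg.get_connection_url()
        # Point the app or ORM under test at database_url here.
        assert database_url.startswith("postgresql")
```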
This is by far the biggest plus side to CI. Missing external dependencies have bitten me before, but without CI, they’d bite me during deploy, rather than as a failed CI run. I’ve also run into issues specifically with native dependencies on Node, where it’d fetch the correct native dependency on my local machine, but fail to fetch it on CI, which likely means it would’ve failed in prod.
Here’s one: if you forget to check in a file, this won’t catch it.
It checks if the repo is not dirty, so it shouldn’t.
This is something “local CI” can check for. I’ve wanted this, so I added it to my build server tool (that normally runs on a remote machine) called ding. I’ll run something like “ding build make build” where “ding build” is the ci command, and “make build” is what it runs. It clones the current git repo into a temporary directory, and runs the command “make build” in it, sandboxed with bubblewrap.
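The core of that “clone, then build in isolation” idea fits in a few lines. This is a rough sketch rather than what ding actually does; it assumes you run it from the repo root, and it leaves the bubblewrap sandboxing as a comment:

```python
# Rough sketch of a "local CI" run: only committed files make it into the
# scratch checkout, so a forgotten `git add` fails here instead of later.
import subprocess
import sys
import tempfile

def local_ci(build_cmd: list[str]) -> int:
    with tempfile.TemporaryDirectory(prefix="local-ci-") as workdir:
        # Clone the current repo's committed state into a scratch directory.
        subprocess.run(["git", "clone", ".", workdir], check=True)
        # For stronger isolation, wrap the command with bubblewrap, e.g.:
        #   bwrap --ro-bind / / --bind <workdir> <workdir> --unshare-net ...
        return subprocess.run(build_cmd, cwd=workdir).returncode

if __name__ == "__main__":
    sys.exit(local_ci(sys.argv[1:] or ["make", "build"]))
```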
The point still stands that you can forget to run the local CI.
What’s to stop me from lying and making the gh api calls manually?
Westwood’s best title is missing: Red Alert 2
It’s been rumored that the source code for Red Alert 2 and Tiberian Sun has been lost. Its absence here would seem to all but confirm that, unless EA announces a remaster soon.
That’s also one of the most annoying because, while the game works under WINE, the installer doesn’t. I managed to get a fan-adapted version working but it crashes in the campaign. If I had a Windows machine lying around, I’d do a fresh install: the campaign worked fine in WINE last time I tried it but not with the patched version.
This is really neat. Does anyone know if it’s possible to expose some sort of browser widget to tk9_0 applications?
There’s Tkhtml, but I am guessing this is very far from being useful for the modern web. Might be fine for HTML based documentation and such.
The biggest issue I’ve had running Nix in a VM on my MacBook is that NixOS AArch64 doesn’t seem to be quite as battle tested as Nix-Darwin or NixOS x86-64. Occasionally I’ll run into packages that just don’t seem to work on that architecture yet.
Yeah fair point. That’s why it’s great to be able to run x86-64 packages in the VM too, see the “x86 Virtualization” chapter of the linked post.
aarch64-linux is one of the four supported architectures. I’ve had pretty good luck with it myself. Though it’s rare for nixpkgs maintainers to run all variations of the systems, so things do get missed. I’d encourage creating issues and pinging maintainers if one of those four systems doesn’t work, or if you know how to fix it then opening a PR.
The wording of the blog post confused me, because in my mind “FFI” (Foreign Function Interface) usually means “whatever the language provides to call C code”, so in particular the comparison between “the function written as a C extension” and “the function written using the FFI” is confusing as they sound like they are talking about the exact same thing.
The author is talking specifically about a Ruby library called FFI, where people write Ruby code that describes C functions and then the library is able to call them. I would guess that it interprets foreign calls, in the sense that the FFI library (maybe?) inspects the Ruby-side data describing their interface on each call, to run appropriate datatype-conversion logic before and after the call. I suppose that JIT-ing is meant to remove this interpretation overhead – which is probably costly for very fast functions, but not that noticeable for longer-running C functions. Details about this would have helped me follow the blog post, and probably other people unfamiliar with the specific Ruby library called FFI.
Replying to myself: I wonder why the author needs to generate assembly code for this. I would assume that it should be possible, on the first call to this function, to output C code (using the usual Ruby runtime libraries that people use for C extensions) to a file, call the C compiler to produce a dynamic library, dlopen the result, and then (on this first call and all future calls) just call the library code. This would probably get similar performance benefits and be much more portable.
I would guess because that would require shipping a C compiler? Ruby FFI/C extensions are compiled before runtime; the only thing you need to ship to prod is your code, Ruby, and the build artifacts.
This is essentially how MJIT worked.
https://www.ruby-lang.org/en/news/2018/12/06/ruby-2-6-0-rc1-released/ https://github.com/vnmakarov/ruby/tree/rtl_mjit_branch#mjit-organization
Ruby has since evolved very fast on the JIT side, spawning YJIT, RJIT, now FJIT…
I’m also not sure the portability is needed here. Ruby is predominantly run on x86, at least at the scale where these optimisations matter.
Apple Silicon exists and is quite popular for development
You’re correct. I was referring to deployment systems (where the last bit of performance matters) and should have been clearer about that.
Even in production, ARM64 is getting more common these days, because of AWS Graviton et al.
But yes, x86_64 is still the overwhelming majority of production deployments.
Yeah, that’s why I wrote “predominantly”. Also, for such a localised JIT, a second port to aarch64 is not that hard. You just won’t have an eBPF port falling out of your compiler (this is absurd for effect, I know this isn’t a reasonable thing).
Note that the point here is not to JIT arbitrary Ruby code, which is probably quite hard, but a “tiny JIT” to use compilation rather than interpretation for the FFI wrappers around external calls. (In fact it’s probably feasible, if a bit less convenient, to set things up to compile these wrappers ahead-of-time.)
A text editor with the flexibility of Emacs but designed to support GUI applications really well, to the extent that most apps could be rewritten to be embedded inside it. The benefit to this would be allowing programmable workflows and easier inter program communication.
This was more or less Atom, the granddaddy of Electron applications. The primary disadvantage of its wild west plug-in model was that it was possible for extensions to do inefficient things on the main editing thread, making the whole application sluggish. It’d be cool to see a more sandboxed approach, with clear channels for inter-extension communication.
I came in to write basically the same as https://lobste.rs/~crmsnbleyd.
I think Eclipse comes closer. So basically I want Eclipse but in a nice language.
I want to add a scheme or lisp interpreter to Zed and turn it into an Emacs.
A terminal editor like vim (which I use) but completely redesigned to be more 2025-aware, or even more 1990s-aware :D With text menus for certain things that are hard to remember, integration with LLMs, tabs and so forth. I don’t much like going outside the terminal app, nor IDE-alike stuff.
You’re looking for helix.
Thanks, just downloaded, I’m trying it.
Came here to say this.
As soon as helix gets vim keybindings, I’ll use it.
I gave the helix/kakoune bindings a college try. Did not like them at all, the way it deals with trailing spaces after words messes up my workflow.
But the LSP integration and over all more modern interface is just so much better than neovim.
There is a fork called evil-helix, not sure how good it is
Helix is never getting vim keybindings, unfortunately. You can get Neovim to a comparable place, but it takes quite a bit of work. telescope.nvim for improved UI elements, which-key.nvim to display hints for key sequences, coc.nvim for code completion… and you may still need to mess around with LSP configs.
I’m a fairly happy neovim user; I have all the functionality I need and none of what I don’t, in 300 or so lines of lua and a few plugins.
But I have to admit that interacting with the helix “shell” (I don’t know if that’s the term; the colon command stuff) is much nicer than the vi legacy stuff. They’ve thought it through very nicely.
Why won’t helix get vim keybindings? Last I heard they were going to embed a scheme in it, I thought they were going full bore on configurability.
The Helix project’s official stance is that they’re uninterested in enabling alternative editing paradigms. I would assume this philosophy will extend to the functionality exposed to the plugin system, although we won’t know for sure until said plugin system actually exists.
There’s a vim-keybinding plugin for VS code, ever try it out? I found it not perfect, but quite satisfying. Doesn’t necessarily integrate well with menus and stuff though.
what’s vscode got to do with terminal editors? :)
I missed that bit! Though it does better than most graphical editors, since you can tunnel it over SSH to work on a remote system. Not perfect, but works pretty well.
I feel like helix tends a bit toward the IDE-alike direction. But OP also asks for “integration with LLMs” which is another thing I’d say tends toward the IDE-alike direction, so I can’t say I’m sure what @antirez means by that criterion.
Or kak or zed or … what xi aspires to be, hows it doing?
Until they finish the plugin system, there’s no LLM integration yet. I am aware of workarounds like helix-gpt.
Have you seen flow?
I’ve fully converted all my terminal editing over to flow, it just feels right
What about emacs in the terminal with evil mode? That would hit a lot of your points.
emacs definitely feels like an IDE to me when I’ve tried it. Even without a pile of plugins it still loads noticeably slower than the other terminal editors I’ve tried like vim, and then when I’ve asked about this, people tell me I need to learn “emacs server” which seems even more IDE-tastic. Lisp is cool and undoubtedly there’s a lot of power & depth there, but like Nix I think I need to make it part of my personality to realize the benefits.
I don’t even think you need to run emacs as a server, just have a little patience. Even with a huge framework’s worth of plugins my Emacs is useful in a couple of seconds.
For my habits and workflow, 2 seconds to wait for a text file is completely unacceptable. What’s next, 2 seconds to open a new shell or change directories or wait for ls? Please do not advise me to “just” lower my standards and change my workflow to excuse poor software performance.
For me, Emacs is started in daemon-mode when I log in (with systemd, but could be done perfectly well with one line in .xinitrc or even .profile). Then if I’m invoking it from the shell, I have a script that pops up a terminal frame instantly, and is suitable for using as $VISUAL for mail clients and such. I’m absolutely not trying to say you should do this, just that it’s the standard way for using a terminal Emacs in the “run editor, edit, quit” workflow (as opposed to the “never leave Emacs” workflow).
It’s ridiculous to call loading an editing environment like Emacs “poor performance” with the features vs say Helix. Sometimes I use Helix for command line edits, usually I use Emacs for file editing. If you’re going to need all the fancy features of an editor you can wait <2 seconds like an adult, or use vim like an adult, but if you’re needing LLM integration and can’t wait shorter than a breath for your editor to load that is a you problem, not a problem with your editor.
You really don’t need to make it part of your personality, but you do need to install it so you’ve got a server, and now make sure you’re running a version with aot compilation.
In fairness, I don’t think many people use stock vim or vs code as their daily driver either, because there are rich returns to mildly customizing your editor and investing in knowing it well.
I’m actually pretty curious about this. Does anyone know if there’s data on this?
I think my only Vim config for a long time was “set nocompatible” and “syntax on” (both of which I don’t seem to need these days). Oh, and “set nobackup”. VS Code, I think I only disabled wordwrap. Emacs required a bit of customization (cua-mode, SLIME, selecting a non-bright-white theme), but I’ve used it a lot less than Vim and VS Code. I used to be big on customization, but I guess I never got into customizing editors–not sure why…
I’ve been plugging lazyvim hard of late, and I think it’d fit your needs without leaving vim and not spending an eternity configuring stuff:
which-key + most things being bound to leader (i.e. space) make it super easy to find how to do things, and autocomplete on the command line takes you the rest of the way
Comes with one-button setup for copilot, supermaven and a couple others. IME it’s not as good as VS code but gets pretty close; I’ve definitely not felt IDE AI features give me enough value to switch.
Tabs by default and very sensible keybinds for switching between them! https://www.lazyvim.org/configuration/tips#navigating-around-multiple-buffers
try helix or kakoune
Vim has had built-in tabs for quite a few years now, by the way.
Moreover, you might want to read the Vim help on menu support, there is at least something for terminal mode.
Neovim seems to have built-in support for a terminal split, by the way.
Yes, since 2006 to be more precise, so vim added support for tabs 19 years ago.
I know about terminal split, but I don’t believe that’s the way. It was much better with 1990s MS-DOS editors… where they tried to bring you some decent terminal UI. Btw I see that I was suggested to look at a few projects, I’ll do! Thanks.
Of course — the terminal part was just «if I am listing features that can be used nowadays without relearning the editor, I can as well mention that». But overall the situation with new features in Vim or NeoVim is better than you described, so in case you find some dealbreakers with mentioned editors — hopefully not but who knows (I do find dealbreakers there) — you can get at least some of the way from Vim.
zed has a great vim mode (seriously)
I remember John Carmack describing in one of his Doom 3 talks how he was shocked to discover that he made a mistake in the game loop that caused one needless frame of input latency. To his great relief, he discovered it just in time to fix it before the game shipped. He cares about every single millisecond. Meanwhile, the display server and compositing window manager introduce latency left and right. It’s painful to see how the computing world is devolving in many areas, particularly in usability and performance.
I will say the unpopular-but-true thing here: Carmack probably was wrong to do that, and you would be just as wrong to adopt that philosophy today. The bookkeeping counting-bytes-and-cycles side of programming is, in the truest Brooksian sense, accidental complexity which we ought to try to vanquish in order to better attack the essential complexity of the problems we work on.
There are still, occasionally, times and places when being a Scrooge, sitting in his counting-house and begrudging every last ha’penny of expenditure, is forced on a programmer, but they are not as common as is commonly thought. Even in game programming – always brought up as the last bastion of Performance-Carers who Care About Performance™ – the overwhelming majority very obviously don’t actually care about performance the way Carmack or Muratori do, and don’t have to care and haven’t had to for years. “Yeah, but will it run Crysis?” reached meme status nearly 20 years ago!
The point of advances in hardware has not been to cause us to become ever more Scrooge-like, but to free us from having to be Scrooges in the first place. Much as Scrooge himself became a kindly and generous man after the visitation of the spirits, we too can become kinder and have more generous performance budgets after being visited by even moderately modern hardware,
(and the examples of old software so often held up as paragons of Caring About Performance are basically just survivorship bias anyway – the average piece of software always had average performance for and in its era, and we forget how much mediocre stuff was out there while holding up only one or two extreme outliers which were in no way representative of programming practice at the time of their creation)
There is certainly a version of performance optimization where the juice is not worth the squeeze, but is there any indication that Carmack’s approach fell into that category? The given example of “a mistake in the game loop that caused one needless frame of input latency” seems like a bug that definitely should have been fixed.
I’m having a hard time following your reasons for saying Carmack was “wrong” to care so much about performance. Is there some way in which the world would be better if he didn’t? Are you saying he should have cared about something else more?
16ms of input latency is enormous for a fast-paced, mouse-driven game; definitely something the player can notice.
There are different kinds of complexity. Everything in engineering is about compromises. If you decide to trade some latency for some other benefit, that’s fine. If you introduce latency because you weren’t modelling it in your trade-off space, that’s quite another.
The amount of people complaining about game performance in literally any game forum, Steam reviews / comments / whatnot obviously shows that to be wrong. Businesses don’t care about performance, but actual human beings do care; the problem is the constantly increasing disconnect between business and people.
Minecraft – the best-selling video game of all time – is known for both its horrid performance and for being almost universally beloved by players.
The idea that “business” is somehow forcing this onto people (especially when Minecraft started out and initially exploded in popularity as an indie game with even worse performance than it has today) is just not supported by empirical reality, sorry.
But the success is despite the game’s terrible performance, not thanks to it. Or do you think if you asked people if they would prefer minecraft to be faster they would say no ? If it was not a problem then a mod that does a marginal performance improvement certainly would not have 10M downloads: https://modrinth.com/mod/moreculling . So people definitely do care ; they just don’t have a choice because if you want to play “minecraft” with your friends this is your only option. Just like for instance Slack, Gitlab or Jira are absolutely terrible but you don’t have a choice to use it because that’s where your coworkers are.
I don’t know of any game that succeeded because of their great performance, but I know of plenty that have succeeded despite their horrible performance. While performance can improve player satisfaction, for games, it’s a secondary measure of success, and it’s foolish to focus on it without having the rest of the game being good to play. It’s the case for most other software as well - most of the time, it’s “do the job well, in a convenient to use way, and preferably fast”. There’s fairly few problems where the main factor for software solving it is their speed first.
… every competitive shooter ? you think counter-strike would have succeeded if it had the performance of, say, neverwinter nights 2 ?
Bad performance can kill a decent game. Good performance cannot bring success to an otherwise mediocre game. If it worked that way, my simple games that run at ~1000FPS would have taken over the world already.
Even if a game was written by an entire army of Carmacks and Muratoris squeezing every last bit of performance they could get, people would almost certainly answer “yes” to “would you prefer it to be faster”. It’s a meaningless question, because nobody says no to it even when the performance is already very good.
And the fact that Minecraft succeeded as an indie game based on people loving its gameplay even though it had terrible performance really and truly does put the lie to the notion that game dev is somehow a unique performance-carer industry or that people who play games are somehow super uniquely sensitive to performance. Gamers routinely accept things that are way worse than the sins of your least favorite Electron app or React SPA.
I think a more generous interpretation of the hypothetical would be to phrase the question as: “Do you think the performance of Minecraft is a problem?”
In that scenario, I would imagine that even people who love the game would likely say yes. At the same time, if you asked that question about some Carmack-ified game, you might get mostly “no” responses.
how is accepting things an argument for anything ? we are better than this as a species
Can you clarify the claim that you are making, and why the chosen example has any bearing on it? Obviously gaming is different from other industries in some ways and the same in other ways.
I think the Scrooge analogy only works in some cases. Scrooge was free to become more generous because he was dealing with his own money. In the same way, when writing programs that run on our own servers, we should feel free to trade efficiency for other things if we wish. But when writing programs that run on our users’ machines, the resources, whether RAM or battery life, aren’t ours to take, so we should be as sparing with them as possible while still doing what we need to do.
Unfortunately, that last phrase, “while still doing what we need to do”, is doing a lot of work there. I have myself shipped a desktop app that uses Electron, because there was a need to get it out quickly, both to make money for my (small, bootstrapped) company and to solve a problem which no other product has solved. But I’ve still put in some small efforts here and there to make the app frugal for an Electron app, while not nearly as frugal as it would be if it were fully native.
I used to be passionate about this too, but I really think villainizing accidental complexity is a false idol. Accidental complexity is the domain of the programmer. We will always have to translate some idealized functionality into a physically executable system. And that system should be fast. And that will always mean reorganizing the data structures and algorithms to be more performant.
My point of view today is that implementation details should be completely embraced, and we should build software that takes advantage of its environment to the fullest. The best way to do this while also retaining the essential complexity of the domain is by completely separating specification from implementation. I believe we should be writing executable specifications and using them in model-based tests on the real implementation. The specifications disregard implementation details, making them much smaller and more comprehensible.
I have working examples of doing this if this sounds interesting, or even farfetched.
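Not the commenter’s actual examples, but a minimal sketch of the general idea of model-based testing against an executable specification: a plain dict plays the role of the spec, a made-up KeyValueStore class stands in for the “real” optimised implementation, and random operation sequences check that the two agree.

```python
# Minimal sketch of model-based testing: the specification ignores performance
# and implementation detail; the implementation is whatever you actually ship.
import random

class KeyValueStore:
    """Pretend this is the clever, performance-tuned implementation."""
    def __init__(self):
        self._slots = {}
    def put(self, key, value):
        self._slots[key] = value
    def get(self, key):
        return self._slots.get(key)

def run_model_based_test(steps=1000, seed=0):
    rng = random.Random(seed)
    spec = {}                # executable specification: obviously correct
    impl = KeyValueStore()   # implementation under test
    for _ in range(steps):
        key = rng.randrange(8)
        if rng.random() < 0.5:
            value = rng.randrange(100)
            spec[key] = value
            impl.put(key, value)
        else:
            assert impl.get(key) == spec.get(key), f"divergence on key {key}"

if __name__ == "__main__":
    run_model_based_test()
    print("implementation agrees with the specification")
```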
I agree with this view. I used to be enamored by the ideas of Domain Driven Design (referring to the code implementation aspects here and not the human aspects) and Clean/Hexagonal Architecture and whatever other similar design philosophies where the shape of your actual code is supposed to mirror the shape of the domain concepts.
One of the easiest ways to break that spell is to try to work on a system with a SQL database where there are a lot of tables with a lot of relations, where ACID matters (e.g., you actually understand and leverage your transaction isolation settings), and where performance matters (e.g., many records, can’t just SELECT * from every JOINed table, etc).
I don’t know where I first heard the term, but I really like to refer to “mechanical sympathy”. Don’t write code that exactly mirrors your business logic; your job as a programmer is to translate the business logic into machine instructions, not to translate business logic into business logic. So, write instructions that will run well on the machine.
Everything is a tradeoff. For example, in C++, when you create a vector and grow it, it is automatically zeroed. You could improve performance by using a plain array that you allocate yourself. I usually forgo this optimization because it costs time and often makes the code more unpleasant to work with. I also don’t go and optimize the assembly by hand, unless there is no other way to achieve what I want. With that being said, performance is a killer feature and lack of performance can kill a product. We absolutely need developers who are more educated in performance matters. Performance problems don’t just cripple our own industry, they cripple the whole world which relies on software. I think the mindset you described here is defeatist and, if it proliferates, will lead to worse software.
This one isn’t actually clear cut. Most modern CPUs do store-allocate in L1. If you write an entire L1 line in the window of the store buffer, it will materialise the line in L1 without fetching from memory or a remote cache (just sending out some broadcast invalidates if the line is in someone else’s cache). If you zero, this will definitely happen. If you don’t and initialise piecemeal, you may hit the same optimisation, but you may end up pulling in data from memory and then overwriting it.
If the array is big and you do this, you may find that it’s triggering some page faults eagerly to allocate the underlying storage. If you were going to use only a small amount of the total space, this will increase memory usage and hurt your cache. If you use all of it, then the kernel may see that you’ve rapidly faulted on two adjacent pages and eagerly handle a bit more in the page-fault handler. This pre-faulting may also move page faults off some later path and reduce jitter.
Both approaches will be faster in some settings.
Ah, you must be one of those “Performance-Carers who Care About Performance™” ;)
This is so often the case, and it always worries me that attitudes like the GP lead to people not even knowing about how to properly benchmark and performance analyse anymore. Not too long ago I showed somebody who was an L4 SWE-SRE at Google a flamegraph - and he had never seen one before!
Sometimes, and that’s the important bit. Performance is one of the things that I can optimise for, sometimes it’s not the right thing. I recently wrote a document processing framework for my next book. It runs all of its passes in Lua. It simplifies memory management by doing a load of copies of std::string. For a 200+ page book, well under one second of execution time is spent in all of that code, the vast majority is spent in libclang parsing all of the C++ examples and building semantic markup from them. The code is optimised for me to be able to easily add lowerings from new kinds of semantic markup to semantic HTML or typeset PDF, not for performance.
Similarly, a lot of what I work on now is an embedded platform. Microcontrollers are insanely fast relative to memory sizes these days. The computers I learned to program on had a bit less memory, but CPUs that were two orders of magnitude slower. So the main thing I care about is code and data size. If an O(n) algorithm is smaller than an O(log(n)) one, I may still prefer it because I know n is probably 1, and never more than 8 in a lot of cases.
But when I do want to optimise for performance, I want to understand why things are slow and how to fix it. I learned this lesson as a PhD student, where my PhD supervisor gave me some code that avoided passing things in parameters down deep function calls and stored them in globals instead. On the old machine he’d written it for, that was a speedup. Parameters were all passed on the stack and globals were fast to access (no PIC, load a global was just load from a hard-coded address). On the newer machines, it meant things had to go via a slower sequence for PC-relative loads and the accesses to globals impeded SSA construction and so inhibited a load of optimisation. Passing the state down as parameters kept it in registers and enabled local reasoning in the compiler. Undoing his optimisation gave me a 20% speedup. Introducing his optimisation gave him a greater speedup on the hardware that he originally used.
I know how to and I teach it to people I work with. Just recently at work I rebuilt a major service, cut the DB queries it was doing by a factor of about 4 in the process, and it went from multi-second to single-digit-millisecond p95 response times.
But I also don’t pull constant all-nighters worrying that there might be some tiny bit of performance I left on the table, or switching from “slow” to “faster” programming languages, or really any of the stuff people always allege I ought to be doing if I really “care about performance”. I approach a project with a reasonable baseline performance budget, and if I’m within that then I leave it alone and move on to the next thing. I’m not going to wake up in a cold sweat wondering if maybe I could have shaved another picosecond somewhere.
And the fact that you can’t really respond to or engage with criticism of hyper-obsession with performance (or, you can but only through sneering strawmen) isn’t really helpful, y’know?
How were we supposed to know that you were criticizing “hyper-obsession” that leads to all-nighters, worry, and loss of sleep over shaving off picoseconds? From your other post it sounded like you were criticizing Carmack’s approach, and I haven’t seen any indication that it corresponds to the “hyper-obsession” you describe.
Where’s the strawman really?
I did a consulting gig a few years ago where just switching from zeroing with std::vector to pre-zeroed with calloc was a double-digit % improvement on Linux.
I think the answer is somewhere in the middle: should game programmers in general care? Maybe too broad of a statement. Does id Software, producer of top-of-the-class, extremely fast shooters, benefit from someone who cares so deeply about making sure their games are super snappy? Probably yes.
You think that’s bad? Consider the advent of “web-apps” for everything.
On anything other than an M-series Apple computer they feel sluggish, even with absurd computer specifications. The largest improvement I felt going from a i9-9900K to an M1 was that Slack suddenly felt like a native app, going back to my old PC felt like going back to the 90’s.
I would love to dig into why.
The bit that was really shocking to me was how ctrl-1 and ctrl-2 (switching Slack workspaces) took around a second on a powerful AMD laptop on Linux.
At work we use Matrix/Element. It has its share of issues but the performance isn’t nearly as bad.
I don’t really see how switching tabs inside a program is really related to the DRM subsystem, or to Kernel Mode Setting.
I thought they were mentioning ctrl-alt-F1/F2 switching (virtual terminals), which used to be indeed slow.
My bad.
There is a wide spectrum of performance in Electron apps, although it’s mostly VS Code versus everyone else. VS Code is not particularly snappy, but it’s decent. Discord also feels faster than other messengers. The rest of the highly interactive webapps I use are unbearably sluggish.
So I think these measured 6ms are irrelevant. I’m on Wayland Gnome and everything feels snappy except highly interactive webapps. Even my 10-year-old laptop felt great, but I retired it because some webapps were too painful (while compiling Rust felt… OK? Also browsing non-JS content sites was great).
Heck, my favorite comparison is to run Q2 on WASM. How can that feel so much snappier than a chat application like Slack?
I got so accustomed to the latency, when I use something with nearly zero latency (e.g. an 80’s computer with CRT), I get the surreal impression that the character appeared before I pressed the button.
I had the same feeling recently with a Commodore 64.
It really was striking how a computer with less power than the microcontroller in my keyboard could feel so fast, but obviously when you actually give it an instruction to think about, the limitations of the computer are clear.
EDIT: Oh hey, I wasn’t kidding.
The CPU in my keyboard is 16MHz: ControllerBoard Microcontroller PDF Datasheet
The CPU in the Commodore 64 I was using was 0.9-1MHz: https://en.wikipedia.org/wiki/MOS_Technology_6510
As a user on smaller platforms without native apps, I will gladly take a web app or PWA over no access.
In the ’90s almost every personal computer was running Microsoft Windows on x86, with almost everyone using one of about five screen resolutions, so it was more reasonable to make a single app for a single CPU architecture & call it a day. Also, security was an afterthought. To support all of these newer platforms, architectures, and device types, & have the code in a sandbox, going the HTML + CSS + JavaScript route is a tradeoff many are willing to take for portability, since browsers are ubiquitous. The weird thing is that a web app doesn’t have to be slow, & not every application has the same demands to warrant a native release.
Having been around the BSD, and the Linux block 20+ years ago, I share the sentiment. Quirky and/or slow apps are annoying, but still more efficient than no apps.
Besides, as far as UIs go, “native” is just… a moderately useful description at this point. macOS is the only one that’s sort of there, but that wasn’t always the case in all this time, either (remember when it shipped with two toolkits and three themes?). Windows has like three generations of UI toolkits, and one of the two on which the *nix world has mostly converged is frequently used along with things like Kirigami, making it native in the sense that it all eventually goes through some low-level Qt drawing code and color schemes kind of work, but that’s about it.
Don’t get me wrong, I definitely prefer a unified “native” experience; even several native options were tolerable, like back when you could tell a Windows 3.x-era application from other Windows 98 applications because the Open file… dialog looked different and whatnot, but keybindings were generally the same, widgets mostly worked the same etc.
But that’s a lost cause, this is not how applications are developed anymore – both because developers have lost interest in it and because most platform developers (in the wide sense – e.g. Microsoft) have lost interest in it. A rich, native framework is one of the most complex types of software to maintain, with some of the highest validation and maintenance costs. Building one, only to find almost everyone avoids it due to portability or vendor lock-in concerns unless they literally don’t have a choice, and that even then they try to use as little of it as humanly possible, is not a very good use of already scant resources in an age where most of the profit is in mobile and services, not desktop.
You can focus on the bad and point out that the vast majority of Electron applications out there are slow, inconsistent, and their UIs suck. Which is true, but you can also focus on the good and point out that the corpus of Electron applications we have now is a lot wider and more capable than their Xaw/Motif/Wx/Xforms/GTK/Qt/a million others – such consistency, much wow! – equivalents from 25 years ago, whose UIs also sucked.
I loved my M1 Air. I swapped it out for a (lighter) HP Aero which is more repairable and upgradable (and plays more video games). But the build quality, screen, speakers, battery, and finger print sensor don’t have anything on the Air. If Apple ever offers an 11 inch laptop again, I’ll probably jump on it.
Still rocking an iPhone 12 mini. The modern web can be pretty brutal on it at times: pages crashing, browser freezing for 10s at a time. It has honestly curtailed my web use on the go significantly, so I’m mostly okay with it on the whole.
Most things I absolutely need I can get an app for that will run better usually. The only real issue is the small screen size is obviously not being designed for anymore, and that’s becoming more of an issue.
I didn’t realize how many apps were essentially web applications until I enabled iOS lockdown mode. Suddenly I was having to add exceptions left and right for chat apps, my notes app, my Bible app, etc.
But even web-powered apps do seem snappier than most websites. Maybe they’re loading less advertising/analytics code on the fly?
I’m on a 2022 iPhone SE, and feel the same way. (My screen may be a bit smaller than yours?) The device is plenty fast, but it’s becoming increasingly clear that neither web designers nor app developers test much if at all on the screen size, and it can be impossible to access important controls.
TBH, I would cheerfully carry a flip phone with the ability to let other devices tether to it for data connectivity. Almost any time I really care about using the web, I have a tablet or a laptop in a bag nearby. A thing that I could talk on as needed and that could supply GPS and data to another thing in my bag would really be a sweet spot for me.
Maybe you want a Lightphone? They can tether.
That is exactly the kind of thing I’d like. I’d probably need to wait for the 5G version, given the 4G signal strength in a few of the places I tend to use data.
Alex/Carson – you’ve been implicitly pushing the idea that “complete” is a worthy goal, and this finally makes it explicit. Software driven by a cohesive philosophy always shines, and htmx is no exception. I’m very appreciative of what you are doing, along with the rest of the htmx contributors.
any other examples of such software available in public?
sqlite?
I don’t know if it was ever made formal policy, but I seem to remember that at one point a lead maintainer of RSpec opined that version 3 was basically the final form of the project. Upgrades have been pleasantly unexciting for about a decade now.
It’s not that they never add features. It’s that, at least since sqlite 3 (so for the past 20 years):
I think sqlite is very much an example of software driven by the cohesive philosophy that @jjude asked about. As @adriano said, it’s not necessarily feature complete, but features are added very carefully and deliberately. There aren’t many things I’m as confident using as I am in sqlite. It makes me happy that htmx (another thing I like a lot) aspires to that, but it’s got to keep going a long time to prove it’s in the same league. (I suspect it will.)
I’m not sure whether sqlite has a cohesive philosophy, but note that @jjude’s question, as I understand it, is about software with a cohesive philosophy; not necessarily software with feature completeness as its philosophy.
If I were to guess what the sqlite authors’ philosophy might be, it’s that the world needs a high quality SQL implementation that remains in the public domain.
TeX - Knuth said many many times that it was feature-complete.
And yet the default configuration flames anyone daring to have a non-ASCII character in their own name.
Stability also means that defaults in many cases can’t be changed, otherwise you could break existing users.
Thus highlighting the issues with “feature-complete for stability’s sake”.
Things change. It is the one constant. If the software is static in a sense, then rolling a new major version or forking with this kind of “fix” is both reasonable and necessary for the long term needs.
From the answers it seems the boring tech would be: go + htmx + sqlite.
Thanks.
Common Lisp, I think. If I were starting a fresh Web project I’d look to Common Lisp + htmx. Probably SBCL.
Emacs. You can often find a package written (and forgotten) easily over ten years ago and it’s highly likely it will just work.
Go. Python used to be like that too before it got huge.
I also have rosy memories of the last good Python in my mind (2.5), but realistically, it was always accreting features at a pretty fast rate. It’s just a lot of people got stuck on 2.7 for a long time and didn’t observe it.
You have to go back further than that IMO. Pre-2.0 was when you could argue that Python was more or less adhering to its own Zen. To me, the addition of list comprehensions marks the moment that ship set sail.
I would say the “unix philosophy” is the most central one, guiding hundreds of terminal apps in the POSIX standard set and beyond.
Common Lisp, perhaps?
Hare, although it’s not finished yet. https://harelang.org/blog/2023-11-08-100-year-language/
Didn’t Paul Graham once say that about his lisp?
I just use Visual Studio Code with the Remote - SSH extension… Seems simpler, but I guess if you hate VS this is an option…
The biggest disadvantage of VSCode remote is that it needs to install a server on the target machine. If you’re remoting to a slightly weird OS (non-glibc based or some ARM platforms), the solution described in the article may be a better experience.
Great point.
I’m in the Mac ecosystem (MacBook Pro, iPhone) so I use and love NetNewsWire - it syncs read state perfectly across my devices via iCloud and just works.
I’m also all in on Apple so I use Reeder, and can recommend it unreservedly.
I was a huge Reeder fan until the author launched a new version that dropped most of the RSS sync options due to a new focus on subscribing to non-RSS sources. The old app is still around as “Reeder Classic,” but it’s effectively in maintenance mode.
NetNewsWire is slightly clunkier than Reeder Classic, but the author seems more invested in the types of functionality I want to have in an RSS reader.
Yeah that was super annoying but whichever sync service I used was still supported so it missed me. I should try NNW again.
Note that iCloud is only one of NetNewsWire’s subscription/read-state sync options: You can also sync with BazQux, Feedbin, Feedly, Inoreader, NewsBlur, The Old Reader, or self-hosted FreshRSS, or just keep the data locally. And like any good RSS app it can export and import subscriptions for a quick tryout.
I’m also on the Apple/Mac ecosystem, and I do love NetNewsWire (using iCloud to handle subscriptions and read state across my devices). I also do love that I can leave a Quick Note on my iPad using the Apple Pencil on articles (swiping from the right bottom corner up), I use this feature in addition to the starred articles whenever I want to add some personal thoughts to what I’m reading.
Interesting. Never knew that. Is there any equivalent one for iPhone? I currently star articles and then batch-export the entire articles to Obsidian to take notes.
Looks like there isn’t a quick note gesture on iOS. It’s accessed from the share sheet or a control center icon.
https://support.apple.com/guide/iphone/use-quick-notes-iph5084c0387/ios
Same, used it way back when and used it ever since they remade or rereleased it.
I’ve been using it Mac only for years, I never realized there was a iOS app too. Thanks!
Same!
I should evaluate XMPP again for my private messaging needs. I’m running a Matrix server right now, but XMPP seems a bit more “settled”, for lack of a better word. I wonder whether there’s a significant difference in security levels.
Recommend Prosody as a server, ime it’s ideal for a small to medium self-hosted system.
Prosody is great; fast and small. Very well organized, easy to read code. I wrote a plugin to integrate it with a company VoIP phone service, once.
XMPP servers are designed with sensible defaults to discard data after a certain amount of time, and the data that’s sent to the client is offloaded to the client itself. The only downside is that if you ever migrate, with the current client implementations you’ll be migrating data from the client and not from the server. Many people I know are skeptical about hosting Matrix servers because the server will mirror and/or cache a lot of data and you can’t remove it cleanly.
Sad to see the Matrix team repeating the lie that Bluesky is decentralized. When you look at the distinction between the two, it’s clear as day that we either need to stop calling Bluesky decentralized, or choose a new word for things that actually promote a network without megainstances and centralization.
Running a single Synapse server is enough to chat with your friends, completely in isolation. Two groups running Synapse can talk to each other without any interference from a third party, and you can self-host sydent or ma1sd or what have you for the identity API, too. You store and transmit the data required for conversations you participate in.
Running a Bluesky PDS, on the other hand, gives you control over your own data, to an extent, but Bluesky-the-company, or some other large entity, must be involved in order for you to talk to anyone else, because running a relay is expensive and legally risky. While you can argue this is technically “decentralized”, it’s qualitatively different from the way that things like Matrix and ActivityPub work.
Twitter has a single center. ATProto is designed to facilitate a network with a few centers rather than one, and views megarelays like bluesky.network as a success; a relay that isn’t enormous is a failure. Matrix, ActivityPub, and so forth are designed for a network with thousands of small “centers”, none of which need a complete view of the network, and the community tends to view mega-instances like mastodon.social, matrix.org, etc. as failures of the system.
imo, there is a line drawn between ideologically pure decentralisation and mass adoption, and bluesky falls right in the middle of this line.
Onboarding my friend on was a nightmare, explaining things like
to someone who just wants to “use something like twitter that is not twitter” is a fool’s errand.
Bluesky has managed to clear this hurdle (you download the bluesky app and make an account; the same can also be said about threads.net, btw) by centralising a lot of its infrastructure into a cohesive brand, and therefore it has done something that neither mastodon nor matrix nor pgp has been able to do – actually get used by people.
This may be a fine argument for building centralized systems, but it’s not at all an argument for calling Bluesky “decentralized,” which is what the parent comment was arguing against.
oh yeah absolutely my bad, my comment was very non sequitur. i am sorry
Sure. But if it’s not actually decentralized, what’s the point? Just use a centralized service, which will have an even better user experience, such as not exposing private info like likes and blocks.
so that the people who actually do care about it, can benefit from it
I am not really sure what you’re saying here. If people care about the benefits of decentralization, they need to use something actually decentralized; if people don’t care about the benefits of decentralization, they should use a centralized solution that doesn’t have the tradeoffs of something like Bluesky.
i guess i didnt word my comment properly, i am sorry.
what i meant to say is that bluesky, with all of its flaws right now, is a pretty good middle ground between ideologically pure decentralisation and mass market appeal
No hard feelings - I just think we’re talking about slightly different things :)
That’s the point: making a product technically superior doesn’t guarantee success. Matrix developers frequently have to explain why joining a channel can be slow - even slower than on the decades-old IRC network. However, they rarely acknowledge that this performance issue is one of several key barriers holding back wider adoption of Matrix.
we did a massive project to speed up room joins: https://element-hq.github.io/synapse/latest/development/synapse_architecture/faster_joins.html and https://github.com/matrix-org/matrix-spec-proposals/blob/rav/proposal/faster_joins/proposals/3902-faster-remote-joins.md etc. Ironically, we did the hard bit (refactored all of synapse to support non-atomic joins), but then ran out of $ before we could realise most of the advantages, and shifted gear to more fundamental things (ie slow sync). The OP explicitly cites faster room joins as something for the post-2.0 roadmap.
I’ll evaluate Matrix’s usability after version 2.0 is released - though I wouldn’t hold my breath for that. Perhaps Matrix should redesign their protocol specifications to require fewer $ to implement or operate while matching the performance of contemporary chat platforms. I am still of the mind that Matrix is alpha-stage software in 2024.
From your link, I gather that running a relay the size of the largest relay costs somewhere less than $200/month. I agree that would be expensive if I were doing it for myself and my friends group, but it seems rather inexpensive at the scale they’re talking about. I haven’t looked into running one other than reading the post you linked; are you arguing that it’d be impossible (or not useful) to run a small one?
And can you elaborate on how it’s more legally risky than running a synapse server or a mastodon instance?
Sure - but $200/mo is both an order of magnitude more than it costs me to run four GoToSocial instances, and as the author says,
Not to put too fine a point on it, but, I trust my friends and the people we talk to not to post CSAM; I don’t have that trust relationship with everyone on the entire Bluesky network, which is an open-signup set.
I wasn’t disagreeing that it’s too expensive for a friends/family instance.
My question was really (because I don’t understand the setup well enough) whether it’s possible to run an instance 10% the size for 10% the cost. Do you know if small instances are possible given the protocol/federation arrangement?
Does a federated Mastodon instance carry that same risk? That’s the part I’m trying to get my head around… I had kind of understood this to be about the same lift as a Mastodon instance, in most ways. Now that I hear it’s not, I’m trying to get my head around where the differences lie.
I like your characterization that “the community tends to view mega-instances like mastodon.social, matrix.org, etc. as failures of the system” when it comes to the community differences. Now I’m just trying to update my mental model of legal/tech differences.
Ah yeah, this is exactly the difference I’m trying to get at. The Bluesky architecture is designed around what they call a “big world” concept; each relay is supposed to have a complete view of the whole network, including accounts and instances you’re not following, have never seen, and that have never interacted with you, etc. That’s what makes the cost explode so much for a relay; there’s not really such a thing as a “friends and family” instance. They pay some lip service to the idea of this in the protocol documentation, but there’s essentially no actual support for it in practice.
On a fedi instance, you only store posts (and media) from people you follow, including things boosted by those people, and posts fetched as parts of threads and so forth, so your exposure - both in terms of storage/processing cost and in terms of liability - is dramatically lessened. On a Matrix server, it’s even more so; you store only messages in rooms one of your users is in.
To elaborate a bit, the bluesky protocol (as currently implemented) makes it impossible to do a lot of important things without a big world relay - for example there’s no protocol-level notification mechanism for likes/replies/follows/etc, users are just expected to have a relay crawling the entire network for actions which concern them.
(That being said it’s just about possible to build a bare-minimum small-world “see what my friends are posting” thing, and I intend to do this at some point.)
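To make that concrete: if I understand the protocol right, every PDS (not just the megarelay) serves the same com.atproto.sync.subscribeRepos firehose, so a bare-minimum small-world client could connect straight to the PDSes of the people you care about and watch their commits. The sketch below is only an illustration under that assumption - the PDS hostname is a placeholder, it needs the `websockets` and `cbor2` Python packages, and a real client would go on to decode the CAR-encoded record blocks rather than just listing op paths.

```python
import asyncio
import io

import cbor2       # pip install cbor2
import websockets  # pip install websockets

# Hypothetical host of a friend's PDS; any conformant PDS (or relay) serves this endpoint.
FIREHOSE = "wss://pds.example.invalid/xrpc/com.atproto.sync.subscribeRepos"

async def watch_friend():
    async with websockets.connect(FIREHOSE) as ws:
        async for frame in ws:
            buf = io.BytesIO(frame)
            # Each frame is two concatenated (DAG-)CBOR items: a header, then a body.
            header = cbor2.CBORDecoder(buf).decode()
            body = cbor2.CBORDecoder(buf).decode()
            if header.get("t") != "#commit":
                continue  # skip identity/handle/info events in this sketch
            # body["repo"] is the author's DID; body["ops"] lists created/updated/deleted records.
            paths = [op.get("path") for op in body.get("ops", [])]
            print(body.get("repo"), paths)

asyncio.run(watch_friend())
```

Point this at a handful of friends’ PDSes and you essentially have the small-world version of what a megarelay does against every PDS it can crawl.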
Does Bluesky offer RSS feeds? What advantages would this bare-minimum small-world approach offer over RSS?
This thread by one of the protocol’s developers explains a little bit. The TL;DR is signed posts and account migration.
https://bsky.app/profile/pfrazee.com/post/3l6xwi52zti2y
It would appear that it is possible to redirect an RSS feed to a new location: https://cyber.harvard.edu/rss/rssRedirect.html
For people only subscribed through RSS, I don’t see that anything more than a redirect is necessary for account migration.
As for signed posts… can’t we simply rely on HTTPS to your own domain to prevent man-in-the-middle modification of posts? If Bluesky requires you to have your own domain anyway…
Running a relay that subscribes to the entire global network is relatively expensive to run, but there’s no reason you couldn’t subscribe to and relay a smaller sub-graph of the network featuring only your friends.
This seems entirely tangential to the release of Matrix 2.0, but it’s perpetuating a big misconception about how AT Proto works.
I subscribe to the megarelay and it’s about 12 megabits per second at peak times. The burden of running a megarelay is relaying to all subscribers (which is NOT all PDSes – just appviews, feeds, and labelers) and storing recently broadcast (or all ever broadcast) posts. The relay is not integral to the network (PDSes could & can just subscribe to each other); it’s a bolt-on for performance.
Plus there’s no reason you couldn’t be connected to a smaller relay that forwards your stuff upstream to the large firehose (so that the Bluesky AppView can see it) or not at all (so only subscribers to your alt relay can see it). Unlike ActivityPub, the auth is at the data layer rather than the request layer, so relays can feed into each other and rebroadcast without compromising message integrity.
Every time I try KDE, I hit showstopper bugs, usually random crashes or weird hangs of the UI that require restarting the desktop session. It doesn’t really matter how polished and featureful the thing is if it isn’t usable.
This sounds a bit to me like graphics driver problems. I had all of that and more with an Nvidia card (different but equally critical issues on both X and Wayland), but with an AMD one for the last ~6 months I’ve not had one glitch.
edit: plasma6 on NixOS in both cases; Wayland working fine for me with AMD so haven’t had to try X.
I lurk on /r/debian and every time someone asks for help with a weird video issue, there’s a better than even chance that they have an nVidia card.
I haven’t hit any major crashes, but every time I’ve used it I could rack up a laundry list of papercut bugs in fifteen minutes of attempting to customize the desktop. In mid-2022 on openSUSE Tumbleweed I was able to get the taskbar to resize by a few pixels just by opening a customization menu, get widgets stuck under other widgets, spawn secondary bars that couldn’t be moved, etc.
Oh yeah. The exact same thing happened to me, not long ago.
All these features and customization are great in theory, but in practice it’s just needless complexity.
I find it really depends on the combination of distribution, X11 vs. Wayland, and KDE version. I’ve had good luck with debian – an older version of KDE (5.27) but quite stable. I tried plasma6 on both fedora and alpine and found it still a bit buggy.
I’m looking forward to trying out cosmic desktop once it is stable.
I’m curious if this would work with my Lix system. The logging format seems a little fragile - it’s not really a formally specified API with guarantees about backwards compatibility, right?
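For context on why it feels fragile: a minimal sketch of consuming that output, assuming the tool drives `nix build --log-format internal-json` (which I’d expect Lix to support too, being a CppNix fork) and scrapes the `@nix {...}` lines it writes to stderr. The `.#hello` flake output is a placeholder, and the event fields are an internal format rather than a stability-guaranteed API, which is exactly the worry.

```python
import json
import subprocess
import sys

# Hypothetical invocation; ".#hello" stands in for whatever you're building.
proc = subprocess.Popen(
    ["nix", "build", ".#hello", "--log-format", "internal-json", "-v"],
    stderr=subprocess.PIPE,
    text=True,
)

for line in proc.stderr:
    if not line.startswith("@nix "):
        continue  # ordinary stderr output, not a structured log event
    event = json.loads(line[len("@nix "):])
    # Event shapes ("action", "type", "text", ...) are Nix-internal and could
    # change between versions or forks - there's no documented contract here.
    if event.get("action") == "msg":
        print(event.get("text", ""), file=sys.stderr)

proc.wait()
```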
Yes please. After my adolescent phase of super obsessing over configurable dev tools (desktop Linux environments, text editors) I’ve concluded that 95% of the time, customizability is a farce for poor design. macOS, vscode/zed are wonderfully functional and I’m so much happier.
My vscode configuration is extensive; I’ve been using it for years. But I’ve spent very little time actually tweaking it compared to Neovim. I go into the config to fix a specific niggle, and that’s it. It seems like having a Turing-complete configuration exponentially blows up the amount of time I end up thinking about config.
I have nothing in there except for a font-size change.
As one of the Fish developers said (in an issue discussion, sorry I can’t find the quote now):
I’ve taken this to heart. Not least because you can’t remove it (without breaking people’s configs) when you later figure out that it ought to function completely differently.
I strongly agree with you. I like configurability as in “I want my editor to do X for me”, not the “I need 7 plugins to get tree-sitter and LSP going” type of configurability. Tools should have good defaults.
I agree here, but part of my motivation for sticking close to defaults is that, as a consultant, I’m often on other people’s systems. Even when I’m lucky enough to install my preferred editor, configuring it on their system feels like a bridge too far.