Threads for jcelerier

    1.  

      wanted to try it and my experience is the usual wayland one:

      • I start it, it shows me a screen with a lot of shortcuts

      • almost none of the displayed shortcuts work because I have an azerty keyboard - I mentally switch to qwerty, open a terminal - that doesn’t work either, as it’s hardcoded to alacritty, which the niri package did not install, and it does not respect $TERMINAL.

      • I switch to another tty, pkill niri, install alacritty

      • I reopen it and go edit the config, now that I at least have some tool to work with; apparently it has its own custom configuration for keyboard layouts? no clue how I can do my “setxkbmap -model pc105 -layout fr -variant oss_latin9”. But I set it to “fr”, which at least gets the basics right.

      • Now some keyboard shortcuts that worked in qwerty mode (e.g. super+shift+/) don’t work anymore, whether I press the right key combination in azerty or in qwerty

      • I try to launch google-chrome; apparently in 2025 it still requires flags to start on wayland, I don’t remember which, and its man page does not mention wayland at all

      • I try some apps I use but they all fail with “failed to open display” - I guess it doesn’t launch Xwayland? Despite it being installed

      • yay -R niri

      Like, all this was entirely solved in the X11 world: I can try any random new WM and key input will work as expected, because I have the correct setxkbmap configuration in my .xinitrc. Why is it so hard to have the same in Wayland?

      1.  

        niri doesn’t ship with Xwayland integration, but it’s very easy to set up xwayland-satellite with it, so that all X11 apps work perfectly

        I’m also on an azerty keyboard, I remapped the annoying shortcuts and since then I have no complaints

        1.  

          apparently it has its own custom configuration for keyboard layouts?

          It uses the exact same way of configuring keyboard layouts as every other Wayland compositor. Unfortunately, there’s no standard place to store this configuration, so every Wayland compositor makes it part of its own config. But all the options are the standard xkeyboard-config options. You should be able to specify variant "oss_latin9" in Niri.
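
          For instance - a sketch, assuming niri’s KDL config format and the standard xkeyboard-config option names - the setxkbmap invocation above should translate to something like:

             input {
                 keyboard {
                     xkb {
                         // same values as: setxkbmap -model pc105 -layout fr -variant oss_latin9
                         model "pc105"
                         layout "fr"
                         variant "oss_latin9"
                     }
                 }
             }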

          Now some keyboard shortcuts that worked in qwerty mode (e.g. super+shift+/) don’t work anymore, whether I press the right key combination in azerty or in qwerty

          I figured out after a little bit that what you need to do is list the base key. So e.g. on my keyboard / is shift+7, and shift+7 is what I have to write, not /.
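
          Concretely, with niri’s bind syntax (a sketch; the action is the default hotkey-overlay one from the stock config):

             binds {
                 // written against the base key: on a layout where shift+7 produces "/",
                 // this fires for the same physical combo as Mod+Shift+Slash does elsewhere
                 Mod+Shift+7 { show-hotkey-overlay; }
             }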

          I try some apps I use but they all fail with “failed to open display” - I guess it doesn’t launch Xwayland?

          It relies on xwayland-satellite, which needs to be running. I saw that they’re gonna make Niri auto-launch it in a future version.

          1.  

            so every Wayland compositor makes it part of its own config.

            that’s exactly what I call “its own custom configuration”: there isn’t one place where I can put my keyboard layout and have every compositor use it. makes me regret the windows registry.

        2. 2

          The 13% performance impact of bounds checks is much closer to what I’m used to (in audio, with constant array accesses, it’s more like 15-20%) than to the research that found a 0.3% impact: https://chandlerc.blog/posts/2024/11/story-time-bounds-checking/

          It would be great to have some proper understanding of what exactly was benchmarked in the 0.3% case, as to me, a 13% performance impact on core workloads really means “we have to buy new computers”, which is a decent chunk of our budget already.

          1. 14

            One thing there is that if you take a codebase that was written without bounds checks in mind, and then forcefully enable bounds checking, then of course you’ll get a massive slowdown. Bounds checking every access kills all vectorization, so any reasonably-performant bounds-checking solution necessarily requires extra work to eliminate most bounds checks.

            That is, you need to not only enable bounds checks, but then spend some time looking at the places where the checks are not eliminated by the compiler, and rewrite the code there to explain to the compiler that it is safe to hoist them (which is mostly “stupid” things like let xs = &xs[..n]; let ys = &ys[..n]), and then spend more time finding the few select places where you need unchecked indexing.
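
            A minimal sketch of that “stupid” rewrite (the function and names are mine):

               // Without the re-slices, each xs[i] / ys[i] needs its own bounds check,
               // which blocks vectorization. After them, the compiler knows both slices
               // have length exactly n, so i < n proves every access in bounds.
               fn add_into(xs: &mut [f32], ys: &[f32], n: usize) {
                   let xs = &mut xs[..n]; // one check up front (panics if n > xs.len())
                   let ys = &ys[..n];     // likewise
                   for i in 0..n {
                       xs[i] += ys[i];    // checks eliminated; the loop can vectorize
                   }
               }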

            So, I’d say a more useful quantitative experiment would be to take some Rust codec that was explicitly optimized to lean heavily on the compiler’s elimination of bounds checks, and then compile that with a patched version of the Rust compiler that removes bounds checks.

            The qualitative claim that in codecs you can’t actually hoist the checks is interesting. I don’t have relevant experience here, but two things give me pause.

            First:

            where many of the memory areas have a runtime-determined size that would be difficult to track during the compile-time in order to hoist checks.

            If this means to say what it actually says, then this is wrong: let xs = &xs[..n]; let ys = &ys[..n] is exactly how you hoist checks against a runtime-determined size. Given that the quote uses ‘hoist’ rather than ‘eliminate’, I have a hunch that it wants to say something different though?

            Second:

            Even if you can’t hoist checks out of the core loop of the codec, my gut feeling is that exploitable out-of-bounds accesses happen in the auxiliary, cold code that you need to run before you get to your innermost loop. So, unchecked indexing in the hot loop + checked indexing everywhere else feels like it can make 99% of runtime accesses unchecked, and 90% of source-level accesses checked, which is exactly the right tradeoff.
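
            In Rust terms, that split might look like this (a sketch; the names are mine, not from any real codec):

               fn decode_block(xs: &[f32], ys: &[f32], out: &mut [f32]) {
                   // Cold setup code: checked, fails cleanly on bad input.
                   assert!(xs.len() <= ys.len() && xs.len() <= out.len());
                   // Audited hot loop: unchecked, justified by the assert above.
                   for i in 0..xs.len() {
                       // SAFETY: i < xs.len(), and xs.len() <= ys.len(), out.len()
                       unsafe {
                           *out.get_unchecked_mut(i) = xs.get_unchecked(i) + ys.get_unchecked(i);
                       }
                   }
               }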


            Not claiming that the numbers stated in the article are wrong, just that I don’t personally know whether to believe or disbelieve them given the background I have!

            1. 1

              so any reasonably-performant bounds-checking solution necessarily requires extra work to eliminate most bounds checks.

              I just don’t see how this is realistic - people who write the tight math code that actually has to go fast absolutely do not have this experience. e.g. I know for sure that

              That is, you need to not only enable bounds checks, but then spend some time looking at the places where the checks are not eliminated by the compiler, and rewrite the code there to explain to the compiler that it is safe to hoist them (which is mostly “stupid” things like let xs = &xs[..n]; let ys = &ys[..n]), and then spend more time finding the few select places where you need unchecked indexing.

              will never happen 99.999% of the time, and then we’re all getting slower software than we should and have to spend more money and resources on buying more powerful computers.

              Also, elimination of bounds checks is only viable in release mode with optimizations enabled, but I’ve seen a fair number of codebases where performance in debug mode was also critical (as otherwise it’s just.. not possible to debug the code if it doesn’t run at a certain speed). E.g. in C++, instead of doing std::vector<foo> vec; ...; vec[i];, people would get the pointer to the data and operate directly through pointer arithmetic - ptr = vec.data(); ptr[i]; - because just the cost of a non-inlined operator[] is too much. Likewise, I have yet to see a successful use of e.g. a loop written as

                 for (auto v : std::views::iota(min, max)) { ... }
              

              due to the performance impact on a -O0 build; thus everyone writes classic OpenMP-friendly for(int i = 0; i < N; i++) loops.
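
              To illustrate the debug-build workaround described above (a sketch; the function is mine):

                 #include <vector>

                 void scale(std::vector<float>& vec, float k) {
                     // Hoist the data pointer once, so a -O0 build doesn't pay for a
                     // non-inlined operator[] call on every single access.
                     float* ptr = vec.data();
                     const int n = static_cast<int>(vec.size());
                     for (int i = 0; i < n; i++) { // classic OpenMP-friendly loop
                         ptr[i] *= k;
                     }
                 }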

              1. 7

                If a person writes tight code but doesn’t have the skill to check that bounds checks do not kill vectorization, they almost surely lack the skill to write unsafe code that doesn’t go out of bounds.

                Performance is easy: you just benchmark & look at the asm. Not going out of bounds is hard in C, as only maliciously crafted inputs trigger this behavior (and OOB is usually downstream of more subtle stuff like UB on integer overflow).

                I think in most contexts, security is more important than speed, so this is a worthwhile trade.

                Though, the scenario does seem unrealistic to me — in my experience, folks writing performance-relevant code generally know what they are doing.

                An important fact to underline here is that the cost of bounds checks is bimodal: every specific check is either virtually free (a trivially predicted not-taken branch) or makes code 5x slower (when it kills vectorization). To avoid the slowdown, you don’t need to be hyper-vigilant every time you write [], you only need to pay attention to vectorized inner loops.

                The statements about debug performance are true.

              2. 1

                I think the difference is more easily explained by the fact that bzip3 is a small, highly optimized library that is being compared to a giant & old monolithic C++ codebase.

                1. 2

                  Unless I am misreading something, the cited 13% is from comparing a C++ project to itself, compiled with different flags.

                  1. 1

                    Yes, although I realize now I didn’t read your comment carefully. I thought you were also talking about the 0.3% number from retrofitting spatial memory safety for Google. But yes, the 13% is from comparing paq8l w/ and w/o flags.

              3. 4

                I take issue with the claimed performance penalty of bounds checking.

                They did not enable bounds checking; they enabled assertions.

                All of them.

                So they were testing far more things than just bounds checks, and then, looking at the “paq8l.cpp”s I find on the internet, those assertions did not even appear to include significant bounds checking.

                I’m not sure how seriously we should take such a roughshod approach to “how expensive are bounds checks?”

                It reads like a textbook case of “I know bounds checks are far too expensive, so if my ‘benchmark’ confirms that, I won’t bother seeing if what I’m claiming to test is remotely accurate”.

                All they’ve done is show that assertions are generally expensive.

              4. 27

                If you maintain a reasonably popular C++ library and it doesn’t use CMake as its build system, sooner or later someone will come and demand that you add CMake support (risking permanent brain damage in the process) to make it easier to consume your library in their CMake-based project. Happened to me multiple times.

                1. 3

                  I’m more familiar with meson, which has a nice system for consuming dependencies that don’t use meson (wraps). Does CMake not have something similar?

                  1. 27

                    Like everything else in CMake, it has plenty of ways to interoperate with other build systems, and they all still involve suffering.

                    1. 2

                      CMake can find libraries through pkg-config or you can write your own finder. See files in /usr/share/cmake-*/Modules/ for inspiration. No need to build your library using CMake.
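
                      For example, a sketch with a hypothetical library “foo” that ships a foo.pc file:

                         # CMakeLists.txt
                         find_package(PkgConfig REQUIRED)
                         # creates an imported target PkgConfig::FOO from foo.pc
                         pkg_check_modules(FOO REQUIRED IMPORTED_TARGET foo)
                         target_link_libraries(myapp PRIVATE PkgConfig::FOO)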

                      1. 1

                        I’ve sent PRs porting projects from pkgconf to cmake more than a few times, because pkgconf is at best fragile and at worst barely works at all, for instance with the msvc universe. With cmake I know it will work and I can do my job.

                        1. 1

                          Last time I tried pkg-config on Windows, it didn’t work very well. That’s been a while.

                          you can write your own finder. See files in /usr/share/cmake-*/Modules/ for inspiration.

                          You can also accept a PR for a finder… which is what I’d offer to do if I were maintaining a C++ library that doesn’t use CMake. That said, I personally find the brain damage that CMake induces very compatible with the wear and tear imposed by C++, so when I do work in C++, I usually do use CMake for my builds. It’s the worst C++ build system, to be sure, except that all the others are even worse.

                          1. 2

                            It’s the worst C++ build system, to be sure, except that all the others are even worse.

                            The insidious thing about CMake-induced brain damage is that it attacks those parts of your brain that you need to recognize a better build system. How else could one possibly explain otherwise sane people professing love to CMake?

                            1. 4

                              Nobody starts out using CMake. The thing they used prior to picking up CMake is what did that job.

                              1. 3

                                My above comment is of course a joke, but there is some truth to it: in CMake, people work by copying and pasting illogical, magic incantations until something sticks. There is no understanding of the underlying build model. Which is not really surprising, since whatever model one may argue CMake has is obfuscated by the meta-build-system layering and by the hacking around of underlying build system limitations (like having to generate source code during the configuration phase).

                                Then, when they try a supposedly better build system, they adopt the same approach: don’t bother trying to understand, just throw things against the wall until something sticks. I see this constantly: smart, experienced C++ developers adopt this attitude when trying build2. But C++ builds are too complex for this approach to work for anything other than toy examples.

                                Yes, C++ is a complex language, and some features are outright broken or don’t compose well. But you can pick a sensible subset, and there will be logic and a model, and you can go read the standard and it makes sense. CMake, IMO, is the absolute worst part of it. A new language (post-C++11 C++) lost to CMake, truly.

                              2. 2

                                I’m a pretty strong cmake proponent because it’s the only thing that works at the scale I need: building one codebase with every potential toolchain and compiler, with only one set of commands for targeting everything from windows/msvc to iOS to freebsd to emscripten to ESP32s, etc. Alternatives never work that well - for instance, meson is a PITA as soon as you want to use windows, and fully hermetic build systems just aren’t compatible with what making packages for Linux distros requires. The language certainly is terrible, but it solves actual problems, such as not having to download 7z.exe (or the Mac or Linux version) from who-knows-where in your CI scripts depending on the platform you’re running on, because cmake supports cross-platform archive extraction with one single command.
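
                                That extraction command, for reference (cmake -E exposes a small cross-platform toolbox; the archive name is illustrative):

                                   # works identically on Windows, macOS and Linux CI runners
                                   cmake -E tar xf archive.tar.gz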

                      2. 2

                        It turns out, anyone can understand numbered gears totally fine after a bit of practice.

                        The author would be amazed. I’ve met a few people who didn’t understand gears at all, one of them even after years of biking

                        1. 6

                          I finally decided to go look it up, since I had an intuition that I was doing it wrong, but didn’t know exactly how to think about it.

                          https://www.yellowjersey.co.uk/the-draft/bike-gears-explained/

                          My intuition was correct that there is a lot of overlap and that you should use only the chunks of the cassette closest to your chosen crankset gear. But I didn’t have the 2/3 rule in my head, and I still don’t know how I should shift through the gears like I would in a manual car.

                          1. 5

                            Yeah, there is a huge amount of overlap in most 2x or 3x drivetrains, and shifting gets complex in theory. You have to choose a chainring and cassette cog that give you the desired ratio for an optimal cadence at a certain power output, while keeping as straight a chain line as possible. In practice: going too slow, or are the cranks too easy to turn? Click the shifter this way. Too fast, or cranks too hard to turn? Click the shifter that way.
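
                            (To make the overlap concrete, with illustrative numbers: a 50-tooth chainring with a 25-tooth cog gives a ratio of 50/25 = 2.0, and a 34-tooth ring with a 17-tooth cog gives 34/17 = 2.0 - two different shifter positions, exactly the same gear.)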

                            Only a very small minority of people actually think about the math behind their gear ratios, and those tend to be single speed/fixed gear geeks, for whom a percent change can determine whether they get up a hill, blow out a knee, or burn through their tire on skids, and pro cyclists, who need to get every marginal gain possible.

                            I think there’s a good middle ground though. Like, a bike shouldn’t baby you with buttons for “flat”, “sort of flat”, “hill”, “hill with grass”, but it also shouldn’t make you calculate your ratios just to go from one setting to another. Just numbers is fine, e.g. gear 1 - gear 8 if you have an 8x cassette.

                            1. 1

                              To give some context about my initial post - this was about people struggling to understand how a three-gear city-provided bike (with pretty much no controls other than turning the gear selector and pedalling) works, without thinking of ratios or anything. Like, that you have to go to an upper gear when you want to go faster on flat terrain, and back to a lower gear if you’re struggling during a climb. I’m no better: I bike, and I honestly have zero clue about anything in your post and the parent’s post.

                          2. 3

                            I don’t think I understand “numbered gears”. I have no idea which of them is 1 and which is 12 for instance. But I do understand gearing on a bicycle very well.

                            Are people really looking at the gear indicator, when they have one, thinking oh, I’m in 5 on the flat, I should really be in 7…

                            I shall leave now, obviously the illustration is a good one for UX, but don’t question it too hard.

                            1. 6

                              I don’t often think about the number, but it is good when you want to jump quickly: I know that on a flat I should be in roughly a 5. But I think the key thing is that it is easy to understand going higher and lower. If I am struggling, I gear up a bit; if my feet are spinning on the pedals, I go down a bit. It is very natural to learn and adjust. If I am on new terrain, I’m not screwed if there is no preset; I can just extrapolate from previous experience (or, worst case, play around and see what works well).

                              If I am warming food in a microwave and it comes out a bit too cold, it is trivial to understand how to add 10s or turn up the power. The actual time is somewhat meaningless, but you quickly build an intuition and will remember a few good settings for things you do often. But if my options are “reheat, defrost, potato, popcorn” and my popcorn comes out mostly unpopped, I don’t know what to do to make it better. Do I want “potato” or “defrost”?

                              I’m starting to see this more and more, so the article really resonates with me. For example, most air fryers push users towards preset buttons like “fries”, and setting a regular time and temperature is harder to do than it should be. If my fries are a little too crispy, do I fix that with the “chicken wing” setting? Now, this isn’t so bad, because they almost always do have a manual option. In some ways, the presets can be seen as starting points that you can learn from (as they do show you the time and temperature they use). So the only real problem here is making the UX for the manual controls worse than it would be if that were the only option.

                              My pet peeve is the Ninja Creami. I think it is technically a great appliance, but the controls are super frustrating. It just has 8 presets, with an awful dial to pick between them (https://m.media-amazon.com/images/I/71t9VcZQVVL._AC_UF1000,1000_QL80_.jpg). I’m pretty convinced half of the buttons are just duplicates, and if they don’t work you have no recourse. If it just had options for up/down speed and rotational speed, I would be able to fine-tune to the recipes that I like. The best option this machine provides is repeating the cycle, or pressing “respin” until it is done enough. It is really awful UX in an attempt to make it easy (and to sell it as a “10 function machine”). Providing two dials would be easier to use in practice and would provide better results, as you could fine-tune to what you are making.

                              The Ninja Creami is a really bad case of this problem but I see the general concept appearing everywhere. Instead of general controls that can be tuned but may require a bit of guidance to get started things are shifting to only providing presets, and if the appropriate preset doesn’t exist or doesn’t fit your case then you don’t really know what to do and aren’t empowered to fix your problem. This is why this article really resonated with me.

                              1. 2

                                Right, I think where the bicycle analogy falls down is that on a bicycle you have a constant feedback loop to you, the rider. If you remove all indicators of what gear you are in, you’re still able to operate the bike. To the point where, if you ride enough, the gear doesn’t matter - just the “leg sensation” and anticipation of the road ahead.

                                On a kitchen machine we don’t have this instant feedback, so we do actually need some references to start from. If we were to take it to the nth degree with a bicycle, maybe it’s like riding a fixed gear / single speed where, when you want to change gearing, all you have in the bag is sprockets and chainrings labeled “hilly road”, “track pursuit” etc, rather than 14, 15, 16 sprockets and 48, 50, 52 chainrings.

                                I do think it’s a good analogy, I am being slightly very facetious picking it apart.

                                1. 1

                                  when you want to change gearing, all you have in the bag is sprockets and chainrings labeled <…>

                                  If all you have is a pile of gears and chainrings, then you’d be better served by labeling them according to the approximate achievable ride speed within physiologically optimal cadence range :}

                                  (don’t mind me, this is just a wild tangent)

                            1. 3

                              This is caused by rustup, which shims cargo, rustc, etc. to redirect to the required toolchain. The toolchain is usually chosen by the user, but a rust-toolchain file allows the directory to override it, which in turn can cause automatic installation of new toolchains without user involvement.
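
                              For reference, the file in question looks like this (a sketch; the channel value is illustrative):

                                 # rust-toolchain.toml, checked into the repository root.
                                 # rustup's cargo/rustc shims pick this up for any command run
                                 # inside the directory, overriding the user's default toolchain.
                                 [toolchain]
                                 channel = "nightly-2025-01-01"
                                 components = ["rustfmt", "clippy"]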

                              Funnily enough, implicit toolchain installation was removed in rustup 1.28.0, but there was a big outcry about it, so it’s being restored as the default. Maybe a less invasive change would be an easier pill to swallow (e.g. only official toolchains get installed implicitly).

                              1. 2

                                It’s not related to toolchain installs, just to path overrides, but otherwise yeah, this is how it works.

                                1. 1

                                  This is a nice illustration of how errors arise out of combinations of known-good and tested products working to spec.

                                  Welcome to STPA ;)

                                  1. 2

                                    Is it really an error or just the classic issue that some people want safety and other people want the convenience of being able to do whatever they want (e.g. arbitrary code execution)?

                                    1. 1

                                      I mean, STPA is basically the observation you make, taken to the extreme: systems of “good” components have emergent behaviour that may or may not be seen as an error. For that reason, e.g., it models humans in the loop as important, because they can make these judgement calls.

                              2. 13

                                This is an incredibly strange article. It has a few technical inaccuracies (Box IS a Sized type, the size of a pointer to an object doesn’t depend on the size of the object itself), but more fundamentally, it doesn’t actually answer the question it poses in the title.

                                For example, it complains that fat pointers are bad because they’re not supported by FFI. Nobody* writing a business app is going to care about FFI.

                                The rest of the article is full of “standard” complaints about Rust that I think have been fairly widely debunked, or that just represent a not-well-informed view of things (e.g., the borrow checker is too hard, async sucks, etc.), but even if true, none of these criticisms are specific to business apps; it’s just a critique of the language itself.

                                I also just had to laugh at this bit:

                                While languages such as Go enable you to write pretty much an entire HTTP service from the standard lib alone, this bazaar-style package management comes with a burden: whenever you need to solve any mundane problem, you land in a space where everything has at least 7 different crates available, but half of them are actually toy projects and most of them have not been maintained for the last 5 years. And don’t get me started about audits to check that one of the 600 dependencies of your hello world app won’t be used for supply chain attacks.

                                Yes, dependency management is a concern, but the comparison to Go - which famously refuses to implement basic features in the language, and then expects you to import xXx69roflcopterxXx’s github repo which is widely accepted as the best library in the ecosystem - is a bit hilarious to me.

                                • Yes, yes, I’m sure that somebody somewhere has tried to write a business app with FFI included, but it’s certainly not the norm.
                                1. 4

                                  Yes, dependency management is a concern, but the comparison to Go - which famously refuses to implement basic features in the language, and then expects you to import xXx69roflcopterxXx’s github repo which is widely accepted as the best library in the ecosystem - is a bit hilarious to me.

                                  Rust is the exact same; it just hides the GitHub repo names better. The Cargo package json would in Go be spelled "github.com/maciejhirsz/json-rust".

                                  Well, not the exact same, because if you also decide to write your own json package for Rust, you can’t register it on crates.io, due to its no-namespaces naming policy. Go doesn’t restrict how many packages are allowed to match the pattern "github.com/*/json".

                                  1. 2

                                    Nobody* writing a business app is going to care about FFI.

                                    I’m not sure what you call a business app. All the ones I know always have some dll-based runtime plugin / extension mechanism.

                                  2. 8

                                    Context: author of a node-based environment for artists (https://ossia.io)

                                    I think node-and-wires is popular because visual programming designers make the fundamental assumption that the underlying nature and logic of programming is just traditional textual programming.

                                    To me, it’s actually popular because it mimics the physical tools that people in my field are used to - there’s a lot of plugging real-life cables in and out of real-life boxes involved. Throughout my PhD I tried to not go towards node-and-wires, but in the end that’s the only tool that existing practitioners are familiar enough with to grasp quickly.

                                    Much of the power in pure functional programming lies in the power of higher-order functions, and I haven’t seen very good node-and-wires representation of that.

                                    I’d recommend trying OpenMusic

                                    To me, that’s damning evidence against the practice of using nodes-and-wires to model functions. Text is still the better form for expressing the underlying logic of functional programming.

                                    I think that’s just because of education and habit. If you teach people visual programming, that’s what they do - there are fields (computer music, and game design especially with Unreal Engine) where the work is 99% through visual programming tools, and that’s because that’s what the universities teach - and the result is that the workforce only knows how to use visual tools.

                                    Imperative programming with node-and-wires fares no better. A loop in LabVIEW gives no more advantage or clarity over writing it in text

                                    You don’t know how many people would do absolutely anything to not have to type or read text. I’ve seen people do stuff like hundreds of duplicated nodes in blueprints - when shown a dozen lines of code doing the same behaviour the response would be “but with blueprints I can understand it”

                                    1. 1

                                      I’d recommend trying OpenMusic

                                      Cool, I looked at the PDF. Every function node has a “lambda mode”: if you trigger it, the output is the function itself, rather than its result. https://hal.science/hal-00683472/document Thanks.

                                      when shown a dozen lines of code doing the same behaviour the response would be “but with blueprints I can understand it”

                                      Do you know if they understand it because they developed idioms? So that certain node groups are arranged in a certain way to be recognized by their visual pattern of nodes at a zoomed out level? Kinda like how APL has idioms? Is that why it’s more understandable to them, or is it something else?

                                      1. 1

                                        Do you know if they understand it because they developed idioms?

                                        the main thing I’ve seen is that people will go to absolutely any length to not have to type on their keyboard, and do stuff only with the mouse. I had a boss who would purposely unplug their keyboard when testing the software we were developing, to make sure everything would work without the keyboard. I once gave a talk at a computer music / media arts conference, and some people stood up and left the room when I showed an example of how to do something more easily (to me) in code with javascript than graphically.

                                        That said, having visual patterns through layouts of nodes in graphical languages is extremely common - you usually see students starting to do it after a couple of hours of using such software.

                                    2. 4

                                      Random sidenote: I wish there were standard shortcuts or aliases for frequently typed commands. It’s annoying to type systemctl daemon-reload after editing a unit - why not systemctl dr? Or, for debugging a failed unit, journalctl -xue myunit seems unnecessarily arcane - why not --debug or something friendlier?

                                      1. 5

                                        I’m using these:

                                        alias sc="sudo LESSSECURE_ALLOW=lesskey SYSTEMD_LESS='$LESS' systemctl"
                                        alias jc="sudo LESSSECURE_ALLOW=lesskey SYSTEMD_LESS='$LESS' journalctl"
                                        

                                        this is shorter to type, completion still works and I get my less options

                                        1. 3

                                          Typing this for me looks like sy<tab><tab> d<tab> - doesn’t your shell have systemd completions?

                                          1. 1

                                            It does but what you describe doesn’t work for me.

                                            $ systemctl d
                                            daemon-reexec  daemon-reload  default        disable
                                            
                                            1. 2

                                              what doesn’t work? in any modern shell, when you are here and type tab twice, you will get to daemon-reload. ex: https://streamable.com/jdedh6

                                              1. 1

                                                your shell doesn’t show a tab-movable highlight when such a prompt appears? If so, try that out. It’s a very nice feature.

                                            2. 3

                                              journalctl -u <service> --follow is equally annoying

                                              1. 15

                                                journalctl -fu

                                                1. 3

                                                  My favorite command in all linux. Some daemon is not working. F U Mr. Daemon!

                                                  1. 2

                                                    so this does exist - I could swear I tried that before and it didn’t work

                                                    1. 19

                                                      I wasn’t sure whether to read it as short args or a message directed at journalctl.

                                                      1. 1

                                                        Thankfully it can be both! :)

                                                      2. 1

                                                         You gotta use -fu, not -uf; nothing makes you madder than having to follow some service logs :rage:

                                                        1. 13

                                                           That’s standard getopt behaviour: -u takes an argument, so in -uf the f gets parsed as a unit name rather than as a flag.

                                                          1. 2

                                                             Well, I guess fu rolls off the tongue better than uf. But I remember literally looking up whether there wasn’t anything like -f, and having issues with that. Oh well.

                                                    2. 3

                                                       Would it be “too clever” for systemd to watch unit files for changes and reload the affected units automagically when they change?

                                                      1. 13

                                                         I’m not sure it would be “clever”. At best it would make transactional changes (i.e. changes that span several files) hard, at worst impossible. It would also make for a weird editing experience, where just saving activates the changes.

                                                        1. 2

                                                           I wonder why changes should need to be transactional? In Kubernetes we edit resource specs - which are very similar to systemd units - individually. Eventual consistency obviates transactions. I think the same could have held for systemd, right?

                                                          1. 6

                                                            I wonder why changes should need to be transactional

                                                             Because the services sd manages are more stateful. If sd restarted every service the moment its on-disk unit file changed [1], desktop users, database admins, etc. would have a terrible experience.

                                                            [1] say during a routine distro upgrade.

                                                      2. 3

                                                        Shorter commands would be easier to type accidentally. I approve of something as powerful as systemctl not being that way.

                                                        Does tab completion not work for you, though?

                                                      3. 1

                                                        I wouldn’t dare to do any complex UI without Qt-like OOP

                                                        1. 5

                                                          FWIW, C++ now has a type-safe printf replacement, std::format. It’s done using variadic templates. I implemented my own simplified equivalent last year, which was a fun exercise because I’m not a real template expert! Fortunately concepts and consteval functions made it a lot easier.
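
                                                           A minimal usage sketch (C++20; the example is mine):

                                                              #include <format>
                                                              #include <iostream>

                                                              int main() {
                                                                  // The format string is parsed at compile time, so argument
                                                                  // count/type mismatches are compile errors, not runtime UB:
                                                                  std::cout << std::format("hello {}, you have {} messages\n", "bob", 3);
                                                                  // std::format("{:d}", "oops"); // would fail to compile
                                                              }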

                                                          1. 7

                                                             C++ templates are in a really wild category here. I personally very much consider them macros, but there’s actually an argument to be made for considering the C++ template language to be dependently typed, if you consider compile time to be its “runtime” - but that’s a whole can of worms of its own.

                                                            1. 3

                                                               I understand that C++ templates are very messy from a PLT perspective, but it would be inaccurate to consider them macros; they’re too deeply integrated with the “object language” to meaningfully be called macros. As an ex-C++ programmer with a lot of functional programming experience afterwards, I think most PLT folks are making a big mistake by not looking closer at what C++ templates got right among all the things that they obviously got wrong.

                                                              1. 3

                                                                 Maybe this is a bit of a gap between our definitions of “macros”, but I personally don’t think there is such a thing as being too deeply integrated with the “object” language for something to be meaningfully considered macros. Lots of modern macro systems end up deeply entangled with the host language in the quest for hygiene, and I also consider Idris’s elaborator reflection to be macros - and it’s what the C++ template language wishes it could be in terms of integration with the host language.

                                                              2. 1

                                                                I would say the way I implemented it is more on the dependent-typing side, to the extent that I understand the term. While parsing the format() call,

                                                                • each arg’s type is mapped to an enumerated constant, which goes into a compile-time array that’s passed to the internal format() function
                                                                • each arg is wrapped in a function that converts it to a type that can be passed through C varargs, e.g. the function for std::string calls c_str() to convert it to a C string pointer.
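
                                                                A sketch of that shape (all names hypothetical, not my actual code):

                                                                   #include <cstddef>
                                                                   #include <string>
                                                                   #include <type_traits>

                                                                   // Tag describing how each argument travels through C varargs.
                                                                   enum class ArgType { Int, Double, CStr };

                                                                   template <typename T>
                                                                   consteval ArgType tag_for() {
                                                                       if constexpr (std::is_integral_v<T>) return ArgType::Int;
                                                                       else if constexpr (std::is_floating_point_v<T>) return ArgType::Double;
                                                                       else return ArgType::CStr;
                                                                   }

                                                                   // Convert each argument to a varargs-safe representation.
                                                                   inline int passable(int i) { return i; }
                                                                   inline double passable(double d) { return d; }
                                                                   inline const char* passable(const char* s) { return s; }
                                                                   inline const char* passable(const std::string& s) { return s.c_str(); }

                                                                   void format_impl(const char* fmt, const ArgType* tags, std::size_t n, ...) {
                                                                       // Stub: a real implementation walks fmt, switching on tags[i]
                                                                       // to pull each vararg out with the matching va_arg type.
                                                                       (void)fmt; (void)tags; (void)n;
                                                                   }

                                                                   template <typename... Args>
                                                                   void my_format(const char* fmt, const Args&... args) {
                                                                       // Compile-time array of type tags, one per argument
                                                                       // (trailing sentinel keeps the array non-empty for zero args).
                                                                       static constexpr ArgType tags[] = { tag_for<Args>()..., ArgType::Int };
                                                                       format_impl(fmt, tags, sizeof...(Args), passable(args)...);
                                                                   }
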
                                                              3. 1

                                                                C++ now has a type-safe printf replacement, std::format.

                                                                 I’m not sure it’s even “now” anymore. std::format has been provided in the standard library since C++20, and fmt::format has provided compile-time argument checking back to C++14 - an eleven-year-old language!

                                                                1. 1

                                                                   “Since C++20” doesn’t mean “since 2020”, unfortunately. As of a year ago, std::format was still giving me trouble in Clang/libc++ (not the latest one, but whichever one Xcode ships).

                                                                  And adoption is slow in C++. It seems a lot of projects still haven’t moved past C++11 features [citation needed].

                                                                  1. 2

                                                                     I mean, on macOS you can disregard the vendor’s old clang and use, for instance, a homebrew-shipped clang, which will be up to date. I always find it weird to judge a language by how it’s shipped by Apple / Microsoft / Ubuntu… - by that measure, Idris does not exist as a PL. Likewise, people who aren’t using C++20 are likely not going to jump ship to Idris, mainly because the kind of magical features a dependently-typed language brings is usually exactly what they don’t want.

                                                                     I doubt it’s the majority though; every C++ poll from a few years ago already showed C++11 being quite towards the tail end of the bell curve of standard adoption: https://www.jetbrains.com/lp/devecosystem-2023/cpp/

                                                                    1. 3

                                                                      I’ve been reluctant to install a newer Clang, tbh. Will it have the same Apple-specific bits as the one in Xcode, and compatible libc++ headers?

                                                                      1. 2

                                                                         I had issues with it a decade ago, but since then I’ve seen a few pieces of software successfully shipped on macOS with a custom-compiled clang.

                                                                    2. 1

                                                                      I’m sure there are some corners that haven’t gotten past C++11, but I don’t think it’s particularly common. Even in old industry code bases upgrading to newer standards is rarely an issue unless some weird platform/toolchain needs to be supported. (Maybe some highly regulated industries too, where upgrading compilers is a lot of paperwork? I don’t touch that too much)

                                                                      1. 1

                                                                        At work our move to C++20 was complicated by the need to support some older versions of CentOS. I don’t remember the details, but our Linux guy was cursing about it. Something about not supporting GCC 12, or glibc/libcpp being too old?

                                                                        1. 2

                                                                          That’s strange too, CentOS is actually very nice for recent compiler support. You can use gcc-toolset-12 on CentOS 7 (released 11 years ago) and gcc-toolset-14, the latest, on CentOS 8.

                                                                          1. 1

                                                                            It might be CentOS 6 we still have to support?

                                                                            1. 1

                                                                               It has been EOL for 5 years already, thus, my condolences!

                                                                2. 1

                                                                  MacOS - DNSServiceQueryRecord

                                                                   .. isn’t this function only for DNS-SD? (e.g. Bonjour / mDNS / local DNS resolving, not anything internet-related).

                                                                  1. 2

                                                                    There is wide-area DNS-SD which uses the regular DNS instead of mDNS. http://www.dns-sd.org/

                                                                    1. 1

                                                                     Hm, it’s not clear to me whether wide-area DNS-SD would be regular DNS or a completely different service

                                                                      1. 1

                                                                        What isn’t clear? I wrote that it uses regular DNS and the web page I linked to says in its first sentence “DNS Service Discovery is a way of using standard DNS programming interfaces, servers, and packet formats to browse the network for services.” This isn’t a hypothetical proposal, it has been deployed for many years.

                                                                        1. 1

                                                                         well, to me this very explicitly means “this is the same API & protocol as normal DNS, but it won’t allow you to resolve google.com - rather, it selects services relevant to your organization”.

                                                                          Hybrid Unicast/Multicast DNS-Based Service Discovery (draft-cheshire-mdnsext-hybrid) describes a way to provide wide-area service discovery for devices that only advertise their services using link-local Multicast DNS.

                                                                          1. 1

                                                                            Hybrid DNS-SD is a way to reduce the traffic associated with mDNS in large multicast networks. It uses wide-area DNS-SD but the existence of hybrid DNS-SD doesn’t imply that wide-area DNS-SD is restricted to local services.

                                                                            Wide-area DNS-SD uses regular DNS but adds a layer on top to support browsing for services. If you are using a DNS-SD API then you will only be able to discover services that have service discovery records in the DNS, which probably doesn’t include google.com.

                                                                           But if you are using a lower-level DNS API like DNSServiceQueryRecord(), which is described as “Query for an arbitrary DNS record”, then you can use it to resolve google.com.
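
                                                                             You can poke at both behaviours from the command line with macOS’s dns-sd(1) tool, which goes through the same daemon (the browse domain below is illustrative):

                                                                                # arbitrary DNS query, same path as DNSServiceQueryRecord()
                                                                                dns-sd -q google.com A IN

                                                                                # wide-area browse for HTTP services advertised in a unicast DNS zone
                                                                                dns-sd -B _http._tcp example.com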

                                                                  2. 26

                                                                    Focus on debugging your application rather than debugging your programming language knowledge.

                                                                    I think this is very compelling on the surface - we should want languages that lean towards being simple, productive, and fun to write! However, I worry that Zig expands the surface area of the application’s complexity by offloading language complexity onto it. If so, it scales worse than putting the complexity in the language: instead of learning a more complex language once, you have to deal with it in every line of every application you write in the language.

                                                                    No lifetimes/borrow checking/mutability XOR sharing/RAII certainly makes writing Zig simpler, but you now need to be constantly vigilant about correctly representing and implementing their replacements in every application you write.

                                                                    Also, I completely agree with the author that stuff like unused variables should be warnings, not hard errors. There is a time and place for things. Prototyping, debugging, and refactoring code should not demand immediate perfection at every step of the process.

                                                                    1. 10

                                                                      I think this is very compelling on the surface - we should want languages that lean towards being simple, productive, and fun to write!

                                                                      While I agree that simple languages are nice, I don’t think we should design them to be. I would rather spend hours working on my compiler complaining about some code I wrote and understand why (and unlock a new skill to detect a class of defects — for instance, async’s await points and why pinning is important), rather than being able to compile, ship, and deal with the aftermath of a SEV-1 at 3:30 AM.

                                                                      1. 2

                                                                        I would rather spend hours working on my compiler complaining about some code I wrote

                                                                        I do, too, in some cases. In other cases - I’ll always remember a previous job where a customer called in the morning and wanted a fresh app done by the evening for a one-time event; that wouldn’t have worked if we’d had to spend three hours fixing typing bugs

                                                                        1. 12

                                                                          This assumes people do not understand the error / are complete juniors, right? Because in my experience, after more than 10 years using Rust, the compiler very rarely yells at me, now that I’m fully wired to think in Rust. And it’s a virtuous circle: when it does yell at me, I know my attention needs to be focused on that compiler error because it probably hides real-world issues underneath.

                                                                          I can prototype / release a POC / MVP in Rust without worrying too much about the reliability implications. Can you say the same thing about Zig?

                                                                          1. 5

                                                                            10 years is a massive amount of time just to feel comfortable in a language.

                                                                            1. 11

                                                                              I didn’t say it took me 10 years, though.

                                                                              1. 11

                                                                                Google reports it takes people far less than that: https://opensource.googleblog.com/2023/06/rust-fact-vs-fiction-5-insights-from-googles-rust-journey-2022.html

                                                                                Based on our studies, more than 2/3 of respondents are confident in contributing to a Rust codebase within two months or less when learning Rust. Further, a third of respondents become as productive using Rust as other languages in two months or less. Within four months, that number increased to over 50%. Anecdotally, these ramp-up numbers are in line with the time we’ve seen for developers to adopt other languages, both inside and outside of Google.

                                                                                1. 10

                                                                                  Coming from Haskell back in 2015, it took me two days to learn Rust and be productive with it (my first project was a rewrite of a graphics abstraction, luminance). It took me an additional two weeks to get comfortable around lifetime annotations (like 'a: 'b) and use them properly, but most of the &str vs. String distinction clicked immediately for me, as did move semantics and Drop (I had spent a long time with C and C++ / D too).

                                                                                  I do think people over-exaggerate Rust complexity.

                                                                              2. 3

                                                                                Because in my experience, after more than 10 years using Rust,

                                                                              10+ years of experience is an extremely small minority of programmers. In the organization I’m currently working at, with 140 employees, I’m the only one with 10+ years of actual consistent programming jobs (and even then, I’m doing 95% management nowadays, so I don’t even count in this statistic - on average, people around me start moving to management roles, away from any text editor, after ~5 years of programming).

                                                                                1. 6

                                                                                  Omg. I’ve proudly worn variations on the “programmer” title for all 40 years since leaving university. Or at least that’s what I write on government forms, though Samsung for example called me “Superior Software Engineer”. A nice thing about that particular company is that they have a dual track career path where you can have the same grade and salary as a department head with 50 people while remaining purely technical.

                                                                          2. 6

                                                                            No lifetimes/borrow checking/mutability XOR sharing/RAII certainly makes writing Zig simpler, but you now need to be constantly vigilant about correctly representing and implementing their replacements in every application you write.

                                                                        While I’m an unabashed Rust shill these days, this can be a feature. There are a number of cases where XOR mutability can’t be proven by the borrow checker, and this gets in the way of expressing correct programs in the most efficient and readable way possible.

                                                                        At the same time, it does not eliminate all classes of concurrency bugs (like deadlocks), or all classes of memory safety bugs (like out-of-bounds indexing), or technically-memory-safe-but-is-it bugs (memory leaks), so you still have to think about difficult things when accessing variables, even if the compiler can prove a big chunk of it safe. Is it so bad to add a few more classes to your headspace when programming?

                                                                            I’d say “yes” but people have been writing programs that work for decades without it. So I can see the appeal.

                                                                            1. 13

                                                                              I’d say “yes” but people have been writing programs that work for decades without it. So I can see the appeal.

                                                                              Enforcing mutable xor shared is the thing that lets you build complex concurrent programs. People have been writing multithreaded C for years but often with simple pipeline or fork-join models. If you look at Erlang code and C code for the same problem, you will usually see orders of magnitude more concurrency in the Erlang version.

                                                                              I’m not a huge fan of how Rust enforces this property, but enforcing it in any way makes it possible to reason about concurrency locally in a way that enables a lot more of it.

                                                                              1. 2

                                                                            Not sure if erlang has mutable xor shared; thought it was just actors + copying (semantically). C code can have as much concurrency as Erlang, but it would be a bit verbose, with the callbacks/state-machines (libuv is a good example) that compilers which provide stackless coroutines would normally write for you. Although stackful coroutines (like erlang’s) are still an option.

                                                                            Rust’s borrowing scheme, along with Send + Sync, is primarily a way to avoid data races (a specific subset of race conditions), rather than an aid to correct concurrent code; sticking to safe abstractions often guides one either to poor borrowck bypasses (Arc<Mutex/RwLock>) when used sparingly, or to actors with channels when dataflow is more explicit. One can still have race conditions and deadlocks with both, so concurrency-correctness doesn’t seem like something the compiler helps with at that point - only preventing UB from memory accesses.

                                                                                1. 8

                                                                                  Not sure if Erlang has mutable xor shared

                                                                                  In Erlang, only one object is mutable: the process dictionary. You cannot take a reference to the process dictionary, and so you cannot send it as a message.

                                                                                  C code can have as much concurrency as Erlang, but it gets a bit verbose with the callbacks/state machines (libuv is a good example) that compilers offering stackless coroutines would normally generate for you. Stackful coroutines (like Erlang’s) are still an option, though.

                                                                                  The first non-trivial Erlang program that I wrote had over a thousand actors. I wrote it on a single-core laptop and then deployed it on a 64-CPU SGI monster. It got a linear speedup. Doing the same in C might be possible. Doing it in Erlang was easy.

                                                                                  One can still have race conditions and deadlocks with both, so concurrency-correctness doesn’t seem like something the compiler helps with at that point

                                                                                  Indeed, this is partly why we created the behaviour oriented concurrency model, so a compiler can guarantee deadlock freedom. Avoiding race conditions is harder (though BOC makes the common causes easy to avoid).

                                                                                  1. 3

                                                                                    You cannot take a reference to the process dictionary

                                                                                    Would this then be mutable xor shared?

                                                                                    Doing it in Erlang was easy

                                                                                    Wondering if that’s more due to garbage collection than to the concurrency primitives. Given 64-bit address spaces, one can spin up a million stackful C threads on a modern laptop with similar ease (rough sketch below). Presumably, allocations with linear lifetimes could use arenas in C to make the memory management simpler. But the Erlang version would likely be more expressive/easy.
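
                                                                                    (A rough illustration of the cheap-stackful-threads point, in Rust rather than C for brevity; the 10,000 thread count and 32 KiB stacks are arbitrary choices, and OS limits may bite well before a literal million:)

                                                                                      use std::thread;

                                                                                      fn main() {
                                                                                          // Address space is cheap on 64-bit: reserve tiny stacks and let the
                                                                                          // kernel fault pages in lazily. Scheduling, not memory, hurts first.
                                                                                          let handles: Vec<_> = (0..10_000u64)
                                                                                              .map(|i| {
                                                                                                  thread::Builder::new()
                                                                                                      .stack_size(32 * 1024) // vs Rust's 2 MiB default
                                                                                                      .spawn(move || i)
                                                                                                      .expect("spawn failed")
                                                                                              })
                                                                                              .collect();
                                                                                          let sum: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
                                                                                          println!("{sum}");
                                                                                      }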

                                                                                    behaviour oriented concurrency model

                                                                                    Nice, will give it a read. Is it related to the ponylang behaviors / GenServer approach of exposing only the handlers for channels rather than the blocking receivers? I remember understanding that approach to be deadlock-free until a discussion pointed out that it only avoids traditional deadlocks; logical ones are still possible, i.e. “I sent you something and expect a reply/handler call in order to progress the system”

                                                                                    1. 4

                                                                                      You cannot take a reference to the process dictionary

                                                                                      Would this then be mutable xor shared?

                                                                                      Yes, this is mutable but cannot be shared. Everything else is immutable and can be shared.

                                                                                      Wondering if that’s more due to garbage collection than to the concurrency primitives

                                                                                      No, the GC is largely irrelevant. The fact that you can pretend that you uniquely own everything that you reference (it may be shared, but it’s all immutable) is the key.

                                                                                      Is it related to the ponylang behaviors / GenServer approach of exposing only the handlers for channels rather than the blocking receivers?

                                                                                      The author of Pony was one of my coauthors. It’s designed to address problems that became apparent with Pony.

                                                                                      1. 4

                                                                                        Everything else is immutable and can be shared.

                                                                                        AFAIK, Erlang/BEAM don’t share data semantically: updates create a copy, and sending across processes deep-copies into the other process’s heap (ignoring internal ref-counting optimizations for large binaries), making it more like CoW than shared references.

                                                                          3. 4

                                                                            This kind of article confirms for me that one of the largest quality-of-life improvements of Rust over C++ is being able to do a decent amount of reflection and code generation in-language with annotations. Thankfully it’s happening after decades of C++ people screaming at the mere idea of reflection in comp.lang.c++ and various official forums.
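
                                                                            For a concrete (if well-worn) example of what I mean: serde’s derive macros generate all the serialization code at compile time from a single annotation, no external codegen step needed. A minimal sketch, assuming serde and serde_json as dependencies:

                                                                              // Cargo.toml: serde = { version = "1", features = ["derive"] }, serde_json = "1"
                                                                              use serde::{Deserialize, Serialize};

                                                                              #[derive(Serialize, Deserialize, Debug)]
                                                                              struct Config {
                                                                                  name: String,
                                                                                  retries: u32,
                                                                              }

                                                                              fn main() {
                                                                                  let c: Config = serde_json::from_str(r#"{"name":"demo","retries":3}"#).unwrap();
                                                                                  println!("{}", serde_json::to_string(&c).unwrap());
                                                                              }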

                                                                            1. 2

                                                                              I wonder how they compare to LLFIO in C++, which beats other solutions I tried by, like, an order of magnitude

                                                                              1. 22

                                                                                I remember John Carmack describing in one of his Doom 3 talks how he was shocked to discover that he made a mistake in the game loop that caused one needless frame of input latency. To his great relief, he discovered it just in time to fix it before the game shipped. He cares about every single millisecond. Meanwhile, the display server and compositing window manager introduce latency left and right. It’s painful to see how the computing world is devolving in many areas, particularly in usability and performance.

                                                                                1. 17

                                                                                  He cares about every single millisecond. Meanwhile, the display server and compositing window manager introduce latency left and right.

                                                                                  I will say the unpopular-but-true thing here: Carmack probably was wrong to do that, and you would be just as wrong to adopt that philosophy today. The bookkeeping counting-bytes-and-cycles side of programming is, in the truest Brooksian sense, accidental complexity which we ought to try to vanquish in order to better attack the essential complexity of the problems we work on.

                                                                                  There are still, occasionally, times and places when being a Scrooge, sitting in his counting-house and begrudging every last ha’penny of expenditure, is forced on a programmer, but they are not as common as is commonly thought. Even in game programming – always brought up as the last bastion of Performance-Carers who Care About Performance™ – the overwhelming majority very obviously don’t actually care about performance the way Carmack or Muratori do, and don’t have to care and haven’t had to for years. “Yeah, but will it run Crysis?” reached meme status nearly 20 years ago!

                                                                                  The point of advances in hardware has not been to cause us to become ever more Scrooge-like, but to free us from having to be Scrooges in the first place. Much as Scrooge himself became a kindly and generous man after the visitation of the spirits, we too can become kinder and have more generous performance budgets after being visited by even moderately modern hardware.

                                                                                  (and the examples of old software so often held up as paragons of Caring About Performance are basically just survivorship bias anyway – the average piece of software always had average performance for and in its era, and we forget how much mediocre stuff was out there while holding up only one or two extreme outliers which were in no way representative of programming practice at the time of their creation)

                                                                                  1. 35

                                                                                    There is certainly a version of performance optimization where the juice is not worth the squeeze, but is there any indication that Carmack’s approach fell into that category? The given example of “a mistake in the game loop that caused one needless frame of input latency” seems like a bug that definitely should have been fixed.

                                                                                    I’m having a hard time following your reasons for saying Carmack was “wrong” to care so much about performance. Is there some way in which the world would be better if he didn’t? Are you saying he should have cared about something else more?

                                                                                    1. 14

                                                                                      16 ms of input latency (a full frame at 60 Hz) is enormous for a fast-paced, mouse-driven game; definitely something the player can notice.

                                                                                    2. 18

                                                                                      There are different kinds of complexity. Everything in engineering is about compromises. If you decide to trade some latency for some other benefit, that’s fine. If you introduce latency because you weren’t modelling it in your trade-off space, that’s quite another matter.

                                                                                      1. 8

                                                                                        the overwhelming majority very obviously don’t actually care about performance the way Carmack or Muratori do, and don’t have to care and haven’t had to for years. “Yeah, but will it run Crysis?” reached meme status nearly 20 years ago!

                                                                                        the number of people complaining about game performance in literally any game forum, Steam reviews / comments / whatnot obviously shows that to be wrong. Businesses don’t care about performance but actual human beings do care; the problem is the constantly increasing disconnect between business and people.

                                                                                        1. 3

                                                                                          Minecraft – the best-selling video game of all time – is known for both its horrid performance and for being almost universally beloved by players.

                                                                                          The idea that “business” is somehow forcing this onto people (especially when Minecraft started out and initially exploded in popularity as an indie game with even worse performance than it has today) is just not supported by empirical reality, sorry.

                                                                                          1. 8

                                                                                            But the success is despite the game’s terrible performance, not thanks to it. Or do you think that if you asked people whether they would prefer Minecraft to be faster, they would say no? If it were not a problem, then a mod that does a marginal performance improvement certainly would not have 10M downloads: https://modrinth.com/mod/moreculling . So people definitely do care; they just don’t have a choice, because if you want to play “Minecraft” with your friends this is your only option. Just like, for instance, Slack, Gitlab or Jira are absolutely terrible, but you don’t have a choice but to use them because that’s where your coworkers are.

                                                                                            1. 5

                                                                                              I don’t know of any game that succeeded because of its great performance, but I know of plenty that have succeeded despite horrible performance. While performance can improve player satisfaction, for games it’s a secondary measure of success, and it’s foolish to focus on it without the rest of the game being good to play. It’s the case for most other software as well - most of the time, it’s “do the job well, in a convenient-to-use way, and preferably fast”. There are fairly few problems where the main factor in software solving them is speed first.

                                                                                              1. 2

                                                                                                I don’t know of any game that succeeded because of its great performance,

                                                                                                … every competitive shooter? you think Counter-Strike would have succeeded if it had the performance of, say, Neverwinter Nights 2?

                                                                                                1. 1

                                                                                                  Bad performance can kill a decent game. Good performance cannot bring success to an otherwise mediocre game. If it worked that way, my simple games that run at ~1000FPS would have taken over the world already.

                                                                                              2. 3

                                                                                                Or do you think that if you asked people whether they would prefer Minecraft to be faster, they would say no?

                                                                                                Even if a game was written by an entire army of Carmacks and Muratoris squeezing every last bit of performance they could get, people would almost certainly answer “yes” to “would you prefer it to be faster”. It’s a meaningless question, because nobody says no to it even when the performance is already very good.

                                                                                                And the fact that Minecraft succeeded as an indie game based on people loving its gameplay even though it had terrible performance really and truly does put the lie to the notion that game dev is somehow a unique performance-carer industry or that people who play games are somehow super uniquely sensitive to performance. Gamers routinely accept things that are way worse than the sins of your least favorite Electron app or React SPA.

                                                                                                1. 6

                                                                                                  I think a more generous interpretation of the hypothetical would be to phrase the question as: “Do you think the performance of Minecraft is a problem?”

                                                                                                  In that scenario, I would imagine that even people who love the game would likely say yes. At the same time, if you asked that question about some Carmack-ified game, you might get mostly “no” responses.

                                                                                                  1. 1

                                                                                                    Gamers routinely accept things

                                                                                                    how is accepting things an argument for anything? we are better than this as a species

                                                                                                    1. 1

                                                                                                      Can you clarify the claim that you are making, and why the chosen example has any bearing on it? Obviously gaming is different from other industries in some ways and the same in other ways.

                                                                                              3. 7

                                                                                                I think the Scrooge analogy only works in some cases. Scrooge was free to become more generous because he was dealing with his own money. In the same way, when writing programs that run on our own servers, we should feel free to trade efficiency for other things if we wish. But when writing programs that run on our users’ machines, the resources, whether RAM or battery life, aren’t ours to take, so we should be as sparing with them as possible while still doing what we need to do.

                                                                                                Unfortunately, that last phrase, “while still doing what we need to do”, is doing a lot of work there. I have myself shipped a desktop app that uses Electron, because there was a need to get it out quickly, both to make money for my (small, bootstrapped) company and to solve a problem which no other product has solved. But I’ve still put in some small efforts here and there to make the app frugal for an Electron app, while not nearly as frugal as it would be if it were fully native.

                                                                                                1. 6

                                                                                                  I used to be passionate about this too, but I really think villainizing accidental complexity is a false idol. Accidental complexity is the domain of the programmer. We will always have to translate some idealized functionality into a physically executable system. And that system should be fast. And that will always mean reorganizing the data structures and algorithms to be more performant.

                                                                                                  My point of view today is that implementation details should be completely embraced, and we should build software that takes advantage of its environment to the fullest. The best way to do this while also retaining the essential complexity of the domain is by completely separating specification from implementation. I believe we should be writing executable specifications and using them in model-based tests on the real implementation. The specifications disregard implementation details, making them much smaller and more comprehensible.

                                                                                                  I have working examples of doing this if this sounds interesting, or even farfetched.

                                                                                                  1. 3

                                                                                                    I agree with this view. I used to be enamored by the ideas of Domain Driven Design (referring to the code implementation aspects here and not the human aspects) and Clean/Hexagonal Architecture and whatever other similar design philosophies where the shape of your actual code is supposed to mirror the shape of the domain concepts.

                                                                                                    One of the easiest ways to break that spell is to try to work on a system with a SQL database where there are a lot of tables with a lot of relations, where ACID matters (e.g., you actually understand and leverage your transaction isolation settings), and where performance matters (e.g., many records, can’t just SELECT * from every JOINed table, etc).

                                                                                                    I don’t know where I first heard the term, but I really like to refer to “mechanical sympathy”. Don’t write code that exactly mirrors your business logic; your job as a programmer is to translate the business logic into machine instructions, not to translate business logic into business logic. So, write instructions that will run well on the machine.

                                                                                                  2. 3

                                                                                                    Everything is a tradeoff. For example, in C++, when you create a vector and grow it, the new elements are value-initialized (zeroed, for scalar types). You could improve performance by using a plain array that you allocate yourself. I usually forgo this optimization because it costs time and often makes the code more unpleasant to work with. I also don’t go and optimize the assembly by hand, unless there is no other way to achieve what I want. With that being said, performance is a killer feature and lack of performance can kill a product. We absolutely need developers who are more educated in performance matters. Performance problems don’t just cripple our own industry, they cripple the whole world, which relies on software. I think the mindset you described here is defeatist and, if it proliferates, will lead to worse software.

                                                                                                    1. 12

                                                                                                      You could improve performance by using a plain array that you allocate yourself.

                                                                                                      This one isn’t actually clear cut. Most modern CPUs do store allocation in L1: if you write an entire L1 line within the window of the store buffer, the CPU will materialise the line in L1 without fetching from memory or a remote cache (just sending out some broadcast invalidates if the line is in someone else’s cache). If you zero, this will definitely happen. If you don’t, and initialise piecemeal, you may hit the same optimisation, but you may end up pulling in data from memory and then overwriting it.

                                                                                                      If the array is big and you zero it eagerly, you may find that it triggers page faults up front to allocate the underlying storage. If you were going to use only a small amount of the total space, this will increase memory usage and hurt your cache. If you use all of it, then the kernel may see that you’ve rapidly faulted on two adjacent pages and eagerly handle a bit more in the page-fault handler. This pre-faulting may also move page faults off some later hot path and reduce jitter.

                                                                                                      Both approaches will be faster in some settings.

                                                                                                      1. 4

                                                                                                        Ah, you must be one of those “Performance-Carers who Care About Performance™” ;)

                                                                                                        Both approaches will be faster in some settings.

                                                                                                        This is so often the case, and it always worries me that attitudes like the GP’s lead to people not even knowing how to properly benchmark and performance-analyse anymore. Not too long ago I showed somebody who was an L4 SWE-SRE at Google a flamegraph - and he had never seen one before!

                                                                                                        1. 11

                                                                                                          Ah, you must be one of those “Performance-Carers who Care About Performance™” ;)

                                                                                                          Sometimes, and that’s the important bit. Performance is one of the things that I can optimise for; sometimes it’s not the right thing. I recently wrote a document processing framework for my next book. It runs all of its passes in Lua. It simplifies memory management by doing a load of copies of std::string. For a 200+ page book, well under one second of execution time is spent in all of that code; the vast majority is spent in libclang parsing all of the C++ examples and building semantic markup from them. The code is optimised for me to be able to easily add lowerings from new kinds of semantic markup to semantic HTML or typeset PDF, not for performance.

                                                                                                          Similarly, a lot of what I work on now is an embedded platform. Microcontrollers are insanely fast relative to memory sizes these days. The computers I learned to program on had a bit less memory, but CPUs that were two orders of magnitude slower. So the main thing I care about is code and data size. If an O(n) algorithm is smaller than an O(log(n)) one, I may still prefer it because I know n is probably 1, and never more than 8 in a lot of cases.

                                                                                                          But when I do want to optimise for performance, I want to understand why things are slow and how to fix it. I learned this lesson as a PhD student, where my PhD supervisor gave me some code that avoided passing things in parameters down deep function calls and stored them in globals instead. On the old machine he’d written it for, that was a speedup. Parameters were all passed on the stack and globals were fast to access (no PIC, load a global was just load from a hard-coded address). On the newer machines, it meant things had to go via a slower sequence for PC-relative loads and the accesses to globals impeded SSA construction and so inhibited a load of optimisation. Passing the state down as parameters kept it in registers and enabled local reasoning in the compiler. Undoing his optimisation gave me a 20% speedup. Introducing his optimisation gave him a greater speedup on the hardware that he originally used.

                                                                                                          1. 1

                                                                                                            This is so often the case, and it always worries me that attitudes like the GP’s lead to people not even knowing how to properly benchmark and performance-analyse anymore.

                                                                                                            I know how to and I teach it to people I work with. Just recently at work I rebuilt a major service, cut the DB queries it was doing by a factor of about 4 in the process, and it went from multi-second to single-digit-millisecond p95 response times.

                                                                                                            But I also don’t pull constant all-nighters worrying that there might be some tiny bit of performance I left on the table, or switching from “slow” to “faster” programming languages, or really any of the stuff people always allege I ought to be doing if I really “care about performance”. I approach a project with a reasonable baseline performance budget, and if I’m within that then I leave it alone and move on to the next thing. I’m not going to wake up in a cold sweat wondering if maybe I could have shaved another picosecond somewhere.

                                                                                                            And the fact that you can’t really respond to or engage with criticism of hyper-obsession with performance (or, you can but only through sneering strawmen) isn’t really helpful, y’know?

                                                                                                            1. 2

                                                                                                              And the fact that you can’t really respond to or engage with criticism of hyper-obsession with performance (or, you can but only through sneering strawmen) isn’t really helpful, y’know?

                                                                                                              How were we supposed to know that you were criticizing “hyper-obsession” that leads to all-nighters, worry, and loss of sleep over shaving off picoseconds? From your other post it sounded like you were criticizing Carmack’s approach, and I haven’t seen any indication that it corresponds to the “hyper-obsession” you describe.

                                                                                                              Where’s the strawman really?

                                                                                                          2. 2

                                                                                                            This one isn’t actually clear cut.

                                                                                                            I did a consulting gig a few years ago where just switching from zeroing with std::vector to pre-zeroed with calloc was a double-digit % improvement on Linux.
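
                                                                                                          (Tangent, but the same trick is reachable from Rust: as far as I know, vec![0u8; n] is specialized in std to take the alloc_zeroed path, i.e. the calloc-style route where the kernel hands back already-zeroed, lazily-faulted pages, whereas filling a buffer yourself touches every page up front. A rough sketch of the comparison; the 256 MiB size is an arbitrary choice and the timing method is crude:)

                                                                                                            use std::time::Instant;

                                                                                                            const N: usize = 256 * 1024 * 1024; // 256 MiB

                                                                                                            fn main() {
                                                                                                                // calloc-like: the vec![0; n] specialization can use alloc_zeroed,
                                                                                                                // so no page is actually written until first touch.
                                                                                                                let t = Instant::now();
                                                                                                                let a = vec![0u8; N];
                                                                                                                println!("vec![0u8; N]:       {:?}", t.elapsed());

                                                                                                                // Explicitly writing the zeros forces every page in immediately.
                                                                                                                let t = Instant::now();
                                                                                                                let mut b = Vec::with_capacity(N);
                                                                                                                b.extend(std::iter::repeat(0u8).take(N));
                                                                                                                println!("with_capacity+fill: {:?}", t.elapsed());

                                                                                                                // Use both so the allocations aren't optimized away.
                                                                                                                println!("{} {}", a[N - 1], b[N - 1]);
                                                                                                            }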

                                                                                                        2. 3

                                                                                                          I think the answer is somewhere in the middle. Should game programmers in general care? Maybe that’s too broad a question. Does id Software, producer of top-of-the-class, extremely fast shooters, benefit from someone who cares so deeply about making sure their games are super snappy? Probably yes.

                                                                                                        3. 5

                                                                                                          You think that’s bad? Consider the advent of “web-apps” for everything.

                                                                                                          On anything other than an M-series Apple computer they feel sluggish, even with absurd computer specifications. The largest improvement I felt going from an i9-9900K to an M1 was that Slack suddenly felt like a native app; going back to my old PC felt like going back to the ’90s.

                                                                                                          I would love to dig into why.

                                                                                                          1. 11

                                                                                                            The bit that was really shocking to me was how ctrl-1 and ctrl-2 (switching Slack workspaces) took around a second on a powerful AMD laptop on Linux.

                                                                                                            At work we use Matrix/Element. It has its share of issues but the performance isn’t nearly as bad.

                                                                                                            1. 8

                                                                                                              I don’t really see how switching tabs inside a program is really related to the DRM subsystem, or to Kernel Mode Setting.

                                                                                                              1. 1

                                                                                                                I thought they were referring to ctrl-alt-F1/F2 switching (virtual terminals), which indeed used to be slow.

                                                                                                                My bad.

                                                                                                          2. 7

                                                                                                            There is a wide spectrum of performance in Electron apps, although it’s mostly VS Code versus everyone else. VS Code is not particularly snappy, but it’s decent. Discord also feels faster than other messengers. The rest of the highly interactive webapps I use are unbearably sluggish.

                                                                                                            So I think these measured 6 ms are irrelevant. I’m on Wayland GNOME and everything feels snappy except highly interactive webapps. Even my 10-year-old laptop felt great, but I retired it because some webapps were too painful (while compiling Rust felt… OK? Also, browsing non-JS content sites was great).

                                                                                                            Heck, my favorite comparison is to run Q2 on WASM. How can that feel so much snappier than a chat application like Slack?

                                                                                                            1. 12

                                                                                                              I got so accustomed to the latency, when I use something with nearly zero latency (e.g. an 80’s computer with CRT), I get the surreal impression that the character appeared before I pressed the button.

                                                                                                              1. 4

                                                                                                                I had the same feeling recently with a Commodore 64.

                                                                                                                It really was striking how a computer with less power than the microcontroller in my keyboard could feel so fast; but obviously, when you actually give it an instruction to think about, the limitations of the machine are clear.

                                                                                                                EDIT: Oh hey, I wasn’t kidding.

                                                                                                                The CPU in my keyboard is 16MHz: ControllerBoard Microcontroller PDF Datasheet

                                                                                                                The CPU in the Commodore 64 I was using was 0.9-1 MHz: https://en.wikipedia.org/wiki/MOS_Technology_6510

                                                                                                            2. 4

                                                                                                              As a user on smaller platforms without native apps, I will gladly take a web app or PWA over no access.

                                                                                                              In the ’90s almost everything was running Microsoft Windows on x86 for personal computers, with almost everyone running one of maybe 5 different screen resolutions, so it was more reasonable to make a singular app for a singular CPU architecture & call it a day. Also, security was an afterthought. To support all of these newer platforms, architectures, & device types, & have the code in a sandbox, going the HTML + CSS + JavaScript route is a tradeoff many are willing to take for portability, since browsers are ubiquitous. The weird thing is that a web app doesn’t have to be slow, & not every application has the same demands to warrant a native release.

                                                                                                              1. 10

                                                                                                                Having been around the BSD and Linux block 20+ years ago, I share the sentiment. Quirky and/or slow apps are annoying, but still more efficient than no apps.

                                                                                                                Besides, as far as UIs go, “native” is just… a moderately useful description at this point. macOS is the only one that’s sort of there, but that wasn’t always the case in all this time, either (remember when it shipped with two toolkits and three themes?). Windows has like three generations of UI toolkits, and one of the two on which the *nix world has mostly converged is frequently used along with things like Kirigami, making it native in the sense that it all eventually goes through some low-level Qt drawing code and color schemes kind of work, but that’s about it.

                                                                                                                Don’t get me wrong, I definitely prefer a unified “native” experience; even several native options were tolerable, like back when you could tell a Windows 3.x-era application from other Windows 98 applications because the Open file… dialog looked different and whatnot, but keybindings were generally the same, widgets mostly worked the same etc.

                                                                                                                But that’s a lost cause, this is not how applications are developed anymore – both because developers have lost interest in it and because most platform developers (in the wide sense – e.g. Microsoft) have lost interest in it. A rich, native framework is one of the most complex types of software to maintain, with some of the highest validation and maintenance costs. Building one, only to find almost everyone avoids it due to portability or vendor lock-in concerns unless they literally don’t have a choice, and that even then they try to use as little of it as humanly possible, is not a very good use of already scant resources in an age where most of the profit is in mobile and services, not desktop.

                                                                                                                You can focus on the bad and point out that the vast majority of Electron applications out there are slow, inconsistent, and their UIs suck. Which is true, but you can also focus on the good and point out that the corpus of Electron applications we have now is a lot wider and more capable than their Xaw/Motif/Wx/Xforms/GTK/Qt/a million others – such consistency, much wow! – equivalents from 25 years ago, whose UIs also sucked.

                                                                                                          3. 2

                                                                                                            “15 engineers and constantly pushing” is the new “8 megabytes and constantly swapping”.

                                                                                                            joke aside, the local runners are indeed a bit frustrating: basically you need one VM per runner, which doesn’t really scale. Like others in this thread, my solution to this is to put most of the deployment in bash scripts (ex.: https://github.com/ossia/score/tree/master/ci ); sadly you then lose the ability to use many of the cool pre-made actions for many use cases. Just building on Nix is definitely not enough: very often I’ll have something that builds fine on Nix but fails on another distro with a slightly different GCC or Clang setup

                                                                                                            1. 1

                                                                                                              joke aside, the local runners are indeed a bit frustrating: basically you need one VM per runner, which doesn’t really scale.

                                                                                                              I was looking into this for CI; the official recommendation is you have some other service that can spawn runners, via whatever mechanism. They provide a sample using k8s, but I suspect you could do something that requires less care and feeding. Maybe someone else has already done it; it’d save me the effort of having to write it if so.

                                                                                                            2. 56

                                                                                                              I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.

                                                                                                              WHY this is the case is a respectable topic for sure, but since I already know the outcome I’m more interested in other topics.

                                                                                                              1. 24

                                                                                                                I believe in your perception, but I wonder how people determine this sort of thing.

                                                                                                                It seems like an availability heuristic: if you notice an app is bad, and discover it’s made in Electron, you remember that. But if an app isn’t bad, do you even check how it was built?

                                                                                                                Sort of like how you can always tell bad plastic surgery, but not necessarily good plastic surgery.

                                                                                                                1. 29

                                                                                                                  On macOS, there has been a shift in the past decade from noticing apps have poor UIs and seeing that they are Qt, to seeing that they are Electron. One of the problems with the web is that there’s no standard rich text edit control. Cocoa’s NSTextView is incredibly powerful, it basically includes an entire typesetting engine with hooks exposed to everything. Things like drag-and-drop, undo, consistent keyboard shortcuts, and so on all work for free if you use it. Any app that doesn’t use it, but exposes a way of editing text, sticks out. Keyboard navigation will work almost how you’re used to, for example. In Electron and the web, everyone has to use a non-standard text view if they want anything more than plain text. And a lot of apps implement their own rather than reusing a single common one, so there isn’t even consistency between Electron apps.

                                                                                                                  1. 5

                                                                                                                    In Electron and the web, everyone has to use a non-standard text view if they want anything more than plain text. And a lot of apps implement their own rather than reusing a single common one, so there isn’t even consistency between Electron apps.

                                                                                                                    This is probably the best criticism of Electron apps in this thread that’s not just knee-jerk dogpiling. It’s absolutely valid, and even for non-Electron web apps it’s a real problem. I work at a company that had its own collaborative rich-text editor based on OT, and it is both a tonne of work to maintain and extend, and also subtly (and sometimes not-so-subtly) different from every other rich text editor out there.

                                                                                                                    1. 3

                                                                                                                      I’ve been using Obsidian a fair bit lately. I’m pretty sure it’s Electron-based, but on OSX that still means that most of the editing shortcuts work properly: ctrl-a and ctrl-e for start and end of line, ctrl-n and ctrl-p for next and previous line, etc. These are all Emacs hotkeys that ended up in OSX via NeXT. Want to know what the most frustrating thing has been with using Obsidian cross-platform? Those Emacs hotkeys that all work on OSX don’t work in the Linux version… on the Linux version they do things like Select All or Print. Every time I switch from my Mac laptop to my Linux desktop I end up frustrated by all of the crap that happens when I use my muscle-memory hotkeys.

                                                                                                                      1. 7

                                                                                                                        This is something that annoys me about Linux desktops. OPENSTEP and CDE, and even Emacs, supported a meta key so that control could be control and invoking shortcuts was a different key. Both KDE and GNOME were first released after Windows keys were ubiquitous on PC keyboards and could have been used as a command / meta key, yet they copied the Windows model for shortcuts.

                                                                                                                        More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.

                                                                                                                        1. 4

                                                                                                                          More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.

                                                                                                                          You mean middle click, right? I say that in jest, but anytime I’m on a non-Linux platform, I find myself highlighting and middle clicking, then realizing that just doesn’t work here and sadly finding the actual clipboard keys.

                                                                                                                          1. 3

                                                                                                                            X11’s select buffer always annoyed me because it conflates two actions. Selecting and copying are distinct operations, and need to be, to support operations like select-then-paste-to-overwrite. Implicitly doing a copy-like operation is annoying and hits a bunch of common corner cases. If I have something selected in two apps, switching between them and then pasting in a third makes it unclear which thing I’ll paste (some apps publish the selection to the select buffer as soon as it’s made, some do it when they’re active and a selection exists; it’s not clear which is the ‘correct’ behaviour).

                                                                                                                            The select buffer exists to avoid needing a clipboard server that holds a copy of the object being transferred, but drag and drop (which worked reliably on OPENSTEP and was always a pain on X11) is a better interaction model for that. And, when designed properly, has better support for content negotiation, than the select buffer in X11. For example, on macOS I can drag a file from the Finder to the Terminal and the Terminal will negotiate the path of the file as the type (and know that it’s a file, not a string, so properly escape it) and insert it into the shell. If you select a file in any X11 file manager, does this work when you middle click in an X11 terminal? Without massive hacks and tight coupling?

                                                                                                                            1. 3

                                                                                                                              If you select a file in any X11 file manager, does this work when you middle click in an X11 terminal?

                                                                                                                              There’s no reason why it shouldn’t at the X level - a middle click does the same content negotiation as any other clipboard or drag-and-drop operation (in fact, it is literally the same: asking for the TARGETS property, then calling XConvertSelection with the format you want; the only difference is that second argument to XConvertSelection - PRIMARY, CLIPBOARD, or XdndSelection).

                                                                                                                              If it doesn’t work, it is probably just because the terminal doesn’t try. Which I’d understand; my terminal unconditionally asks for strings too, because knowing what is going on in the running application is a bit of a pain. The terminal doesn’t know if you are at a shell prompt or a text editor or a Python interpreter unless you hack up those programs to inform it somehow. (This is something I was fairly impressed with on the Mac: those things do generally work, but I don’t know how. My guess is massive hacks and tight coupling between their shell extensions and their terminal extensions.)

                                                                                                                              need to be, to support operations like select-then-paste-to-overwrite

                                                                                                                              Eh, I made it work in my library! I like middle click a lot and frequently double-click one thing to select it, then double-click followed by middle-click in another to replace its content. Heck, that’s how I do web links a great many times (I can’t say a majority, but several times a day). Made me a little angry that it wouldn’t work in the mainstream programs, so I made it work in mine.

                                                                                                                              It is a bit hacky though: it does an automatic string copy of the selection into an internal buffer of the application when replacing the selection. Upon pasting, if it is asked to paste the current selection over itself, it instead uses that saved buffer. Theoretically pure? Nah. Practically perfect? Yup. Works For Me.

                                                                                                                              If I have something selected in two apps, switching between them and then pasting in a third makes it unclear which thing I’ll paste (some apps publish the selection to the select buffer as soon as it’s made, some do it when they’re active and a selection exists; it’s not clear which is the ‘correct’ behaviour).

                                                                                                                              You know, I thought this was in the spec and loaded it up to prove it and… it isn’t, lol. It is clear to me what the correct behavior is (asserting ownership of the global selection just from switching between programs is obviously wrong - it’d make copy/paste between two programs with a background selection impossible, since trying to paste in one would switch the active window, which would change the selection; which is just annoying). I’d assert the selection if and only if it is an explicit user action to change the selection or to initiate a clipboard cut/copy command, but yeah, the ICCCM doesn’t go into any of this and neither does any other official document I’ve checked.

                                                                                                                              tbh, I think this is my biggest criticism of the X ecosystem in general: there are little bits that are underspecified. In some cases, they just never defined a standard, though it’d be easy, and thus you get annoying interop problems. In other cases, like here, they describe how you should do something, but not when or why you should do it. There’s a lot to like about “mechanism, not policy”, but… it certainly has its downsides.

                                                                                                                              1. 1

                                                                                                                                Fair points and a difference of opinion probably driven by difference in use. I wasn’t even thinking about copying and pasting files, just textual snippets. Middle click from a file doesn’t work, but dragging and dropping files does lead to the escaped file path being inserted into the terminal.

                                                                                                                                I always appreciate the depth of knowledge your comments bring to this site, thank you for turning my half-in-jest poke at MacOS into a learning opportunity!

                                                                                                                            2. 2

                                                                                                                              More than 50% of the reason I use a Mac is that copy and paste in the terminal are the same shortcut as every other app.

                                                                                                                              You know, I’m always ashamed to say that, and I won’t rate the % that it figures into my decision, but me too. For me, the thing I really like is that I can use full vim mode in JetBrains tools, but all my Mac keyboard shortcuts also work well. Because the mac command key doesn’t interfere ever with vim mode. And same for terminal apps. But the deciding feature is really JetBrains… PyCharm Pro on Mac is so much better than PyCharm Pro on Linux just because of how this specific bit of behavior influences IdeaVim.

                                                                                                                              I also like Apple’s hardware better right now, but all things being equal, this would nudge me towards mac.

                                                                                                                              1. 1

                                                                                                                                Nothing to be ashamed of. I’m a diehard Linux user. I’ve been at my job 3 years now; that entire time I had a goal of getting a Linux laptop, I purposefully picked products that enabled that, and I have finally switched. I intend to maintain the development environment stuff myself (this is challenging because I’m not only the only Linux engineer, I’m also the only x86 engineer).

                                                                                                                                I say all this to hammer home that despite how much I prefer Linux (many, many warts and all), this is actually one of the biggest things by far that I miss about my old work Mac.

                                                                                                                              2. 1

                                                                                                                                Have you seen or tried Kinto?

                                                                                                                                1. 1

                                                                                                                                  I have not heard of it and my ability to operate a search engine to find the relevant thing is failing me.

                                                                                                                          2. 18

                                                                                                                            Plus we live in a world now where we expect tools to be released cross-platform, which means that I think a lot of people compare an Electron app on, say, Linux to an equivalent native app on Linux, and argue that the native app would clearly be better.

                                                                                                                            But from what I remember of the days before Electron, what we had on Linux was either significantly worse than the same app released for other platforms, or nothing at all. I’m thinking particularly of Skype for Linux right now, which was a pain to use and supported relatively few of the features other platforms had. The Electron Skype app is still terrible, but at least it’s better than what we had before.

                                                                                                                            1. 8

                                                                                                                              Yeah, I recall those days. Web tech is the only reason Linux on the desktop isn’t even worse than it was then.

                                                                                                                          3. 8

                                                                                                                            Weird, all the ones I’ve used have been excellent, with great UX. It’s the ones that go native that seem to struggle with their design. Prolly because XML is terrible for designing apps.

                                                                                                                            1. 35

                                                                                                                              I’d really like to see how you and the parent comment author interact with your computers. For me, Electron apps are at best barely usable, ranging from terrible UI/UX with some useful features and useless eye candy at the cost of performance (VSCode), to really insulting to use (Slack, Discord). But then I like my margins set to 0 and the information density on my screen to approximate the average circa-2005 Japanese website. For instance, Ripcord (https://cancel.fm/ripcord/static/ripcord_screenshot_win_6.png) is infinitely more pleasant for me to use than Discord.

                                                                                                                              But most likely some people disagree - from the article:

                                                                                                                              The McDonald’s ordering kiosk, powering the world’s biggest food retailer, is entirely built with Chromium.

                                                                                                                              I’m really amazed, for instance, that anyone would use McDonald’s kiosks as an example of something good – you can literally see most of these poor things stutter with 10fps animations and constantly struggle to show anything in a timely manner.

                                                                                                                              1. 26

                                                                                                                                I’m really amazed for instance that anyone would use McDonald’s kiosks as an example of something good

                                                                                                                                My children - especially the 10 and 12 year old - will stand around mocking their performance while ordering food.

                                                                                                                                1. 2

                                                                                                                                  I’d really like to see how you and the parent comment author interact with your computer. For me electron apps are at best barely useable, ranging from terrible UI/UX but with some useful features and useless eye candy at the cost of performance (VSCode), to really insulting to use (Slack, Discord).

                                                                                                                                  IDK, Slack literally changed the business world by moving many companies away from email. As it turns out, instant communication and the systems Slack provided to promote communication almost certainly resulted in economic growth as well as the ability to increase remote work around the world. You can call that “insulting” but it doesn’t change the facts of its market- and mind-share.

                                                                                                                                  Emoji reactions, threads, huddles, and screen sharing are all squarely in the realm of UX and were popularized by Slack. I would argue they wouldn’t have been able to make Slack so feature-packed without using web tech, especially when you see their app marketplace, which is a huge UX boon.

                                                                                                                                  Slack is not just a “chat app”.

                                                                                                                                  If you want a simple text-based chat app with 0-margins then use IRC.

                                                                                                                                  I could easily make the same argument for VSCode: you cannot ignore the market- and mind-share. If the UX was truly deplorable then no one would use it.

                                                                                                                                  Everything else is anecdotal and personal preference which I do not have any interest in discussing.

                                                                                                                                  1. 3

                                                                                                                                    If you want a simple text-based chat app with 0-margins then use IRC.

                                                                                                                                    I truly miss the days when you could actually connect to Slack with an IRC client. That feature went away in… I dunno, 2017 or so. It worked fabulously well for me.

                                                                                                                                    1. 2

                                                                                                                                      Yeah Slack used to be much easier to integrate with. As a user I could pretty easily spot the point where they had grown large enough that it was time to start walling in that garden …

                                                                                                                                2. 7

                                                                                                                                  excellent with great UX.

                                                                                                                                  This is not a direct personal attack or criticism, but a general comment:

                                                                                                                                  I find it remarkable that, when I professionally criticise GNOME, KDE and indeed Electron apps in my writing, people frequently defend them and say that they find them fine – in other words, as a generic global value judgement – without directly addressing my criticisms.

                                                                                                                                  I use one Electron app routinely, Panwriter, and that’s partly because it tries to hide its UI. It’s a distraction-free writing tool. I don’t want to see its UI. That’s the point. But the UI it does have is good and standards-compliant. It has a menu bar; those menus appear in the OS’s standard place; they respond to the standard keystrokes.

                                                                                                                                  My point is:

                                                                                                                                  There are objective, independent standards for UI, of which IBM CUA is the #1 and the Mac HIG are the #2.

                                                                                                                                  “It looks good and I can find the buttons and it’s easy to work” does not equate to “this program has good UI.”

                                                                                                                                  It is, IMHO, more important to be standards-compliant than it is to look good.

                                                                                                                                  Most Electron apps look like PWAs (which I also hate). But they are often pretty. Looking good is nice, but function is more important. For an application running on an OS, fitting in with that OS and using the OS’s UI is more important than looking good.

                                                                                                                                  But today ISTM that this itself is an opinion, and an unusual and unpopular one. I find that bizarre. To me it’s like saying that a car or motorbike must have the standard controls in the standard places and they must work in the standard way, and it doesn’t matter if it’s a drop-dead beautiful streamlined work of art if those aren’t true. Whereas it feels like the prevailing opinion now is that a streamlined work of art with no standard controls is not only fine but desirable.

                                                                                                                                3. 4

                                                                                                                                  This is called confirmation bias.

                                                                                                                                  1. 4

                                                                                                                                    No, that’s not what confirmation bias means.

                                                                                                                                    1. 4

                                                                                                                                      since I already know the outcome

                                                                                                                                      This is exactly what confirmation bias refers to.

                                                                                                                                      1. 9

                                                                                                                                        Confirmation bias is cherry-picking evidence to support your preconceptions. This is simply having observed something (“all Electron apps I’ve used were terrible”) and not being interested in why — which is understandable, since the conclusion was “avoid Electron”.

                                                                                                                                        It’s okay at some point to decide you have looked at enough evidence, make up your mind, and stop spending time examining any further evidence.

                                                                                                                                        1. 2

                                                                                                                                          Yes, cherry picking is part of it, but confirmation bias is a little more extensive than that.

                                                                                                                                          It also affects when you even seek evidence, such as only checking what an app is built with when it’s slow, but not checking when it’s fast.

                                                                                                                                          It can affect your interpretation and memory as well. E.g., if you already believe electron apps are slow, you may be more likely to remember slow electron apps and forget (if you ever learned of) fast electron apps.

                                                                                                                                          Don’t get me wrong, I’m guilty of this too. Slack is the canonical slow electron app, and everyone remembers it. Whereas my 1Password app is a fast electron app, but I never bothered to learn that until the article mentioned it.

                                                                                                                                          All of which is to say, I’m very dubious that people’s personal experiences in the comments are an unbiased survey of the state of electron app speeds. And if your data collection and interpretation are biased, it doesn’t matter how much of it you’ve collected. (E.g., the disastrous 1936 Literary Digest prediction of Landon defeating Roosevelt, which polled millions of Americans, but from non-representative automobile and telephone owners.)

                                                                                                                                          1. 3

                                                                                                                                            It also affects when you even seek evidence

                                                                                                                                            We’re talking about someone who stopped seeking evidence, so it doesn’t apply here.

                                                                                                                                            All of which is to say, I’m very dubious that people’s personal experiences in the comments are an unbiased survey of the state of electron app speeds.

                                                                                                                                            So would I.

                                                                                                                            And it doesn’t help that apparently different people have very different criteria for what constitutes acceptable performance. My personal criterion would be “within an order of magnitude of the maximum achievable”. That is, if it is 10 times slower than the fastest possible, that’s still acceptable to me in most settings. Thing is, though, I’m pretty sure many programs are _three_ orders of magnitude slower than they could be, and I don’t notice, because when I click a button or whatever they still react in fewer frames than I can consciously perceive — but that still impacts battery life, and still necessitates a faster computer than necessary. Worse, in practice I have no idea how much slower than necessary an app really is. The best I can do is notice that a similar app feels snappier, or doesn’t use as many resources.

                                                                                                                                            1. 2

                                                                                                                                              We’re talking about someone who stopped seeking evidence, so it doesn’t apply here.

                                                                                                                                              ??? It still applies if they stopped seeking evidence because of confirmation bias. I’m not clear what you’re trying to say here.

                                                                                                                                              1. 2

                                                                                                                                                It still applies if they stopped seeking evidence because of confirmation bias.

                                                                                                                                                Oh but then you have to establish that confirmation bias happened before they stopped. And so far you haven’t even asserted it — though if you did I would dispute it.

                                                                                                                                                And even if you were right, and confirmation bias led them to think they have enough evidence even though they do not, and then stopped seeking, the act of stopping itself is not confirmation bias, it’s just thinking one has enough evidence to stop bothering.

                                                                                                                                Ceasing to seek evidence does not confirm anything, by the way. It goes both ways: either the evidence confirms the belief and we go “yeah yeah, I know” and forget about it, or it weakens the belief and we go “don’t bother, I know it’s bogus in some way”. Under confirmation bias, one would remember the confirming evidence, and use it later.

                                                                                                                                                1. 1

                                                                                                                                                  Oh but then you have to establish that confirmation bias happened before they stopped. And so far you haven’t even asserted it — though if you did I would dispute it.

                                                                                                                                                  As a default stance, that’s more likely to be wrong than right.

                                                                                                                                  Which of these two scenarios is more likely: that the users in this thread carefully weighed the evidence in an unbiased manner, examining both Electron and non-Electron apps, seeking both confirmatory and disconfirmatory evidence… or that they made a gut judgment based on a mix of personal experience and public perception?

                                                                                                                                                  The second is way more likely.


                                                                                                                                                  the act of stopping itself is not confirmation bias, it’s just thinking one has enough evidence to stop bothering.

                                                                                                                                                  It’s the reason behind stopping, not the act itself, that can constitute “confirmation bias”.

                                                                                                                                                  … either the evidence confirms the belief and we go “yeah yeah, I know” and forget about it, or it weakens it and we go “don’t bother, I know it’s bogus in some way”. Under confirmation bias, one would remember the confirming evidence, and use it later.

                                                                                                                                                  As a former neuroscientist, I can assure you, you’re using an overly narrow definition not shared by the actual psychology literature.

                                                                                                                                                  1. 1

                                                                                                                                                    From Wikipedia:

                                                                                                                                                    Confirmation bias (also confirmatory bias, myside bias,[a] or congeniality bias[2]) is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs or values.[3] People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs.

                                                                                                                                                    Sounds like a reasonable definition, not overly narrow. And if you as a specialist disagree with that, I encourage you to correct the Wikipedia page. Assuming however you do agree with this definition, let’s pick apart the original comment:

                                                                                                                                                    I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.

                                                                                                                                    WHY this is the case is a respectable topic for sure, but since I already know the outcome I’m more interested in other topics.

                                                                                                                                                    Let’s see:

                                                                                                                                                    I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.

                                                                                                                                                    If that’s true, that’s not confirmation bias — because it’s true. If it isn’t, yeah, we can blame confirmation bias for ignoring good Electron apps. Maybe they only checked when the app was terrible or something? At this point we don’t know.

                                                                                                                                    Now one could say with high confidence that this is confirmation bias, if they personally believe a good proportion of Electron apps are not terrible. They would conclude it’s highly unlikely that the original commenter really only stumbled on terrible Electron apps, so they must have ignored (or failed to notice) the non-terrible ones. Which indeed would be textbook confirmation bias.

                                                                                                                                                    But then you came in and wrote:

                                                                                                                                                    since I already know the outcome

                                                                                                                                                    This is exactly what confirmation bias refers to.

                                                                                                                                                    Oh, so you were seeing the bias in the second paragraph:

                                                                                                                                    WHY this is the case is a respectable topic for sure, but since I already know the outcome I’m more interested in other topics.

                                                                                                                                                    Here we have someone who decided they had seen enough, and decided to just avoid Electron and move on. Which I would insist is a very reasonable thing to do, should the first paragraph be true (which it is, as far as they’re concerned).

                                                                                                                                    Even if the first paragraph was full of confirmation bias, I don’t see any here. Specifically, I don’t see any favouring of supporting information, any disfavouring of contrary information, or any misinterpreting of anything in a way that suits them. And again, if you as a specialist say confirmation bias is more than that, I urge you to correct the Wikipedia page.

                                                                                                                                                    1. 1

                                                                                                                                                      is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs or values

                                                                                                                                                      But… Wikipedia already agrees with me here. This definition is quite broad in scope. In particular, note the “search for” part. Confirmation bias includes biased searches, or lack thereof.

                                                                                                                                                      If that’s true, that’s not confirmation bias — because it’s true.

                                                                                                                                                      Confirmation bias is about how we search for and interpret evidence, regardless of whether our belief is true or not. Science is not served by only seeking to confirm what we know. As Karl Popper put it, scientists should always aim to falsify their theories. Plus, doing so assumes the conclusion; we might only think we know the truth, but without seeking to disconfirm, we’d never find out.

                                                                                                                                                      I don’t see any favouring of supporting information, or any disfavouring of contrary information, or misinterpreting anything in a way that suits them

                                                                                                                                                      Why would they be consciously aware of doing that, or say so in public if they were? People rarely think, “I’m potentially biased”. It’s a scientific approach to our own cognition that has to be cultivated.


                                                                                                                                                      To reiterate, it’s most likely we’re biased, haven’t done the self-reflection to see that, and haven’t systematically investigated electron vs non-electron performance to state anything definitively.

                                                                                                                                                      And I get it, too. We only have so many hours in the day, we can’t investigate everything 100%, and quick judgments are useful. But, they trade off speed for accuracy. We should strive to remember that, and be humble instead of overconfident.

                                                                                                                                                      1. 1

                                                                                                                                                        In particular, note the “search for” part. Confirmation bias includes biased searches, or lack thereof.

                                                                                                                                                        As long as you’re saying “biased search”, and “biased lack of search”. The mere absence of search is not in itself a bias.

                                                                                                                                                        quick judgments are useful. But, they trade off speed for accuracy.

                                                                                                                                                        Yup. Note that this trade-off is a far cry from actual confirmation bias.


                                                                                                                                                        If that’s true, that’s not confirmation bias — because it’s true.

                                                                                                                                                        Confirmation bias is about how we search for and interpret evidence, regardless of whether our belief is true or not.

                                                                                                                                                        Wait, I think you’re misinterpreting the “it” in my sentence. By “it”, I meant literally the following statement: “I’ve used many Electron apps, both mass-market and niche, and they’ve all been terrible. In some cases there’s a native app that does the same thing and it’s always better.”

                                                                                                                                                        That statement does not say whether all Electron apps are terrible or whether Electron makes apps terrible, or anything like that. It states what had been directly observed. And if it is true that:

                                                                                                                                                        1. They used many Electron apps.
                                                                                                                                                        2. They’ve all been terrible.
                                                                                                                                                        3. Every time there was an alternative it was better.

                                                                                                                                        Then believing and writing those 3 points is not confirmation bias. It’s just stating the facts as they happened. If on the other hand it’s not true, then we can call foul:

                                                                                                                                                        1. If they only used a couple Electron apps, that’s inflating evidence.
                                                                                                                                        2. If not all Electron apps they used have been terrible, there’s confirmation bias for omitting (or forgetting) the ones that weren’t.
                                                                                                                                                        3. If sometimes the alternative was worse, again, confirmation bias.

                                                                                                                                                        As Karl Popper put it, scientists should always aim to falsify their theories.

                                                                                                                                        For the record I’m way past Popper. He’s not wrong, and his heuristic is great in practice, but now we have probability theory. Long story short, the material presented in E. T. Jaynes’ Probability Theory: The Logic of Science should be part of the mandatory curriculum, before you even leave high school — even if maths and science aren’t your chosen field.

                                                                                                                                                        One trivial, yet important, result from probability theory, is that absence of evidence is evidence of absence: if you expect to see some evidence of something if it’s true, then not seeing that evidence should lower your probability that it is true. The stronger you expect that evidence, the further your belief ought to shift.
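
                                                                                                                                        Spelled out (a one-line consequence of Bayes’ rule, assuming 0 < P(H) < 1, where H is the hypothesis and E the evidence you’d expect if H were true):

                                                                                                                                        ```latex
                                                                                                                                        \[
                                                                                                                                        P(E \mid H) > P(E \mid \neg H)
                                                                                                                                          \;\Rightarrow\; P(\neg E \mid H) < P(\neg E)
                                                                                                                                          \;\Rightarrow\; P(H \mid \neg E)
                                                                                                                                            = P(H)\,\frac{P(\neg E \mid H)}{P(\neg E)} < P(H).
                                                                                                                                        \]
                                                                                                                                        ```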

                                                                                                                                                        Which is why Popper’s rule is important: by actively seeking evidence, you make it that much more probable to stumble upon it, should your theory be false. But the more effort you put into falsifying your theory, and failing, the more likely your theory is true. The kicker, though, is that it doesn’t apply to just the evidence you actively seek out, or the experimental tests you might do. It applies to any evidence, including what you passively observe.

                                                                                                                                                        Why would they be consciously aware of doing that, or say so in public if they were? People rarely think, “I’m potentially biased”.

                                                                                                                                        Oh no you don’t. We’re all fallible mortals, all potentially biased, so I can quote a random piece of text, say “This is exactly what confirmation bias refers to”, and that’s okay because surely the human behind it has confirmation bias like the rest of us even if they aren’t aware of it, right? That’s a fully general counterargument; it does not work that way.

                                                                                                                                                        There is a way to assert confirmation bias, but you need to work from your own prior beliefs:

                                                                                                                                                        1. Say you have very good reasons to believe that (i) at least half of Electrons app are not terrible, and (ii) confirmation bias is extremely common.
                                                                                                                                        2. Say you accept that they have used at least 10 such apps. Under your prior, the random chance they’ve all been terrible is less than 1 in a thousand (the arithmetic is spelled out after this list). The random chance that confirmation bias is involved in some way, however, is quite a bit higher.
                                                                                                                                                        3. Do the math. What do you know, it is more likely this comment is a product of confirmation bias than actual observation.
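
                                                                                                                                        The arithmetic for step 2, under the 50/50 prior from step 1 and treating the 10 apps as independent draws:

                                                                                                                                        ```latex
                                                                                                                                        \[
                                                                                                                                        P(\text{all 10 terrible} \mid p = \tfrac{1}{2})
                                                                                                                                          = \left(\tfrac{1}{2}\right)^{10}
                                                                                                                                          = \tfrac{1}{1024}
                                                                                                                                          < \tfrac{1}{1000}.
                                                                                                                                        \]
                                                                                                                                        ```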

                                                                                                                                        Something like that. It’s not exact either (there’s selection bias, the possibility of “many” meaning only “5”, the fact that we probably don’t agree on the definitions of “terrible” and “better”), but you get the gist of it: you can’t invoke confirmation bias from a pedestal. You have to expose yourself a little bit, reveal your priors at the risk of other people disputing them; otherwise your argument falls flat.

                                                                                                                                                        1. 1

                                                                                                                                                          Our comments are getting longer and longer, we’re starting to parse minutiae, and I just don’t have the energy to add in the Bayesian angle and keep it going today.

                                                                                                                                                          It’s been stimulating though! I disagree, but I liked arguing with someone in good faith. Enjoy your weekend out there.

                                                                                                                                                          1. 2

                                                                                                                                                            If Robert Aumann himself can at the same time produce his agreement theorem and be religious, it’s okay for us to give up. :-)

                                                                                                                                                            Thanks for engaging with me thus far.

                                                                                                                                            2. 2

                                                                                                                                              Am I the only one who routinely looks at every app I download to see what toolkit it’s using? Granted, I have an additional reason to care about that: accessibility.

                                                                                                                                              1. 1

                                                                                                                                No, I do this too. Always interesting to see how things are built.

                                                                                                                                                1. 1

                                                                                                                                                  You should write your findings up in a post and submit it! Might settle a lot of debates in the comments 😉

                                                                                                                                      2. 3

                                                                                                                                        Were you able to determine that they were terrible because they used Electron?

                                                                                                                                        1. 19

                                                                                                                          Who cares? “All Electron apps are terrible, most non-Electron apps are not” is enough information to try to avoid Electron, even if it just so happens to be true for other reasons (e.g. maybe only terrible development teams choose Electron, or maybe the only teams who choose Electron are those under a set of constraints from management which necessarily make the software terrible).

                                                                                                                                        2. 2

                                                                                                                                          I think one thing worth noting is this:

                                                                                                                                          In some cases there’s a native app

                                                                                                                                          Emphasis mine. A lot of users won’t care if an app sort of sucks, but at least it exists.

                                                                                                                                        3. 5

                                                                                                                          I think the issue is that the ABI, even though it logically feels like part of the backend, is actually part of the language implemented on top of LLVM, which may or may not want to use the system ABI (or may even handle multiple ABIs in one source file, like C and C++ with all the calling-convention-altering extensions – __fastcall, [[clang::trivial_abi]], etc.).
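
                                                                                                                          A hedged illustration of what “multiple ABIs in one source file” can look like in C++ (the declarations are made up for the example; __fastcall applies on 32-bit x86, and [[clang::trivial_abi]] is a Clang extension):

                                                                                                                          ```c++
                                                                                                                          // One translation unit, three lowering decisions that the C++ frontend
                                                                                                                          // (not the LLVM backend) is responsible for:

                                                                                                                          // Default platform calling convention.
                                                                                                                          int plain_add(int a, int b);

                                                                                                                          // 32-bit x86: first two integer-sized arguments passed in ECX/EDX.
                                                                                                                          int __fastcall fast_add(int a, int b);

                                                                                                                          // Clang extension: a type with a non-trivial destructor that may still
                                                                                                                          // be passed in registers, changing the ABI of every function that takes
                                                                                                                          // it by value.
                                                                                                                          struct [[clang::trivial_abi]] Handle {
                                                                                                                              int fd;
                                                                                                                              ~Handle();           // non-trivial, yet Handle stays register-passable
                                                                                                                          };

                                                                                                                          void consume(Handle h);  // 'h' can now travel in a register
                                                                                                                          ```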

                                                                                                                                          1. 3

                                                                                                                            Maybe, but I’m sure you could also say the same thing about other LLVM features – exactly like all the calling conventions it supports, which not every language uses or needs. If this were a feature commonly needed by languages, you’d think LLVM would be able to provide it.

                                                                                                                                            1. 1

                                                                                                                              I’ve heard about these alternate C/C++ calling conventions, but never actually encountered them. AFAICT they only exist on Windows or on non-mainstream CPUs?

                                                                                                                                              Of course there’s no hard requirement that you follow the system’s ABI, right? I mean, look at Go. Diverging just means FFI gets awkward and you don’t play well with debuggers and profiling tools…

                                                                                                                                              1. 3

                                                                                                                                Some were added for Objective-C, to provide APIs for calls to the runtime that were less disruptive to caller code (the caller can assume more registers are preserved, so it can keep more live values, and on the fast path the callee doesn’t spill them either; if it does, it spills them in a single place, reducing code size). Not sure if Apple actually used them in this way.
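
                                                                                                                                Clang exposes these as the preserve_most / preserve_all conventions; a small sketch of the intended shape (the function names are assumptions):

                                                                                                                                ```c++
                                                                                                                                // Cold runtime call: the callee preserves nearly all registers, so the
                                                                                                                                // caller can keep values live across the call without spilling, and
                                                                                                                                // any spilling happens once, inside the callee.
                                                                                                                                extern "C" __attribute__((preserve_most))
                                                                                                                                void runtime_slow_path(void *obj);

                                                                                                                                inline void fast_path(void *obj, bool rare_case) {
                                                                                                                                    if (rare_case)
                                                                                                                                        runtime_slow_path(obj);  // cheap for the caller's registers
                                                                                                                                }
                                                                                                                                ```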

                                                                                                                                                1. 2

                                                                                                                                                  The stdcall/fastcall/vectorcall ABIs only exist on Windows and 32-bit x86.

                                                                                                                                                  extern "x86-interrupt" is a relevant other calling convention on amd64, though.

                                                                                                                                              2. 3

                                                                                                                                                I got the Walmart one that Apple no longer sells: currently $650. It’s surprisingly capable.

                                                                                                                                                My only regret is that I then started getting back into VSTs (virtual MIDI instruments) and CAD (for our new A1 mini 3D printer), and I’m pretty sure I’m going to blow out the disk space soon 😞

                                                                                                                                                Aside from that, it’s handled everything I’ve thrown at it without any trouble at all, even with only 8GB RAM. And as much as huge VSTs have me wanting to upgrade, I’m not about to upgrade right now: I’m convinced the next generation or two of hardware + software advances are going to have LLMs running really well locally.

                                                                                                                                                1. 5

                                                                                                                                  Reading this is wild to me, aha. I bought a desktop computer with 16G of RAM in 2011 because back then 8G already didn’t cut it anymore for me with VSTs, with the songs we were doing with my band at the time. I upgraded to 64G in 2016. Right now I’m typing on an M2 Pro Mac mini with 32G of RAM, and not going for more is my biggest regret; with Asahi Linux I hit OOM at least thrice a day just with Firefox + an IDE.

                                                                                                                                                  1. 1

                                                                                                                                                    I’ve been astonished at how well the little thing keeps working. But yeah, next time I’ll go much bigger 🙂

                                                                                                                                                2. 6

                                                                                                                                  Hyphens were meant to save paper & space. In the digital world, to get more space, what you do is scroll. There are so many visual issues with justified text, and CSS doesn’t have proper page-layout tooling to do the variable tracking that makes justified text look natural; nor would we want to spend more CPU cycles calculating the optimal spacing to make fewer lakes & rivers, since viewport sizes are fluid, unlike the page of a magazine.

                                                                                                                                  The article’s conclusion is mostly correct: use start justification (since not all languages are left-aligned or even top-aligned), stick with the default justification unless you have a good reason not to, and never justify body copy in a blog post or similar. Where do justified text & hyphens make sense? There are some @media print styles where they do…

                                                                                                                                                  1. 8

                                                                                                                                                    There’s so many visual issues with justified text

                                                                                                                                    but non-justified text has (to me, visually) an issue on 99% of the lines. e.g. I’d much prefer reading these comments justified; as it stands, it just looks horrible, like most websites.

                                                                                                                                                    1. 5

                                                                                                                                                      I may just be old, but a ragged right margin doesn’t look published to me. It looks like a draft, or a piece of homework.

                                                                                                                                                      1. 1

                                                                                                                                                        I am grateful for userStyles so I can unjustify all of these sites that conversely look broken to me with the lakes / rivers. My typography education makes me shudder at the lack of readability on phones & long, non-breaking inline code blocks.

                                                                                                                                                    2. 5

                                                                                                                                      Indeed. Even print newspapers and the New Yorker suffer from rivers and/or awkward hyphenation, and they have the advantage of being able to make manual adjustments. Lest anyone misconstrue my posting on this topic as an endorsement of justified text, I would hope that people read the article carefully, because it does not endorse it either. My takeaway is: if you have to do it (and you probably don’t), at least do it right.

                                                                                                                                                      1. 4

                                                                                                                                                        Even print newspapers and the New Yorker suffer from rivers and/or awkward hyphenation

                                                                                                                                                        I think “even” is the wrong word. IME newspapers are horrible on this front, compared to what you’d get in a fully justified book or journal article.