1. 19

    Third public commit to doas.c: doas grows up. no insults.

    1.  

      I appreciate that they used strong randomness to pick the insults.

    1. 5

      doas, in its effort to be a sudo replacement without all the bloat, neglects to implement this.

      It wasn’t neglected at all. Insults are the definition of useless bloat.

      1. 11

        This is what is known as “dry humor”

        1.  

          Yeah, I thought that was obvious when I was writing it, but apparently not. Clarified my own opinion on it in another comment.

          1.  

            Touché

          2.  

            Author here. Agreed, it’s entirely useless. I threw this post together for fun one day a few years ago, and it’s meant entirely as tongue in cheek. I don’t use it myself because it’s pointless, and one extra thing I’d need to set up on a new system for no gain.

          1. 2

            Nice work, and good write-up about the process. Having recently used posix_spawn for the first time, it does give the impression that eventually almost every syscall should be available as a file_action or attr
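
            To illustrate, here is a minimal C sketch of the file_action pattern (my own illustration, not from the original post; the helper name spawn_with_stdout is made up). Redirecting the child’s stdout has to be declared up front as an action, rather than done between fork and exec:

            #include <spawn.h>
            #include <sys/types.h>
            #include <unistd.h>
            
            extern char **environ;
            
            /* Spawn `path` with the child's stdout redirected to `out_fd`.
             * Every post-fork setup step must be pre-declared as a file action. */
            int spawn_with_stdout(pid_t *pid, const char *path, char *const argv[], int out_fd)
            {
                posix_spawn_file_actions_t fa;
                posix_spawn_file_actions_init(&fa);
                posix_spawn_file_actions_adddup2(&fa, out_fd, STDOUT_FILENO);
                int err = posix_spawn(pid, path, &fa, NULL, argv, environ);
                posix_spawn_file_actions_destroy(&fa);
                return err; /* 0 on success, otherwise an errno value */
            }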

            It feels like it would be more composable (and maybe more unix-like?) to just spawn a little stub executable that contains all the post-fork initialization code before the desired executable is called with exec. Of course, that way, you miss out on posix_spawn’s ability to capture error codes and return them to the parent process.

            1. 2

              Nice work, and good write-up about the process. Having recently used posix_spawn for the first time, it does give the impression that eventually almost every syscall should be available as a file_action or attr

              This is precisely why:

              • posix_spawn is a terrible API and
              • There is no good reason to implement posix_spawn in the kernel.

              It is a horrible compromise design that was created with the express design constraint that it must be possible to implement entirely in userspace. This means that it can’t be any more expressive than vfork, just an API that is more restrictive and therefore harder to misuse.

              It feels like it would be more composable (and maybe more unix-like?) to just spawn a little stub executable that contains all the post-fork initialization code before the desired executable is called with exec. Of course, that way, you miss out on posix_spawn’s ability to capture error codes and return them to the parent process.

              This is almost what vfork does, except that it’s a function not a complete executable: it has the weird ‘returns-twice’ behaviour, so the typical way of using it is to call a setup function in the returns-as-child version. This can then do any system calls. It’s a bit clunky to use because you must not use malloc in that context[1], but if you do all of your allocation in the parent context, call vfork, do the system calls to set up child state, and then execve, when vfork returns the second time you can clean everything up and you get exactly the behaviour that you want.

              I use a wrapper around vfork that takes a lambda to run in the child context, which gives a much more UNIX-like behaviour than posix_spawn (a sketch of this pattern follows the footnote below). Adding posix_spawn in the kernel adds a load of extra code that runs in the kernel, for no real benefit (unless you’re doing a truly huge amount of work in the spawn call, such that the extra system calls in the vfork context would not be completely dwarfed by the process-creation overhead).

              [1] Well, you can if you’re careful. After execve, any allocated objects will remain allocated in the parent, so you must make sure that you capture them in the parent for cleanup. This is much easier in C++, where you can use RAII to capture all of the syscall arguments by having a std::vector or similar in the parent context that is passed by reference into the child vfork context. Or you can just use a block inside the child context that ends its scope immediately before the execve call, so that everything is deallocated in the child just before the new binary is loaded.
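
              A minimal C++ sketch of the lambda-wrapper pattern described above (my own illustration, assuming POSIX; note that POSIX formally permits very little between vfork and execve, so this leans on the de-facto behaviour of mainstream implementations):

              #include <unistd.h>
              
              extern char **environ;
              
              // Run `setup` in the vfork'd child (which borrows the parent's
              // address space), then exec. `setup` must not allocate: do all
              // allocation in the parent before calling this.
              template <typename Setup>
              pid_t spawn_with(Setup &&setup, const char *path, char *const argv[])
              {
                  pid_t pid = vfork();
                  if (pid == 0)
                  {
                      setup();                     // e.g. dup2, chdir, close fds
                      execve(path, argv, environ);
                      _exit(127);                  // only reached if execve fails
                  }
                  return pid;                      // parent resumes once the child execs
              }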

              1. 1

                That’s actually quite clever. The project I was working on is actually written in C++, but writing this kind of syscall-heavy code tends to put my brain in ‘C’-mode.

                1. 1

                  This is almost what vfork does, except that it’s a function not a complete executable: it has the weird ‘returns-twice’ behaviour, so the typical way of using it is to call a setup function in the returns-as-child version.

                  If you like vfork except for this, you might like my sfork variation, which is like vfork, except it doesn’t return twice: https://github.com/catern/sfork

                  Edit: for anyone coming across this later, I submitted this as a top-level submission and discussed it there: https://lobste.rs/s/vzavsz/sfork_synchronous_single_threaded

                2. 1

                  It feels like it would be more composable (and maybe more unix-like?) to just spawn a little stub executable that contains all the post-fork initialization code before the desired executable is called with exec. Of course, that way, you miss out on posix_spawn’s ability to capture error codes and return them to the parent process.

                  More linuxy than unixy, but how about a system call like ebpf_spawn(...) that would run a short bytecode program before exec?

                  1. 4

                    Aside from the fact that eBPF is a security disaster, why run a bytecode program in the kernel to do something that you can already do with a native program in userspace?

                1. 5

                  Looks cool, but please don’t import this… it’s a 3-line implementation:

                  func Of[T any](value T) *T {
                  	return &value
                  }
                  

                  Very leftpad-esque
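
                  For context, the helper exists because Go won’t let you take the address of a literal directly; a quick, hypothetical usage sketch:

                  package main
                  
                  import "fmt"
                  
                  // Of returns a pointer to an addressable copy of its argument.
                  func Of[T any](value T) *T {
                  	return &value
                  }
                  
                  func main() {
                  	// p := &42 // compile error: cannot take the address of 42
                  	p := Of(42) // fine: *int pointing at a copy of 42
                  	fmt.Println(*p)
                  }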

                  1. 4

                    It’d be nice if there was a consistent standard for vendoring a subset of packages. That way you could have the benefits of code re-use, while still maintaining a reference to where the code came from, and avoiding the dangers of that code being modified by a malicious actor.

                    I wouldn’t use it for a one-liner like this, but it’d be nice to have for one-file rarely-changing packages so you could avoid adding bloat to the dependency graph.

                  1. 3

                    There was one particular feature of the language that impressed me back then, and that was the day I acknowledged the flexibility of the PHP language: you can create an instance of a class and call a static method using an array!

                    <?php
                    
                    class Foo {
                        public static function run($message) {
                            print($message);
                        }
                    }
                    
                    ["Foo", "run"]("BAR");
                    
                    1. 5

                      For anybody as bewildered as I was (and still am), this is not some bizarre artifact of PHP implicit type conversions or weird object system; the “Callable array” syntax was explicitly introduced as a way of dynamically invoking static methods.

                      1. 2

                        In 8.1, releasing next week, there’s finally first-class callable support, so strings and arrays aren’t necessary. https://wiki.php.net/rfc/first_class_callable_syntax
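
                        Reusing the Foo class from above, a quick sketch of the syntax that RFC adds:

                        <?php
                        
                        class Foo {
                            public static function run($message) {
                                print($message);
                            }
                        }
                        
                        $callable = Foo::run(...); // first-class callable: no strings or arrays
                        $callable("BAR");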

                        1. 1

                          create an instance of a class

                          Static methods don’t require instantiation. Are you sure instantiation is happening here?

                          Also, wouldn’t it be better to be able to write something like Foo.run("BAR")?

                          1. 1

                            You’re correct, this is a static method invocation. But this convention does also work for instance methods if you supply an object in the first array item instead of the class name.

                        1. 5

                          Visiting fair Dublin City, looking forward to a proper Guinness and some live music.

                          Also want to get vpsadminos set up on a VM in the homelab so I can evaluate it. Seems to tick my boxes for a hypervisor as a SmartOS replacement: ZFS, NixOS, netboot.

                          1. 3

                            I always really liked the casual music in Irish pubs; just some people playing music sitting in a regular booth or whatnot. There’s a good vibe to it which beats a “real” concert, and certainly beats playing pop or EDM at a million-and-one dB.

                            1. 3

                              Hope you enjoy Dublin, we’ve some great pubs around; just be sure to have a COVID vaccine pass. If you haven’t been before, I’d recommend avoiding areas like Temple Bar, as the pubs there are overpriced and mostly targeted to tourists (there’s a few exceptions, but it’s a good rule of thumb).

                              Why’re you migrating from SmartOS? I was actually considering a move to it for a system in my homelab. Right now it’s running FreeBSD with manually managed jails / bhyve VMs, which is fine, but I’d like to move to an OS that actually aims to be used primarily as a virtualization host.

                              1. 2

                                Thanks, yeah you do. This is my third time visiting; the first time we didn’t leave Temple Bar, the second time I explored a bit more and discovered more of the city. We’ve done a mixture this time, ended up sampling Irish whiskeys in Palace Bar last night, which was most enjoyable. Peaty Irish whiskey isn’t anywhere near as explosive on the palate as peaty scotch, but it’s still an enjoyable drink.

                                I’m mostly looking to switch from SmartOS because I don’t work with it day to day anymore, rather than because it’s deficient somehow. It’s definitely easy to administer and does the job; I love the fact that it’s simple to recover from boot media failure (flash a new stick) and upgrade the base OS (update the stick), and that it’s based on ZFS, of course. I’m a little less enamoured of managing the VMs: my current method is a Ruby library I borrowed from a friend to generate the XML to create VMs. I’ve never managed to find anything better, which, given that things like Terraform exist, makes me sad. Not upset enough to invest time in fixing the problem space though, which then makes me sadder.

                                Nixpkgs/NixOS I like from playing with it and would like more exposure to, which basing the homelab on it will certainly give me. The homelab is a bit unloved as well; things have been fairly static with it for 3-4 years now, and heading into the winter I’m itching to spend some time working out kinks with it, making sure it’s available on Tailscale properly, and sorting out monitoring/service management. VPSAdminOS appears to tick the boxes of being like SmartOS, but based on Nix/Linux, so my now-usual day-to-day tooling works there easily.

                                1. 2

                                  We’ve done a mixture this time, ended up sampling Irish whiskeys in Palace Bar last night, which was most enjoyable. Peaty Irish whiskey isn’t anywhere near as explosive on the palate as peaty scotch, but it’s still an enjoyable drink.

                                  If you’re into whiskeys, then the Dingle whiskey bar is worth a visit if you have the time. They have an extensive selection of world whiskeys there, and the staff are usually happy to recommend. Kennedy’s is my go-to for lunch and a pint on the rare occasion I’m in town these days: a reasonable selection of pub food (their lamb burger is my recommendation; I haven’t had any of their vegetarian options since pre-COVID, so I can’t offer any guidance there) and a well-rounded choice of beers.

                                  I’m mostly looking to switch from SmartOS because I don’t work with it day to day anymore, rather than because it’s deficient somehow. It’s definitely easy to administer and does the job; I love the fact that it’s simple to recover from boot media failure (flash a new stick) and upgrade the base OS (update the stick), and that it’s based on ZFS, of course.

                                  The simplicity of upgrades and the decoupling of the base OS from the VM storage are what attracted me to it. I was also considering Alpine, which can function similarly (although with a bit more manual work), but I’ve never been a huge fan of the usual tools for working with KVM (virt-manager / virsh). If the tooling is equally awkward on the SmartOS side, then that levels the playing field a little bit. Maybe I’ll just put a weekend or four into writing my own tooling for KVM and pad out the CV, haha.

                                  I’ll be sure to check out VPSAdminOS; I haven’t really used Nix much but it’s been on my radar for a while.

                              2. 3

                                I learned an important lesson from one of the organisers at the conference I attended: Guinness does not travel well. It needs to be moved carefully and then left to settle, and after that the pumps and the pouring make a noticeable difference. There’s a huge difference in the quality of the Guinness from one pub to the next. I was told to go to a place by the river (which doesn’t seem to exist anymore) for the best Guinness in the city, and so I decided to try it, not really believing that there was much of a difference, and it really was true. The most surprising thing to me was that the bar on the top of the Guinness museum (which, by the way, is fantastic) served mediocre Guinness. I then made the mistake of drinking Guinness again when I got back home. It really doesn’t survive crossing the Irish Sea and being poured like ale.

                                Where is the best place for Guinness in Dublin now?

                                1. 1

                                  To someone visiting from the UK, anywhere in Dublin is the best place for a Guinness. The Auld Dubliner and Palace Bar both served an excellent Guinness to us last night; there are probably cheaper places out there if you wander a bit further out of the tourist area.

                                  Guinness definitely doesn’t make it to the UK in anywhere near as enjoyable a state. Whilst I do enjoy a Guinness from time to time at home, I basically visit Dublin (and Belfast to be fair) for a proper Guinness.

                                  1. 1

                                    Guinness definitely doesn’t make it to the UK in anywhere near as enjoyable a state

                                    Getting technical for a second: it makes it there fine. The trouble is that it’s not nearly as popular there as it is here. That means that the kegs aren’t as fresh and the lines aren’t cleaned nearly as often, and that, combined with the fact that it oxidizes quickly, is what leads to the poorer quality.

                              1. 7

                                Hey, that’s actually not a bad tip (I’m not 100% sure it’s worthy of its own post, but it’s definitely not worth flagging). My main concern is:

                                None of the viruses in npm are able to run on my host when I do things this way.

                                This is assuming a lot of the security of docker. Obviously, it’s better than running npm as the same user that has access to your SSH/Azure/AWS/GCP/PGP keys / X11 socket, but docker security isn’t 100%, and I wouldn’t rely on it fully. At the end of the day, you’re still running untrusted code; containers aren’t a panacea, and the simplest misconfiguration can render privilege escalation trivial.

                                1. 3

                                  the simplest misconfiguration can render privilege escalation trivial.

                                  I’m a bit curious which configuration that’d be?

                                  1. 2

                                    not OP, but “--privileged” would do it. Or many of the “--cap-add” options.

                                    1. 1

                                      Not 100% sure here, but lots of containers are configured to run as root, and file permissions are just on your disk, right? So a container environment lets you basically take control of all mounted volumes and do whatever you want (see the sketch below).

                                      This is of course only relevant to the mounted volume in that case, though.

                                      I think there’s also a lot of advice in dockerland that sits at the unfortunate intersection of being easier than all the alternatives yet very insecure (like how most ways to connect to a GitHub private repo from within a container involve some form of exposing your private keys).
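
                                      As a hypothetical illustration of the mounted-volume point (the paths here are made up):

                                      # The container's root user can rewrite anything on a bind-mounted volume:
                                      docker run --rm -v /home/me/project:/work alpine \
                                          sh -c 'echo "attacker controlled" > /work/Makefile'
                                      
                                      # Partial mitigations: drop root and mount read-only.
                                      docker run --rm --user 1000:1000 -v /home/me/project:/work:ro alpine ls /work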

                                    2. 1

                                      This is assuming a lot of the security of docker

                                      Which has IMO a good track record. Are there actually documented large scale exploits of privilege escalation from a container in this context? Or at all?

                                      Unless you’re doing stupid stuff I don’t think there’s a serious threat with using Docker for this use case.

                                    1. 4

                                      Does this mean type-checking in Rust is about to become Turing-complete?

                                      1. 7

                                        It already was, but there’s a stack limit

                                        1. 5

                                          Practically, procedural macros are in every way worse for compilers.

                                        1. 21

                                          We can see that very practically using standard linux / programming tools. While you can use cat to show the data of a file, it won’t work for directories

                                          That’s a relatively recent addition. Early versions of UNIX exposed directories just like any other file. Before the APIs were added for traversing directories in a filesystem-independent way, userspace would just open directories and parse them directly. Even on modern systems, the ‘data can be stored only in leaf nodes’ part is very dependent on the filesystem and the definition of data. In NTFS and HFS+, I think you can store alternate data streams on directories and on most *NIX systems you can set extended attributes on directories.

                                          If you’re thinking of files and folders using their actual analog equivalents

                                          This is why I like to differentiate between a directory and a folder. A directory is (like a telephone directory) a key-value store that indexes from some human-friendly names to something a computer uses. On the original UNIX filesystems, it was exactly that: a file that used a fixed-sized record structure to map from names to inode numbers.

                                          In contrast, a folder is a UI abstraction that represents a container of documents.

                                          The fact that folders are typically implemented with a 1:1 mapping to directories is an implementation detail. In BFS, for example, both folders and saved searches were implemented as directories.

                                          The fact that inner nodes in a file system hierarchy cannot hold data is limiting. In many use cases, it’d be natural to have nodes that can hold data AND have children. Because file systems don’t allow this

                                          The problem is not that filesystems don’t allow this, it’s that there are no portable interfaces to filesystems that allow it. People want to be able to deploy their persistent data structures onto ext4, ZFS, UFS2, APFS, HFS+, NTFS, and sometimes even FAT32 filesystems, and access them with SMB, WebDAV, or NFS (or AFS or AFP) network shares. This means that the set of things that you can use from any given filesystem is the intersection of the features provided by all of these things.

                                          I was expecting from the title that the author would realise that filesystems are not trees, but apparently this didn’t happen. Filesystems are DAGs if you’re lucky, graphs if you’re not. Hard links make a filesystem a DAG because the same file can appear under multiple leaf nodes. Hard links on HFS+ are allowed (if you’re a sufficiently privileged user) to point to directories, which allows them to form an arbitrary graph (the ‘sufficiently privileged’ part is there because a lot of things break if cycles appear, and so this privilege is granted only to things like the Time Machine daemon that promise not to create them). Junctions / reparse points on NTFS can also create cycles, as can symlinks (though in both cases, at least you know that you’re potentially entering a cycle).

                                          1. 3

                                            We can see that very practically using standard linux / programming tools. While you can use cat to show the data of a file, it won’t work for directories

                                            That’s a relatively recent addition. Early versions of UNIX exposed directories just like any other file. Before the APIs were added for traversing directories in a filesystem-independent way, userspace would just open directories and parse them directly. Even on modern systems, the ‘data can be stored only in leaf nodes’ part is very dependent on the filesystem and the definition of data. In NTFS and HFS+, I think you can store alternate data streams on directories and on most *NIX systems you can set extended attributes on directories.

                                            IIRC it still worked on FreeBSD versions as recently as 11. In most traditional Unix filesystems (at least I believe it’s the case for [UF]FS and ext{,2,3,4}) a directory is stored much the same way as a file, the only difference is a flag that indicates the blocks pointed to by the inode are used to store directory entries rather than file content. The Rust equivalent is probably closer to:

                                            use std::rc::Rc;
                                            
                                            // A directory entry just maps a name to an inode.
                                            struct DirEnt {
                                                name: String,
                                                node: Rc<Inode>,
                                            }
                                            
                                            // An inode either holds file content or directory entries.
                                            enum Inode {
                                                File(Vec<u8>),
                                                Dir(Vec<DirEnt>),
                                            }
                                            

                                            Note the Rc as well, because an inode might be pointed to by more than one directory entry.

                                            1. 1

                                              Can confirm this worked in recentish FreeBSDs. Made some nice file read vulnerabilities funnier to exploit :)

                                              1. 1

                                                Modern Unix filesystems use more complicated on-disk structures for at least large directories, because they want to be able to get a specific directory entry (or determine that it doesn’t exist) without potentially having to scan the entire directory. The increased complication of the actual structure of directories and directory entries is one reason why Linux moved to forbidding reading directories as regular files.

                                                (Some directories haven’t been readable as regular files for a long time. One example is directories on NFS mounts; from the beginning of NFS in the 80s, you could only look at them with NFS directory operations.)

                                                1. 1

                                                  The increased complication of the actual structure of directories and directory entries is one reason why Linux moved to forbidding reading directories as regular files.

                                                  Did Linux ever allow it? I was under the impression that it had been a distinguishing feature between it and the BSDs quite early on.

                                                  1. 1

                                                    Based on looking at the source code of 0.96c on www.tuhs.org, it appears that early Linux allowed reading directories with read(). ext_file_read() in fs/ext/file.c appears to specifically allow reads of S_ISDIR() inodes, although 0.96c also has readdir() and friends.

                                              1. 2

                                                Outside of work I’m trying my hand at building a simple 2D platformer in Zig. It started off as an attempt to translate a short Raylib demo into Zig, but I quickly switched to SDL due to (known) C ABI issues, and it’s grown since then. Long-term it’d be fun to build a metroidvania-style game, but right now I’m just focused on getting the basics working.

                                                1. 2

                                                  What ABI issues does Raylib have that Zig has trouble binding to it?

                                                  1. 2

                                                    I’m building on ARM, where there are still issues passing structs by value to C functions. SDL for the most part passes pointers, so there’s no issue there. As far as I know, this doesn’t affect x86.

                                                1. 19

                                                  They do mention it in passing, but I really can’t help but feel that the approach outlined here is probably not the best option in most cases. If you are measuring your memory budget in megabytes, you should probably just not use a garbage collected language.

                                                  1. 20

                                                    All of the memory saved with this linker work had nothing to do with garbage collection.

                                                    1. 7

                                                        Sure, but that’s tangential to my point. In a GCed language, doing almost anything will generate garbage. Calling standard library functions will generate garbage. This makes it difficult to have really tight control of your memory usage. If you were to use, for example, C++ (or Rust, if you want to be trendy), you could carefully preallocate pretty much everything and have no dynamic allocation at runtime (or very little, carefully bounded, depending on your problem and constraints). This would be (for my skillset, at least) a much easier way to keep memory usage down. They do mention they have a lot of Go internals expertise, so maybe the tradeoff is different for them, but that seems like an uncommon scenario.

                                                      1. 1

                                                        I wouldn’t say that, because it’s likely that they wouldn’t have been short on memory to begin with if they hadn’t used a GC language. (And yes, I’m familiar with the pros and cons of GC; I’m writing a concurrent compacting GC right now for work.)

                                                      2. 2

                                                          Only maybe. Without a GC, long-running processes can end up with really fragmented memory. With a GC you can compact, and not waste address space on dead objects.

                                                        1. 18

                                                          If you’re really counting megs, perhaps the better option is to forgo dynamic heap allocations entirely, like an embedded system does.

                                                          1. 4

                                                            Technically yes. But they probably used this to deploy one code base for everything, instead of rewriting this only for the iOS part.

                                                            1. 2

                                                                Exactly this. You can try to do this in a GCed language, and even make some progress, but you will be fighting the language.

                                                              1. -2

                                                                You should probably write it all in assembly language too.

                                                                1. 7

                                                                    I feel like you’re being sarcastic, but making most of the app avoid dynamic allocation is not a crazy or extreme idea. It’s not super common in phone apps, and the system API itself may force some allocations, but doing 90+% of the work in statically allocated memory and indexed arenas is a valid path here (see the sketch below).

                                                                    Of course, that would require a different language than Go, and they have good reasons not to make that switch.
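
                                                                    A minimal C sketch of the indexed-arena idea (my own illustration, assuming a fixed compile-time budget):

                                                                    #include <stdint.h>
                                                                    
                                                                    #define MAX_NODES 1024
                                                                    #define NIL UINT32_MAX
                                                                    
                                                                    struct node {
                                                                        uint32_t next; /* an index into the pool, not a pointer */
                                                                        int value;
                                                                    };
                                                                    
                                                                    /* Statically allocated: no heap, and nothing for a GC to track. */
                                                                    static struct node pool[MAX_NODES];
                                                                    static uint32_t next_free = 0;
                                                                    
                                                                    /* Hands out pool indices; returns NIL when the budget is exhausted,
                                                                     * which is the point: the memory ceiling is explicit and fixed. */
                                                                    static uint32_t node_alloc(int value)
                                                                    {
                                                                        if (next_free == MAX_NODES)
                                                                            return NIL;
                                                                        pool[next_free] = (struct node){ .next = NIL, .value = value };
                                                                        return next_free++;
                                                                    }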

                                                                  1. 1

                                                                      I’m being sarcastic. But one of the issues identified in the article is that different tailnets have different sizes and topologies; they rejected the idea of limiting the size of networks that would work with iOS, which is what they’d need to do if they wanted to do everything statically allocated.

                                                                    1. 3

                                                                      they rejected the idea of limiting the size of networks

                                                                        They’re already limited. They can’t use more than the allowed memory, so the difference is: does the app tell you that you’ve reached the limit, or does it get silently killed?

                                                                        I believe that fragment was related to “how another team would solve it, keeping other things the same” (i.e. keeping Go). Preallocation/arenas requires going away from Go, so it would give them more possible connections, not fewer.

                                                              2. 10

                                                                That is absolutely not my experience with garbage collectors.

                                                                  Few are compacting/moving, and even fewer are designed to operate well in low-memory environments[1]. golang’s collector is neither of those things.

                                                                  On the other hand, it is usually trivial to avoid wasting address space in languages without garbage collectors, and an application-specific memory management scheme typically gives a 2-20x performance boost in a busy application. I would think this absolutely worth the limitations in an application like this.

                                                                  [1]: not that I think 15MB is terribly low-memory. If you can syscall 500 times a second, that equates to about 2.5GB/sec of transfer filling the whole thing, a speed which far exceeds the current (and likely next two) generations of iOS devices.

                                                                1. 4

                                                                  To back up what you’re saying, this presentation on the future direction that the Golang team are aiming to take is worth reading. https://go.dev/blog/ismmkeynote

                                                                    At the end of that presentation there’s some tea-leaf reading about the direction that hardware development is likely to take. Golang’s designers are betting on DRAM capacity improving in future faster than bandwidth improvements, and MUCH faster than latency improvements.

                                                                  Based on their predictions about what hardware will look like in future, they’re deliberately trading off higher total RAM usage in order to get good throughput and very low pause times (and they expect to move further in that direction in future).

                                                                  One nitpick:

                                                                  Few are compacting/moving,

                                                                  Unless my memory is wildly wrong, Haskell’s generation 1 collector is copying, and I’m led to understand it’s pretty common for the youngest generation in a generational GC to be copying (which implies compaction) even if the later ones aren’t.

                                                                  I believe historically a lot of functional programming languages have tended to have copying GCs.

                                                                  1. 2

                                                                    At the end of that presentation there’s some tea-leaf reading about the likely direction that hardware development is likely to go in. Golang’s designers are betting on DRAM capacity improving in future faster than bandwidth improvements and MUCH faster than latency improvements.

                                                                    Given the unprecedented semiconductor shortages, as well as crypto’s market influence slowly spreading out of the GPU space, that seems a risky bet to me.

                                                                    1. 1

                                                                      That’s the short term, but it’s not super relevant either way. They’re betting on the ratios between these quantities changing, not on the exact rate at which they change. If overall price goes down slower than desired, that doesn’t really have any bearing.

                                                                  2. 1

                                                                    Aren’t most GCs compacting and moving?

                                                                      The first multi-user system I used heavily was a SunOS 4.1.3 system with 16MB of RAM. It was responsive with a dozen users so long as they weren’t all running Emacs. Emacs, written in a garbage-collected, interpreted language, would have run well on a much smaller system if there was only one user.

                                                                    The first OS I worked on ran in 16MB of RAM and ran a Java VM and that worked well.

                                                                  3. 1

                                                                    Any non-moving allocator is vulnerable to fragmentation from adversarial workloads (see Robson bounds), but modern size-class slab allocators (“segregated storage” in the classical allocation literature) typically keep fragmentation quite minimal on real-world workloads. (But see a fascinating alternative to compaction for libc-compatible allocators: https://github.com/plasma-umass/Mesh.)

                                                                  4. 1

                                                                    This does strike me as a place where refcounting might be a better approach, if you’re going to have any dynamic memory at all.

                                                                    1. 1

                                                                        With ref-counting you have problems with cycles and memory fragmentation. The short-term memory consumption is typically lower with ref-counting than with a compacting GC, but there are many more opportunities to have leaks and grow over time. For a long-running process I’m skeptical that ref-counting is a sound choice.

                                                                      1. 1

                                                                        Right. I was thinking that for this kind of problem with sharply limited space available you’d avoid the cycles problem by defining your structs so there’s no void* and the types form a DAG.

                                                                    2. 1

                                                                      Edit: reverting unfriendly comment of dubious value.

                                                                    1. 1

                                                                            Isn’t this Bubble Sort? As in, the O(n^2) sorting algorithm that everyone invents, then discovers why it’s so terrible when they learn a little bit of complexity theory? I remember implementing this exact algorithm in OPL for the Psion Series 3 when I was 13. It was the first time I discovered that it’s important to think about algorithmic complexity (on a 3.84MHz 8086 clone, you don’t need a very large n for n^2 to be painfully slow!), though I didn’t learn how to formally reason about these problems until some years later.

                                                                      1. 10

                                                                              It’s not, and that’s fairly clear from the pseudocode in the paper and the Wikipedia article you linked. The biggest difference is that bubble sort iterates over the list repeatedly, comparing adjacent pairs of elements and swapping them if they’re out of order, until it’s able to iterate over the list without a single swap, i.e.:

                                                                        loop:
                                                                            swapped = false
                                                                            for i from 1 to length(xs):
                                                                                if xs[i-1] > xs[i]:
                                                                                    swap(xs[i-1], xs[i])
                                                                                    swapped = true
                                                                            if not swapped:
                                                                                break
                                                                        

                                                                        Whereas the version in the paper compares more than just adjacent elements, and holds no state outside of the array:

                                                                              for i from 0 to length(xs):
                                                                            for j from 0 to length(xs):
                                                                                if xs[i] < xs[j]:
                                                                                    swap(xs[i], xs[j])
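
                                                                              A quick runnable translation of the paper’s version (my own sketch, not from the thread), showing the surprising part: even though it swaps whenever xs[i] < xs[j], it sorts ascending:

                                                                              package main
                                                                              
                                                                              import "fmt"
                                                                              
                                                                              // icbSort is the paper's algorithm: swap whenever xs[i] < xs[j].
                                                                              func icbSort(xs []int) {
                                                                              	for i := range xs {
                                                                              		for j := range xs {
                                                                              			if xs[i] < xs[j] {
                                                                              				xs[i], xs[j] = xs[j], xs[i]
                                                                              			}
                                                                              		}
                                                                              	}
                                                                              }
                                                                              
                                                                              func main() {
                                                                              	xs := []int{5, 2, 4, 6, 1, 3}
                                                                              	icbSort(xs)
                                                                              	fmt.Println(xs) // [1 2 3 4 5 6]
                                                                              }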
                                                                        
                                                                        1. 2

                                                                          Huh, the Wikipedia article is indeed more complex. This article describes the algorithm that I’d implemented, that I was later told was a reinvention of the bubblesort. The animation on the wikipedia page looks exactly like the behaviour I’d expect from this algorithm.

                                                                          1. 5

                                                                            Note that this algorithm appears to swap when the elements are in order, and has a non-trivial proof of correctness.

                                                                        2. 2

                                                                          The beginning of the paper talks about bubble sort, and why it’s different. And the end of the paper says it’s close but not identical to insertion sort. I recommend watching the animations mentioned elsewhere in the thread – it clears it up instantly.

                                                                          https://lobste.rs/s/gh1ngc/is_this_simplest_most_surprising_sorting#c_aadf4b

                                                                        1. 2

                                                                                  I think it’s a mistake to group object orientation completely under Self (although it discusses Smalltalk a lot in the commentary for this section). Those two are message-passing / evented object-oriented systems, polymorphic through shared method interfaces, and, as noted, built on a persistent system state; they clearly represent a distinct family.

                                                                                  The bulk of what people consider to be ‘object oriented’ programming after that inflection point, though, is the C++ / Java style, where objects are composite static types with associated methods, polymorphic through inheritance hierarchies. I think this comes from Simula, and I think this approach to types and subtypes could be important enough to add to the list as an 8th base case.

                                                                          1. 4

                                                                            I wouldn’t group C++ and Java like that. Java is a Smalltalk-family language, C++’s OO subset is a Simula-family language (though modern C++ is far more a generic programming language than an object-oriented programming language).

                                                                                    You can implement Smalltalk on the original JVM by treating every selector as a separate interface (you can use invokedynamic on newer ones), and Redline Smalltalk does exactly this. You can’t do the same on the C++ object model without implementing an entirely new dispatch mechanism.

                                                                            Some newer OO languages that use strong structural and algebraic typing blur the traditional lines between the static and dynamic a lot. There are really two axes that often get conflated:

                                                                            • Static versus dynamic dispatch.
                                                                            • Structural versus nominal typing.

                                                                                    Smalltalk / Self / JavaScript have purely dynamic dispatch and structural typing. C++ has nominal typing and both static and dynamic dispatch; it also (via templates) has structural typing, but with only static dispatch, though you can just about fudge it with wrapper templates to almost do dynamic dispatch over structural types. Java has only dynamic dispatch and nominal typing.

                                                                                    Newer languages, such as Go / Pony / Verona, have static and dynamic dispatch and structural typing. This category captures, to me, the best set of tradeoffs: you can do inlining and efficient dispatch when you know the concrete type, but you can also write completely generic code, and the decision whether to do static or dynamic dispatch depends on the type information available at the call site. Your code feels more like Smalltalk to write, but can perform more like C++ (assuming your compiler does a moderately good job of reification and inlining, which Go doesn’t but Pony does).

                                                                            1. 4

                                                                              From the implementation side yes, the JVM definitely feels more like Smalltalk. But is Java really used in the same dynamic fashion to such an extent that you could say it too is Smalltalk? Just because it’s possible, doesn’t mean it’s idiomatic. I’d argue that most code in Java, including the standard library/classpath, is written in a more Simula-like fashion, the same as C++, and would place it in that same category.

                                                                              1. 6

                                                                                        Interfaces, which permit dynamic dispatch orthogonal to the implementation hierarchy, are a first-class part of Java and the core libraries. Idiomatic Java makes extensive use of them. The equivalent in C++ would be abstract classes with pure virtual methods, and these are very rarely part of an idiomatic C++ codebase.

                                                                                Java was created as a version of Smalltalk for the average programmer, dropping just enough of the dynamic bits of Smalltalk to allow efficient implementation in both an interpreter and a compiler. C++ was designed to bring concepts from Simula to C.

                                                                                1. 3

                                                                                          Interesting replies, thanks. The point about Java dispatch is interesting and suggests it is not as good an example as I thought it was (I’ve not really used it extensively for a very long time). The point I was trying to make was for the inclusion of Simula, based on its introduction of classes and inheritance, itself an influence on Smalltalk. I accept that Simula is built on Algol, and maybe that means it’s not distinct enough for a branch within this taxonomy. I would note that both Stroustrup and Gosling nominate Simula as a direct influence (example citation).

                                                                                          (NB: I always thought of Java as an attempt to write an Objective-C with a more C++ syntax myself, but that’s just based on what seemed to be influential at the time. Sun were quite invested in OpenStep shortly before they pivoted everything into Java.)

                                                                                  1. 4

                                                                                            (NB: I always thought of Java as an attempt to write an Objective-C with a more C++ syntax myself, but that’s just based on what seemed to be influential at the time. Sun were quite invested in OpenStep shortly before they pivoted everything into Java.)

                                                                                    And Objective-C was an attempt to embed Smalltalk in C. A lot of the folks that worked on OpenStep went on to work on Java and you can see OpenStep footprints in a lot of the Java standard library. As I understand it, explicit interfaces were added to Java largely based on experience with performance difficulties implementing Objective-C with efficient duck typing. In Smalltalk and Objective-C, every object logically implements every method (though it may implement it by calling #doesNotUnderstand: or -forwardInvocation:), so you need an NxM matrix to implement (class, selector) -> method lookups. GNU family runtimes implement this as a tree for each object that contains every method, with copy-on-write to reduce memory overhead for inheritance and with a leaf not-implemented node that’s referenced for large runs of missing selectors. The NeXT family runtimes implement it with a per-object hash table that grows as methods are referenced. Neither is great for performance.

                                                                                    The problem is worse in Objective-C than in some other languages for two reasons:

                                                                                            • Categories and reflection APIs mean that methods can be added to a class after it’s created. Replacing a method is easy (you already have a key->value pair for it in whatever your lookup structure is), but adding a new valid selector means that you can’t optimise the layout easily.
                                                                                    • The fallback dispatch mechanisms (-forwardInvocation: and friends) mean that you really do have the complete matrix, though you can optimise for long runs of not-currently-implemented selectors.

                                                                                    Requiring nominal interfaces rather than simple structural equality for dynamic dispatch meant that Java could use vtables for dispatch (like C++). Each class just has an array of methods it implements, indexed by a stable ordering of the method names. Each interface has a similar vtable and nominal interfaces mean that you can generate the interfaces up-front. It’s more expensive to do an interface-to-interface cast, but that’s possible to optimise quite a lot.

                                                                                            Languages that do dynamic dispatch and structural typing, but don’t allow the reflection or fallback dispatch mechanisms, can use selector colouring. This lets you have a vtable-like dispatch table, where every selector is a fixed index into an array, but where many selectors will share the same vtable index because you know that no two classes implement both selectors. The key change that makes this possible is that the class-to-interface cast will fail at compile time if the class doesn’t implement the interface, and an interface-to-interface cast will fail at run time. This means that once you have an interface, you never need any kind of fallback dispatch mechanism: it is guaranteed to implement the methods it claims.

                                                                                            Interfaces in such a language can be completely erased during the compilation process: the class has a dispatch table that lays out selectors in such a way that selector foo in any class that is ever converted to interface X is at index N, so given an object x of interface type X you can dispatch foo by just doing x.dtable[N](args...). If foo appears in multiple interfaces that are all implemented by an overlapping set of classes, then foo will map to the same N. If one class implements bar and another implements baz, but these two methods don’t ever show up in the same interfaces, then they can be mapped to the same index.
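
                                                                                            A tiny C sketch of what erased, selector-coloured dispatch could look like (purely illustrative; the names and index assignments are made up):

                                                                                            typedef void (*method_t)(void *self);
                                                                                            
                                                                                            // Every class's dispatch table lays selectors out at globally agreed
                                                                                            // indices, so no interface metadata survives to run time.
                                                                                            struct object {
                                                                                                const method_t *dtable;
                                                                                            };
                                                                                            
                                                                                            // Suppose colouring assigned `foo` index 0, and `bar` and `baz` both
                                                                                            // index 1 (they never appear in the same interface).
                                                                                            enum { SEL_foo = 0, SEL_bar = 1, SEL_baz = 1 };
                                                                                            
                                                                                            static inline void send_foo(struct object *x) {
                                                                                                x->dtable[SEL_foo](x); // the x.dtable[N](args...) from above
                                                                                            }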

                                                                                    Smalltalk has been one of the big influences on Verona too. I would say that we’re trying to do for the 21st century what Objective-C tried to do for the ‘80s: provide a language that captures the programmer flexibility of Smalltalk but is amenable to efficient implementation on modern hardware and modern programming problems. Doing it today means that we care as much about scalability to manycore heterogeneous systems as Objective-C cared about linear execution speed (which we also care about). We want the same level of fine-grained interoperability with C[++] that Objective-C[++] has but with the extra constraint that we don’t trust C anymore and so we want to be able to sandbox all of our C libraries. We also care about things like live updates more than we care about things like shared libraries because we’re targeting systems that typically do static linking (or fake static linking with containers) but have 5+ 9s of uptime requirements.

                                                                                    1. 1

                                                                                      Fascinating reading again, thanks. I had not previously heard of Verona, it sounds very interesting. Objective-C was always one of my favourite developer experiences, the balance of C interoperability with such a dynamic runtime was a sweet spot, but the early systems were noticeably slow, as you say.

                                                                            2. 1

                                                                                              It’s because Java and C++ are both ALGOL-family languages with something called “objects” in them. Neither has enough unique features to warrant a family, or being part of anything but the ALGOL group.

                                                                            1. 4

                                                                                                Does it require MS-DOS, or will it run on any sufficiently compatible DOS system, such as FreeDOS?

                                                                              1. 10

                                                                                I haven’t looked at the code, but if it’s doing what I think it’s doing, there shouldn’t be any reason it would depend on MS-DOS specifically.

                                                                                                  I suspect it’s launching a Linux kernel while preserving the first 1MB of memory, then launching a process in Virtual 8086 mode that maps that first 1MB into its address space, and finally returning to DOS, this time running in Virtual 8086 mode.

                                                                                In other words, preserving the state of the system as DOS sees it, and launching a virtual machine with that same state once the Linux kernel starts, and returning that VM to the foreground.

                                                                                EDIT: checked the code, looks like this is the case. Testing it with FreeDOS now.

                                                                                1. 3

                                                                                  That’s a beautiful hack. Thank you for looking into it.

                                                                                  1. 3

                                                                                    If it’s that hands-off, could it run CP/M-86? Or Xenix? Or other OSes for the 16-bit x86?

                                                                                    1. 2

                                                                                      If it’s that hands-off, could it run CP/M-86? Or Xenix? Or other OSes for the 16-bit x86?

                                                                                                        CP/M-86 maybe; I believe its syscall interface is similar to DOS’s, but I can’t remember off-hand. I also don’t know if it supported subdirectories. Xenix, no, not without a rewrite.

                                                                                                        That being said, the only thing it really needs DOS for is to load the Linux system off disk, so it would be relatively easy to reimplement that part of the code for other 16-bit OSes.

                                                                                1. 3

                                                                                  This link 404s for me.

                                                                                  1. 1

                                                                                                            I’m not seeing any /wiki links from the home page of the site. I wouldn’t be surprised if paths under /wiki were not intended to be publicly shared. Edit: it’s working again.

                                                                                  1. 1

                                                                                    Does this get you anything over extracting a rootfs tarball to the directory of your choice or, in the case of Debian, running debootstrap for the target arch and directory?

                                                                                    1. 2

                                                                                      It’s a squashfs you mount directly (in this case), so it’s smaller than those would be.

                                                                                      1. 1

                                                                                                                  Ah. Nice. I’d forgotten about squashfs compression. Probably only a minor advantage on ZFS or Btrfs volumes with compression enabled, but useful elsewhere. I suppose having read-only mounting enforced is another plus for reliability.

                                                                                    1. 2

                                                                                      Isn’t this a straight-up ad? It’s just self-appraisal of the product by the company that made it. However, I don’t have a Mac, so I’m not the target audience and won’t flag. @iddrougge, are you a user of the software, and would you like to recommend it?

                                                                                      1. 1

                                                                                        Looks like an ad for sure, but one of the rare few I’m glad to see. Not sure it belongs on lobste.rs though, so I’ve flagged it.

                                                                                        It looks like an interesting product that I’ll probably trial. I probably wouldn’t use it for code, but I have a few long-form text projects managed in git that I might use this for out of laziness. The price tag is a bit steep IMO; I’d prefer something in the 10-20€ range.

                                                                                      1. 36

                                                                                        Congrats to waddlesplash, the Haiku project, and all the users!

                                                                                        1. 23

                                                                                          Hey, thanks very much!

                                                                                          1. 3

                                                                                            Best of luck, man! This is amazing! I can’t wait to see all the cool updates!

                                                                                          2. 1

                                                                                            Do you know if Zig has been ported to Haiku yet? I’ve used Haiku on and off for years, and I’ve been getting into Zig recently, so porting it might be fun. Given Zig’s portability, the fact that (IIRC) Haiku uses ELF, and the BYO²S functionality in the stdlib, an initial port would probably be a fun weekend project.

                                                                                            1. 1

                                                                                              Yes

                                                                                              Although I think the work should be audited; I noticed some suspect values the other day. Why would some error codes be negative and others not? I think the contributor who did this did a lot of copy-pasting and guesswork.

                                                                                              Anyway not to complain, but I do think there could be a lot of improvements made to Zig’s Haiku support. Contributors welcome :)

                                                                                              1. 2

                                                                                                Why would some error codes be negative and others not?

                                                                                                The short answer that you are looking for is: indeed, those are incorrect, and all the error codes in Haiku are negative. Actually they are all defined in the same file (/system/develop/headers/os/support/Errors.h).

                                                                                                The long answer is … there is a wrapper system which allows you to enable some feature flags and link to a static wrapper library to get positive error codes in the case of applications that really, really want error codes to be positive and cannot be easily patched. It is very rarely used these days, and HaikuPorts instead tries to upstream patches to applications that assume error codes are positive, but it does exist. Most applications should not have to think about this at all, though.
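
                                                                                                In practice the convention looks like this; a minimal sketch, where do_thing is a hypothetical function returning a status_t:

                                                                                                    /* Haiku convention: B_OK is 0, every error constant is
                                                                                                       negative (all defined in os/support/Errors.h). */
                                                                                                    #include <SupportDefs.h>   /* status_t; pulls in B_OK et al. */

                                                                                                    extern status_t do_thing(void);   /* hypothetical */

                                                                                                    void example(void) {
                                                                                                        status_t err = do_thing();
                                                                                                        if (err != B_OK) {
                                                                                                            /* err is negative here: all Haiku error codes are < 0 */
                                                                                                        }
                                                                                                    }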

                                                                                                Anyway, please feel free to ask us Haiku devs on IRC or GitHub to review Haiku-specific things. (I thought I looked at the Zig port before it was merged, but I guess I missed that section.) It used to be the case that most projects would just tell us “I don’t really care about Haiku, keep your patches in your own tree,” but that seems to be changing in recent years, and now most projects seem amenable to accepting patches from us. Rarely, as in the case of Zig, we even get patches done by someone who isn’t a Haiku or HaikuPorts developer! So that is itself an exciting development, even if they get things wrong some of the time.

                                                                                          1. 5

                                                                                            I was plotting out a small hard sci-fi novella, but realized early on that the setting and plot would lend itself really well to a 2D platformer. So now I’m learning Godot.

                                                                                            1. 1

                                                                                              I’m intrigued af.

                                                                                            1. 8

                                                                                              Oh, also: whatever approach you choose, you are going to also need to provide an ergonomic, performant animation API.

                                                                                              This is where I died inside. Just no. Animations are one of the most despicable developments in recent UIs. I don’t want to keep waiting for the computer to do its trivial jobs, and I don’t want to be distracted in general. Want to make your phone faster? Disable animations!

                                                                                              You need to support emoji.

                                                                                              Also no, this is a largely pointless complication.

                                                                                              Async: You do have nice ergonomic async support, don’t you?

                                                                                              What does this even mean? Win32/X11/Qt/GTK+ are all async by their sheer nature.

                                                                                              He’s describing an opinionated, everything-but-the-kitchen-sink approach. But a decent overview nonetheless.

                                                                                              1. 33

                                                                                                Complaints about animations and emoji sound very much like “old man yells at cloud”.

                                                                                                Emoji (and, more generally, Unicode support beyond the BMP) are now an expected feature of modern software.

                                                                                                UI animations can be done well and be helpful, and hint how UI items relate to each other spatially. It’s really weird that even low-end mobile devices have GPUs that can effortlessly animate millions of 3D triangles, but sliding a menu is considered a big effort.

                                                                                                1. 7

                                                                                                  I’ve honestly never seen a single helpful UI animation in my life, other than spinning wheels and the like, indicating that the application/system hasn’t frozen up completely. Instead of “click, click, click, be done”, things tend to shift into “click, have your attention grabbed, click, have your attention grabbed, throw your computer out of the window and go live in the woods”. The GNOME Builder and GIMP 3 settings dialogues are spectacular examples of GNOME continuing to dig its own grave.

                                                                                                  One would expect the computer to be a tool, rather than a work of art.

                                                                                                  1. 19

                                                                                                    Maybe that’s just a failing of GNOME?

                                                                                                    For example, macOS has generally well-done animations that don’t get in the way. It tends to animate after an action, but not before. Animations are very quick, unless they’re used to hide loading time. There are also animations to bring your attention to newly created/focused elements. Reordering of elements (menu icons, tabs) is smooth. IMHO they usually add value.

                                                                                                    Touchscreen OSes also animate the majority of elements that can be swiped or dragged. The animation is not on a fixed timer, but is “played” by your finger movements, so you may not think of it as an animation, but technically it is animating the positions and sizes of UI elements. Users would think the UI was broken if it jumped instantly instead of “sticking” under the finger.

                                                                                                    1. 4

                                                                                                      Eh, macOS has sort of overdone it with animation, too. I’ve activated “Reduce motion” because, by default, the transition between full-screen apps is done by having windows “slide” in and out of view. It’s not just slow (it takes substantially more time to do the transition than it takes to Cmd-Tab between the applications), it legit makes you dizzy if you keep switching back and forth.

                                                                                                      I imagine it’s taken verbatim from iPad (last macOS version I used before Big Sur was… Tiger, I think? so I’m not sure…) where it makes some sense, but I can’t for the life of me understand what’s the point of it on a thing that has a keyboard and doesn’t have a touch screen.

                                                                                                      1. 3

                                                                                                        I can’t for the life of me understand what’s the point of it on a thing that has a keyboard and doesn’t have a touch screen.

                                                                                                        Probably because most people will use their MacBook trackpad to swipe between full screen apps/desktops. I only rarely use the Ctrl-arrow shortcuts for that. A lot of macOS decisions make more sense when you assume the use of a trackpad rather than a mouse (which is why I’ve always found it weird they don’t ship a trackpad with iMacs by default)

                                                                                                        1. 1

                                                                                                          You can still Cmd-Tab between full-screen applications, which is what a lot of people actually do – a “modern” workplace easily gets you an email client, several Preview windows, a Finder window, and a spreadsheet. Trackpad swiping works great when you have two apps open, not so much when you’ve got a bunch of them.

                                                                                                          When you’re on the seventh of the fifteen invoices you’ve got open, you kinda want to get back to the spreadsheet without seeing seven invoices again. That’s actually a bad example, because full-screen windows from the same app are handled very poorly, but you get the point: lots of apps, and you don’t always want to see all of them before you get to the one you need…

                                                                                                        2. 1

                                                                                                          These animations are helpful, and have been shown to be so. It’s not something some asshole down at Cupertino just cooked up because he thought it would look cool. Cues as to the spatial relations of things (as the desktops have an order, and you use left/right gestures to navigate them) are very valuable to a lot of people, and they even let you turn the animations off, so I don’t really see anything worth complaining about.

                                                                                                          I mean there’s a lot of questionable things Apple is doing these days, but that’s not one of them.

                                                                                                          1. 2

                                                                                                            I’m not talking about desktops, but “maximized” applications (i.e. the default, full-screen maximisation you get when you press what used to be the green button).

                                                                                                            You get full-screen sliding animations when you Cmd-Tab between apps in this case, even though there are no spatial relations between them, as the order in which they’re shown in the Cmd-Tab stack has nothing to do with the order in which they’re shown in Mission Control (the former is obviously mutable, since it’s a stack; the latter is fixed).

                                                                                                            In fact, precisely because one’s a stack and the other one isn’t, the visual cue is wrong half the time: you switch to an application to the right of the stack, but the screen slides out to the left.

                                                                                                            Animation when transitioning between virtual desktops is a whole other story and yes, it makes every bit of sense there.

                                                                                                            and have been shown to be so.

                                                                                                            Do you have a study/some data for that? I know of some (I don’t have the papers at hand, but I can look them up if you’re curious), but they explicitly assume only occasional use, so they don’t even attempt to discuss the “what if you use it too much” case. So they’re not even close to applying to the use case of e.g. switching between a spreadsheet and the email client several times a minute.

                                                                                                            (Edit: just for the fun of it, I tried to count how often I Cmd-Tab between a terminal window and the reference manual after I ended up ticking the reduce animation box in accessibility options. I didn’t automate it so I gave up after about half an hour, at which point I was already well past 100. Even if this did encode any spatial cues, I think spatial cues are not quite as valuable as not throwing up my guts.)

                                                                                                      2. 9

                                                                                                        There are many legitimate uses for animations. Yes, loading spinners are one of them, unless you think that every single operation can be performed instantly. Sliding and moving elements around, especially on mobile, is another. A short animation to reveal new elements after a user action can also improve the UX.

                                                                                                        Not everything has to be one extreme or another - it’s not pointless animations everywhere, or no animations at all. When used well they can improve usability.

                                                                                                        1. 1

                                                                                                          Loading spinners are far from ideal, though. A progress bar would generally be better, so you have some idea of whether it is actually still working and how far there is left to go. Or anything else that provides similar information.

                                                                                                          I’ve seen so many cases where a loading spinner sits there spinning forever because, for example, the wifi disconnected. The animation is then misleading, since it will never complete.

                                                                                                          1. 4

                                                                                                            That’s an entirely different loading element. You can have an animated progress bar. And when progress is indeterminate, a spinner makes the most sense. A spinner not stopping on error is a UI bug, not a problem with the concept. If you want to get mad at bad design, how about programming languages and paradigms that let you end up in this state by not making you explicitly handle errors?

                                                                                                          2. 1

                                                                                                            The standard way of indicating a long-running operation is to say so to the user, and possibly add a progress bar for when progress can be sensibly measured. Like adding a spinner, it needs to be done explicitly. Revealing new elements can be done responsively by improving the design – the standard way is by turning an arrow.

                                                                                                            The notion of phones invading GUIs, as x64k hinted at, is interesting here (though not new). Transplanting things that do not belong, just because of familiarity.

                                                                                                            Going back to the article: it said I needed to. I don’t. And still, I can make anything I desire with the toolkit, with no real change to the required effort. Except for “modern” IM, where I don’t care to implement pictures as pseudo-characters.

                                                                                                          3. 7

                                                                                                            One would expect the computer to be a tool, rather than a work of art.

                                                                                                            The computers I remember most fondly all had some quality of art in them. Sometimes in the hardware, sometimes in the software, and the ones I like the most had it in both.

                                                                                                            Animations are important on devices where feedback is mostly visual. Most computers don’t have haptics, and we often get annoyed at audio feedback. Visual cues that an action was performed, or that something happened, are important. There is a difference between UI animation and a firework explosion in the corner of the app.

                                                                                                            1. 5

                                                                                                              One would expect the computer to be a tool, rather than a work of art.

                                                                                                              You, me, and everybody else in this thread are in the top 0.001% of computer power users. What’s important to us is totally different from what’s important to the rest of the population. Most normies I talk to value pleasing UIs. If you want to appeal to them, you’re going to need animations.

                                                                                                              1. 6

                                                                                                                I’d argue even further that the 0.001% of computer power users also need animations. When done well, animations really effectively convey state changes in a way that speaks to our psychology. A great example is that when I scroll a full page in my web browser, the brief scroll animation shows my brain how where I am now relates to where I was a split second ago. Contrast this to TUIs which scroll by a full page in a single frame. It’s easy to consciously believe that we can do without things like animations, but I’m pretty sure that all the little immediate state changes can add up to a subperceptual bit of cognitive load that nevertheless can be fatiguing.

                                                                                                                1. 2

                                                                                                                  I think good animations are ‘invisible’, but bad ones aren’t. So people remember the bad more than the good.

                                                                                                            2. 3

                                                                                                              UI animations can be done well

                                                                                                              Absolutely. I love the subtle animations in iOS. Like when I long-press on the brightness slider to access more controls like the dark mode toggle. I’m already making an action that takes more time than a simple tap, and the OS responds with a perfectly timed animation to indicate that my action is being processed.

                                                                                                              On the other hand, animations can be very easily abused. Plenty of examples, like today’s Grumpy Website post, show animations that hinder accessibility. I think the cases where animation goes wrong are where it was thrown in only because “it’s modern” rather than primarily as a means to convey information.

                                                                                                            3. 15

                                                                                                              I respectfully disagree with your opinion about animations. There are lots of times when I genuinely feel that the animation is important and actually conveys information. For example, the Exposé-style interface in macOS and GNOME is much better since the windows animate from their “physical” locations to their “overview” locations; the animation provides important context about which windows went where, so your eyes get to track the moving objects. It also helps that those animations track your finger movements on the trackpad perfectly, with one pixel of movement per tiny distance travelled across the trackpad (though the animation also has value when triggered using a hotkey IMO).

                                                                                                              But there’s definitely a lot of software which overuses animations. The cardinal sin of a lot of GUI animations is to make the user wait for your flashy effects. A lot of Apple’s and GNOME’s animations do fit this description, as well as arguably most animations in general. So I think a GUI framework needs a robust animation system for when it’s appropriate, but application programmers must show much more discretion about when and how they choose to use animations. For example, I’m currently in Safari on macOS, and when I do the pinch gesture to show an overview of all the tabs, I have to wait far too long for the machine to finish its way-too-flashy zoom animation before I actually get the information I need in order to interact further.

                                                                                                              1. 6

                                                                                                                Bad news, even smooth scrolling is a kind of animation.

                                                                                                                1. 2

                                                                                                                  I’ll admit, this is an improvement in the browser. Bumping my counter to one case of a useful animation.

                                                                                                                2. 3

                                                                                                                  What does this even mean? Win32/X11/Qt/GTK+ are all async by their sheer nature.

                                                                                                                  Not really. It is very easy to block the event loop with various other calls (including things like the clipboard, which, as the article says, is async under the hood on X, but Qt and GTK don’t present it that way). The GUI toolkit needs to have some way for other operations to hook into the event loop so you can avoid blocking.

                                                                                                                  Not terribly difficult, but you do still need to at least think about it; it isn’t fully automatic. (Even on Windows, where it is heavenly bliss compared to Unix, you still need to call a function like SleepEx in your loop to opt into some additional async processing.)
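
                                                                                                                  (A sketch of the Windows pattern: an alertable wait in the message loop, here MsgWaitForMultipleObjectsEx rather than SleepEx, so queued async completion routines get a chance to run:)

                                                                                                                      /* Alertable message pump: APCs (async completion
                                                                                                                         callbacks) run during the wait instead of
                                                                                                                         blocking the UI. */
                                                                                                                      #include <windows.h>

                                                                                                                      void pump(void) {
                                                                                                                          for (;;) {
                                                                                                                              DWORD r = MsgWaitForMultipleObjectsEx(
                                                                                                                                  0, NULL, INFINITE, QS_ALLINPUT, MWMO_ALERTABLE);
                                                                                                                              if (r == WAIT_IO_COMPLETION)
                                                                                                                                  continue;   /* an async callback ran; wait again */
                                                                                                                              MSG msg;
                                                                                                                              while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                                                                                                                                  if (msg.message == WM_QUIT)
                                                                                                                                      return;
                                                                                                                                  TranslateMessage(&msg);
                                                                                                                                  DispatchMessage(&msg);
                                                                                                                              }
                                                                                                                          }
                                                                                                                      }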

                                                                                                                  1. 2

                                                                                                                    GTK+ does present clipboard retrieval asynchronously; see https://docs.gtk.org/gtk3/method.Clipboard.request_contents.html, which takes a callback argument – that much I remember. Setting clipboard contents can be done blindly; you can have ownership snatched away at any moment.
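
                                                                                                                    (For reference, a minimal GTK3 sketch using the text-specialised sibling of that call:)

                                                                                                                        /* Async clipboard read in GTK3: request_text returns
                                                                                                                           immediately; the callback fires later from the main loop. */
                                                                                                                        #include <gtk/gtk.h>

                                                                                                                        static void on_text(GtkClipboard *cb, const gchar *text,
                                                                                                                                            gpointer data) {
                                                                                                                            if (text != NULL)
                                                                                                                                g_print("clipboard: %s\n", text);
                                                                                                                        }

                                                                                                                        void request_clipboard(void) {
                                                                                                                            GtkClipboard *cb = gtk_clipboard_get(GDK_SELECTION_CLIPBOARD);
                                                                                                                            gtk_clipboard_request_text(cb, on_text, NULL);
                                                                                                                        }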

                                                                                                                    Going the way of recursive event loops requires caution that I would avoid imparting on users as much as possible, in particular because of callbacks vs state hell. Typically, this is reserved for modal dialogues, and the API user knows what they’re dealing with.

                                                                                                                    There’s also the possibility of processing the event pump selectively, though that’s another thing you don’t want to do to yourself.

                                                                                                                  2. 1

                                                                                                                    Want to make your phone faster? Disable animations!

                                                                                                                    This is generally the point of animations as a UX feature - they mask the slow operation of applications with a silly animation to keep you distracted/indicate a process is occurring. Want to notice when your app is using a jpeg to pretend it’s loaded? Disable animations!

                                                                                                                    1. 21

                                                                                                                      It’s not only that. The human visual system is very good at tracking movement, much worse at noting that a thing has disappeared and reappeared. If a thing disappears, you’ll typically notice quickly but not immediately be aware of what has disappeared. If you animate movement then there’s lower cognitive load because a part of your brain that evolved to track prey / predators is used rather than anything related to understanding the context of the application.

                                                                                                                      1. 4

                                                                                                                        This is it exactly. The human visual system evolved in a world where things don’t instantly flicker out of existence or appear out of nothing.

                                                                                                                      2. 2

                                                                                                                        I don’t think OP is talking about things like progress bars and spinning indicators, which are pretty legitimate everywhere, but things like “gliding” page transitions between application screens. If a framework is indeed so slow that you notice it rendering widgets, an animation API will help now and then, but won’t make that big a dent. (Edit: also, unless you’re loading things off a network, loading JPEGs cannot be a problem anymore, it hasn’t been a problem in twenty years!)

                                                                                                                        I do think this piece could’ve been more aptly titled “so you want to write a GUI framework for smartphones”. Animations are important on touch screens driven by gestures (e.g. swiping) – gestures are failure-prone, they need incremental feedback, and so on – plus nobody who’s not high on their 2013-era post-PC supply expects efficiency out of phone UIs.

                                                                                                                        But they are pretty cringy on desktop. E.g. KDE’s Control Center (and many Kirigami apps) has an annoying misfeature, where you click “back” and the page moves out of view as if you’d swiped it. But you didn’t swipe. Regardless of what you think about animation, it’s not even the right animation.

                                                                                                                        That’s why so many people see them as useless eye candy. If you go all Borg on it and only think in absolute terms, you get a very Borg user experience.

                                                                                                                        Edit: yes, I know, “a modern GUI toolkit” should have all these things. The point is you can drop a lot of them and still write useful and efficient applications. Just because Google is doing something on Android doesn’t mean everyone has to do it everywhere.

                                                                                                                        1. 3

                                                                                                                          It’s funny you mention page transitions. I have my ebook reader set up to do a 200ms-ish animation when I tap the ‘next page’ button where the current page slides off to the left and the next one slides in from the right. It has an option to disable it, but I actually find that disorienting in this vague way I can’t explain. But on my desktop, it’s fine with no animations.

                                                                                                                        2. 1

                                                                                                                          Empirically, that is not how it’s used. This masking is a minority of use cases, and even then it’s bad. To some people who aren’t me, it might be better described as “eye candy” and “smoothness”.

                                                                                                                          Being able to disable this irritation is a matter of luck; e.g. it’s hardcoded in CSS in Firefox and GTK+ (/org/gnome/desktop/interface/enable-animations only works partially).

                                                                                                                        3. 1

                                                                                                                          What does this even mean? Win32/X11/Qt/GTK+ are all async by their sheer nature.

                                                                                                                          I think they’re talking about Rust async, since this is all in the context of writing a cross-platform GUI toolkit in Rust. This is more of a problem than it seems because if you’re doing a cross-platform toolkit that uses native widgets, it’s not at all trivial to impedance-match whatever model the native widget toolkit uses behind the scenes with your toolkit, which exposes an async model.

                                                                                                                          (Edit: there are some additional complications here, too. For example, there are toolkits that are (generally) async but still do some operations synchronously. The author mentions the copy-paste API as an example.)

                                                                                                                          One might conclude that it’s better to not do any of that and instead expose the platform’s intended programming model, as second-guessing thirty year-old code bases tends to backfire spectacularly, but maybe I’m just being grumpy here…