1. 1

    The last time I was on a Mac, about three years ago, I simply could not adjust to not having highlight-and-middle-click paste. I hardly ever touch the rodent, but when I do that is what I need it for.

    Are there any good solutions to adding that feature to a Mac now? At the time all the “solutions” were workarounds that came with their own set of new problems.

    1. 2

      I use BetterTouchTool and a mouse with extra side buttons, and bind copy and paste to the side buttons. Also, both Apple’s Terminal and iTerm2 support middle-click paste.

      1. 1

        Thanks. This is yet another workaround, though.

        For one thing, being able to copy without having to click a button - on the keyboard or on the mouse - is very streamlined. Call me spoiled.

        But more importantly, being able to paste with the middle mouse button means that I don’t have to first click or alt-tab to focus the receiving window. I just move the mouse over the non-focused window, middle-click and that window gets focus and the paste. With a dedicated “paste” key one would have to first focus the field that is to accept the paste.

        Therefore the proposed solution is less streamlined on both sides of the operation.

    1. 1

      Thanks for the tips!

      Can you elaborate on the display scaling point? If I understood you correctly, you are recommending something completely opposite to https://tonsky.me/blog/monitors/, but I may have just misunderstood you.

      1. 1

        I believe I recommend the same thing, which is to try and stick to integer scaling factors.

        1. 1

          Thanks for the clarification.

      1. 4

        For reference I find that when I make the HN front page the spike on my CloudFlare analytics is about 2x as big as the spike on Google Analytics, with Google showing a really high rate of mobile users. Seems plausible to me that over 50% of HN/Lobsters users have an ad blocker on their desktop and some people also have one on their phone.

        1. 3

          This is interesting and represents progress in exposing a problem. But it is end to end.

          I keep wondering why the following hasn’t been measured, min/avg/max latency-wise:

          • (electrical) keypress to message on usb wire.
          • usb wire to gpio (with, e.g., a SCHED_FIFO program listening for the event and setting a gpio line on receipt; repeat the test while using the computer as a desktop for an extended amount of time to see the influence of load despite the scheduling class).
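
          Something like this minimal sketch is what I have in mind for the listener side (the event device path and GPIO pin are assumptions, and the legacy sysfs GPIO interface shown here is deprecated in favour of libgpiod on newer kernels):

              /* Rough sketch: a SCHED_FIFO process that raises a GPIO line the
               * moment a key event arrives, so an external logic analyzer can
               * measure usb-wire-to-gpio latency. Paths and pin are assumptions. */
              #include <fcntl.h>
              #include <linux/input.h>
              #include <sched.h>
              #include <stdio.h>
              #include <unistd.h>

              int main(void) {
                  struct sched_param sp = { .sched_priority = 99 };
                  if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
                      perror("sched_setscheduler");   /* needs root or CAP_SYS_NICE */

                  int kbd  = open("/dev/input/event0", O_RDONLY);            /* assumed keyboard */
                  int gpio = open("/sys/class/gpio/gpio17/value", O_WRONLY); /* assumed pin */
                  if (kbd < 0 || gpio < 0) { perror("open"); return 1; }

                  struct input_event ev;
                  while (read(kbd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
                      if (ev.type == EV_KEY && ev.value == 1) {  /* key press */
                          pwrite(gpio, "1", 1, 0);   /* rising edge: the measured event */
                          pwrite(gpio, "0", 1, 0);
                      }
                  }
                  return 0;
              }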

          My suspicion is that the electronics in the keyboards themselves (dominated, I suspect, by firmware) and the operating system (e.g. Linux, whose UNIX-like design makes reasoning about latency so hard it is intractable) introduce a large amount of latency and jitter.

          1. 4

            Oh, maybe I should have linked to this in the post; it performs the second type of measurement and finds that Linux keyboard processing latency can be less than 1 ms if you do it right: https://michael.stapelberg.ch/posts/2018-04-17-kinx-latency-measurement/

            Key press to USB latency of course depends heavily on the keyboard, but you can try to detect it by doing an end-to-end measurement of different keyboards while holding the software end constant: https://danluu.com/keyboard-latency/

            1. 3

              Linux keyboard processing latency can be less than 1ms

              Careful. Latency stops mattering once it is low enough, and a low average latency is very easy to achieve. Achieving a low maximum latency is the hard part.

              Maximum latency is the value that really, really matters. Because that’s the one where UX is broken.

              while holding the software end constant

              Linux is too jittery to be used for this measurement. This is why I was very specific (message on usb wire).

              https://danluu.com/keyboard-latency/

              It suffers from the issue above, and from the keypress being mechanical. I would argue they are measuring something else entirely, rather than the latency introduced by the electronics/firmware within the keyboard.

              1. 6

                In practice, the only times I notice extreme latency (to the point where it doesn’t feel instant) on my Linux system is when I start typing in a Google Doc before it’s loaded or something like that. In that case, the issue is that the application isn’t ready to process input, not the kernel. Even when I’m running four fuzzer instances (each locked to one of my four cores) or compiling something like Chromium, the input latency is not noticeably worse.

                I know that UNIX is a conceptually inferior operating system design compared to some others, but the huge amounts of manpower going into the Linux kernel mean that issues like input latency under load are not significant. I don’t doubt that they are a noticeable issue on some Linux desktop systems, but the issue there would be the relatively poor desktop environments, not the kernel.

                1. 5

                  I agree that tail latency matters, more like 99.9th or 99.99th percentile latency than the maximum, but if you read the article, I don’t think there’s any reason to suspect, based on how things work, that Linux input processing will have high tail latencies in any way that doesn’t just apply to anything running on a non-RTOS. I’m pretty convinced that USB event processing latency is dwarfed by other factors like compositor latency as long as you use a 1000 Hz keyboard.
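
                  To make the distinction concrete, here is a toy nearest-rank computation over made-up latency samples; a handful of rare stalls barely move p99.9 but entirely define the maximum:

                      /* Toy illustration: nearest-rank percentiles vs. the maximum
                       * over made-up latency samples. */
                      #include <stdio.h>
                      #include <stdlib.h>

                      static int cmp(const void *a, const void *b) {
                          double d = *(const double *)a - *(const double *)b;
                          return (d > 0) - (d < 0);
                      }

                      int main(void) {
                          enum { N = 10000 };
                          static double ms[N];
                          for (int i = 0; i < N; i++)
                              ms[i] = 0.3 + 0.0001 * (i % 997);   /* bulk: ~0.3-0.4 ms, made up */
                          for (int i = 1; i <= 5; i++) ms[i] = 2.0;  /* a few 2 ms hiccups */
                          ms[0] = 25.0;                              /* one freak 25 ms stall */

                          qsort(ms, N, sizeof *ms, cmp);
                          printf("p99.9 = %.2f ms, p99.99 = %.2f ms, max = %.2f ms\n",
                                 ms[(int)(0.999 * N)],   /* nearest-rank-ish index */
                                 ms[(int)(0.9999 * N)], ms[N - 1]);
                          return 0;
                      }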

            1. 7

              Very good summary with a ton of interesting links, I’ll probably get back to them later.

              One point the author missed however, is about organizational complexity. Quite often in my experience, the decision is made to move to a distributed system to reflect the structure of the organisation. That’s a solution (with its set of trade-offs) to the problem of “how to have 50 engineers working on the same project?”.

              1. 4

                Agreed. I think I touch on that briefly as one of the real reasons people currently use microservices. In theory, better tools for separately updated, hot-reloaded dynamic libraries with stable interfaces could solve the problems people split services along team lines for, without adding the asynchrony.
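
                To sketch what I mean (all names here are hypothetical: a made-up libteam_service.so exporting a versioned handle_request_v1 symbol as its stable interface):

                    /* Host-side sketch of the hot-reload idea: each team ships a new
                     * .so and the process loads it in place, keeping calls synchronous
                     * instead of RPC. The library and symbol names are hypothetical. */
                    #include <dlfcn.h>
                    #include <stdio.h>

                    typedef int (*handle_request_fn)(const char *req, char *resp, int resp_len);

                    int main(void) {
                        void *lib = dlopen("./libteam_service.so", RTLD_NOW);  /* hypothetical */
                        if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

                        /* The stable interface is just a versioned symbol. */
                        handle_request_fn handle =
                            (handle_request_fn)dlsym(lib, "handle_request_v1");
                        if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

                        char resp[256];
                        handle("ping", resp, sizeof resp);   /* same address space: no network hop */
                        printf("%s\n", resp);

                        dlclose(lib);  /* a real hot-reloader would re-dlopen when a new build lands */
                        return 0;
                    }

                (Built with cc host.c -ldl; the .so itself would be each team’s separately compiled artifact.)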

              1. 2

                This is a nice article. I ran into this same issue implementing telefork, except I had the newer Docker version she mentions, which meant ptrace worked perfectly but process_vm_readv failed mysteriously (you may notice it in Julia’s source code excerpt, but there’s nothing enabling it in the Docker commit that enables ptrace). I was confused like Julia, because nothing in the man pages suggested why process_vm_readv could fail when ptrace was permitted and working. I don’t quite remember, but I think I figured it out by guessing it might be Docker, searching the Docker source for process_vm_readv, and finding that same snippet.
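
                For anyone else who hits this, the confusing shape of the failure was roughly this (the pid and remote address are placeholders; under the affected Docker profile the read fails, presumably with EPERM from the seccomp filter, even though the attach succeeded):

                    /* Sketch of the confusing failure: attaching with ptrace succeeds,
                     * yet process_vm_readv is filtered separately by the container's
                     * seccomp profile. Pid and remote address are placeholders. */
                    #define _GNU_SOURCE
                    #include <errno.h>
                    #include <stdio.h>
                    #include <string.h>
                    #include <sys/ptrace.h>
                    #include <sys/types.h>
                    #include <sys/uio.h>
                    #include <sys/wait.h>

                    int main(void) {
                        pid_t pid = 1234;                      /* placeholder target pid */
                        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) != 0) {
                            perror("ptrace");                  /* this part worked fine for me */
                            return 1;
                        }
                        waitpid(pid, NULL, 0);                 /* wait for the tracee to stop */

                        char buf[64];
                        struct iovec local  = { .iov_base = buf, .iov_len = sizeof buf };
                        struct iovec remote = { .iov_base = (void *)0x400000,  /* placeholder addr */
                                                .iov_len  = sizeof buf };
                        if (process_vm_readv(pid, &local, 1, &remote, 1, 0) < 0)
                            fprintf(stderr, "process_vm_readv: %s\n", strerror(errno));

                        ptrace(PTRACE_DETACH, pid, NULL, NULL);
                        return 0;
                    }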

                1. 3

                  This is not really fork. For a start, fork implies copy-on-write mappings. If a process has a MAP_SHARED mapping (of a file or [anonymous] shared memory object) then both the parent and the child will see the same thing and it will be explicitly synchronised. You could do this via RDMA, but it wouldn’t be cheap.
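
                  A minimal demonstration of the distinction, for anyone who hasn’t run into it:

                      /* Minimal demo of the semantics at stake: after fork, a
                       * MAP_PRIVATE page is copy-on-write (the child's write is
                       * invisible to the parent) while a MAP_SHARED page stays
                       * synchronised between both processes. */
                      #define _GNU_SOURCE
                      #include <stdio.h>
                      #include <sys/mman.h>
                      #include <sys/wait.h>
                      #include <unistd.h>

                      int main(void) {
                          int *priv = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                          int *shrd = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
                          *priv = *shrd = 0;

                          if (fork() == 0) {   /* child */
                              *priv = 1;       /* copy-on-write: parent keeps its own page */
                              *shrd = 1;       /* shared: parent observes this write */
                              _exit(0);
                          }
                          wait(NULL);
                          printf("private=%d shared=%d\n", *priv, *shrd);  /* private=0 shared=1 */
                          return 0;
                      }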

                  Ignoring file descriptors also means ignoring the most difficult part of doing this right. VM migration is orders of magnitude easier than POSIX process migration because the amount of state in the hypervisor for each VM is vastly less than the state in a *NIX kernel for each process. A VM typically has a handful of virtual or emulated devices, often just a disk and a network. The only state of the disk device (other than the backing store itself) is the queue of pending requests, which is easy to transport. The only state of the network device (other than the external routing tables) is the set of pending requests and in-flight responses, which are easy to migrate. In contrast, each UNIX file descriptor has an underlying object and an unbounded amount of stream state associated with it.

                  Migrating this properly is difficult for three reasons. First, there’s no introspection to automatically copy the state associated with the object. Second, state is shared. If I open a file and fork, then both processes will share the same file descriptor and reading with one will alter the state of the other. Third, the objects are often intrinsically local. For example, you can copy a file from the local filesystem, but the filesystem is a shared namespace, so you then alter the sharing behaviour between that process and any other process that has the file open.
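
                  The second point is easy to demonstrate (the path here is just a placeholder):

                      /* Tiny demo of the shared-state problem: parent and child share
                       * one open file description, so the child's read advances the
                       * parent's offset too. The path is a placeholder. */
                      #include <fcntl.h>
                      #include <stdio.h>
                      #include <sys/wait.h>
                      #include <unistd.h>

                      int main(void) {
                          int fd = open("/etc/hostname", O_RDONLY);  /* placeholder file */
                          if (fd < 0) { perror("open"); return 1; }

                          if (fork() == 0) {        /* child */
                              char c;
                              read(fd, &c, 1);      /* consumes one byte of the shared stream */
                              _exit(0);
                          }
                          wait(NULL);
                          /* Prints 1, not 0: the offset lives in the shared open file
                           * description, which is exactly the state a migrator can't
                           * treat as private to one process. */
                          printf("parent offset: %ld\n", (long)lseek(fd, 0, SEEK_CUR));
                          return 0;
                      }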

                  I find it difficult to imagine this being generally useful because any nontrivial process is going to find itself in an undefined state after telefork. The UNIX process model is not the place to start if you want to end up with an abstraction like this. In fact, given the later use cases, an RPC server that runs some WebAssembly provided in the RPC message is closer.

                  1. 7

                    I feel like I explicitly said that handling file descriptors correctly is super hard, although CRIU and DMTCP make attempts that work for the common cases. I also mentioned possible extensions to do both lazy copying and using a MESI-like protocol to do shared memory of pages across machines. What I have is just a fun demo to show what’s possible if you ignore the hard parts, and I say as much.

                    1. 3

                      Just to have said it: that this was a limited tech demo was indeed abundantly clear in the post. Not sure why people are acting as if you’re claiming this to be production-grade, ready-to-ship software.

                      I really enjoyed reading the article, I can physically feel the excitement you must’ve felt when you first got this demo working. Thanks for writing it up :)

                      1. 1

                        <3

                      2. 1

                        I’m sorry if I came across as overly critical. It is a neat demo. I’ve done something similar in the past and hit the limitations of the approach quite quickly. I’ve also read a bunch of research papers trying to do something similar as a complete solution, and they all hand-waved away a load of the hard bits, so I’m somewhat prejudiced against the approach.

                    1. 1

                      Cool thing, but it seems to be just an accidental re-invention of a feature Erlang has had for a very long time.

                      1. 4

                        Sure, but then your software has to be written in Erlang; this works for any language. The best part of the multiple people who’ve written this comment in different places is that, as far as I can tell, Erlang doesn’t even support process migration out of the box: you can only spawn a process on a different node, which is more akin to copying the binary and running it, like MPI does. There does seem to be a third-party solution for Erlang, though: https://github.com/michalwski/proc_mobility

                        Maybe I’m wrong though, I haven’t really used Erlang. I really like the ideas and it’s a really cool system, but often I want to write really fast software, and then “use Erlang” stops being a viable solution.

                        Really all of this misses the point though which is that I just did this for fun because it’s silly, and as a mechanism for explaining some low level ideas to people who may not have encountered them before. I mention in the post that this has been done before and that my implementation isn’t actually useful.