1. 47
  1. 6

    Wow, this is a great, no-bullshit explanation. Thanks Carl + the rest of the tokio team.

    1. 6

      Therefore, for applications to be fast, we must maximize the amount of CPU instructions per memory access.

      That’s a great summary of modern CPUs’ performance weirdness.

      Also, loom sounds great:

      It caught more than 10 bugs that were missed by the other unit tests, hand testing, and stress testing.
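      The quoted rule of thumb can be made concrete: the same arithmetic over the same data is cheap when memory is walked sequentially (many useful instructions per cache-line fetch) and expensive when it is walked at random (a miss on nearly every access once the data outgrows cache). A hypothetical micro-sketch, not from the article; the stride-based "shuffle" is just a cheap deterministic permutation:

      ```rust
      // Sequential walk: streams through cache lines, prefetcher-friendly.
      fn sum_sequential(data: &[u64]) -> u64 {
          data.iter().sum()
      }

      // Same elements visited through an index permutation, defeating
      // spatial locality and the hardware prefetcher.
      fn sum_scattered(data: &[u64], order: &[usize]) -> u64 {
          order.iter().map(|&i| data[i]).sum()
      }

      fn main() {
          let n = 1 << 20;
          let data: Vec<u64> = (0..n as u64).collect();
          // Stride by 7919 (odd, so coprime to a power of two): a valid
          // permutation of 0..n that scatters accesses across the array.
          let order: Vec<usize> = (0..n).map(|i| (i * 7_919) % n).collect();

          // Both walks compute the same sum; only the memory-access
          // pattern differs, which is where the wall-clock gap comes from.
          assert_eq!(sum_sequential(&data), sum_scattered(&data, &order));
          println!("sums match: {}", sum_sequential(&data));
      }
      ```

      Timing the two loops on a buffer larger than last-level cache is the usual way to see the effect directly.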

      1. 4

        An old definition of a “supercomputer” is, “a computer that turns CPU-bound problems into IO-bound ones.”

      2. 8

        It is better to be slow and correct than fast and buggy…

        I just wish this got said more.

        1. 2

          Absolutely incredible. One of the best things in tech I’ve read in recent months.

          When a task transitions to the runnable state, instead of pushing it to the back of the run queue, it is stored in a special “next task” slot. The processor will always check this slot before checking the run queue.

          This can break task fairness, right? The thread can keep running tasks off the next task slot and never get to the run queue. What am I missing here?

          1. 1

            If a task always spawns a new runnable task, you can think of it as an infinite loop, which would also break fairness.

            I think the idea is to mimic the behavior of QNX, where the pattern “actor A calls and waits on B; B executes and returns; A resumes” is scheduled by running both serially on the same thread.

            1. 1

              If a task always spawns a new runnable task, you can think of it as an infinite loop, which would also break fairness.

              In the base implementation, the queue is FIFO, so newly spawned tasks run after all existing tasks, and you’d still have guaranteed fairness.
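              The interaction between the “next task” slot and the FIFO queue can be sketched in a few lines. This is a hypothetical toy (tasks reduced to plain IDs, names mine, not tokio’s types): `schedule` fills the slot if it is empty and otherwise falls back to the back of the FIFO queue, and `next` drains the slot before the queue, mirroring the lookup order quoted above:

              ```rust
              use std::collections::VecDeque;

              struct Processor {
                  next_task: Option<u32>,   // the special slot, checked first
                  run_queue: VecDeque<u32>, // ordinary FIFO run queue
              }

              impl Processor {
                  fn new() -> Self {
                      Processor { next_task: None, run_queue: VecDeque::new() }
                  }

                  fn schedule(&mut self, task: u32) {
                      if self.next_task.is_none() {
                          // Fast path: the message-passing wake goes in the slot.
                          self.next_task = Some(task);
                      } else {
                          // Slot occupied: fall back to the FIFO queue.
                          self.run_queue.push_back(task);
                      }
                  }

                  fn next(&mut self) -> Option<u32> {
                      self.next_task.take().or_else(|| self.run_queue.pop_front())
                  }
              }

              fn main() {
                  let mut p = Processor::new();
                  p.schedule(1); // goes to the slot
                  p.schedule(2); // slot full -> queue
                  p.schedule(3); // queue
                  assert_eq!(p.next(), Some(1));

                  // If the running task wakes another task now, that task jumps
                  // ahead of 2 and 3 via the slot -- the fairness concern raised
                  // above. A real scheduler can cap consecutive slot uses to
                  // restore fairness.
                  p.schedule(4);
                  assert_eq!(p.next(), Some(4));
                  assert_eq!(p.next(), Some(2));
                  assert_eq!(p.next(), Some(3));
                  println!("queued tasks kept FIFO order");
              }
              ```

              The trace makes both points in the thread visible: tasks already in the queue keep their FIFO order, but a chain of wakes can keep reusing the slot ahead of them.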