In expectation. I find the title a bit clickbaity, since all of this is based on expected load and not worst-case scenarios.
I’ll see if I can model the worst case, too. It’s a bit tricky in PRISM because it can only calculate expected reward, but I have some ideas on how to trick my way through.
The best you can hope for is a constant factor: take d jobs of length 1 which all arrive at the same time. The optimal schedule is then to put d/2 of them on each of the two processors (or queues). The sum of latencies is (0 + 1 + 2 + … + d/2−1) * 2, which is quadratic in d, and which makes the average latency linear in d. Since this strategy is optimal, every randomized strategy is also at best linear in d in expectation, and there is no sublinear improvement. (N.B. people often study the maximum latency instead, but it doesn’t matter here.)
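To make the arithmetic concrete, here is a rough Python sketch of that example (my own illustration, assuming d is even and that each queue runs its jobs back to back):

    # Quick sketch (assumes d is even): d unit-length jobs arrive at t = 0 and
    # are split evenly across two queues, each served back to back, so the
    # k-th job in a queue waits k time units before it starts.
    def unit_job_waits(d):
        return [k for k in range(d // 2)] * 2

    for d in (10, 100, 1000):
        waits = unit_job_waits(d)
        print(d, sum(waits), sum(waits) / d)   # sum ~ d^2/4, average ~ d/4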
If the tasks cannot all arrive at the same time, then you can imagine long-running tasks arriving, each with a processing time significantly larger than d, such that all tasks arrive long before the first job is finished (take for example 2d; for the asymptotics it doesn’t matter).
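The same kind of sketch, again just my own illustration, adapted to that staggered case: jobs of length 2d arriving one per time unit and dealt out alternately to the two processors. The average wait is still on the order of d/4 job lengths, so the asymptotics don’t change:

    # Sketch of the staggered case: d jobs of length 2*d arrive at t = 0, 1,
    # ..., d-1, so every job has arrived long before the first one finishes
    # at t = 2*d. Jobs are dealt out alternately to the two processors.
    def staggered_waits(d):
        length = 2 * d
        waits = []
        for i in range(d):
            arrival = i
            queued_ahead = i // 2                      # earlier jobs on the same processor
            start = max(arrival, queued_ahead * length)
            waits.append(start - arrival)
        return waits

    for d in (10, 100, 1000):
        waits = staggered_waits(d)
        avg = sum(waits) / d
        print(d, avg, avg / (2 * d))   # average wait approaches d/4 job lengths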
You can improve on this if you know some facts about the incoming jobs, such as the sum of all lengths, the length of the shortest job, the length of the longest job, etc. But unfortunately, in the general case there is no quadratic improvement.
If you are interested in more of the theoretical background, check out makespan scheduling.
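As one concrete example of what knowing the job lengths buys you, here is a rough sketch (mine, not from the article) of Graham’s LPT rule from the makespan scheduling literature: sort jobs longest-first and always hand the next job to the least-loaded machine, which is known to be within a factor 4/3 − 1/(3m) of the optimal makespan on m identical machines.

    import heapq

    # Rough sketch of the LPT (longest processing time first) heuristic:
    # sort the jobs by length, then repeatedly give the next job to the
    # machine with the smallest load so far.
    def lpt_makespan(job_lengths, machines=2):
        loads = [0] * machines
        heapq.heapify(loads)                      # min-heap of machine loads
        for length in sorted(job_lengths, reverse=True):
            lightest = heapq.heappop(loads)       # least-loaded machine
            heapq.heappush(loads, lightest + length)
        return max(loads)

    print(lpt_makespan([7, 5, 4, 3, 3, 2]))       # -> 12, which happens to be optimal here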
“Latency” as a term seems to be confusing to people. In my experience, it’s used synonymously with “lag” and (mis)interpreted in one of several ways:
In this article, “latency” seems to be the queuing delay.
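In code terms, the distinction I’m drawing is roughly this (a throwaway illustration of my own, not anything from the article):

    # Ad-hoc illustration of the two readings: queueing delay is the time a
    # job spends waiting before it starts, response time also folds in the
    # job's own execution.
    def queueing_delay(arrival, start):
        return start - arrival          # depends on the queueing strategy

    def response_time(arrival, finish):
        return finish - arrival         # also depends on the job's own length

    # A job that arrives at t=0, starts at t=3 and finishes at t=10:
    print(queueing_delay(0.0, 3.0))     # 3.0
    print(response_time(0.0, 10.0))     # 10.0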
Correct me if I’m wrong, but 2. seems like it is largely related to the execution and arrival times of the jobs and less to the queueing strategy, no?
One worker is infinitely better than zero. Now let’s get back to work.