Props for the entertaining solution. I hope nobody believes this is truly O(N) unless you’re limiting N to the number of cores on the system and guaranteeing that each invocation of sleep runs on a different core.

What are you talking about? Of course it’s O(N) in time and space. The constant factor is the time and space overhead of fork/exec/kernel-scheduler with the shell sleep sort. And the limit is still the maximum time you have and the space available on the computer.

In all seriousness, if you want to run it as fast as possible, it is indeed O(N^2) in time, because the aforementioned constant factor is O(N). With the shell program, if you fix that constant factor to be reasonably large, like the 1 sec in the article, the algorithm is O(N), with an upper limit of a few million numbers for a month and a little more, on a typical Linux box.
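For reference, here’s a minimal sketch of the kind of shell sleep sort being discussed (an assumption, not necessarily the article’s exact script; it also assumes a sleep that accepts the values as whole seconds):

```shell
#!/bin/sh
# Sleep sort sketch: spawn one background process per input number.
# Each process sleeps for its value, then prints it, so smaller
# numbers wake up and print first.
for n in "$@"; do
  sleep "$n" && echo "$n" &
done
wait   # block until every sleeper has printed
```

Running `sh sleepsort.sh 3 1 2` prints 1, 2, 3 after about 3 seconds. The per-number fork/exec and scheduler-queue insertion are exactly the "constant factor" under debate above.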

It’s Ω(n log n); the scheduler can insert the next task in its execution queue in Ω(log n). (Priority queue insert.)