Oh wow, this is a blast from the past for me. For my master’s thesis, I patched NetBSD’s scheduler to use a neural network to determine scheduling priorities on highly loaded systems.
The idea was to give interactive processes (ideally multimedia-like ones) higher priority than background jobs. It turns out UNIX’s abstractions make it really difficult to determine which process is the source of video output. Audio was a little easier, but only when the audio devices were used directly.
oh cool :)
Notably, rt-tests’ cyclictest is used mostly with SCHED_RR or SCHED_FIFO.
This is a tool that sets up timers, then measures the difference between the scheduled wakeup time and the time it actually obtains execution; that is, it measures scheduling latency. Its overhead is very low (it only runs for tiny slices of time when the timers fire, and the timers are spaced out), so it can be left running for long periods. It keeps statistics and reports them to the user, with values in microseconds. The maximum is of particular interest.
Linux does quite badly here: across different hardware it’s not uncommon to observe peaks above 20000µs, i.e. 20ms, more than one frame at 60Hz (!), after less than an hour of cyclictest running in the background with SCHED_FIFO.
Linux-rt helps a great deal here, typically managing to keep the Max value under 100µs.
Needless to say, even linux-rt’s results are pathetic. This poor performance is an unavoidable consequence of the complexity of the kernel. This is why RTOSs tend toward the microkernel design, and also why Linux is simply not suitable for the task and should never be used where hard realtime is needed: even with the improvements of the -rt patchset, scheduling latency is still theoretically unbounded.
Having said that, and returning to the topic, I have to note: NetBSD isn’t decent at this, either.