The Woz disk controller was really simple, and relied on the Apple’s CPU to do everything itself: turn the motor on, read some bits, write some bits, move the arm here, and so on. There was no command you could give the controller to read a block at an offset, so that routine had to be written in software (it lived in DOS), and worse than that: such a routine isn’t straightforward to write.
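For flavor, here is a rough sketch of what “no command for the controller” meant in practice: the Disk II card exposed a handful of memory-mapped soft switches, and software drove every primitive itself. The addresses below are for a slot-6 card (base $C0E0) as I remember them, so treat them as illustrative rather than authoritative.

```python
SLOT = 6
BASE = 0xC080 + SLOT * 0x10   # each slot's switches start at $C080 + slot*$10

# One soft switch per primitive: touching the address *is* the operation.
SWITCHES = {
    "motor_off": BASE + 0x8,
    "motor_on":  BASE + 0x9,   # "turn the motor on" = one memory access here
    "drive_1":   BASE + 0xA,
    "drive_2":   BASE + 0xB,
    "q6l_read":  BASE + 0xC,   # reading here shifts disk bits into a latch
}
# Moving the arm meant pulsing four stepper phases ($C0E0..$C0E7 for
# slot 6) in sequence, with software-timed delays between pulses.

print(hex(SWITCHES["motor_on"]))   # → 0xc0e9
```

There is no “read block” entry in that table; everything above it had to be composed, with cycle-exact timing, by the 6502.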
With the innermost and outermost tracks being different lengths, the programmers had a choice: use different routines depending on the track number, or use extra timing bits to resynchronise the loop that reads bits off the disk. DOS did the latter, arranging data into a stream of 5-bit “nibls”. The high bit of each on-disk byte was always set, so when decoding these “nibls”, a byte whose high bit was clear had to be one of these timing bits, and it was skipped.
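The decoding rule described above can be sketched in a few lines. This is a hedged model, not DOS’s actual 6502 code: it assumes the reader accumulates bits into an 8-bit on-disk byte and treats the byte as a complete “nibl” only once its high bit is set, so any extra zero bits between nibls are absorbed as timing padding.

```python
def read_nibls(bits):
    """Yield complete "nibls" from an iterable of 0/1 bits.

    A nibl is complete when its high bit is 1; zero bits that arrive
    before the accumulator's high bit is set act as timing/sync bits
    and are silently discarded.
    """
    byte = 0
    for bit in bits:
        byte = ((byte << 1) | bit) & 0xFF
        if byte & 0x80:          # high bit set: a full nibl
            yield byte
            byte = 0             # start shifting in the next one

# Example: two nibls separated by two extra timing zeros.
stream = [1,1,1,1,1,1,1,1, 0,0, 1,0,1,0,1,0,1,0]
print([hex(n) for n in read_nibls(stream)])   # → ['0xff', '0xaa']
```

The nice property is that the loop never needs to know which track it is on: however many timing bits the writer inserted, the reader falls back into byte alignment by itself.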
This is background you need before watching this video: pretty much every program used this “nibl” reader, because the code was very complicated and very timing-sensitive, so almost nobody wrote their own.
I think if you know this going in (and presumably everyone in the crowd understood it, at least on some level), then it is quite enjoyable: when the author describes the dummy code that “doesn’t do anything”, all I could think about was Mel, and a routine that doesn’t do anything but take time.
We don’t generally write these sorts of programs anymore: IDE controllers are smart, and do have a command to read a block at an offset. Microcontrollers are cheap enough that even when hardware vendors ship dumb hardware (like winmodems and wifi radios), hobbyists are kept from talking to that hardware directly; instead, pretty much everything speaks some kind of serial packet protocol. Even memory.
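For contrast, here is a minimal sketch of what “a command to read a block at an offset” looks like on classic parallel ATA, the register values as I remember them from the spec; a real driver also needs status polling and error handling, which are omitted here.

```python
# PIO READ SECTORS on the primary ATA channel: fill the task-file
# registers with the LBA and count, then write one command byte.  The
# controller, not the host CPU, handles motors, seeking, and bit timing.
def read_sectors_taskfile(lba, count):
    """Return the (port, value) writes for a 28-bit LBA READ SECTORS."""
    return [
        (0x1F2, count & 0xFF),                 # sector count
        (0x1F3, lba & 0xFF),                   # LBA bits 0-7
        (0x1F4, (lba >> 8) & 0xFF),            # LBA bits 8-15
        (0x1F5, (lba >> 16) & 0xFF),           # LBA bits 16-23
        (0x1F6, 0xE0 | ((lba >> 24) & 0x0F)),  # drive 0, LBA mode, bits 24-27
        (0x1F7, 0x20),                         # command: READ SECTORS (PIO)
    ]

port, value = read_sectors_taskfile(lba=2048, count=1)[-1]
print(hex(port), hex(value))   # → 0x1f7 0x20
```

Six register writes and the drive does the rest, which is the whole DOS routine above collapsed into firmware on the other side of the cable.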