1. 12

  2. 5

    Warning: all of the videos on this channel are extremely addictive. He manages to squeeze interesting bits out of even the most mundane and uninteresting topics, and that’s quite an achievement. I couldn’t care less about most of the topics he covers, but I gladly click every single one because there is always a delightful bit about technology being applied in creative ways.

    1. 2

      There are so many lessons to learn! The one I want to highlight is that even “exact” clocks are not perfect, and our analog components always have ranges and tolerances.

      1. 2

        I kept waiting for him to add some little caveat there, but he never did? Though I suppose with recent over-the-air networked vehicles, they could in principle all be running NTP/PTP (or perhaps more likely just using GPS time) to keep themselves synchronized to well within the threshold of human perception.

        1. 1

          Our computer clocks have such small ticks that drift wouldn’t be perceptible. Further, computers can alter their clocks, making any extrapolation of long-term effects in the general case meaningless.

          1. 2

            [Disclaimer: I am by no means a computer time-keeping expert.]

            > Our computer clocks have such small ticks that drift wouldn’t be perceptible.

            At the “real-world” scale of the duration a turn signal is typically on at a traffic light, sure, but if you just set up two cars next to each other and let them sit for days, weeks, months… it seems like it could eventually add up to something noticeable? Let’s say an offset starts to be perceptible at 100ms – exceeding that threshold after just a single day of running would only require your clock rates to differ by about 0.00012% (100ms out of the 86,400 seconds in a day), if I haven’t screwed up my arithmetic.
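
            Out of curiosity, here’s a quick sanity check of that figure as a minimal C sketch (the 100ms perceptibility threshold is, again, just a number I made up):

            #include <stdio.h>

            int main(void) {
              /* Offset assumed to become perceptible between two blinkers. */
              const double threshold_s = 0.100;
              const double day_s = 86400.0;
              const double durations_s[] = { 3600.0, day_s, 7 * day_s, 30 * day_s };
              const char *labels[] = { "1 hour", "1 day", "1 week", "30 days" };
              for (int i = 0; i < 4; i++)
                /* Relative rate difference needed to accumulate the offset. */
                printf("%-7s: %.6f%% rate difference is enough\n",
                       labels[i], 100.0 * threshold_s / durations_s[i]);
              return 0;
            }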

            [On a different-but-somewhat-relevant anecdotal tangent, a system I’m currently working on firmware for has a little microcontroller on its (2-bay) disk backplane that handles drive presence, status LEDs, etc. While the same controller manages both bays, I’ve noticed that the fault LEDs blink at slightly different rates – they’ll shift between in and out of phase with each other over the course of a minute or two (implying a frequency difference somewhere in the range of a few centihertz, I guess). I can’t imagine that was an intentional design choice, so I’d guess it’s probably just an artifact of some detail of the software running on that controller – nevertheless, it seems a “drift” of a sort crept in there even with (I’m pretty sure) a single clock.]

            Also, I’m not clear on how the scale of the clock tick makes a difference – isn’t a given percentage difference in frequency going to produce the same amount of drift over a given amount of time regardless of what that frequency is? (e.g. a 1% difference in nominally 1GHz clocks will drift just as much over the course of a day as a 1% difference in nominally 1Hz clocks.)

            > Further, computers can alter their clocks, making any extrapolation of long-term effects in the general case meaningless.

            Well, what I was sort of trying to hint at with my earlier comment was that this would, I think, depend on the exact algorithm used for controlling the blinking. If it’s something like

            loop {
              sleep(interval);
              toggle_light_state();
            }
            

            and sleep()’s semantics are analogous to Linux’s CLOCK_MONOTONIC_RAW, then clock alterations won’t enter the picture and you’ll be entirely at the mercy of whatever environmental/manufacturing variations affect the frequency of your physical clock source. If you were to instead schedule the next toggle event at a point in time determined by CLOCK_REALTIME, then (as I was sort of implying previously) you can of course bring NTP or whatever else to bear and should be able to achieve “good enough” synchronization over arbitrary timescales.
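
            To make that concrete, here’s a minimal C sketch of the “schedule the next toggle at an absolute time” approach (assuming Linux; toggle_light_state() is a hypothetical hardware hook, and since clock_nanosleep() doesn’t actually accept CLOCK_MONOTONIC_RAW, CLOCK_MONOTONIC stands in for the free-running case):

            #define _POSIX_C_SOURCE 200809L
            #include <time.h>

            #define HALF_PERIOD_NS 500000000L /* 500ms on, 500ms off; made-up rate */

            extern void toggle_light_state(void); /* hypothetical hardware hook */

            static void blink(clockid_t clk) {
              struct timespec next;
              clock_gettime(clk, &next);
              for (;;) {
                /* Pin each toggle to an absolute deadline on clk, so scheduling
                   jitter doesn't pile up on top of the clock's own drift. */
                next.tv_nsec += HALF_PERIOD_NS;
                if (next.tv_nsec >= 1000000000L) {
                  next.tv_nsec -= 1000000000L;
                  next.tv_sec += 1;
                }
                clock_nanosleep(clk, TIMER_ABSTIME, &next, NULL);
                toggle_light_state();
              }
            }

            Passing CLOCK_MONOTONIC leaves the blink rate at the mercy of the local oscillator; passing CLOCK_REALTIME rides along with whatever NTP discipline the system has, which in principle is all it would take to keep two cars in phase indefinitely.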

          2. 1

            I am not sure why he would add such a caveat. Yes, of course they could align to some microsecond and then cars of the same make would be exactly aligned, but… why would you do that? That doesn’t seem to be anyone’s requirement. Additionally, it would make the whole thing worse, because it creates a delay on the first blink: you have to wait for the first “blinking slot”. Also, the chip now needs an actual clock, whereas before a simple quartz oscillator as a timer was enough, making the whole thing orders of magnitude more complicated.

            1. 1

              > but… why would you do that?

              This seems like a question about the premise of the whole video in the first place… I don’t think anyone’s trying to derive any practical utility from this; it’s just an OCD-esque excuse to delve into the implementation details of a blinking light.

              > Also, the chip now needs an actual clock, whereas before a simple quartz oscillator as a timer was enough, making the whole thing orders of magnitude more complicated.

              I mean… with the blinking controlled by software in more recent cars as described in the video (probably as part of some fairly sophisticated embedded systems, I’d guess), it’s already gotten orders of magnitude more complicated. If your light-blinking control system is already running internet/GPS-connected Linux, it may just be a matter of which clockid you pass to clock_nanosleep().

              1. 1

                Unfortunately he doesn’t show the most modern/current version of the part, but even then I doubt that it is more than an MCU with a quartz: no Linux, no Internet, no GPS. Sure, it is controlled by a more sophisticated system via CAN bus, so in theory they could do clock sync over CAN, or even control the entirety of the blinking from a Linux system running somewhere, but replacing a realtime system with one that doesn’t even do soft realtime feels a bit wonky.

                Heck, the last time I drove one, cars didn’t have GPS to begin with, not in the blinker nor anywhere else :)