1. 31

  2. 2

    Thumbs up to the discussion, both in the mailing lists and in the comments.

    I’m glad there are lots of people pointing out the ways 32-bit processors will still exist and be in use in 2038. Lots of embedded systems are unlikely to move beyond 32 bits; they already spend most of their life idling and really only use 32 bits because it’s a convenient minimum for running modern operating systems. In the $1-per-SoC market, doubling the width of all of your pipelines, registers, and arithmetic units is unlikely to make your product more abundant. I suspect this will still be true in spirit some 10-20 years from now.

    It’s interesting to think about technological blame in this problem. This isn’t a hardware problem at all; it’s a software and traditions problem, yet for most desktop users it’s being solved in practice through changes in hardware. It’s the age-old tradition of software fixing hardware and hardware fixing software.

    1. 2

      but it’s being practically solved for most desktop users through changes in hardware

      I don’t think that changes in hardware have much to do with it. Changing the hardware was just a convenient chance to make some much-needed ABI-incompatible changes like changing the size of time_t: if you’re breaking ABI, might as well fix off_t and time_t at the same time, right? But ABIs have changed without the underlying hardware changing.

      The problem is definitely a social software problem, as you said. People are incredibly averse to breaking libc ABI, for pretty good reasons. But eventually those reasons are going to have to give way as the costs of not doing so become bigger and bigger while the costs of doing so are probably getting smaller over time. You wouldn’t think that ‘oh no I’ll have to recompile everything’ would really be such a big problem in a world where everything is free software anyway.
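      For what it’s worth, glibc made exactly this an opt-in rather than a flag day: a minimal probe, assuming glibc 2.34 or newer and a 32-bit target (on 64-bit targets time_t is already 8 bytes either way).

      ```c
      /* Compile the same file two ways for a 32-bit glibc target:
       *   gcc -m32 probe.c                                        -> time_t: 4 bytes
       *   gcc -m32 -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64 probe.c -> time_t: 8 bytes
       * glibc requires the off_t macro alongside the time_t one, which is
       * exactly the "fix off_t and time_t at the same time" bundling. */
      #include <stdio.h>
      #include <sys/types.h>
      #include <time.h>

      int main(void) {
          printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
          printf("sizeof(off_t)  = %zu bytes\n", sizeof(off_t));
          return 0;
      }
      ```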

    2. 1

      While reading, I couldn’t help but wonder whether there might be a way to “extend” the lifetime of the 32-bit range by patching the libraries to start counting from, say, 2010, and keep moving the reference point forward?

      And I know one could ask why not just update to 64-bit instead, but that simply isn’t possible on many devices.
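      The sliding-epoch idea can be sketched concretely; everything here (the 2010 reference point and the helper names) is hypothetical, not something any libc actually does:

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical scheme: store timestamps as unsigned 32-bit offsets from a
       * later reference point (2010-01-01 00:00:00 UTC = Unix time 1262304000)
       * and widen to 64 bits only when talking to the outside world. */
      #define CUSTOM_EPOCH 1262304000LL

      static int64_t to_unix(uint32_t shifted)   { return CUSTOM_EPOCH + (int64_t)shifted; }
      static uint32_t from_unix(int64_t unix_ts) { return (uint32_t)(unix_ts - CUSTOM_EPOCH); }

      int main(void) {
          int64_t t2040 = 2208988800LL;          /* 2040-01-01 00:00:00 UTC, past 2038 */
          uint32_t stored = from_unix(t2040);    /* still fits in 32 bits */
          assert(to_unix(stored) == t2040);      /* round-trips losslessly */
          printf("stored offset: %u\n", stored);
          return 0;
      }
      ```

      The catch is that every producer and consumer of the stored values has to agree on the current reference point, which is its own ABI problem.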

      1. 6

        You don’t need a 64-bit computer to represent seconds since the 1970 epoch as a 64-bit quantity. If you are able to patch the software, then patching it to use a 64-bit time_t seems like an easier and more consistent change. I think the real problem is more likely to be situations where you can’t patch the software, no?
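        To make that concrete: compilers routinely synthesize 64-bit integer arithmetic on 32-bit CPUs with multi-word operations, so a 64-bit time_t needs no 64-bit hardware. A small illustration of the rollover itself:

        ```c
        #include <assert.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            /* One second past where a signed 32-bit counter runs out
             * (2038-01-19 03:14:08 UTC). The 64-bit value is fine even on a
             * 32-bit machine; the narrowed copy wraps negative on common
             * two's-complement targets. */
            int64_t wide = (int64_t)INT32_MAX + 1;
            int32_t narrow = (int32_t)wide;
            assert(wide == 2147483648LL);
            assert(narrow == INT32_MIN);
            printf("wide: %lld, narrowed: %d\n", (long long)wide, narrow);
            return 0;
        }
        ```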

        1. 2

          Oh, I see!

          I guess that’s why it’s not a proposed “solution” then; I just assumed that for 64-bit time you needed a 64-bit architecture.

          Thanks for the clarification :D