1. 15
  1.  

  2. 4

    There are some really great historical details here. I particularly enjoyed:

    The IBM 1401 does not use bytes. Instead, it uses 6-bit BCD storage.

    It would be super interesting to understand why they built it that way. BCD seems like an interestingly inefficient way to do arithmetic in hardware. Had the “standard” binary encoding approach not been invented yet? Was there another good reason?
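
    As a rough illustration of the storage trade-off (not of the 1401’s actual 6-bit character format, which also carries zone and word-mark bits), here is a small Python sketch contrasting the two encodings; the `to_bcd` helper is invented just for this example:

    ```python
    # Encode the decimal number 1401 two ways: one 4-bit group per decimal
    # digit (BCD) versus plain binary. Illustrative only -- this is not the
    # 1401's actual storage format.
    def to_bcd(n: int) -> str:
        """Encode each decimal digit as its own 4-bit group."""
        return " ".join(format(int(d), "04b") for d in str(n))

    n = 1401
    print(to_bcd(n))        # 0001 0100 0000 0001 -> 16 bits, one group per digit
    print(format(n, "b"))   # 10101111001         -> 11 bits in plain binary
    ```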

    Even the comparison instruction cost extra.

    That whole paragraph is very interesting in comparison to the normal way of doing things today. It’s not unusual for modern hardware products to have features physically supported but soft-disabled. Sometime between the late 60s and now (probably driven by VLSI), the economics of hardware changed dramatically.

    The 1401 I used is the Sterling model, which supports arithmetic on pounds/shillings/pence, which is a surprising thing to see implemented in hardware.

    I consciously knew that the pounds/shillings/pence age ended in the early 70s, but my unconscious mental model of history doesn’t have it overlapping with the computer age.
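
    For anyone who never used pre-decimal currency: there are 12 pence to a shilling and 20 shillings to a pound, so every carry happens at a different radix. A minimal sketch of what that arithmetic looks like in software (purely illustrative; it says nothing about how the Sterling hardware actually represented these values):

    ```python
    # Pounds/shillings/pence addition: carries occur at radix 12
    # (pence -> shillings) and radix 20 (shillings -> pounds).
    def add_lsd(a, b):
        """Add two (pounds, shillings, pence) tuples."""
        pounds, shillings, pence = a[0] + b[0], a[1] + b[1], a[2] + b[2]
        shillings += pence // 12
        pence %= 12
        pounds += shillings // 20
        shillings %= 20
        return (pounds, shillings, pence)

    # £3 15s 9d + £1 7s 6d = £5 3s 3d
    print(add_lsd((3, 15, 9), (1, 7, 6)))   # (5, 3, 3)
    ```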

    1. 5

      Was there another good reason?

      Since it was designed for business, they probably wanted to avoid the errors that come with representing base-10 fractions in binary. Billing systems, for example, often use BCD arithmetic so they can do exact arithmetic on large columns of numbers with fractions.
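
      A quick way to see the problem being avoided, using Python’s decimal module as a software stand-in for decimal/BCD arithmetic:

      ```python
      # 0.1 has no exact binary representation, so binary floating point
      # drifts when summing many fractional amounts; decimal arithmetic
      # stays exact.
      from decimal import Decimal

      float_total = sum(0.10 for _ in range(1000))
      decimal_total = sum(Decimal("0.10") for _ in range(1000))

      print(float_total)     # e.g. 99.9999999999986 -- accumulated rounding error
      print(decimal_total)   # 100.00 -- exact
      ```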

      1. 4

        I don’t know why BCD was chosen, but the fact, noted later in the story, that the raw machine code is human-readable is a really interesting side effect, especially when working at such a low level. Wonder if that was a factor?