1. 12

  2. 8

    They did all this before they learned Worse is Better. Now that we know it wins, we have to sneak The Right Thing into what otherwise looks like Worse is Better. Alternatively, do Worse is Better in a way where good interface design lets us constantly improve on the worse parts inside if the project/product gets adoption. Likewise, I say put new things into products people would find useful even without those things: most of the product builds on proven principles, with the new thing as an extra differentiator that might or might not pan out. If it’s a language or environment, they can discover it when trying to modify the product.

    One thing that should be considered for this list is the Burroughs Architecture. It made low-level operations high-level, safe, and maintainable, with the OS written in ALGOL. Although it was commercialized, the hardware enforcement got taken out, if I’m remembering correctly; the market only cared about price/performance for a long time. Only a few projects applied those concepts later on. A recent one was the SAFE Architecture, which started out along the same lines in its original proposal but changed to do something more flexible. Dover Microsystems finally released it commercially as CoreGuard in late 2017. Quite a long delay for anyone to deploy a Burroughs-inspired solution, despite the fact that it was solving many of today’s problems in 1961.
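
    (For anyone who hasn’t run into the idea, here’s a loose software analogy of what hardware-enforced checking buys you. It’s a sketch, not the actual B5000 descriptor format: every array access goes through a descriptor that carries its own bound, and the program has no way to skip the check.)

    ```python
    # Loose analogy of a Burroughs-style descriptor; names and layout
    # are invented for illustration, not the real B5000 word format.
    class Descriptor:
        def __init__(self, storage, length):
            self._storage = storage  # the backing memory
            self._length = length    # the bound travels with the reference

        def load(self, index):
            # On the real hardware this check runs on every access and
            # cannot be bypassed; that is the "enforcement" part.
            if not 0 <= index < self._length:
                raise MemoryError(f"index {index} out of bounds [0, {self._length})")
            return self._storage[index]

    buf = Descriptor([0] * 16, 16)
    buf.load(3)  # fine
    try:
        buf.load(40)  # trapped immediately: no silent buffer overrun
    except MemoryError as e:
        print(e)
    ```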

    1. 6

      One of my favorite courses in college was an OS course where we had to build an OS inside of a VM. The VM was a simplified Burroughs Large System architecture.

      I enjoyed that course a lot and learned so much. It was a refreshing change from x86 and MIPS assembly.

      1. 2

        That’s really neat. I wouldn’t have expected people building on Burroughs VMs unless Unisys had a deal with the college to make them some talent. ;)

        Did the VM have the pointer, bounds and argument checks like the B5000? And did the experience teach you anything that impacted later work?

        1. 3

          The VM was written by our professor - a quirky guy, but I learned a huge amount from him. My understanding is it was a simplified version of the B5000, but it did have bounds checking.

          As to what I learned - I’m not sure I got any insight about computer architecture because it was a Burroughs ISA. I think a lot of what I learned was more around the trade-offs you make in process scheduling and building rudimentary filesystems.

          One big aspect of this project was that he gave us an incomplete compiler for a Pascal-like language. You had to extend it to support things like arrays and loops. The compile target was the Burroughs VM. I recall thinking that the ISA was quite clean to generate code for (there’s a rough sketch of why at the end of this comment).

          I’m sure if I’d had to reimplement the same project on x86, I’d have seen a lot of the advantages of the B5000.

          A lot of what I recall specific to the Burroughs ISA was that it was very easy to understand. I was a CS major, so I only had 2 or 3 courses that dealt with hardware directly. For me, x86 was very frustrating to work with.
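
          (To make “clean to generate for” concrete, here’s a minimal codegen sketch for a stack machine. The opcodes are invented, not actual B5000 mnemonics; the point is that one post-order walk is the whole algorithm, with none of the register allocation an x86 target needs.)

          ```python
          # Toy expression codegen for a stack ISA (opcodes are made up).
          def gen(node):
              if isinstance(node, int):   # literal: just push it
                  return [f"PUSH {node}"]
              op, lhs, rhs = node         # e.g. ("+", left, right)
              code = gen(lhs) + gen(rhs)  # operands end up on the stack
              code.append({"+": "ADD", "*": "MUL"}[op])
              return code

          # (2 + 3) * 4
          print(gen(("*", ("+", 2, 3), 4)))
          # -> ['PUSH 2', 'PUSH 3', 'ADD', 'PUSH 4', 'MUL']
          ```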

    2. 7

      1900: people going around on horses, public lighting using gas.

      1960: cars, jet and nuclear powered airplanes, satellites, semiconductors, computers with LISP and COBOL compilers, antibiotics, fiber optics, nuclear fusion experiments (tokamak)

      2020: another 60 years on, and what do we really have to show?

      1. 11

        Compared to “commonplace” things like cars and antibiotics? Internet, GPS, maglevs, a vast array of surgical techniques, the absence of smallpox…

        Compared to “works but government and academia only” things like satellites and compilers? Hololens, quantum computers, drones, railguns, graphene, carbon nanotubes, metamaterials…

        Compared to “wildly experimental and probably won’t ever happen” things like tokamaks and nuclear airplanes? Probably a lot of classified shit. Antimatter experiments at the LHC. Arguably a lot of work with AI.

        1. 4

          Maglevs were invented in the 1950s and first operated in the 1970s. I also don’t have anything made from graphene, or know anyone who knows anyone owning a graphene artefact.

          More importantly, none of that is imagination shattering from 1960s point of view. We do not have things mid-century people couldn’t come up with.

          1. 1

            More importantly, none of that is imagination shattering from 1960s point of view. We do not have things mid-century people couldn’t come up with.

            Antibiotics, heavier-than-air flight, cars, and computers (if you count Jacquard Looms) were all demonstrated before the 1900s. They weren’t imagination shattering from an 1890s point of view.

            Even the internet isn’t imagination shattering from an 1890s point of view.

            1. 3

              Antibiotics, heavier-than-air flight, and a programmable computer were not demonstrated before the 1900s.

              1. 3
                • We first observed that bacteria didn’t grow in the presence of mold in the 1870s.
                • The first manned, powered heavier-than-air flight was in 1890.
                • The Jacquard Loom had programmable loom patterns in 1804, and the first programmable reading of data was the US 1890 Census.

                Do any of these look close to what our modern conceptions of these things are? Not really. But it shows that the evolution from first demonstrations of ideas to widespread use of polished versions takes time.

                1. 3

                  There’s a huge difference between observation of mold and a concept of antibiotics, no matter how trivial that sounds with hindsight.

                  The “uncontrolled hop” does not qualify as a flight, except in the most trivial sense.

                  The loom is not a computer, but I’d love to see a fizzbuzz with Jacquard patterns to prove me wrong.

                  1. 2

                    It still means that all of the “imagination shattering” stuff in the 1960s had precedents more than half a century old. We do not have things mid-century people could not have come up with. They did not have things 1800s people could not have come up with, so we shouldn’t be thinking that our era is particularly barren.

        2. 4

          I think it is reasonable to say that the reworking of daily life has slowed.

          The stove, the refrigerator and the car changed the routine of life tremendously.

          The computer might be more impressive by any number of measures but it didn’t rework daily life so much as add another layer on top of ordinary life. We still must cook meals and drive around.

          The linear extension of the car and the stove would be the auto-chef and the flying/auto-driving car.

          Both things are still further off than is sometimes claimed by the press, but they seem a bit closer than in 2012. However, the automation offered by externally available power, which began in the 1800s, definitely has reached a point of diminishing returns.

          We may experience further progress through computers, AI and such. But this seems to be hampered by a “complexity barrier” - automation of daily life equivalent to what various technologies offered earlier through power now requires systems that are much more computationally complex. Folding towels really does turn out to be the hard part of washing, etc., and even with vast advances in computational ability, we may still be at diminishing returns.

          1. 2

            There have been significant advances since then (for instance, in medical treatments like cancer therapies and surgery - life expectancy in the US has risen from 70 to 79 since 1960), but nothing revolutionary that would seem remotely as magical as the developments across the first half of the century.

            1. 3

              Magical is relative. All the psychiatric meds I take were invented after 1970. They’re pretty magic!

            2. 2

              The advancement of mankind for many seems to be focused on rocket ships, self-driving cars, and mechanisms to know more about you in order to influence your actions and spending.

              Perhaps it’s just infatuation with celebrity, consumerism, and the startup. The age of discovery driven by solving “big and meaningful” problems seems over. At least we will get self-driving things, cheaper rockets, better algorithms to tell you what to buy, and new ways to share selfies with others.

              1. 0

                I agree with the mood, but rocket ships are not in the same group as self-driving cars.

                Exploring the universe might be the best use these apes can make of the expensive brains they carry around on their shoulders.

              2. 1

                I feel your pain, but this is the efficiency of the free market! ;-)

                Whether or not a hyperlink is broken on the web still relies entirely upon the maintenance of the page pointed to, despite all hypertext projects prior to the 1992 Berners-Lee project having solved this problem.

                Great, you made me feel young! :-)

                What are you talking about?

                I cannot imagine how they could fix the arcs of a graph they do not control entirely after modifying a bunch of nodes they own…

                Can you share more details?

                  1. 2

                    As the resident hypertext crank, pretty much every time I say “hypertext” I’m referring to Project Xanadu. However, Xanadu was only slightly ahead of the twenty or thirty commercial hypertext systems available in the 1980s in solving this problem.

                    TBL’s pre-web hypertext system, Enquire, also didn’t have the breaking-hyperlink problem.

                    Other than “make addresses permanent”, I don’t think Xanadu’s solutions to this problem are the best ones, personally. I prefer distributed systems over centralized services, and prior to the early 90s, Xanadu addressing schemes were intended for use with centralized services; lately, Xanadu addressing schemes are actually just web addressing schemes, leaning on The Internet Archive and other systems to outsource promises of permanent URLs. I prefer named-data, and the hypertext systems I’ve worked on since leaving Xanadu have used IPFS.

                    “Make addresses permanent” is also a demand made but not enforced by web standards. Nobody follows it, so facilities based on the assumption of permanent addresses are broken or simply left unimplemented.

                  2. 1

                    Modifying hypertext documents is a no-no (even, theoretically, on the web: early web standards consider changing the content pointed to by a URI to be rude & facilities exist that assume that such changes only occur in the context of renaming or cataclysm; widespread use of CGI changed this).

                    The appropriate way to do hypertext with real distribution is to replace host-centric addresses (which can only ever be temporary because keeping up a domain name has nonzero cost) with named-data networking (in other words, permanent addresses agnostic about their location) & rely upon the natural redundancy of popular content to invert the current costs. (In other words, the bittorrent+DHT model).

                    A modified version of a document is a distinct document & therefore has a distinct address. (The sketch at the end of this comment makes that concrete.)

                    This kind of model would not have been totally unheard of when TBL was designing the web, but it would have been a lot more fringe than it is now. Pre-web hypertext, however, typically had either a centralized database or a federation of semi-centralized databases to ensure links didn’t break. (XU88 had a centralized service but depended on permanent addresses, part of which were associated with user accounts, and explicit version numbering: no documents were modified in place, and diffs were tracked so that links to a certain sequence in a previous version would link to whatever was preserved, no matter how munged, in later versions.)
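
                    (A toy sketch of the named-data idea, assuming a bare SHA-256 hash as the permanent address - real systems like IPFS use richer content identifiers, but the principle is the same: the address is derived from the bytes, so it can never dangle, and an edited document automatically gets a new address.)

                    ```python
                    # Toy content-addressed store: the address is a hash of the bytes.
                    # Illustrative only; IPFS and named-data networking add far more.
                    import hashlib

                    store = {}  # stand-in for the swarm of peers holding popular content

                    def publish(content: bytes) -> str:
                        address = hashlib.sha256(content).hexdigest()
                        store[address] = content
                        return address  # same bytes, same address, on any host

                    def fetch(address: str) -> bytes:
                        return store[address]  # any peer holding the bytes can answer

                    v1 = publish(b"Hypertext should not break.")
                    v2 = publish(b"Hypertext should not break. (revised)")
                    assert v1 != v2  # a modified document is a distinct document
                    assert fetch(v1) == b"Hypertext should not break."  # old links resolve
                    ```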