
  2. 4

    I decided to make an updated version of that “high-end workstation” chart: https://i.imgur.com/WmIxbmk.png

    Yeah..

    1. 6

      Which ignores all the differences involved in power management/ACPI, USB, dbus, roaming networks, GPU drivers and buses and compute, flash filesystems, non-symmetric processing and scheduling (big.LITTLE processors), scaling I/O and parallelism (the c10k problem, which is now fairly routinely solved, has apparently been replaced with the c10m problem; see the epoll sketch at the end of this comment), VMs, VM live migration between systems which was heckin’ science fiction in 2000, correctness proofs for kernels, and so on.

      A basic metalworking lathe hasn’t changed much since 1990 either, apart from slightly sharper tools and slightly better tolerances. Just ignore the existence of cheap CNC machines and you can pretend progress hasn’t happened at all.
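
      A hedged illustration of the c10k point above, not anything from the talk or this thread: a single-threaded echo server using Linux epoll, the readiness-notification style of I/O that made tens of thousands of concurrent connections routine. The port number and buffer sizes are arbitrary placeholders.

      ```c
      /* Hedged sketch, not production code: one thread multiplexes every
       * client over a single epoll instance.  Port 8080 and the 1024-event
       * batch size are arbitrary choices for illustration. */
      #define _GNU_SOURCE
      #include <arpa/inet.h>
      #include <errno.h>
      #include <netinet/in.h>
      #include <stdio.h>
      #include <sys/epoll.h>
      #include <sys/socket.h>
      #include <unistd.h>

      int main(void) {
          int listener = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
          struct sockaddr_in addr = { .sin_family = AF_INET,
                                      .sin_port = htons(8080),
                                      .sin_addr.s_addr = htonl(INADDR_ANY) };
          int one = 1;
          setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
          if (bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0 ||
              listen(listener, SOMAXCONN) < 0) {
              perror("bind/listen");
              return 1;
          }

          int ep = epoll_create1(0);
          struct epoll_event ev = { .events = EPOLLIN, .data.fd = listener };
          epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

          struct epoll_event ready[1024];
          for (;;) {
              /* One blocking call waits on every connection at once: no
               * thread-per-client, no process-per-client. */
              int n = epoll_wait(ep, ready, 1024, -1);
              for (int i = 0; i < n; i++) {
                  int fd = ready[i].data.fd;
                  if (fd == listener) {
                      int client;
                      while ((client = accept4(listener, NULL, NULL, SOCK_NONBLOCK)) >= 0) {
                          struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                          epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
                      }
                  } else {
                      char buf[4096];
                      ssize_t r = read(fd, buf, sizeof buf);
                      if (r > 0)
                          write(fd, buf, r);   /* echo; a real server would buffer partial writes */
                      else if (r == 0 || errno != EAGAIN)
                          close(fd);           /* closing also drops the fd from the epoll set */
                  }
              }
          }
      }
      ```

      The whole trick is that epoll_wait scales with the number of ready descriptors rather than the number of registered ones, which is exactly where select/poll-era servers hit the wall at c10k scale.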

      1. 4

        There was a bit of a lull in the late ’90s and early 2000s, when this was written. The companies that had really invested in systems research, like Sun and AT&T, weren’t doing well, and many of the smartest people were at dot-com startups that didn’t have time to do anything fundamentally revolutionary. Then came the bust, and investment stayed much lower for about five years.

        But yes, since then, things have changed pretty radically. The new round of tech giants got big enough to have money to “waste” on long-term research. Rob Pike himself helped with this at Google.

        1. 3

          The shift in academia took longer, but I think it’s happened now. The number of new operating systems that I’ve read about has increased. There are cycles in computing between specialised and commodity hardware and software, and around 2000 was close to one of the peaks for commodity stuff. Generally, new markets get specialised things first; then commodity things enter and are a lot cheaper and a bit slower, then about the same speed, then faster. Then the market grows large enough that it’s worth building something specialised for that particular market, and so you get a proliferation of new specialised things. Then one or two of those things become successful enough to be the commodity thing in that space.

          Around 2000, two of these lined up: x86 CPUs were fast enough that they were outperforming both workstation CPUs and a lot of accelerators in terms of performance per dollar. At the same time, commodity implementations of late-’70s operating system designs (BSD/Linux, Windows NT) reached the point of being good enough that they displaced almost all of the other experimental things (BeOS died soon after) in the combined desktop, mobile, and server markets.

          The growth of mobile devices and cloud computing meant that tweaked desktop operating systems were no longer the best fit, and so we started to see things like Fuchsia on the client and various unikernels (with hypervisors evolving to look even more exokernel-like than they used to) on the server. At the same time, GPUs started to displace CPUs as the fastest raw compute you could buy; they are now standard parts in mobile and desktop systems and available for numeric-compute-heavy cloud workloads. As core counts went up and core kinds became more heterogeneous, we’ve seen ’70s OS abstractions struggle, which has led to a lot of interesting OS research.
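
          A small sketch of that last point about heterogeneous cores, with the caveat that the core numbering is an assumption (which IDs map to the “big” cluster varies by SoC): once cores stop being interchangeable, placement decisions leak out of the kernel’s uniform-CPU abstraction and into applications, which end up pinning work themselves.

          ```c
          /* Hedged sketch: keep a latency-sensitive process on the fast cores
           * of a big.LITTLE-style part.  Treating cores 4-7 as the "big"
           * cluster is an assumption; real code would read the topology from
           * sysfs first. */
          #define _GNU_SOURCE
          #include <sched.h>
          #include <stdio.h>
          #include <unistd.h>

          int main(void) {
              cpu_set_t big_cores;
              CPU_ZERO(&big_cores);
              for (int cpu = 4; cpu <= 7; cpu++)
                  CPU_SET(cpu, &big_cores);

              /* Restrict this process (pid 0 = ourselves) to the chosen cores. */
              if (sched_setaffinity(0, sizeof big_cores, &big_cores) != 0) {
                  perror("sched_setaffinity");
                  return 1;
              }

              printf("now running on CPU %d of %ld online CPUs\n",
                     sched_getcpu(), sysconf(_SC_NPROCESSORS_ONLN));
              return 0;
          }
          ```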

          1. 1

            Very true, and a good point.
