1. 42

  2. 13

    I suspect these “nothing’s happened since 19XX” articles are just strawman arguments. Humans build on old technology. Technology isn’t made in a vacuum. To some degree, “nothing new has happened here” can be said about anything. For example, humans have basically discovered every place on the surface of Earth; does that mean archaeologists and explorers are useless?

    1. 28

      Thermionic tubes are made in a vacuum.

      1. 2

        Thank you for brightening my day :)

    2. 10

      I don’t know about 1978, but certainly since the last bust the tiresome wags who call SV ‘surveillance valley’ have been proven correct. It’s produced almost nothing of note that hasn’t involved harvesting and monetising information about me and you and other people around the world. There is the odd biotech fig leaf that VCs fund for PR purposes, which has produced almost zero science (but plenty of PR, cf. Theranos). It’s too expensive a place for new ideas to make it into adulthood - the exact opposite of the reason it thrived when it was about silicon and specifically wasn’t the east coast. How many great bands have come out of central Manhattan or London in the last 20 years? How many brilliant artists and sculptors have studios in the 14th Arrondissement of Paris? Exactly.

      SV is now largely JavaScript’s answer to GameStop. You’ve got to look elsewhere for anything interesting. This is all natural, though.

      1. 4

        I agree with you entirely! The first time I ever went to the Bay Area I was excited as hell because I assumed there would be all kinds of neat things going on. I went to some tech meetups, they were mostly networking events. I tried some breweries, they were uninspired and hyper-commercial. I’m sure there are cool things going on, but no more than any other major population center.

        This is all natural, though.

        This is important. No place can have tremendous success and remain a hotbed of counter-culture or innovation forever. That’s fine. In my view, the trouble is that we cut off many of the pathways the original innovators used to do what they did. In particular (and among many other things), we made college crazy expensive and forced research programs to be tied to commercial outcomes.

      2. 6

        Hololens

        1. 3

          I’ll believe it when I see it. Besides, wasn’t MIT doing AR stuff back in the late 80s already?

          1. 1

            I tried a hololens system before I wrote this essay. I wasn’t very impressed. The resolution appeared to be about the same as other head mounted display systems available on the consumer market at the time.

            I understand that there are a lot of fiddly technical problems that need to be solved in order to get depth of field working properly for virtual objects and so on, and I wish my other head mounted displays were translucent, but it’s easy to make the case that hololens is a very fancy version of the ceiling-mounted computer-controlled display armatures that Ivan Sutherland was working with in the 60s.

            I haven’t seen any indication that the hololens tech is introducing interesting new interface metaphors (the way that, admittedly, Jaron Lanier did for his VR rigs in the 80s). And, many of the technical challenges hololens has been working with were known and addressed by Steve Mann in the 80s and 90s.

            Certainly, post-1970s advances in the manufacture of integrated circuits gave us access to technologies like micromirror arrays, which I imagine would be useful for translucent displays based around projection.

            But, there’s no major conceptual jump. Hololens developers are not thinking brand new thoughts that they weren’t capable of thinking before putting on the glasses – while early CS and early computer hardware design were full of these genuinely groundbreaking ideas. By the end of the 60s, mice, pen computing, trackballs, interactive command line interfaces, joysticks, touch interfaces, gesture-based control, handwriting recognition, and selection of icons were all established (with promising prototypes that sometimes worked). A couple of years later, so were piping and other forms of inter-process communication, regular expressions, networking, fat and thin terminals, the client-server model…

            1. 3

              I’m slightly biased, because a lot of the Hololens work was done by folks just down the corridor from me, but I also worked with a startup building an AR platform around a decade ago and Hololens is a massive jump from what they were doing (I’ve only played with Hololens 2, so I don’t know how much better it is than v1). A lot of the novel interface work is still ongoing, but some very neat things have been done to enable it.

              One of the biggest problems with the older stuff I tried was that you needed to hold your finger vertically to point at things, because the camera obviously couldn’t see your finger in a natural pointing direction when it was occluded by your fist. The Hololens builds a machine-learning model of the shape of your hand, so it can accurately track your finger position from how your knuckles move, even when it can’t see your finger. It’s the first AR display system that I’ve found even vaguely intuitive. I think it needs a few more generations of refinement to get the weight (and price!) down, but I’m pretty enthusiastic longer term.
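
              To make the occlusion point concrete, here’s a toy sketch of the idea of regressing a hidden fingertip from the knuckle landmarks the camera can still see. It is in no way the actual Hololens pipeline (which I only know secondhand); the data, the four-knuckle layout, and the linear model are all invented for illustration.

              ```python
              # Toy sketch (hypothetical, not the HoloLens pipeline): predict an occluded
              # fingertip position from visible knuckle positions with a learned linear map.
              import numpy as np

              rng = np.random.default_rng(0)

              # Fake training data: rows are flattened (x, y, z) coords of 4 knuckle joints,
              # targets are the fingertip position a full hand model would produce.
              knuckles = rng.normal(size=(500, 12))          # 4 joints * 3 coords
              true_map = rng.normal(size=(12, 3))
              fingertips = knuckles @ true_map + 0.01 * rng.normal(size=(500, 3))

              # Fit the regressor (a stand-in for the hand-shape model described above).
              W, *_ = np.linalg.lstsq(knuckles, fingertips, rcond=None)

              # At runtime the camera sees only the knuckles; the model fills in the fingertip.
              visible_knuckles = rng.normal(size=(1, 12))
              predicted_fingertip = visible_knuckles @ W
              print(predicted_fingertip)
              ```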

              This is still incremental improvement, but so is everything since the first stored-program computer.

              1. 1

                This is still incremental improvement, but so is everything since the first stored-program computer.

                This is exactly what I reject.

                I’ve given a pretty specific idea of what non-incremental means: it’s an expansion of the adjacent possible. I don’t think it’s controversial to say that there are things created after ENIAC that expanded what people could imagine (even as science fiction) – after all, even in science fiction, up until the early 70s all the computers were big hulking centralized things.

                There’s a particular time when we started seeing non-technical people imagine what we’d now call AR – some time between the publication of PKD’s A Scanner Darkly in the 70s and Gibson’s Neuromancer in 1984. VR appeared earlier (maybe in Simulacron-3 and maybe as much as ten years earlier). While home automation was a mainstay of 30s science fiction magazines, what we’d call ubiquitous computing (wherein the devices in your home have some kind of rudimentary autonomy and communicate with each other as individual entities) shows up – alongside a now-familiar freemium model of monetization – in PKD’s Ubik, the very story that PARC researchers took the inspiration for the name “ubiquitous computing” from.

                Something recognizable as social media showed up in E. M. Forster’s The Machine Stops in 1909, and most of the familiar criticisms of social media (aside from those revolving around monetized surveillance) showed up there; all our current concerns about mass surveillance were present in 20s radio serial episodes about “television”, back before the reality of centralized TV networks and big broadcast towers became part of the collective understanding of what television meant. As soon as landline telephones coexisted in the same homes as radio receivers, we started seeing newspaper comics accurately predicting the design of cellular phones and the social ramifications of their use.

                Outside of computing – well, we don’t have human cloning, but folks knew animal cloning was theoretically possible for years before it was done to zebra fish in 1957, and we had a good run of science fiction exploring all the possible ramifications of that during the 60s and 70s until finally we emptied the well (after which it was basically all reruns with bigger budgets: Overdrawn at the Memory Bank and The Boys from Brazil became templates for everything clone-related afterward). There was a time before the concept of a human clone, and science fiction that was clone-adjacent before that was very strange – Alraune, the German silent blockbuster horror film about what we’d now call a test-tube baby, used alchemical metaphors.

                There was a time before the concept of robots, too, and it was sometime after R.U.R. (because the robots in R.U.R., like the replicants in Blade Runner, are biological). Modern readers of Brave New World may be confused by Huxley’s description of the process of manufacturing the different classes of humans – but it’s because there was no concept of genetic engineering (although as soon as we knew about DNA encoding – the result of a trendy adoption of cybernetic ideas among the interdepartmental jet-set in the late 50s and early 60s – we started seeing the idea floating around, first in academic discussions and then, quite quickly, in science fiction).

                Similarly, as soon as Alan Kay saw PLATO’s plasma display, he imagined Dynabook. Whether or not Dynabook was ever created is beside the point. The point is that before he saw PLATO’s plasma display, he could not imagine Dynabook.

                When’s the last time you saw a piece of technology that made you imagine possibilities you could never have imagined before?

                It seems like between 1950 and 1980, it happened regularly.

                1. 1

                  Another example from outside of computing, since I came across it the other day:

                  J. D. Bernal’s 1929 book The World, the Flesh, and the Devil describes the concepts and future challenges behind spaceflight, solar sails, the creation of human environments from hollowed-out asteroids, and the production of cyborg bodies for humans – basically, the fundamentals of what we might call modern transhumanist thought (sans mind uploading, which he was unable to predict since he was writing before the first stored-program computer, and genetic engineering, which he was unable to predict because he was writing before the discovery of the structure of DNA). Bernal was a pioneer of x-ray crystallography & brought this to bear on the materials-science side of the work. If I hadn’t read the date of publication, I would have thought that this was a work from the 60s, since that’s when most of these ideas really hit the mainstream.

                  Just as the 1929 publication date puts Bernal in a position to imagine space rockets and solar sails (and solar power, since the photoelectric effect was already known) but doesn’t give him access to the idea of mind uploading or genetic engineering (which he brushes up against with regard to a discussion of embryonic surgery – the same tech as used in The Island of Doctor Moreau), there was a time when people imagined spaceflight but could not imagine rockets doing it – Jules Verne wanted to use a cannon, while two centuries earlier Wilkins and Hooke (and, drawing from them, de Bergerac) wanted to use sails.

                  Once the rocket formula was developed, we almost immediately see people changing their imagined form of space flight to rocketry – suddenly we could very clearly see that rockets could achieve escape velocity, given a certain amount of tinkering, and during the first half of the 20th century people like Wernher von Braun and Jack Parsons invented the technologies necessary to fulfill the prophecy that Tsiolkovsky’s rocket equation made.
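
                  To make the “suddenly we could very clearly see” part concrete, here’s the back-of-envelope arithmetic the rocket equation enables. The exhaust velocity below is an assumed round number in the ballpark of early liquid-fuel engines, not a historical figure.

                  ```python
                  # Tsiolkovsky: delta_v = v_exhaust * ln(m0 / m1), so the mass ratio a rocket
                  # needs for a given delta_v falls straight out of the formula.
                  import math

                  delta_v = 11_200.0   # m/s, Earth escape velocity (ignoring drag and gravity losses)
                  v_exhaust = 3_000.0  # m/s, assumed effective exhaust velocity

                  mass_ratio = math.exp(delta_v / v_exhaust)       # required wet mass / dry mass
                  print(f"required mass ratio: {mass_ratio:.0f}")  # ~42: hence staging, lighter tanks, etc.
                  ```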

                  The situation with rocketry is similar to the one with steam engines, but much more intense. The ancient Greeks had steam engines of a sort; they were toys, because if you tried to get enough force out of them, they’d explode. It took the invention of calculus literally thousands of years later for somebody to quantify the pressure a boiler needed to withstand in order to get a certain amount of torque, and some advances in the synthesis of steel to get steam engines capable of moving trains. Once we had calculus, it took only decades for the first useful steam engine to show up, and only a century after that for steam locomotives to become commonplace.
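
                  As a sketch of the kind of quantification I mean (the numbers are purely illustrative, not any real engine’s specs, and the geometry is grossly simplified):

                  ```python
                  # How much boiler pressure does a single-cylinder engine need to deliver a
                  # target torque at the crank? Peak-force estimate only; ignores valve timing,
                  # back-pressure, losses, and every other real-world complication.
                  import math

                  target_torque = 2_000.0    # N*m at the crank (illustrative)
                  crank_radius = 0.3         # m
                  piston_diameter = 0.2      # m

                  piston_area = math.pi * (piston_diameter / 2) ** 2   # m^2
                  required_force = target_torque / crank_radius        # N on the piston
                  required_pressure = required_force / piston_area     # Pa above atmospheric

                  print(f"boiler must supply roughly {required_pressure / 1e5:.1f} bar")  # ~2 bar here
                  ```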

                  Basically, while some technical advances are incremental because they allow you to advance toward the goal you have in mind, others create a general model that allows you to predict what sorts of things will be possible in the future. You need to have both in order to progress: the latter type, which forms the outlines of the world-picture, and the former, incremental type wherein you color in the shapes that have already been drawn.

                  We have structured our society so that only coloring within the lines is rewarded, and software has a worse case of this than most other fields of similar age. What we’re doing with computing is the equivalent of saying that rockets are only good for fireworks.

            2. 1

              Redmond

            3. 6

              A few weeks ago, a different rant claimed that no progress had been made since 1996 – that a clear “wall” was hit that year. It points out many innovations that happened after 1978, although it similarly ignores and minimizes anything after its own choice of date.

              I don’t think any of these arbitrary years are accurate; they’re the software equivalent of “I liked the band before it was cool.” It’s hard to imagine someone being given a working iPhone or driving a Tesla in 1978, being told that this was modern 1978 technology, and not being astounded.

              If the argument is that everything after [date] is merely expanding upon existing theory and thus doesn’t count, everything after the lambda calculus was developed is baloney and all innovation stopped in 1936.

              1. 4

                People were in fact driving electric cars in 1978 – the Whole Earth Catalog used to sell kits to convert your VW bug to electric. The working iPhone example is sort of egregious because, famously, Alan Kay drew his original Dynabook sketch immediately after seeing a demo of rudimentary plasma displays that the PLATO people were developing.

                Things in our environment do not look like they were developed in the 70s, but they look and operate more or less exactly how a relatively uncreative but extremely plugged-in fan of computer technology would expect them to work in 2021 – in other words, merely projecting that incremental progress would happen on already-existing tech.

                The groundbreaking tech here isn’t the iPhone. The groundbreaking tech is the thing that opened up new intellectual vistas for Kay – the flat plasma display (which at the time was capable of showing two letters, in chunky monochrome, but by the end of the 70s powered a portable touchscreen terminal with multi-language support). Anybody who knew that plasma display technology was being developed in the 70s could reasonably expect flat pocket-sized touch-screen computers communicating with each other wirelessly in a few decades, and indeed they were on the market in the early 90s.

                There is nothing astounding about incremental progress. The same tendencies that produce incremental progress will, if allowed to, also produce groundbreaking new tech. The thing is, for that to happen, you need to pursue the possible ramifications of unexpected elements – to see violations of your mental model as opportunities to develop brand new things instead of bugs in whatever you thought you were making. And you can’t do that if you’re in the middle of a sprint and contracted to deliver a working product in six weeks.

                1. 4

                  The groundbreaking tech here isn’t the iPhone. The groundbreaking tech is the thing that opened up new intellectual vistas for Kay – the flat plasma display (which at the time was capable of showing two letters, in chunky monochrome, but by the end of the 70s powered a portable touchscreen terminal with multi-language support). Anybody who knew that plasma display technology was being developed in the 70s could reasonably expect flat pocket-sized touch-screen computers communicating with each other wirelessly in a few decades, and indeed they were on the market in the early 90s.

                  I don’t agree. There was a huge jump between resistive and capacitive touch screens. There were a lot of touchscreen phones a decade or so before the iPhone, but they all required a stylus to operate and so were limited to the kind of interactions that you do with a mouse. The phones that appeared at the same time as the iPhone (the iPhone wasn’t quite the first, just the most successful) allowed you to interact with your fingers and could track multiple touches, giving far better interaction models than were previously possible.
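
                  A toy illustration of why tracking two touches at once changes the interaction model (just the geometry, nothing specific to any particular phone’s API): a pinch gives you a zoom factor and a rotation in a single gesture, which a lone stylus point can’t express.

                  ```python
                  # Derive scale and rotation from two touch points tracked between two frames.
                  import math

                  def pinch(p1_start, p2_start, p1_now, p2_now):
                      """Return (scale, rotation in radians) for a two-finger gesture."""
                      dx0, dy0 = p2_start[0] - p1_start[0], p2_start[1] - p1_start[1]
                      dx1, dy1 = p2_now[0] - p1_now[0], p2_now[1] - p1_now[1]
                      scale = math.hypot(dx1, dy1) / math.hypot(dx0, dy0)
                      rotation = math.atan2(dy1, dx1) - math.atan2(dy0, dx0)
                      return scale, rotation

                  # Fingers move apart and twist slightly: zoom in ~1.6x, rotate ~18 degrees.
                  print(pinch((0, 0), (100, 0), (0, 0), (150, 50)))
                  ```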

                  There were also a lot of incremental improvements necessary to get there. TFTs had to get lower power (plasma doesn’t scale anywhere near that low - I had a 386 laptop with a plasma screen and it had a battery that lasted just long enough to get between mains sockets). CPUs and DRAM had to increase in performance / density and drop in power by a few orders of magnitude. Batteries needed to improve significantly in storage density and number of charge cycles.

                  1. 4

                    People were in fact driving electric cars in 1978

                    I’m aware :) But they definitely weren’t driving electric cars that talked to you, knew effectively every map in the world, told you when to make turns, listened to voice commands, streamed music and karaoke from the Internet, and could drive themselves relatively unaided on highways.

                    They also weren’t driving electric cars that could do 0-60 in two seconds or had hundreds of miles of range. Lithium-ion batteries weren’t commercialized until the early nineties.

                    The working iPhone example is sort of egregious because, famously, Alan Kay drew his original Dynabook sketch immediately after seeing a demo of rudimentary plasma displays

                    Sure, but the Dynabook may as well have been science fiction at the time. It’s still science fiction: Alan Kay’s vision was for it to have near-infinite battery life. Getting from science fiction to working technology takes, and took, enormous technical innovation. If you handed Alan Kay a working iPhone the minute after he saw the first barely-working two-letter plasma display, he would have been flabbergasted.

                    1. 1

                      Incremental improvement is useful. Sure. I said as much in the essay.

                      But incremental improvement is not the same thing as a big advance, and cannot be substituted for one. In order to have big shifts, you need to do a lot of apparently-useless research; if you don’t, then you will only ever invent the things you intended to invent in the first place & will never expand the scope of the imaginable.

                2. 5

                  “Silicon Valley hasn’t innovated since 1978” != “innovation hasn’t happened since 1978”. I easily believe the former. I’m not so sure about the latter.

                  I would, perhaps, say that less innovation has happened in the past few decades. While I would also attribute some of the early stuff to government-funded multidisciplinary sci-engineers, I don’t think innovation is impossible without them.

                  In particular, I think that the open source community has massive potential for innovation - anyone can start their own software project, with millions of man-hours of work available to tap through massive library ecosystems and mature language tooling. I think part of the reason we haven’t seen any innovation to speak of is the paradoxical combination of pervasive NIH syndrome and resistance to adopting newer, better ideas in the open-source community (with an honorable mention to fatal flaws in the UNIX philosophy – “tools that do one thing, composed through text streams” was a massive mistake – hampering technological sophistication).

                  1. 2

                    Yes. Open source work has also become a way to pad your resume, and so folks are incentivized toward doing the same kind of work in their free time as they would at a potential day job (or to work on “big important” projects, which are generally attempts at writing compatible clones of existing proprietary products or at solving scaling problems).

                    Experimentation is hard to justify when you need to put bread on your table with provably-profitable products (or something you can convince a guy who stopped coding in 1993 to invest in).

                  2. 4

                    If we look at all technology around us, then there are actually very few “foundational technologies” and a lot of refinements on that. Wheels, metallurgy, and steam power have been known for a very long time, but it wasn’t until the early 1800s that we actually got usable steam locomotives (although experimental ones had existed for a few decades before that). Is this a “refinement” or “innovation”? And are diesel trains a refinement or innovation? What about electric trains?

                    We can have similar arguments about planes, houses, bridges, and all sorts of things. Hell, it goes back all the way to the first technology we had: early flint tools were essentially just pieces of flint with an edge knocked off, and later ones were much more refined (polishing them was a major refinement/innovation, for example).

                    I really hate the “silicon valley is a center of innovation” memeplex & feel the need to inject some historical context whenever I see it. It’s weird, masturbatory, Wired Magazine bullshit & it leads to the lionization of no-talent con artists like Steve Jobs. Making money & making tech are very different skills

                    I don’t disagree with this sentiment though; I guess part of the problem is that “innovative” has become a meaningless buzzword in certain circles; hyperbolic adjectives in general are often misused and I find it cringy as hell. But that doesn’t mean there is no innovation happening at all in computing.

                    1. 3

                      I sort of wish folks wouldn’t post my tech-industry-critical essays here. This community isn’t the right audience.

                      1. 4

                        Why not? Who would be the right audience?

                        1. 2

                          When it was posted to the fediverse it spawned a lot of constructive dialogue. Usually, when this sort of thing is posted here or HN, there’s a stream of low-effort responses & rejections by folks who read half the title and stopped. (Already, in the comments on this post, there are several people who missed the point of the post – glossing over the political/economic aspects of supporting undirected research, trying to defend the value of incremental work against an imagined attack, etc.)

                          1. 2

                            It’s always more fun to preach to the choir.

                          2. 1

                            If they didn’t I would have missed out on this one. That would be a real shame.

                            This site discourages just writing ‘nice article, I agree’, and with good reason. So you just have to assume that there are a lot of people nodding their heads and going ‘hmm…’.

                            To be fair to the critics here though, your title is kind of click-bait. It sets out in a categorical way something that is actually kind of fuzzy and ill-defined. Then it doubles down on that in the first paragraph in a way that really won’t let someone with a slightly different definition of sufficient ‘level of novelty’ or ‘innovated’ get past.

                            I think this kind of criticism of the tech industry is critically important and I want more people to see this and talk about it. Keep it up and try to view all criticism as constructive if you can.

                            1. 1

                              Sorry about that, will keep it in mind.

                            2. 2

                              Major innovations you use every day off the top of my head:

                              • cloud computing, in the sense of massive-scale data centers built out of heterogeneous, disposable commodity hardware
                              • the scroll wheel (invented in 1989 at Apple, re-invented a couple times, popularized with the MS Intellimouse)
                              • the @-mention, which grew somewhat organically out of Twitter social practices but is now a ubiquitous feature of communications.
                              1. 3

                                I wouldn’t call any of those “new innovations”. Cloud computing is the centralized-data-center model. There were cranks for scrolling through code as gag hardware in the 60s. The @-mention is not fundamentally different from notification practices on nick mentions on IRC. More importantly, none of these made a formerly unimaginable world trivially imaginable.

                              2. 2

                                Nope, no innovation at all.

                                https://www.microsoft.com/en-us/hololens (AR headset)

                                https://www.oculus.com/ (VR Headset)

                                https://en.wikipedia.org/wiki/AlphaGo (NN which beat Lee Sedol at Go)

                                https://en.wikipedia.org/wiki/BERT_(Language_model) (SOTA language model)

                                https://en.wikipedia.org/wiki/GPT-3 (Text generation NN)

                                https://waymo.com/ (Self-Driving Cars)

                                https://nest.com/ (Voice controlled home assistants, thermostats, and more)

                                But, using 70s tech to make 60s tech bigger (ex., deep neural networks) isn’t innovation — it’s doing the absolute most obvious thing under the circumstances, which is all that can be defended in a context of short-term profitability.

                                Much easier said than done. Deep Learning was dead in the water for a long time, and it wasn’t until the widespread adoption of ReLU (https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) as an activation function that we were really able to train NNs quickly enough to achieve useful results, whether we had access to GPUs for training or not.
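
                                A toy illustration of why the activation function mattered so much (it ignores weights, initialization, and everything else in real training, so treat it as a cartoon): push a gradient back through many layers’ activation derivatives and see what survives.

                                ```python
                                # Saturating activations like the sigmoid multiply the gradient by <= 0.25
                                # per layer; ReLU passes it through unchanged for active units.
                                import math

                                def sigmoid(x):
                                    return 1.0 / (1.0 + math.exp(-x))

                                depth, x = 20, 0.5
                                grad_sigmoid, grad_relu = 1.0, 1.0
                                for _ in range(depth):
                                    grad_sigmoid *= sigmoid(x) * (1.0 - sigmoid(x))  # sigmoid derivative at x
                                    grad_relu *= 1.0 if x > 0 else 0.0               # ReLU derivative at x

                                print(f"after {depth} layers: sigmoid grad ~ {grad_sigmoid:.1e}, ReLU grad = {grad_relu}")
                                ```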

                                1. 1

                                  You’re exhaustively listing (impressive) incremental advances, which were predictably possible to even laymen in the 60s. There are no paradigm shifts in this list.

                                2. 1

                                  Trace scheduling and VLIWs might qualify; Josh Fisher’s PhD thesis Optimization of horizontal microcode within and beyond basic blocks was published in 1979. VLIW research continued at Yale as the ELI-512 project and Multiflow Computer was founded in 1986 to commercialize this work. Multiflow’s designs had substantial ILP - they executed 7, 14, or 28 instructions simultaneously.
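
                                  For anyone unfamiliar with the idea, here’s a toy sketch of what a VLIW compiler does: the compiler, rather than the hardware, packs independent operations into wide instruction words ahead of time. The operations, register model, and issue width below are invented for illustration; real trace scheduling also works across branches and is far more sophisticated.

                                  ```python
                                  # Greedy list scheduling of independent ops into wide instruction words.
                                  ISSUE_WIDTH = 4

                                  # (name, registers read, register written) - all invented for illustration
                                  ops = [
                                      ("load r1", (),           "r1"),
                                      ("load r2", (),           "r2"),
                                      ("add r3",  ("r1", "r2"), "r3"),
                                      ("load r4", (),           "r4"),
                                      ("mul r5",  ("r3", "r4"), "r5"),
                                  ]

                                  bundles = []        # each bundle is one wide instruction word
                                  produced_in = {}    # register -> index of the bundle that produces it

                                  for name, reads, writes in ops:
                                      # earliest word after all inputs are available
                                      slot = max((produced_in[r] + 1 for r in reads), default=0)
                                      # skip over words that are already full
                                      while slot < len(bundles) and len(bundles[slot]) >= ISSUE_WIDTH:
                                          slot += 1
                                      while len(bundles) <= slot:
                                          bundles.append([])
                                      bundles[slot].append(name)
                                      produced_in[writes] = slot

                                  for i, word in enumerate(bundles):
                                      print(f"word {i}: {word}")
                                  ```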

                                  Here are a few other possibilities:

                                  1. 1

                                    Right on the spot. Thank you, Mr. Ohno.

                                    1. 1

                                      I am sympathetic to the article, but I think the author fails to account for the ebb and flow of research. OS research – which seems to be the main thing the article focuses on – has all but died for now, but that doesn’t mean it won’t come back (which AI winter are we about to enter now, the third or fourth?).

                                      In the mean time though, cryptography is improving, our languages are (slowly) getting better and more powerful type systems, and many more incremental improvements throughout the field occur every year. It’s not all bad (although some of it is).

                                      1. 4

                                        The polemic style may have obscured it (since this was originally a twitter thread) but I’m actually talking about a very specific kind of progress that isn’t being made – fundamentally new ground with meaningful impact on user experience, as opposed to incremental progress.

                                        1. 2

                                          You are also focusing on software in the article, whereas a lot of the counter-examples people are bringing up are more physics related. I think people here are identifying with silicon valley and feeling attacked, despite the fact that most of the people posting here probably don’t live or work there. Seems like the ‘“silicon valley is a center of innovation” memeplex’ as you put it is more pervasive than any of us thought.

                                          All I know is that when I was doing software as a hobby I was innovating. Now that I do it professionally I am not. I don’t have time for my innovations anymore. I just need to find someone on the internet that has implemented the functionality I need and then shape it to fit into what I am doing. I kind of miss the hobby days, but one must pay the rent.

                                          1. 2

                                            I think people here are identifying with silicon valley and feeling attacked, despite the fact that most of the people posting here probably don’t live or work there.

                                            I’m not sure this follows. I don’t think anyone here is defending silicon valley as much as refuting the assertion that technology has stopped progressing. It feels disingenuous to draw this conclusion.

                                            All I know is that when I was doing software as a hobby I was innovating. Now that I do it professionally I am not. I don’t have time for my innovations anymore. I just need to find someone on the internet that has implemented the functionality I need and then shape it to fit into what I am doing. I kind of miss the hobby days, but one must pay the rent.

                                            I want to stress that your (for any definition of “your”, including my own) personal feelings and anecdotes are not enough to draw a systemic critique. I notice this a lot in technology circles, where folks perceive that their work has gotten more monotonous or miss a time that they may or may not have been active during and use that to draw systemic critiques. Experiences are varied, as this thread has shown, and it’s naive to assume that your own experiences define everyone’s experiences.

                                            1. 1

                                              when I was doing software as a hobby I was innovating. Now that I do it professionally I am not. I don’t have time for my innovations anymore. I just need to find someone on the internet that has implemented the functionality I need and then shape it to fit into what I am doing. I kind of miss the hobby days, but one must pay the rent.

                                              I’m exactly in the same boat, & this is part of what led me to write this.

                                            2. 1

                                              That sentiment reminds me of this post.

                                              1. 1

                                                I found the post in this comment on reddit: https://teddit.net/r/emacs/comments/l6xued/emacs_fomo/gl4mt4s/, and it mentions the article you’ve referenced too.

                                          2. 1

                                            I can’t think of any examples of “just in time” compilation prior to the 1990s, so that might be one new thing.

                                            1. 3

                                              The Alto used reprogrammable microcode to convert between OS-specific bytecode formats and the base machine code instructions, which (while hardware-assisted) is arguably comparable to the kind of JIT compilation performed in Java; Alto Smalltalk also compiled Smalltalk code in real time so that running code could be edited live.

                                              However, JIT compilation is also more or less a performance enhancement. Had Moore’s law not run out, we could reasonably imagine it not mattering at all, & having machines with no compiled code at all other than some kind of interpreter living in firmware. There are all sorts of performance tricks that have been invented, & some are very clever, but their purpose is to increase the speed of conventional programs rather than change the kind of programs one can imagine using or writing – they are quantitative rather than qualitative improvements.
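
                                              To make the “performance enhancement” framing concrete, here’s a minimal sketch of what a JIT buys you, using a toy expression language and a deliberately naive interpreter (this resembles no production JIT; it just shows interpretation versus translate-once-then-run):

                                              ```python
                                              def interpret(expr, env):
                                                  """Tiny tree-walking interpreter: expr is nested tuples like ('+', 'x', 1)."""
                                                  if isinstance(expr, tuple):
                                                      op, a, b = expr
                                                      a, b = interpret(a, env), interpret(b, env)
                                                      return a + b if op == "+" else a * b
                                                  return env[expr] if isinstance(expr, str) else expr

                                              def jit(expr):
                                                  """Translate the same expression to a host-language function once, up front."""
                                                  def emit(e):
                                                      if isinstance(e, tuple):
                                                          op, a, b = e
                                                          return f"({emit(a)} {op} {emit(b)})"
                                                      return e if isinstance(e, str) else repr(e)
                                                  return eval(f"lambda x: {emit(expr)}")  # compiled once, called many times

                                              expr = ("+", ("*", "x", 3), 1)
                                              print(interpret(expr, {"x": 4}))   # 13, re-walks the tree on every call
                                              print(jit(expr)(4))                # 13, tree walked only at compile time
                                              ```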

                                              1. 2

                                                The term JIT may go back to the 1990s, but the idea is much older; the 1974 thesis Adaptive systems for the dynamic run-time optimization of programs describes one.

                                                1. 2

                                                  The BBC Micro had a dialect of BASIC that had an inline assembler that could take a string, assemble it, and give you back a pointer that you could jump to from BASIC. One of the learn-to-program books at my school had a toy language that you could write and compile in a REPL.