1. 19
  1.  

  2. 11

    This seems like a kind of arbitrary list that skips, among other things, iOS and Android, and that compares a list of technologies invented over ~40 years to a list that’s in its twenties.

    1. 7

      I noticed that Go was mentioned as a post-1996 technology but Rust was not, which strikes me as rather a big oversight! Granted at least some of the innovations that Rust made available to mainstream programmers predate 1996, but not all of them, and in any case productizing and making existing theoretical innovations mainstream is valuable work in and of itself.

      In general I agree that this is a pretty arbitrary list of computing-related technologies, and there doesn’t seem to be anything special about the 1996 date. I don’t think this essay makes a good case that there is a great software stagnation to begin with (and for that matter, I happened to be reading this Twitter thread earlier today, which argues that the broader great stagnation this essay alludes to is itself fake: an artifact of the same sort of refusal to consider as relevant all the ways in which technology has improved in the recent past).

      1. 2

        It’s also worth noting that Go is the third or fourth attempt at similar ideas by an overlapping set of authors.

        1. 1

          The author may have edited their post since you read it. Rust is there now in the post-1996 list.

        2. 3

          I find this kind of casual dismissal that constantly gets voted up on this site really disappointing.

          1. 2

            It’s unclear to me how adding iOS or Android to the list would make much of a change to the author’s point.

            1. 3

              Considering “Windows” was on the list of pre-1996 tech, I think iOS/Android/touch-based interfaces in general would be a pretty fair inclusion of post-1996 tech. My point is that this seems like an arbitrary grab bag of things to include vs not include, and 1996 seems like a pretty arbitrary dividing line.

              1. 2

                I don’t think the specific technologies on the list matter much in themselves; the point is the bigger ideas they illustrate. The article is interesting because it makes that point, although I would much rather have seen a deeper dive into the topic, since that would have made the point more strongly.

                What I get from it, having followed the topic for a while, is that around 1996 it became feasible to implement many of the big ideas dreamed up before, thanks to advances in hardware. Touch-based interfaces, for example, had been tried in the 60s but couldn’t actually become consumer devices. When you can’t actually build your ideas (except in very small instances), you start to build on the idea itself and not the implementation. This frees you from worrying about the details you can’t foresee anyway.

                Ideas freed from implementation and maintenance breed more ideas. So there were a lot of them from the 60s into the 80s. Once home computing really took off with the Internet and hardware got pretty fast and cheap, the burden of actually rolling out some of these ideas caught up with them. Are they cool and useful? In many cases, yes. They also come with side effects and details not really foreseen, which is expected. Keeping them going is also a lot of work.

                So maybe this is why it feels like more radical ideas (like, say, not equating programming environments with terminals) don’t get a lot of attention or work. If you study the ideas implemented in the last 25 years, you see much less ambition than in the decades before.

                1. 2

                  I think the Twitter thread @Hail_Spacecake posted pretty much sums up my reaction to this idea.

              2. 2

                I think a lot of people are getting whooshed by it. I get the impression he’s talking from a CS perspective: no new paradigms.

                1. 3

                  Most innovation in constraint programming languages, and all innovation in SMT, came after 1996. By his own standards, he should be counting things like peer-to-peer and graph databases. What else? Quantum computing. HoloLens. Zig. Unison.

                  1. 2

                    Jonathan is a really jaded guy with interesting research ideas. This post got me thinking a lot, but I do wish he would write a more thorough exploration of his point. I think he is really only getting at programming environments and concepts (it’s his focus), but listing the technologies isn’t the best way to get that across. I doubt he sees SMT solvers or quantum computing as particularly innovative with respect to making programming easier and more accessible. Unfortunately, that is only (sort of) clear from his “human programming” remark.

                2. 2

                  It would strengthen it. PDAs - with touchscreens, handwriting recognition (whatever happened to that?), etc. - were around in the 90s too.

                  Speaking as someone who only reluctantly gave up his Palm Pilot and Treo, they were in some ways superior, too: a much more obsessive focus on UI latency - especially on Palm - and far less fragility. I can’t remember ever breaking a Palm device, and I have destroyed countless glass-screened smartphones.

                  1. 3

                    The Palm Pilot launched in 1996, the year the author claims software “stalled.” It was also created by a startup, which the article blames as the reason for the stall: “There is no room for technology invention in startups.”

                    They also didn’t use touch UIs; they used styluses: no gestures, no multitouch. They weren’t networked, at least not in 1996. They didn’t have cameras (and good digital cameras didn’t exist, and the ML techniques that phones use now to take good pictures hadn’t even been conceived of yet). They couldn’t play music or videos. Everything was stored in plaintext rather than encrypted. The “stall” argument, as if everything stopped advancing in 1996, just doesn’t hold much water for me.

                    1. 1

                      The Palm was basically a simplified version of what already existed at the time, cut down to make it feasible to implement properly.

              3. 4

                Idk, it seems to me that this is absolutely normal for how research works.

                Technology, discoveries and research are (likely) not a linear function of time. This is to say that discoveries blossom in different areas at different times. CS is stalling? So be it. Perhaps the field is not ripe for new developments yet. We had a period of stagnation in genomics for a while, and now it will blossom again. There was a period of stagnation in electromagnetic theory until the beginning of the 20th century. Machine learning stagnated for a while, and now it’s exploding again. This is OK.

                1. 3

                  I’ve seen similar articles in other fields. The last one I remember was how technological progress has stalled and nothing significant was invented after the 1960s. The author seemed to feel that cars and airplanes and The Computer were significant, but that the Internet and smartphones weren’t, or were somehow just details of The Computer.

                  So I’m not taking this list too seriously. The author seems a bit blinkered — no mention I can see of the advances in database technology and Big Data in general, or the amazing progress in GPUs, or new GUI paradigms like FRP. And pushing machine learning to the side as something different is just absurd, as if those neural networks just invented themselves.

                  1. 2

                    This is a great summary of my impressions as well. Just take a look at new OS projects anywhere: most are some form of UNIX clone, replicating the old design decisions (failures) for the sake of better compatibility with existing software, which is itself captive to its own path and past success.

                    The web platform (Chrome), a giant monolith swallowing everything, is the next such monster after POSIX: it has grown too big and too successful to ever be substantially improved upon or redesigned.

                    When will we have something better than email that is still free?

                    1. 2

                      “First of all, most of our software technology was built in companies (or corporate labs) outside of academic Computer Science.”

                      Is this true? This seems like a pretty wild claim.

                      “The risk-aversion and hyper-professionalization of Computer Science is part of a larger worrisome trend throughout Science and indeed all of Western Civilization”

                      Why specifically Western civilization? I would say this is more an issue of how capital functions than of Western Culture™️.

                      1. 4

                        Bell Labs, Xerox PARC, SRI, and BBN invented a pretty jaw-dropping swath of tech in the 60s and 70s: packet-switched networks and the ARPAnet (together with MIT), Unix, the mouse and the GUI, online collaboration, text editing as we know it, bitmap displays, Ethernet, the laser printer, the word processor, file servers, (much of) object-oriented programming, digital audio and computer music (Max Mathews)…

                        On the hardware side, it was nearly all corporate — Bell Labs, TI, Fairchild, Intel… (though IIRC, Carver Mead at Caltech is the father of VLSI, and RISC was invented at UC Berkeley.)

                        1. 2

                          To back up snej’s comment, there are several great history books on this period:

                          Gertner’s “The Idea Factory” is also a good read, but focuses more on the overall history of Bell Labs.

                        2. 2

                          Obviously missing from the recent list IMHO: Git, Rust, Swift, LLVM, distributed databases (Cassandra, HBase, Riak), the entire Hadoop + Spark ecosystems, DCOS/K8s/Nomad, NixOS/Guix, GPU-accelerated everything, software-defined networking, web browsers as universal runtimes (and so JSON, REST, and GraphQL as standard RPC methods, and Electron as the [il]logical conclusion of that dominance).

                          OTOH, there are some technologies that have actually gone backwards/been lost: image-based languages (Common Lisp, Smalltalk, HyperCard), 4GLs in general, and “workstations” that aren’t just warmed-over PC clones.

                          I think there’s a good point about there being less of an explosion of new languages and paradigms for programming, but in my mind that largely comes down to the professionalization and commoditization of programming as a trade. Most businesses don’t want to “innovate” on their core enterprise systems, and most programmers (esp. outside communities like this one) want “portable” and “popular” tools in their toolbox so they can easily move between projects and firms.

                          So sure, a huge portion of the ongoing work has gone towards refinement, scale, and operational reliability. That seems like a natural reflection of programming moving solidly into the business mainstream, though.

                          1. 1

                            I do think that GPU programming is a major paradigm that emerged since 1996. It’s a different style of system architecture.

                          2. 1

                            Go doesn’t belong on the post-1996 list since it’s mostly a modern (and excellent) implementation of Newsqueak, which dates back to the early 1990s. More than 95% of Go was designed 25 years before it was actually released. That’s just how long it took them to get the resources to do it.
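
                            To make the lineage concrete, here is a rough sketch (my own illustration, not something from the article) of the constructs Go inherits more or less directly from Newsqueak: unbuffered channels, cheap concurrent processes, and channels as first-class values. It’s the classic concurrent prime sieve, which as far as I can tell translates between the two languages almost mechanically.

                            ```go
                            package main

                            import "fmt"

                            // generate sends the integers 2, 3, 4, ... down ch.
                            func generate(ch chan<- int) {
                                for i := 2; ; i++ {
                                    ch <- i
                                }
                            }

                            // filter forwards values from in to out, dropping multiples of prime.
                            func filter(in <-chan int, out chan<- int, prime int) {
                                for {
                                    if n := <-in; n%prime != 0 {
                                        out <- n
                                    }
                                }
                            }

                            func main() {
                                ch := make(chan int)
                                go generate(ch)
                                // Chain a new filter goroutine onto the pipeline for each prime found.
                                for i := 0; i < 10; i++ {
                                    prime := <-ch
                                    fmt.Println(prime)
                                    next := make(chan int)
                                    go filter(ch, next, prime)
                                    ch = next
                                }
                            }
                            ```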

                            1. 1

                              I realize this is vague, but what I think happened after 1996 is that things got complex. Understanding of computers is no longer the bottleneck in improving software, so computer science can’t help us anymore. Instead we need to develop a new science: complexonomy or whatever you want to call it.

                              I don’t mean the well-known algorithmic complexity but the general challenge of structuring systems with many (many) different facets, like organizations, legal systems or social dependencies. I imagine this starts with computer code, because it is relatively easy to reason about and the gains are immediately apparent.

                              As I see it, this science would be centered around understanding the processes that lead to the easiest-to-understand structure for any set of requirements. Subjects would include inherent vs. accidental complexity, developing simplicity metrics (connascence maybe?), ways to stay aware of paradigm limits, safeguards against logical fallacies, measuring the distance between theory and practice, how iteration/evolution leads to elegant solutions in nature, and so on. Just the tip of the iceberg, obviously; there is so much thought that still needs to be invented.

                              1. 5

                                “Instead we need to develop a new science: complexonomy or whatever you want to call it.”

                                This new science has existed for decades now: it’s called cybernetics. There’s a lot of interesting stuff to look at there; I think most of the topics you mention fall under that umbrella.

                                1. 1

                                  Thanks, I hadn’t made that link yet. I would agree that cybernetics covers at least a subset of this space.

                                  However, I haven’t seen many references to cybernetics in discussions of code structure. And setting aside whether cybernetics is really applicable in this domain, I guess my main point is that articles like this rarely make the connection: progress has not stopped because we all got lazy, but because systems are getting too complex and we haven’t yet developed the tools to address that. This suggests that the importance of studying complexity may be overlooked at the moment.