1. 33
  1.  

    1. 10

      I do not have an SO account, but here goes:

      • HyperLogLog (HLL) and sketches in general have had a lot of work poured into them, with a lot of good results
      • polymorphic algebraic effect handlers have seen a lot of work and are starting to show up in industry, in GHC and OCaml in particular
      • gradual typing has made a comeback, with massive work on the ability to evolve a codebase toward it
      • Ryu and DragonBox have massively reshaped the algorithms used to print floating-point numbers
      • CRDTs have exploded since then, with massive research into them
      • CPS transformations have become the main way to talk about compilers in the literature, which has distanced itself from the SSA reality of industry compilers
      • a new focus on sugaring and desugaring in order to provide powerful error messages to users
      • on the parsing side, the return of GLR and Pratt parsing in order to deal better with partial errors and recovery (a tiny Pratt sketch follows this list)
      • widespread adoption of parser combinators in industry
      • in the wake of Roslyn and TypeScript, a massive look into query-based compilers
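
      To make the Pratt point concrete, here is a minimal sketch in Python (a toy of mine, not taken from any particular compiler): each infix operator gets a binding power, and the parser recurses with a minimum binding power, which is the whole trick.

          # Minimal Pratt / precedence-climbing expression parser over a
          # pre-tokenised list of numbers and the binary operators + - * /.
          BINDING_POWER = {"+": 10, "-": 10, "*": 20, "/": 20}

          def parse(tokens, min_bp=0):
              """Consume `tokens` left to right and return a nested tuple AST."""
              left = tokens.pop(0)  # a number; prefix operators would hook in here
              while tokens and BINDING_POWER.get(tokens[0], -1) >= min_bp:
                  op = tokens.pop(0)
                  # parse the right-hand side with a higher minimum binding power,
                  # so lower-precedence operators stop the recursion
                  right = parse(tokens, BINDING_POWER[op] + 1)
                  left = (op, left, right)
              return left

          print(parse([1, "+", 2, "*", 3]))  # ('+', 1, ('*', 2, 3))

      Real parsers add prefix and postfix operators, parentheses and error recovery on top of this skeleton, but the control flow stays this small.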

      I am probably forgetting a lot of things. Things are moving along quite well :) Adoption in industry is a bit harder, but we are getting there.

      1. 1

        If you had to pick one of these, which would you?

        1. 1

          I would say query-based compilers, or sugaring and desugaring, as they enable massive changes in UX.

          For purely personal “cool” factor, algebraic effect handlers as capabilities and floating-point-to-string conversion, because it is stuff I work on.

          But tbf, in my personal projects I work on nearly everything on the list I posted :D

    2. 3

      I’m not sure if replacing theoretical computer science with a fudge counts, but DLSS has pretty much solved anti-aliasing.

      1. 5

        Good that you mentioned DLSS. It’s something I’ve been thinking about a lot lately, so I indulged myself in musing about which inventions since 2010 were necessary for it to work.

        Theoretical inventions first. Note that not all of these were crucial, but I believe them to have been at least an inspiration.

        • Greater understanding of temporal anti-aliasing (TAA) techniques (there’s a great 2020 survey):
          • Iterative filters that don’t blur (Sacht-Nehab) (2015)
          • Visually pleasing blue noise dithering (1993), introduced to graphics only recently.
          • Tricks such as “neighborhood color clamping” (2011) that make it practical to sample past frames without “ghosting” trail artifacts (a small sketch follows this list).
            • I think this function was the first place deep learning was deployed in TAA techniques. The neural network implemented a function f(3x3 neighbor pixel colors, guessed color) -> clamped color that was previously a heuristic tweaked by hand. I can’t find the source, but it was some Nvidia or Epic slide deck; I recall it using the phrase “like programming a million-lane wide SIMD” when referring to TensorFlow.
        • Deep learning know-how to make models trainable in the first place (several of these appear together in the sketch below).
          • Xavier network weight initialization (2010)
          • Adam optimizer (2014)
          • ReLU activation (2011)
          • Residual connections (2015)
          • Pixel Shuffle layer for CNNs (2016) (Maybe? It’s useful for autoencoders.)
          • U-net architecture (2015)
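
        To illustrate the clamping trick from the TAA bullet above, here is a minimal NumPy sketch (my own toy version with assumed array shapes, not production TAA code): the reprojected history color is clamped to the min/max of the current frame’s 3x3 neighborhood before blending, which is what suppresses the ghosting trails.

            import numpy as np

            def taa_resolve(current, history, blend=0.1):
                """current, history: float arrays of shape (H, W, 3) in linear color."""
                H, W, _ = current.shape
                # pad so every pixel has a full 3x3 neighborhood
                padded = np.pad(current, ((1, 1), (1, 1), (0, 0)), mode="edge")
                # gather the 3x3 neighborhood of every pixel: shape (H, W, 9, 3)
                neigh = np.stack([padded[dy:dy + H, dx:dx + W]
                                  for dy in range(3) for dx in range(3)], axis=2)
                lo, hi = neigh.min(axis=2), neigh.max(axis=2)
                clamped_history = np.clip(history, lo, hi)  # the anti-ghosting step
                # exponentially blend the current sample into the clamped history
                return blend * current + (1.0 - blend) * clamped_history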

        My deep learning knowledge is somewhat out of date though :)
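
        Still, for concreteness, here is a toy PyTorch block (names and sizes are made up for illustration) that strings several of the listed pieces together: ReLU, a residual connection, PixelShuffle upsampling, Xavier initialization and the Adam optimizer.

            import torch
            import torch.nn as nn

            class UpsampleBlock(nn.Module):
                def __init__(self, channels=32, scale=2):
                    super().__init__()
                    self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
                    self.conv2 = nn.Conv2d(channels, channels * scale ** 2, 3, padding=1)
                    self.shuffle = nn.PixelShuffle(scale)  # (C*r^2, H, W) -> (C, r*H, r*W)
                    for m in (self.conv1, self.conv2):
                        nn.init.xavier_uniform_(m.weight)  # Xavier init (2010)

                def forward(self, x):
                    y = torch.relu(self.conv1(x))          # ReLU (2011)
                    y = x + y                              # residual connection (2015)
                    return self.shuffle(self.conv2(y))     # PixelShuffle upsampling (2016)

            net = UpsampleBlock()
            opt = torch.optim.Adam(net.parameters(), lr=1e-3)  # Adam optimizer (2014)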

        On the practical side there’s of course simply the birth of the GPU-accelerated differentiable programming frameworks PyTorch (2016) – Facebook’s combination of Torch (2002) and Chainer (2015) – and Google’s TensorFlow (2015). Both implement reverse-mode automatic differentiation, which has been around for decades. I suppose the combination of GPUs + Python + deep learning built in was the usability invention here.
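
        As a reminder of how little ceremony reverse-mode AD asks of the user these days, here is the canonical toy example in PyTorch (generic, nothing DLSS-specific):

            import torch

            x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
            y = (x ** 2).sum()  # forward pass records the computation graph
            y.backward()        # reverse pass accumulates dy/dx
            print(x.grad)       # tensor([2., 4., 6.])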

        Fast GPUs with programmable shaders are of course a necessity but weren’t those around in 2010 already? Compare two roughly comparable Nvidia cards:

        • GTX 580 (2010) for $592 (inflation adjusted to 2020)
          • 3 GB VRAM, 192 GB/s bandwidth, 1.6 TFLOPS (FP32)
        • RTX 3080 (2020) for $699
          • 12 GB VRAM, 760 GB/s bandwidth, 30.6 TFLOPS (FP32)

        So in memory size and bandwidth we have a 4x increase that happens to reflect a pixel count jump from 1080p to 2160p resolution. The 19x increase in floating point ops sounds awesome but it’s still only 5x more per-pixel if you take the higher display resolutions into account.
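
        The back-of-the-envelope arithmetic behind that per-pixel claim, for anyone who wants to check it:

            px_1080p = 1920 * 1080                 # ~2.07 Mpx
            px_2160p = 3840 * 2160                 # ~8.29 Mpx, the 4x jump
            flops_580, flops_3080 = 1.6e12, 30.6e12

            print(px_2160p / px_1080p)             # 4.0
            print(flops_3080 / flops_580)          # ~19.1x more FLOPS overall
            print((flops_3080 / px_2160p) / (flops_580 / px_1080p))  # ~4.8x per pixel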

        To me it seems like high-resolution displays are the hardware improvement that really drove the development of DLSS and its ilk, not GPUs. Well, you could argue that Nvidia’s proprietary RTX tech with its high computational load was the culprit. In any case, games ended up having too much work to do per pixel, and algorithmic & tooling improvements came to the rescue. And there really have been new discoveries since 2010!