1. 48
  1.  

  2. 7

    Agreed. Here’s my version of this list: https://wiki.alopex.li/ThingsWeGotRight

    1. 3

      Great list! Most of those are things I take for granted but would hate to be without. 36-bit words and EBCDIC, yikes.

      either you are an embedded system without much memory and don’t need an MMU, or you’re large enough to afford an MMU with paging

      I do actually wish for MMUs on embedded CPUs, because they allow for memory-mapped storage, which is a big boon for structured storage with low RAM overhead (viz. LMDB.) You can get that without an MMU, but you have to build it in at the hardware level with access to the bus, which you don’t have on an SoC.
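
      Just to make that concrete, here's a minimal sketch of the idea on a desktop-class POSIX system (exactly the facility an MMU-less SoC lacks): a fixed-layout file is mapped once and records are read straight out of the mapping, so only the pages actually touched ever occupy RAM. The Record layout and records.db file are invented for the example; LMDB's real on-disk format is copy-on-write B+tree pages, not a flat array.

      ```cpp
      #include <cstdint>
      #include <cstdio>
      #include <fcntl.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>

      // Hypothetical fixed-width on-disk record, purely for illustration.
      struct Record {
          uint32_t key;
          uint32_t value;
      };

      int main() {
          int fd = open("records.db", O_RDONLY);          // hypothetical data file
          if (fd < 0) return 1;

          struct stat st;
          if (fstat(fd, &st) != 0 || st.st_size == 0) return 1;

          // Map the whole file read-only. The MMU faults pages in on demand, so
          // resident RAM is only the pages actually touched, not the file size.
          void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
          if (base == MAP_FAILED) return 1;

          const Record* records = static_cast<const Record*>(base);
          size_t count = st.st_size / sizeof(Record);
          for (size_t i = 0; i < count; ++i)
              std::printf("%u -> %u\n", (unsigned)records[i].key,
                          (unsigned)records[i].value);

          munmap(base, st.st_size);
          close(fd);
      }
      ```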

      1. 3

        I do actually wish for MMUs on embedded CPUs, because they allow for memory-mapped storage, which is a big boon for structured storage with low RAM overhead (viz. LMDB.) You can get that without an MMU, but you have to build it in at the hardware level with access to the bus, which you don’t have on an SoC.

        This is generally not done on embedded systems intentionally. Doing memory-mapped storage requires having a buffer cache that manages a pool of memory[1]. This works well if you can fit the working set of storage into memory and if you don’t mind memory accesses to the storage-backed memory having unpredictable latency. Neither of these tends to be true on embedded devices. When RAM is that constrained, it’s much better to do explicit I/O and buffer a sensible amount in the program, rather than relying on the RTOS to provide some heuristics that probably aren’t a great fit for your specific workload.
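
        As a rough sketch of the alternative being described: the program does explicit reads into a buffer whose size it chooses, so RAM cost and latency are fixed and visible rather than left to a buffer cache's heuristics. The pread call, the Record layout, and records.db are stand-ins for the example; on a real RTOS this would be a call into the flash or storage driver.

        ```cpp
        #include <array>
        #include <cstdint>
        #include <cstdio>
        #include <cstring>
        #include <fcntl.h>
        #include <sys/types.h>
        #include <unistd.h>

        // Same hypothetical fixed-width layout as in the sketch above.
        struct Record {
            uint32_t key;
            uint32_t value;
        };

        // Read one record by index through a small, explicitly sized buffer.
        // RAM use and I/O latency are decided by the caller, not by whichever
        // pages an OS buffer cache happens to have resident.
        bool read_record(int fd, size_t index, Record& out) {
            std::array<char, sizeof(Record)> buf{};
            off_t offset = static_cast<off_t>(index * sizeof(Record));
            // On an RTOS this would be a flash/storage driver call, not pread.
            if (pread(fd, buf.data(), buf.size(), offset)
                    != static_cast<ssize_t>(buf.size()))
                return false;
            std::memcpy(&out, buf.data(), sizeof(Record));
            return true;
        }

        int main() {
            int fd = open("records.db", O_RDONLY);   // hypothetical data file
            if (fd < 0) return 1;
            Record r;
            if (read_record(fd, 2, r))
                std::printf("%u -> %u\n", (unsigned)r.key, (unsigned)r.value);
            close(fd);
        }
        ```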

        It works a bit better with ROM, where you can do execute in place, but generally ROM is much slower than SRAM and so you end up needing caches. The Azure Sphere SoCs do this for the A-profile core, which reduces the amount of SRAM they need but means that they must have caches to get good instruction-fetch performance and so can’t do hard realtime things on the A-profile core (an i-cache miss would introduce jitter).

    2. 6

      I’m thankful for

      • Type checking and type inference, which make my code more reliable and easier to write (there's a small sketch of what I mean after this list).
      • Steady advances in C++ that tide me over until I’m able to switch to Nim or Rust or SooperFoobar2022, speaking of which:
      • The tremendous resurgence in language design — sometimes I have to laugh at how new languages pop up every week, but there are so many interesting ideas and the state of the art seems to be advancing fast.
      • My new MacBook Pro, for (a) being super fast, and (b) belonging to me not my employer, so I can install whatever I like on it without reprisal.
      • The way that even the OSs I don’t use (Windows, Linux/BSD/etc, Android) have made such huge gains in usability. I remember being frustrated back in the 90s that systems as shitty as Windows 95 and Motif were in wide use. Nowadays the state of the art in UX is so much higher, for everyone.
      • USB-C, for allowing me to plug something in on the first try, not the third.
      • JSON — I totally agree with the OP. It’s kind of the duct tape of the software world, isn’t it?
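
      To make the first two items concrete, here's a small C++17 sketch of the kind of inference and library support I mean: auto with if-initializers, structured bindings, and std::optional for "might be missing" results. The find_score function and its data are invented for the example.

      ```cpp
      #include <iostream>
      #include <map>
      #include <optional>
      #include <string>

      // std::optional makes "might be absent" part of the return type, so the
      // compiler reminds callers to handle the missing case instead of trusting
      // a sentinel value.
      std::optional<int> find_score(const std::map<std::string, int>& scores,
                                    const std::string& name) {
          if (auto it = scores.find(name); it != scores.end())  // if-with-initializer
              return it->second;
          return std::nullopt;
      }

      int main() {
          std::map<std::string, int> scores{{"ada", 95}, {"grace", 99}};

          // Structured bindings + auto: element types are inferred, and each
          // key/value pair unpacks into readable names.
          for (const auto& [name, score] : scores)
              std::cout << name << ": " << score << '\n';

          if (auto s = find_score(scores, "turing"); !s)  // s deduced as optional<int>
              std::cout << "no score recorded\n";
      }
      ```
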
      1. 2

        Fantastic list, aside from the MBP (my personal machine is still a late 2013 MBP, and the new ones are the first released since then that have made me consider upgrading, so I’m just jealous on this point).

        I think I’d tie the first and third points together. Things like flow-sensitive typing, type inference over structural and algebraic type systems, and viewpoint adaptation are advances in type systems that have only recently become feasible to implement in a real language. Fifteen years ago I’d largely given up on static type systems being able to express the properties I cared about; now I’m tremendously excited about what they can provide, and I think that’s been a big factor in the growth of new languages. Some of the things we’re doing in Verona’s static type system are things I had tried to do with dynamic typing and couldn’t make fast enough even when building custom hardware to accelerate them.
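
        To give a rough flavour of what I mean by algebraic types — and to be clear, this is plain C++17, not Verona, and Verona's type system goes well beyond it — here's a sum type via std::variant, where std::visit gets the compiler to check that every alternative is handled. Shape, Circle and Rect are invented for the example.

        ```cpp
        #include <iostream>
        #include <variant>

        // A sum type: a Shape is a Circle or a Rect and nothing else.
        struct Circle { double radius; };
        struct Rect   { double w, h; };
        using Shape = std::variant<Circle, Rect>;

        // Small helper to build a visitor out of one lambda per alternative.
        template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
        template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

        double area(const Shape& s) {
            // std::visit requires a handler for every alternative, so adding a
            // new kind of Shape without updating this visitor fails to compile.
            return std::visit(overloaded{
                [](const Circle& c) { return 3.14159265358979 * c.radius * c.radius; },
                [](const Rect& r)   { return r.w * r.h; },
            }, s);
        }

        int main() {
            Shape a = Circle{1.0};
            Shape b = Rect{2.0, 3.0};
            std::cout << area(a) << ' ' << area(b) << '\n';
        }
        ```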

      2. 5

        Stealing the idea from this post, I did the same exercise. Here’s my list for this year: https://blog.ovalerio.net/archives/2298