1. 7

    As this is the expected denouement of the Elasticsearch license change, might be worth folding into https://lobste.rs/s/qtsjh1/elasticsearch_does_not_belong_elastic for context.


      Seems like a good call, I hadn’t realised we’d linked those stories. Thanks!

      I think that’s something a mod would need to do, if appropriate, right?


        Yes, it has to be done by the Lone Ranger Mod[1], @pushcx

        [1] so far


          Hey, we still have Irene!


            I can’t believe I forgot that. Apologies.

    1. 8

      XTerm also supports real graphics, thanks to the VT340’s sixel mode. This page has some nifty demos.

      I wish this feature was better known/more used. Because few programs take advantage of this capability, many terminal emulators haven’t implemented sixel support.

      Personally, I have found this feature useful for listing directories of images with lsix, and also for displaying output from GNUPlot.
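      For a sense of how simple the sixel format is, here is a minimal encoder sketch in Python (my own illustration, not from any of the linked demos): each output character encodes a vertical strip of six pixels, offset by 63, wrapped in the DCS introducer and string terminator.

```python
# Minimal sixel encoder sketch: each character encodes a column of six
# vertical pixels (bit 0 = top pixel), offset by 63 (0x3F).
def to_sixel(rows):
    """rows: six equal-length strings of '0'/'1' (a 6-pixel-tall bitmap)."""
    assert len(rows) == 6
    out = ["\x1bPq"]  # DCS ... q: enter sixel mode
    # Define a two-colour palette: colour 0 black, colour 1 white (RGB %).
    out.append("#0;2;0;0;0#1;2;100;100;100")
    band = []
    for x in range(len(rows[0])):
        bits = 0
        for y in range(6):
            if rows[y][x] == "1":
                bits |= 1 << y
        band.append(chr(0x3F + bits))
    out.append("#1" + "".join(band))  # draw the band in colour 1
    out.append("\x1b\\")  # ST: leave sixel mode
    return "".join(out)
```

      Printing the result in a sixel-capable terminal (e.g. xterm started as `xterm -ti vt340`) renders the bitmap; anywhere else it just shows escape garbage.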

      1. 5

        This is some amazing singing pig stuff.

        1. 2

          Singing pig?

          1. 6

            Like, the quality of the output isn’t nearly as impressive as the fact that it works at all!

            1. 4

              Ahh I see. Not an expression I had heard before.

              I do agree, the output quality does leave a bit to be desired, but considering this is 30 year old technology, I give it somewhat of a pass.

              I do wish that there was a higher-quality more modern alternative. I know iTerm2 has an image drawing protocol, but no other terminal emulators have adopted it as far as I know.

              1. 5

                Kitty supports a graphics protocol too. I plan to one day port this to alacritty.

                1. 4

                  The term that I’ve seen is dancing bear (commonly attributed as a Russian proverb): “The marvel is not that the bear dances well, but that the bear dances at all.”

                  1. 2

                    I would love for something more like Mathematica (as you note in a peer comment) that was open and widely adopted for interactive computing.

            2. 3

              I think the real tragedy is shoving this into a glorified vt100, instead of realizing there are better tools for this…

              1. 6

                If you want an interactive CLI type interface that can also embed images, what other tool is there available for this today? I guess Mathematica kind of has this, but it’s proprietary, and cannot be used as a general-purpose UI for other programs to target.

                1. 5



                    I’d add to that: works over SSH or some equivalent.

                    I have a proof-of-concept implementation that adds two features to FreeBSD:

                    • A content negotiation protocol over pipes, so the sender advertises the set of things it can produce and the receiver picks the one that it wants, gracefully falling back to unknown if one end doesn’t support the protocol.
                    • A ‘pipe peeling’ mechanism in the TTY layer, so that you can establish independent pipes to the terminal emulator, for different types of data.

                    Both of these could be cleanly encapsulated in the SSH protocol, but I haven’t actually done the work (yet).

                    I’d love to see something like this standardised. I modified libxo in the base system to support this protocol, so you can pipe any libxo-enabled utility to something that wants JSON or HTML and have it work without the user having to pass any libxo flags to the first thing in the pipeline. Oh, and a proof-of-concept using the PTY interfaces so that the terminal could request an HTML version of what was being displayed and open that in a web browser. I’d love to have a terminal incorporate something like that properly so that, for example, ls gave me a table view that I could sort and filter.
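                    The receiver’s side of that negotiation step could be sketched like this (hypothetical Python, not the actual FreeBSD code; the MIME-style names are just illustrative):

```python
# Hypothetical sketch of the receiver side of a content-negotiation
# protocol over pipes: the sender advertises the formats it can produce,
# the receiver picks the one it prefers, and both fall back gracefully
# when the other end doesn't speak the protocol.
def pick_format(offered, preferred):
    """offered: formats advertised by the sender, richest first.
    preferred: formats this receiver understands, in preference order.
    Returns the chosen format, or None when the sender advertised
    nothing (i.e. doesn't speak the protocol; treat the stream as
    opaque bytes)."""
    if not offered:
        return None
    for want in preferred:
        if want in offered:
            return want
    # No overlap: take the sender's last offer, conventionally its
    # plainest fallback (e.g. text/plain).
    return offered[-1]
```

                    A libxo-enabled producer offering JSON, HTML and plain text would then settle on JSON when piped into a JSON-wanting consumer, with no flags needed on the producer.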


                      Jupyter console and, slightly more distantly, interactive notebooks both fit, imo.


                      I think the real tragedy is that we abandoned rio(1) (well, a descendant of it, anyway), in which this sort of thing — like a lot of things in Plan 9 — was so easy that it didn’t even seem noteworthy.

                  1. 2

                    How much of this is tooling related? If we were using editors that made it trivial to show “what executes here? where does this go to? where is this defined?”, would we be still thinking this is “spooky?”

                    1. 2

                      I think that Haskell IDE Engine has helped a lot, since it allows you to see how a polymorphic function is instantiated. So it helps a lot, yes.

                      1. 1

                        An excellent point. A great example is Causeway, a debugger for E which could trace execution across multiple distributed regions of computation, and understood asynchronous execution.

                      1. 1

                        I actually ran into this professionally - there are two extensions that would be desirable to enable at the same time, but you can’t, because both of their dependencies have the exact same symbol names. If you have both loaded, chaos ensues.

                        1. 1

                          Hopefully, there won’t be a Text Encoding menu in 2022.

                          Most documents served over HTTP are probably live enough to already have metadata properly declaring their encoding.

                          But browsers are sometimes used to view local files – and such files might be very old or come from unusual platforms – offline documentation, some harvested website, notes, reports etc. In such cases, metadata is often either lost or was never present, because years or decades ago authors expected a certain platform default encoding or were not aware of encodings at all.
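                          In that situation all a viewer can do is guess. A crude sketch of such a fallback in Python (nothing like a real detector such as chardetng; the candidate list is just an example):

```python
def decode_lenient(data, candidates=("utf-8", "shift_jis", "cp1252")):
    """Try a few encodings in order; latin-1 never fails (every byte
    maps to *some* character), so it's the last-resort fallback."""
    for enc in candidates:
        try:
            return data.decode(enc), enc
        except UnicodeDecodeError:
            continue
    return data.decode("latin-1"), "latin-1"
```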

                          1. 1

                            When viewing such files locally, have you had the need to manually override Firefox’s guess after Firefox 78?

                            1. 1

                              Unfortunately there’s some old pages on the Japanese web that don’t seem to express their encoding properly. Maybe it worked on a Japanese OS. Not to mention some western ones with similar confusion between non-UTF-8 and UTF-8…

                              It’s one of those features I wish I didn’t have to use, and could be tucked nice and out of the way, but it’s handy whenever you do run into that.

                              1. 1

                                Has Firefox failed to guess the encoding of such pages for you after Firefox 78? (In cases where the encoding remains undeclared as opposed to a server update introducing a server-level UTF-8 declaration despite the content being old.)

                                Edit: Context for why 78: https://lobste.rs/s/dbwqu6/chardetng_more_compact_character

                                1. 2

                                  I haven’t been browsing those kinds of pages in a while, so I’m not sure. I’ll let you know if I do though.

                          1. 2

                            That is really interesting. I’m always surprised that these things run open on the internet with a public IP. I’ve always expected them to be behind some kind of VPN.

                            Not that this makes the device “secure”, but it just adds one layer of security in case of a vulnerability in these IoT products.

                            Sometimes, I’m happy that I’m not working on airplane software, because I might not fly ever again…

                            1. 3

                              Oh the hardware is already scary enough, no need to look at the software ;)

                              1. 2

                                Working in manufacturing IT / with SCADA systems is really an experience. Lots of very old systems powering very expensive automation, and with no security.

                                I once saw a storage closet full of VAXes and jokingly asked if I could have one. Nope: backups for the overhead robotic transport system. But they were better than the Windows 2000 systems all around the factory floor that had to be aggressively firewalled off – fewer VMS worms running around the internet.

                                1. 1

                                  For the vintage computing people, this is why it’s hard to find VAXen/Alphas - because companies will buy them up for hot spares!

                                  1. 1

                                    The person I e-know who works at a plant that uses software written for VAX has told me they run it under emulation. Unfortunately I couldn’t find the URL for the software in my logs, but I do remember the home page looked very 90s…

                                    1. 1

                                      Replying to myself, the company is https://www.avtware.com/, and my e-friend says emulation is definitely a good option if you have source - maybe not if you don’t.

                              1. 35

                                The real gold nugget is in the bug tracker:

                                A few weeks ago, my kids wanted to hack my linux desktop, so they typed and clicked everywhere, while I was standing behind them looking at them play… when the screensaver core dumped and they actually hacked their way in! wow, those little hackers…

                                I have this unpopular opinion that, while here in the open source community we’re used to poking fun at MICROS~1 Windows because it crashes and it’s insecure, lots and lots of things have changed in Windows land since 1998. Security-wise, I’d be much more inclined to trust a Windows 10 machine than an Ubuntu machine, in spite of its malware telemetry.

                                1. 13

                                  I agree. I am by no means a Windows fan (quite the opposite, *nix user since 1994), but Microsoft has really invested in security. Some examples:

                                  • You can pick a set of folders for malware protection with controlled folder access. Only trusted applications can access those folders. Attempted access by a non-trusted application requires permission of the user.

                                  • Virtualization-based security uses Hyper-V to run the Windows kernel at a less privileged level with hardware isolation. Sensitive information is stored in memory not accessible to the main Windows kernel. The same mechanism is used to verify driver signatures, etc. (so that a compromised kernel cannot load rogue drivers).

                                  • A subset of store apps is sandboxed (they loosened the requirement to attract more traditional apps I guess).

                                  • Easy, user-friendly support for running an Edge browser in an isolated VM.

                                  • A driver and application verification model.

                                  There are some exceptions to this in the unix world, such as macOS and Fedora (with good secure boot support with module signing, SELinux, a push of Flatpaks with sandboxing, however imperfect), which do defense in depth. But largely the Linux threat model is as if we were still in the ’80s or ’90s, where gaining root access is the primary goal.

                                  1. 9

                                    And it’s not just that the threat model is out of date (which it is!) but also that the tech stack has long, long exceeded the complexity level at which people who work in their spare time can use it and not screw up, no matter how good they are and despite their best intentions. See my second-favourite KDE bug: https://bugs.kde.org/show_bug.cgi?id=389815 .

                                    People look at the threat models of operating systems like Windows and they (rightfully, to some degree) think that they’re an artefact of the closed-source development model producing applications distributed through all sorts of channels, where of course you’re not going to trust applications you got from the Internet.

                                    But lots of bits in that threat model are there to guard against all sorts of bugs that can get exploited, not directly against deliberately malicious applications. If you’re going to adopt the bug making machine (complex libraries and protocols, development and release habits like rolling releases and the like), you have to adopt the bug protection mechanisms, too, otherwise they keep biting.

                                    1. 2

                                      Wow, that bug is truly outrageous. Remote code execution with virtually no effort.

                                    2. 9

                                      Microsoft has significantly shaped up on security since ~2003, when XP was a superfund site. They’ve pushed many security mitigations (like W^X) into production (even with their extreme backwards-compatibility constraints), to the point that even Theo de Raadt thought they were doing a better job than the Linux ecosystem. Not to mention that I think they’re the only OS vendor that actually ships formally verified drivers…

                                    3. 3

                                      The telemetry is bad on non-professional versions; the enterprise version is pretty much clean.

                                    1. 32

                                      Well written article and an enjoyable read. The only part I disagree with is your stance on “early exit”; it turns out that this is the tiny hill I’m willing to die on, one I was unaware I cared about until now. I think this is primarily because if I read code that has a return within an if block, then anything after that block is an inferred else.

                                      I could become pedantic and retort that all control flow is a goto at the end of the day, but I won’t, because that would be silly and this was a genuinely good read. Thank you for sharing.

                                      1. 17

                                        I also was surprised how much I disagreed about the early exit.

                                        When I originally learned programming, I was told that multiple returns were bad and you should restructure your code to only return once, at the end. After learning go (which has a strong culture of return early and use returns for error handling), I tend to favor early returns even in languages that don’t use returns for error handling.

                                        The thought process I’ve adopted is that any if/return pair is adding invariants, eg if I’m at this point in the program, these previous statements must be true (or they would have exited early). If you squint at it, you’re partway to pre/post-conditions like in Eiffel/Design By Contract.
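                                        A tiny illustration of that invariant-accumulating style (my own example, not from the article): each early return discharges one precondition, so the code after the guards can assume a valid, non-empty input.

```python
def average(xs):
    # Each guard discharges one precondition; past this point the
    # happy path can assume a non-empty sequence of numbers.
    if xs is None:
        raise TypeError("expected a sequence, got None")
    if len(xs) == 0:
        return 0.0  # invariant below: xs is non-empty
    return sum(xs) / len(xs)
```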

                                        1. 3

                                          When I originally learned programming, I was told that multiple returns were bad and you should restructure your code to only return once, at the end.

                                          Ah, so functional programming </sarcasm>

                                          1. 2

                                            Pure functional programming is all about early returns, if anything. There’s just no return keyword. When everything is an expression, you can’t store now and return later.

                                            1. 1

                                              In a pure functional language the whole function is a single expression – I fail to see how it is “all about early returns”? Certainly you can simulate imperative return or raise using various tricks, but ultimately there is always just one expression and that is what gets returned; anything else is syntactic sugar.

                                              1. 3

                                                Conditionals and pattern matching are expressions. This means you’d have to put effort to avoid an early return.

                                                Consider a function that converts boolean values to string in the canonical structured style with a single return.

                                                function bool_to_string(x) {
                                                  var res
                                                  if (x) { res = "true" } else { res = "false" }
                                                  return res
                                                }

                                                In a functional style it’s most naturally written like this:

                                                bool_to_string x = if x then "true" else "false"

                                                We could put an extra effort to store the return value of if x then "true" else "false" but it looks like obviously useless effort:

                                                bool_to_string x =
                                                  let res = if x then "true" else "false" in
                                                  res
                                          2. 3

                                            I had a similar experience, from “only one return” to “return early” And I think it depends on the domain and language you are using too.

                                            One project I worked on was initially written in C and then moved to C++, and was started by people who mostly wrote Java. There is a common pattern in C of using goto near the return statement to free memory when you exit (think of it as a defer in Go but written by hand), and since gotos are the hallmark of bad programmers and returning early was not an option, the devs came up with an ingenious pattern

                                            int result = -1;
                                            do {
                                              if (!condition) {
                                                break;
                                              }
                                              result = 1;
                                            } while (false);
                                            return result;

                                            It took a while to decipher why it was there, but it then became commonplace, because promotions were heavily influenced by your “coding capability”.

                                          3. 2

                                            I think early exit is OK in the sense of basic guards against invalid inputs, when your language lacks the ability to express it in other ways - you know, C. (Probably the same for freeing resources at the end, since you don’t have finally or defer.)

                                            1. 2

                                              Strongly agree with you.

                                              I first came upon early returns as a recommended style in Go, under the name of “line of sight” (the article makes a good case for its benefits), and have since become a huge advocate, and use it in other languages as well.

                                              Knowing all your if-indented code represents a “non-standard” case makes the code really easy to scan and interpret. Aside from leading to flatter code, it leads to semantically uniform code.

                                              1. 1

                                                +1 on early exits. They’re clarifying: the first thing I do in many functions is check various corner cases and then early exit from them, freeing up my thought process for everything afterwards. Knowing that those corner cases no longer apply means I don’t have to worry that someone — and “someone” might even just be me, in the future — adds some code that doesn’t handle the corner cases well after the terminating } of the else block (or de-indent, or insert syntax here for closing an else block).

                                              1. 5

                                                i keep wrestling with phoenix and getting frustrated, in large part because a lot of the expected newbie workflow is based around one-time code generation, which makes it really inflexible in terms of iterating on early decisions. of course part of it could be that web programming is inherently a mess, but i would love to see a framework with a clean separation of generated code and user-modified code, so that the generated bit could be regenerated.

                                                1. 4

                                                  Yeah, I love working in Elixir, but I never use Phoenix because it feels incredibly overwhelming - just using Plug with EEx in the raw is more comfortable, but I suspect I’m missing out on a lot.

                                                  1. 2

                                                    I hear you. What I ended up doing was creating an example “library” of different cases of generated code.

                                                    It helped a lot with learning and also allows me to refactor along the way.

                                                    The other thing I do is that whenever I run a mix generation task, the output of that is always a single git commit, and the git message includes all the stdout from the gen task itself. The next commit is then any manual changes I make.

                                                    This doesn’t change your original complaint, but after a while, it certainly becomes less of an issue. I usually use gen tasks to just create a base template for me these days, then heavily modify after the fact.

                                                    1. 1

                                                      Whoops. If mods see this, just merge it.

                                                    1. 4

                                                      Reading the Relational Model was interesting, because while the relational model has become so ingrained in us over the years, it wasn’t patently obvious then. A few things stood out:

                                                      • SQL is arguably a bastardization of what Codd intended. It isn’t close to RMv1, let alone RMv2. The biggest axe he had to grind was duplicate rows in SQL as being a violation of the model (data as identity). Codd had some… interesting ideas I’m not sure I agree with, like four-valued logic. That only makes sense to me in the context of manual data entry - perhaps there’s a lost opportunity for types instead.
                                                      • There were other data models vying for attention. I forget what they were, but he does address them. Many seem like alternate twists on relational/hierarchical models; not quite NoSQL.
                                                      • He is dated/grounded by his assumption of users at terminals on mainframes interacting with the system that way. Unless you have a background in IBM mainframes, the entire indicators section seems puzzling.

                                                      I need to get around to reading Date. I’ve had some ideas floating in the back of my head around types and relational data for a while now…

                                                      1. 2

                                                        The Third Manifesto might be my favorite CS book (that or Project Oberon or maybe Clause and Effect).

                                                        1. 1

                                                          There is a weird discrepancy in SQL between the elegance and solidity of its foundations and the actual technology as it stands.

                                                          1. 1

                                                            Sadly, we’re so stuck in SQL land it’s very hard to get out. (And no, most replacements like NoSQL are worse).

                                                            I strongly recommend https://www.oreilly.com/library/view/sql-and-relational/9781491941164/

                                                            “SQL is full of difficulties and traps for the unwary. You can avoid them if you understand relational theory, but only if you know how to put that theory into practice. In this book, Chris Date explains relational theory in depth…”

                                                            Some of his advice seems extreme… but extreme benefits arise from sticking to it.

                                                          1. 49

                                                            Good lord, how is it elegant to need to turn your code inside-out to accomplish the basic error handling available in pretty much every other comparable language from the last two decades? So much of Go is well-marketed Stockholm Syndrome.

                                                            1. 16

                                              I don’t think that responding with a 404 if there are no rows in the database is something that any language supports out of the box. Some frameworks do, and they all have code similar to this for it.
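                                              For concreteness, the explicit mapping under discussion might look like this in a hypothetical Python handler (fetch_one is a stand-in for whatever database layer is in use):

```python
def get_user_response(user_id, fetch_one):
    """Map 'no rows' to a 404 explicitly instead of relying on a
    framework to guess; fetch_one returns a row dict or None."""
    row = fetch_one("SELECT id, name FROM users WHERE id = ?", user_id)
    if row is None:
        return 404, {"error": "user not found"}
    return 200, row
```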

                                                              1. 3

                                                And sadly, error handling is so often done poorly in the name of abstraction, usually a really bad abstraction that effectively boils down to ignoring that things can go wrong, meaning that one ends up digging through many layers when things actually do go wrong.

                                                                People eventually give up and copy paste StackOverflow[1] solutions in the hopes that one of them will work, even when the fix is more accidental and doesn’t fix the root cause.

                                                                The pinnacle was once checking code that supposedly could not fail. The reason was that every statement was wrapped in a try with an empty catch.

                                                But back to the topic. Out of the box is all good and nice until you want to do something different, which in my experience happens more often than one would think. People sometimes create workarounds: in the example of non-existing rows, doing a count before the fetch, so two queries instead of one, just to avoid cases where a no-rows error would otherwise be thrown.

                                                Now I am certainly not against (good) abstractions or automation, but seeing people fighting against them in many instances makes me prefer systems where they can easily be added and easily be reasoned about, like in this example.

                                                [1] Nothing against StackOverflow, just against blindly copy-pasting things one doesn’t even bother to understand.

                                                              2. 10

                                                                In what way is Go’s error handling turning my code inside out?

                                                                1. 6

                                                                  Pike has set PLT back at least a decade or two.

                                                                  1. 7

                                                                    It is possible to improve the state of the art while also having a language like Go that is practical, compiles unusually fast and is designed specifically to solve what Google found problematic with their larger C++ projects.

                                                                    1. 8

                                                                      compiles unusually fast

                                                                      There is nothing unusual about it. It’s only C++ and Rust that are slow to compile. Pascal, OCaml, Zig and the upcoming Jai are decent. It’s not that Go is incredible, it’s that C++ is really terrible in this regard (not a single, but a lot of different language design decisions made it this way).

                                                                      1. 3

                                                                        For single files, I agree. But outright disallowing unused dependencies, and designing the language so that it can be parsed in a single pass, really helps for larger projects. I agree on Zig and maybe Pascal too, but in my experience, OCaml projects can be slow to compile.

                                                                        1. 2

                                                                          I’m enjoying tinkering with Zig but I do wonder how compile times will change as people do more and more sophisticated things with comptime.

                                                                          1. 2

                                                                            My impression from hanging out in #zig is that the stage 1 compiler is known to be slow and inefficient, and is intended as a stepping-stone to the stage 2 compiler, which is shaping up to be a lot faster and more efficient.

                                                                            Also there’s the in-place binary patching that would allow for very fast incremental debug builds, if it pans out.

                                                                        2. 2

                                                                          Don’t forget D.

                                                                        3. 1

                                                                          My experience with Go is that it’s actually very slow to compile. A whole project clean build might be unusually fast, but it’s not so fast that the build takes an insignificant amount of time; it’s just better than many other languages. An incremental build, however, is slower than in most other languages I use; my C++ and C build/run/modify cycle is usually significantly faster than in Go, because its incremental builds are less precise.

                                                                          In Go, incremental builds are on the package level, not the source level. A package is recompiled when either a file in the same package changes, or when a package it depends on changes. This means, most of the time, that even small changes require recompiling quite a lot of code. Contrast with C, where most of the time I’m working on just a single source file, where a recompile means compiling a single file and re-linking.

                                                                          C’s compilation model is pretty bad and often results in unnecessary work, especially as it’s used in C++, but it means that you can just work on an implementation by just recompiling a single file every build.

                                                                          1. 1

                                                                            I have not encountered many packages that take more than one second to compile. And the Go compiler typically parallelizes compilation at the package level, further improving things. I’m curious to see any counter examples, if you have them.

                                                                        4. 4

                                                                          I don’t remember anyone in POPL publishing a PLT ordering, partial or total. Could you show me according to what PLT has been set back a decade?

                                                                        5. -2

                                                                          Srsly, I was looking for simpler, and was disappointed by the false promise.

                                                                        1. 1

                                                                          It’s wild to remember that the creators of Halo, the biggest 2000s gaming phenomenon, started as a quirky Mac developer.

                                                                          1. 1

                                                                            I don’t know if I’d call Marathon “quirky”. It was basically Doom with 3 dimensions[1] and a better story.

                                                                            [1] it was still sprite-based “2.5D” but the game had primitive physics for grenade launchers and rockets, and you had to aim up at enemies above you, unlike Doom.

                                                                          1. 9

                                                                            You mention similarity to PHP already, but you may not know that in 1999 PHP supported <script language="php"></script> too! I’m not sure if PHP still does.

                                                                            1. 4

                                                                              ASP had runat="server".

                                                                              1. 3

                                                                                Thankfully that was removed in the 7-series. Possibly 7.0.0?

                                                                              1. 23

                                                                                I think it’s going to be the year (and decade) of shell scripts written in YAML … Github Actions, Gitlab runners, Kubernetes config, sourcehut, etc. :)

                                                                                I have a few blog posts coming up about that

                                                                                1. 12

                                                                                  Oh. Well, as my grandmother would say, rats.

                                                                                  1. 8

                                                                                    This is excellent news for those of us who’ve been writing bash scripts for a long time. We have decades of experience with an idiosyncratic, designed-by-oh-fuck-it language that barely has a syntax and performs in all sorts of surprising ways! This is practically the same thing, it’s a new flavour, there are fewer people who know it, so the consulting rates are higher…

                                                                                    1. 2

                                                                                      designed-by-oh-fuck-it language


                                                                                    2. 2

                                                                                      That’s about how I feel … There are a lot of useful platforms locked behind YAML.

                                                                                      But it looks like there is a way out: JSON is a subset of YAML, so I changed my .travis.yml and sourcehut YAML to be generated. So I have .travis.yml.in, and .travis.yml, the latter of which is just JSON. [1]
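Since every JSON document is also valid YAML, the generated file can be plain JSON. A hypothetical .travis.yml in that style (the keys here are invented for illustration) might look like:

```json
{
  "language": "python",
  "python": ["3.6"],
  "script": ["./run-tests.sh"]
}
```

Any YAML parser will accept this unchanged, so the human-edited source format can be whatever you like.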

                                                                                      So I can change the source language to be something else, but I haven’t yet. My configs are only about 30 lines now, so it may not be worth it. But I have worked on big services with thousands of lines of config (e.g. when I used to work at Google). I would say that’s the norm there, and tens of thousands of lines is pretty common too.

                                                                                      I remember someone saying that Facebook is configured with hundreds of thousands of lines of configuration like this? https://research.fb.com/wp-content/uploads/2016/11/holistic-configuration-management-at-facebook.pdf

                                                                                      I’d be curious if people like it or dislike it.

                                                                                      So it looks like you can already replace YAML with Jsonnet, Cue, probably Dhall, etc. Does anyone actually do it? Anecdotally, it does seem like templating YAML is more popular? I wonder why that is. I only work with a handful of services that accept YAML now.


                                                                                      I think the functional languages are a little unfamiliar. Oil will use Ruby-like blocks for configuration, sort of like Vagrant or Chef but more succinct.
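As a sketch of what that might look like (this is invented syntax for illustration, not Oil’s actual grammar):

```
# Hypothetical block-style config, in the spirit of Vagrant or Chef
server {
  name = 'web-1'
  port = 8080
  restart = true
}
```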

                                                                                      Oil should be more familiar to people who know shell / Python / Ruby. If you know those languages, I don’t expect that Jsonnet, Cue, or Dhall feels very familiar. They feel like an “extra” thing. And the last thing we want in infrastructure management is yet another config language. (That’s why I think it makes sense to bundle it into a shell.)

                                                                                      Ditto for Nix. Nix is very similar to these languages – it’s basically an expression language over dynamically typed JSON-like records, but in this thread there is some negative feedback about that.

                                                                                      Anyway I want to fix this problem with Oil, but I’m not sure in which cases people would actually accept the extra “compiler”. It seems like people are very eager to template YAML, and embed shell in YAML, which is weird to me. I wonder why that is.

                                                                                      [1] https://github.com/oilshell/oil/blob/master/.travis.yml.in


                                                                                    3. 8

                                                                                      I already spend too much time being a YAMLgineer.

                                                                                      1. 3

                                                                                        I’ve thought about writing a language that uses YAML syntax but in the style of LISP or XSLT. It would be a total troll language, but I could see some projects actually using it.

                                                                                        1. 6

                                                                                          Github almost beat you to it, except:

                                                                                          if: ${{ github.event.label.name == 'publish' }}
                                                                                          runs-on: ubuntu-latest
                                                                                            - uses: actions/checkout@v2

                                                                                          clearly needs to be

                                                                                              if:
                                                                                                op: ==
                                                                                                left:
                                                                                                  op: .
                                                                                                  left: github
                                                                                                  right:
                                                                                                    op: .
                                                                                                    left: event
                                                                                                    right: ...
                                                                                                right: publish
                                                                                              runs-on: ubuntu-latest
                                                                                                - uses: actions/checkout@v2





                                                                                        2. 3

                                                                                          I’ve actually been working on my own CI system (not yet finished/released) because I got so fed up with this. After I ran out of the Travis credit thing I looked at GitHub Actions, and I just couldn’t get PostgreSQL to work: it just fails (after waiting 5 to 10 minutes, of course) and it’s pretty much impossible to inspect anything to see what’s going on. I already did this song and dance years ago with Travis, and it was painful then, and even more painful now.

                                                                                          It just sets up an image and starts /run-ci (or another program) from your repo in a container with runc. The script can be written in $anything supported on the container, and that’s it. While this won’t cover 100% of the CI usage cases, it’s probably suitable for half or more of them, and you can, you know, actually debug it.
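Stripped of the container isolation, that model is small enough to sketch in a few lines of shell. Everything here (the directory layout, the output of run-ci) is invented for illustration:

```shell
#!/bin/sh
# Sketch of the "CI = execute a script shipped in the repo" model.
# Real isolation (runc, images) is omitted.
set -e
work=$(mktemp -d)

# Stand-in for a freshly cloned repository that provides its own CI entry point.
mkdir -p "$work/repo"
cat > "$work/repo/run-ci" <<'EOF'
#!/bin/sh
echo "building..."
echo "tests ok"
EOF
chmod +x "$work/repo/run-ci"

# The CI system's whole job: enter the checkout and run the script,
# capturing its output as the build log.
cd "$work/repo" && ./run-ci
```

Because the entry point is an ordinary executable in the repo, you can run the exact same thing locally to debug it.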

                                                                                          1. 2

                                                                                            I’d like to buy an M1 Mac for the better battery life and thermals, but I have to run a lot of Linux VMs for my job, so it’s a nonstarter.

                                                                                            If VirtualBox or VMWare or whatever adds support for M1 Macs to run ARM VMs and I could run CentOS in a virtual machine with reasonable performance, it would definitely affect my decision.

                                                                                            (Note that I’d still have to think about it since the software we ship only ships for x86_64, so it would be…yeah, it would probably still be a nonstarter, sadly.)

                                                                                            1. 4

                                                                                              Parallels runs well, at least for Windows. I’ve heard the UI for adding Linux VMs is picky, but they’ll work fine too.

                                                                                              Much of the work around HVF/Virtualization.framework is to make Linux stuff drop-dead easy.

                                                                                              1. 3

                                                                                                And QEMU is a good option for those too, if you pick up the HVF patchset from the mailing list.

                                                                                                VMWare Fusion support is coming, VirtualBox will not be supported according to Oracle.

                                                                                              2. 3

                                                                                                I do have Parallels running Debian (10, arm64) on an M1. It was a bit weird getting it setup, but it works pretty well now, and certainly well enough for my needs.

                                                                                                1. 2

                                                                                                  There’s a Parallels preview for M1 that works: https://my.parallels.com/desktop/beta

                                                                                                  It has VM Tools for ARM64 Windows, but not Linux (yet).

                                                                                                  In my opinion Linux is a better experience under QEMU w/ the patches for Apple Silicon support (see https://gist.github.com/niw/e4313b9c14e968764a52375da41b4278#file-readme-md). I personally have it set up a bit differently (no video output, just serial) and I use X11 forwarding for graphical stuff. See here: https://twitter.com/larbawb/status/1345600849523957762

                                                                                                  Apple’s XQuartz.app isn’t a universal binary yet so you’re probably going to want to grab xorg-server from MacPorts if you go the route I did.

                                                                                                  1. 1

                                                                                                    Genuine question: why not just have a separate system to run the VMs? That keeps the battery life nice at the expense of requiring network connectivity, but outside of “on an airplane” use cases it’s not a huge issue, I find.

                                                                                                  1. 1

                                                                                                    Zip was actually one of the worst removable media formats of its day due to reliability issues. It was popular pretty much only because Iomega employed an aggressive razors-and-blades model and pushed it to OEMs.

                                                                                                    For better or at least more interesting formats, MO was wildly popular in Japan, and LS-120 was a superfloppy format that was backwards compatible with old floppies.

                                                                                                    1. 1

                                                                                                      Maybe your situation was different from mine in ’97–’98, but IIRC I hadn’t heard of LS-120 (I was living in Germany and just getting started being interested in all things hardware), and MO either wasn’t widely available or was too expensive. Zip was available, affordable, and mostly worked. So in my book it was still the best removable media format in that window between “3.5” floppies are big enough” and “whee, CD-RW”. That must’ve been 2–3 years, I’d say; I’d pinpoint CD-RW more in 1999 than 1998, because I think my first CD writer couldn’t do it.

                                                                                                      1. 1

                                                                                                        I owned an LS-120 drive and those things were awful. My sample size is small but 2 out of 3 of those 120MB floppies were unusable after a couple of days. Not sure what killed them, but never had those problems with plain floppy disks.

                                                                                                      2. 1

                                                                                                        And Zip looked like cuneiform tablets compared to the shitshow that was Iomega’s Jaz.

                                                                                                      1. 2

                                                                                                        I agree with the replies: Bitcoin is an interesting prototype rushed into production, and its advocates have pushed that instead of throwing the prototype away to learn from it on the next systems.

                                                                                                        1. 11

                                                                                                          This seems like a kind of arbitrary list that skips, among other things, iOS and Android, and that compares a list of technologies invented over ~40 years to a list that’s in its twenties.

                                                                                                          1. 7

                                                                                                            I noticed that Go was mentioned as a post-1996 technology but Rust was not, which strikes me as rather a big oversight! Granted at least some of the innovations that Rust made available to mainstream programmers predate 1996, but not all of them, and in any case productizing and making existing theoretical innovations mainstream is valuable work in and of itself.

                                                                                                            In general I agree that this is a pretty arbitrary list of computing-related technologies and there doesn’t seem to be anything special about the 1996 date. I don’t think this essay makes a good case that there is a great software stagnation to begin with (and for that matter, I happened to be reading this twitter thread earlier today, arguing that the broader great stagnation this essay alludes to is itself fake, an artifact of the same sort of refusal to consider as relevant all the ways in which technology has improved in the recent past).

                                                                                                            1. 2

                                                                                                              It’s also worth noting that Go is the third or fourth attempt at similar ideas by an overlapping set of authors.

                                                                                                              1. 1

                                                                                                                The author may have edited their post since you read it. Rust is there now in the post-1996 list.

                                                                                                              2. 3

                                                                                                                I find this kind of casual dismissal that constantly gets voted up on this site really disappointing.

                                                                                                                1. 2

                                                                                                                  It’s unclear to me how adding iOS or Android to the list would make much of a change to the author’s point.

                                                                                                                  1. 3

                                                                                                                    Considering “Windows” was on the list of pre-1996 tech, I think iOS/Android/touch-based interfaces in general would be a pretty fair inclusion of post-1996 tech. My point is that this seems like an arbitrary grab bag of things to include vs not include, and 1996 seems like a pretty arbitrary dividing line.

                                                                                                                    1. 2

                                                                                                                      I don’t think the list of specific technologies has much of anything to do with the point of how the technologies themselves illustrate bigger ideas. The article is interesting because it makes this point, although I would have much rather seen a deeper dive into the topic since it would have made the point more strongly.

                                                                                                                      What I get from it, and having followed the topic for a while, is that around 1996 it became feasible to implement many of the big ideas dreamed up before due to advancements in hardware. Touch-based interfaces, for example, had been tried in the 60s but couldn’t actually be consumer devices. When you can’t actually build your ideas (except in very small instances) you start to build on the idea itself and not the implementation. This frees you from worrying about the details you can’t foresee anyway.

                                                                                                                      Ideas freed from implementation and maintenance breed more ideas. So there were a lot of them from the 60s into the 80s. Once home computing really took off with the Internet and hardware got pretty fast and cheap, the burden of actually rolling out some of these ideas caught up with them. Are they cool and useful? In many cases, yes. They also come with side effects and details not really foreseen, which is expected. Keeping them going is also a lot work.

                                                                                                                      So maybe this is why it feels like more radical ideas (like, say, not equating programming environments with terminals) don’t get a lot of attention or work. But if you study the ideas implemented in the last 25 years, you see much less ambition than you do before that.

                                                                                                                      1. 2

                                                                                                                        I think the Twitter thread @Hail_Spacecake posted pretty much sums up my reaction to this idea.

                                                                                                                    2. 2

                                                                                                                      I think a lot of people are getting woosh’d by it. I get the impression he’s talking from a CS perspective. No new paradigms.

                                                                                                                      1. 3

                                                                                                                        Most innovation in constraint programming languages and all innovation in SMT are after 1996. By his own standards, he should be counting things like peer-to-peer and graph databases. What else? Quantum computing. Hololens. Zig. Unison.

                                                                                                                        1. 2

                                                                                                                          Jonathan is a really jaded guy with interesting research ideas. This post got me thinking a lot but I do wish that he would write a more thorough exploration of his point. I think he is really only getting at programming environments and concepts (it’s his focus) but listing the technologies isn’t the best way to get that across. I doubt he sees SMT solvers or quantum computing as something that is particularly innovative with respect to making programming easier and accessible. Unfortunately that is only (sort of) clear from his “human programming” remark.

                                                                                                                      2. 2

                                                                                                                        It would strengthen it. PDAs - with touchscreens, handwriting recognition (whatever happened to that?), etc. - were around in the 90s too.

                                                                                                                        Speaking as someone who only reluctantly gave up his Palm Pilot and Treo, they were in some ways superior, too. Much more obsessive focus on UI latency - especially on Palm - and far less fragile. I can’t remember ever breaking a Palm device, and I have destroyed countless glass screened smartphones.

                                                                                                                        1. 3

                                                                                                                          The Palm Pilot launched in 1996, the year the author claims software “stalled.” It was also created by a startup, which the article blames as the reason for the stall: “There is no room for technology invention in startups.”

                                                                                                                          They also didn’t use touch UIs, they used styluses: no gestures, no multitouch. They weren’t networked, at least not in 1996. They didn’t have cameras (and good digital cameras didn’t exist, and the ML techniques that phones use now to take good pictures hadn’t even been conceived of yet). They couldn’t play music, or videos. Everything was stored in plaintext, rather than encrypted. The “stall” argument, as if everything stopped advancing in 1996, just doesn’t really hold much water to me.

                                                                                                                          1. 1

                                                                                                                            The Palm is basically a simplified version of what already existed at the time, to make it more feasible to implement properly.