1. 3
  1. 8

    I don’t know about anyone else, but I tend to demand correctness from the compilers I use.

    1. 4

Me too, but it seems like we’re all about to tip into an age of telescoping justifications of probabilistic incorrectness. This article is a great example: it sounds like it demonstrates an understanding of what a compiler is, until you think about it for even a second and it becomes clear that there’s no real semantic comprehension in evidence, just a particularly sophisticated form of word association. I’d honestly suspect it was just the output of an LLM, except that an LLM would be very unlikely to misspell “steroids” as “steriods”, as has been done here, and I also don’t think LLMs have yet mastered the art of being this insufferable in their use of typographic emphasis.

    2. 4

      The dream of going more or less directly from English to code is as old as COBOL. Anyone remember 4GL languages? Today we have “no/lo-code” and LLMs are just an extension of that.

      1. 4

        As for LLMs, English is not the only language that works. I was amazed when I tried throwing weird variants of English like Singlish and Manglish at one and it still worked. It even works with Chinese, and when I asked for the Cantonese dialect it worked too, which is already way better than Google Translate, given that Google Translate can’t even handle Cantonese.

        1. 3

          So close! LLMs are translators. We think of folks as being “fluent in”, “able to speak”, or “reading” programming languages; if an LLM is also fluent in natural language and programming language, then it is performing a task analogous to human translators.

          This actually is quite enlightening, because it shows exactly where an LLM could possibly fit into a typical software-engineering role. An LLM would not be able to gather customer requirements nor synthesize them into a precise specification, nor could the LLM diagnose and repair operational issues. But it could translate the specification from natural language to programming language.

          Maybe the next wave of LLM tools will look like Dependabot, but slightly less unaware.

          1. 1

            Like self-driving cars, LLMs don’t need to be perfect at translating English into code, they just need to beat humans.

            And like self-driving cars, they’ll fail to do so in amusing ways for many years. (Until they don’t.)

            1. 4

              I wouldn’t call being driven full-speed into a highway dividing barrier and killed “amusing”, nor do I particularly enjoy the idea of sharing public roads with other amusing self-driving failures.

              1. 1

                Humans also drive into highway features and die quite often. IIRC, that crash ended in a death because the divider’s crumple attenuator had already been used up in a previous, non-autopilot crash, and I recall somebody going through its Street View history and noting that it got crumpled basically monthly.

                You already share public roads with teenagers, drunks and septuagenarians.

                1. 4

                  You already share public roads with teenagers, drunks and septuagenarians.

                  Yes, so clearly making the problem worse in well-known ways doesn’t hurt at all.

                  1. 1

                    If the self-driving car is better than a drunk driver, and you can get the drunk to use autopilot, you’ve just made the road safer. Just saying: as long as you can subdivide your group by skill, improving the average is not that difficult. If FSD were made mandatory for drunk Tesla drivers in towns with known good performance, road safety might plausibly improve even today.

              2. 4

                It’s also worth noting that ‘beat’ is quite situational. There are a lot of jobs currently that could benefit from some automation but where paying a programmer would not make sense. If a moderately competent programmer could spend an hour writing a script that saves a junior administrator 10 minutes, that’s economically unfeasible. If an LLM can do it in one minute without needing a programmer, it’s a huge productivity improvement.
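
                A rough break-even sketch of the trade-off described above (every wage and time figure here is an illustrative assumption, not data from the thread):

                ```python
                # Break-even arithmetic for "is this automation worth building?"
                # All rates and durations below are assumed for illustration.

                def break_even_uses(build_minutes, builder_rate_per_hr,
                                    minutes_saved_per_use, user_rate_per_hr):
                    """Number of uses before the script pays for its build cost."""
                    build_cost = build_minutes / 60 * builder_rate_per_hr
                    saving_per_use = minutes_saved_per_use / 60 * user_rate_per_hr
                    return build_cost / saving_per_use

                # Programmer route: 1 hour at an assumed $100/hr to save an
                # admin (assumed $30/hr) 10 minutes per use.
                print(break_even_uses(60, 100, 10, 30))   # 20.0 uses to break even

                # LLM route: 1 minute of the admin's own time, no programmer.
                print(break_even_uses(1, 30, 10, 30))     # 0.1 — pays off on first use
                ```

                The point the arithmetic makes concrete: the programmer version only pays off for tasks repeated many times, while the near-zero build cost of the LLM version makes even one-off automations worthwhile.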