1. 13
  1.  

    1. 5

      Not trying to pick on this blog post in particular, but it bothers me how so much of the anti-AI writing is framed. It feels like they almost always start out with an explanation of “why the AI sucks” (quality issues, as described in the article), as if they would gladly welcome the AI if it could produce content on par with humans. I mean, this is one interpretation of the title, “the empty promise” being that of matching human quality.

      This opens up the articles to several valid counterarguments, the gist of them being that it’s not unreasonable to expect the limitations of AI (the creativity issues) to be overcome in the near term, or even today. Keeping a human in the loop is the most immediately accessible fix, but you can also try to circumvent some of the issues with hallucinations or meandering by having the AI plan its prose first and then execute it in smaller chunks (sketch at the end of this comment). Not to mention that the context windows of some models are growing. Finally, for this particular piece, let’s not forget that video game dialogue and plots can be notoriously, laughably bad. If your standards for AI (creativity) are so high that some humans won’t meet them, are they reasonable standards?

      I wish more of these articles would make ethical objections to the use of LLMs their central argument. If I were to forbid the use of AI in my product, it would have to be because I have a less flimsy objection to its use than “it’s bad.”
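
      To make the “plan first, write in chunks” idea concrete, here’s a rough sketch. It assumes nothing about any particular provider: ask_llm is just a placeholder for whatever model call you’d actually use.

      ```python
      # Sketch of "plan first, then write in small chunks" for generated dialogue.
      # ask_llm is a hypothetical stand-in for a real model call (API, local model, etc.).

      def ask_llm(prompt: str) -> str:
          raise NotImplementedError("wire this up to your model of choice")

      def write_scene(premise: str, num_beats: int = 5) -> str:
          # 1. Ask for a short numbered outline so the model commits to a structure up front.
          outline = ask_llm(f"Outline {num_beats} numbered story beats for this scene: {premise}")
          beats = [line.strip() for line in outline.splitlines() if line.strip()][:num_beats]

          # 2. Expand one beat at a time, feeding back what exists so far, so each
          #    chunk stays small and anchored to the plan instead of meandering.
          scene: list[str] = []
          for beat in beats:
              chunk = ask_llm(
                  "Write a few lines of game dialogue for this beat.\n"
                  f"Beat: {beat}\n"
                  f"Dialogue so far:\n{''.join(scene)}"
              )
              scene.append(chunk + "\n")
          return "".join(scene)
      ```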

      1. 4

        There’s bad as in low quality, and there’s bad as in morally wrong. I too find the ethical arguments much more compelling, but in any case, these are different kinds of arguments. I don’t think they can be fairly compared with each other, except on an amoral utilitarian basis, which is (ironically) what you’ve done in this comment!

        When I protest the use of AI as unethical (bad for others) or unhealthy (bad for oneself), I do so not to make a more convincing argument or, as you say, one less open to obvious and valid counterarguments, but rather to expand the frame of discussion beyond aesthetics or utility, where things often tend to get stuck here in the realm of technology talk. I want people to consider these dimensions, even if they come to different conclusions than mine: I’m less interested in winning than in teaching.

        1. 2

          There’s bad as in low quality, and there’s bad as in morally wrong. I too find the ethical arguments much more compelling, but in any case, these are different kinds of arguments.

          Agreed.

          I don’t think they can be fairly compared with each other, except on an amoral utilitarian basis

          I’m not sure where you get the notion that I’m trying to make an amoral utilitarian comparison here. (I also don’t understand what you mean by amoral in this context.) The article’s thesis is “AI is bad because it’s (1) low quality and (2) unethical.” My point is that I would rather the article be about (2) because I find its claims on (1) not compelling. I of course respect the right of the author to say what they want, and I’m not saying that I think it’s invalid to ever discuss (1) — rather it would validate my own biases to read compelling evidence of it.

          I’m less interested in winning than in teaching.

          I don’t think we disagree. I would personally rather not teach poorly, though. Blanket-rejecting AI for dubious claims of “fundamental” inadequacy feels too preachy and vibes-based to me. I think it’s fine to make this kind of blanket-rejection if you have objections, for example, to the AI’s existence (such as (2) — or even spiritual ones, as linked on this site a few months ago), but not if you’re going to talk of its utility.

          1. 1

            I don’t think we disagree either. I just meant “amoral utility” in the sense that, say, a nuclear weapon or a veal crate or whatever technical artifact with inextricable moral implications can be evaluated as “better” or “worse” purely in terms of its functional effectiveness, with no regard to those implications at all: what we were calling “quality”.

            I understood you to be (probably inadvertently) comparing the two kinds of arguments on the basis of their effectiveness (i.e. “quality”), which I do think is ironic, although I intend no disrespect.

      2. 2

        so far, i’ve been using AI as a standalone tool, rather than using it as a baked in feature of other tools

        i use it often, but i prefer there being a bit of friction with the context switch. i don’t want to train my brain to completely outsource creativity without making an attempt first

        i think there’s some merit in integrated AI review tools, etc. but i’m still hesitant to do it in a realtime way

        (i have the same feelings about autocomplete, but weaker)

        1. 1

          What exactly is the Yarn Spinner?

          1. 2

            It’s a tool (linked from the start of the article) used for writing game dialogue. It’s been used in a bunch of popular indie titles.

            1. 1

              According to the link in the blog, it’s a company that makes game development tools.

            2. 1

              Last week I asked Grok something like: “If we let 1,000 LLM instances talk to each other for 100 years, will they invent anything new?”

              The response started with … “What a fascinating thought experiment! …”

              1. 8

                The feigned personality of LLMs always irritates me. You’re not fascinated, you’re probabilistic token output. Just answer the question.