1.
  1.

    That’s actually really cool for tasks full of simple procedure and formality, though I wouldn’t want to be the human who has to parse machine-written, occasionally falsely confident English, so I’d hope people keep the author’s note in mind:

    ([personal assistant; ChatGPT] who is often wrong - need to read their work thoroughly)

    1.

      We’re probably already doomed to have chat bots on both ends of the conversation.

      1.

        I remember having arguments about what the future of interoperability standards would be - XML? JSON? SGML? How would government and business communicate? The answer apparently is AI written pseudo-English.

        1.

          I appreciate the sentiment, but I don’t see anything pseudo about either option presented in the article? Are you expecting future generators to devolve from this?

          1.

            They could! It happened with a pair of Facebook chatbots back in 2017 when the parameters of both models were allowed to float.

            And I mean, why not? If it’s more efficient, more power to ’em.

        2.

          TBH, I’d love reliable fact extraction for most of my incoming correspondence.
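
          A minimal sketch of the kind of structured extraction meant here, assuming a real system would use an LLM or NER model; the regex rules and field names below are hypothetical stand-ins that just illustrate the desired output shape:

          ```python
          import re

          def extract_facts(text: str) -> dict:
              """Pull a few structured 'facts' out of free-form correspondence."""
              return {
                  # ISO-style dates, e.g. 2024-03-01
                  "dates": re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text),
                  # Dollar amounts, e.g. $1,200.50
                  "amounts": re.findall(r"\$\d[\d,]*(?:\.\d{2})?", text),
                  # Email addresses of the parties involved
                  "emails": re.findall(r"[\w.+-]+@[\w-]+\.\w+", text),
              }

          msg = "Please remit $1,200.50 by 2024-03-01; questions to billing@example.com."
          print(extract_facts(msg))
          # → {'dates': ['2024-03-01'], 'amounts': ['$1,200.50'], 'emails': ['billing@example.com']}
          ```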

        3.

          The note would be warranted if this were completely automated correspondence. Here the text was verified by a human.

          This is more like “I have all the facts, I just don’t feel like coming up with a formal text for them”.

        4.

          And now imagine the other side is also ChatGPT. If it were trained on all the laws, legal commentaries, and court decisions of a country, you could even use ChatGPT as a judge. Brave New World.