1. 18
    1. 36

      LLMs are very bad at facts. They routinely invent stuff and confidently tell lies. As much as people like to bash Wikipedia for being unreliable, it’s much more rooted in reality than any LLM.

      PS: please excuse me if I missed something, I used GPT to get the key points from the post, it’s too long.

      1. 8

        Whether this is a joke or not, I got a good laugh out of it. Kudos.

      2. 7

        I read through the whole article, hoping to get a clear answer to how ChatGPT killed the author’s passion project, but never found one.

        Not sure why I hoped for such an answer, when the title of the article contains no such promise.

        I can totally infer a reason, of course, but… you know.

        1. 3

          I came away from the article with the same feeling as you. I feel like the ChatGPT in the title was essentially clickbait.

          1. 2

            “Soon after everybody started talking about ChatGPT and other LLMs (and even before I had a chance to play with it myself—I wasn’t too eager, to be honest, yet I followed the theory and examples as a curious bystander), I had a consistent internal explanation of why this thingy was the right way to the goal that I looked for years (and all in the wrong direction).”

            I felt largely the same as you. But my takeaway from the examples and this paragraph at the end is that LLMs are a better solution to the problem the author was trying to solve. The examples were things ChatGPT would provide a fairly solid answer to.

            It also struck me that we’ve seen OpenAI plug ChatGPT into some of the same fact-providing interfaces the author was building. I don’t think the author was fully going in the wrong direction… I think they were just focused on the factual backend, and their interface idea needed development. LLM as interface, live facts on the backend.
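
            For a concrete sense of that division of labor, here is a minimal sketch (every name in it is hypothetical, and the dictionary stands in for a live data source) of the “LLM as interface, facts on the backend” pattern:

            ```python
            # Sketch of "LLM as interface, live facts on the backend".
            # The model would only phrase the answer; the fact itself comes
            # from a deterministic backend lookup, never from generation.

            # Stand-in for a live factual backend (e.g. a Wikipedia/Wikidata API).
            FACTS = {
                ("Ukraine", "capital"): "Kyiv",
                ("Everest", "height_m"): "8849",
            }

            def lookup_fact(entity: str, attribute: str) -> str | None:
                """Deterministic retrieval step; no generation involved."""
                return FACTS.get((entity, attribute))

            def answer(entity: str, attribute: str) -> str:
                fact = lookup_fact(entity, attribute)
                if fact is None:
                    return "I don't know."  # refuse rather than hallucinate
                # In the real pattern an LLM would turn `fact` into fluent
                # prose; a template stands in for that call here.
                return f"The {attribute} of {entity} is {fact}."

            print(answer("Ukraine", "capital"))  # The capital of Ukraine is Kyiv.
            ```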

          2. 4

            Maybe it’s orthogonal. Maybe you could integrate the two with GPT plugins. If infoboxer really does something novel and you think it’s unique or interesting, it might not be invalidated. Also, ChatGPT or any company can still enshittify; we don’t know what’s going to happen.

            There have been other sorts of sources of truth, parsers, and attempts at world modeling: ConceptNet, HowNet, and WordNet, for example. I think these things still have value in the era of LLMs, because ChatGPT still does not have a world model per se, and it’s possible that the “attention is all you need” party trick runs out and some other technique is needed to continue the performance increases.
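
            To make that concrete: a WordNet lookup via NLTK returns curated, deterministic facts rather than sampled text. A minimal sketch, assuming the `wordnet` corpus has been downloaded with `nltk.download("wordnet")`:

            ```python
            from nltk.corpus import wordnet as wn

            # Every result is traceable to a curated database entry,
            # not a probabilistic completion.
            for synset in wn.synsets("bank"):
                print(synset.name(), "-", synset.definition())

            # Hypernyms expose an explicit is-a hierarchy: a small,
            # inspectable world model.
            dog = wn.synset("dog.n.01")
            print([h.name() for h in dog.hypernyms()])
            # ['canine.n.02', 'domestic_animal.n.01']
            ```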

            This tale reminds me of The Bitter Lesson, which is bitter for complex reasons. I think the thing to ask is: if computation doubled, would this project benefit from it? If GPU computation doubled next year, would it still?

            1. 4

              Warning: this post is only Part 1, which describes how “ChatGPT has killed my passion project”. There will apparently be at least two more posts (on https://zverok.space/writing/) before the “I am fine” part is explained.