
I am irritated by the authors’ attitudes. On the orange site, they had a show-and-tell thread where it was obvious that the authors wanted to claim the glory for themselves. However, several folks pointed out that applying regular and context-free parsing techniques to LLMs is obvious enough that it had already been done several times; for example, we previously discussed ParserLLM here on Lobsters. Here’s what the Outlines paper says about ParserLLM:

> Such features have recently been generalized in prompting libraries and interfaces, but their applicability can be limited by their scaling costs.

That’s it! I find this a rather poor acknowledgement of prior work, and examining the citations suggests that this line lumps ParserLLM together with OpenAI’s recent “Functions” product, which operates through reinforcement learning and re-prompting rather than grammatical constraints.
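
For anyone who hasn’t seen it, the core trick is small enough to sketch. The toy below is my own illustration, not Outlines’ or ParserLLM’s actual code: at each decoding step it keeps only the tokens whose concatenation with the text so far remains a viable prefix of a regular expression, using the third-party `regex` package’s partial matching. `VOCAB`, `PATTERN`, and the greedy “decoder” are all made up for the example.

```python
import regex  # third-party `regex` package; unlike stdlib `re`, it supports partial matching

# Toy sketch of regex-constrained decoding, NOT Outlines' or ParserLLM's
# actual implementation. At each step, keep only the vocabulary tokens
# whose concatenation with the text so far is still a viable prefix of a
# full match of the pattern.
VOCAB = ["true", "false", "[", "]", ",", " "]
PATTERN = regex.compile(r"\[(true|false)(, (true|false))*\]")

def allowed_tokens(prefix: str) -> list[str]:
    """Tokens that keep `prefix` extendable to (or already) a full match."""
    return [tok for tok in VOCAB
            if PATTERN.fullmatch(prefix + tok, partial=True)]

# Stand-in "decoder": always take the first legal token. A real LLM
# would instead mask the logits of disallowed tokens before sampling.
text = ""
while not PATTERN.fullmatch(text):
    text += allowed_tokens(text)[0]
print(text)  # -> "[true]"
```

The part that is genuinely non-trivial (and, as I understand it, what the Outlines paper actually contributes) is compiling the pattern into a finite automaton indexed over the tokenizer’s vocabulary, so the per-step check is a table lookup rather than rescanning the whole prefix as this toy does.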

Disclaimer: I’m salty about this because I mentioned the idea in passing to @simonw back in April, and by June I noted that @mattr had already done this work. (I could have sworn that I talked to @mattr in the meantime, but I can’t find my old posts.) The Free Software community doesn’t need to write big Show HN posts in order to get stuff done. We can point out the fundamentals of computer science, hack up some packages, and release them under a decent license, all without publishing a paper in a journal.