It is very impressive that we’re compressing these things into a browser! Honestly, that fact gives me hope that this industry can escape the “all knowledge work as a service” outcome that looms over us.
But I really wish our evaluation of LLMs would move beyond the “wow, I’m amazed this works!” phase and towards an evaluation of performance. This approach does work for certain classes of problems. We know that now. It also fails in specific areas, and turning a really bad flub (e.g., the Cher question) into a near-hit seems counter-productive at this point.
Given the central importance of the “Prolly Tree” (something I was unfamiliar with before this article), I was surprised that they basically never link directly to the really excellent introduction to the structure in the Noms documentation.
For those interested in the underlying structure without wading through the product blog, it’s here: https://github.com/attic-labs/noms/blob/master/doc/intro.md#prolly-trees-probabilistic-b-trees
I’m very thankful to the authors for exposing this. It is really cool.
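If it helps make the linked doc concrete, here’s a toy illustration of the core trick (my own sketch, not the authors’ code): a Prolly tree picks its node boundaries by hashing the entries themselves, so boundaries are a property of the data rather than of its position. The hash function, keys, and boundary target below are made-up stand-ins; Noms actually uses a rolling hash over serialized chunks.

    /* Content-defined chunking, the idea underneath Prolly trees: a node
     * boundary falls wherever an entry's hash matches a target pattern,
     * so an insert or delete only reshapes the chunks near the edit. */
    #include <stdint.h>
    #include <stdio.h>

    #define BOUNDARY_TARGET 4   /* roughly one boundary per 4 entries, for the demo */

    /* Placeholder per-entry hash (FNV-1a over the key's bytes). */
    static uint64_t hash_entry(uint64_t key) {
        uint64_t h = 1469598103934665603ULL;
        for (int i = 0; i < 8; i++) {
            h ^= (key >> (8 * i)) & 0xff;
            h *= 1099511628211ULL;
        }
        return h;
    }

    int main(void) {
        uint64_t keys[] = {3, 7, 12, 19, 25, 31, 40, 44, 52, 61, 70, 83};
        int n = sizeof keys / sizeof keys[0];

        printf("chunk: ");
        for (int i = 0; i < n; i++) {
            printf("%llu ", (unsigned long long)keys[i]);
            /* The boundary depends only on the entry, not on its index. */
            if (hash_entry(keys[i]) % BOUNDARY_TARGET == 0 && i != n - 1)
                printf("\nchunk: ");
        }
        printf("\n");
        return 0;
    }

Because the split points depend only on the entries, editing one key disturbs only the chunk (and its parents) around the edit, which is what makes the diff and sync story in the doc work out.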
This is a curiously out-of-date sentiment from DHH. I’d expect this, by the title, to be a critique of WASI’s development rather than a complaint about the deployed container ecosystem, which most of the industry knows is complex to the point where almost every successful org using it succeeds because of the parts it doesn’t use. Given this, the article feels more like a January 2020 article than a January 2023 article.
Are folks generally aware that there’s a massive change incoming in this space because of WASM+WASI? Relevant to these complaints: the goals include reclaiming cold-start performance, reducing I/O overhead (a longstanding issue with many container implementations), direct modeling of how components compose (as opposed to an implicit model), ease of scheduling, lower overhead, and language agnosticism with only a minimal performance cost. All of these will substantially change the requirements that systems like Kubernetes address, and in my opinion they will do so for the better.
Were I addressing Kubernetes problems and then publicly complaining about them, I think I’d be talking about how I’m running towards this next-generation design rather than suggesting that the bad old days of VM shuffling or fleet-managed, statefully modified machines were my goal.
Isn’t this the direct threading approach?
Yes, that’s precisely what this is. It’s a technique that’s been around for quite some time, but I’m always surprised it’s still effective on modern CISC processors. I suppose reducing your hot-path code size is good for any architecture.
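For anyone who hasn’t run into it, here’s roughly what direct-threaded dispatch looks like in C, using GCC/Clang’s labels-as-values extension. The opcodes and the tiny program are invented for the example; a real interpreter would also carry operands and bounds checks.

    #include <stdio.h>

    int main(void) {
        enum { OP_INC, OP_DEC, OP_HALT };

        /* Handler addresses, indexed by opcode (GCC's &&label extension). */
        static void *handlers[] = { &&op_inc, &&op_dec, &&op_halt };

        /* "Compile" the program into handler addresses so each handler can
         * jump straight to the next one; no central switch in the hot path. */
        void *program[] = {
            handlers[OP_INC], handlers[OP_INC], handlers[OP_DEC], handlers[OP_HALT]
        };

        void **pc = program;            /* threaded instruction pointer */
        int acc = 0;

        #define NEXT() goto *(*pc++)    /* dispatch the current slot, then advance */
        NEXT();

    op_inc:
        acc++;
        NEXT();
    op_dec:
        acc--;
        NEXT();
    op_halt:
        printf("acc = %d\n", acc);      /* prints acc = 1 */
        return 0;
    }

The appeal is that per-instruction dispatch collapses to a single indirect jump at the end of each handler, and the central dispatch loop disappears from the hot path entirely.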
This reminds me of Dotsies (https://dotsies.org/), which is another attempt to compress reading down into a very small space. In the case of Dotsies, the plan was to abandon latin letterforms entirely.
While these are neat to look at (and often make really cool flair additions to tech projects like circuit boards; I’ve actually put Dotsies pin names on a few custom board silkscreens I’ve ordered), the only practical use I know of for them is rendering text in very low pixel-count environments like LED matrices or old LCD displays.
Fascinating concept. The lack of case (upper/lower) is a bit of a drawback for general use (perhaps an additional dot can be used to denote a capital letter).
I’m reminded of the fictional Marain from Banks’ Culture universe.
This is actually pretty normal for English rewrite attempts. Case is discarded (or at least greatly reduced in importance) in the majority of the English-rewrite attempts I’m aware of. Quikscript, Shavian, and Deseret either discard case or use only scale to indicate it.
Come to think of it, Cyrillic typically doesn’t use distinct letterforms for upper and lower case either; lowercase letters are mostly just smaller versions of the capitals.
Thanks for the mention of Deseret, TIL.