Threads for joseferben

    1. 2

      i’ve been seeing marginalia pop up every now and then but i didn’t realize it’s open source! learned quite a bit about searching and indexing while browsing the code, great stuff

      1. 61

        I don’t use AI precisely because I am curious.

        I build things and program because I genuinely enjoy taking things apart and understanding, truly understanding them.

        AI may be able to solve the task faster, and maybe it can explain the solution, but it won’t help me intuitively understand. Using AI is like copying the teacher’s answer sheets for your homework.

        I’ve tried AI tools, and when a new version comes out I try that too, but so far I’ve had no reason to actually use them.

        Often enough the only Google search results for what I’m working on are my own posts. In that case ChatGPT isn’t any more helpful either.

        1. 20

          I wish I could upvote more than once.

          Not only are good programmers curious, but they are also responsible. Say we were lawyers and the entrenched culture was to use AI and that this was not considered malpractice. The same question the author asks, flipped on its head as you have done, would be, “Don’t you want to see for yourself what the actual case law is, you know, like, from the court records—the source?” When some junior dev comes to me with questions about a Git commit with my name on it, I need a better answer than to shrug and say, “Copilot wrote it. LGTM.”

          I admit, sometimes I’m fresh out of ideas. I would actually occasionally rather have a dialog with a BS generator than generate bad ideas on my own. In the words of Don Draper, I need someone to fail faster. But it doesn’t happen often enough that I want that BS generator looking over my shoulder constantly volunteering solutions.

          Another aspect of this curiosity argument is internships. The kinds of tasks one assigned to interns two years ago—people who were paid very little to be very curious—are now often assigned to LLMs. I don’t have any data, but my impression is that the appetite for hiring software engineering interns has rapidly diminished since ChatGPT. I never enjoyed correcting interns, but talking with them about what they learned and watching their careers progress was immensely gratifying.

          1. 18

            I build things and program because I genuinely enjoy taking things apart and understanding, truly understanding them.

            I’m exactly the same way… and that’s one of the reasons I’m so enthusiastic about LLM-assisted programming. I can do SO MUCH more exploratory programming with Claude by my side.

            I’ve been wanting to figure out WebSockets and SSEs and <iframe sandbox> for years. Thanks to Claude I’m finally getting stuck in to all three of those, learning at a furious pace where previously the friction involved stopped me from getting started with them.

            If you’re insatiably curious, LLMs are a gift that keeps giving.

            1. 11

              Funny you mention SSEs — I also learned about those recently, and find them cool!

              For my approach, I read some blog posts, looked at some libraries, and then wrote an SSE-based application without using any LLM-based tools. Along the way, I found interesting articles about how Wikimedia uses SSEs, as well as a library to expose Kafka via SSE. Reading through such libraries’ GitHub issues and code was also interesting and useful. Coding was a small part of my journey.

              To each their own (I won’t knock anyone who benefits from LLM-based tools!), but I struggle to see where I’d fit it into my workflow. For doing the research? For writing the code? I like both of those things… What do you do?

              I actually think I’m more likely to ask ChatGPT about something I’m not curious about and just need a solution for (like copying files out of a Docker container).

              1. 4

                yes! additionally, llms allow me to postpone learning about details i don’t care about in that moment, while providing an okay-ish implementation of them. i feel much more in charge when exploring, frictionless is truly the right word.

                1. 2

                  Agreed. LLMs, in some sense, are finally the [imperfect but] infinitely patient teacher I’ve always wanted to handle my outsized curiosity.

                2. 8

                  This is exactly my experience as well.

                  I use them when I don’t know what something is called, because sometimes they can tell me the right keywords. They look like they work when you ask them to do things that have been done a million times before, but if you ask them to do something new, they mostly give you garbage. When I point out their errors, they give me even worse garbage. Better to spend the time learning the problem or the tool than programming the AI assistant.

                  1. 5

                    I noticed something similar as well. It’s incredibly hard to get useful results out of a fringe language or a library. I guess as a rule of thumb, if you don’t find promising results when searching for your problem online, then an LLM might not be so useful either.

                    1. 3

                      It’s incredibly hard to get useful results out of a fringe language or a library.

                      Then Rust is probably already fringe. Beyond the most basic borrowing problems, it’s hard to get anything useful out of the LLM.

                  2. 2

                    A few days ago I used Claude to help me solve the monthly jane street puzzle: https://www.janestreet.com/puzzles/current-puzzle/

                    First, I had it build me a little interactive visualization of the problem. I could drag points around and see the relevant regions of space. This allowed me to have the breakthrough that reframing the problem a certain way made it much easier to solve.

                    I needed regular google, and pen and paper, to find the final equation, but once it came to actually integrating this equation Claude helped me write the majority of the code and saved me from at least an hour of reading docs.

                    Sure, using AI here is like “copying the teacher’s answer sheets for your homework” but the thing I’m trying to do here is not “learn frontend” or “memorize the interface to a numeric integration library”, I am completely fine with not truly understanding those components. Claude allowed me to mostly ignore those components in service of actually truly understanding this fun math problem.

                  3. 8

                    i tried using kamal outside the rails ecosystem and it didn’t feel great. i love the “point-to-ip-and-deploy” approach though and the v8 changelog is great, especially sqlite by default! i wish something like this existed in a type-safer language.

                    1. 1

                      What about Kamal didn’t feel great?

                      1. 2

                        it wasn’t possible to customize the paths of the .env and the .yaml files. the locations probably make sense in a rails app, but my non-rails app doesn’t have production secrets in the local .env.

                        also i had to run kamal as a docker container mounting the docker socket because i didn’t want to deal with ruby. is this really the best way to distribute ruby as an executable?

                        overall kamal felt opinionated; its development is clearly driven by a very specific use case. i couldn’t figure out how to get it to work with an existing typescript monorepo.

                        https://www.sidekickdeploy.com/ is similar to kamal but less opinionated.

                        1. 1

                          overall kamal felt opinionated

                          Oh for sure. Yeah, that’s like, these folks’ whole thing.

                    2. 1

                      neat approach, also very llm friendly!

                      1. 4

                        What does “llm friendly” mean in this context? An easy way to access LLM APIs? Something LLMs can easily help generate? Something else?

                        1. 2

                          oh, i meant in the context of copilot, cursor, supermaven etc, so llm assisted coding.

                          having the whole context in a single file makes things a bit easier, for instance: no need to select certain files to send along with the prompt, and it’s easier to apply diffs generated by the llm to a single file.

                          also if the file fits into the context window (or even if a large related chunk fits), i imagine llm autocomplete works better without indexing the code base and storing embeddings.

                          so i’d say this single-file approach works well even when llm tooling is limited, maybe only a large context size is needed.

                          appreciate the reminder to elaborate, my original comment was a bit anemic.

                      2. 2

                        congrats on the launch!

                        i really like the minimalistic approach, for instance that migrations are additive only. also the combo server-side templates + tailwind + daisyui works so well for shipping fast.

                        i’ve been building something similar myself over at https://www.plainweb.dev but decided to embrace htmx instead of client-side scripting. i wonder what the wasm story looks like in feedback? from the website it kinda feels like “there you go: it can run go in the browser, figure out how to build an app with it”. maybe worth looking into htmx so most of the things can be done in go?

                        i’ve been looking into templ + htmx in the past and it looked like a nice way to do things in go. but ultimately decided to settle for typescript and jsx because of the flexible type system.

                        regarding communication:

                        i get the appeal of using “x on rails” but i wonder whether this truly highlights the benefits of feedback. when i opened the landing page my immediate thoughts were “minimalism” and “simplicity”, maybe also “constraining by design”. more than “feature packed”, “battle-tested” or “rich ecosystem”, which is how i would describe rails.

                        i mention rails, django and laravel briefly in the plainweb.dev docs as well. haha i guess it’s hard to avoid comparing yourself to the big ones! good luck with feedback, looks really cool.

                        1. 1

                          thanks! yeah I need to make the wasm section much better :)

                          A sample project is https://github.com/andrewarrow/epoch. If you get this running locally and look in the “browser” package, you can see all the wasm magic.

                          Love the “feature packed”, “battle-tested” or “rich ecosystem” terms. Yeah, I think I need to stop saying “rails” at all. No one gets that, and people who do just think “old irrelevant thing from years ago”.

                          1. 1

                            thanks for the demo project! love the screenshots, you should definitely put them on the landing page!

                            Love the “feature packed”, “battle-tested” or “rich ecosystem” terms.

                            these are really hard to back up tho!

                            1. 1

                              bring it! Here’s a high traffic site in production: https://linkscopic.com/

                        2. 1

                          Happy birthday lobsters!

                          1. 4

                            This is a very similar stack to what we’ve been using on an internal tool, just a bit further along. Interested in testing it out and seeing the friction in swapping out a decision or two. It seems a nice starting point for greenfield development.

                            1. 2

                              awesome! super curious to learn which decisions you would swap out and why. how is it going so far with this stack?

                              1. 3

                                Mainly would swap express for bun’s built in web server or hono, and skip using drizzle orm until we see a need.

                                It’s been quite productive, though we did go through a period of using nunjucks for templating before switching to hono/jsx. I can maybe clean up our decision log and post it.

                                1. 2

                                  sensible changes.

                                  the first version was using bun but i ran into too many issues. long-term the goal is to switch to bun and use the built-in http server, file router and sqlite driver.

                                  i was considering hono as well, but ended up going with the more battle-tested and boring alternative. can definitely see a version where express could be replaced by hono!

                            2. 6

                              I like this. JSX is a little weird, but there’s a reason it’s popular: it’s a surprisingly decent templating language. Seems like a collection of a bunch of reasonable choices for building a webapp. I’d swap HTMX with Alpine personally, but it’s fine.

                              1. 3

                                i’m considering embracing alpine in the docs for client-side interactivity. the landing page is using alpine! but it probably won’t replace htmx in plainweb.

                                1. 2

                                  I haven’t used HTMX beyond just reading the docs, so take this with a grain of salt, but my feeling is you could implement HTMX as just a small plugin in Alpine, but not vice versa. So if I had to have just one, it should be Alpine. But I dunno, maybe there’s something I’m missing.

                                  1. 7

                                    Online tech forums live up to their reputation yet again of people claiming they can rewrite production-level projects in a weekend ;-)

                                    1. 1

                                      Haha. In this case though, Alpine has stuff for managing directives and whatnot, so a very minimal implementation would look something like:

                                      Alpine.directive("swap", (el, { expression }) => {
                                        el.onclick = async (event) => {
                                          event.preventDefault(); // stay on the page instead of following the link
                                          let target = document.querySelector(expression);
                                          let html = await fetchPage(el.href); // fetchPage = your helper that fetches and returns the page's HTML
                                          Alpine.morph(target, html); // requires Alpine's morph plugin
                                        };
                                      });
                                      
                                      <a href="/somepage/" x-swap="#container">click here</a>
                                      

                                      Obviously, this doesn’t have as much power as real HTMX, but the idea is you would start there and then just slowly build it out as you need more HTMX operations. It’s not “I could build HTMX in a weekend!” It’s “I could get the minimum set of the HTMX features I need in a weekend and then slowly add more of the long tail of other features if and when I were to need them.”

                                      1. 2

                                        Well, yeah. This is probably how htmx itself started. All that other long tail stuff got added because it turned out to be necessary for production-level use, not because htmx people like bloat ;-)

                                    2. 3

                                      oh yeah if you did that then i guess just one lib would suffice, but for now both projects seem to have little overlap.

                                      1. 2

                                        It’s a neat idea, and probably a great exercise, but for actual usage: why reimplement one framework in another rather than just use the original framework? If simplicity is the goal, one would still need to learn both htmx-on-alpine and alpine at some point.

                                        1. 2

                                          why reimplement one framework in another, rather than just use the original framework

                                          Keeps bytes on the wire minimal. Alpine already has a morph DOM plugin, at which point implementing HTMX just means adding a couple of custom directives.

                                  2. 9

                                    I am not sure why you would want to introduce JSX and React-style component definitions to HTMX’s html-first ecosystem. What’s the raison d’être here?

                                    1. 6

                                      ‘Components’, i.e. reusable pieces of UI defined on the server side, actually pair quite well with htmx. You are often returning HTML fragments which are also composed together to build whole pages. E.g. imagine a todo app with two panels: the left showing a list of todos, and the right showing a single todo detail view. You can componentize each of these pieces and use them to build the entire page or return fragments in response to user actions.
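
                                      Roughly sketched (made-up names; assuming a JSX runtime that renders components to HTML strings, e.g. hono/jsx or similar, rather than any specific framework's API):

                                      // Todo, renderToHtml and the routes below are illustrative placeholders.
                                      type Todo = { id: number; title: string; done: boolean };

                                      // Left panel: each row asks htmx to load a fragment into the detail panel.
                                      const TodoList = ({ todos }: { todos: Todo[] }) => (
                                        <ul id="todo-list">
                                          {todos.map((t) => (
                                            <li><a hx-get={`/todos/${t.id}`} hx-target="#todo-detail">{t.title}</a></li>
                                          ))}
                                        </ul>
                                      );

                                      // Right panel: detail view for a single todo.
                                      const TodoDetail = ({ todo }: { todo: Todo }) => (
                                        <div id="todo-detail">
                                          <h2>{todo.title}</h2>
                                          <p>{todo.done ? "done" : "open"}</p>
                                        </div>
                                      );

                                      // The full page composes both panels...
                                      const Page = ({ todos, selected }: { todos: Todo[]; selected: Todo }) => (
                                        <main>
                                          <TodoList todos={todos} />
                                          <TodoDetail todo={selected} />
                                        </main>
                                      );

                                      // ...while a fragment endpoint returns only the piece htmx asked for:
                                      // GET /           -> renderToHtml(<Page todos={todos} selected={todos[0]} />)
                                      // GET /todos/:id  -> renderToHtml(<TodoDetail todo={todo} />)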

                                      1. 4

                                        good points regarding fragments, this makes type-safe components very useful when using htmx

                                        1. 1

                                          Fully agree, as far as design patterns go.

                                          JSX and all it comes with is not a particularly well-suited component system for server-side templates, when compared to the large corpus of dedicated server-side template libraries that exist, which is what I was getting at.

                                          1. 5

                                            What about JSX makes it less suitable for server-side rendering than other template libraries (and which ones?)

                                            1. 1

                                              FWIW, Astro offers server-only HTML templates using JSX syntax. I’m not a huge fan of JSX personally, but I get why new frameworks would want to offer a familiar alternative to developers trying to get away from the heaviness of React without having to learn an entirely new templating paradigm.

                                          2. 6

                                            jsx is being used as a type-safe server-side templating engine (including type-safe htmx attributes).

                                            the support for jsx components makes template re-use easier and safer, compared to something like django templates where you would often have partials reading state from a global context without type-safety.

                                            with plainweb you would start out with inline jsx in your POST/GET handlers and then extract components as needed.

                                            being able to mix typescript and markup, having a single file containing both POST and GET handlers, and the added html attributes all go really well with htmx.

                                            1. 1

                                              You can approximate this very closely with Ruby using just sinatra + ViewComponent, and then you get to avoid the JS ecosystem and reactisms on your server.

                                            1. 3

                                                For real. As much as I do like TypeScript, its value is realized in larger projects where contracts across code are needed because of the footguns you can run into with JavaScript. But even then, modern JavaScript is good enough (IMO!) that YAGNI applies to TypeScript.

                                              1. 12

                                                Different strokes, I guess. I wouldn’t write anything nontrivial in pure JS; it’s far too easy to misspell something or pass the wrong arg type or get args in the wrong order, and then not find out until runtime and have to debug a dumb mistake TS would have flagged the moment I typed it.

                                                (Why yes, I am a C++/Rust programmer.)
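
                                                  Even something this small (a made-up example, not from any real codebase) is the kind of thing TS flags the moment I type it, while plain JS happily runs it:

                                                  // Compute a retry backoff; the types document the contract.
                                                  function backoffMs(attempt: number, baseDelayMs: number): number {
                                                    return baseDelayMs * 2 ** attempt;
                                                  }

                                                  backoffMs(3, 250);      // ok: 2000
                                                  // backoffMs(250, "3"); // TS error: string is not assignable to number;
                                                  //                      // plain JS would coerce the string and silently compute nonsense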

                                                1. 11

                                                  ha wild that typescript is the controversial part here. i haven’t encountered anyone advocating for full-stack javascript in years.

                                                  i think the overhead that defensive programming adds when using javascript justifies the added build process/tooling of typescript in anything but smaller scripts.

                                                  1. 2

                                                    There has been some talk about it recently, mostly started by DHH’s No Build blogpost

                                                    1. 1

                                                      funny enough it is one of plainweb’s main principles to not have build processes (well almost, it uses esbuild until node can be replaced by bun)

                                                        frontend build processes especially are a major source of complexity that is imo not worth it for most web apps.

                                                2. 3

                                                  I’ve been building a medium-sized internal tool for my website, and I’ve debated many times whether I should switch to TypeScript for the superior IDE code analysis.

                                                  I chose not to, because JavaScript is good enough, and I really don’t wanna pull in the complexity of JavaScript build systems into my codebase. I really like that I can do cargo run and watch it go, without having to deal with npm or anything.

                                                  1. 5

                                                        I’m sure you’re aware of this already, but just in case: have you tried using JSDoc-flavoured Typescript? You can write pretty much all Typescript types as JSDoc comments in a regular JS file. That way you get all the code analysis you want from your IDE (or even from the Typescript type checker using something like tsc --noEmit --allowJs), but you don’t need a separate build step. The result is typically more verbose than conventional Typescript, but for simple scripts it should work really well. I know the Svelte JS framework has gone down this route for various reasons — if you search for that, there might be some useful resources there.
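
                                                        For example, something like this in a plain .js file (the function is made up, but the JSDoc tags and tsc flags are the standard ones):

                                                        /**
                                                         * @typedef {Object} User
                                                         * @property {string} name
                                                         * @property {number} age
                                                         */

                                                        /**
                                                         * @param {User} user
                                                         * @param {number} years
                                                         * @returns {User}
                                                         */
                                                        function olderBy(user, years) {
                                                          return { ...user, age: user.age + years };
                                                        }

                                                        // Checked with: tsc --noEmit --allowJs --checkJs older-by.js
                                                        // olderBy({ name: "Ada", age: 36 }, "twenty"); // flagged: string is not assignable to number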

                                                    1. 2

                                                      I’ve recently started using a small amount of JSDoc for IDE completion, yeah. But I have written a small Lua codebase using this approach, and I can’t say I’m a huge fan; it gets old pretty quick.

                                                3. 4

                                                  i’m using supermaven and claude 3.5, it’s an awesome combo

                                                  1. 25

                                                        I’ve had mixed results; I did write about it recently. TLDR: Copilot is junk, Supermaven is good, and ChatGPT (4) produces way more subtle garbage than you think and sets you up for failure in the long run. I’ve used both Google’s Advanced Gemini Models and GPT 4, and neither produces helpful or correct enough information for me to rely on them at all. Google’s was significantly worse than GPT 4, but neither was good enough that I’d pick them over a Kagi search.

                                                    Edit: I’ve mostly used it for Python with some Clojure and a bunch of other PLs. Supermaven works much better because it restricts itself to finishing a single line and focuses on latency so it’s dramatically more helpful.

                                                    1. 6

                                                      +1 on supermaven, it’s kinda wild how much better it is than copilot. i’m using it for typescript mostly.

                                                      1. 2

                                                            Great insights!

                                                            I recently tried Claude and I was amazed by the results.

                                                        1. 1

                                                              I have heard mixed reviews for GPT 4: for some it has been worse, and for some it has been great.

                                                          1. 28

                                                            It “works” if you’re naive to the topic and can’t spot hallucinations or don’t care about correctness.

                                                            1. 16

                                                              This is my impression too. Weirdly, some people will vigorously deny the model is wrong despite evidence to the contrary. I think it’s a bit of “Gell-Mann amnesia effect” happening, mixed in with noise from grifters and hype artists.

                                                              “Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.”
                                                              – Michael Crichton

                                                              1. 5

                                                                I think another part of it is that once people have used these tools enough, incorporated them into their products and the services of their employers, written positively about them, and recommended their use to colleagues, they would feel embarrassed to admit that LLMs work poorly or not at all for most applications. Better to pretend that all problems will be resolved soon with more data, and to insist that when they fail they were simply used incorrectly.

                                                                1. 7

                                                                  I’ve written more about LLMs than most people, but I’m feeling pretty good about the way I’ve been describing them. I’ve been banging the drum about their susceptibility to dumb attacks like prompt injection since September 2022, a couple of months before ChatGPT landed.

                                                                      My angle right from the start has essentially been “these things are flawed in all sorts of expected and unexpected ways, so the intellectual challenge here is figuring out how we can productively and responsibly apply them in spite of all of those problems”. The more experience I get with them the more I think that this is the right way to approach them.

                                                                  The science fiction hypesters with the “ignore any flaws, those will be fixed in the next update” takes aren’t looking great.

                                                                  1. 5

                                                                    I find it so amusing that a tool that literally has pRNG baked into the core algorithm (transformer architecture) could be expected to be consistent in any manner.

                                                                        We already have enough problems with overly complicated systems that behave weirdly in edge cases; let’s just make the edge cases the expected behavior for 20%+ of cases /s

                                                                    1. 3

                                                                          Ah, but you see: humans make mistakes sometimes; therefore it is not only acceptable but desirable that computer systems should make mistakes on industrial scales!

                                                                      We mustn’t let “good enough” be the enemy of “short-term profitable”.

                                                                    2. 2

                                                                          I felt the same way at first. When I first tried ChatGPT it felt revolutionary. Same when I saw DALL-E for the first time. I had a brief moment of existential dread, thinking “am I out of a job?”, which so many seem to also be experiencing.

                                                                      But through stubbornness and luck, I remained skeptical. The cracks in the facade started to show. You can learn to calibrate your bullshit detector for LLM-generated text or code, just like you learned to spot the “wrong” hands in diffusion-generated images. The newer models are improving, but their inherent flaws aren’t disappearing, just receding from “obvious” to “subtle”. When a critical mass of paying customers aren’t bothered by the flaws, the models will stop improving.

                                                                  2. 4

                                                                    Disagree. It also works if you know what it’s doing wrong but think a bad program is better than no program.

                                                                    Personally, I find it a lot easier to correct somebody’s attempt than start my own. This suggests an obvious synergy, which seems to work great: I get the LLM to do an initial attempt and then fix it up myself.

                                                                    1. 2

                                                                      I get the LLM to do an initial attempt and then fix it up myself.

                                                                      Same way I use it. It’s a solution to the “cold start” problem. Suddenly you have something that may work but needs tweaking, or has some simple fixable errors in it, and off you go.

                                                                          Sometimes it’s because you didn’t describe what you want in an exact enough fashion, and when you do, suddenly you yourself understand the problem domain better (sort of like LLMs being the “unintentional rubber-duck debugger”).

                                                              2. 2

                                                                first of all thanks a ton for riot, that’s what got me into the whole component game!

                                                                i’m excited about nue and i’ve been following the progress via newsletter. you quote jarred sumner as inspiration and nue definitely gives me bun vibes.

                                                                i’ve been wondering where nue is heading after reading this article. one of the great things tailwind got right is that they met developers where they are, not where they should or could be.

                                                                    maybe it’s ok that the majority of developers don’t get exposed to 70% of css. tailwind is constraining in many ways: providing design tokens on top of css, treating global styles as something to fear, and so on. this might feel too constraining to experienced devs like yourself, but it creates success for devs who don’t have that experience. and imo that’s a great thing in itself.

                                                                regarding the templates vs components discussion, there was a quote on the riot page at some point: “templates separate by technologies, not by concern”. when working with utility classes, i can understand the styling of the component by just looking at the markup of the component. i don’t have to jump to where the utility class was defined.

                                                                i hope nue will find a good balance between innovation and meeting developers where they are. keep up the good work!