Threads for jaredwhite

    1. 1

      Writing a whole lotta documentation for the upcoming v2.0 of Bridgetown (it’s a Ruby web framework). We’re basically at feature freeze, but there’s quite a lot that needs polishing on the docs side and as anyone who works on OSS knows, that’s often the hardest part!

      1. 10

        So the TL;DR is that LLMs are handy-dandy reducers of written copy.

        Which is neat, I guess, if folks are looking for that sort of thing. (I virtually never need to “summarize” anything ever and I don’t quite understand the massive need for that, but then again I don’t work in large enterprises.)

        How do we square that with the Marc Benioffs and the Mark Zuckerbergs of the world claiming that later this year companies will be managing both human workers and digital workers? I’m not saying this to argue with the author, I’m saying that the gap between what various people claim is the state of the art of this tech is one of the wildest disconnects I’ve ever seen in the history of computing.

        It’s like if we were a few years into the iPhone era, and some people were saying it’s a decent cell phone with very good texting capabilities but crappy apps, and other people were saying it will transform into a flying car and transport you halfway around the world and also water your plants and feed your dog.

        Truly strange times we live in.

        1. 9

          Completely agree. The gulf between the hypothetical and the realistic conversations about what this stuff can do appears to keep on getting wider.

          1. 2

            How do we square that

            The missing bit is that some LLMs are trained to produce special tokens that can be interpreted to call external programs e.g. interpreters, REST APIs etc. and parse the response for subsequent rounds of LLM inference. This is the capability upon which “agent” systems being hyped nowadays are built.

            This increased reach of LLM agents, and their seeming ability to perform unassisted tasks in the digital world, has gotten many people excited.
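
            For a rough idea of what that looks like, here’s a toy Ruby sketch of the loop (the llm object and the tool-call markup are made up for illustration; real systems use model-specific special tokens and APIs):

            # Toy agent loop: the model either answers or emits a tool call;
            # the tool output is appended and fed back for the next round of inference.
            TOOLS = {
              "http_get" => ->(url) { require "net/http"; Net::HTTP.get(URI(url)) }
            }

            def run_agent(llm, prompt, max_rounds: 5)
              transcript = [prompt]
              max_rounds.times do
                output = llm.generate(transcript.join("\n"))     # hypothetical LLM client
                call = output.match(%r{<tool name="(\w+)">(.*)</tool>}m)
                return output unless call                        # no tool call: final answer
                name, arg = call.captures
                transcript << output << "Tool result: #{TOOLS.fetch(name).call(arg)}"
              end
              transcript.last
            end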

          2. 2

            Trying out Coolify is high up on my list of sysadminy things to check out this year. I almost went for it last month, but ended up chickening out and just did some manual VPS setup + Caddy (which is fantastic BTW). It’s also hard for me to want to wean myself off of Render because I really like that service. But Coolify just sounds so, er, cool, and I love the idea of decoupling the service layer from the underlying hardware/cloud layer.

            1. 2

              This entire article could be summed up: “you shouldn’t care too much about your craft and you should produce substandard work because the company doesn’t value your attention to detail”

              in which case, you should probably be looking for a new job at the earliest opportunity, because if that’s true, things there are bad and will only get worse.

              Life is too short to spend a significant percentage of your waking hours intentionally trying to limit what you’re capable of.

              1. 1

                If one’s contract allows it, one could spend on personal projects any extra attention to detail for which one’s job isn’t willing to pay.

              2. 20

                Or…just opt out entirely and advocate for a better world.

                I simply don’t understand the need to be “forced” into accepting technology we think is unethical and dangerous. I’d rather write code slower, more deliberately, and even limit what I’m able to understand clearly, in order to write my own code. I’m not going to reach for the BS-generator-at-scale in my IDE just because Sam Altman tells me to.

                1. 5

                  A million times this. No one should feel burdened to explain why they aren’t busily cramming LLMs into every orifice and concocting elaborate guidelines for how to use this particular flask of snake-oil “responsibly” any more than they should feel burdened to explain why they aren’t using NFTs, or Bitcoin, or any other sizzling buzzword that venture capitalists desperately need the world to believe will expand in value by orders of magnitude in the next few quarters.

                  Winter can’t come soon enough.

                2. 13

                  This is the sort of “Ruby is weird” stuff that I love about Ruby.

                  Ruby is what I might call an “object maximalist” language…it tries to take the concept of everything-is-an-object to the nth degree, and thus very few “keywords” are indeed language constructs but are instead simply methods on objects. This means it’s fairly trivial to invent complex dialects of Ruby which almost feel like a new language. Again, it can get weird and it’s why some folks don’t like it. They might prefer a language where it’s hard to invent dialects and there’s very little metaprogramming possible, so that everyone’s code looks similar and it’s simple from a syntax standpoint. I personally find these sorts of languages very boring. There’s a reason I don’t write applications in Go, for example. But again, I understand why others might!
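
                  A tiny made-up example of what I mean: because class-level “keywords” like attr_accessor are really just methods, you can mint your own and it reads like part of the language.

                  # Define a new "keyword" by adding a method to Module...
                  class Module
                    def memoized_reader(name, &block)
                      define_method(name) do
                        ivar = "@#{name}"
                        if instance_variable_defined?(ivar)
                          instance_variable_get(ivar)
                        else
                          instance_variable_set(ivar, instance_exec(&block))
                        end
                      end
                    end
                  end

                  # ...and now it reads just like a built-in construct.
                  class Config
                    memoized_reader(:settings) { { theme: "dark" } }
                  end

                  Config.new.settings # => { theme: "dark" }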

                  Thanks zverok for the great writeup on this!

                  1. 1

                    They might prefer a language where it’s hard to invent dialects and there’s very little metaprogramming possible, so that everyone’s code looks similar and it’s simple from a syntax standpoint. I personally find these sorts of languages very boring.

                    I mean, writing languages with lots of metaprogramming is fun. But reading them is often less so, at least if you’re trying to accomplish something specific. Also it’s very easy for the invented dialects to become incompatible with one another. Basically everyone should have to write in boring languages except me.

                  2. 3

                    I’ve been learning a lot of CSS while writing my first website, and it’s definitely a step down from more ergonomic languages. Using :has() was really cool for adding a dark-mode toggle, and variables are nice to have, but CSS really reveals how difficult it would be to design a language from scratch with the whole world weighing in. The legacy naming and inconsistencies between properties are terrifying.

                    I’m trying to focus on learning and understanding it though, so for the initial design of the site I’m doing all the CSS and HTML fully by hand (and no javascript yet, except maybe for code highlighting). I’ll definitely upgrade to a preprocessor/static site generator in time, but not yet.

                    1. 1

                      You’re absolutely doing it right. The upfront education you’re going through now will pay huge dividends in the future. Best of luck to you!

                    2. 16

                      I’m gonna push back on HashWithDotAccess, and similar tools like HashWithIndifferentAccess and Hashie. These are a fundamentally wrong approach to the problem and the value they bring to a project is strictly negative.

                      If your data objects can have unpredictable forms, your code will explode in complexity as you manage all the possible branch paths, and you will never capture them all. The solution to this is to validate your data first, and then create a stable representation of it (preferably immutable). In other words: parse, don’t validate.

                      If you’re dealing with unpredictable data, don’t preserve this unpredictability, normalize it to be predictable. If you’re annoyed by inconsistent key access, eliminate the problem. Yeah it’s slightly more work upfront. But you save yourself hours of toil in the long-run.
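
                      To make that concrete, a minimal sketch in Ruby (field names are just an example; Data.define needs Ruby 3.2+):

                      # Parse once at the boundary into an immutable value object;
                      # everything downstream gets a single predictable shape.
                      Post = Data.define(:title, :draft)

                      def parse_post(raw)
                        raw = raw.transform_keys(&:to_sym)                 # normalize key access once
                        raise ArgumentError, "title is required" unless raw[:title].is_a?(String)
                        Post.new(title: raw[:title], draft: !!raw[:draft]) # stable, frozen representation
                      end

                      post = parse_post({ "title" => "Hello", "draft" => true })
                      post.title # => "Hello"
                      post.draft # => true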

                      1. 7

                        People who author Bridgetown sites can put literally any front matter imaginable in each page (resource), for example:

                        ---
                        hello: world
                        foo: bar
                        ---
                        

                        And access that via data.hello, data.foo (plus data[:hello] or even data["hello"] if they really feel like it). This is just basic developer DX. Now if you think data itself should be a Data class or something like that, that’s an interesting argument, but it would need to be a unique definition for every individual resource, meaning 1000 resources == 1000 separate Data classes, each with exactly one instance. So that seems odd to me.
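
                        If you’re wondering what that buys us mechanically, here’s a greatly simplified sketch (not the actual HashWithDotAccess implementation):

                        # Simplified illustration of dot + indifferent access over front matter.
                        class DotHash
                          def initialize(hash)
                            @hash = hash.transform_keys(&:to_s)
                          end

                          def [](key)
                            @hash[key.to_s]
                          end

                          def method_missing(name, *args)
                            @hash.key?(name.to_s) ? @hash[name.to_s] : super
                          end

                          def respond_to_missing?(name, include_private = false)
                            @hash.key?(name.to_s) || super
                          end
                        end

                        data = DotHash.new("hello" => "world", "foo" => "bar")
                        data.hello    # => "world"
                        data[:foo]    # => "bar"
                        data["hello"] # => "world"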

                        1. 4

                          I wrote some detailed examples of why hashie can cause some headaches: https://www.schneems.com/2014/12/15/hashie-considered-harmful.html

                          If you can guarantee all keys are only strings or only symbols that would help with some of it. But Ruby is so mutable it’s hard to prevent people from adding things in random places unless you freeze the objects. The other option could be to define the hash with a default proc that raises an error when a key of the wrong type (string vs. symbol) is read or written.
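
                          E.g., a minimal version of the default-proc idea (note this only guards reads of missing keys, not writes):

                          # Raise instead of silently returning nil, which surfaces
                          # string-vs-symbol mixups immediately.
                          strict = Hash.new { |_hash, key| raise KeyError, "no key #{key.inspect} (wrong type?)" }
                          strict[:name] = "Ruby"

                          strict[:name]  # => "Ruby"
                          strict["name"] # raises KeyError: no key "name" (wrong type?)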

                          1. 3

                            I’m not familiar with Bridgetown, but totally unstructured data that is provided by users in a site generator is a pretty specific use-case where I would agree this hack is probably fine.

                            My comments apply to application development.

                            1. 2

                              Fair fair…and I do think your points are valid in the context of internal app code. I love “value objects” as well.

                              1. 2

                                If I were writing a front-matter parser, I would just compile the YAML AST into a binding context with dataclass-based local variables. Not as hard as it sounds.
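
                                Roughly along these lines, I’d imagine (a loose sketch rather than real AST compilation; Data.define needs Ruby 3.2+):

                                require "erb"
                                require "yaml"

                                # Turn front matter into an immutable Data instance, then expose each key
                                # as a local variable in the binding the template is rendered against.
                                front_matter = YAML.safe_load("hello: world\nfoo: bar")
                                FrontMatter  = Data.define(*front_matter.keys.map(&:to_sym))
                                data         = FrontMatter.new(**front_matter.transform_keys(&:to_sym))

                                template_binding = binding
                                front_matter.each_key { |key| template_binding.local_variable_set(key.to_sym, data.public_send(key)) }

                                ERB.new("<p><%= hello %> / <%= foo %></p>").result(template_binding)
                                # => "<p>world / bar</p>"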

                          2. 2

                            I wouldn’t use it in a long-running application, but hash-pretending-to-be-object is irreplaceable for data-processing scripts, console experimentation, and quick prototyping (which, arguably, are the areas where Ruby excels, even if in recent years it’s been associated less with them and more with “how we build large long-living apps”).

                            1. 2

                              The problem with quick prototyping is, there’s nothing more permanent than a temporary solution. My default position is one of skepticism for this reason.

                              For instance, I disagree about data processing scripts. I think you should be doing schema validation of your inputs, otherwise what you’ll end up with will be extremely fragile. If you’re just doing console exploration, just use dig. Even in a true throwaway-code situation, the value is pretty minimal.

                              1. 1

                                Well, you somewhat illustrate my point (about what Ruby is more associated with these days). My possible use cases for “rough” hash-as-object code were intentionally abstract, but your default assumption was that “quick prototypes” would be prototypes of possibly long-living production apps (and not just a way to “check and sketch some idea”), and that data processing means something designed to be set in stone and used many times (and not a quick investigation of the data at hand, where you write a script, run it several times on some files, and forget about it; or run it once a month, fixing it as necessary).

                                But there is probably some personal difference in approach. At the early stages of anything I prefer to try thinking in code (at the level of lines and statements) as quickly as possible while keeping the missing parts simple (e.g., “let it be Hashie for the first day”); but I understand that for other people the thinking might start from schemas and module structure, before the algorithm itself.

                                1. 1

                                  Help me reconcile your comments:

                                  I wouldn’t use it in a long-running application

                                  My possible usages for “rough” hash-as-object areas were intentionally abstract, but the default assumption was that “quick prototypes” would be of possibly future-long-living-production apps

                                  If you’re saying that quick prototyping never becomes permanent, I beg to differ. Perhaps you have not seen this, but I have many times. So I’m more defensive about validating my inputs always.

                                  At the early stages of anything I prefer to try thinking in code (at the level of lines and statements) as quickly as possible while keeping the missing parts simple (e.g., “let it be Hashie for the first day”)

                                  I constantly drop into a REPL or a single-file executable to prototype something quickly. But the argument I am making is: it’s never too soon to validate. The longer you wait, the more uncertainty your code has to accommodate and this has lots of negative architectural implications.

                                  1. 2

                                    If you’re saying that quick prototyping never becomes permanent, I beg to differ. Perhaps you have not seen this, but I have many times.

                                    “I’ve seen things you people wouldn’t believe” (not a personal attack, just wanted to use a quote :))

                                    I mean, I am 25 years in the industry in all possible positions, and I do understand where you are coming from.

                                    The only things I was trying to say are:

                                    1. There are many situations when the code is not intended for a long life, and the set of tools you allow yourself in those situations is different. When you have just a huge ugly JSON/YAML from a tool and you need to pull some stats out of it once (or once a month, as a local script), starting with Hashie is convenient for at least the first iteration (see the quick sketch after this list).
                                    2. (More cautiously) Even as a part of long-living app development, there are different mindsets regarding the first prototypes of something, when you are not sure if it would even work, or what the requirements are. For some devs/some situations, “design contract ASAP” is reasonable, for others, “find a way to write the algo expressively by taking some unholy shortcuts” might be the most efficient way. But of course, paired with no hesitation before rewriting/hardening it as soon as it matures.
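
                                    For the kind of one-off script I mean in point 1, the convenience looks like this (report.json and its fields are made up):

                                    require "json"
                                    require "hashie"

                                    # One-off stats over some huge ugly JSON a tool spat out.
                                    report = Hashie::Mash.new(JSON.parse(File.read("report.json")))

                                    failed = report.jobs.count { |job| job.status == "failed" }
                                    puts "#{failed} of #{report.jobs.size} jobs failed (run #{report.run_id})"
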
                          3. 2

                            Writing documentation for the upcoming 2.0 release of site generator & framework Bridgetown, and enjoying cooler weather here in Portland. Might go buy some fresh produce at the local Farmers Market!

                            1. 1

                              I love the simplicity on the client side but don’t like the complexity on the server side. I wonder if a ‘pipe’ attribute could be added which would first pipe the contents through a JavaScript function. Perhaps to transform a JSON response into HTML.

                              1. 6

                                IMO we should normalize HTML once again as the default response type of web servers. There’s plenty to like about JSON, but the “JSONification” of web development with its resulting SPA-all-the-things approach has been a net negative for the web platform.

                                1. 1

                                  The challenge for me is that roles in multiperson teams naturally split on ‘knows css doesn’t know backend’ and ‘hates css knows backend’ lines. So it is nice to be able to let css / frontend people be css frontend people and not even have to launch the backend or set up the db or deal with docker. If I can keep the design work separate it’s just cleaner and easier…

                                  1. 1

                                    Frontend people who can’t even run the app aren’t frontend people, they’re designers working in isolation. Which is fine — that’s good and valuable work — but let’s not confuse that with the best ways to architect web applications and structure codebases.

                              2. 1

                                is it possible to have alt text for <pre>?

                                1. 1

                                  Possibly using aria-label, which generally acts as “alt text” for any HTML element.

                                  1. 1

                                    Figure with figcaption is probably the closest.

                                  2. 1

                                    I couldn’t agree more! I also like wrapping third-party API calls into a higher-level object or mixin so you’re calling your own app logic which connects to the third-party API, even if it’s a very thin layer.
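
                                    For example (service name and endpoint entirely made up), even a thin wrapper like this keeps the rest of the app talking to your own object:

                                    require "json"
                                    require "net/http"

                                    # The rest of the codebase calls WeatherService, never the vendor API directly,
                                    # so swapping providers (or stubbing it out in tests) touches exactly one place.
                                    class WeatherService
                                      API_BASE = "https://api.example-weather.test" # hypothetical endpoint

                                      def current_temperature(city)
                                        body = Net::HTTP.get(URI("#{API_BASE}/v1/current?city=#{city}"))
                                        JSON.parse(body).fetch("temp_c")
                                      end
                                    end

                                    # WeatherService.new.current_temperature("Portland")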

                                    1. 37

                                      Bummer to see them doubling down on AI, I’d hoped that the current integration was a bandwagon / securing funding thing.

                                      1. 13

                                        I genuinely think that AI-assisted coding features are table stakes for a developer-oriented text editor in 2024 - just like syntax highlighting and language server autocomplete.

                                        I’m not at all surprised to see Zed investing time in building these features.

                                        1. 41

                                          I genuinely think that AI-assisted coding features are table stakes for a developer-oriented text editor in 2024 - just like syntax highlighting and language server autocomplete.

                                          Watching the demos in this blog post, I’m struck by how much this feature changes the process of software development from “writing code” into “reviewing code”. Reviewing code is not fun, especially when it’s walls of text spit out from an LLM that you know will require extra scrutiny. If this is the future of software development, I’m not excited for it. So, respectfully, I hope you’re wrong.

                                          1. 5

                                            Fully agree, I’m never touching AI code assistants until they unequivocally become smarter than me (probably will happen, but not for years) and they get over their overwhelming Python bias, since I don’t use Python but it’s practically the only language I ever see it used with (probably will happen, but again, years from now). Until then, I am not going to be picking out bugs and incorrect types from autogenerated code like picking out lint from a lollypop I dropped on the floor.

                                            However, I do strongly agree with the sentiment that text editors with AI assistants is the norm, and any text editor that wants to be taken seriously needs it, even if I detest the feature, personally. I think it should be opt-in, and, ideally, completely uninstallable as if it were any other addon, not just for the sake of feeling “clean” but also your editor shouldn’t favor any built-in functionality over what plugins can and can’t do, plus it cleans up the core codebase. It especially needs to be opt-in at the moment because we don’t yet know the legality of this generated code, and some companies do not feel legally confident in using it.

                                            1. 6

                                              However, I do strongly agree with the sentiment that text editors with AI assistants is the norm, and any text editor that wants to be taken seriously needs it

                                              Anyone who needs it can install a 3rd party plugin or something. As soon as someone adds AI to something, I take that something and that someone much less seriously.

                                            2. 4

                                              You’re welcome not to use these features, but that doesn’t mean other people don’t want them.

                                              I’ve been tracking AI-assisted programming for a while now. Personally I’ve found it extremely beneficial. https://simonwillison.net/tags/ai-assisted-programming/

                                              1. 7

                                                You’re welcome not to use these features

                                                That’s not how this works. Building features takes time/money/resources/attention/know-how. Working on features like this which many people actively dislike (guilty as charged) is disappointing because it means other features have fallen by the wayside.

                                                1. 1

                                                  Depends if the developer of that feature actively wants it or enjoys working on it as opposed to another feature. Even in a company, people aren’t machines that just spit out arbitrary code, they’re more productive doing what they like. Can’t actually say anything about how the Zed team works internally, though.

                                            3. 21

                                              I’m sure that AI assistance offers some convenience to programmers. But our convenience cannot justify the energy and natural resource usage which AI requires, now and for the foreseeable future.

                                              It’s like fast-fashion, where the ecological and humanitarian cost is abstracted away from us by distance and time.

                                              We know exactly where our inability to face the real cost of our convenience has led us. We should be wiser than this.

                                              1. 4

                                                I’m less worried about that now that I can run a competent LLM on my own laptop.

                                                Training costs are high but have been dropping dramatically - the Phi-2 series were trained for less than $50,000 in electricity costs as far as I know.

                                                When I compare the carbon cost of training an LLM used by millions of people to the carbon cost of flying a single passenger airliner from London to New York I feel a lot less environmentally guilty about my use of LLMs!

                                                1. 19

                                                  Although there’s a lot more to it. Google and Microsoft are saying they will not reach their climate targets now because of AI investments. Sam Altman is going around lobbying for increasing energy production to power AI.

                                                  https://disconnect.blog/generative-ai-is-a-climate-disaster/

                                                  Also this ignores the ethical concerns of going around hoovering up data from all over the internet, not crediting anyone and making a profit off of it.

                                                  1. 3

                                                    I’m inclined to think that the huge energy increases from those companies are more a factor of the AI arms race than something that’s required by the LLMs themselves. There is massive over-investment in this field right now - see NVIDIA stock price - it’s very bubbly.

                                                    The ethics of the training remain incredibly murky. I fully respect the opinion of people who opt out on that basis, just like I respect the opinion of vegans despite not being a vegan myself.

                                                  2. 5

                                                    There was a recent report from Goldman Sachs which takes a less optimistic stance than yours. A summary courtesy of this article by Ed Zitron:

                                                    In an interview [with] former Microsoft VP of Energy Brian Janous (page 15), the report details numerous nightmarish problems that the growth of generative AI is causing to the power grid, such as:

                                                    • Hyperscalers like Microsoft, Amazon and Google have increased their power demands from a few hundred megawatts in the early 2010s to a few gigawatts by 2030, enough to power multiple American cities.
                                                    • The centralization of data center operations for multiple big tech companies in Northern Virginia may potentially require a doubling of grid capacity over the next decade.
                                                    • Utilities have not experienced a period of load growth — as in a significant increase in power draw — in nearly 20 years, which is a problem because power infrastructure is slow to build and involves onerous permitting and bureaucratic measures to make sure it’s done properly.
                                                    • The total capacity of power projects waiting to connect to the grid grew 30% in the last year and wait times are 40-70 months.
                                                    • Expanding the grid is “no easy or quick task,” and that Mark Zuckerberg said that these power constraints are the biggest thing in the way of AI, which is… sort of true.
                                                    1. 8

                                                      There’s this weird thing at the moment where if you work for a company in the AI space - Microsoft, Meta, OpenAI - you are strongly incentivized to argue that this is the future of all technology and will require vast amounts of resources, because that’s how you justify your valuation.

                                                      Meanwhile I’m watching as the quality of LLMs that run on my laptop continues to increase dramatically month over month - which is a little inconvenient if you’re trying to make the case that you need a trillion dollars to build and run a digital God.

                                                      1. 5

                                                        I have to admit, I can only admire the audacity of claiming that everyone is just pretending that AI uses lots of energy. :-)

                                                        1. 7

                                                          That’s not what I’m trying to say. My point here is that I take some of the claims of AI purveyors - like Sam Altman with his trillion dollar data center plans - with a healthy pinch of salt, because nobody raised a trillion dollars saying “demand for this is going to level off at a sensible level”.

                                                      2. 2

                                                        I just ran into another attempt to quantify AI energy usage (the title indicates they plan a second part, but I don’t see that it’s available yet).

                                                        1. 1
                                                  3. 9

                                                    Table stakes to get VC funding, yes; for developers writing anything besides React apps or Python scripts, hardly.

                                                    1. 1

                                                      I have a whole lot of experience and work with a bunch of different languages. I wouldn’t pick a text editor today that didn’t have Copilot-style autocomplete. So it’s table stakes for me at least.

                                                      1. 2

                                                        That’s fine, I respect your choice and feel happy for you. But I think people here are arguing about the usefulness in general, and not only for low-hanging fruit like simple autocomplete or text generation; and let’s not even get into the massive privacy concerns.

                                                        1. 1

                                                          What are the privacy concerns if you’re using a local model?

                                                  4. 8

                                                    I was thinking Zed would be a good editor to introduce to my partner who is beginning to code. But she despises AI so I’ll try to find something else. I also don’t think relying on them when trying to learn is a good idea.

                                                    1. 2

                                                      Just about every major GUI editor is in the process of introducing some sort of AI feature at this point. Zed, JetBrains, VSCode, it’s pretty pervasive.

                                                      1. 2

                                                        Yeah I suppose it’d be either sublime or atom

                                                        1. 3

                                                          I think Atom is no longer maintained. And maybe we have different priorities, but I feel like using a proprietary paid-for editor to escape a feature you can simply disable in other text editors is a bit unreasonable. I use VSCode, I don’t even have any AI enabled in it. I don’t think VSCode would ever get AI as default since the default, and what MS is pushing, is Copilot, which is paid for, and I can’t see them making that free and on by default any time soon.

                                                          1. 2

                                                            Ah I didn’t know Sublime was a paid product. VSCode I don’t particularly like for other reasons.

                                                            I googled around a bit and found GEdit, I think that will suffice.

                                                          2. 2

                                                            either sublime or atom

                                                            Sincere, non-rhetorical question: Do you see the traditional programmer’s-editors, Vim and Emacs, as too difficult to learn for beginners (even one who’s in deep enough already to “despise[] AI”)? Has she tried their tutorials?

                                                      2. 4

                                                        I don’t really like AI either, but I think that Zed has been quite transparent about it:

                                                        Providing server-side compute to power AI features is another monetization scheme we’re seeing getting traction.

                                                        So even if I probably won’t use this feature, I don’t think that the rest of the editor will suffer too much from it (since it should have happened long ago).

                                                        1. 3

                                                          They are a (for now) unprofitable VC-funded startup looking for new investment rounds. Investors continually ask “so what’s your AI strategy?”. It was only a matter of time.

                                                        2. 5

                                                          I saw the domain on the link and thought at first this was about GitHub specifically. How do I opt-out all my repos on GitHub? Is that even possible?

                                                          1. 3

                                                            I believe the only option is private repositories.

                                                            GitHub answers the question “What data has GitHub Copilot been trained on?” as follows:

                                                            GitHub Copilot is powered by generative AI models developed by GitHub, OpenAI, and Microsoft. It has been trained on natural language text and source code from publicly available sources, including code in public repositories on GitHub.

                                                            Unfortunately, even if GitHub had some opt-out mechanism, anything public will be scraped and accessible to other parties. The general consensus now is to grab the data, sort out any issues later.

                                                            In some alternative universe, GitHub would’ve taken matters into its own hands to defend open source against AI scraping bots on the ground and in courts. This might have been possible with old GitHub and would probably have been a win for them publicity-wise and good for business. But we live in a different universe, where they’re now advocating that anything public is fair game.

                                                            1. 4

                                                              Much as I am critical of AI-generated slop, I believe that training on FLOSS licensed code is probably the least objectionable from a legal/copyright point of view.

                                                              1. 1

                                                                At least if they provided the source code + data for their models.

                                                              2. 3

                                                                Which also means that you can’t opt-out your self-hosted code from being taken by MS / OAI / Github.

                                                            2. 1

                                                              I must admit I’ve never been tempted to try using Deno. My usage of Node is almost exclusively related to frontend build pipelines, and more recently via esbuild in particular, so none of the perceived benefits of Deno ever seemed relevant to me. And even when engaging in light usage of a framework like Eleventy or Astro, I still couldn’t explain why Deno would be preferable over Node.

                                                              I’m sure dedicated backend JS programmers can articulate reasons why Deno is better than Node, but to someone slightly on the periphery of the ecosystem, I just never grokked why it’s a worthwhile “upgrade”. And the more Deno tries to market itself as supporting various Node-isms, the USP of Deno becomes even murkier IMO. 🤷🏻‍♂️

                                                              1. 9

                                                                I am not sure why you would want to introduce JSX and React-style component definitions to HTMX’s html-first ecosystem. What’s the raison d’être here?

                                                                1. 6

                                                                  ‘Components’, i.e. reusable pieces of UI defined on the server side, actually pair quite well with htmx. You are often returning HTML fragments which are also composed together to build whole pages. E.g., imagine a todo app with two panels: the left showing a list of todos, and the right showing a single todo detail view. You can componentize each of these pieces and use them to build the entire page or return fragments in response to user actions.
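
                                                                  A rough sketch of that shape in Ruby/Sinatra (the “component system” here is just plain helper methods; substitute whatever templating you prefer):

                                                                  require "sinatra"

                                                                  # "Components" are just helpers returning HTML fragments; the full page
                                                                  # composes them, and htmx endpoints return the same fragments on their own.
                                                                  TODOS = [{ id: 1, title: "Write docs" }, { id: 2, title: "Ship 2.0" }]

                                                                  def todo_item(todo)
                                                                    %(<li hx-get="/todos/#{todo[:id]}" hx-target="#detail">#{todo[:title]}</li>)
                                                                  end

                                                                  def todo_list(todos)
                                                                    "<ul>#{todos.map { |todo| todo_item(todo) }.join}</ul>"
                                                                  end

                                                                  get "/" do
                                                                    # Whole page: the todo list panel plus an empty detail panel.
                                                                    %(<main>#{todo_list(TODOS)}<div id="detail"></div></main>)
                                                                  end

                                                                  get "/todos/:id" do
                                                                    # Fragment only: htmx swaps this into the #detail panel.
                                                                    todo = TODOS.find { |t| t[:id] == params[:id].to_i }
                                                                    %(<p>#{todo[:title]}</p>)
                                                                  end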

                                                                  1. 4

                                                                    good points regarding fragments, this makes type-safe components very useful when using htmx

                                                                    1. 1

                                                                      Fully agree, as far as design patterns go.

                                                                      JSX and all it comes with is not a particularly well-suited component system for server-side templates, when compared to the large corpus of dedicated server-side template libraries that exist, which is what I was getting at.

                                                                      1. 5

                                                                        What about JSX makes it less suitable for server-side rendering than other template libraries (and which ones?)

                                                                        1. 1

                                                                          FWIW, Astro offers server-only HTML templates using JSX syntax. I’m not a huge fan of JSX personally, but I get why new frameworks would want to offer a familiar alternative to developers trying to get away from the heaviness of React without having to learn an entirely new templating paradigm.

                                                                      2. 6

                                                                        jsx is being used as a type-safe server-side templating engine (including type-safe htmx attributes).

                                                                        the support for jsx components makes template re-use easier and safer, compared to something like django templates where you would often have partials reading state from a global context without type-safety.

                                                                        with plainweb you would start out with inline jsx in your POST/GET handlers and then extract components as needed.

                                                                        being able to mix typescript and markup, a single file containing both POST and GET and the added html attributes go really well with htmx.

                                                                        1. 1

                                                                          You can approximate this very closely with Ruby using just sinatra + ViewComponent, and then you get to avoid the JS ecosystem and reactisms on your server.

                                                                      3. 43

                                                                        I don’t use them, and in addition I actively avoid them on ethical grounds. OpenAI is a particularly egregious company, and the recent comments from their CTO Mira Murati about how “maybe some creative jobs shouldn’t have been there in the first place” should automatically disqualify the company from any usage by any creative professional ever.

                                                                        1. 1

                                                                          where did she say this? link?

                                                                          it’s normal for new tech to cause upheaval in job markets, perhaps she was referring to that?

                                                                            1. 1

                                                                              I’m not really sure she has empirical grounds to declare all art dead lol. But she did say “some”, which is in line with general historical patterns of technological disruption.

                                                                        2. 2

                                                                          I’ve used Next.js for a client project which required React.

                                                                          I’ve used Astro because…it’s really cool, well-designed, and full-featured. It often goes with the “grain of the web” rather than against it, and its upfront focus on islands architecture is truly appreciated. I’d recommend it by default unless I knew a particular project had particular requirements (like they’re “all in” on Vue and would feel comfortable using Nuxt, etc.)

                                                                          1. 1

                                                                            Can you elaborate on what you like about Astro and the grain of the web? Have you used their “islands”?

                                                                          2. 5

                                                                            It’s an optional experience…which you need to opt-out of?

                                                                            That’s stretching the meaning of “optional” so much, you could measure its thickness in nanometers. 😄

                                                                            P. S. The security ramifications of this are truly terrifying.

                                                                            1. 4

                                                                              Is that so unusual? It really doesn’t seem that way to me. There are lots of optional things which are default-on.

                                                                              1. 2

                                                                                A lot of people who use Microsoft stuff note that they repeatedly see that settings they opted out of get apparently-accidentally switched back on again after a while without their knowledge. A lot of people view “optional opt-out implemented by Microsoft” as “de facto mandatory”.

                                                                                1. 3

                                                                                  Sure, noted. I am just responding to “optional experience you need to opt-out of” “stretching the meaning of ‘optional’”.

                                                                            2. 4

                                                                              I wonder why it’s hard to find VPS hosting in the US using arm64? Seems like most hosting providers, like Digital Ocean, are still only on x86-64. Europe is ahead of the game on this one.

                                                                              FWIW, I’ve been running Fedora in a VM on my M1 Mac mini, and it’s pretty awesome—though it gets laggy, so I’m increasingly thinking of purchasing a used Apple Silicon Mac just to install Asahi Fedora on.

                                                                                1. 3

                                                                                  I think that proves their point: Hetzner is European and only provides arm64 VMs in European regions (I have a few Hetzner VMs, and immediately jumped to check if they had launched them in the US regions without me noticing).

                                                                                  1. 5

                                                                                    Ah, that’s right. I try to stay away from AWS whenever possible though… 😬

                                                                                    1. 1

                                                                                      I’ve been running ARM EC2 instances for a few years now and it has always worked out pretty well. I think I pay around $3/month for their smallest spot instance. Haven’t had a problem yet.

                                                                                      I’m curious what the bottleneck is on your VM. I’ve been considering throwing a used Apple Silicon Mac Mini into my homelab cluster in a VM (I did not enjoy running native Linux on Mac back in 2012 and I don’t expect I would enjoy it much more today).