I was expecting that “Test” job, but why were there two separate deploys?
I had a hunch that the previous, default Jekyll deploy was still running while the new deploy ran at the same time, and that it was pure luck of timing that the new script finished later and overwrote the result of the original.
It was time to ditch the LLMs and read some documentation!
I found this page on Using custom workflows with GitHub Pages but it didn’t tell me what I needed to know.
(emphasis my own)
Why did you ditch the LLM at this point? Cost? Accuracy? How did you know that it was time for the human to take the reins, rather than asking it to read the docs and fix the problem for you?
I think your writing has convinced me that LLMs are great for these kinds of quick-and-dirty throwaway applications. I’m curious (and perhaps a bit worried) about where the rift between “AI” programmers and human programmers remains — and why the rift exists.
Honestly, it was just a hunch. I expected it might be a UI thing, not a code thing - and asking an LLM with a training cutoff date 6-12 months ago “where do I go on the GitHub website to fix this problem” rarely works. The site may have been redesigned since then, and even without that I tend to find that “where do I go on a website” questions produce poor-quality answers compared to “write me code to do X”.
My main beef with the article can be summarized by the author’s use of the Zen of Go to justify its error handling.
Simplicity matters
Plan for failure, not success
Applying the simple if err != nil snippet to all functions which return (value, error) helps ensure that failure in your programs is thought of first and foremost. You don’t need to wrangle with complicated, nested try/catch blocks which appropriately handle all possible exceptions being raised.
If your two key tenets are simplicity and handling of failure, it seems at odds to then require the programmer to remember to add boilerplate code to every function (call) which can error. Especially so if (as others have mentioned) the compiler won’t remind you to. Being able to plan for failure should be simple, which to me means that it should be hard to forget or do incorrectly. That IDEs can enforce this is great, but in that case Go should not be lauded for “awesome error handling,” much like Java is not lauded for NullAway preventing the so-called “Billion Dollar Mistake” of null pointers.
The author gives a great example for how Go’s error paradigm can introduce subtle bugs, too:
if err := criticalDatabaseOperation(); err != nil {
    // Only logging the error without returning it to stop control flow (bad!)
    log.Printf("Something went wrong in the DB: %v", err)
    // WE SHOULD `return` beneath this line!
}

if err := saveUser(user); err != nil {
    return fmt.Errorf("Could not save user: %w", err)
}
This, in my view, demonstrates what makes languages like Haskell and Rust’s approach to error handling a better choice: they combine the ease of raising exceptions with the power of value-based errors. In Rust, you would write something like
criticalDatabaseOperation()
    .map_err(|e| format!("Something went wrong in the DB: {}", e))?;

saveUser(user)
    .map_err(|e| format!("Could not save user: {}", e))
Which almost looks trivial, because the only addition is a ? to ensure that errors in the first function call cause an early return. But if you omit the ?, the compiler will warn you, because it knows the unused Result, and the error inside it, would be silently swallowed.
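To make the early return concrete, here’s a minimal, self-contained sketch of what the surrounding function might look like (the stub types and bodies are mine, and I’ve kept the camelCase names from the snippet above, hence the allow attribute):

#![allow(non_snake_case)]

struct User;

fn criticalDatabaseOperation() -> Result<(), String> { Ok(()) }
fn saveUser(_user: User) -> Result<(), String> { Ok(()) }

fn handleRequest(user: User) -> Result<(), String> {
    // `?` returns early with the wrapped error. Dropping the `?` and ending
    // the statement with `;` leaves an unused Result, which the compiler
    // flags because Result is marked #[must_use].
    criticalDatabaseOperation()
        .map_err(|e| format!("Something went wrong in the DB: {}", e))?;

    // The final expression is the function's return value, so no `?` needed.
    saveUser(user)
        .map_err(|e| format!("Could not save user: {}", e))
}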
Now I do think that the error chains that the author talks about are nice. And I think they demonstrate something which does conform to one of Go’s tenets, which is simplicity. I think being able to do error chaining by building a string is great for debugging, and it seems this comes from Go’s decision to have errors be mostly strings (or string-like).
Rust and Haskell both suffer from the abstraction problem, which tempts developers into writing code as specific as possible in the data and as general as possible in the functions. In the case of errors, it results (ha!) in having to do things which people rightfully complain about, such as wrapping the myriad errors that different libraries use. Yes, you could deal with it by formatting all of the errors into strings, but knowing that you could do it a “better” way is enough to guilt you into going down that rabbit hole.
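To illustrate the rabbit hole, here’s a sketch of the kind of wrapper you end up writing (the two “library” modules are stand-ins I made up):

// Stand-ins for two third-party libraries' error types.
mod db_lib { #[derive(Debug)] pub struct Error; }
mod http_lib { #[derive(Debug)] pub struct Error; }

// The wrapper enum you feel obliged to write so `?` works across both.
#[derive(Debug)]
enum AppError {
    Db(db_lib::Error),
    Http(http_lib::Error),
}

impl From<db_lib::Error> for AppError {
    fn from(e: db_lib::Error) -> Self { AppError::Db(e) }
}
impl From<http_lib::Error> for AppError {
    fn from(e: http_lib::Error) -> Self { AppError::Http(e) }
}

fn query() -> Result<(), db_lib::Error> { Ok(()) }
fn fetch() -> Result<(), http_lib::Error> { Ok(()) }

fn run() -> Result<(), AppError> {
    query()?; // `?` auto-converts each library's error via the From impls
    fetch()?;
    Ok(())
}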
But if every Rust library’s error type was String, or its type system more dynamic, then I don’t think there would be an argument for Go having the better approach. I am in favor of having the compiler hold my hand.
BTW, I don’t want to make the claim that Rust and Haskell have solved error handling. Whispers in the wind tell me that algebraic effects are great, and lispers in the wind that their error conditions (if I have the right name) are too. Knowing what little I do about error handling, I must say I am inclined to disagree that Go’s is awesome.
I like this idea! Do you think it’s extreme to try to implement dark/light mode using static HTML? I can’t seem to find a good JavaScript-less workaround that gives people the option to deviate from their system preference.
But it sure feels like overkill to generate a copy of each page just to avoid making someone enable JS to change the colors on their screen… which I don’t even do because I prefer everything in dark mode anyway.
I don’t understand why so many web sites implement a dark mode toggle anyway. If your page uses CSS conditionally on prefers-color-scheme to apply a light theme or dark theme depending on the user’s system preference, why isn’t that enough?
For example, if the user is looking at your page in light theme and suddenly they think their bright screen is hurting their eyes, wouldn’t they change their system preference or their browser’s preference to dark? (If they don’t solve the problem by just lowering their screen brightness.) After they do so, not only your page but all their other apps would look dark, fixing their problem more thoroughly.
For apps (native or web) the user hangs around in for a long time, I can see some reasons to allow customizing the app’s theme to differ from the system’s. A user of an image editing app might want a light or dark mode depending on the brightness of the images they edit, or a user might want to theme an app’s windows so it’s easily recognizable in their window switcher. But for the average blog website, these reasons don’t apply.
I am curious about how many people use it as well. But it certainly is easier to change by clicking a button in your window than going into your system or browser settings, which makes me think that it would be nice to add. Again, for the imagined person who decides to deviate from their system preference.
Although you’ve made me realize that even thinking about this without putting work into other, known-to-be-used accessibility features is kind of ridiculous. There is lower hanging fruit.
Here’s a concrete example. I generally keep my browser set to dark mode. However, when using dark mode, the online training portal at work switches from black text on a white background to white text on a white background. If I wanted to read the training material, I would need to go into my browser settings and switch to light mode, which then ruins any other tab I would switch to.
If there was a toggle button at the training portal, I could switch off dark mode for that specific site, making the text readable but not breaking my other tabs. Or, if the training portal at work won’t add the button, I could at least re-enable dark mode in every tab whose site had added such a toggle.
Sure, but only on sites that provide a button. It seems a little silly that one bad site should mean that you change your settings on every other site / don’t have your preferred theme on those sites.
Given how widely different colour schemes can vary, even just within the broad realms of “light” and “dark”, I can imagine some users would prefer to see some sites in light mode, even if they want to see everything else in dark mode. It’s the same reason I’ve set my browser to increase the font size for certain websites, despite mostly liking the defaults.
It would be nicer if this could be done at the browser level, rather than individually for each site (i.e. if there was a toggle somewhere in the browser UI to switch between light/dark mode, and if the browser could remember this preference). As it is, a lot of sites that do have this toggle need to either handle the preference server-side (not possible with static sites, unnecessary cookies), handle the preference client-side (FOUC, also unnecessary cookies), or not save the preference at all and have the user manually toggle on every visit. None of these options are really ideal.
That said, I still have a theme switcher on my own site, mostly because I wanted to show off that I made two different colour schemes for my website, and that I’m proud of both of them… ;)
I remember the days when you could do <link rel="alternate stylesheet" title="thing" href="..."> and the browser would provide its own nice little UI for switching. Actually, Firefox still does if you look down its menu (View -> Page Style), but it doesn’t remember your preference across loads or refreshes, so meh, not a good user experience. But hey, page transitions are an IE6 feature coming back again, so maybe alternate stylesheets will too someday.
The prefers-color-scheme CSS thing really ought to be a trivial button in the browser UI too. I’m pretty sure it is somewhere in the F12 things, but I can’t even find it, so woe on the users lol.
But on the topic in general too, like I think static html is overrated. Remember you can always generate html on demand with a trivial program on the server with these changes and still use all the same browser features…
I’ve been preparing something like this. You can do it with CSS pseudo-classes and a checkbox: :root:has(#checkbox-id:checked) or so; then you use this to either ‘respect’ the system theme, or invert it.
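Roughly, a sketch of what I mean (the checkbox id and the colors are arbitrary):

/* Follow the system preference by default. */
:root { --bg: white; --fg: black; }
@media (prefers-color-scheme: dark) {
  :root { --bg: black; --fg: white; }
}

/* When the toggle checkbox is checked, invert whatever the system asked for. */
:root:has(#theme-toggle:checked) { --bg: black; --fg: white; }
@media (prefers-color-scheme: dark) {
  :root:has(#theme-toggle:checked) { --bg: white; --fg: black; }
}

body { background: var(--bg); color: var(--fg); }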
The problems I’m having with this approach:
navigating away resets the checkbox state
svg and picture elements have support for dark/light system theme, but not for this solution
Yeah, I think I saw the checkbox trick before, but the problems you outline make the site/page/dark and site/page/light solution seem more enticing, since they can avoid especially the state reset issue. I like the idea of respecting/inverting the system theme as a way of preserving a good default, though!
Yeah, as an alternative, for the state issue I was thinking of using a cookie + choose the styles based on it, but that brings a whole host of other “issues”
Not to treat things hyperbolically, but this is honestly the most beautiful keyboard I have ever seen.
I currently use a ZSA Moonlander and have been considering the ZSA Voyager to switch to a low-profile keyboard, but I wasn’t completely sold on it. The thing is, I feel that the limited split keyboard market is mainly RGB/gamer-focused. By contrast, Bayleaf is the type of tool that I would love to use daily as a programmer. The commercially finished look was an excellent idea, and I’m particularly looking forward to seeing the improved thumb cluster. I would easily spend ZSA keyboard amounts, and probably more, for a keyboard like this.
Also, I could see this keyboard pairing really well with Fractal Design products.
Lastly, I would love to learn the best way to stay up to date on this keyboard’s development!
However, the market for ergo (low-profile) keyboards has gotten a lot better in recent years. Although a lot of things have RGB (like the Moonlander), I would say most of the ergo keyboards aren’t gamer-focused.
Copying my comment from the orange site in case you’re interested in looking at some others:
The latter is pretty awesome. Use a GUI to generate a custom keyboard that you just need to put together. Alas it seems like there is no PCB support for LP switches (at least the Gateron and Kailh Choc).
Do most people using split keyboards have small hands? The thumb cluster on the Ergodox is one of its best features, and I never ever see it replicated.
Can you reach every key on that cluster comfortably? I don’t think I have small hands, but have to admit that not all of the keys there are easily reachable for me.
I also have an ergodox and I’m a hair under 6’ and I literally never use any thumb cluster key other than the one closest to my right thumb. I’m not even sure what all of the other keys do even though I wrote this layout myself
lol I actually forgot that I use one of the left thumb keys as a layer modifier to change the home row into arrow keys. The vast majority of my modifiers are on the left hand. I use emacs, so I use alt/meta quite a lot, which is why it has a relatively prominent placement. The URL below should have a rough outline of my current layout, but I’m not sure if it’s actually public or not.
Yeah, I’ve never had an issue pressing the furthest thumb cluster buttons on occasion, and my hands aren’t huge (I’m 6’2” to give you some sense of scale), so it’s not like you can’t occasionally stretch for them. Mine have home/end on one side and pgup/pgdn on the other, since I don’t use those often, but when I do, I don’t want to have to navigate to another layer.
I have a Glove 80 and I can stretch to reach all of the keys on its cluster, but only find the two closest to my thumb to be comfortable.
I would say I have average-to-small sized hands. I don’t know of a good way to measure hands, but I can reach ninths (an octave plus one key, in case you’re a music theory luddite like me) pretty comfortably on a musical keyboard.
Not trying to pick on this blog post in particular, but it bothers me how much of the anti-AI writing is framed. It feels like they almost always start out with an explanation of “why the AI sucks” (quality issues, as described in the article), as if they would gladly welcome the AI if it can produce content on par with humans. I mean this is one interpretation of the title, “the empty promise” being that of matching human quality.
This opens up the articles to several valid counterarguments, the gist of them being that it’s not unreasonable to expect that the limitations of AI creativity can be overcome in the near term, or even today. Keeping a human in the loop is the most immediately accessible approach, but you can also try to circumvent some of the issues with hallucinations or meandering by having the AI plan its prose and execute smaller chunks. Not to mention that the context windows of some models are growing. Finally, for this particular piece, let’s not forget that video game dialogue and plots can be notoriously, laughably bad. If your standards for AI creativity are so high that some humans won’t meet them, are they reasonable standards?
I wish more of these articles would make ethical objections to the use of LLMs their central argument. If I were to forbid the use of AI in my product, it would have to be because I have a less flimsy objection to its use than “it’s bad.”
There’s bad as in low quality, and there’s bad as in morally wrong. I too find the ethical arguments much more compelling, but in any case, these are different kinds of arguments. I don’t think they can be fairly compared with each other, except on an amoral utilitarian basis, which is (ironically) what you’ve done in this comment!
When I protest the use of AI as unethical (bad for others) or unhealthy (bad for oneself), I do so not to make a more convincing argument or, as you say, one less open to obvious and valid counterarguments, but rather to expand the frame of discussion beyond aesthetics or utility, where things often tend to get stuck here in the realm of technology talk. I want people to consider these dimensions, even if they come to different conclusions than mine: I’m less interested in winning than in teaching.
There’s bad as in low quality, and there’s bad as in morally wrong. I too find the ethical arguments much more compelling, but in any case, these are different kinds of arguments.
Agreed.
I don’t think they can be fairly compared with each other, except on an amoral utilitarian basis
I’m not sure where you get the notion that I’m trying to make an amoral utilitarian comparison here. (I also don’t understand what you mean by amoral in this context.) The article’s thesis is “AI is bad because it’s (1) low quality and (2) unethical.” My point is that I would rather the article be about (2) because I find its claims on (1) not compelling. I of course respect the right of the author to say what they want, and I’m not saying that I think it’s invalid to ever discuss (1) — rather it would validate my own biases to read compelling evidence of it.
I’m less interested in winning than in teaching.
I don’t think we disagree. I would personally rather not teach poorly, though. Blanket-rejecting AI for dubious claims of “fundamental” inadequacy feels too preachy and vibes-based to me. I think it’s fine to make this kind of blanket-rejection if you have objections, for example, to the AI’s existence (such as (2) — or even spiritual ones, as linked on this site a few months ago), but not if you’re going to talk of its utility.
I don’t think we disagree either. I just meant “amoral utility” in the sense that, say, a nuclear weapon or a veal crate or whatever technical artifact with inextricable moral implications can be evaluated as “better” or “worse” purely in terms of its functional effectiveness, with no regard to those implications at all: what we were calling “quality”.
I understood you to be (probably inadvertently) comparing the two kinds of arguments on the basis of their effectiveness (i.e. “quality”). Which I do think ironic, although I intend no disrespect.
I would like to represent the delegation of broke people in their 20s whose tech salaries are efficiently absorbed by their student loans:
You don’t need a smart bed. My mattress cost $200 and my bedframe cost <50. I sleep fine. I know as people age they need more back support but you do NOT need this. $250 worth of bed is FINE. You will survive!!
I’m not sure I agree. Like if you are living paycheck-to-paycheck then yeah, probably don’t drop $2k on a mattress. But I believe pretty strongly in spending good money on things you use every day.
The way it was explained to me that aligned with my frugal-by-nature mindset was basically an amortization argument. You (hopefully) use a bed every single day. So even if you only keep your bed for a single year (maybe these newfangled cloud-powered beds will also have planned obsolescence built-in, but the beds I know of should last at least a decade), that’s like 5 bucks a day. Which is like, a coffee or something in this economy. I know my colleagues and I will sometimes take an extra coffee break some days, which could be a get up and walk break instead.
You might be young now, but in your situation I would rather save for my old age than borrow against my youth. And for what it’s worth I have friends in their 20s with back problems.
(of course, do your own research to figure out what sort of benefits a mattress will give to your sleep, back, etc. my point is more that even if the perceived benefits feel minimal, so too do the costs when you consider the usage you get)
Mattresses are known to have a rather high markup, and the salesmen have honed the arguments you just re-iterated to perfection. There are plenty of items I’ve used nearly daily for a decade or more. Cutlery, pots, my wallet, certain bags, my bike, etc. None of them cost anywhere near $2000. Yes, amortized on a daily basis, their cost comes to pennies, which is why life is affordable.
Yes, there are bad mattresses that will exacerbate bad sleep and back problems. I’ve slept on some of them. When you have one of those, you’ll feel it. If you wake up rested, without pains or muscle aches in the morning, you’re fine.
I too lament that there are things we buy which have unreasonable markups, possibly without any benefits from the markups at all. I guess my point is more that I believe – for the important things in life – erring on the side of “too much” is fine. I personally have not been grifted by a $2k temperature-controlled mattress, but if it legitimately helped my sleep I wouldn’t feel bad about the spend. So long as I’m not buying one every month.
I think one point you’re glossing over is that sometimes you have to pay an ignorance tax. I know about PCs, so I can tell you that the RGB tower with gaming branding plastered all over it is a grift [1]. And I know enough about the purpose my kitchen knife serves to know that while it looks cool, the most that the $1k chef’s knife could get me is faster and more cleanly cut veggies [2].
You sound confident in your understanding of mattresses, and that’s a confidence I don’t know if I share. But if I think of a field I am confident in, like buying PCs, I would rather be the guy who buys the overly marked-up PC that works well for him than the one who walks away with a steal that doesn’t meet his needs. Obviously we want to always live in the sweet spot of matching spend to necessity, but I don’t know if it’s always so easy.
[1] except for when companies are unloading their old stock and it’s actually cheap.
[2] but maybe, amortized, that is worth it to you. I won’t pretend to always be making the right decisions.
I personally have not been grifted by a $2k temperature-controlled mattress, but if it legitimately helped my sleep I wouldn’t feel bad about the spend.
Note, because it’s not super obvious from the article: the $2k (or up to about 5k EUR for the newest version) is only the temperature-control, the mattress is extra.
All that said: having suffered from severe sleep issues for a stretch of years, I can totally understand how any amount of thousands feels like a steal to make them go away.
One of the big virtues of the age of the internet is that you can pay your ignorance tax with a few hours of research.
In any case, framing it as ‘$5 a day’ doesn’t make it seem like a lot until you calculate your daily take-home pay. For most people, $5 is like 10% of their daily income. You can probably afford being ignorant about a few purchases, but not about all of them.
One of the big virtues of the age of the internet is that you can pay your ignorance tax with a few hours of research.
Maybe I would have agreed with you five years ago, but I don’t feel the same way today. Even for simple factual things I feel like the amount of misinformation and slop has gone up, much less things for which we don’t have straight answers.
For most people, $5 is like 10% of their daily income. You can probably afford being ignorant about a few purchases, but not about all of them.
Your point is valid. I agree that we can’t 5-bucks-of-coffee-a-day away every purchase we make. Hopefully the ignorance tax we pay is much less than 10% of our daily income.
I think smart features and good quality are completely separate issues. When I was young, I also had a cheap bed, cheap keyboard, cheap desk, cheap chair, etc. Now that I’m older, I kinda regret that I didn’t get better stuff at a younger age (though I couldn’t really afford it, junior/medior Dutch/German IT jobs don’t pay that well + also a sizable student loan). More ergonomic is better long-term and generally more expensive.
Smart features, on the other hand, are totally useless. But unfortunately, they go together a bit. E.g. a lot of good Miele washing machines (which do last longer, if you look at the statistics from repair shops) and things like the non-basic Oral-B toothbrushes have Bluetooth smart features. We just ignore them, but I’d rather have these otherwise good products without the smart crap.
Also, while I’m on a soapbox – Smart TVs are the worst thing to happen. I have my own streaming box, thank you. Many of them take screenshots to spy on you (the samba.tv crap, etc).
Also, while I’m on a soapbox – Smart TVs are the worst thing to happen. I have my own streaming box, thank you. Many of them take screenshots to spy on you (the samba.tv crap, etc).
Yes, absolutely! Although it would be cool to be able to run a mainline kernel and some sort of Kodi, cutting all the crap…
You don’t need a smart bed. My mattress cost $200 and my bedframe cost <50. I sleep fine. I know as people age they need more back support but you do NOT need this. $250 worth of bed is FINE. You will survive!!
I guess you never experienced a period of serious insomnia. It can make you desperate. Your whole life falls into shambles, you become a complete wreck, and you can’t resolve the problem, while everybody else around you seems to be able to just go to bed, close their eyes, and sleep.
There is so much more to sleep than whether your mattress can support your back. While I don’t think I would ever buy such a ludicrous product, I have sympathy for the people who try this out of sheer desperation. At the end of the day, having Jeff Bezos in your bed and some sleep is actually better than having no sleep at all.
You make some good points why this kind of product shouldn’t exist and anything but a standard mattress should be a matter of medical professionals and sleep studies. When people are delirious from a lack of sleep and desperate, these options shouldn’t be there to take advantage of them. I’m surprised at the crazy number of mattress stores out there in the age of really-very-good sub-$1,000 mattresses you can have delivered to your door. I think we could do more to protect people from their worn out selves.
None of the old people in my family feel the need for an internet connected bed (that stops working during an internet or power outage). Also, I imagine that knowing you are being spied on in your sleep by some creepy amoral tech company does not improve sleep quality.
I do know that creepy amoral tech companies collect tons of personal data so that they can monetize it on the market (grey or otherwise). Knowing that you didn’t use your bed last night would be valuable information for some grey-market data consumers, I imagine. This seems like a ripe opportunity for organized crime to coordinate house break-ins using an app.
I believe the people who buy this want to basically experience the most technological “advanced” thing they can pay for. They don’t “need” it. It’s more about the experience and the bragging rights, but I could be wrong.
I’m sorry to somewhat disagree. The reason I would buy this (not at that price tag; I had actually looked into this product) is because I am a wildly hot person/sleeper. I have just a flat sheet on and I am still sweating. I have ceiling fans running and additional fans added. This is not only about the experience, unless a good night’s sleep is now considered “an experience”. I legitimately wear shorts even in up to a foot of snow.
Ouch… Please do not follow this piece of advice. A lot of cheap mattresses contain “cancer dust”[1] that you just breathe in while you sleep. You most likely don’t want to buy the most expensive mattress either, because many of the very expensive mattresses are just cheap mattresses made overseas with expensive marketing.
The best thing to do is to look at the independent consumer test results for your local market. (In Germany, where I live, it’s “Stiftung Warentest,” and in France, where I’m from, it’s “60 millions de consommateurs.” I don’t know what it is in the US.)
A good mattress is not expensive, but it’s not cheap either. I spend 8 hours sleeping on this every day, I don’t want to cheap out.
[1] I don’t mean literal cancer dust. It’s usually just foam dust created when the mattress foam was cut, or when it rubs against the cover. People jokingly call it “cancer dust”
Llama 3 used 500 MWh and GPT-3 training used 1,287 MWh—even more if you include the cost of training failed models which preceded these, the experiments that made the models possible. The listed figures are high, and 500 MWh is about the cost of a large jet flying for 7 hours.
I suspect this comparison is off by several orders of magnitude, although I’m no stranger to the idea that people massively underrate the climate impact of jet travel. For example, a single round-trip cross-country flight emits more carbon than citizens of some poor countries do in an entire year, and in the same ballpark as carbon emissions saved by going vegetarian for a year. I was once on a Delta flight and their pre-flight branding video featured someone arrogantly declaring that we all ought to start caring about climate change, which was very darkly funny.
Many years ago I worked on a carbon calculator app for a while, and it was so depressing seeing how anything you did to reduce your footprint in terms of domestic energy usage choices, public transport etc, meant almost nothing if you also took a return international flight that year.
Yeah. For this reason we just stopped taking flights for vacations. My last flight was from NL to Ireland in 2019.
The cognitive dissonance of people is real. I know quite a lot of people who say they are strongly in favor of cutting carbon emissions and yet fly to a far-away country for holidays every year. Just admit that you are not.
I had some hope that after COVID conferences would go online more. Lots of people were saying at the time “this works, it’s better for the environment, and it makes conferences more accessible to people from low-income countries”. But once COVID was over, people were traveling to far-away conferences again.
Online conferences don’t work for everyone though. I haven’t gone to an in-person one in a while, but I’ve also stopped doing online ones because I didn’t get anything from them. My benefit was talking to people in the hallway and at the social events - for me, mostly the people who don’t post on social media 24/7 or aren’t active in the IRC channels and bug trackers (I mean, also some of them). So to the question of “why did people continue going in person” - I suppose, for a good part of us, it’s either that or no good conferences.
That’s mostly my experience, but I found a couple of events that used GatherTown pretty good. It’s an 8-bit Zelda-style world that your avatar walks around, with a few nice features. When you get close to a group, you start to hear them. When you get very close, you see their camera as well. This makes it easy to walk around a ‘space’ and join in conversations that seem interesting. Unlike a physical conference, twenty people could stand on the same square, and you couldn’t hear people five squares away at all, so the workable conversation size scaled up better.
I attended one event that used that platform, and it negated the usual accessibility advantages (for blind people) of online meetings, requiring me to have a virtual sighted guide. Let’s not bring the worst parts of the physical world into the online one.
Ah yeah, I attended one of those and it didn’t really work out at all. Hardly any discussions, everyone seemed to just do bathroom breaks between the talks (also 1 day conf, no longer breaks).
I’m not saying it can’t work - I’ve just not seen it work - as a 25y IRC user, I’m certainly not shy to type.
Thanks for the counter-point. I think it’s a two-edged sword. I’ve heard from introverts that online conferences with chat made it easier for them to approach people than IRL conferences.
While I largely agree with you on the socializing/networking aspect, I also feel that ~two years was really too short a time to figure out how online conferences could be improved in this respect.
I can’t find it now, but I remember someone comparing the CO2 impact of travel to the NeurIPS machine learning conference with the cost of training one of the leading models, and the conference was a lot more expensive.
I think it’s unproductive to put the onus on individuals to reduce their carbon footprint. It’s a drop in the bucket; like you said, a lot of hard work can be very easily offset by “small” decisions, and a very large majority of people don’t even care.
The key is legislation to nudge industry into reducing energy usage and switching to more sustainable forms of energy. Flying electrically could be one of those, which would then hopefully make it less impactful to go on international flights. This has to be coordinated worldwide. It makes no sense for small (wealthy) countries to go on a radical energy shift while huge developing countries like e.g. China and India keep polluting by the truckload. Yes, this is unfair, so there needs to be some form of compensation from wealthy to developing countries to help them with the energy transition.
Flying electrically could be one of those, which would then hopefully make it less impactful to go on international flights.
A lot of these technological solutions are still so far off that they don’t help short- or mid-term. The harsh truth is that it’s simply not possible to maintain the current level of consumerism, yearly holiday/conference flights, etc. if we want to get anywhere close to a target that avoids large consequences (famine, large numbers of refugees, etc.). Long-term we could maybe go back to the current consumption/travel levels, but it will take another 50 years to roll out enough solar, wind, and nuclear, and to have electric planes (if ever), boats, etc.
The problem is that no government is going to tell its citizens to stop flying every year or getting a new smartphone every year, let alone enforce it. Not in normal times, let alone in these times of political instability. The only way is changing the hearts and minds of people one by one, by setting an example and explaining. A lot of people do believe in climate change and believe something needs to be done; they just don’t want to make the sacrifices themselves (yet) and do mental gymnastics along the lines of “but if I don’t take that plane, it will still fly”.
It’s most likely too little, too late. We are not doing great; the only temporary reduction was during the initial phase of COVID, with the reduction in transport being one of the most important factors. But I at least want to go down fighting.
I share your concerns and pessimism. Unfortunately, abstinence-based campaigns tend to fail. Look at the tobacco industry; it took a full frontal multi-pronged assault to get people to stop smoking and it’s still unclear whether it really helped because now there’s a vaping problem that’s possibly worse than the tobacco smoking ever was. What it took there (off the top of my head):
A wide ban on advertising
Media coverage on the impact it has on your health
Disallowing attractive packaging
Clear warning labels on packaging
Heavy (very, very heavy!) taxation
Mostly banning public smoking
Something similar needs to be done for CO2-heavy consumer goods and services.
Tobacco is a really good example of the difficulty of the problem. We humans are good at dealing with immediate threats (which makes sense from an evolutionary perspective). The consequences of smoking tobacco are much more invisible, but still a lot of us have lost loved ones to cancer that was likely caused by tobacco. This has probably helped a lot as well.
Climate change is even more abstract as a threat, most people are not yet affected by it. Even though the numbers are already large, grave consequences currently only affect a fraction of a percentage of the people who cause most of the emissions.
Something similar needs to be done for CO2-heavy consumer goods and services.
We have done that for 10-20 years already at a very slow pace. Going faster will probably upset a lot of people if it’s not their own choice. And unfortunately, the political tides have changed, climate policy has seen a strong decrease in priority in the US and Europe recently. I am sure that the use of renewables will increase, it’s becoming increasingly economically attractive, but I’m afraid that strong improvements in climate regulations will only return when the population asks for them again.
Tobacco is a really good example of the difficulty of the problem. We humans are good at dealing with immediate threats (which makes sense from an evolutionary perspective).
I dare say that this is the problem with living ethically today. We’re surrounded by choices whose effects are felt at a distance, both because our contributions are individually small and because those affected are far from our view. And though I call them “choices,” to many they are assumed givens or necessities.
Take environmentalism as an example. Many of us grow up riding in cars everywhere, taking flights each year for vacations, and eating meat daily. Everyone around us does this too, which contributes to their normalization. As a result, to give up any of these things, much less all of them, sounds like madness.
Or what about buying chocolate? If some bars are produced using slave labor, we don’t actually see that labor. Yes, you could buy the 3x more expensive “ethical” chocolate, but if you don’t think about it, then you can’t be bothered by buying the cheaper one. And money is tight…
We learn to selectively ignore the problems with our accustomed comforts. And when pressed, we make up reasons to keep them. Taylor Swift should curb her jet emissions first. Public transit is too loud and slow for me. How can you be sure that my chocolate was made by slaves?
I think it’s such a tricky spot to be in because there are powerful forces at work to keep us complacent. And even in their absence, human nature compels us by way of peer pressure (e.g. everybody’s doing it) and fallacy (e.g. what impact do I have as an individual?). Increasingly I am becoming convinced that being more moral means – ultimately – questioning and possibly changing every thing that we do. And honestly that’s exhausting. It’s so much easier to put your head down, plug your ears, and let Greta Thunberg handle it. I’ll vote for a half-cent sales tax increase on plane tickets. I support moral causes in principle, just don’t make me be the one to put them to practice.
What fortuitous timing. I was just working on switching my blog over to using Atkinson Hyperlegible for its body font. I’ll have to see how its mono font looks.
We’ll see if the blog restyle ever makes it past the combination of my high standards and low CSS skill.
Well, at the risk of being totally wrong… in the next five years:
The AI bubble bursts. Maybe not as catastrophic a failure as the other bursts we’ve seen in decades past, since I think that the AI of today has some value, even if overhyped. But we don’t get AGI, nor do we get armies of Devins doing all software engineering. This has a few ramifications:
The VCs lose their gambit big time and the only surviving big AI players are the ones who have a solid niche (I still haven’t seen any news about AI used in the adult industry, but surely it can be a big player there, even if just with chatbots) or those who pivot.
Software engineering as a whole takes a hit, as companies discover Devins will not in fact replace their juniors. Until this point, however, the market will continue to be tough, especially for new grads. When everyone finally realizes that they still need to be hiring and training up juniors, there’s going to be a shortage of senior talent. This will be especially compounded by the average ability of junior engineers decreasing due to the combination of COVID schooling and ChatGPT.
AI tooling eventually gets folded into everything. It’s useful, but not revolutionary. The seniors who get good at using it, however, become even more valuable.
The (western) world takes a hit from the various crashes, possibly enough to start the recession that has supposedly been looming for the past 5+ years, although I think if that happens there would need to be other contributing factors than just AI flopping.
And because I can dream: refinement types reach the mainstream, like gradual types did with TypeScript (though probably via a less successful tool).
LLMs are trained on existing code, written by humans. I have yet to see a good response to what’s going to happen when they start training on LLM-generated code and errors are compounded. Perhaps “programmers” in the future will have as a necessary skill “fitness testing” to see if the LLM-generated code actually does what it’s supposed to do, or the derived model actually models the use case, or whatever…
…but spicy autocomplete trained on existing corpora will never, from an information-theoretic perspective, contain more information than those initial corpora. To say that LLMs will do all the coding is to say there will never be truly new coding ever again.
(I suspect that there may be some sea change in the future where we join LLMs with modeling or something to produce formal specifications more quickly and comprehensively, and then pass those models to a (non-LLM) code generator, but it would still need a human to check that the derived specifications actually match the domain. Basically, short of true AGI, we’re not gonna be completely removing humans from coding for a while.)
…but spicy autocomplete trained on existing corpora will never, from an information-theoretic perspective, contain more information than those initial corpora. To say that LLMs will do all the coding is to say there will never be truly new coding ever again.
I don’t understand the information-theoretic argument, seeing as you can come up with example dumb programs no one has written that LLMs will be able to produce (“Write me a program that prints Hello World!! to the console in blue if it is 11:11 PM or AM in UTC+0, otherwise in green”). I suspect what you’re saying is that transfer learning, or whatever tactics these models use to generalize their training data, has some limit, and that presently the limit of human developers lies beyond it.
I think an interesting question is “how many of the programs we care about have been written [and are on the internet to train]?” I would guess that part of the flashiness of LLMs is that we tend to evaluate them on small programs/functions which may have already been written before. So the answer to that question would be “most of the simple programs have been written,” thus explaining the hype when you tell the LLM to write my sample dumb program and it succeeds.
If the hardest problems in programming that senior devs do, like architecting and maintaining large pieces of software, have as wide a range of options to solve as we think, then I would agree that the comparative dearth of training data could pose a challenge for LLMs. Especially if their reasoning is tantamount to selecting and modifying some existing piece of code that they have seen before.
I think I agree with the general sentiment of your parenthetical, though, which is that we’ll need something more than just LLMs before machines can hope to replace human developers.
I think you highly overestimate the amount of novel programming out there. We do not invent new sorting algorithms every day, nor do we solve the travelling salesperson problem every week. Most of the work is doing the same CRUD e-commerce site over and over again. Sure, you might not have coded exactly the same button character-for-character twice, but you surely did a whole lot of buttons that do very similar things.
Another point is that code is not the only corpus that goes into training data. Is it new code if it comes from a paper that describes an algorithm but doesn’t provide a reference implementation? Is it novel code if it recreates a system from the documentation/specification of a system whose code is not in the training corpus? Is it new code if it’s a translation from another language and there’s no implementation in the target language? Is it new code if it’s an OOP rewrite of an FP implementation?
There’s a lot that can be done with the non-code component of the training corpus that might produce useful code. After all, at one point all of science was pushed forward by polymaths drawing on connections between disciplines. It’s still a major source of inspiration for novel discoveries. Why can’t LLMs do the same, especially since they are, in essence, encoded patterns?
Maybe the author is a selfish asshole with no appreciation for others, but I value humanity over productivity. GenAI guys are hard for slop and consume for consumption’s sake, like a capitalist psychosis.
A piece is left out. It’s the expectation that there will be a market for these hand crafted computational artifacts, like there is a market for hand-crafted luxury physical items and people are willing to pay more.
There won’t be a market.
Yeah, fuck you, pal. I’ve paid for Free Software and I’ll happily pay for hand-crafted software.
It was pretty harsh, I’ll admit that was an unfair thing to say. His post made me angry and I commented heedlessly. My point still stands - he doesn’t value creation, he values creations: the work of individuals is incidental and can be dismissed. I can’t quite articulate how vile a worldview this article presents to me.
OK, I’ll bite. I agree with what the author says about “artisanal software.” I don’t see what’s vile about viewing it as not inherently more valuable.
I think any hard work a human does is commendable. I respect people who memorize digits of pi or can solve a Rubik’s cube fast by hand. Even if machines can do both with more accuracy or speed, I don’t see why the human achievement shouldn’t be lauded.
But supposing I wanted several thousand digits of pi or my Rubik’s cube to be solved – and that a machine would do it cheaper – I wouldn’t hesitate to pick the machine. I don’t think it’s unreasonable for these people to then be out of a job if a machine obviates them.
BUT, this is in a hypothetical where I am asked to choose abstractly between machine-created and human-created software.
If in the present you asked me whether I would pay more for “handmade software” then I absolutely would because I would trust it far more.
If the reliability problem of AI-generated code is solved, then the question for me is an ethical one – assuming (not unreasonably) that the AI-generated software of the future continues to amass ethical concerns. In that case, I would argue that the author should not call it “artisanal” but rather “ethically-sourced”. And I should hope that we as consumers try to prefer the latter.
I don’t see how an LLM is going to solve compositional tasks like this, because they aren’t word-probability tasks. But I could very easily imagine an LLM translating those “facts” into statements that an SMT solver could solve and offloading the work to it. Why should we want an LLM to do it itself?
“The work is really motivated to help the community make this decision about whether transformers are really the architecture we want to embrace for universal learning,”
It seems to me that there is no reason to embrace a single architecture.
“The reason why we all got curious about whether they do real reasoning is because of their amazing capabilities,”
I assume by “real” they mean “formal”.
Take basic multiplication. Standard LLMs, such as ChatGPT and GPT-4, fail badly at it.
But they’re extremely good at producing a Python program that performs multiplication.
Anyway, my point is just this:
I guess it’s interesting to try to see if we can get formal logic to emerge from statistical logic but I suspect we’re all aligned on it being unlikely to pan out.
It seems like these limitations are addressed by combining statistical and formal models. The statistical model determines the inputs and configuration of the formal model and the formal model executes based on that. I think this has borne out extremely well in my experience. For example,
word = "strawberry"
print(word.lower().count("r"))
ChatGPT will have no problem writing this code and executing it, and we completely avoid any tokenization issues or “it can’t technically count” issues. And I think that’s fine. It’s also most likely going to be radically more efficient to offload math to a math machine than to try to scale a language machine such that math is emergent.
The article discusses how to push LLMs further, which again I think is very interesting and worthwhile, but I am left wondering if it wouldn’t be far more effective to just get the LLMs to offload more and more work. Certainly the o1 approach of “okay so first im going to x, okay then y, and does that make sense? hmmm, okay yes, let me then do z” seems cool and maybe helps, but these limitations appear fundamental to modeling a formal domain using a statistical model.
I really agree with this take and I’ve been surprised that I haven’t seen more LLMs calling out to existing programs as part of their execution (although I don’t have my finger on the LLM pulse). Imagine if when you got an LLM to write you statically typed code, it also ran the type checker against it and iterated until it was correct. I understand that there is some downstream research (of LLM research) trying to interface LLMs with these things, but I would be interested to know if there is anything preventing the next big models from having more “native” tool use.
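As a rough sketch of what I mean (the llm() call here is a hypothetical stand-in for whatever code-generation API you use; the loop just runs mypy on the result and feeds the errors back):

import subprocess
import tempfile

def llm(prompt: str) -> str:
    """Hypothetical call into some code-generating model."""
    raise NotImplementedError

def generate_checked_code(task: str, max_attempts: int = 3) -> str:
    prompt = task
    for _ in range(max_attempts):
        code = llm(prompt)
        # Write the candidate code out so the type checker can see it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(["mypy", path], capture_output=True, text=True)
        if result.returncode == 0:
            return code  # The type checker is satisfied.
        # Otherwise, feed the errors back and iterate.
        prompt = f"{task}\n\nYour last attempt had type errors:\n{result.stdout}"
    raise RuntimeError("no type-correct code after several attempts")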
At that point you’re literally executing untrusted code, so it seems like fixing the strawberry problem by having it write and execute the code above would be harder (and more expensive) to turn into a general-interest product.
They already do execute arbitrary code though. They could also just run the code via wasm or use a language other than Python if there were some big concern. But again, they already run arbitrary code. ChatGPT will generate and run the code I provided.
That’s not a problem at all. Your browser is executing untrusted code all the time. As long as we properly control and limit what this code can do, all is good.
LLMs calling out to existing programs as part of their execution
It is assuredly a big deal. The acronym is RAG (Retrieval-Augmented Generation). I work for a database developer, and this is the legit part of the AI hype at work.
Nope. H100s were prohibited by the chip ban, but not H800s. Everyone assumed that training leading edge models required more interchip memory bandwidth, but that is exactly what DeepSeek optimized both their model structure and infrastructure around.
Again, just to emphasize this point, all of the decisions DeepSeek made in the design of this model only make sense if you are constrained to the H800; if DeepSeek had access to H100s, they probably would have used a larger training cluster with much fewer optimizations specifically focused on overcoming the lack of bandwidth.
This was amusing to read and feels almost like the premise for a novel. Country bans export of important resource. Opposition discovers a way to use a cheaper resource to reach parity.
And maybe (although probably not) the public-domain implementation of SMB, which I mention here as a good excuse to ask the following question:
Somewhere online, Andrew Tridgell tells the story of how Samba was inspired by Linus Torvalds being bitten by a penguin at Canberra Zoo, but I’ve lost my bookmark to it and can’t find it. Does anyone have a reference to it please?
Still looking for what tridge has written about it, in case any of youse know. I’m pretty sure he’s said something about how it (also) relates to Samba.
Rumor out there is that lots of Chinese companies have bought plenty of Nvidia hardware via Singapore. There are few users of Nvidia cards in that location, yet it accounts for 15% of Nvidia’s worldwide sales.
I am not saying those are DeepSeek’s. I actually believe the architecture and training scheme they use could be more efficient. Given how novel LLMs are, few things have been attempted at scale, so it’s logical there is some low-hanging fruit in terms of performance improvements.
Not saying it’s not China, but there are a lot of wholesalers in Singapore that distribute to APAC. As an Australian consumer it can often be cheapest to get grey market electronics from Singapore and/or HK - and local NVIDIA stock levels have been pretty tight for quite a while now.
Rumor out there is that lots of Chinese companies have bought plenty of Nvidia hardware via Singapore. There are few users of Nvidia cards in that location, yet it accounts for 15% of Nvidia’s worldwide sales.
Few users? Singapore has a huge tech footprint, with both data centres and fully staffed offices for every major player.
Meanwhile HK has confirmed turning a blind eye to sanctions. And they’re in the process of losing the last of their sweetheart trade benefits with the west as a result.
Louisiana isn’t Hong Kong and the US isn’t China, and both the domestic and international frameworks that create Louisiana and Hong Kong are barely comparable. Case in point: Louisiana hasn’t received special carveouts in both international treaties and bilateral trade policies.
Look, I’m happy to talk about this in excruciating detail, but I can’t tell what if any background you have in this topic?
I’m not sure this level of snark is appropriate given that you’re flatly wrong: the A100/H100/A800/H800 export controls apply to China and Hong Kong. While Chinese companies continue to be able to acquire them through various means (and they’re not illegal in China, just harder to get due to the American export controls), there is no difference in American export controls on GPUs between China and Hong Kong.
HK has been treated as a part of China since the 2020 EO, an EO that was renewed every year by Biden too.
However, the export controls themselves are targeted mostly at Chinese based entities. (Here’s the Federal Register link.) Moreover, HK based entities still have a heap of exemptions and licenses.
But, back to the original point: as you said, the controls have only raised the price and difficulty. And no one just straight buys a bunch of a controlled item and reports it as a direct shipment. Look at how HK copped another round when heaps of transshipment to Russia, for its war against Ukraine, were revealed.
Basically, export controls are not a simple binary. I say this having inadvertently run into them a few times.
This jumped out at me as well. I had just read DHH describing how constraints ended up contributing to success in Founders at Work last week, so the concept was on my mind.
I use a Brewfile to keep track of, and clean up, things I have installed previously. Try: brew bundle --help. Also, this: alias brewall='brew leaves | xargs brew desc --eval-all' will show everything you have installed with brew, along with a description of each package.
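For reference, a sketch of the basic workflow (the file path is arbitrary):

# Snapshot everything currently installed into a Brewfile
brew bundle dump --file=~/Brewfile

# Reinstall from it on a fresh machine
brew bundle install --file=~/Brewfile

# List (and with --force, uninstall) anything not in the Brewfile
brew bundle cleanup --file=~/Brewfile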
I manage my terminal history with Atuin, which is nice because it keeps track of history per-directory (but lets you quickly switch to other modes too, like “global”).
Starship is a pretty prompt and will show you a lot of useful information, for instance when entering a project repository, without requiring a lot of configuration.
Even though these aren’t daily ablutions, I learned about Starship today. Thanks!
Anyone here switched to it from powerlevel10k? It’s one of those things I’ll keep in the back of my mind although I could be convinced to bring it forward if there’s good reason to.
I did! The main reason I moved is because I switched to fish, and starship seemed like the best p10k alternative in the fish world. But I wouldn’t say there’s a very strong reason to switch to starship unless you want to go ham configuring your prompt. Starship is just way more configurable than p10k.
I’ve rewritten it a bit. This is about the things you do daily to clean/maintain your computer. Most of them are probably not even necessary to do but you do them anyway.
Every few days it seems like I see a new post which basically goes: ‘I did this thing with an LLM, I used to think LLMs were bad now I think they are good but not so good that I will lose my job’. I don’t disagree with the sentiment but it has been said already imo.
I agree with the sentiment, and with the quantity of posts on this.
It is my experience too (I’ve been using Cursor/Windsurf to develop apps).
I still welcome such posts and read them, because it is “comforting” to know that people have approached these LLMs from different angles and all reach the same conclusion.
It also gives me a buffer to take a step back, stop being filled with FOMO, read others’ experiments, and decide to get in only if I see 10x results different from mine.
I remain skeptical of the “no threat to my job” point, despite hoping for it to be true. I think too many of the people who say this sort of thing are in a position where it would be Very Bad for their job to become obsolete. Which means that they evaluate these tools looking for a reason why it cannot replace them.
I’m in a position of hiring software developers on a constrained budget, so it would be Very Good for my job if I could hire fewer people to achieve the same things (or, ideally, more).
Everything I’ve seen indicates that, except in situations where the developer is coming to a completely new environment (e.g. a systems programmer writing some in-browser JavaScript for the first time, or doing some numerical analysis in Python), they are a net productivity drain. Experienced developers end up accomplishing less in the same time because they’re spending more time fixing bugs that they would never have introduced themselves. The code that these things generate is the worst kind of buggy code: code that looks correct (complete with misleading comments, in some cases).
It’s a translation problem. LLMs are fairly good at translation for natural languages but they tend to fall down on nuance and homonyms where the context matters. Programming languages, by design, typically don’t have that property. Translating between languages is something I’d expect them to be moderately good at, with the caveat that the program as written must be representable in both languages. It looks as if the input was using flat or tree-structured data. Trying it with something that included cyclic data structures (which can’t be expressed in Rust without some explicit cycle breaking) would be interesting. I very rarely encounter translation problems in software development though, so this is an outlier.
The post explicitly says that the author “didn’t bother verifying how well Claude’s Rust code matched the original elisp, because I don’t actually care until and unless it has bugs I notice”. That kind of YOLO development is totally fine for a personal project that no one else uses. Not something you should rely on for anything you might want a customer to use. My experience with LLM-generated code is that the bugs are much harder to find than in human-written code because, by design, they produce code that looks correct.
There is no mention at all of how maintainable the code is (except near the end: “You can now generate thousands of lines of code at a price of mere cents; but no human will understand them”). If LLMs could completely replace programmers, that wouldn’t matter: if requirements change, just tell them to generate new code. But with LLMs, that will often introduce changes that subtly break existing functionality.
I agree with the premise of the article: for low stakes development (no one cares if it’s wrong, nearly right code is better than no code, which covers a lot of places where currently there is no code being written), LLMs are probably a win. I’d still be concerned about the studies that link LLM use to a reduction in critical thinking ability and to reduced domain-specific learning there, because I suspect they will widen, rather than narrow, the gap between people who can and can’t program.
To go meta: the prevailing sentiment on lobsters is so negative on LLMs that I think we need more people with credibility (like Nelhage or Simon Willison) to post their experiences.
Everyone should make up their own mind how useful LLMs are, but we need to break the meme that the only people interested in them are wild-eyed futurists or management types scheming to deskill programmers (that argument is a sort of reverse argument from authority).
I’d like to object to your characterization of the Lobsters “prevailing sentiment” opposition to genAI for programming or in general. Off the top of my head, here are a few reasons for opposition that you didn’t mention:
Awareness of hype cycles in general, and experience thereof. (Remember the Metaverse? Blockchain everything?)
Ethical and legal issues around the provenance of training data and process, almost entirely proprietary and secret even for current open-weight models
Dependency on cloud services in general (there’s a long tradition of that here)
Dependency on heavy GPU computing even when local, with associated energy costs
Introduction of black boxes into large projects, where nobody ever understood how they work
Hallucinations and all their ramifications
Amplification of incumbent technologies which are highly represented in training data
Damage to long-established learning and career growth pathways for junior engineers
I welcome the debate, and I do think that there are useful perspectives to be heard from the pro-genAI camp. But dismissive strawmanning basically never furthers that end. Lobsters is a rare oasis that maintains a culture of encouraging quality discourse. More pro-genAI experience reports from more credible sources will continue to experience pushback here, for the reasons above and probably some others I missed. Join the debate, by all means! But don’t just try to drown out arguments against your favored position. We can go anywhere else on the Internet for that kind of shouting-past style.
My take from reading them is that LLMs can be a good rubber duck, but that in that case they are the world’s most expensive rubber duck.
To go meta on your meta: why do we need “fair and balanced” views on LLMs here on lobste.rs? Those members of the community who find them useful and productive can just… use them, and refrain from posting if they’re getting flamed for doing so. I’m sure there are plenty of members of this community who do good productive work in “unpopular” programming languages who don’t feel the need to broadcast that.
Fundamentally, we need accuracy. And the memes that I’m describing are inaccurate.
I’m struggling to even understand your perspective. You seem to be saying we should just live with the flaming of users of unpopular languages. I’d normally consider that a reductio, and I would have considered saying “the current LLM reaction is as if, anytime someone posted an article about a C/C++ tool, most comments were there to say it’s dumb because no one should write in C.”
While I’m happy to say that certain languages are badly designed, and you have to live with the occasional “it would be better to rewrite it in Rust” or “we should avoid just rewriting software in Rust” comment, I do not think we should be flaming users of unpopular languages, or users of LLMs, or people who don’t use LLMs. A certain degree of criticism is fine. A knee-jerk echo chamber is bad.
My point is that a productive user of PHP, say, might find that Lobsters isn’t the best venue to discuss PHP, because there will probably be a vocal minority of hecklers dumping on their language choice. But that’s ok, because there are other venues which are more welcoming.
It’s the same with GenAI. There’s a section of the userbase that doesn’t like the technology, and who are prepared to let others know they don’t like it. Either tune them out, or discuss GenAI somewhere else, or just use it in your daily life and be happy and productive.
Noscript. No text. No luck. Lifting the restriction on notion.so does nothing; I’m still redirected here, with no chance to unblock whatever needs unblocking, because at this point I no longer see what I need to unblock. This is the first time I’ve seen a website redirect me to a different URL just to tell me I need to turn on JavaScript.
Sorry I can’t comment on the actual content. I… kinda didn’t get a chance to read it.
I created a gist containing the content (hopefully I didn’t break any licenses) here. Feel free to tell me I’m wrong and I will delete it. Hopefully GitHub will work better (though it might still be bad).
You need to lift it on notion.site I think, although I gave up after I couldn’t figure it out and kept getting redirected.
Web developers, please heed my plea: don’t redirect noscripting users!! Especially not to another domain.
If I’m interested enough, I’ll enable JS for your site. I’m used to doing this, despite the fact that I primarily read text and submit basic forms on the internet. I’m even patient enough to allowlist your myriad 3rd party scripts.
I have JS on by default, but I got tripped up by that too.
I noticed that the page was a bit slow and, more importantly, that it hijacks my arrow keys which I use to finetune my scroll position. Since that is often fixed by disabling JS without any ill effects, I flipped the scripts temporarily off in uBlock for the domain and reloaded. Since it would now immediately take me to another domain, to turn the scripts back on it was easiest to just restart Firefox.
I think there are two schools of thought here, each with its trade-offs, and in my opinion neither is necessarily better. One school is that a configuration file exists for configuring things; the other is that a configuration should be programmable.
VSCode, Zed, helix, etc use configuration.
Neovim, vim and Emacs use programming.
This divide also exists in places like Linux window managers.
I’ve used five of the six editors I listed (not Zed) extensively, and I understand why the configuration camp is more popular. It lends itself to easier-to-use, easier-to-maintain tools that work great out of the box. The thing Neovim and Emacs fans miss is that most people, including myself at times, want an editor that works well without the tinkering and fragility inherent in the programming model.
Helix is implementing a Scheme based plugin system, to allow extending the program without bloating it. I think all editors want to be extended eventually.
Plugins/extensions are different, though. VSCode and Zed both have extensions you can write with a programming language; the difference is that in Emacs and Neovim the extensions take the form of configuration, while in VSCode and Zed (and Helix, soon) extensions are given special powers you can’t have with configuration.
I’d go further and say that most developers are better off with configuration instead of programming. With configuration you get autocomplete, linting, extensions are less likely to break each other, etc. I say this as a person with a thousand lines of handwritten neovim programming.
This is why I like community-created configs like Doom Emacs. I get all of the cool features with much less of the config. Although I concede that when things break or I want a tweak it is a bit of a struggle since I deliberately avoided learning how to configure my editor. The communities have been pretty helpful in those circumstances.
But I do worry whether newfangled plugins will become VSCode-exclusive. So far I haven’t found anything that doesn’t have a “good enough” Emacs equivalent. But I expect there may come a day when I have to change for a killer feature (much like how in the past people switched to Emacs for Magit).
Exactly. That’s basically what I was going for (the configuration/programming distinction) but you expressed it better than how I originally wrote it in the post.
Rust and Haskell both suffer from the abstraction problem, which tempts developers into writing code as specific as possible in the data and as general as possible in the functions. In the case of errors, it results (ha!) in having to do things which people rightfully complain about, such as wrapping the myriad errors that different libraries use. Yes, you could deal with it by formatting all of the errors into strings, but knowing that you could do it a “better” way is enough to guilt you into going down that rabbit hole.
But if every Rust library’s error type were String, or its type system more dynamic, then I don’t think there would be an argument for Go having the better approach. I am in favor of having the compiler hold my hand.
BTW, I don’t want to make the claim that Rust and Haskell have solved error handling. Whispers in the wind tell me that algebraic effects are great, and lispers in the wind that their error conditions (if I have the right name) are too. Knowing what little I do about error handling, I must say I am inclined to disagree that Go’s is awesome.
I like this idea! Do you think it’s extreme to try to implement dark/light mode using static HTML? I can’t seem to find a good JavaScript-less way to give people the option to deviate from their system preference.
But it sure feels like overkill to generate a copy of each page just to avoid making someone enable JS to change the colors on their screen… which I don’t even do because I prefer everything in dark mode anyway.
There’s a CSS-only way (using a heavily restyled checkbox, keyed off its :checked state) to toggle other CSS attributes.
Today I learned that light-dark() is a thing! Thanks!
I’m using a similar idea for my own dark mode checkbox: https://isuffix.com (website is still being built).
GP comment might enjoy more examples of CSS :has() in this blog post: https://www.joshwcomeau.com/css/has/
I don’t understand why so many web sites implement a dark mode toggle anyway. If your page uses CSS conditionally on prefers-color-scheme to apply a light theme or dark theme depending on the user’s system preference, why isn’t that enough?
For example, if the user is looking at your page in light theme and suddenly they think their bright screen is hurting their eyes, wouldn’t they change their system preference or their browser’s preference to dark? (If they don’t solve the problem by just lowering their screen brightness.) After they do so, not only your page but all their other apps would look dark, fixing their problem more thoroughly.
For apps (native or web) the user hangs around in for a long time, I can see some reasons to allow customizing the app’s theme to differ from the system’s. A user of an image editing app might want a light or dark mode depending on the brightness of the images they edit, or a user might want to theme an app’s windows so it’s easily recognizable in their window switcher. But for the average blog website, these reasons don’t apply.
I am curious about how many people use it as well. But it certainly is easier to change by clicking a button in your window than going into your system or browser settings, which makes me think that it would be nice to add. Again, for the imagined person who decides to deviate from their system preference.
Although you’ve made me realize that even thinking about this without putting work into other, known-to-be-used accessibility features is kind of ridiculous. There is lower hanging fruit.
Here’s a concrete example. I generally keep my browser set to dark mode. However, when using dark mode, the online training portal at work switches from black text on a white background to white text on a white background. If I wanted to read the training material, I would need to go into my browser settings and switch to light mode, which then ruins any other tab I would switch to.
If there was a toggle button at the training portal, I could switch off dark mode for that specific site, making the text readable but not breaking my other tabs. Or, if the training portal at work won’t add the button, I could at least re-enable dark mode in every tab whose site had added such a toggle.
Or, hear me out, instead of adding javascript to allow users to work around its broken css, the training portal developers could fix its css?
(Browsers should have an easy per-site dark mode toggle, like the reader mode toggle.)
I feel like this is something to fix with stylus or a user script, maybe?
sounds like the button fixes it
Sure, but only on sites that provide a button. It seems a little silly that one bad site should mean that you change your settings on every other site / don’t have your preferred theme on those sites.
Or the DarkReader extension or similar.
Given how widely different colour schemes can vary, even just within the broad realms of “light” and “dark”, I can imagine some users would prefer to see some sites in light mode, even if they want to see everything else in dark mode. It’s the same reason I’ve set my browser to increase the font size for certain websites, despite mostly liking the defaults.
It would be nicer if this could be done at the browser level, rather than individually for each site (i.e. if there was a toggle somewhere in the browser UI to switch between light/dark mode, and if the browser could remember this preference). As it is, a lot of sites that do have this toggle need to either handle the preference server-side (not possible with static sites, unnecessary cookies), handle the preference client-side (FOUC, also unnecessary cookies), or not save the preference at all and have the user manually toggle on every visit. None of these options are really ideal.
That said, I still have a theme switcher on my own site, mostly because I wanted to show off that I made two different colour schemes for my website, and that I’m proud of both of them… ;)
I remember the days when you could do <link rel="alternate stylesheet" title="thing" href="..."> and the browser would provide its own nice little UI for switching. Actually, Firefox still does if you look down its menu (View -> Page Style), but it doesn’t remember your preference across loads or refreshes, so meh, not a good user experience. But hey, page transitions are an IE6 feature coming back again, so maybe alternate stylesheets will too someday.
The prefers-color-scheme dark mode CSS thing really also ought to be a trivial button on the browser UI too. I’m pretty sure it is somewhere in the F12 things, but I can’t even find it, so woe on the users lol.
But on the topic in general too, like I think static html is overrated. Remember you can always generate html on demand with a trivial program on the server with these changes and still use all the same browser features…
I’ve been preparing something like this. You can do it with CSS pseudo-selectors and a checkbox: :root:has(#checkbox-id:checked) or so; then you use this to either “respect” the system theme, or invert it.
The problems I’m having with this approach:
Yeah, I think I saw the checkbox trick before, but the problems you outline make the site/page/dark and site/page/light solution seem more enticing, since it can avoid the state-reset issue in particular. I like the idea of respecting/inverting the system theme as a way of preserving a good default, though!
Yeah, as an alternative for the state issue, I was thinking of using a cookie and choosing the styles based on it, but that brings a whole host of other “issues”.
Not to treat things hyperbolically, but this is honestly the most beautiful keyboard I have ever seen.
I currently use a ZSA Moonlander and have been considering the ZSA Voyager to switch to a low-profile keyboard, but I wasn’t completely sold on it. The thing is, I feel that the limited split keyboard market is mainly RGB/gamer-focused. On the contrary, Bayleaf is the type of tool that I would love to use daily as a programmer. The commercially finished look was an excellent idea, and I’m particularly looking forward to seeing the improved thumb cluster. I would easily spend ZSA keyboard amounts and probably more for a keyboard like this.
Also I could see this keyboard pairing really well with fractal design products.
Lastly, I would love to learn the best way to stay in touch with this keyboard’s development!
It’s a lovely board for sure.
However, the market for ergo (low-profile) keyboards has gotten a lot better in recent years. Although a lot of things have RGB (like the Moonlander), I would say most of the ergo keyboards aren’t gamer-focused.
Copying my comment from the orange site in case you’re interested in looking at some others:
The keyboard they were inspired by (not for sale… yet?): https://old.reddit.com/r/ErgoMechKeyboards/comments/1cfg3vr/…
Corneish (out of stock): https://lowprokb.ca/products/corne-ish-zen?variant=376943319…
Unicorne: https://new.boardsource.xyz/products/unicorne-LP
The corneish is an absolute gem in my opinion. It is possibly (probably?) open-sourced too.
Edit: Some more finds from my own perusal
Comparison of split keyboards: https://jhelvy.shinyapps.io/splitkbcompare/
Mostly open-source ergo keyboard customizer: https://ryanis.cool/cosmos/
The latter is pretty awesome. Use a GUI to generate a custom keyboard that you just need to put together. Alas, it seems there is no PCB support for LP switches (at least not the Gateron or Kailh Choc ones).
Do most people using split keyboards have small hands? The thumb cluster on the Ergodox is one of its best features, and I never ever see it replicated.
Can you reach every key on that cluster comfortably? I don’t think I have small hands, but have to admit that not all of the keys there are easily reachable for me.
I also have an ergodox and I’m a hair under 6’ and I literally never use any thumb cluster key other than the one closest to my right thumb. I’m not even sure what all of the other keys do even though I wrote this layout myself
May I ask your solution to modifiers? I’ve been using homerow mods and find it quite difficult to avoid mistakes.
I also only use a small number of the available thumb keys.
lol, I actually forgot that I use one of the left thumb keys as a layer modifier to change the home row into arrow keys. The vast majority of my modifiers are on the left hand. I use emacs, so I use alt/meta quite a lot, which is why it has a relatively prominent placement. The URL below should have a rough outline of my current layout, but I’m not sure if it’s actually public or not.
https://configure.zsa.io/ergodox-ez/layouts/v6QGK/latest/0
Yeah, I’ve never had an issue with occasionally pressing the furthest thumb-cluster buttons, and my hands aren’t huge (I’m 6’2, to give you some sense of scale), so it’s not like you can’t stretch for them now and then. Mine have home/end on one side and pgup/pgdn on the other, since I don’t use those often, but when I do, I don’t want to have to navigate to another layer.
You are 6 cm taller, so not a big difference? On my layout I basically never reach for del/f13 or the matching keys on the other side.
I had this problem too, and when my Ergodox finally packed it in, I went with a keyball44 for (in large part) this reason.
I have a Glove 80 and I can stretch to reach all of the keys on its cluster, but only find the two closest to my thumb to be comfortable.
I would say I have average-to-small sized hands. I don’t know of a good way to measure hands, but I can reach ninths (octave-to-octave +1 key in case you’re a music theory luddite like me) pretty comfortably on a musical keyboard.
I was never able to get used to the ZSAs; I do enjoy two different split keyboards in regular use:
Having said that, I can’t help but agree that the Bayleaf is gorgeous, esp for a custom built keyboard.
Not trying to pick on this blog post in particular, but it bothers me how much of the anti-AI writing is framed. It feels like they almost always start out with an explanation of “why the AI sucks” (quality issues, as described in the article), as if they would gladly welcome the AI if it can produce content on par with humans. I mean this is one interpretation of the title, “the empty promise” being that of matching human quality.
This opens up the articles to several valid counterarguments, the gist of them being that it’s not unreasonable to expect that the creativity limitations of AI can be overcome in the near term, or even at present. Keeping a human in the loop is the most immediately accessible fix, but you can also try to circumvent some of the issues with hallucinations or meandering by having the AI plan its prose and execute smaller chunks. Not to mention that the context windows of some models are growing. Finally, for this particular piece, let’s not forget that video game dialogue and plots can be notoriously, laughably bad. If your standards for AI creativity are so high that some humans won’t meet them, are they reasonable standards?
I wish more of these articles would make ethical objections to the use of LLMs their central argument. If I were to forbid the use of AI in my product, it would have to be because I have a less flimsy objection to its use than “it’s bad.”
There’s bad as in low quality, and there’s bad as in morally wrong. I too find the ethical arguments much more compelling, but in any case, these are different kinds of arguments. I don’t think they can be fairly compared with each other, except on an amoral utilitarian basis, which is (ironically) what you’ve done in this comment!
When I protest the use of AI as unethical (bad for others) or unhealthy (bad for oneself), I do so not to make a more convincing argument or, as you say, one less open to obvious and valid counterarguments, but rather to expand the frame of discussion beyond aesthetics or utility, where things often tend to get stuck here in the realm of technology talk. I want people to consider these dimensions, even if they come to different conclusions than mine: I’m less interested in winning than in teaching.
Agreed.
I’m not sure where you get the notion that I’m trying to make an amoral utilitarian comparison here. (I also don’t understand what you mean by amoral in this context.) The article’s thesis is “AI is bad because it’s (1) low quality and (2) unethical.” My point is that I would rather the article be about (2) because I find its claims on (1) not compelling. I of course respect the right of the author to say what they want, and I’m not saying that I think it’s invalid to ever discuss (1) — rather it would validate my own biases to read compelling evidence of it.
I don’t think we disagree. I would personally rather not teach poorly, though. Blanket-rejecting AI for dubious claims of “fundamental” inadequacy feels too preachy and vibes-based to me. I think it’s fine to make this kind of blanket-rejection if you have objections, for example, to the AI’s existence (such as (2) — or even spiritual ones, as linked on this site a few months ago), but not if you’re going to talk of its utility.
I don’t think we disagree either. I just meant “amoral utility” in the sense that, say, a nuclear weapon or a veal crate or whatever technical artifact with inextricable moral implications can be evaluated as “better” or “worse” purely in terms of its functional effectiveness, with no regard to those implications at all: what we were calling “quality”.
I understood you to be (probably inadvertently) comparing the two kinds of arguments on the basis of their effectiveness (i.e. “quality”). Which I do think ironic, although I intend no disrespect.
I would like to represent the delegation of broke people in their 20s whose tech salaries are efficiently absorbed by their student loans:
You don’t need a smart bed. My mattress cost $200 and my bedframe cost <50. I sleep fine. I know as people age they need more back support but you do NOT need this. $250 worth of bed is FINE. You will survive!!
I’m not sure I agree. Like if you are living paycheck-to-paycheck then yeah, probably don’t drop $2k on a mattress. But I believe pretty strongly in spending good money on things you use every day.
The way it was explained to me that aligned with my frugal-by-nature mindset was basically an amortization argument. You (hopefully) use a bed every single day. So even if you only keep your bed for a single year (maybe these newfangled cloud-powered beds will also have planned obsolescence built-in, but the beds I know of should last at least a decade), that’s like 5 bucks a day. Which is like, a coffee or something in this economy. I know my colleagues and I will sometimes take an extra coffee break some days, which could be a get up and walk break instead.
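(Concretely: $2,000 over a single year is $2,000 / 365 ≈ $5.50 a day; keep the same bed for a decade and it’s closer to $0.55 a day.)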
You might be young now, but in your situation I would rather save for my old age than borrow against my youth. And for what it’s worth I have friends in their 20s with back problems.
(of course, do your own research to figure out what sort of benefits a mattress will give to your sleep, back, etc. my point is more that even if the perceived benefits feel minimal, so too do the costs when you consider the usage you get)
Mattresses are known to have a rather high markup, and the salesmen have honed the arguments you just re-iterated to perfection. There are plenty of items I’ve used nearly daily for a decade or more. Cutlery, pots, my wallet, certain bags, my bike, etc. None of them cost anywhere near $2000. Yes, amortized on a daily basis, their cost comes to pennies, which is why life is affordable.
Yes, there are bad mattresses that will exacerbate bad sleep and back problems. I’ve slept on some of them. When you have one of those, you’ll feel it. If you wake up rested, without pains or muscle aches in the morning, you’re fine.
I too lament that there are things we buy which have unreasonable markups, possibly without any benefits from the markups at all. I guess my point is more that I believe – for the important things in life – erring on the side of “too much” is fine. I personally have not been grifted by a $2k temperature-controlled mattress, but if it legitimately helped my sleep I wouldn’t feel bad about the spend. So long as I’m not buying one every month.
I think one point you’re glossing over is that sometimes you have to pay an ignorance tax. I know about PCs, so I can tell you that the RGB tower with gaming branding plastered all over it is a grift [1]. And I know enough about the purpose my kitchen knife serves to know that while it looks cool, the most that the $1k chef’s knife could get me is faster and more cleanly cut veggies [2].
You sound confident in your understanding of mattresses, and that’s a confidence I don’t know if I share. But if I think of a field I am confident in, like buying PCs, I would rather end up as the guy who buys the overly marked-up PC that works well for him than the one who walks away with a steal that doesn’t meet his needs. Obviously we want to always live in the sweet spot of matching spend to necessity, but I don’t know if it’s always so easy.
[1] except for when companies are unloading their old stock and it’s actually cheap.
[2] but maybe, amortized, that is worth it to you. I won’t pretend to always be making the right decisions.
Note, because it’s not super obvious from the article: the $2k (or up to about 5k EUR for the newest version) is only the temperature-control, the mattress is extra.
All that said: having suffered from severe sleep issues for a stretch of years, I can totally understand how any amount of thousands feels like a steal to make them go away.
One of the big virtues of the age of the internet is that you can pay your ignorance tax with a few hours of research.
In any case, framing it as ‘$5 a day’ doesn’t make it seem like a lot until you calculate your daily take-home pay. For most people, $5 is like 10% of their daily income. You can probably afford being ignorant about a few purchases, but not about all of them.
Maybe I would have agreed with you five years ago, but I don’t feel the same way today. Even for simple factual things I feel like the amount of misinformation and slop has gone up, much less things for which we don’t have straight answers.
Your point is valid. I agree that we can’t 5-bucks-of-coffee-a-day away every purchase we make. Hopefully the ignorance tax we pay is much less than 10% of our daily income.
I think smart features and good quality are completely separate issues. When I was young, I also had a cheap bed, cheap keyboard, cheap desk, cheap chair, etc. Now that I’m older, I kinda regret that I didn’t get better stuff at a younger age (though I couldn’t really afford it, junior/medior Dutch/German IT jobs don’t pay that well + also a sizable student loan). More ergonomic is better long-term and generally more expensive.
Smart features, on the other hand, are totally useless. But unfortunately, they go together a bit. E.g. a lot of good Miele washing machines (which do last longer if you look at statistics from repair shops) or things like the non-basic Oral-B toothbrushes have Bluetooth smart features. We just ignore them, but I’d rather have these otherwise good products without the smart crap.
Also, while I’m on a soapbox: Smart TVs are the worst thing to happen. I have my own streaming box, thank you. Many of them take screenshots to spy on you (the samba.tv crap, etc).
Yes, absolutely! Although it would be cool to be able to run a mainline kernel and some sort of Kodi, cutting all the crap…
I guess you never experienced a period of serious insomnia. It can make you desperate. Your whole life falls into shambles, you become a complete wreck, and you can’t resolve the problem while everybody else around you seems to be able to just go to bed, close their eyes, and sleep.
There is so much more to sleep than whether your mattress can support your back. While I don’t think I would ever buy such a ludicrous product, I have sympathy for the people who try this out of sheer desperation. At the end of the day, having Jeff Bezos in your bed and some sleep is actually better than having no sleep at all.
You make some good points about why this kind of product shouldn’t exist and why anything but a standard mattress should be a matter for medical professionals and sleep studies. When people are delirious from lack of sleep and desperate, these options shouldn’t be there to take advantage of them. I’m surprised at the crazy number of mattress stores out there in the age of really-very-good sub-$1,000 mattresses you can have delivered to your door. I think we could do more to protect people from their worn-out selves.
None of the old people in my family feel the need for an internet connected bed (that stops working during an internet or power outage). Also, I imagine that knowing you are being spied on in your sleep by some creepy amoral tech company does not improve sleep quality.
I do know that creepy amoral tech companies collect tons of personal data so that they can monetize it on the market (grey or otherwise). Knowing that you didn’t use your bed last night would be valuable information for some grey-market data consumers, I imagine. This seems like a ripe opportunity for organized crime to coordinate house break-ins using an app.
I believe the people who buy this want to basically experience the most technological “advanced” thing they can pay for. They don’t “need” it. It’s more about the experience and the bragging rights, but I could be wrong.
I’m sorry to somewhat disagree. The reason I would buy this (not at that price tag; I had actually looked into this product) is because I am a wildly hot person/sleeper. I can have just a flat sheet on and I am still sweating. I have ceiling fans running and additional fans added. This is not only about the experience, unless a good night’s sleep is now considered “an experience”. I legitimately wear shorts even in up to a foot of snow.
As the article says, you can get the same cooling effect with an aquarium chiller for that purpose. You don’t need a cloud-only bed cooler.
Ouch… Please do not follow this piece of advice. A lot of cheap mattresses contain “cancer dust”[1] that you just breathe in while you sleep. You most likely don’t want to buy the most expensive mattress either, because many of the very expensive mattresses are just cheap mattresses made overseas with expensive marketing.
The best thing to do is to look at independent consumer test results for your local market. (In Germany, where I live, it’s “Stiftung Warentest,” and in France, where I’m from, it’s “60 millions de consommateurs.” I don’t know what the equivalent is in the US.)
A good mattress is not expensive, but it’s not cheap either. I spend 8 hours sleeping on this every day, I don’t want to cheap out.
[1] I don’t mean literal cancer dust. It’s usually just foam dust created when the mattress foam was cut, or when it rubs against the cover. People jokingly call it “cancer dust”
source?
https://www.everydayhealth.com/healthy-home/does-your-mattress-contain-fiberglass-how-to-know-and-why-its-dangerous/
wait… is it carcinogenic? Now I’m concerned lol
I wouldn’t know, because it depends on what the “dust” is. It just led most reviewers to say “this can’t be healthy.”
This article claims that it just leads to lung irritation. But then again, I’m just paranoid; with asbestos we started having concerns way too late.
I suspect this comparison is off by several orders of magnitude, although I’m no stranger to the idea that people massively underrate the climate impact of jet travel. For example, a single round-trip cross-country flight emits more carbon than citizens of some poor countries do in an entire year, and in the same ballpark as carbon emissions saved by going vegetarian for a year. I was once on a Delta flight and their pre-flight branding video featured someone arrogantly declaring that we all ought to start caring about climate change, which was very darkly funny.
Many years ago I worked on a carbon calculator app for a while, and it was so depressing seeing how anything you did to reduce your footprint in terms of domestic energy usage choices, public transport etc, meant almost nothing if you also took a return international flight that year.
Yeah. For this reason we just stopped taking flights for vacations. My last flight was from NL to Ireland in 2019.
The cognitive dissonance of people is real. I know quite a lot of people who claim to be strongly in favor of cutting carbon emissions and yet fly to a far-away country for holidays every year. Just admit that you are not.
I had some hope that after COVID conferences would go online more. Lots of people were saying at the time “this works, it’s better for the environment, and it makes conferences more accessible to people from low-income countries”. But once COVID was over, people were traveling to far-away conferences again.
Online conferences don’t work for all people, though. I haven’t gone to an in-person one since, but I’ve also stopped doing online ones because I didn’t get anything from them. My benefit was talking to people in the hallway and at the social events, mostly the people who don’t post on social media 24/7 or aren’t active in the IRC channels and bug trackers (I mean, also some of them). So as to “why did people continue going in person”: I suppose for a good part of us it’s either that or no good conferences.
That’s mostly my experience, but I found a couple of events that used GatherTown pretty good. It’s an 8-bit Zelda-style world that your avatar walks around, with a few nice features. When you get close to a group, you start to hear them. When you get very close, you see their cameras as well. This makes it easy to walk around a “space” and join in conversations that seem interesting. Unlike at a physical conference, twenty people could stand on the same square and you couldn’t hear people five squares away at all, so the workable conversation size scaled up better.
I attended one event that used that platform, and it negated the usual accessibility advantages (for blind people) of online meetings, requiring me to have a virtual sighted guide. Let’s not bring the worst parts of the physical world into the online one.
Interesting, I’m surprised that they can’t make an accessible interface. It seems like it should be fairly easy.
Ah yeah, I attended one of those and it didn’t really work out at all. Hardly any discussions; everyone seemed to just take bathroom breaks between the talks (it was also a 1-day conf with no longer breaks).
I’m not saying it can’t work - I’ve just not seen it work - as a 25y IRC user, I’m certainly not shy to type.
Thanks for the counter-point. I think it’s a two-edged sword. I’ve heard from introverts that online conferences with chat made it easier for them to approach people than IRL conferences.
While I largely agree with you on the socializing/networking aspect, I also feel that ~two years was really too short a time to figure out how online conferences could be improved in this respect.
I can’t find it now, but I remember someone comparing the CO2 impact of travel to the NeurIPS machine learning conference with the cost of training one of the leading models, and the conference was a lot more expensive.
I think it’s unproductive to put the onus on individuals to reduce their carbon footprint. It’s a drop in a bucket, like you said a lot of hard work can be very easily negatively offset by “small” decisions and a very large majority of people don’t even care.
The key is legislation to nudge industry into reducing energy usage and switch to more sustainable forms of energy use. Flying electrically could be one of those, which would then hopefully make it less impactful to go on international flights. This has to be coordinated worldwide. It makes no sense for small (wealthy) countries to go on a radical energy shift while huge developing countries like e.g. China and India keep polluting by the truckload. Yes, this is unfair so there needs to be some form of compensation from wealthy to developing countries to help them with the energy transition.
A lot of these technological solutions are still so far off that they don’t help short- or mid-term. The harsh truth is that it’s simply not possible to maintain the current level of consumerism, yearly holiday/conference flights, etc. if we want to end up anywhere close to a target that is not going to have large consequences (famine, large numbers of refugees, etc.). Long-term we could maybe go back to the current consumption/travel levels, but it will take another 50 years to roll out enough solar, wind, and nuclear, and to have electric planes (if ever), boats, etc.
The problem is that no government is going to tell its citizens to stop flying every year or getting a new smartphone every year, let alone enforce it. Not in normal times, let alone in these times of political instability. The only way is changing the hearts and minds of people one by one, by setting an example and explaining. A lot of people do believe in climate change and believe something needs to be done; they just don’t want to make the sacrifices themselves (yet), and do mental gymnastics along the lines of “but if I don’t take that plane, it will still fly”.
It’s most likely too little, too late. We are not doing great; the only temporary reduction was during the initial phase of COVID, with the reduction in transport being one of the most important factors. But I at least want to go down fighting.
I share your concerns and pessimism. Unfortunately, abstinence-based campaigns tend to fail. Look at the tobacco industry; it took a full-frontal, multi-pronged assault to get people to stop smoking, and it’s still unclear whether it really helped, because now there’s a vaping problem that’s possibly worse than the tobacco smoking ever was. What it took there (off the top of my head):
Something similar needs to be done for CO2-heavy consumer goods and services.
Tobacco is a really good example to express the difficulty of the problem. We humans are good at dealing with immediate threats (which makes sense from an evolutionary perspective). The consequences of smoking tobacco are much more invisible, but still a lot of us have lost loved ones from cancer that is likely caused by tobacco. This has probably helped a lot as well.
Climate change is even more abstract as a threat, most people are not yet affected by it. Even though the numbers are already large, grave consequences currently only affect a fraction of a percentage of the people who cause most of the emissions.
We have done that for 10-20 years already at a very slow pace. Going faster will probably upset a lot of people if it’s not their own choice. And unfortunately, the political tides have changed, climate policy has seen a strong decrease in priority in the US and Europe recently. I am sure that the use of renewables will increase, it’s becoming increasingly economically attractive, but I’m afraid that strong improvements in climate regulations will only return when the population asks for them again.
I dare say that this is the problem with living ethically today. We’re surrounded by choices whose effects are felt at a distance, both because our contributions are individually small and because those affected are far from our view. And though I call them “choices,” to many they are assumed givens or necessities.
Take environmentalism as an example. Many of us grow up riding in cars everywhere, taking flights each year for vacations, and eating meat daily. Everyone around us does this too, which contributes to their normalization. As a result, to give up any of these things, much less all of them, sounds like madness.
Or what about buying chocolate? If some bars are produced using slave labor, we don’t actually see that labor. Yes, you could buy the 3x more expensive “ethical” chocolate, but if you don’t think about it, then you can’t be bothered by buying the cheaper one. And money is tight…
We learn to selectively ignore the problems with our accustomed comforts. And when pressed, we make up reasons to keep them. Taylor Swift should curb her jet emissions first. Public transit is too loud and slow for me. How can you be sure that my chocolate was made by slaves?
I think it’s such a tricky spot to be in because there are powerful forces at work to keep us complacent. And even in their absence, human nature compels us by way of peer pressure (e.g. everybody’s doing it) and fallacy (e.g. what impact do I have as an individual?). Increasingly I am becoming convinced that being more moral means – ultimately – questioning and possibly changing everything that we do. And honestly, that’s exhausting. It’s so much easier to put your head down, plug your ears, and let Greta Thunberg handle it. I’ll vote for a half-cent sales tax increase on plane tickets. I support moral causes in principle, just don’t make me be the one to put them into practice.
I can’t imagine how much time and care this must have taken to put together even the website alone. Major kudos to the author.
I hope they don’t mind me studying their site for inspiration (after I finish their post!).
What fortuitous timing. I was just working on switching my blog over to using Atkinson Hyperlegible for its body font. I’ll have to see how its mono font looks.
We’ll see if the blog restyle ever makes it past my combination of high standards and low CSS skill.
I’m not saying I’m prescient, but in The Before Times I did something similar with Mechanical Turk for SIGBOVIK 2020 (pg. 258).
I don’t think I would have guessed how soon this dumb idea would become practical. Nor that monetary cost would be used as a complexity measure.
This is really cool! Thanks for sharing. Some folks created a general programming framework using Mechanical Turk back in 2012. Both projects are basically LLMs before LLMs!
Well, at the risk of being totally wrong… in the next five years:
The AI bubble bursts. Maybe not as catastrophic a failure as the other bursts we’ve seen in decades past, since I think that the AI of today has some value, even if overhyped. But we don’t get AGI, nor do we get armies of Devins doing all software engineering. This has a few ramifications:
The VCs lose their gamble big time, and the only surviving big AI players are the ones who have a solid niche (I still haven’t seen any news about AI used in the adult industry, but surely it can be a big player there, even if just with chatbots) or those who pivot.
Software engineering as a whole takes a hit, as companies discover Devins will not, in fact, replace their juniors. Until that point, however, the market will continue to be tough, especially for new grads. When everyone finally realizes that they still need to be hiring and training up juniors, there’s going to be a shortage of senior talent. This will be compounded further by the average ability of junior engineers decreasing due to the combination of COVID schooling and ChatGPT.
AI tooling eventually gets folded into everything. It’s useful, but not revolutionary. The seniors who get good at using it, however, become even more valuable.
The (western) world takes a hit from the various crashes, possibly enough to start the recession that has supposedly been looming for the past 5+ years, although I think if that happens there would need to be other contributing factors than just AI flopping.
And because I can dream: refinement types reach the mainstream, like gradual types did with TypeScript (though possibly, and probably, in a less successful tool).
Haven’t read TFA yet, but wow, Slipshow is amazing! https://choum.net/panglesd/slides/campus_du_libre.html
Agreed, I skimmed the post and thought it was fine, but honestly I was more impressed with Slipshow. I think it deserves to be posted on its own.
LLMs are trained on existing code, written by humans. I have yet to see a good response to what’s going to happen when they start training on LLM-generated code and errors are compounded. Perhaps “programmers” in the future will have “fitness testing” as a necessary skill: seeing if the LLM-generated code actually does what it’s supposed to do, or the derived model actually models the use case, or whatever…
…but spicy autocomplete trained on existing corpora will never, from an information-theoretic perspective, contain more information than those initial corpora. To say that LLMs will do all the coding is to say there will never be truly new coding ever again.
(I suspect that there may be some sea change in the future where we join LLMs with modeling or something to produce formal specifications more quickly and comprehensively, and then pass those models to a (non-LLM) code generator, but it would still need a human to check that the derived specifications actually match the domain. Basically, short of true AGI, we’re not gonna be completely removing humans from coding for a while.)
I don’t understand the information-theoretic argument, seeing as you can come up with example dumb programs no one has written that LLMs will be able to produce (“Write me a program that prints Hello World!! to the console in blue if it is 11:11 PM or AM in UTC+0, otherwise in green”). I suspect what you’re saying is that transfer learning, or whatever tactics these models use to generalize their training data, has some limit, and that presently the ability of human developers is beyond it.
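For concreteness, that dumb program comes out to just a few lines; here’s a minimal Python sketch (the ANSI escape codes for “blue” and “green” are my own choice of mechanism):

from datetime import datetime, timezone

# ANSI escape codes for blue and green terminal text (one possible mechanism)
BLUE, GREEN, RESET = "\033[34m", "\033[32m", "\033[0m"

now = datetime.now(timezone.utc)
is_11_11 = now.minute == 11 and now.hour in (11, 23)  # 11:11 AM or PM in UTC+0
print(f"{BLUE if is_11_11 else GREEN}Hello World!!{RESET}")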
I think an interesting question is “how many of the programs we care about have been written [and are on the internet to train]?” I would guess that part of the flashiness of LLMs is that we tend to evaluate them on small programs/functions which may have already been written before. So the answer to that question would be “most of the simple programs have been written,” thus explaining the hype when you tell the LLM to write my sample dumb program and it succeeds.
If the hardest problems in programming that senior devs do, like architecting and maintaining large pieces of software, have as wide a range of options to solve as we think, then I would agree that the comparative dearth of training data could pose a challenge for LLMs. Especially if their reasoning is tantamount to selecting and modifying some existing piece of code that they have seen before.
I think I agree with the general sentiment of your parenthetical, though, which is that we’ll need something more than just LLMs before machines can hope to replace human developers.
I think you highly overestimate the amount of novel programming out there. We do not invent new sorting algorithms every day. Nor do we solve the travelling salesperson problem every week. Most of the work is doing the same CRUD e-commerce site over and over again. Sure, you might not have coded exactly the same button character-for-character twice, but you surely did a whole lot of buttons that do very similar things.
Another point is that code is not the only corpus that goes into the training data. Is it new code if it comes from a paper that describes an algorithm but doesn’t provide a reference implementation? Is it novel code if it recreates a system from the documentation/specification of a system whose code is not in the training corpus? Is it new code if it’s a translation from another language and there’s no implementation in the target language? Is it new code if it’s an OOP rewrite of an FP implementation?
There’s a lot that can be done with the non-code component of the training corpus that might produce useful code. After all, at one point all of science was pushed forward by polymaths drawing on connections between disciplines. It’s still a major source of inspiration for novel discoveries. Why can’t LLMs do the same, especially since they are, in essence, encoded patterns?
Maybe the author is a selfish asshole with no appreciation for others, but I value humanity over productivity. GenAI guys are hard for slop and consume for consumption’s sake, like a capitalist psychosis.
Yeah, fuck you, pal. I’ve paid for Free Software and I’ll happily pay for hand-crafted software.
That is unnecessarily harsh. FYI, the author has pretty much devoted his career to helping others learn ML and Python. (I am not the author.)
It was pretty harsh, I’ll admit that was an unfair thing to say. His post made me angry and I commented heedlessly. My point still stands - he doesn’t value creation, he values creations: the work of individuals is incidental and can be dismissed. I can’t elaborate just how vile a worldview this article presents to me.
OK, I’ll bite. I agree with what the author says about “artisanal software.” I don’t see what’s vile about viewing it as not inherently more valuable.
I think any hard work a human does is commendable. I respect people who memorize digits of pi or can solve a Rubik’s cube fast by hand. Even if machines can do both with more accuracy or speed, I don’t see why the human achievement shouldn’t be lauded.
But supposing I wanted several thousand digits of pi or my Rubik’s cube to be solved – and that a machine would do it cheaper – I wouldn’t hesitate to pick the machine. I don’t think it’s unreasonable for these people to then be out of a job if a machine obviates them.
BUT, this is in a hypothetical where I am asked to choose abstractly between machine-created and human-created software.
If in the present you asked me whether I would pay more for “handmade software” then I absolutely would because I would trust it far more.
If the reliability problem of AI-generated code is solved, then the question for me is an ethical one – assuming (not unreasonably) that the AI-generated software of the future continues to amass ethical concerns. In that case, I would argue that the author should not call it “artisanal” but rather “ethically-sourced”. And I should hope that we as consumers try to prefer the latter.
Agreed, but the vile-ness doesn’t make it less likely.
But encouraging more people to recognize it as vile might.
Good. You’re awesome. You’re also in a very small minority.
It’s a minority that only grows, join us if you haven’t yet.
I don’t see how an LLM is going to solve compositional tasks like this, because they aren’t word-probability tasks. But I could very easily imagine an LLM translating those “facts” into statements that an SMT solver could handle, and offloading the work to it. Why should we want an LLM to do it itself?
It seems to me that there is no reason to embrace a single architecture.
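To make that concrete, here’s a rough sketch of the hand-off using Z3’s Python bindings; the puzzle “facts” are invented for illustration, not taken from the article:

```python
# pip install z3-solver
from z3 import Bool, Solver, Implies, Not, sat

# Toy "facts" an LLM might extract from a puzzle statement:
#   1. If Alice is guilty, then Bob is guilty too.
#   2. Bob is not guilty.
alice = Bool("alice_guilty")
bob = Bool("bob_guilty")

solver = Solver()
solver.add(Implies(alice, bob))  # fact 1
solver.add(Not(bob))             # fact 2

if solver.check() == sat:
    print(solver.model())  # a consistent assignment: both come out False
```

The LLM’s job reduces to translation; the solver does the actual deduction.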
I assume by “real” they mean “formal”.
But they’re extremely good at producing a Python program that performs multiplication.
Anyway, my point is just this:
I guess it’s interesting to try to see if we can get formal logic to emerge from statistical logic but I suspect we’re all aligned on it being unlikely to pan out.
It seems like these limitations are addressed by combining statistical and formal models. The statistical model determines the inputs and configuration of the formal model and the formal model executes based on that. I think this has borne out extremely well in my experience. For example,
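take the familiar “how many r’s are in strawberry” question. The snippet below is a representative stand-in (the exact code isn’t preserved in the thread):

```python
# Counting letters is trivial for Python, but famously unreliable
# for an LLM reasoning directly over tokens.
word = "strawberry"
print(word.count("r"))  # 3
```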
ChatGPT will have no problem writing this code and executing it, and we completely avoid any tokenization issues or “it can’t technically count” issues. And I think that’s fine. It’s also most likely going to be radically more efficient to offload math to a math machine than to try to scale a language machine such that math is emergent.
The article discusses how to push LLMs further, which again I think is very interesting and worthwhile, but I am left wondering if it wouldn’t be far more effective to just get the LLMs to offload more and more work. Certainly the o1 approach of “okay so first im going to x, okay then y, and does that make sense? hmmm, okay yes, let me then do z” seems cool and maybe helps, but these limitations appear fundamental to modeling a formal domain using a statistical model.
(disclaimer: only came here to read the comments)
I really agree with this take and I’ve been surprised that I haven’t seen more LLMs calling out to existing programs as part of their execution (although I don’t have my finger on the LLM pulse). Imagine if when you got an LLM to write you statically typed code, it also ran the type checker against it and iterated until it was correct. I understand that there is some downstream research (of LLM research) trying to interface LLMs with these things, but I would be interested to know if there is anything preventing the next big models from having more “native” tool use.
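A minimal sketch of what that loop might look like, assuming a hypothetical `generate_code` function standing in for whatever LLM API is in play, with mypy as the checker:

```python
import subprocess
import tempfile

def generate_code(prompt: str) -> str:
    """Hypothetical stand-in for an LLM code-generation call."""
    raise NotImplementedError

def generate_until_typechecked(prompt: str, max_attempts: int = 5) -> str:
    """Ask the model for code, run mypy over it, and feed the errors
    back into the prompt until the checker passes (or we give up)."""
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(prompt + feedback)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(["mypy", path], capture_output=True, text=True)
        if result.returncode == 0:
            return code  # the checker is satisfied
        feedback = (
            "\n\nThe type checker reported:\n"
            + result.stdout
            + "\nPlease fix these errors."
        )
    raise RuntimeError("no type-correct code within max_attempts")
```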
At that point you’re literally executing untrusted code, so it seems like fixing the strawberry problem by having it write and execute the code above would be harder (and more expensive) to turn into a general-interest product.
They already do execute arbitrary code, though. They could also just run the code via Wasm, or use a language other than Python, if there were some big concern. But again, they already run arbitrary code: ChatGPT will generate and run the code I provided.
That’s not a problem at all. Your browser is executing untrusted code all the time. As long as we properly control and limit what this code can do, all is good.
Point taken, although I am still curious about the feasibility, not from an economic standpoint but from a research one.
You might enjoy reading up on LangChain CVEs, then: https://github.com/advisories?query=langchain+type%3Areviewed+ecosystem%3Apip
This pattern is increasingly common now.
ChatGPT has had the ability to solve problems by running Python code for over a year now.
Google Gemini has the same ability, though I think it’s harder to tell when it’s doing it.
Claude added the ability [to run JavaScript](https://simonwillison.net/2024/Oct/24/claude-analysis-tool/) in October.
It is assuredly a big deal. The acronym is RAG (Retrieval-Augmented Generation). I work for a database developer, and this is the legit part of the AI hype at work.
Someone on HN got o3-mini to solve the example puzzle in this article using Prolog.
I got the right answer directly out of both o3-mini-high and R1 without needing an extra tool.
Ah yeah, perfect example. IMO this is the path forward to emergent levels of reasoning.
This was amusing to read and feels almost like the premise for a novel. Country bans export of important resource. Opposition discovers a way to use a cheaper resource to reach parity.
It reminds me of the innovation coming out of Australia (rsync, squid, transparent proxying) because we were paying 19c/meg for international traffic.
And maybe (although probably not) the public-domain implementation of SMB, which I mention here as a good excuse to ask the following question:
Somewhere online, Andrew Tridgell tells the story of how Samba was inspired by Linus Torvalds being bitten by a penguin at Canberra Zoo, but I’ve lost my bookmark to it and can’t find it. Does anyone have a reference to it please?
My memory is that the penguin bite inspired the Linux logo. I thought tridge was already working on Samba at that point but I could be wrong.
Ah yes. Thank you!
Still looking for what tridge has written about it, in case any of youse know. I’m pretty sure he’s said something about how it (also) relates to Samba.
Rumor out there is that lots of Chinese companies have bought plenty of Nvidia hardware via Singapore. There are few users of Nvidia cards in that location, yet it accounts for 15% of Nvidia’s worldwide sales.
I am not saying those are DeepSeek’s. I actually believe the architecture and training scheme they use could be more efficient. Given how novel LLMs are, few things have been attempted at scale, so it’s logical there is some low-hanging fruit in terms of performance improvements.
Not saying it’s not China, but there are a lot of wholesalers in Singapore that distribute to APAC. As an Australian consumer it can often be cheapest to get grey market electronics from Singapore and/or HK - and local NVIDIA stock levels have been pretty tight for quite a while now.
Few users? Singapore has a huge tech footprint, with both data centres and fully staffed offices for every major player.
Meanwhile HK has confirmed turning a blind eye to sanctions. And they’re in the process of losing the last of their sweetheart trade benefits with the west as a result.
HK is part of China, for good or ill. Didn’t the sanctions apply there too?
No, because not everyone has got the message that One Country, Two Systems is dead.
I mean Louisiana uses the Napoleonic code rather than common law. Doesn’t mean there are border checks.
Louisiana isn’t Hong Kong and the US isn’t China, and both the domestic and international frameworks that create Louisiana and Hong Kong are barely comparable. Case in point: Louisiana hasn’t received special carveouts in both international treaties and bilateral trade policies.
Look, I’m happy to talk about this in excruciating detail, but I can’t tell what background, if any, you have in this topic.
I’m not sure this level of snark is appropriate given that you’re flatly wrong: the A100/H100/A800/H800 export controls apply to China and Hong Kong. While Chinese companies continue to be able to acquire them through various means (and they’re not illegal in China, just harder to get due to the American export controls), there is no difference in American export controls on GPUs between China and Hong Kong.
HK has been treated as a part of China since the 2020 EO, an EO that was renewed every year by Biden too.
However, the export controls themselves are targeted mostly at China-based entities. (Here’s the Federal Register link.) Moreover, HK-based entities still have a heap of exemptions and licenses.
But, back to the original point: as you said, the controls have only raised the price and the difficulty. And no one just straight-up buys a bunch of a controlled item and reports it as a direct shipment. Look at how HK copped another round of scrutiny when large-scale transshipment to Russia for use against Ukraine was revealed.
Basically, export controls are not a simple binary. I say this having inadvertently run into them a few times.
This jumped out at me as well. Just last week I read DHH, in Founders at Work, describing how constraints ended up contributing to success, so the concept was on my mind.
Some of my newish terminal habits:
- I use a Brewfile to keep track of, and clean up, things I have installed previously. Try `brew bundle --help`. Also, this alias: `alias brewall='brew leaves | xargs brew desc --eval-all'` will show everything you have installed with brew, along with a description of each package.
- I use direnv to automatically trigger shell commands inside specific directories. I use it with bash, but it supports a bunch of different shells.
- I manage my terminal history with Atuin, which is nice because it keeps track of history per-directory (but lets you quickly switch to other modes too, like “global”).
- Starship is a pretty prompt that will show you a lot of useful information, for instance when entering a project repository, without requiring a lot of configuration.
Even though these aren’t daily ablutions, I learned about Starship today. Thanks!
Anyone here switched to it from powerlevel10k? It’s one of those things I’ll keep in the back of my mind although I could be convinced to bring it forward if there’s good reason to.
I did! The main reason I moved is that I switched to fish, and Starship seemed like the best p10k alternative in the fish world. But I wouldn’t say there’s a very strong reason to switch to Starship unless you want to go ham configuring your prompt; Starship is just way more configurable than p10k.
I’ve rewritten it a bit. This is about the things you do daily to clean/maintain your computer. Most of them are probably not even necessary to do but you do them anyway.
Every few days it seems like I see a new post which basically goes: ‘I did this thing with an LLM, I used to think LLMs were bad now I think they are good but not so good that I will lose my job’. I don’t disagree with the sentiment but it has been said already imo.
I agree with the sentiment, and about the sheer quantity of posts on this.
It matches my experience too (I’ve been using Cursor/Windsurf to develop apps).
I still welcome such posts and read them, because it is “comforting” to know that people have approached these LLMs from different angles and all reach the same conclusion.
It also gives me a buffer to take a step back, stop being filled with FOMO, read others’ experiments, and decide to get in only if I see 10x results different from mine.
I remain skeptical of the “no threat to my job” point, despite hoping for it to be true. I think too many of the people who say this sort of thing are in a position where it would be Very Bad for their job to become obsolete. Which means that they evaluate these tools looking for a reason why it cannot replace them.
I’m in a position of hiring software developers on a constrained budget, so it would be Very Good for my job if I could hire fewer people to achieve the same things (or, ideally, more).
Everything I’ve seen indicates that, except in situations where the developer is coming to a completely new environment (e.g. a systems programmer writing some in-browser JavaScript for the first time, or doing some numerical analysis in Python), they are a net productivity drain. They leave experienced developers accomplishing less in the same time, because those developers spend more time fixing bugs that they would never have introduced themselves. The code that these things generate is the worst kind of buggy code: code that looks correct (complete with misleading comments, in some cases).
I would love to hear your thoughts on this blog post in that case, since its author espouses Claude’s productivity benefits to them.
Three things I’d note from the post:
I agree with the premise of the article: for low stakes development (no one cares if it’s wrong, nearly right code is better than no code, which covers a lot of places where currently there is no code being written), LLMs are probably a win. I’d still be concerned about the studies that link LLM use to a reduction in critical thinking ability and to reduced domain-specific learning there, because I suspect they will widen, rather than narrow, the gap between people who can and can’t program.
Thanks for sharing, I enjoyed reading what you had to say.
To go meta: the prevailing sentiment on lobsters is so negative on LLMs that I think we need more people with credibility (like Nelhage or Simon Willison) to post their experiences.
Everyone should make up their own mind how useful LLMs are, but we need to break the meme that the only people interested in them are wild-eyed futurists or management types scheming to deskill programmers (that argument is a sort of reverse argument from authority).
I’d like to object to your characterization of the Lobsters “prevailing sentiment” opposition to genAI for programming or in general. Off the top of my head, here are a few reasons for opposition that you didn’t mention:
I welcome the debate, and I do think that there are useful perspectives to be heard from the pro-genAI camp. But dismissive strawmanning basically never furthers that end. Lobsters is a rare oasis that maintains a culture of encouraging quality discourse. More pro-genAI experience reports from more credible sources will continue to experience pushback here, for the reasons above and probably some others I missed. Join the debate, by all means! But don’t just try to drown out arguments against your favored position. We can go anywhere else on the Internet for that kind of shouting-past style.
Simon Willison is a member and his posts have been submitted multiple times: https://lobste.rs/domains/simonwillison.net
My take from reading them is that LLMs can be a good rubber duck, but that in that case they are the world’s most expensive rubber duck.
To go meta on your meta: why do we need “fair and balanced” views on LLMs here on lobste.rs? Those members of the community who find them useful and productive can just… use them, and refrain from posting if they’re getting flamed for doing so. I’m sure there are plenty of members of this community who do good productive work in “unpopular” programming languages who don’t feel the need to broadcast that.
What makes LLM use special?
Fundamentally, we need accuracy. And the memes that I’m describing are inaccurate.
I’m struggling to even understand your perspective. You seem to be saying we should just live with flaming users of unpopular languages. I’d normally consider that a reductio, and I would’ve considered saying “the current LLM reaction is as if anytime someone posted an article about a C/C++ tool, most comments were to say it’s dumb because no one should write in C.”
While I’m happy to say that certain languages are badly designed, and you have to live with the occasional “it would be better to rewrite it in Rust” or “we should avoid just rewriting software in Rust” comment, I do not think we should be flaming users of unpopular languages, or users of LLMs, or people who don’t use LLMs. A certain degree of criticism is fine. A knee-jerk echo chamber is bad.
Note that despite Simon being a very productive member of the community, his last post got flagged as spam for no good reason, and many of his other LLM posts have several spam votes. https://lobste.rs/s/oclya6/building_python_tools_with_one_shot.
My point is that a productive user of PHP, say, might find that Lobsters isn’t the best venue to discuss PHP, because there will probably be a vocal minority of hecklers dumping on their language choice. But that’s OK, because there are other venues which are more welcoming.
It’s the same with GenAI. There’s a section of the userbase that doesn’t like the technology, and who are prepared to let others know they don’t like it. Either tune them out, or discuss GenAI somewhere else, or just use it in your daily life and be happy and productive.
Noscript. No text. No luck. Lifting the restriction on notion.so does nothing; I’m still redirected here, with no chance to unblock whatever needs unblocking, because at this point I no longer see what I need to unblock. This is the first time I’ve seen a website redirect me to a different URL just to tell me I need to turn on JavaScript.
Sorry I can’t comment on the actual content. I… kinda didn’t get a chance to read it.
I created a gist containing the content (hopefully I didn’t break any licenses) here. Feel free to tell me if I’m wrong, and I will delete it. Hopefully GitHub works better (though it might still be bad).
Fortunately, you can get GitHub to deliver you the raw binary directly.
Hopefully, archive.is works better
archive.is is worse because it sometimes wants a CAPTCHA, and the raw-data GitHub URL is fine for a wide range of ways to fetch it.
You need to lift it on notion.site I think, although I gave up after I couldn’t figure it out and kept getting redirected.
Web developers, please heed my plea: don’t redirect noscripting users!! Especially not to another domain.
If I’m interested enough, I’ll enable JS for your site. I’m used to doing this, despite the fact that I primarily read text and submit basic forms on the internet. I’m even patient enough to allowlist your myriad 3rd party scripts.
I have JS on by default, but I got tripped up by that too.
I noticed that the page was a bit slow and, more importantly, that it hijacks my arrow keys, which I use to fine-tune my scroll position. Since that is often fixed by disabling JS without any ill effects, I flipped the scripts temporarily off in uBlock for the domain and reloaded. Because the page would now immediately take me to another domain, the easiest way to turn the scripts back on was to just restart Firefox.
I think there are two schools of thought here, each with its trade-offs, and in my opinion neither is necessarily better. One school holds that a configuration file exists for configuring things; the other, that a configuration should be programmable.
VSCode, Zed, Helix, etc. use configuration; Neovim, Vim, and Emacs use programming.
This divide also exists in places like Linux window managers.
I’ve used five of the six editors I listed (all but Zed) extensively, and I understand why the configuration camp is more popular. I think it lends itself to easier-to-use, easier-to-maintain tools that work great out of the box. The problem Neovim and Emacs fans miss is that most people, including myself at times, want an editor that works well without the tinkering and fragility inherent in the programming model.
Helix is implementing a Scheme-based plugin system to allow extending the program without bloating it. I think all editors want to be extensible eventually.
Plugins/extensions are different, though. VSCode and Zed both have extensions you can write with a programming language; the difference is that in Emacs and Neovim the extensions take the same form as the configuration, while in VSCode and Zed (and Helix, soon) extensions are given special powers you can’t have with configuration.
I’d go further and say that most developers are better off with configuration instead of programming. With configuration you get autocomplete, linting, extensions are less likely to break each other, etc. I say this as a person with a thousand lines of handwritten neovim programming.
This is why I like community-created configs like Doom Emacs. I get all of the cool features with much less of the config. Although I concede that when things break or I want a tweak it is a bit of a struggle since I deliberately avoided learning how to configure my editor. The communities have been pretty helpful in those circumstances.
But I do worry whether newfangled plugins will become VSCode-exclusive. So far I haven’t found anything that doesn’t have a “good enough” Emacs equivalent. But I expect there may come a day when I have to change for a killer feature (much like how in the past people switched to Emacs for Magit).
Exactly. That’s basically what I was going for (the configuration/programming distinction) but you expressed it better than how I originally wrote it in the post.