It’s stagnating, and salaries are very low compared to the cost of living. Taxes are rising, including the sales tax (VAT), which was already one of the highest in Europe (24%) and is now 25.5%. Salaries in software engineering are lower than in Western Europe, but the biggest problem IMO is that there aren’t enough good jobs. Most job openings are at endless consultancy firms. There has been very little product development, startup activity, or innovation in the past decade, and I can’t see any chance of the situation improving in the near future.
Many professionals I know have left Finland or are seriously considering moving away.
The YouTube channel here seems to be run by someone who needs to be dramatic for views. I think the actual content, and the position of the Ghostty author on this topic, is pretty mild.
An actual bit from the video:
Guest: “…I don’t know, I’m questioning everything about Go’s place in the stack because […reasonable remarks about design tradeoffs…]”
Host: “I love that you not only did you just wreck Go […]”
Aside… In the new year I’ve started reflexively marking videos from channels I follow as “not interested” when the title is clickbait, versus a succinct synopsis of what the video is about. I feel like clickbait and sensationalism on YouTube is out of control, even among my somewhat curated list of subscribed channels.
This is why I can’t stand almost any developer content on YouTube and similar platforms. It’s way too surface-level, weirdly obsessed with the inane horse race of finding the “best” developer tooling, and clickbait-y to a laughable degree. I have >20 years of experience; I’m not interested in watching someone blather on about why Go sucks when that time could be spent talking about the actual craft of building things.
But, no, instead we get an avalanche of beginner-level content that lacks any sort of seriousness.
This is why I really like the “Developer Voices” channel. Great host, calm and knowledgeable. Interesting guests and topics. Check it out if you don’t know it yet.
I’m in a similar boat. Have you found any decent channels that aren’t noob splooge? Sometimes I’ll watch Asahi Lina, but I haven’t found anything else that’s about getting stuff done. Also, non-OS topics would be nice additions as well.
Seven (7!) years ago, LTT made a video about why their thumbnails are so… off-putting, and it essentially boiled down to “don’t hate the player; hate the game”. YouTube rewards that kind of content. There’s a reason why nearly every popular video these days is some variant of “I spent 50 HOURS writing C++” with a thumbnail of a guy throwing up. If your livelihood depends on YouTube, you’re leaving money on the table by not doing that.
It’s not just that YouTube rewards it; it’s that viewers support it. It’s a tiny, vocal minority of people who reject those thumbnails. The vaaaaast majority of viewers see them and click.
I don’t think you can make a definitive statement either way, because YouTube has its thumb on the scales. Their algorithm boosts videos based on factors other than viewer click-through or retention rates (which has also been a source of many YouTuber superstitions over the years), and the way the thumbnail, title, and content metas have evolved makes me skeptical that viewers as a whole support it.
What is the alternative? That they look at the image and go “does this person make a dumb face” ? Or like “there’s lots of colors” ? I think the simplest explanation is that people click on the videos a lot.
…or it’s just that both negative and positive are tiny slices compared to neutrals but the negative is slightly smaller than the positive.
(I use thumbnails and titles to evaluate whether to block a channel for being too clickbait-y or I’d use DeArrow to get rid of the annoyance on the “necessary evil”-level ones.)
I am quite happy to differ in opinion to someone who says ‘great content’ unironically. Anyway your response is obviously a straw man, I’m not telling Chopin to stop composing for a living.
Your personal distaste for modern culture does not make it any higher or lower than Chopin, nor does it invalidate the fact that the people who make it have every right to make a living off of it.
They literally don’t have a right to make a living from YouTube; this is exactly the problem. YouTube can pull the plug and demonetise them at any second, on the slightest whim, and they have absolutely no recourse. This is why relying on it to make a living is a poor choice. You couldn’t be more diametrically wrong if you tried. You have also once again made a straw man with the nonsense you invented about what I think about modern culture.
How’s that any different from the state of the media industry at any point in history? People have lost their careers for any reason in the past. Even if you consider tech or any other field, you’re always building a career on top of something else. YouTube has done more to let anyone make a living off content than any other stage in history, saying you’re choosing poorly to make videos for YouTube is stupid.
You have also once again made a straw man with the nonsense you invented about what I think about modern culture
You’re the one who brought it up:
I am quite happy to differ in opinion to someone who says ‘great content’ unironically
Isn’t this kind of a rigid take? Why is depending on YouTube a poor choice? For a lot of people, I would assume it’s that or working at a fast-food restaurant.
Whether that’s a good long-term strategy, or a benefit to humanity is a different discussion, but it doesn’t have to necessarily be a poor choice.
Not really?
I mean sure if you’ve got like 1000 views a video then maybe your livelihood depending on YouTube is a poor choice.
There are other factors, but if you’ve got millions of views, sponsors you do ad-reads for, and affiliate links, then you may well be making enough to actually “choose” YouTube as your main source of income without it being a poor choice (and it takes a lot of effort to reach that point in the first place).
We’ve been seeing this more and more. You can, and people definitely do, make careers out of YouTube and “playing the game” is essential to that.
Heh - I had guessed who the host would be based on your comment before I even checked. He’s very much a Content Creator (with all the pandering and engagement-hacking that implies). Best avoided.
Your “ghostty author” literally built a multibillion-dollar company writing Go for over a decade, so I’m pretty sure his opinion is not a random internet hot take.
Yup. He was generally complimentary of Go in the interview. He just doesn’t want to use it or look at it at this point in his life. Since the Lobsters community has quite an anomalous Go skew, I’m not surprised that this lack of positivity about Go would be automatically unpopular here.
And of course the title was click-baity – but what else can we expect from an ad-revenue-driven talk show?
I was able to get incremental rebuilds down to 3-5 seconds on a 20 kloc project with a fat stack of dependencies, which has been good enough given that most of that is link time for a native binary and a wasm payload. cargo check via rust-analyzer in my editor is faster and does enough for my interactive workflow most of the time.
Don’t be a drama queen ;-) You can, all you want. That’s what most people do.
The host actually really likes Go, and so does the guest. He built an entire company where Go was the primary (only?) language used. It is only natural to ask him why he picked Zig over Go for creating Ghostty, and it is only natural that the answer will contrast the two.
I don’t believe the “a11y” tag is appropriate here. An example of what would have made it appropriate is if he had talked about how it’s difficult to implement accessibility in cross-platform UI toolkits.
Yes, and I don’t remember putting it in, and the moderation log does not seem to contain anyone adding it afterwards. Perhaps I made a mistake and didn’t notice it. Removed the tag now.
I guess we’re gonna see a new “Lobsters Battlestations and Screenshots” thread soon! (2024 edition)
I was curious (again) to see 2016 missing from the battlestations list, although it seems we just didn’t ask that year. Closest I can find in search for tag:ask clicking through to 2016 stories (roughly pages 88-97) is either What desktop environment and window manager do you use? or What does your development environment look like? How might it change?, although only the first story has screenshots linked in the comments and not very many. Not sure either fit the criteria so I guess we have a hole in the streak.
I plan to get it up on Monday. I try to get the battlestations thread going on Mondays so people can add their workstations throughout the work-week :D
I’ve seen photos of it, but never bothered to learn more. Thanks for mentioning it, it looks pretty cool! Surprised to see it was made by Panic, who are (at least to me) known for macOS&iOS dev-specific apps and utilities.
UPD: Oh, apparently Panic is also a video game publisher! One of the games I really enjoyed was Firewatch, which TIL was published by them. Untitled Goose Game is also fun.
LÖVE expects these files to exist and panics (calls error) if they don’t.
This is fine because, in a game, you should be able to expect all the required files to exist. If you still want to handle the error, you can use pcall as shown in the blog post.
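For reference, pcall turns a raised error into a boolean-plus-message return value, so the caller decides what to do instead of the program aborting. A minimal plain-Lua sketch (loadAsset is a hypothetical stand-in for a failing LÖVE loader such as love.graphics.newImage):

```lua
-- pcall demo: error() inside the called function becomes (false, message)
-- instead of crashing the program.
local function loadAsset(path)
  error("no such file: " .. path)  -- stand-in for a failing LÖVE loader
end

local ok, err = pcall(loadAsset, "sprites/player.png")
if not ok then
  print("falling back, because: " .. err)
end
```

If ok is true, the second return value is whatever loadAsset returned; if false, it is the error message.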
I’ve noticed that often a link will be on HN and then appear on lobste.rs a few days later, so it would be interesting to widen the time window rather than just looking at the front page. (Obviously that would be a lot more work, though!)
I agree. I see an article on the orange site and after a few comments things kind of turn sour for me and I head to lobsters to see the comments. Usually lobsters does not disappoint…
Without relying on server-side storage (which I’d like to avoid), it’d be great to be able to fetch some historical data. I fetch the Lobste.rs feed from https://lobste.rs/hottest.json, which I found in some random comment here; I am not aware of other endpoints. For HN, I’m using the Algolia API (https://hn.algolia.com/api/v1/search?tags=front_page), which gives a bit more flexibility.
I’m going to say Github here, but I’m sure alternatives work too. All you need is static hosting and an ability to run a cron that can commit to itself.
It would look something like this:
Every hour (or your interval of choice), run a Github Action that fetches the entries on the current homepage. Save them in a file in your git repo.
Then to render your site, check back through the git history for whatever overlap window (eg, 5 days might work well for HN/Lobsters). Commit the html file, and host it on Github Pages.
For something as small as “the links on Lobsters every hour”, you can easily keep decades of history in a git repo.
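The overlap check over two stored snapshots can be a pure function, which keeps the git-history part trivial. A rough sketch — the "url" field name matches what both lobste.rs/hottest.json and the Algolia HN API return, but treat the exact shapes as assumptions to verify:

```python
# Sketch: given two feed snapshots (lists of story dicts), find the stories
# that appear in both, matching by normalized URL.

def overlapping_urls(feed_a, feed_b):
    """URLs present in both snapshots, ignoring trailing slashes and case."""
    norm = lambda u: u.rstrip("/").lower()
    a = {norm(s["url"]) for s in feed_a if s.get("url")}
    b = {norm(s["url"]) for s in feed_b if s.get("url")}
    return a & b

lob = [{"url": "https://example.com/post/"}]
hn  = [{"url": "https://Example.com/post"}]
print(overlapping_urls(lob, hn))  # {'https://example.com/post'}
```

To widen the time window, union several hourly snapshots from one site before intersecting with the other.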
Container images were meant (among other goals) to provide reproducible pieces of software, and I’ve been wondering if it makes sense to go further and provide whole machines (virtual, or even physical) with a static, immutable, pre-defined setup for the sake of a single executable binary. Sans OS updates, it’s a security liability, of course, but, just like with containers, the static machine would not be meant to be exposed to the outside world directly; instead, the admin would have to set up the “front end” for it (e.g. some proxy if we’re talking about web services). The contract here is that the maintainer of the software takes care of its environment and provides a thin interface in the form of HTTP or some other simple-enough protocol.
Like, imagine buying a personal accounting app in the form of a Raspberry Pi. (“Bring your own storage” and “bring your own security”.)
I think that’s suggested not for security but to make it so the backend can last for longer before the next time that changing requirements force it to change.
The point of the reverse proxy is TLS termination. Every 5 to 20 years the internet at large starts to demand new ciphers so you have to upgrade your TLS libraries even if they don’t have any security vulnerabilities per se.
So having a (stateless, upgradeable) reverse proxy in front that speaks HTTPS to the internet and HTTP to the backend lengthens the time for which the backend can avoid needing to change.
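A minimal front-end sketch along these lines, assuming nginx (hypothetical hostnames, paths, and ports; any TLS-terminating proxy would do):

```nginx
# TLS terminates at the proxy; the long-lived backend only ever
# speaks plain HTTP on localhost, so cipher churn never touches it.
server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/ssl/app.pem;
    ssl_certificate_key /etc/ssl/app.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the static, immutable backend
    }
}
```

When the internet demands new ciphers, you upgrade the proxy box and leave the appliance alone.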
A proxy can certainly protect somewhat naive HTTP implementations against protocol-level attacks (so completely malformed requests are dropped) and certain types of resource exhaustion attacks by limiting the number of in-flight connections.
It’s interesting that the comments here are solidifying this bit:
group #2 bugs me though, because they should be on my side, or at least nod and go “oh yeah, i don’t have time to make that for you, but i get why you’d want it. good luck man.” instead they act like i shit in their soup when i don’t want to use fucking Jekyll, and idk why. i think i’m being pretty reasonable here, wanting a thing i could do when i was 13, which was 50 times faster than any of this hacking-the-mainframe horseshit, to simply continue being possible. but it isn’t, and i’ve given up, so it’s time to compromise.
I personally strongly agree that there’s a big hole in the website creation space left behind by Frontpage and Dreamweaver. There totally should be a desktop GUI WYSIWYG “type up some stuff and export some HTML” website creator program, but I’ve searched and can’t really find anything. I’m firmly in the “oh yeah, i don’t have time to make that for you, but i get why you’d want it. good luck man.” camp, and if I end up learning enough about desktop software development at some point I might just make the time to work on it.
This is probably not something you’re after, but just in case: I’m also building a simple blogging platform (https://exotext.com, example blog) and am seeking any sort of feedback in exchange for a free-of-charge service until some future public release, and a perpetual discount after that. Let me know if you (or anyone else reading this) want to try it.
I really enjoyed making Queueing, it was fun making different visuals interact with each other (the graphs toward the end) and the whole “tasks” system is something I’d love to use again.
A few months ago we were re-implementing a queueing solution at work, and your visualization helped some people (especially the less technical ones) understand the domain better.
I considered talking about Babbage and Lovelace but ultimately decided it was out of scope. Such a tragic story, though. “The Innovators” by Walter Isaacson was a great read.
What is this thing, computer science? Who are these people, programmers? What are these terms, anyway? Flawed and ever-changing, my friends, just like the rest of the universe.
You have insulted me with the word “friends”, because its meaning had changed, just like the rest of the universe :)
Joking, of course, but I hope you get my point.
Your closing paragraph is what inspired my feelings of camaraderie! I may not agree with or relate to everything you’ve written, but I appreciate your honesty about why you feel the way you do and the nuance of your conclusion.
I’m watching the game show pilot now. One thing I appreciated is that one of the questions in the web standards category mentions the impact that a particular CSS property has on screen readers. This is progress.
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way; consider a non-technical older person trying to navigate the little pop-out hamburger menus and so forth, or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
That’s not completely true; one beautiful thing about JSX is that any JSX HTML node is a value, so you can use all the language at your disposal to create that HTML. Most backend server frameworks use templating instead. For most cases the two are equivalent, but sometimes being able to put HTML values in lists and dictionaries, with the full power of the language, does come in handy.
Well, that’s exactly what your OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by react or even by frontend libraries.
In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Most backend server frameworks use templating instead.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread “many” as “most”, as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate all our Ruby app to Scala so we can get ScalaTags. JSX was the first values-based HTML builder to get mainstream exposure; you and the sibling comment cite Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX on both the front end and the back end, which makes it useful for generating HTML. Your post, the sibling, and the OP just sound slightly butthurt at JavaScript for some reason, and it’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I just find that to be arguing in bad faith, not trying to bring anything constructive to the conversation, same as the rest of the thread.
Let me go to my team and let them know that we should migrate all our Ruby app to Scala so we can get ScalaTags.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use it in yours.
Anyway, it sounds like we are in agreement that this would be better than just adopting JavaScript just because it is one of the few non-niche languages which happens to have such a language-oriented support for tags-as-objects like JSX.
I found that, after all, I prefer to write what is going to end up as HTML in something that looks as much like HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like the context switching it forces on my brain. HTML templates with checks as strong as possible are my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There’s advantages and disadvantages when it comes to emulating the syntax of a target language in the host language. I also find JSX not too bad - however, one has to learn it first which definitely is a lot of overhead, we just tend to forget that once we have learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early aughts).
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
Every mainstream language, as well as many niche ones, has libraries that build HTML as pure values in the language itself, allowing the full power of the language to be used: defining functions, using control-flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
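As a concrete illustration of the tags-as-values idea, here is a hand-rolled sketch (not any particular library; the el helper is hypothetical, in the spirit of Scalatags or Markaby):

```python
# HTML elements as plain values: built with ordinary functions and
# comprehensions instead of a separate template language.

def el(tag, *children, **attrs):
    """Render a tag; a trailing underscore lets you write class_ for class."""
    attr_str = "".join(f' {k.rstrip("_")}="{v}"' for k, v in attrs.items())
    return f"<{tag}{attr_str}>{''.join(children)}</{tag}>"

names = ["Ada", "Grace"]
html = el("ul", *[el("li", n) for n in names], class_="people")
print(html)  # <ul class="people"><li>Ada</li><li>Grace</li></ul>
```

Because each node is just a string-producing value, you can stash nodes in lists and dicts, map over data, and factor out components as plain functions.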
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
(I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(so personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of the word “render” is to extract, convert, deliver, submit, etc. So this use is perfectly inline with the definition and with centuries of usage irl so i can’t complain too much really.)
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type .astro that renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that astro is a symptom of the exact same problem you are illustrating from that quote.
That framework is going to die the same way that Next.JS will: death by a thousand features.
huh couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is now the same as any other framework. The baggage I’m referring to would be the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and of course require plenty of other file types to do what .astro does (.rb, .py, .yml, etc.). The State of JS survey seems to indicate that people are sharing my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”).
I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
Well that’s one way to look at it. Another is that popular developer culture doesn’t need to be unified. There can be one segment for whom “computer science” means “what does this javascript code snippet do”; I am not a linguistic prescriptivist so this does not bother me. It doesn’t put us in danger of, like, forgetting the theory of formal languages. It also isn’t really a new sensibility; this comic has been floating around for at least 15 years now. People tend to project a hierarchy onto this where the people who care about things like formal languages and algorithms are more hardcore than the javascript bootcamp world. But this is a strange preoccupation we would be better to leave behind. If you want to get in touch with “hardcore” popular developer culture you can participate in Advent of Code! It’s a very vibrant part of the community! Or go to one of the innumerable tech conferences on very niche topics that occur daily throughout the developed world.
I wouldn’t say I’m a prescriptivist either, but the purpose of language is to communicate, and it seems very odd to me that there are people for whom “computer science” is semantically equivalent to “javascript trivia”. I think that seemingly inexplicable mismatch in meaning is what the article is remarking upon; not being judgemental about Javascript qua Javascript, but noting that there’s such a gulf in ontologies among people in the same field.
I have heard this phrase many times when defending a prescriptivist perspective and it is never actually about the words confusing people, it’s about upholding the venerability of some term they’ve attached to themselves.
Ehhhh strong disagree on that one! There’s a difference between “that’s not a word” and “that word choice is likely to cause more confusion than clarity”. The former is a judgement, the latter is an observation.
I have not attached the term “computer science” to myself; I truly think it’s just confusing to use that phrase to refer to the minutiae of a particular programming language. I’m not saying that’s “wrong”, nor that javascript in particular is bad or anything like that, just that it is very contrary to my expectations, so saying “computer science” in that manner communicates something very different to me than whoever uses it that way would intend.
I would say that it’s wrong and bad to impugn my motives and claim that I actually mean the exact opposite of what I said though!
I’m confused because when someone says something is regarding “computer science” I expect to see things related to the branch of mathematics of that name. In the linked article, they mention seeing the term used in a game show: If I was told I was going to be on a game show and the category was “computer science” I would prepare by studying the aforementioned field and if I then was presented with language-specific trivia instead, I would be a bit miffed.
What proportion of an undergraduate computer science degree would you say is dedicated to the branch of mathematics and what proportion is dedicated to various language trivia?
When I was in school, the computer science courses I took were pure math, but I can’t speak to what other programs at other schools at other times are like. I don’t really understand why you’re so vexed here; I’m not trying to “project a hierarchy” or say that one thing is better than another (I taught at a javascript bootcamp!), merely that I think the point the article is making is that it feels strange to discover one’s peers use familiar words to mean very different things. I’m just trying to explain my perspective; you’re the one who is repeatedly questioning my honesty.
90% mathematics, from my experience doing such a degree in Swansea in the early 2000s and teaching at Cambridge much later. Programming is part of the course, but as an applied tool for teaching the concepts. You will almost never see an exam question where some language syntax thing is the difference between a right or wrong answer and never something where an API is the focus of a question.
Things like computer graphics required programming, because it would make no sense to explore those algorithms without an implementation that ends up with pixels on a screen (though doing some of the antialiased line-drawing things on graph paper was informative).
Dijkstra’s comment was that computer science is as much about computers as astronomy is about telescopes. You can’t do much astronomy without telescopes, and you need to understand the properties of telescopes to do astronomy well, but they’re just tools. Programming is the same for computer science. Not being able to program will harm your ability to study the subject, even at the theoretical extremes (you can prove properties of much more interesting systems if you can program in Coq, HOL4, TLA+, or whatever than if you use a pen and paper).
Even software engineering is mostly about things that are not language trivia and that’s a far more applied subject.
When I got my undergraduate degree in 1987 it was literally a B.S. Applied Mathematics (Computer Science), and when we learned language trivia it was because the course was about language trivia (Comparative Programming Languages, which was a really fun one involving very different languages like CLU, Prolog, and SNOBOL).
If kids nowadays are learning only language trivia to get a CS degree, somebody is calling it by the wrong name.
“I truly think it’s just confusing to use [the phrase “computer science” to] refer to minutiae of a particular programming language” -> “How are you confused?”
This is a bit underhanded. It’s confusing when someone says “computer science”, and then later it turns out they mean “Array.map returns an array and Array.foreach doesn’t” and not what the term has been used to refer to for a long time: algorithms, data structures, information theory, computation theory, PLT, etc. I am absolutely a linguistic descriptivist, but to claim that using a term in a way that doesn’t match the established usage is not cause for confusion is not actually a descriptivist take. It’d become less confusing once that becomes the established usage, but until we’re there, it’s still going to fail to communicate well and cause — say — confusion.
Funny you should mention Dijkstra. Like Naur, he had a hard time getting on board with the term, “computer science.” Dijkstra favored the subtly but importantly different “computing science.” Naur preferred “dataology.”
In some countries the field is called informatics; I wonder why that didn’t happen in English-speaking ones (or if there’s a difference I’m not aware of).
That term is used in various places with different meanings, including in the US, where it’s a term for “information science”, i.e. the study of things like taxonomy and information archiving. In other words, it’s already used for several purposes, so if you want to use a term for clarity you need to look elsewhere.
Informatics, just like datalogy, seemingly focuses on data (information). Of course, processing data is at the core of computing, but, IMO, this misses the point. The point is the computing process, not what the process is applied to. Computing science is great, but cybernetics is cool, and also wider: it encompasses complex systems, feedback, and interactions.
Taking Naur’s side for a moment, though, the really science-y parts of the computer science curriculum (e.g. time complexity, category theory, graph theory) are basically just mathematics applied to data/information, so perhaps “datalogy” and “informatics” aren’t that far off the mark. Everything else that’s challenging about a CS undergrad curriculum (e.g. memory management, language design, parallel computation, state management) is just advanced programming that has accumulated conventions that have hardened over the years and are often confused with theory.
Interesting because I feel like cybernetics is both a subset of computing science (namely one concerned with systems that respond and adapt to their environment) and trans-disciplinary enough to evade this categorization.
I feel like we do? It becomes especially apparent whenever one is where cars are but is not in a car themselves (pedestrian, cyclist, motorcyclist) — there is a strong sense of “drivers” and “everyone else”.
I think I would characterize that among drivers as a lack of awareness of non-drivers, at least in the USA. My point is that we don’t expect a sense of shared tribal affiliation between drivers like the author seems to expect between programmers.
It’s a continuum. JS syntax is somewhat CS (I don’t know, 0.2?), but likely it’s more “engineering” (0.9?). Saying that JS syntax is more engineering than CS is NOT negative towards JS. Neither engineering nor CS is superior or inferior to the other!
On the other hand, although having clearly defined terms is probably a lost battle, words are needed to communicate, and it’s definitely worthwhile to try to have stable, useful words that we can use to be able to explain and discuss things.
I think in some cases it would have been great if terms hadn’t lost their original meaning (I’m thinking devops, for example), but likely if I thought about it, I’d find terms where I would think their drift from their original meaning has been good.
…
I like the article, even if I disagree with it. I feel this generational gap with the newer crop of programmers. Likely my elders feel a similar gap with my cohort!
But we are always quick to judge that we are on a downhill slope. We’ve been so long on this downhill slope, yet we are still alive and kicking, so maybe it’s not so downhill.
The new crop of programmers is just new. They haven’t specialized yet! They come with, at most, an undergrad education that hopefully exposed them to Context-Free Grammars and Turing Machines and a moderately complicated data structure like a red-black tree and a basic misunderstanding of what the halting problem is (they think it means you can’t write a program to tell whether a program will halt). Or maybe they come from a bootcamp and learned none of that. The career path of a programmer goes many places. Sure, some of this cohort will stay at the JS-framework-user level forever. That’s fine, it’s a job. Others will grow and learn and become specialists in very interesting areas, pushed by the two winds of their interests and immediate job necessities.
Yes and… the new crop of programmers likely includes a ton of people who do know. And also, the “new” crop of programmers has always included such folk.
What I’m saying is: in every generation there are always some people saying the next generation will be a disaster. Hasn’t happened yet :D
A fresh-out-of-school colleague of mine recently used Math.log2() to test whether a bit was set. They did study IPv4 and how e.g. address masks work. Not sure what to think about that.
I understand they spend most of their time coding pretty complex Vue apps, but still.
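For the record, the usual way to test a bit is with a mask rather than logarithms; here is a quick illustration in Python (the colleague’s code was presumably JS, but the idea carries over directly):

```python
def is_bit_set(value: int, k: int) -> bool:
    """Test whether bit k (0-indexed from the least significant bit) is set."""
    return (value & (1 << k)) != 0

# A log2-based check only behaves when exactly one bit is set;
# a mask works for any value.
assert is_bit_set(0b1010, 1)      # bit 1 is set in 0b1010
assert not is_bit_set(0b1010, 2)  # bit 2 is not
```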
Largely agree, but after having spent the last year or so getting up to speed with modern frontend dev, I’m cranking JS syntax alone to like a 0.7 on the CS scale. Half of the damn language is build-time AST transformations + runtime polyfill. Learning Babel is practically a compilers course.
I’m not sure I understand you correctly, but let me reiterate something.
Something being CS or not does not make it harder or more deserving of merit. There are things that require different levels of intelligence in CS; there are plenty of easy things.
And there are a lot of very difficult things in life which are not CS. Beating the world chess champion would make me much prouder than passing my first subjects in CS! As a less far-fetched example, there is plenty of software which is an engineering feat but really is not a CS feat. (And vice versa, there’s a ton of “CS code” around which is rather simple in engineering terms.)
I’m not saying that your JS code was not CS-y; I would have to see it to give my opinion. But doing something hard with a computer does not mean you are doing CS (but it can!). And like, figuring out a complex CS proof frequently does not require any coding at all (but it can!).
I feel that if the teacher were instead explaining the Nyquist-Shannon sampling theorem, the joke would work much better and be somewhat more balanced.
While analyzing algorithmic complexity can be useful, modern machines are quite unintuitive with register renaming and caches to the point that trying rough solutions and benchmarking often works better.
Nice talk! I worked through Nielsen and Chuang a while back, and was pleasantly surprised to see that Nielsen was one of the authors of quantum.country. (I skimmed it, and it looks like the site is just the first chapter of N+C.)
A couple questions:
You said that current QCs can go up to about 100 qubits. Are those physical qubits or error-corrected qubits (based on many physical qubits)? And are they general purpose (can you apply any gate between any pair of qubits, for example), or are they limited (like systems I’ve seen where you can only use a gate between qubit 0 and qubit 1 and between qubit 1 and qubit 2, but there is no gate between qubit 0 and 2)?
Finally, since the N+C book came out 20 years ago, my knowledge of QC is probably dated. Since you work in the space, what’s new? Anything really cool beyond Shor and Grover?
The qubit counts I mention throughout the talk are physical qubits. Currently, we as an industry are at the level of a handful of error-corrected qubits, at most. There isn’t a direct mapping of X physical == Y logical, because it depends on the choice and implementation of error correction, and it’s an area of active research. Roughly, we think that ~1000 physical qubits would yield 5-10 logical qubits. The roadmaps of the majority of hardware manufacturers target this amount in the next few years. Some claim huge jumps before 2030.
In typical superconducting QCs qubit connectivity is a big issue, and it’s fundamentally different from e.g. ion trap QCs, where almost arbitrary many-to-many connectivity is possible. (The advantage of superconducting over ion traps is speed: the former is about 1000 times faster than the latter.) A connection between two qubits requires a physical thing (in our case, a so-called tunable coupler). We can fit at most under 10 (usually 4) tunable couplers per qubit, which is why most chips are arranged as a square lattice.
So yeah, you can’t just have 2 arbitrary qubits connected. In practice, when you’re writing an algorithm in the form of a quantum circuit, you treat any 2 qubits as connected, because during pre-processing and compilation our software adds swap operations. In such a lattice you can apply a 2-qubit gate to qubits Q1 and Q3, and the preprocessor would e.g. swap the values of Q1 and Q2, apply the operation between Q2 and Q3, then swap Q2 and Q1 back. This is similar to what happens in CPU memory. Of course, this adds overhead, reduces the quality of results, and increases the overall time for execution, which is very limited (nanoseconds; though in some circumstances we’ve reached a millisecond).
If your circuit needs an operation between Q1 and Q9, then the preprocessor has to make multiple swaps, iteratively moving the state along a chain of qubits back and forth. In practice, another preprocessing step (routing) would determine the best initial mapping between your circuit’s qubits and the actual physical qubits, so that if your circuit requires lots of interactions between Q1 and Q9, your Q1 will more likely be mapped to physical Q8, for example. Routing and optimization is an NP-hard problem.
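The swap insertion described above can be sketched in a few lines of Python. This is a toy model (not any vendor’s actual compiler): qubits sit on a 1D chain, and a 2-qubit gate between distant qubits becomes a ladder of SWAPs, the gate on now-adjacent qubits, and the SWAPs undone in reverse:

```python
def route_on_chain(control: int, target: int) -> list:
    """Toy routing on a 1D chain: emit SWAPs to bring `control` next to
    `target`, apply the gate, then mirror the SWAPs to restore the mapping."""
    ops = []
    pos = control
    step = 1 if target > pos else -1
    # Walk the control qubit toward the target, one neighbor at a time.
    while abs(target - pos) > 1:
        ops.append(("SWAP", pos, pos + step))
        pos += step
    ops.append(("GATE", pos, target))
    # Undo the swaps in reverse order to restore the original layout.
    for op in reversed(ops[:-1]):
        ops.append(op)
    return ops

# Gate between Q1 and Q3: swap Q1<->Q2, gate Q2-Q3, swap back.
print(route_on_chain(1, 3))
# [('SWAP', 1, 2), ('GATE', 2, 3), ('SWAP', 1, 2)]
```

Each extra SWAP is itself made of gates, which is where the overhead and fidelity loss mentioned above come from.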
Some cool research is happening in error-correction algorithms. Check out qLDPC codes and other approaches. On the applications side, there are exciting things in various optimization and simulation problems, like battery chemistry simulations. Overall, I think the best and most promising applications always boil down to simulating quantum systems.
The computational resonator looks really cool. It seems like a good solution to the connectivity problem. I’m having trouble finding more info other than that one press release, but I’m imagining the topology is probably the 24 qubits arranged in a 24-point star, with 2 qubits in the center that have connectivity to each of the others. Then when you want an interaction between any two, you swap into the center, do the gate, then swap back. In a computing analogy, the center is like the registers/ALU and the other bits are the RAM.
Am I close? If so, do you think it will scale to a larger number of qubits? I imagine there are probably physical limits to how many qubits can connect to the “center”.
—
I will check those links out, thanks again.
Currently, the only publicly available chip with a computational resonator has 6 qubits (IQM Star 6), and it can be accessed via cloud. Interestingly, you can disable some restrictions and access higher level states of the resonator.
> Then when you want interaction between any two you swap into the center do the gate, then swap back. In computing analogy, the center is like a registers/ALU and the other bits are the RAM. Am I close?
I can’t share much about the current developments, but you are close in the sense that there are various configurations to achieve a balance between fabrication, connectivity, and fidelity. One can imagine building small stars and connecting them with longer-range couplers to other stars. There are still some issues due to the fact that it’s a 2D lattice in the end, while for some error correction techniques the ideal topology is actually a torus; but I don’t think many companies can build such complex 3D chips today.
> If so, do you think that it will scale to larger number of qubits?
I think there’s definitely potential. Computational resonators can be relatively long, and that’s the main factor. Connecting many qubits to it is not, AFAIK, the biggest issue, since it’s basically the same coupling as between any 2 qubits in a normal square lattice topology.
BTW, I made a mistake in my previous comment, I said “…increases the overall time for execution, which is very limited (nanoseconds…)” — I meant to say “microseconds”. Nanoseconds is the scale of execution of a single operation (gate). Currently, we can execute hundreds and hundreds of gates reliably.
AFAIK, it generates a conversational audio file ABOUT the data you provided. It is not simply reading the original text, which is what I want. Also, it does not generate an RSS feed with podcasts.
My own: https://minifeed.net/
Always wanted a lobste.rs/HN-style feed of links rather than a traditional RSS reader. I ended up implementing kind of both, but in most cases I prefer to go to the original page on the author’s website anyway.
Looking for job opportunities in Ireland. Feeling pretty excited about a potentially better job market over there compared to where I am now.
Where are you now?
Finland
Is the Finnish economy particularly bad right now?
It’s stagnating, and salaries are very low compared to the cost of living. Taxes are rising, including the sales tax (VAT), which was one of the highest in Europe to begin with (24%), now 25.5%. Salaries in software engineering are lower than in Western Europe, but the biggest problem IMO is that there aren’t enough good jobs. Most job openings are in endless consultancy firms. Very little product development, startups and innovation in the past decade. And I can’t see any chances of the situation improving in the near future.
Many professionals I know have left Finland or are seriously considering moving away.
I think almost everywhere has a little bit of that, and it’s getting worse. (Of course, places can be better in absolute terms.)
Why is there a fork?
Because the development and maintenance of the original app had stalled; no activity in the last 3 years.
Wasn’t it commercial?
i wonder if i’ll live to see the day where we can talk about a language without putting a different language down
The YouTube channel here seems to be a person who needs to be dramatic for view reasons. I think the actual content, and the position of the Ghostty author here on this topic, is pretty mild.
An actual bit from the video:
Guest: “…I don’t know, I’m questioning everything about Go’s place in the stack because […reasonable remarks about design tradeoffs…]”
Host: “I love that you not only did you just wreck Go […]”
Aside… In the new year I’ve started reflexively marking videos from channels I follow as “not interested” when the title is clickbait, versus a succinct synopsis of what the video is about. I feel like clickbait and sensationalism on YouTube is out of control, even among my somewhat curated list of subscribed channels.
This is why I can’t stand almost any developer content on YouTube and similar platforms. They’re way too surface-level, weirdly obsessed with the inane horse race of finding the “best” developer tooling, and clickbait-y to a laughable degree. I have >20 years of experience, I’m not interested in watching someone blather on about why Go sucks when you could spend that time on talking about the actual craft of building things.
But, no, instead we get an avalanche of beginner-level content that lacks any sort of seriousness.
This is why I really like the “Developer Voices” channel. Great host, calm and knowledgeable. Interesting guests and topics. Check it out if you don’t know it yet.
Very nice channel indeed. Found it accidentally via this interview about Smalltalk and enjoyed it very much.
Do you have other channel recommendations?
I found Software Unscripted to be pretty good too. Not quite as calm as Developer Voices, but the energy is positive.
Thanks! Didn’t know Richard Feldman hosted a podcast, he’s a good communicator.
Signals and Threads is another great podcast, though it doesn’t seem to have a scheduled release.
Thanks for the suggestion. I will check it out!
I’m in a similar boat. Have you found any decent channels that aren’t noob splooge? Sometimes I’ll watch Asahi Lina, but I haven’t found anything else that’s about getting stuff done. Also, non-OS topics would be nice additions as well.
As someone else said, Developer Voices is excellent, and on the opposite end of the spectrum from OP.
Two more:
The Software Unscripted podcast publishes on YouTube too, and I enjoy it a fair bit at least in the audio only format.
Book Overflow, which focuses on reading a software book about once every two weeks and talking about it in depth.
7 (7!) years ago LTT made a video about why their thumbnails are so… off-putting, and it essentially boiled down to “don’t hate the player; hate the game”. YouTube rewards that kind of content. There’s a reason why nearly every popular video these days is some variant of “I spent 50 HOURS writing C++” with a thumbnail of a guy throwing up. If your livelihood depends on YouTube, you’re leaving money on the table by not doing that.
It’s not just “Youtube rewards it”, it’s that viewers support it. It’s a tiny, vocal minority of people who reject those thumbnails. The vaaaaast majority of viewers see them and click.
I don’t think you can make a definitive statement either way, because YouTube has its thumb on the scales. Their algorithm boosts videos on factors other than just viewer click-through or retention rates (this has also been a source of many superstitions held by YouTubers in the past), and the way the thumbnail, title, and content metas have evolved makes me skeptical that viewers as a whole support it.
What is the alternative? That they look at the image and go “does this person make a dumb face” ? Or like “there’s lots of colors” ? I think the simplest explanation is that people click on the videos a lot.
…or it’s just that both negative and positive are tiny slices compared to neutrals but the negative is slightly smaller than the positive.
(I use thumbnails and titles to evaluate whether to block a channel for being too clickbait-y or I’d use DeArrow to get rid of the annoyance on the “necessary evil”-level ones.)
then you have chosen poorly.
No, I think it’s okay for people to make great content for a living.
I am quite happy to differ in opinion to someone who says ‘great content’ unironically. Anyway your response is obviously a straw man, I’m not telling Chopin to stop composing for a living.
Your personal distaste for modern culture does not make it any higher or lower than Chopin, nor does it invalidate the fact that the people who make it have every right to make a living off of it.
They literally don’t have a right to make a living from Youtube, this is exactly the problem. Youtube can pull the plug and demonetise them at any second and on the slightest whim, and they have absolutely no recourse. This is why relying on it to make a living is a poor choice. You couldn’t be more diametrically wrong if you tried. You have also once again made a straw man with the nonsense you invented about what I think about modern culture.
How’s that any different from the state of the media industry at any point in history? People have lost their careers for any reason in the past. Even if you consider tech or any other field, you’re always building a career on top of something else. YouTube has done more to let anyone make a living off content than any other stage in history, saying you’re choosing poorly to make videos for YouTube is stupid.
You’re the one who brought it up:
Isn’t this kind of a rigid take? Why is depending on youtube a poor choice? For a lot of people, I would assume it’s that or working at a fast-food restaurant.
Whether that’s a good long-term strategy, or a benefit to humanity is a different discussion, but it doesn’t have to necessarily be a poor choice.
Not really?
I mean sure if you’ve got like 1000 views a video then maybe your livelihood depending on YouTube is a poor choice.
There’s other factors that come into this, but if you’ve got millions of views and you’ve got sponsors you do ad-reads for money/affiliate links then maybe you’ll be making enough to actually “choose” YouTube as your main source of income without it being a poor choice (and it takes a lot of effort to reach that point in the first place).
We’ve been seeing this more and more. You can, and people definitely do, make careers out of YouTube and “playing the game” is essential to that.
Heh - I had guessed who the host would be based on your comment before I even checked. He’s very much a Content Creator (with all the pandering and engagement-hacking that implies). Best avoided.
Your “ghostty author” literally built a multibillion-dollar company writing Go for over a decade, so I’m pretty sure his opinion is not a random internet hot take.
Yup. He was generally complimentary of Go in the interview. He just doesn’t want to use it or look at it at this point in his life. Since the Lobsters community has quite an anomalous Go skew, I’m not surprised that this lack of positivity about Go would be automatically unpopular here.
And of course the title was click-baity – but what can we expect from an ad-revenue-driven talk show?
My experience is that Lobste.rs is way more Rust leaning than Go leaning, if anything.
We have more time to comment on Lobsters because our tools are better ;)
Waiting for compile to finish, eh?
Hahahahaha. Good riposte!
I was able to get the incremental re-builds down to 3-5 seconds on a 20kloc project with a fat stack of dependencies which has been good enough given most of that is link time for a native binary and a wasm payload.
`cargo check` via rust-analyzer in my editor is faster and does enough for my interactive workflow most of the time.

Yeah, Haskell is so superior to Rust that it’s not even fun at this point.
It’s funny you say that because recently it seems we get a huge debate on any Go-related post :D
First thought was “I bet it’s The Primeagen.” Was not disappointed when I clicked to find out.
Don’t be a drama queen ;-) You can, all you want. That’s what most people do.
The host actually really likes Go, and so does the guest. He built an entire company where Go was the primary (only?) language used. It is only natural to ask him why he picked Zig over Go for creating Ghostty, and it is only natural that the answer will contrast the two.
i can’t upvote this enough
I don’t believe the “a11y” tag is appropriate here. An example of what would have made it appropriate is if he had talked about how it’s difficult to implement accessibility in cross-platform UI toolkits.
Yes, and I don’t remember putting it in, and the moderation log does not seem to contain anyone adding it afterwards. Perhaps I made a mistake and didn’t notice it. Removed the tag now.
Nice setup! The window not being in the center of the wall frustrates me though :D
I guess we’re gonna see a new “Lobsters Battlestations and Screenshots” thread soon! (2024 edition)
I was curious (again) to see 2016 missing from the battlestations list, although it seems we just didn’t ask that year. Closest I can find in a search for `tag:ask`, clicking through to 2016 stories (roughly pages 88-97), is either “What desktop environment and window manager do you use?” or “What does your development environment look like? How might it change?”, although only the first story has screenshots linked in the comments, and not very many. Not sure either fits the criteria, so I guess we have a hole in the streak.

Thank You - yes - I also needed to get used to that :)
I plan to get it up on Monday. I try to get the battlestations thread going on Mondays so people can add their workstations throughout the work-week :D
This week I’ve been playing with Love2D, a simple game engine; I wrote a prototype in Lua and found it a refreshingly clear and pleasant language.
Btw, the game Balatro - which is amazing IMO - is written in Lua on Love2D.
Balatro is awesome.
Have you seen the Playdate console? Its SDK is Lua based and should feel familiar to Love.
I’ve seen photos of it, but never bothered to learn more. Thanks for mentioning it, it looks pretty cool! Surprised to see it was made by Panic, who are (at least to me) known for macOS&iOS dev-specific apps and utilities.
UPD: Oh, apparently Panic is also a video game publisher! One of the games I really enjoyed was Firewatch, which TIL was published by them. Untitled Goose Game is also fun.
What’s the error handling like, after having tried it out for a while?
I notice that the examples just load from files like this:

```lua
whale = love.graphics.newImage("whale.png")
```

and:

```lua
sound = love.audio.newSource("music.ogg", "stream")
```

LÖVE expects these files to exist and panics (calls `error`) if they don’t. This is fine because in a game, you should be able to expect all the required files to exist. If you still want to handle the error, you can use `pcall`, as shown in the blog post.

I’ve noticed that often a link will be on HN and then appear on lobste.rs a few days later, so it would be interesting to widen the time window rather than just looking at the front page. (Obviously that would be a lot more work, though!)
I sometimes cross post to Lobsters because I feel like this community would have a different or deeper opinion on the linked article.
Please continue — the S/N ratio in comments is much better here!
I agree. I see an article on the orange site and after a few comments things kind of turn sour for me and I head to lobsters to see the comments. Usually lobsters does not disappoint…
Without relying on server-side storage (which I’d like to avoid), it’d be great to be able to fetch some historical data. I fetch the Lobste.rs feed from https://lobste.rs/hottest.json, which I found in some random comment here; I am not aware of other endpoints. For HN, I’m using the Algolia API (https://hn.algolia.com/api/v1/search?tags=front_page), which gives a bit more flexibility.
My go-to for this sort of app is “git scraping”, à la https://simonwillison.net/2020/Oct/9/git-scraping/
I’m going to say Github here, but I’m sure alternatives work too. All you need is static hosting and an ability to run a cron that can commit to itself.
It would look something like this:
Every hour (or your interval of choice), run a Github Action that fetches the entries on the current homepage. Save them in a file in your git repo.
Then to render your site, check back through the git history for whatever overlap window (eg, 5 days might work well for HN/Lobsters). Commit the html file, and host it on Github Pages.
For something as small as “the links on Lobsters every hour”, you can easily keep decades of history in a git repo.
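The overlap-window step described above can be sketched in a few lines of Python. This is a hypothetical layout (one set of URLs per snapshot timestamp, per site), not the actual repo structure:

```python
from datetime import datetime, timedelta

def shared_links(snapshots_a, snapshots_b, window_days=5):
    """Given {timestamp: set_of_urls} snapshots for two sites, return links
    that appeared on both sites within `window_days` of each other."""
    window = timedelta(days=window_days)
    first_seen_a = {}
    # Record when each link first showed up on site A.
    for ts in sorted(snapshots_a):
        for url in snapshots_a[ts]:
            first_seen_a.setdefault(url, ts)
    shared = set()
    # A link is "shared" if site B saw it within the window of site A.
    for ts in sorted(snapshots_b):
        for url in snapshots_b[ts]:
            if url in first_seen_a and abs(ts - first_seen_a[url]) <= window:
                shared.add(url)
    return shared

a = {datetime(2024, 1, 1): {"https://example.com/post"}}
b = {datetime(2024, 1, 3): {"https://example.com/post", "https://other.example"}}
print(shared_links(a, b))  # {'https://example.com/post'}
```

With hourly snapshots committed by the cron job, the render step just loads the last N days of files and runs something like this.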
There’s active.json as well, and you always could use the rss links too for filtering
“How bloom filters made SQLite 10x faster” is not detected as shared, but it is
Yeah, that was a problem with normalization, as per ryan-duve’s comment above. Fixed now.
Have you considered using heuristics to normalize the links? For example, I currently see https://avi.im/blag/2024/sqlite-past-present-future as “Unique to Lobsters” and https://avi.im/blag/2024/sqlite-past-present-future/ as “Unique to Hacker News”.

Good point, fixed!
If you’d like a few more normalization rules, ours are here.
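For illustration, a minimal normalization pass along these lines (a hypothetical helper, not the site’s actual code) can be built on the standard library’s URL parsing:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Normalize a URL for duplicate detection: lowercase scheme and host,
    drop the fragment, and strip a trailing slash from the path."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    return urlunsplit((scheme.lower(), netloc.lower(), path.rstrip("/"), query, ""))

# The two variants from the comment above now compare equal.
assert normalize_url("https://avi.im/blag/2024/sqlite-past-present-future/") == \
       normalize_url("https://avi.im/blag/2024/sqlite-past-present-future")
```

Real rule sets usually go further (dropping tracking query parameters, AMP prefixes, etc.), but trailing slashes and fragments cover a lot of the mismatches.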
Container images were meant (among other goals) to provide reproducible pieces of software, and I’ve been wondering if it makes sense to go further and provide whole machines (virtual, or even physical) with a static, immutable, pre-defined setup for the sake of a single executable binary. Sans OS updates, it’s a security liability, of course, but, just like with containers, the static machine would not be meant to be exposed to the outside world directly; instead, the admin would have to set up the “front end” for it (e.g. some proxy if we’re talking about web services). The contract here is that the maintainer of the software takes care of its environment and provides a thin interface in the form of HTTP or some other simple-enough protocol.
Like, imagine buying a personal accounting app in the form of a Raspberry Pi. (“Bring your own storage” and “bring your own security”.)
You can’t really “bring your own security” by putting a proxy in front.
If the interface is http(s), do you think a proxy that forwards all/most of http(s) is actually going to help with security? Why?
I think that’s suggested not for security but to make it so the backend can last for longer before the next time that changing requirements force it to change.
The point of the reverse proxy is TLS termination. Every 5 to 20 years the internet at large starts to demand new ciphers so you have to upgrade your TLS libraries even if they don’t have any security vulnerabilities per se.
So having a (stateless, upgradeable) reverse proxy in front that speaks HTTPS to the internet and HTTP to the backend lengthens the time for which the backend can avoid needing to change.
A proxy can certainly protect somewhat naive HTTP implementations against protocol-level attacks (so completely malformed requests are dropped) and certain types of resource exhaustion attacks by limiting the number of in-flight connections.
It’s interesting that the comments here are solidifying this bit:
I personally strongly agree that there’s a big hole in the website creation space left behind by Frontpage and Dreamweaver. There totally should be a desktop GUI WYSIWYG “type up some stuff and export some HTML” website creator program, but I’ve searched and can’t really find anything. I’m firmly in the “oh yeah, i don’t have time to make that for you, but i get why you’d want it. good luck man.” camp, and if I end up learning enough about desktop software development at some point I might just make the time to work on it.
Here is one: https://getpublii.com/
Wow! Thanks for the pointer — Publii looks fantastic. I might actually get the blog going again.
This is probably not something you’re after, but just in case: I’m also building a simple blogging platform (https://exotext.com, example blog) and am seeking any sort of feedback in exchange for a free-of-charge service until some future public release, and a perpetual discount after that. Let me know if you (or anyone else reading this) want to try it.
I encourage people to check out Sam’s other visualized explanations, they are excellent. My favorite is about queues: https://encore.dev/blog/queueing
Thank you! 🙏
I really enjoyed making Queueing, it was fun making different visuals interact with each other (the graphs toward the end) and the whole “tasks” system is something I’d love to use again.
A few months ago we were re-implementing a queueing solution at work, and your visualization helped some people (especially less technical ones) understand the domain better.
Dunno if you can fix it, but the queueing diagrams are completely missing on iOS.
I see them on iOS. Do you have JavaScript disabled?
No, just some ad blockers 🤷
The “retrocomputing” tag was a nice touch.
Not retro enough, Babbage’s analytical engine is the OG retro :)
Yeah yeah you kids are all about that newfangled engine thing. Back in my day all we had was the Jacquard loom, and we liked it!
I considered talking about Babbage and Lovelace but ultimately decided it was out of scope. Such a tragic story, though. “The Innovators” by Walter Isaacson was a great read.
Thanks for the recommendation, I just got a fresh credit to spend on Audible, so I’m gonna get “The Innovators”.
What is this thing, computer science? Who are these people, programmers? What are these terms, anyway? Flawed and ever-changing, my friends, just like the rest of the universe.
You have insulted me with the word “friends”, because its meaning had changed, just like the rest of the universe :) Joking, of course, but I hope you get my point.
Your closing paragraph is what inspired my feelings of camaraderie! I may not agree with or relate to everything you’ve written, but I appreciate your honesty about why you feel the way you do and the nuance of your conclusion.
Thank you! I guess I can now find refuge in understanding that things change, and there are people who understand that, and that’s great!
I’m watching the game show pilot now. One thing I appreciated is that one of the questions in the web standards category mentions the impact that a particular CSS property has on screen readers. This is progress.
Accessibility is generally being considered and discussed a lot more in the frontend circles than even 5 years ago, this is indeed progress.
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way; e.g., if you have ever seen a non-technical older person trying to navigate the little pop-out hamburger menus and so forth. Or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
To be fair, JSX is a pleasurable way to sling together HTML, regardless of whether it’s on the frontend or backend.
Many backend server frameworks have things similar to JSX.
That’s not completely true. One beautiful thing about JSX is that any JSX HTML node is a value: you have the whole language at your disposal to create that HTML. Most backend server frameworks use templating instead. For most cases the two are equivalent, but sometimes being able to put HTML values in lists and dictionaries, with the full power of the language, does come in handy.
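To make the values-not-templates point concrete, here is a minimal sketch in plain JavaScript (the `h` helper and `render` function are made up for illustration, not from any particular library; real JSX compiles down to calls of roughly this shape):

```javascript
// Represent an HTML node as a plain value, then render it to a string.
// (Toy sketch: a real version would also escape text and handle void tags.)
const h = (tag, attrs = {}, ...children) => ({ tag, attrs, children });

const render = (node) => {
  if (typeof node === "string") return node; // text node
  const attrs = Object.entries(node.attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const body = node.children.map(render).join("");
  return `<${node.tag}${attrs}>${body}</${node.tag}>`;
};

// Because nodes are ordinary values, the full language applies:
// keep them in arrays, map over data, stash them in objects, etc.
const items = ["home", "about"].map((name) => h("li", {}, name));
const nav = h("ul", { class: "nav" }, ...items);

console.log(render(nav));
// <ul class="nav"><li>home</li><li>about</li></ul>
```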
Well, that’s exactly what your OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by React or even by frontend libraries. In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread “many” as “most”, as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate our whole Ruby app to Scala so we can get Scalatags. JSX was the first mainstream exposure of a values-based HTML builder; you and the sibling comment point to Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX both on the front and back ends, which makes it useful for generating HTML. Your post, the sibling, and the OP just sound slightly butthurt at JavaScript for some reason. It’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I find that arguing in bad faith, not trying to bring anything constructive to the conversation, same as the rest of the thread.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use them in yours.
Anyway, it sounds like we are in agreement that this would be better than adopting JavaScript just because it is one of the few non-niche languages that happens to have such language-oriented support for tags-as-objects like JSX.
I found that, after all, I prefer to write what is going to end up as HTML in something that looks as much like HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like the context switching it forces on my brain. HTML templates with checks as strong as possible are my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There are advantages and disadvantages to emulating the syntax of a target language in the host language. I also find JSX not too bad; however, one has to learn it first, which is definitely overhead. We just tend to forget that once we have learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early aughts).
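For illustration, the SXML idea (an element is just a list whose head is the tag name) carries over to any language with lists. A toy sketch in JavaScript using nested arrays (the `toHtml` name is mine; a real version would need attribute handling and escaping):

```javascript
// SXML-style: an element is a list whose first item is the tag name and
// whose remaining items are its children (strings are text nodes).
const toHtml = (node) =>
  Array.isArray(node)
    ? `<${node[0]}>${node.slice(1).map(toHtml).join("")}</${node[0]}>`
    : String(node);

// Ordinary list operations build and transform the tree.
const doc = ["ul", ...[1, 2, 3].map((n) => ["li", `item ${n}`])];
console.log(toHtml(doc));
// <ul><li>item 1</li><li>item 2</li><li>item 3</li></ul>
```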
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
I’m aware; I wrote such a lib for Common Lisp. My point was that the frameworks most people use are still in the templating world.
It’s a shame other languages don’t really have this. I guess having SXSLT transformation is the closest most get.
Many languages have this, here’s a tiny sample: https://github.com/yawaramin/dream-html?tab=readme-ov-file#prior-artdesign-notes
Every mainstream as well as many niche languages have libraries that build HTML as pure values in your language itself, allowing the full power of the language to be used–defining functions, using control flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
JSX is one among many 😉
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(So personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of “render” is to extract, convert, deliver, submit, etc. So this use is perfectly in line with the definition and with centuries of usage IRL, so I can’t complain too much really.)
You can render a template (as in, plug in values for the placeholders in an HTML skeleton), and that’s the intended usage here I think.
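In that sense, “rendering a template” is just placeholder substitution. A toy sketch (the `{{placeholder}}` syntax and `renderTemplate` name are made up for illustration; real engines also handle escaping, loops, and partials):

```javascript
// Plug values into placeholders in an HTML skeleton.
// Placeholders look like {{name}}; unknown keys render as empty strings.
const renderTemplate = (tpl, vals) =>
  tpl.replace(/\{\{(\w+)\}\}/g, (_, key) => vals[key] ?? "");

const page = renderTemplate("<h1>{{title}}</h1><p>{{body}}</p>", {
  title: "Hello",
  body: "server-side rendered",
});
console.log(page);
// <h1>Hello</h1><p>server-side rendered</p>
```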
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
It seems that was a phase? The term transpiler annoys me a bit, but I don’t remember seeing it for quite a while now.
Worked very well for Opera Mini for years. Made very low-end web clients far more usable. What amazed me was how well interactivity worked.
So now I want a server side rendering framework that produces a PNG that fits the width of my screen. This could be awesome!
There was a startup whose idea was to stream (as in video stream) web browsing similar to cloud gaming: https://www.theverge.com/2021/4/29/22408818/mighty-browser-chrome-cloud-streaming-web
It would probably be smaller than what is being shipped as a web page these days.
Exactly. The term is simply wrong…
ESL issue. “To render” is a fairly broad term meaning to provide/concoct/actuate; it has little to do with graphics in general.
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
The way that seems ‘different’ to you is the way that is idiomatic in the context of websites 😉
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type .astro that renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that Astro is a symptom of the exact same problem you are illustrating from that quote. That framework is going to die the same way that Next.js will: death by a thousand features.
Huh, couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is the same as any other framework. The baggage I’m referring to is the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and which of course require plenty of other file types to do what .astro does (.rb, .py, .yml, etc.). The State of JS survey seems to indicate that people share my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”). I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
Well that’s one way to look at it. Another is that popular developer culture doesn’t need to be unified. There can be one segment for whom “computer science” means “what does this javascript code snippet do”; I am not a linguistic prescriptivist so this does not bother me. It doesn’t put us in danger of, like, forgetting the theory of formal languages. It also isn’t really a new sensibility; this comic has been floating around for at least 15 years now. People tend to project a hierarchy onto this where the people who care about things like formal languages and algorithms are more hardcore than the javascript bootcamp world. But this is a strange preoccupation we would be better to leave behind. If you want to get in touch with “hardcore” popular developer culture you can participate in Advent of Code! It’s a very vibrant part of the community! Or go to one of the innumerable tech conferences on very niche topics that occur daily throughout the developed world.
I wouldn’t say I’m a prescriptivist either, but the purpose of language is to communicate, and it seems very odd to me that there are people for whom “computer science” is semantically equivalent to “JavaScript trivia”. I think that seemingly inexplicable mismatch in meaning is what the article is remarking upon; not being judgemental about JavaScript qua JavaScript, but that there’s such a gulf in ontologies among people in the same field.
I have heard this phrase many times when defending a prescriptivist perspective and it is never actually about the words confusing people, it’s about upholding the venerability of some term they’ve attached to themselves.
Ehhhh strong disagree on that one! There’s a difference between “that’s not a word” and “that word choice is likely to cause more confusion than clarity”. The former is a judgement, the latter is an observation.
I have not attached the term “computer science” to myself. I truly think it’s just confusing to use that phrase to refer to minutiae of a particular programming language. I’m not saying that’s “wrong”, nor that JavaScript in particular is bad or anything like that, just that it is very contrary to my expectations, and so saying “computer science” in that manner communicates something very different to me than whoever is using it that way would intend.
I would say that it’s wrong and bad to impugn my motives and claim that I actually mean the exact opposite of what I said though!
How are you confused? Does it really call into question your own understanding of what the term “computer science” means?
I’m confused because when someone says something is regarding “computer science” I expect to see things related to the branch of mathematics of that name. In the linked article, they mention seeing the term used in a game show: If I was told I was going to be on a game show and the category was “computer science” I would prepare by studying the aforementioned field and if I then was presented with language-specific trivia instead, I would be a bit miffed.
What proportion of an undergraduate computer science degree would you say is dedicated to the branch of mathematics and what proportion is dedicated to various language trivia?
When I was in school, the computer science courses I took were pure math, but I can’t speak to what other programs at other schools at other times are like. I don’t really understand why you’re so vexed here; I’m not trying to “project a hierarchy” or say that one thing is better than another (I taught at a javascript bootcamp!), merely that I think the point the article is making is that it feels strange to discover one’s peers use familiar words to mean very different things. I’m just trying to explain my perspective; you’re the one who is repeatedly questioning my honesty.
90% mathematics, from my experience doing such a degree in Swansea in the early 2000s and teaching at Cambridge much later. Programming is part of the course, but as an applied tool for teaching the concepts. You will almost never see an exam question where some language syntax thing is the difference between a right or wrong answer and never something where an API is the focus of a question.
Things like computer graphics required programming, because it would make no sense to explore those algorithms without an implementation that ends up with pixels on a screen (though doing some of the antialiased line-drawing things on graph paper was informative).
Dijkstra’s comment was that computer science is no more about computers than astronomy is about telescopes. You can’t do much astronomy without telescopes, and you need to understand the properties of telescopes to do astronomy well, but they’re just tools. Programming is the same for computer science. Not being able to program will harm your ability to study the subject, even at the theoretical extremes (you can prove properties of much more interesting systems if you can program in Coq, HOL4, TLA+, or whatever than if you use a pen and paper).
Even software engineering is mostly about things that are not language trivia and that’s a far more applied subject.
When I got my undergraduate degree in 1987 it was literally a B.S. Applied Mathematics (Computer Science), and when we learned language trivia it was because the course was about language trivia (Comparative Programming Languages, which was a really fun one involving very different languages like CLU, Prolog, and SNOBOL).
If kids nowadays are learning only language trivia to get a CS degree, somebody is calling it by the wrong name.
In my experience, only one in four courses was even related to applied programming. The other courses were not language specific at all.
Even in the courses that related to applied programming, usually less than a week per semester was spent on language trivia.
Overall I’d say about 5%?
“I truly think it’s just confusing to use [the phrase “computer science” to] refer to minutiae of a particular programming language” -> “How are you confused?”
This is a bit underhanded. It’s confusing when someone says “computer science”, and then later it turns out they mean “Array.map returns an array and Array.foreach doesn’t” and not what the term has been used to refer to for a long time; algorithms, data structures, information theory, computation theory, PLT, etc. I am absolutely a linguistic descriptivist, but to claim that using a term in a way that doesn’t match the established usage is not cause for confusion is not actually a descriptivist take. It’d become less confusing once that becomes the established usage, but until we’re there, it’s still going to fail to communicate well and cause — say — confusion.
-Edsger Dijkstra
Words should mean specific things in technical fields.
Funny you should mention Dijkstra. Like Naur, he had a hard time getting on board with the term “computer science.” Dijkstra favored the subtly but importantly different “computing science.” Naur preferred “datalogy.”
In some countries the field is called informatics, I wonder why that didn’t happen in English speaking ones (or if there’s a difference I’m not aware of).
That term is used in various places with different meanings, including in the US, where it’s a term for “information science”, i.e. the study of things like taxonomy and information archiving. In other words, it’s already used for several purposes, so if you want clarity you need to look elsewhere.
some schools use the term, at my uni we had
I wish we had cybernetics instead.
Informatics, just like datalogy, seemingly focuses on data (information). Of course, processing data is at the core of computing, but, IMO, this misses the point. The point is the computing process, not what the process is applied to. Computing science is great, but cybernetics is cool, and also wider: it encompasses complex systems, feedback, and interactions.
Taking Naur’s side for a moment, though, the really science-y parts of a computer science curriculum (e.g. time complexity, category theory, graph theory) are basically just mathematics applied to data/information, so perhaps “datalogy” and “informatics” aren’t that far off the mark. Everything else that’s challenging about a CS undergrad curriculum (e.g. memory management, language design, parallel computation, state management) is just advanced programming that has accumulated conventions that have hardened over the years and are often confused with theory.
Interesting because I feel like cybernetics is both a subset of computing science (namely one concerned with systems that respond and adapt to their environment) and trans-disciplinary enough to evade this categorization.
To me it feels like computer science is a subset of cybernetics. Or, to be more precise: Computer science + Systems Science = Cybernetics.
Datalogy was, incidentally, what those courses were called when I took them as part of studying computational linguistics here in Sweden.
Well, Dijkstra would certainly not endorse committing a category error and conflating terms about a field with terms of a field.
After reading some of the author’s replies, it really does come off as bloviating about dictionary terms and real Computer Scientists.
It’s kind of an odd category, isn’t it? There are plenty of people who drive cars, but we generally don’t categorize daily commuters as Drivers.
I feel like we do? It becomes especially apparent whenever one is where cars are but is not in a car themselves (pedestrian, cyclist, motorcyclist) — there is a strong sense of “drivers” and “everyone else”.
I think I would characterize that among drivers as a lack of awareness for non drivers, at least in the USA. My point is that we don’t expect a sense of shared tribal affiliation between drivers like the author seems to expect between programmers.
It’s a continuum. JS syntax is somewhat CS (I don’t know, 0.2?), but it’s more “engineering” (0.9?). Saying that JS syntax is more engineering than CS is NOT negative towards JS. Neither engineering nor CS is superior or lesser with respect to the other!
On the other hand, although having clearly defined terms is probably a lost battle, words are needed to communicate, and it’s definitely worthwhile to try to have stable, useful words that we can use to be able to explain and discuss things.
I think in some cases it would have been great if terms hadn’t lost their original meaning (I’m thinking devops, for example), but likely if I thought about it, I’d find terms where I would think their drift from their original meaning has been good.
…
I like the article, even if I disagree with it. I feel this generational gap with the newer crop of programmers. Likely my elders feel a similar gap with my cohort!
But we are always quick to judge that we are on a downhill slope. We’ve been so long on this downhill slope, yet we are still alive and kicking, so maybe it’s not so downhill.
(A local tech journalist dug a quote from Phaedrus [so over 2300 years old] where some old man yells at… writing! Search https://standardebooks.org/ebooks/plato/dialogues/benjamin-jowett/text/single-page#phaedrus-text for “At the Egyptian city of Naucratis” and enjoy!)
The new crop of programmers is just new. They haven’t specialized yet! They come with, at most, an undergrad education that hopefully exposed them to Context-Free Grammars and Turing Machines and a moderately complicated data structure like a red-black tree and a basic misunderstanding of what the halting problem is (they think it means you can’t write a program to tell whether a program will halt). Or maybe they come from a bootcamp and learned none of that. The career path of a programmer goes many places. Sure, some of this cohort will stay at the JS-framework-user level forever. That’s fine, it’s a job. Others will grow and learn and become specialists in very interesting areas, pushed by the two winds of their interests and immediate job necessities.
Yes and… the new crop of programmers likely includes a ton of people who do know. And also, the “new” crop of programmers has always included such folk.
What I’m saying is, in every generation there are always some people saying the next generation will be a disaster. Hasn’t happened yet :D
A fresh out of school colleague of mine recently used Math.log2() to test whether a bit was set. They did study IPv4 and how e.g. address masks work. Not sure what to think about that.
I understand they spend most of their time coding pretty complex Vue apps, but still.
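For reference, the idiomatic test is a bitwise AND with a mask; Math.log2 can only locate the highest set bit, so it can’t test an arbitrary one. A small sketch (the netmask example is my own, not from the original comment):

```javascript
// Testing whether bit k of a value is set: mask it out with bitwise AND.
const bitSet = (value, k) => (value & (1 << k)) !== 0;

// e.g. take the low 16 bits of a /24 IPv4 netmask, 0xff00:
// bit 8 (counting from 0 at the low end) is set, bit 7 is not.
console.log(bitSet(0xff00, 8)); // true
console.log(bitSet(0xff00, 7)); // false

// Math.log2 only gives you the position of the *highest* set bit,
// via Math.floor(Math.log2(value)), which is a different question.
```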
Largely agree, but after having spent the last year or so getting up to speed with modern frontend dev, I’m cranking JS syntax alone to like a 0.7 on the CS scale. Half of the damn language is build-time AST transformations + runtime polyfill. Learning Babel is practically a compilers course.
I’m not sure I understand you correctly, but let me reiterate something.
Something being CS or not does not make it harder or more deserving of merit. There are things that require different levels of intelligence in CS; there are plenty of easy things.
And there are a lot of very difficult things in life which are not CS. Beating the world chess champion would make me much prouder than passing my first subjects in CS! As a less far-fetched example, there is plenty of software which is an engineering feat but really not a CS feat. (And vice versa, there’s a ton of “CS code” around which is rather simple in engineering terms.)
I’m not saying that your JS code was not CS-y; I would have to see it to give my opinion. But doing something hard with a computer does not mean you are doing CS (though it can!). And likewise, figuring out a complex CS proof frequently does not require any coding at all (but it can!).
I feel that if the teacher were instead explaining the Nyquist-Shannon sampling theorem, the joke would work much better and be somewhat more balanced.
While analyzing algorithmic complexity can be useful, modern machines are quite unintuitive, with register renaming and caches, to the point that trying rough solutions and benchmarking often works better.
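In that spirit, a throwaway micro-benchmark is often cheaper than reasoning from first principles. A rough sketch in Node (the `timeIt` helper and the workload are made up for illustration; absolute numbers will vary by machine, and a serious benchmark would also handle JIT warm-up):

```javascript
// Time a function over n iterations and report milliseconds.
const timeIt = (fn, n = 1e5) => {
  const t0 = process.hrtime.bigint();
  for (let i = 0; i < n; i++) fn(i);
  return Number(process.hrtime.bigint() - t0) / 1e6; // ms
};

// An O(1) hash lookup vs. a linear scan over a tiny array: on small
// inputs, the cache-friendly scan can be competitive despite worse
// asymptotic complexity. Measure rather than guess.
const table = new Map([[0, "a"], [1, "b"], [2, "c"], [3, "d"]]);
const pairs = [[0, "a"], [1, "b"], [2, "c"], [3, "d"]];

console.log("map lookup:", timeIt((i) => table.get(i & 3)), "ms");
console.log("linear scan:", timeIt((i) => pairs.find(([k]) => k === (i & 3))?.[1]), "ms");
```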
Nice talk! I worked through Nielsen and Chuang a while back, and was pleasantly surprised to see that Nielsen was one of the authors of quantum.country. (I skimmed it, and it looks like the site is just the first chapter of N+C.)
A couple of questions: you said that current QCs can go up to about 100 qubits. Are those physical qubits or error-corrected qubits (built from many physical qubits)? And are they general purpose (can you apply any gate between any pair of qubits, for example), or are they limited (I’ve seen systems where you can only use a gate between qubit 0 and qubit 1, and between qubit 1 and 2, but there is no gate between qubit 0 and 2)?
Finally, since the N+C book came out 20 years ago, my knowledge of QC is probably dated. Since you work in the space, what’s new? Anything really cool beyond Shor and Grover?
Thanks!
The qubit counts I mention throughout the talk are physical qubits. Currently, we as an industry are at the level of a handful of error-corrected qubits, at most. There isn’t a direct mapping of X physical == Y logical, because this depends on the choice and implementation of error correction, and it’s an area of active research. Roughly, we think that ~1000 physical qubits would yield 5-10 logical qubits. Roadmaps of the majority of hardware manufacturers target this amount in the next few years. Some claim huge jumps before 2030.
In typical superconducting QCs qubit connectivity is a big issue, and it’s fundamentally different from e.g. ion trap QCs, where almost arbitrary many-to-many connectivity is possible. (The advantage of superconducting over ion traps is speed: the former is about 1000 times faster than the latter.) A connection between two qubits requires a physical thing (in our case, a so-called tunable coupler). We can fit fewer than 10 (usually 4) tunable couplers to 1 qubit, which is why most chips are arranged as a lattice like this:
So yeah, you can’t just have 2 arbitrary qubits connected. In practice, when you’re writing an algorithm in the form of a quantum circuit, you treat any 2 qubits as connected, because in the pre-processing and compilation our software adds swap operations. In the picture above you can apply a 2-qubit gate to qubits Q1 and Q3, and the preprocessor would e.g. swap the values of Q1 and Q2, apply the operation between Q2 and Q3, then swap Q2 and Q1 back. This is similar to what happens in CPU memory. Of course, this adds overhead, reduces the quality of results, and increases the overall time for execution, which is very limited (nanoseconds; though in some circumstances we’ve reached a millisecond).
If your circuit needs an operation between Q1 and Q9, then the preprocessor has to make multiple swaps, iteratively moving the state in a chain of qubits back and forth. In practice, another preprocessing step (routing) would determine the best initial mapping of your qubits and actual physical qubits, so that if your circuit requires lots of interactions between Q1 and Q9, then your Q1 will more likely be mapped to physical Q8, for example. Routing and optimization is an NP-hard problem.
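The swap insertion described above can be sketched as a toy routing function for a linear chain Q0-Q1-...-Qn (purely illustrative; this is my own simplification, not how IQM’s or anyone’s compiler is actually implemented, and it ignores the routing/initial-mapping step):

```javascript
// For a 2-qubit gate between qubits a and b on a linear chain, insert
// SWAPs to move a's state next to b, apply the gate, then swap back.
function routeGate(a, b) {
  const ops = [];
  let pos = Math.min(a, b);
  const target = Math.max(a, b);
  const swapsIn = [];
  while (target - pos > 1) {
    swapsIn.push(["SWAP", pos, pos + 1]); // move the state one step closer
    pos += 1;
  }
  ops.push(...swapsIn);
  ops.push(["GATE", pos, target]); // now the two states are adjacent
  ops.push(...swapsIn.slice().reverse()); // undo the moves
  return ops;
}

// The Q1/Q3 example from the comment: swap Q1<->Q2, gate on Q2,Q3, swap back.
console.log(routeGate(1, 3));
// [ ["SWAP",1,2], ["GATE",2,3], ["SWAP",1,2] ]
```

A gate between Q1 and Q9 would cost 7 swaps in each direction, which is why the routing step tries to map frequently interacting logical qubits onto nearby physical ones.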
At IQM, where I work, we’re exploring a different architecture where qubits are connected via a so-called computational resonator. [Some info here](https://www.meetiqm.com/newsroom/press-releases/iqm-to-deliver-czech-republic-first-quantum-computer-with-unique-star-topology). You can think of it as a bus which allows all-to-all connectivity. It still requires the QC to move the state, but it’s always at most 2 moves to apply a 2-qubit gate between any two qubits.
Some cool research is happening in error-correction algorithms. Check out qLDPC and other approaches. On the applications side, there are exciting things in various optimization and simulation problems, like battery chemistry simulations. Overall, I think the best and most promising applications always boil down to simulating quantum systems.
Thanks for the long response!
The computational resonator looks really cool. It seems like a good solution to the connectivity problem. I’m having trouble finding more info other than that one press release, but I’m imagining the topology is probably the 24 qubits arranged in a 24-point star, with 2 qubits in the center that have connectivity to each of the other ones. Then, when you want interaction between any two, you swap into the center, do the gate, then swap back. In a computing analogy, the center is like the registers/ALU and the other bits are the RAM.
Am I close? If so, do you think it will scale to larger numbers of qubits? I imagine there are physical limits to how many qubits can connect to the “center”. I will check those links out, thanks again.
Currently, the only publicly available chip with a computational resonator has 6 qubits (IQM Star 6), and it can be accessed via cloud. Interestingly, you can disable some restrictions and access higher level states of the resonator.
I can’t share much about the current developments, but you are close in the sense that there are various configurations to achieve a balance between fabrication, connectivity, and fidelity. One can imagine building small stars and connecting them to other stars with longer-range couplers. There are still some issues due to the fact that it’s a 2D lattice in the end, while for some error correction techniques the ideal topology is actually a torus; but I don’t think many companies can build such complex 3D chips today.
I think there’s definitely potential. Computational resonators can be relatively long, and that’s the main factor. Connecting many qubits to one is not, AFAIK, the biggest issue, since it’s basically the same coupling as between any 2 qubits in a normal square-lattice topology.
BTW, I made a mistake in my previous comment, I said “…increases the overall time for execution, which is very limited (nanoseconds…)” — I meant to say “microseconds”. Nanoseconds is the scale of execution of a single operation (gate). Currently, we can execute hundreds and hundreds of gates reliably.
A new feature of Google’s NotebookLM, released in Sep. 2024, can convert any document to a podcast.
AFAIK, it generates a conversational audio file ABOUT the data you provided. It is not simply reading the original text, which is what I want. Also, it does not generate an RSS feed with the podcasts.
My own: https://minifeed.net/ Always wanted a lobste.rs/HN-style feed of links rather than a traditional RSS reader. I ended up implementing kind of both, but in most cases I prefer to go to the original page on the author’s website anyway.