I ask the author to refrain from using HN clickbait titles for articles that nitpick nightly APIs. I also ask them and others to stop quoting Donald Knuth; “premature optimisation is the root of all evil” is surely passé at this point. Otherwise this is a neat critique. One of my favorite parts about Rust is that we have the ability to nitpick the unsafe bits and make ergonomic APIs that you otherwise couldn’t create in C, C++, Zig, etc.
They didn’t even quote the full thing when they said “but there is more”. Here’s the full quote, with the part they omitted boldfaced.
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
– Donald Knuth, “Structured Programming with go to Statements”, ACM Computing Surveys, Vol. 6, No. 4, Dec. 1974, p. 268
This also doesn’t pass the guidelines on clickbaity titles. I’ve suggested a title change to “Issues with BorrowedBuf in Rust”, and I suggest you do the same.
imo, nothing implies the author is unaware of them. This is a contribution to the conversation of solving these problems, like those are. They explore similar ideas but the details and some motivations are fairly different.
We’ve implemented pipe syntax in GoogleSQL[19], the SQL dialect and implementation shared across all SQL systems at Google including F1[28], BigQuery[2], Spanner[19] and Procella[18], and the open-source release, ZetaSQL[12]
Besides that, if I understood it correctly, it supports compiling against skeletons of standard system header files (the actual struct layouts, for example, can be missing, with only the, say, POSIX-mandated names present). Software pre-compiled that way can then be deployed on the actual OS, with its architecture and the implementation details of its shipped system header files. This would allow software publishers to distribute binaries that can be installed on multiple OS vendors’ systems.
I have loved Kotlin since ~2015 and can’t imagine actually using standalone Java nowadays. Null safety is such a big deal, and Java is still lagging behind. It’s incredible when a language can make its sibling langs easier to use, like with Kotlin inferring nullability of (properly annotated) Java members.
I debated putting anything about coroutines on here. In all honesty, in my 4 years of Kotlin use, I’ve rarely used them. […] When I do need machine-local async flows, I find that I trust the standard JVM thread pools or executors more. To be fair, this has nothing to do with coroutines and everything to do with our use of Spring Boot and Spring Security at Masset.
Kotlin’s coroutines are pretty cool. I made a prototype of a job system where each job awaited its sub-jobs, and each job is a coroutine that can be paused and resumed. Modeling this with coroutines was an interesting approach, but a thread pool is definitely fine here.
IntelliJ+Gradle is still “required” though, and this tooling lock-in will prevent adoption. And damn is Gradle frustrating sometimes.
I looooove coroutines, but the article specifically points out that this is written from the backend perspective. For server-side code I think coroutines are mostly beneficial as A) a workaround for not having OS threads on your runtime or B) state machines that use TCO, and thus not very relevant on the JVM where you do have OS threads but not TCO. Plus IIRC Kotlin’s coroutines are stackless, right? So they’re a little nerfed.
They’re stackless, but the compiler generates a continuation object to store your local variables in, so at resumption it can fill those values back onto the new stack it creates. It still has the coloring limitation, so suspend functions can only be called from other suspend functions, which (iiuc) stackful coroutines like Lua’s don’t have.
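Rust’s async functions have the same stackless, colored shape, which makes for a compact illustration: locals that live across an `.await` are stored in a compiler-generated state machine, and an async fn can only be awaited from another async context. A minimal sketch; the busy-poll `block_on` below is a toy executor, not production code:

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Toy busy-poll executor: just enough machinery to drive one future.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(|_| raw(), |_| {}, |_| {}, |_| {});
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// `inner` is "colored": it can only be awaited from another async fn.
async fn inner(x: i32) -> i32 {
    x * 2
}

async fn outer() -> i32 {
    let local = 21; // lives across `.await`, so it is stored in the
    inner(local).await // compiler-generated state machine, not on the OS stack
}

fn main() {
    println!("{}", block_on(outer())); // prints 42
}
```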
This is IMO a) the best reason to deliver a high quality project and b) the biggest pitfall for a project’s success.
I had the privilege to spend a year working on a passion project, failing to make it work at first. One thing I learned: perhaps the biggest contributor to successful projects is ruthless pragmatism, putting personal pride in second place.
The very short story:
I wanted to create the best possible solution for the problem that in music shows and parties, visuals were always either fully pre-rendered or never in sync with the music.
With a bunch of experience in realtime graphics programming and building music software, and newly added ways to connect to Ableton Live, I knew I could make it work. I built prototypes, toured with artists, proved it worked beautifully. Smoothly micro-adjusting movie-to-transport alignment to absorb video streaming delays, a recursive routing-and-mixing graphics pipeline, beat-based envelopes: so much good stuff was in there. Everyone I showed it to said they wanted to use it once it was done. However, there was so much that it didn’t yet do: non-realtime rendering, cross-platform support, flexible licensing, etc. I wanted to be proud before I considered it a product, and thus never got to call it done. After a year of mostly fulltime work on it, and then many parttime hours in the following years, it was still not done. Life happened, I had kids, and it was still not done. I wasn’t selling the product, so I couldn’t sustain development. I got a steady job, and that would have been the end of it. Wanting to be proud before calling it done would ultimately have made the project fail.
If it weren’t for the other talented contributors the project attracted, who got more space as my involvement dwindled, it would have ended there. They were much more pragmatic: they used SaaS for all code that was not core functionality, accepted many gaps (including, initially, all three mentioned above), built a website, and… we just started selling.
As soon as a 1.0 was out, adoption began and income was being generated. Now, several years later, user numbers and income are steadily growing and the initial gaps have been filled; the feature set is still not complete, but the project is alive and well. I wouldn’t have started the project if it weren’t for pride, but it wouldn’t have survived if pride had stayed the main measure of success.
I appreciate you taking the time to share that story and the advice. Ruthless pragmatism is a good term and I’m going to keep that in mind this year. :)
I had the privilege to spend a year working on a passion project, failing to make it work at first. One thing I learned: perhaps the biggest contributor to successful projects is ruthless pragmatism and putting personal pride in second place.
I am hoping to ship a passion project of my own this year and will be keeping this in my thoughts. It is also music related, with some ideas for integrating with FL Studio (probably using FL’s SDK(?) and JUCE’s new WebViews).
Congratulations on getting to a point where you can make this move.
One question:
Seamless C++ interop
Would it be feasible to do seamless Rust interop as well, that is, some way of easily using Rust libraries from Jank? If your bet is that all of the interesting libraries are C or C++ libraries, or at least that they’ll have C or C++ APIs, then I understand.
Seamless Rust interop is going to be tough. Tougher than with C++, mainly due to the tooling, at this point. I’ve been working with some LLVM folks who’ve been building tooling specifically for other langs to interact with C++ in a JIT fashion. Until we have something like that for Rust, there’s not much of an opening.
There are people thinking about a stable Rust-specific ABI, which would do a lot for interoperability with Rust if/when it gets implemented. Unfortunately I can’t seem to find the specific proposal I remember seeing for a named alternative to repr(C) that would fulfill this function.
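For reference, the tool Rust does have today is `repr(C)`, which pins a type’s layout to the platform C ABI; Rust’s default layout makes no ordering promise at all. A sketch of the difference it makes (sizes assume a typical platform where `u32` is 4-byte aligned):

```rust
// repr(C) pins the layout to the platform's C ABI; a Rust-native stable
// ABI would also have to cover enums with payloads, niche optimizations,
// trait objects, unwinding, and so on.
#[repr(C)]
struct CPoint {
    tag: u8,  // offset 0, then 3 bytes of padding
    val: u32, // offset 4 (u32 is 4-byte aligned on mainstream targets)
}

fn main() {
    assert_eq!(std::mem::size_of::<CPoint>(), 8);
    println!("ok");
}
```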
There’s far more than just the ABI for seamless interop.
We built some proof-of-concept prototypes for C++ interop in Verona. The abstract machine is that C++ code is confined to a region (all C++ code operates on objects in a single region; C++ has no notion of regions and thinks all objects live in a single global memory). We used clang to build an AST from a set of C++ modules and could export types and instantiate templates, so we would be able to surface C++ templates as Verona generics where the mapping made sense. We could then use clang libraries to generate wrapper functions (every argument that isn’t a primitive is passed as a pointer, and in the Verona world it’s just an opaque blob of stuff that we have a pointer to) that we could call with a simple ABI; these would then be inlined during code generation, so clang handled all of our C++ ABI issues. Accessing fields in C++ objects worked the same way: generate an accessor function; clang will error if you try to access a private field, and we forward the error to the user, rewriting source locations; LLVM inlines the accessor function and generates a single load or store plus whatever address calculation is needed.
All of this was simple because the C++ abstract machine is very permissive. You might want to handle standard-library unique and shared pointers as special things, but that’s about it. This is also why things like Sol3 make it easy to create C++ / Lua interop: Lua’s GC just takes ownership of a copy of a C++ object. If that object is a smart pointer, deallocating it when GC runs may deallocate the underlying C++ object.
In contrast, there’s a lot of stuff in Lua that wouldn’t be valid Rust. Lua lets you take a reference to an object that’s reachable from another object. If everything is shared / weak pointers, that’s fine. In Rust, that is not permitted but there’s no simple way of surfacing the borrow checker into Lua. With a lot of work, you could make Lua do something dynamic to check borrows, but that’s basically reimplementing the Rust borrow checker as a dynamic graph-walking thing, and that’s going to be painful.
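The “dynamic borrow check” described above already exists inside Rust as `RefCell`, which enforces the shared-XOR-mutable rule at run time instead of compile time; a Lua bridge would need something of roughly this shape:

```rust
use std::cell::RefCell;

fn main() {
    // RefCell enforces Rust's aliasing rule (shared XOR mutable) at run
    // time, which is roughly what a Lua bridge would be forced to do.
    let cell = RefCell::new(vec![1, 2, 3]);

    let shared = cell.borrow(); // an outstanding shared borrow...
    assert!(cell.try_borrow_mut().is_err()); // ...blocks mutable access
    drop(shared);

    assert!(cell.try_borrow_mut().is_ok()); // released: mutation is fine
    println!("ok");
}
```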
My intuition is that while there’s no impedance mismatch between C++’s and Rust’s data models, you have to duplicate all of the interop work that has to do with the mapping between the languages, not the data. For C++ this involves parsing headers, not object files. Rust can produce C headers and a C-compatible FFI, but consuming that directly loses the Rust data model, for instance. You could reconstruct it on the other end if you know exactly how Rust maps its types to C’s, but round-tripping like that is bound to be error-prone. Lastly, Rust’s runtime and C’s runtime are actually different; Rust’s panic unwinding is a footgun you have to manually avoid when writing FFIs.
So, seamless interop? I don’t think so. You have to redo most of the interop work and be able to consume the objects Rustc produces.
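The unwinding footgun mentioned above can be made concrete: a panic must not unwind out of an `extern "C"` function (modern rustc aborts the process if it does), so exported functions typically catch panics at the boundary and translate them into error codes. A minimal sketch; `checked_div` and the `-1` sentinel are made-up examples:

```rust
use std::panic;

// Catch panics at the FFI boundary and translate them into an error code,
// so they never unwind across the extern "C" edge.
extern "C" fn checked_div(a: i32, b: i32) -> i32 {
    panic::catch_unwind(|| a / b).unwrap_or(-1)
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // silence the panic message
    assert_eq!(checked_div(10, 2), 5);
    assert_eq!(checked_div(1, 0), -1); // divide-by-zero panic, caught
    println!("ok");
}
```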
I’d like to hear people’s experience with and ideas for using Rust from other languages. I agree with jeaye that at this point it’s a lot harder to do than C++ or C.
I took a cursory look at using it from Common Lisp and didn’t get very far. My ground rules were no going through C, and no modification of Rust code (i.e. no adding #[pyfunction] like PyO3).
Basically, I want cl-autowrap, but instead of (autowrap:include “something.h”), I want (rffi:use “some.crate”)
I’d have something like “import ”, and then have the functions and data types available in a package named after the crate, like (crate-name:functionname …). For bonus points, I’d like to do it in such a way that users of my Lisp library only need to install the Rust binary, and not the whole Rust build system (in C and Debian terms, they need libfoo, but not libfoo-dev).
Some bike lane advocates claim that a city must install bike lanes to generate the demand that justifies them.
It seems Servo is going to be something similar: by manifesting an engine, they hope to generate the demand that keeps this project going.
The most celebrated aspect of Servo, in my opinion, is that it uses Rust and somehow that’s a Good Thing (TM).
It feels like someone celebrating that their house’s pipes are great because they used Knipex pliers to put them together.
Using Rust is alluring for creating developer engagement (and it shows), but it’s not what would drive the adoption of Servo - being useful is what will drive adoption. Rust is at best neutral in this regard, and waving it around as if it were some kind of strong benefit feels odd to me. In any case, better Rust than historical C++, for sure.
Are you saying that you don’t understand the bike lane argument? The classic explanation goes like this… say you have a river, and pedestrians aren’t crossing it, because the only way to cross is to swim the river, so like 2 weirdos do this each year. Naysayers argue “there’s no point in building a bridge across this river, because only 2 people cross it each year!” But surprise, if you let people walk across the river instead of swimming, you’ll get a lot more foot traffic on the other side.
If we bring this analogy to the browser, it seems possible that the situation is similar. Servo being usable would result in it being used. If Igalia can actually manifest a working browser engine, and it’s not too much of a pain to integrate into other programs (such as a full web browser), then it’ll get used a lot more than a browser engine that doesn’t render web pages properly (or can’t be easily embedded). It probably helps if you partner with someone who definitely wants to use your engine, such as the partnership they have with Tauri, so you can be sure at least one group will use it and demand isn’t completely imaginary.
Yeah, bike lanes are the worst analogy to make, because it’s obvious that they have utility and that there is demand.
I’ve lived in some of the most crowded cities (SF and NYC), and you absolutely need bike lanes.
For one, there are A LOT more people than there were 20 or 30 years ago. And two, the status quo 20 or 30 years ago wasn’t sufficient either.
A parked car is objectively a bad economic use of space compared to a bike lane.
Also, two new browsers are moving to Swift - the one by the Browser Company, and Ladybird.
C++ isn’t good enough for a highly concurrent program that’s more than 10M lines, with hundreds of engineers working on it.
And I say this as someone who’s worked with the Chrome team, sat with them, etc. The quality of the code varies immensely, as you would expect for anything with hundreds of engineers in it at a given time.
(not saying Swift is necessarily the best, I don’t know that much about it)
I should add that my preference is not to have these huge browsers that require hundreds of engineers to build, but if you take that as a constraint, then there are downsides to doing it in C++ …
A small team can definitely write good C++. Large teams have problems in any language, but with C++ the problems can have pretty bad consequences that are not easy to work your way out of … it’s sort of a “one-way door”
I personally would be thrilled to have a browser that wasn’t a pile of CVEs in a trenchcoat. Having one that isn’t developed by an advertising company would be great too!
Me too. And having an embeddable browser engine that can be used in browsers that cater to specific accessibility needs not served by Firefox would be great. The only realistic options currently, as far as I can tell, are Chromium and WebKit, both by advertising companies.
It feels like someone celebrating that their house’s pipes are great because they used Knipex pliers to put them together.
If in 99% of houses flushing a turd of a particular size gave someone else remote code execution, you’d be pretty proud if your plumbing wasn’t vulnerable.
Rust/Servo did succeed in replacing selector matching and GPU rendering in Gecko where previous attempts in C++ have failed.
I don’t know if there’s demand for Servo specifically (web compat is hard and there’s more to engines than safety), but all major engines have issues with memory safety of C++, and are looking for a solution. Blink and Gecko are replacing parts with Rust. Apple is working on making a lower-level Swift to replace their C++.
I don’t know if there’s demand for Servo specifically (web compat is hard and there’s more to engines than safety),
I think it would be quite nice to have an alternative to Blink and WebKit for embedded / Electron-like use cases. Gecko isn’t embeddable, and even using it as an application framework isn’t exactly a well supported use anymore.
The Servo project has been started to replace crufty non-thread-safe C++ in Gecko, and Rust has been created specifically for Servo.
I hope you don’t mind, but a minor grammar nit: “was”, rather than “has been” in this case in English.
Yes, the difference in meaning is that “was” describes an event in the past, and “has been” describes something that was true for some period in the past.
I love having more diversity in the browser engine market - and maybe they can show what a CVE-less browser looks like. If Rust makes it easy for them, I’m not gonna complain. (Mozilla already showed that they got speedups from using Rust, which let them multithread worry-free.)
I and I imagine most English speakers of my acquaintance would assume both of these mean the same thing. “Smokeless” and “smoke free” both mean “without smoke” (though in different senses). Ditto “childless” and “child free” (with different valences). Etc.
“X-less” and “X-free” generally mean the same thing; without any X. There might be some exceptions but none are coming to mind, so your response looked a bit like pulling a weird technicality rather than a misunderstanding. Sorry for any rudeness!
A web engine is by nature a CVE-magnet, even if you don’t have any memory issue. But being free of most memory issues allows you to concentrate on everything else, and Rust has other correctness tricks up its sleeve that have nothing to do with memory or data races.
Performance-wise, besides Mozilla’s well-known “we failed to parallelize this in C++ but succeeded in Rust” experience report, there’s Google’s “we isolate C++ libs in their own process but use Rust libs directly” to consider. Servo isn’t full-featured, fast, or battle-tested yet, but I’m hopeful.
Does Servo being a Rust project make it easier to embed in other applications?
I don’t speak from a whole lot of experience here, but my perception is C/C++ dependencies can be a pain to set up, and one of the benefits of the Rust ecosystem is the ease of installing dependencies. Need to render an HTML document in your app? Maybe someday the answer will be to import servo-whatever, and you won’t have to spend mental effort on stuff like (I’m making this up) installing the correct versions of libjpeg and cairo.
I’m not sure about Rust per se, but the post and the additional content in linked posts highlight how their current development goals and partnership with Tauri have led them to make changes that make Servo specifically more ergonomic to embed in other applications.
Rust doesn’t make embedding easier (unless the host is in Rust), at best it makes compiling more straightforward. What matters for embedding is the architecture and intent of the project.
Servo is designed to be embeddable, and judging by how fast Tauri was able to integrate it, they do a good job. None of the other engines fare well in that regard: Gecko is bound to its browser, Blink is marginally better but has problematic leadership, the various webkits are lagging behind development, Ladybird is (I’m guessing) not interested in the outside world, Flow is proprietary… Servo is an enticing project, whether you care about Rust or not.
I think in theory Rust could be easier to embed because the contracts at interfaces are more explicit.
With C or C++ it can be unclear who is responsible for freeing an allocation or what preconditions need to hold about the environment and objects crossing boundaries. I guess this is true for any library though, not just embeddable things.
I suspect it depends a lot on the API. Most other languages allow more aliasing than Rust permits, so you’re likely to end up with a lot of things where the Rust type system prevents certain API misuse but a foreign API doesn’t. If I pass a borrowed Rust pointer to Lua, for example, ensuring that it isn’t captured is not trivial (I can probably wrap it in an object that is explicitly nulled at the end of the call, but then I have a thing that suddenly goes away from Lua at surprising times). Owning references are easier, but then borrows from them may be difficult.
You can almost certainly design an API that uses a narrow set of Rust that’s easy to bridge, but it requires some careful thought. From the sounds of it, the Igalia folks are doing that thinking, which is probably more important than the underlying language.
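The “explicitly nulled at the end of the call” idea from above can be sketched entirely inside Rust; everything here (`Handle`, `with_handle`) is hypothetical, standing in for the host-language side of the bridge. The host may stash clones of the handle, but they are all revoked when the borrow’s scope ends, turning a compile-time rule into a runtime check:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A revocable view of a borrowed value: the shared slot holds the pointer
// only for the duration of the call.
#[derive(Clone)]
struct Handle<T>(Rc<RefCell<Option<*const T>>>);

impl<T> Handle<T> {
    fn read<R>(&self, f: impl FnOnce(&T) -> R) -> Result<R, &'static str> {
        match *self.0.borrow() {
            // Safe to dereference: the slot is cleared before the borrow ends.
            Some(p) => Ok(f(unsafe { &*p })),
            None => Err("handle revoked"),
        }
    }
}

fn with_handle<T, R>(value: &T, f: impl FnOnce(&Handle<T>) -> R) -> R {
    let slot = Rc::new(RefCell::new(Some(value as *const T)));
    let handle = Handle(Rc::clone(&slot));
    let out = f(&handle); // the "host" runs, possibly stashing clones...
    *slot.borrow_mut() = None; // ...which all die here, with the borrow
    out
}

fn main() {
    let escaped: RefCell<Option<Handle<i32>>> = RefCell::new(None);
    let seen = with_handle(&42, |h| {
        *escaped.borrow_mut() = Some(h.clone()); // host stashes the handle
        h.read(|v| *v).unwrap() // fine during the call
    });
    assert_eq!(seen, 42);
    // After the call the handle is revoked: an error, not a dangling read.
    assert!(escaped.borrow().as_ref().unwrap().read(|v| *v).is_err());
    println!("ok");
}
```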
It’s always been a feature because it influences what runtime you need to depend on. This comes up all the time in managed languages, whether you need .NET installed, or JDK, or Python, etc.
It is a cost rather than a feature. Software has some benefits (functions, features, qualities…) and brings some costs (price, time to learn, dependencies to install, complexity to deal with…). Users start with benefits (is this software useful for me?) and then, only if significant benefits are present, they evaluate the costs (are they reasonable? lower than the benefits?).
(of course, there are also users or customers that do not think this rational way)
P.S. I use the word “feature” for something of positive value, a reason to buy or use. If you define it as a general characteristic that could be either positive or negative, then I agree that the programming language is such a characteristic.
P.P.S. Similar discussion: is price a selling point? If you offer just “cheap something”, will people buy it? Probably not. But if you offer a useful product for a reasonable price, or cheaper than your competitors, then yes. However, the main selling point is the usefulness, not the cheapness itself.
I’m actually totally gonna do higher-kinded types in Garnet, which is slightly insane but I think will be worth it.
tbh I saw this coming miles away (and thought I left a comment indicating such but I am failing to find it now). I hope the design comes out well, and I think Garnet will be better off with HKTs being added this “early”.
I’ve also read the abstract for 1ML like a dozen times… I should really sit down and read the code. Though I might wait for your Rust impl because, for me, ‘actually [reading] Rust is so much nicer than OCaml’. In my head this is somewhat related to Zig’s modules “just being structs”? I’m not comfortable enough with ML modules to understand the distinction.
Turns out you can write a generic iter() function that returns any type of borrow, by being able to write HKT generics over different kinds. So you can replace fn get<'a>(&'a self) -> &'a T and fn get_mut<'a>(&'a mut self) -> &'a mut T with a single fn get<'a, Borrow>(Borrow<'a, Self>) -> Borrow<'a, T>. You can write this right now in Zig, if not exactly nicely, so I’m reasonably sure it’ll work. Will it work well? Let’s find out.
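Stable Rust can’t express that HKT signature directly, but generic associated types get close by making the “kind of borrow” a type-level parameter. A rough sketch, with all names (`Mode`, `GetIn`, `Slot`) made up for illustration:

```rust
// Encode the borrow kind as a type implementing `Mode`, whose GAT plays
// the role of the HKT `Borrow<'a, T>` from the comment.
trait Mode {
    type Ref<'a, T: 'a + ?Sized>;
}
struct Shared;
struct Uniq;
impl Mode for Shared {
    type Ref<'a, T: 'a + ?Sized> = &'a T;
}
impl Mode for Uniq {
    type Ref<'a, T: 'a + ?Sized> = &'a mut T;
}

struct Slot<T> {
    value: T,
}

// One trait whose `get` is generic in the borrow kind M.
trait GetIn<M: Mode> {
    type Item;
    fn get<'a>(this: M::Ref<'a, Self>) -> M::Ref<'a, Self::Item>
    where
        Self: 'a,
        Self::Item: 'a;
}

impl<T> GetIn<Shared> for Slot<T> {
    type Item = T;
    fn get<'a>(this: &'a Slot<T>) -> &'a T
    where
        Slot<T>: 'a,
        T: 'a,
    {
        &this.value
    }
}
impl<T> GetIn<Uniq> for Slot<T> {
    type Item = T;
    fn get<'a>(this: &'a mut Slot<T>) -> &'a mut T
    where
        Slot<T>: 'a,
        T: 'a,
    {
        &mut this.value
    }
}

fn main() {
    let mut s = Slot { value: 1 };
    *<Slot<i32> as GetIn<Uniq>>::get(&mut s) += 41; // mutable projection
    assert_eq!(*<Slot<i32> as GetIn<Shared>>::get(&s), 42); // shared one
    println!("{}", s.value); // prints 42
}
```

The impls still come in pairs, so this is weaker than true HKTs (which would let a single generic body serve both), but callers can already be written once, generic over `M`.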
I wrote a similar example in Zig of Pony’s deny capabilities embedded as a comptime var in a Ref structure. I wonder what the limits are here - what is the extent of type checking that can be implemented in comptime Zig? Add some system for adding syntax sugar and you’d get a Racket-like system for systems languages.
This is a list of things that are not trivial with a dismissive paragraph for each. I don’t think there’s much insight to be had here, even if it’s not wrong.
agree, plenty of discussions to be had on these topics but they’re not present in the article at all. the linked post on cross-platform is similarly hollow.
o3 high-compute costs not available as pricing and feature availability is still TBD. The amount of compute was roughly 172x the low-compute configuration.
It probably means that a task is run on multiple GPUs at once - not just a couple, but what looks like a few racks’ worth of top-tier GPUs. I’m impressed they managed to achieve this sort of distributed computation. Transformer-based models are not very conducive to network distribution, which they most likely needed here. I don’t know how else they could keep enough GPUs busy for only 1.3 minutes in a way that would cost $2k.
Mixture of Experts models can be partially trained and executed in parallel, where each “expert” (effectively a sliver of the FFN layers) can be on a different GPU. GPT-4 is rumored to use this architecture, and they might have applied those learnings to the o series.
I’m impressed they managed to achieve this sort of distributed computation.
It’s interesting to see how scaling is happening pretty much exclusively on inference at the moment. That will probably provide a lot more variation; it’ll be interesting to see when it runs out of steam.
They mention “samples”, and that the low-compute setup used 6 of them and the high-compute used 1024.
So one “brute-forceish but not quite” way to get this parallelism given what we know about O1 (which may or may not also be true for O3): Run several “chains of thought” in parallel, and then when time’s up (or all of them are finished) pick whatever solution most of them got. Or use “tree of thought” and have them branch off at decision points, so they can share/reuse earlier parts of the token stream.
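The “pick whatever solution most of them got” step is just majority voting over independent samples; sketched here with threads standing in for parallel model calls (everything below is illustrative, not how o1/o3 actually work):

```rust
use std::collections::HashMap;
use std::thread;

// Pick the most common answer among independent samples (self-consistency).
fn majority_vote(samples: Vec<String>) -> Option<String> {
    let mut counts: HashMap<String, usize> = HashMap::new();
    for s in samples {
        *counts.entry(s).or_insert(0) += 1;
    }
    counts.into_iter().max_by_key(|(_, c)| *c).map(|(s, _)| s)
}

fn main() {
    // Threads stand in for parallel chains of thought; each returns one
    // candidate answer.
    let handles: Vec<_> = (0..5)
        .map(|i| {
            thread::spawn(move || {
                if i == 2 { "b".to_string() } else { "a".to_string() }
            })
        })
        .collect();
    let samples: Vec<String> =
        handles.into_iter().map(|h| h.join().unwrap()).collect();
    println!("{}", majority_vote(samples).unwrap()); // prints "a"
}
```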
I’d postpone the doomsaying at least 6 months. I don’t have any insight into who owns and runs ARC, but it’s a small enough world, and there’s certainly a surfeit of economic incentives to suspect that ARC and OpenAI are in cahoots in an attempt to boost OpenAI’s standing. In other words, assuming this is a rigged benchmark is applying Occam’s razor.
The ARC test itself is about 5 years old, and has mostly been notable for how very hard it’s been to get deep learning systems to do well on it. It was made by Francois Chollet, who has been highly skeptical of the LLM approach and has been a very vocal OpenAI critic right up until this. He worked at Google until recently (I don’t know where he went to after he quit Google, though).
It should be noted that he himself points out that beating ARC doesn’t mean that a system is an AGI - only that a system that can’t beat ARC definitely isn’t.
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way, as you’ll know if you have ever seen a non-technical older person trying to navigate the little pop-out hamburger menus and so forth. Or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
That’s not completely true; one beautiful thing about JSX is that any JSX HTML node is a value, so you can use the whole language at your disposal to create that HTML. Most backend server frameworks use templating instead. For most cases the two are equivalent, but sometimes being able to put HTML values in lists and dictionaries, with the full power of the language, does come in handy.
Well, that’s exactly what your OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by react or even by frontend libraries.
In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Most backend server frameworks use templating instead.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread “many” as “most”, as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate our whole Ruby app to Scala so we can get Scalatags. JSX was the first values-based HTML builder to reach mainstream use; you and the sibling comment cite Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX both on the front end and the back end, which makes it useful for generating HTML. Your post, its sibling, and the OP just sound slightly butthurt at JavaScript for some reason. It’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I find that arguing in bad faith and not trying to bring anything constructive to the conversation, same as the rest of the thread.
Let me go to my team and let them know that we should migrate all our Ruby app to Scala so we can get ScalaTags.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use them in yours.
Anyway, it sounds like we agree that this would be better than adopting JavaScript just because it is one of the few non-niche languages that happens to have language-oriented support for tags-as-objects like JSX.
I found that, after all, I prefer to write what is going to end up as HTML in something that looks as much like HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like the context switching it forces on my brain. HTML templates with checks as strong as possible are my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There are advantages and disadvantages to emulating the syntax of a target language in the host language. I also find JSX not too bad; however, one has to learn it first, which definitely is a lot of overhead. We just tend to forget that once we have learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this approach is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early aughts).
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
Every mainstream as well as many niche languages have libraries that build HTML as pure values in your language itself, allowing the full power of the language to be used–defining functions, using control flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
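For concreteness, here is what the HTML-as-plain-values approach looks like in Rust with a tiny hypothetical builder (no escaping or attributes, just the shape of the idea): elements are ordinary data, so they can live in Vecs, be returned from functions, and be built with normal control flow.

```rust
// A minimal sketch of "HTML nodes are plain values". A real library would
// also escape text and support attributes.
enum Node {
    Text(String),
    Elem { tag: &'static str, children: Vec<Node> },
}

fn elem(tag: &'static str, children: Vec<Node>) -> Node {
    Node::Elem { tag, children }
}

fn text(s: &str) -> Node {
    Node::Text(s.to_string())
}

fn render(n: &Node) -> String {
    match n {
        Node::Text(s) => s.clone(),
        Node::Elem { tag, children } => format!(
            "<{tag}>{}</{tag}>",
            children.iter().map(render).collect::<String>()
        ),
    }
}

fn main() {
    // Full language available: build list items with ordinary iteration.
    let items: Vec<Node> = (1..=3)
        .map(|i| elem("li", vec![text(&format!("item {i}"))]))
        .collect();
    println!("{}", render(&elem("ul", items)));
    // → <ul><li>item 1</li><li>item 2</li><li>item 3</li></ul>
}
```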
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
(I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(So personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of the word “render” is to extract, convert, deliver, submit, etc. So this use is perfectly in line with the definition and with centuries of usage irl, so I can’t complain too much really.)
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type .astro that renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that astro is a symptom of the exact same problem you are illustrating from that quote.
That framework is going to die the same way that Next.JS will: death by a thousand features.
huh couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is now the same as any other framework. The baggage I’m referring to would be the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and of course require plenty of other file types to do what .astro does (.rb, .py, .yml, etc.). The State of JS survey seems to indicate that people are sharing my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”).
I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
Well that’s one way to look at it. Another is that popular developer culture doesn’t need to be unified. There can be one segment for whom “computer science” means “what does this javascript code snippet do”; I am not a linguistic prescriptivist so this does not bother me. It doesn’t put us in danger of, like, forgetting the theory of formal languages. It also isn’t really a new sensibility; this comic has been floating around for at least 15 years now. People tend to project a hierarchy onto this where the people who care about things like formal languages and algorithms are more hardcore than the javascript bootcamp world. But this is a strange preoccupation we would be better to leave behind. If you want to get in touch with “hardcore” popular developer culture you can participate in Advent of Code! It’s a very vibrant part of the community! Or go to one of the innumerable tech conferences on very niche topics that occur daily throughout the developed world.
I wouldn’t say I’m a prescriptivist either, but the purpose of language is to communicate, and it seems very odd to me that there are people for whom “computer science” is semantically equivalent to “javascript trivia”. I think that seemingly inexplicable mismatch in meaning is what the article is remarking upon; not being judgemental about Javascript qua Javascript, but that there’s such a gulf in ontologies among people in the same field.
I have heard this phrase many times when defending a prescriptivist perspective and it is never actually about the words confusing people, it’s about upholding the venerability of some term they’ve attached to themselves.
Ehhhh strong disagree on that one! There’s a difference between “that’s not a word” and “that word choice is likely to cause more confusion than clarity”. The former is a judgement, the latter is an observation.
I have not attached the term “computer science” to myself, I truly think it’s just confusing to use that phrase to refer to minutiae of a particular programming language. I’m not saying that’s “wrong”, nor that javascript in particular is bad or anything like that, just that it is very contrary to my expectations, and so saying “computer science” in that manner communicates something very different to me than whoever is using it in that way would intend.
I would say that it’s wrong and bad to impugn my motives and claim that I actually mean the exact opposite of what I said though!
I’m confused because when someone says something is regarding “computer science” I expect to see things related to the branch of mathematics of that name. In the linked article, they mention seeing the term used in a game show: If I was told I was going to be on a game show and the category was “computer science” I would prepare by studying the aforementioned field and if I then was presented with language-specific trivia instead, I would be a bit miffed.
What proportion of an undergraduate computer science degree would you say is dedicated to the branch of mathematics and what proportion is dedicated to various language trivia?
When I was in school, the computer science courses I took were pure math, but I can’t speak to what other programs at other schools at other times are like. I don’t really understand why you’re so vexed here; I’m not trying to “project a hierarchy” or say that one thing is better than another (I taught at a javascript bootcamp!), merely that I think the point the article is making is that it feels strange to discover one’s peers use familiar words to mean very different things. I’m just trying to explain my perspective; you’re the one who is repeatedly questioning my honesty.
90% mathematics, from my experience doing such a degree in Swansea in the early 2000s and teaching at Cambridge much later. Programming is part of the course, but as an applied tool for teaching the concepts. You will almost never see an exam question where some language syntax thing is the difference between a right or wrong answer and never something where an API is the focus of a question.
Things like computer graphics required programming, because it would make no sense to explore those algorithms without an implementation that ends up with pixels on a screen (though doing some of the antialiased line-drawing things on graph paper was informative).
Dijkstra’s comment was that computer science is as much about computers as astronomy is about telescopes. You can’t do much astronomy without telescopes, and you need to understand the properties of telescopes to do astronomy well, but they’re just tools. Programming is the same for computer science. Not being able to program will harm your ability to study the subject, even at the theoretical extremes (you can prove properties of much more interesting systems if you can program in Coq, HOL4, TLA+, or whatever than if you use a pen and paper).
Even software engineering is mostly about things that are not language trivia and that’s a far more applied subject.
When I got my undergraduate degree in 1987 it was literally a B.S. Applied Mathematics (Computer Science), and when we learned language trivia it was because the course was about language trivia (Comparative Programming Languages, which was a really fun one involving very different languages like CLU, Prolog, and SNOBOL).
If kids nowadays are learning only language trivia to get a CS degree, somebody is calling it by the wrong name.
“I truly think it’s just confusing to use [the phrase “computer science” to] refer to minutiae of a particular programming language” -> “How are you confused?”
This is a bit underhanded. It’s confusing when someone says “computer science”, and then later it turns out they mean “Array.map returns an array and Array.foreach doesn’t” and not what the term has been used to refer to for a long time; algorithms, data structures, information theory, computation theory, PLT, etc. I am absolutely a linguistic descriptivist, but to claim that using a term in a way that doesn’t match the established usage is not cause for confusion is not actually a descriptivist take. It’d become less confusing once that becomes the established usage, but until we’re there, it’s still going to fail to communicate well and cause — say — confusion.
Funny you should mention Dijkstra. Like Naur, he had a hard time getting on board with the term “computer science.” Dijkstra favored the subtly but importantly different “computing science.” Naur preferred “datalogy.”
In some countries the field is called informatics, I wonder why that didn’t happen in English speaking ones (or if there’s a difference I’m not aware of).
That term is used in various places with different meaning, including in the US, where it’s a term for “information science”. I.e. studies of things like taxonomy and information archiving. In other words that’s already used for several purposes so if you want to use it for clarity you need to look elsewhere.
Informatics, just like datalogy, seemingly focuses on data (information). Of course, processing data is at the core of computing, but, IMO, this misses the point. The point is the computing process, not what the process is applied to. Computing science is great, but cybernetics is cool, and also wider: it encompasses complex systems, feedback, and interactions.
Taking Naur’s side for a moment, though, the really science-y parts of a computer science curriculum (e.g. time complexity, category theory, graph theory) are basically just mathematics applied to data/information, so perhaps “datalogy” and “informatics” aren’t that far off the mark. Everything else that’s challenging about a CS undergrad curriculum (e.g. memory management, language design, parallel computation, state management) is just advanced programming that has accumulated conventions that have hardened over the years and are often confused with theory.
Interesting because I feel like cybernetics is both a subset of computing science (namely one concerned with systems that respond and adapt to their environment) and trans-disciplinary enough to evade this categorization.
I feel like we do? It becomes especially apparent whenever one is where cars are but is not in a car themselves (pedestrian, cyclist, motorcyclist) — there is a strong sense of “drivers” and “everyone else”.
I think I would characterize that among drivers as a lack of awareness for non drivers, at least in the USA. My point is that we don’t expect a sense of shared tribal affiliation between drivers like the author seems to expect between programmers.
It’s a continuum. JS syntax is somewhat CS (I don’t know, 0.2?), but likely it’s more “engineering” (0.9?). Saying that JS syntax is more engineering than CS is NOT negative towards JS. Neither engineering nor CS is superior or lesser with respect to the other!
On the other hand, although having clearly defined terms is probably a lost battle, words are needed to communicate, and it’s definitely worthwhile to try to have stable, useful words that we can use to be able to explain and discuss things.
I think in some cases it would have been great if terms hadn’t lost their original meaning (I’m thinking devops, for example), but likely if I thought about it, I’d find terms where I would think their drift from their original meaning has been good.
…
I like the article, even if I disagree with it. I feel this generational gap with the newer crop of programmers. Likely my elders feel a similar gap with my cohort!
But we are always quick to judge that we are on a downhill slope. We’ve been so long on this downhill slope, yet we are still alive and kicking, so maybe it’s not so downhill.
The new crop of programmers is just new. They haven’t specialized yet! They come with, at most, an undergrad education that hopefully exposed them to Context-Free Grammars and Turing Machines and a moderately-complicated datastructure like a red-black tree and a basic misunderstanding of what the halting problem is (they think it means you can’t write a program to tell whether a program will halt). Or maybe they come from a bootcamp and learned none of that. The career path of a programmer goes many places. Sure, some of this cohort will stay at the JS-framework-user level forever. That’s fine, it’s a job. Others will grow and learn and become specialists in very interesting areas, pushed by the two winds of their interests and immediate job necessities.
Yes and… the new crop of programmers likely includes a ton of people who do know. And also, the “new” crop of programmers has always included such folk.
What I’m saying is, in every generation there are always some people who say the next generation will be a disaster. Hasn’t happened yet :D
A fresh out of school colleague of mine recently used Math.log2() to test whether a bit was set. They did study IPv4 and how e.g. address masks work. Not sure what to think about that.
I understand they spend most of their time coding pretty complex Vue apps, but still.
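For the record, the conventional way to test a bit is a shift and a bitwise AND. A small Rust sketch (the names are mine) contrasting that with the log2 detour:

```rust
// Idiomatic bit test: shift a mask into place and AND it.
fn bit_is_set(value: u32, bit: u32) -> bool {
    value & (1 << bit) != 0
}

fn main() {
    let mask: u32 = 0b1010;
    assert!(bit_is_set(mask, 1));
    assert!(!bit_is_set(mask, 2));
    assert!(bit_is_set(mask, 3));

    // The log2 detour only recovers a bit index when exactly one bit is
    // set (i.e. the value is a power of two); it tells you nothing useful
    // about an arbitrary mask like 0b1010, and it drags in floating point.
    assert_eq!((0b1000u32 as f64).log2(), 3.0);
}
```

Which is presumably why the anecdote stings: the log-based version only happens to work in the single-bit case.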
Largely agree, but after having spent the last year or so getting up to speed with modern frontend dev, I’m cranking JS syntax alone to like a 0.7 on the CS scale. Half of the damn language is build-time AST transformations + runtime polyfill. Learning Babel is practically a compilers course.
I’m not sure I understand you correctly, but let me reiterate something.
Something being CS or not does not make it harder or more deserving of merit. There are things that require different levels of intelligence in CS; there are plenty of easy things.
And there are a lot of very difficult things in life which are not CS. Beating the world chess champion would make me much prouder than passing my first subjects in CS! As a less far-fetched example, there is plenty of software which is an engineering feat but really is not a CS feat. (And vice versa, there’s a ton of “CS code” around which is rather simple in engineering terms.)
I’m not saying that your JS code was not CS-y; I would have to see it to give my opinion. But doing something hard with a computer does not mean you are doing CS (though it can!). And likewise, figuring out a complex CS proof frequently does not require any coding at all (but it can!).
I feel that if the teacher were instead explaining Nyquist-Shannon sampling theorem, the joke would work much better and be somewhat more balanced.
While analyzing algorithmic complexity can be useful, modern machines are quite unintuitive with register renaming and caches to the point that trying rough solutions and benchmarking often works better.
lots of things with no better home just get added to the list of over 120 compiler built-in functions,
I got a similar impression initially, but it flipped to the opposite a couple of months in. The big thing is that std in Zig is not special. There are no lang items. So, builtins are the entire interface between compiler and user code; it’s just that this interface isn’t hidden in the guts of the standard library, but is just there to be used.
In a similar vein, the fact that @import is a reverse syntax sugar is cute. Import is syntax in the sense that the argument must be a string literal, and you can’t alias the import “function”, even at comptime. It fully behaves as a custom import “path” syntax, except that there’s no concrete syntax for it and a function call is reused.
I hadn’t realized that ABI stability was a goal of C++ until there was a showdown over it. I’d always known people to use pure C ABIs between C++ binaries. I remember KDE folks padding their classes so that they could add members in micro-versions, but I don’t think they attempted compatibility across major versions.
There are three kinds of ABI here, and it’s worth unpicking them. Especially since none of them are actually part of the C++ standard.
First, there’s the lowering of C++ language constructs to machine code and data layouts. This is the C++ ABI for a given platform. It defines things like how vtables and RTTI are organised, how exception handling works, and so on. There are basically two that anyone cares about: the Itanium one that almost everyone uses (with a few minor tweaks, such as the 32-bit Arm or Fuchsia variants) and the Visual Studio one that Visual Studio uses. The Itanium C++ ABI has been stable (with additions for new features) for over 20 years. Visual Studio reserves the right to change their ABI every release of Visual Studio, but in practice they don’t very often.
This ABI allows you to compile two C++ source files with different compilers and link them together. You can mix clang and gcc (or, these days, clang and cl.exe) in individual files and link the result together. This used to churn a lot in all C++ compilers, which was one of the reasons Linus didn’t like C++, but everywhere except Visual Studio is stable now.
Next, there’s the C++ standard library ABI. In libc++, for example, a bunch of things are hidden behind macros to avoid breaking the ABI. The C++ standard tries quite hard to avoid requiring these (sometimes it fails), they’re mostly for places where people figured out a more efficient (but not binary compatible) implementation at some point and want to allow people who don’t care to opt in. If a standard-library function changed in such a way that its name mangling changed, for example, or if a library function that’s inline required a modification that would cause an ODR violation, that would break the standard-library ABI.
As I recall, libstdc++ changed its ABI a few times in the early 2000s and then once for C++11. libc++ doesn’t support any pre-C++11 standards, so hasn’t had to yet. If you link something against the standard library, you want to have very strong backwards compatibility guarantees because everything links the standard library and so if two libraries (or a library and a program) expect different standard library ABIs then they can’t coexist in the same process.
Finally, there’s the ABI for individual C++ libraries. As with C, it’s entirely up to the library vendor how strong they make their ABI-compatibility guarantees. You can expose type-erased interfaces from a shared library and provide versioned inline wrappers in headers that do the type erasure. If you do this, you can build long-term stable ABIs. Alternatively, you can put all class definitions in headers and end up with something that breaks ABIs every minor release. Both C and C++ codebases have done the thing where they add padding fields to allow future expansion without breaking ABIs. In C++, you need to make sure that you have out-of-line constructors (including copy / move constructors) so that replacing these with a type that needs copying / initialising differently will work. You probably need out-of-line comparison operators as well. You can often avoid this if you have types that don’t need subclassing or stack allocation by making constructors private and providing factory methods that return shared / unique pointers.
The C++ standards committee is primarily concerned with the first two. New things in the standard shouldn’t require backwards-incompatible changes to either of these (new features requiring new bits of ABI are fine).
Absolute must read. Previously, on Lobsters there was discussion about feminist philosophy in the context of programming language design & implementation. Ideas surrounding the cultural context that langdev exists in have been rattling around in my brain for a while, especially after Evan Czaplicki’s talk on The Economics of Programming Languages. It is exceptionally rare to find engineers that are well versed in langdev, let alone this in tune with its societal backdrop. Sadly, it is excruciatingly obvious that coming to this understanding was painful to the author and I wish them and anyone else pushing C++ forward the best.
I’m concerned that Bluesky has taken money from VCs, including Blockchain Capital. The site is still in the honeymoon phase, but they will have to pay 10x of that money back.
This is Twitter all over again, including risk of a hostile takeover. I don’t think they’re stupid enough to just let the allegedly-decentralized protocol to take away their control when billions are at stake. They will keep users captive if they have to.
Hypothetically, if BlueSky turned evil, they could:
ban outside PDSes to be able to censor more content
block outside AppViews from reading their official PDS
This would give them more or less total control. Other people could start new BlueSky clones, but they wouldn’t have the same network.
Is this a real risk? I’m not sure. I do know it’s better than Twitter or Threads which are already monolithic. Mastodon is great but I haven’t been able to get many non-nerds to switch over.
Hypothetically, the admins of a handful of the biggest Mastodon instances, or even just the biggest one, could turn evil and defederate from huge swathes of the network, fork and build in features that don’t allow third-party clients to connect, require login with a local account to view, etc. etc.
Other people could start clones, of course, but they wouldn’t have the same network.
(also, amusingly, the atproto PDS+DID concept actually enables a form of account portability far above and beyond what Mastodon/ActivityPub allow, but nobody ever seems to want to talk about that…)
The two situations are not comparable. If mastodon.social disappeared or defederated the rest of the Mastodon (and AP) ecosystem would continue functioning just fine. The userbase is reasonably well distributed. For example in my personal feed only about 15% of the toots are from mastodon.social and in the 250 most recent toots I see 85 different instances.
This is not at all the case for Bluesky today. If bsky.network went away the rest of the network (if you could call it that at that point) would be completely dead in the water.
While I generally agree with your point (my timelines on both accounts probably look similar), just by posting here we’ve probably disqualified ourselves from the mainstream ;) I agree with the post you replied to in the sense that joe random (not a software developer) who came from twitter will probably be on one of the big instances.
For what it’s worth, I did the sampling on the account where I follow my non-tech interests. A lot of people ended up on smaller instances dedicated to a topic or geographical area.
While it’s sometimes possible to get code at scale without paying – via open source – it’s never possible to get servers and bandwidth at scale without someone dumping in a lot of money. Which means there is a threshold past which anything that connects more than a certain number of people must receive serious cash to remain in operation. Wikipedia tries to do it on the donation model, and Mastodon is making a go at that as well, but it’s unclear if there are enough people willing to kick in enough money for multiple different things to keep them all running. I suspect Mastodon (the biggest and most “central” instances in the network) will not be able to maintain their present scale through, say, an economic downturn in the US.
So there is no such thing as a network which truly connects all the people you’d want to see connected and which does not have to somehow figure out how to get the money to keep the lights on. Bluesky seems to be proactively thinking about how they can make money and deal with the problem, which to me is better than the “pray for donations” approach of the Fediverse.
Your point is valid, though a notable difference with the fediverse is the barrier to entry is quite low - server load starts from zero and scales up more or less proportionally to following/follower activity, such that smallish but fully-functional instances can be funded out of the hobby money of the middle+ classes of the world. If they’re not sysadmins they can give that money to masto.host or another vendor and the outcome is the same. This sort of decentralisation carries its own risks (see earlier discussion about dealing with servers dying spontaneously) but as a broader ecosystem it’s also profoundly resilient.
a notable difference with the fediverse is the barrier to entry is quite low
The problem with this approach is the knowledge and effort and time investment required to maintain one’s own Mastodon instance, or an instance for one’s personal social circle. The average person simply is never going to self-host a personal social media node, and even highly technical and highly motivated people often talk about regretting running their own personal single-account Mastodon instances.
I think Mastodon needs a better server implementation, one that is very low-maintenance and cheap to run. The official server has many moving parts, and the protocol de facto requires an image cache that can get expensive to host. This is solvable.
Right! I’ve been eyeing off GoToSocial but haven’t had a chance to play with it yet. They’re thinking seriously about how to do DB imports from Mastodon, which will be really cool if they can pull it off: https://github.com/superseriousbusiness/gotosocial/issues/128
That’s true, but I’ve been hooked on Twitter quite heavily (I was an early adopter), and invested in having a presence there.
The Truth Social switcheroo has been painful for me, so now I’d rather have a smaller network than risk falling into the same trap again.
Relevant blog post from Bluesky. I’d like to think VCs investing into a PBC with an open source product would treat this differently than Twitter, but only time will tell.
OpenAI never open sourced their code, so Bluesky is a little bit different. It still has risks, but the level of risk is quite a bit lower than OpenAI’s was.
OpenAI open sourced a lot and of course made their research public before GPT3 (whose architecture didn’t change much[1]). I understand the comparison, but notably OpenAI planned to do this pseudo-non-profit crap from the start. Bluesky in comparison seems to be “more open”. If Bluesky turned evil, then the protocols and software will exist beyond their official servers, which cannot be said for ChatGPT.
[1]: not that we actually know that for a fact since their reports are getting ever more secretive. I forget exactly how open that GPT3 paper was, but regardless the industry already understood how to build LLMs at that point.
I also find it strange how the author only claims Twitter was a propaganda machine after the Musk acquisition, instead of it simply always having been the case. Twitter has always been the most egregious example of a political battlefield.
Just self host your own blog. There is zero reason to write any content on a blog platform.
In fact, this is literally the cure to technofeudalism. Just don’t use these platforms. The web is open, anyone can publish there. You can market via other channels than SEO via search engines.
I can think of plenty of reasons not to self host.. I think the golden rule should simply be; use your own domain name. And if you use a service you don’t own, frequently export your content.
I agree that self-hosting is a good thing! But I also think it not that easy to set up and maintain for everyone; and more importantly, threads like this one right now make it look like self-hosting is a prerequisite for writing about technofeudalism (or any other topic, actually). It would be a shame if people held back with writing only because they think they don’t publish their content in the “appropriate” way. Sure, a self-hosted website might be nicer. But publishing on Medium is still much better than not publishing at all.
& I agree with @amw-zero here, we could probably just reword it to “educating the masses to ‘host’ their own blog is a practical cure to technofeudalism” for some spectrum of ‘host’
When you say “self host”, did you mean buy a domain and set up a VPS with a web server? Or point DNS to a server on your personal internet connection? Or sign up with some free hosting service?
I’m curious about the easiest way to get started without relying on some large entity. Where should the line be drawn?
Yeah, for sure, but I decided to recommend write.as as a “middle option” after seeing his linked profile on about.me; maybe he is awakening (not a cognitive dissonance, hopefully). He is a professor and researcher, and I have friends like him who don’t have the time or inclination to use a static site generator and self-host it yet…
I think I’ve seen this project before. It looks like it has some interesting ideas, but the custom EULA gives off massive red flags. Quoting from a random section I scrolled to:
Assignment of Rights: You hereby assign to Ilya Lakhin (Илья Александрович Лахин) all rights, title, and interest in and to any derivative works you create based on the Work. This assignment is perpetual, irrevocable, worldwide, and royalty-free.
No thanks!
Similar work is being done in salsa which is inspired by rustc’s internals.
Exceptions: For the purposes of this Agreement, third-party works that remain separable from the Work, that use the application programming interfaces (APIs) of the Work only, and that do not copy, distribute, display, or sell the Work or any part of the Work to third parties in source or compiled form, shall not be considered “Derivative Work.”
The “Assignment of Rights” seems no different from the myriad CLAs developers sign all the time.
I’m in love with this proposal tbh. I understand the concerns that autoclaim loses “explicitness” (the word already appears over 20 times in these comments), but I view this change in behavior as a huge ergonomic win. I was just reading @withoutboats’ post Not Explicit (2017) earlier this week and find its splitting of “explicit” into a few different buckets quite useful here. Allow me to quote a bit:
Sometimes in frustration at “explicit is better than implicit” I am tempted to take the opposite position just to be contrarian - explicitness is bad, implicitness is better. In reality I do think that Rust is an uncommonly explicit language, but when I use the word explicit, I mean something more specific than most users seem to mean. To me, Rust is explicit because you can figure out a lot about your program from the source of it.
…
Sometimes, explicit is used to refer to requiring users to write code to make something happen. But if the thing will happen deterministically in a manner that can be derived from the source, it is still explicit in the narrow sense that I laid out earlier. Instead, this is about saying that certain actions should be manual - users have to opt in to making them happen.
Now allow me to juxtapose this with a snippet from Niko’s post:
// Rust today
tokio::spawn({
    let io = cx.io.clone();
    let disk = cx.disk.clone();
    let health_check = cx.health_check.clone();
    async move {
        do_something(io, disk, health_check)
    }
})

// Rust with autoclaim
tokio::spawn(async move {
    do_something(cx.io, cx.disk, cx.health_check)
})
This is a very manual ritual we are all familiar with. Manual actions can be great for making expensive operations clear to readers, something Rust users are keenly aware of, but these .clone() calls are cheap operations that do not contribute to any meaningful insight of your code. You know what’s going on here: a(n atomic ref to a) number has been incremented. This is not a cost that I believe should be manual.
Even worse, if these types weren’t Rc/Arc, the .clone() ritual doesn’t communicate whether these clones are cheap or expensive, and our existing mental pattern matching on this lambda ceremony may let us erroneously assume these are Rc/Arc. Further, refactors to these types won’t require updating usages. With this proposal, if cx.disk is replaced with a non-Claim type that is expensive to copy, it could not be autoclaimed and this usage would be a compilation error (iiuc).
On top of removing the auto-copying of arrays, this proposal looks great! I hope it goes far.
but these .clone() calls are cheap operations that do not contribute to any meaningful insight of your code.
That’s not necessarily true. When I see this code, there’s no guarantee that the nearest indication that those values are Rc/Arc‘d will be close by. Unless it is, that’s a loss of locality.
The explicit reading of it, currently, is that there will be no aliasing going on, lock-guarded or otherwise, because they’ll be moved unless they’re memcpy-safe.
That said, I wouldn’t have anything against something like this hypothetical syntax, which relies on some hypothetical RefCount trait that non-Rc/Arc Clone-ables don’t implement (thus addressing the “One-clone-fits-all creates a maintenance hazard” aspect of the proposal):
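Since the syntax itself didn’t survive into the comment, here’s one way the underlying idea could be approximated in plain Rust today. `RefCount` and `claim` are invented names for illustration, not a real trait or any part of the proposal:

```rust
use std::rc::Rc;
use std::sync::Arc;

// Hypothetical marker trait: only genuinely cheap, reference-counted
// handles opt in, so an "autoclaim" could be restricted to them.
trait RefCount: Clone {}
impl<T> RefCount for Rc<T> {}
impl<T> RefCount for Arc<T> {}

// Stand-in for what the compiler would do implicitly: this only
// compiles for types that opted in via RefCount.
fn claim<T: RefCount>(value: &T) -> T {
    value.clone()
}

fn main() {
    let io = Arc::new("io-handle");
    let claimed = claim(&io); // fine: Arc is a RefCount type
    assert_eq!(*claimed, "io-handle");
    // claim(&vec![1, 2, 3]); // would not compile: Vec is Clone but not RefCount
    println!("ok");
}
```

The point of the marker trait is that an expensive-to-copy type can never be claimed silently; swapping cx.disk to a non-refcounted type would turn every implicit claim into a compile error.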
I ask the author to refrain from using HN clickbait titles for articles that nitpick nightly APIs. I also ask them and others to stop quoting Donald Knuth, “premature optimisation is the root of all evil” is surely passé at this point. Otherwise this is a neat critique. One of my favorite parts about Rust is that we have the ability to nitpick the unsafe bits and make ergonomic APIs that you otherwise couldn’t create in C, C++, Zig, etc.
They didn’t even quote the full thing when they said “but there is more”. Here’s the full quote, with the part they omitted boldfaced.
This also doesn’t pass the guidelines on clickbaity titles. I’ve suggested a title change to “Issues with BorrowedBuf in Rust” and I suggest you do the same
There is no date on this web page, but it doesn’t cite any prior art or current art, like
https://research.google/pubs/sql-has-problems-we-can-fix-them-pipe-syntax-in-sql/
https://www.scattered-thoughts.net/writing/against-sql/ - talks about compositionality
https://www.edgedb.com/blog/we-can-do-better-than-sql (2019)
Lots of people are working on this, and from skimming, it doesn’t seem like the author is aware of them
I think it was published yesterday.
Don’t forget The Third Manifesto https://wiki.c2.com/?TheThirdManifesto
The RSS feed has an article publish date. But yeah, the page itself should really be dated.
imo, nothing implies the author is unaware of them. This is a contribution to the conversation of solving these problems, like those are. They explore similar ideas but the details and some motivations are fairly different.
I’m curious what your impressions/summary of such efforts are (hoping you’re conversant in current art.)
Tangentially, are there any attempts to push SQL closer to relational calculus/datalog?
I don’t have a real informed opinion, but I found it interesting that the pipe syntax was tried in sqlite:
https://news.ycombinator.com/item?id=41347188
That is generally how things move forward – I think Google actually deployed this, and then another implementation copies it
(or doesn’t in this case – I think he said he would wait until it’s in the SQL standard)
But having something that’s actually implementable is a very good sign.
yup, the paper says
I did not know they all shared the same dialect!
There is TenDRA, interesting for a number of reasons
Adding the shareware DOS compiler and IDE “Pacific C”
Didn’t they freeware that for distribution with FreeDOS 1.1? I’m pretty sure that’s where I discovered it.
The only thing I know about TenDRA is you can change the value of NULL to 0x55555555.
Besides that, if I got it right, it supports compiling against skeletons of standard system header files (the actual struct layouts, for example, can be missing, with just the POSIX-mandated names present). Software pre-compiled that way can then be deployed on the actual OS, with its architecture and the implementation details of its shipped system header files. This would allow software publishers to distribute binaries that can be installed into multiple OS vendors’ systems.
I have loved Kotlin since ~2015 and can’t imagine actually using standalone Java nowadays. Null safety is such a big deal, and Java is still lagging behind. It’s incredible when a language can make its sibling langs easier to use, like with Kotlin inferring nullability of (properly annotated) Java members.
Kotlin’s coroutines are pretty cool. I made a prototype of a job system in which each job awaits its sub-jobs, and each job is a coroutine that can be paused/resumed. Modeling this with coroutines was an interesting approach, but a thread pool is definitely fine here.
IntelliJ+Gradle is still “required” though, and this tooling lock-in will prevent adoption. And damn is Gradle frustrating sometimes.
I looooove coroutines, but the article specifically points out that this is written from the backend perspective. For server-side code I think coroutines are mostly beneficial as A) a workaround for not having OS threads on your runtime or B) state machines that use TCO, and thus not very relevant on the JVM where you do have OS threads but not TCO. Plus IIRC Kotlin’s coroutines are stackless, right? So they’re a little nerfed.
They’re stackless, but the compiler generates a continuation object to store your local variables in, so at resumption it can fill all those values back onto the new stack it creates. It still has the coloring limitation, so suspend functions can only be called from other suspend functions, which iiuc stackful coroutines like Lua’s don’t have.
We use good old Maven with Kotlin (and IDEA).
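A hand-rolled sketch of that continuation object (in Rust rather than Kotlin, purely for illustration): locals that live across a suspension point move into a state enum, which is roughly what a stackless-coroutine compiler emits.

```rust
// A coroutine that yields 0, 1, 2 and then finishes. The local `i`
// survives between resumptions because it is stored in the state enum
// (the "continuation object"), not on any stack.
enum Counter {
    Start,
    Suspended { i: u32 }, // the local `i` lives here across suspensions
    Done,
}

impl Counter {
    // Each call resumes from the stored state and suspends again.
    fn resume(&mut self) -> Option<u32> {
        match *self {
            Counter::Start => {
                *self = Counter::Suspended { i: 1 };
                Some(0)
            }
            Counter::Suspended { i } if i < 3 => {
                *self = Counter::Suspended { i: i + 1 };
                Some(i)
            }
            _ => {
                *self = Counter::Done;
                None
            }
        }
    }
}

fn main() {
    let mut co = Counter::Start;
    let mut out = vec![];
    while let Some(v) = co.resume() {
        out.push(v);
    }
    assert_eq!(out, vec![0, 1, 2]);
    println!("ok");
}
```

This is why no extra stack is needed at suspension time, and also why the transform only works on functions the compiler rewrites, i.e. the coloring limitation.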
This is IMO a) the best reason to deliver a high quality project and b) the biggest pitfall for a project’s success.
I had the privilege to spend a year working on a passion project, failing to make it work at first. One thing I learned: perhaps the biggest contributor to successful projects is ruthless pragmatism, putting personal pride in second place.
The very short story:
I wanted to create the best possible solution for the problem that in music shows and parties, visuals were always either fully pre-rendered or never in sync with the music.
With a bunch of experience in realtime graphics programming and building music software, and newly added ways to connect to Ableton Live, I knew I could make it work. I built prototypes, toured with artists, and proved it worked beautifully. Smooth micro-adjustment of movie-to-transport alignment to solve video streaming delays, a recursive routing and mixing graphics pipeline, beat-based envelopes; so much good stuff was in there. Everyone I showed it to said they wanted to use it once it was done. However, there was so much that it didn’t yet do. Non-realtime rendering, cross-platform support, flexible licensing, etc. etc. I wanted to be proud before I considered it a product and thus never got to call it done. After a year of mostly fulltime work on it, and then many parttime hours in the following years, it was still not done. Life happened, I got kids, and it was still not done. I wasn’t selling the product so I couldn’t sustain development. I got a steady job and that would have been the end of it. Me wanting to be proud before calling it done would ultimately have made the project fail.
If it wasn’t for the other talented contributors that the project attracted, who got more space as my involvement dwindled. They were much more pragmatic, used SaaS for all code that was not core functionality, accepted many hiatuses (all three mentioned above, initially), built a website, and.. we just started selling.
As soon as a 1.0 was out, adoption began and income was being generated. Now, several years later, user numbers and income are steadily growing and the initial hiatuses are filled; the feature set is still not complete, but the project is alive and well. I wouldn’t have started the project if it wasn’t for pride, but it wouldn’t have survived if pride had stayed the main measure of success.
I appreciate you taking the time to share that story and the advice. Ruthless pragmatism is a good term and I’m going to keep that in mind this year. :)
I’m glad yours ended up working out.
I am hoping to ship a passion project of my own this year and will be keeping this in my thoughts. It is also music related, with some ideas for integrating with FL Studio (probably using FL’s SDK(?) and JUCE’s new WebViews).
Congratulations on getting to a point where you can make this move.
One question:
Would it be feasible to do seamless Rust interop as well, that is, some way of easily using Rust libraries from Jank? If your bet is that all of the interesting libraries are C or C++ libraries, or at least that they’ll have C or C++ APIs, then I understand.
Seamless Rust interop is going to be tough. Tougher than with C++, mainly due to the tooling, at this point. I’ve been working with some LLVM folks who’ve been building tooling specifically for other langs to interact with C++ in a JIT fashion. Until we have something like that for Rust, there’s not much of an opening.
I’m very interested, though.
There are people thinking about a stable Rust-specific ABI, which would do a lot for interoperability with Rust if/when it gets implemented. Unfortunately I can’t seem to find the specific proposal I remember seeing for a named alternative to repr(C) that would fulfill this function.
crabi: https://github.com/rust-lang/rust/issues/111423
Thanks! That’s it.
There’s far more than just the ABI for seamless interop.
We built some proof-of-concept prototypes for C++ interop in Verona. The abstract machine is that C++ code is confined to a region (all C++ code operates on objects in a single region, C++ has no notion of regions it thinks that all objects live in a single global memory). We used clang to build an AST from a set of C++ modules and could export types, and instantiate templates so we would be able to surface C++ templates as Verona generics where the mapping made sense. We could then use clang libraries to generate wrapper functions (every argument that isn’t a primitive is passed as a pointer and in the Verona world it’s just an opaque blob of stuff that we have a pointer to) that we could call with a simple ABI (these would then be inlined during code generation, so clang handled all of our C++ ABI issues). Accessing fields in C++ objects worked the same way: generate an accessor function, clang will error if you try to access a private field, forward the error to the user rewriting source locations, LLVM inlines the accessor function and generates a single load or store plus whatever address calculation is needed.
All of this was simple because the C++ abstract machine is very permissive. You might want to handle standard-library unique and shared pointers as special things, but that’s about it. This is also why things like Sol3 make it easy to create C++ / Lua interop: Lua’s GC just takes ownership of a copy of a C++ object. If that object is a smart pointer, deallocating it when GC runs may deallocate the underlying C++ object.
In contrast, there’s a lot of stuff in Lua that wouldn’t be valid Rust. Lua lets you take a reference to an object that’s reachable from another object. If everything is shared / weak pointers, that’s fine. In Rust, that is not permitted but there’s no simple way of surfacing the borrow checker into Lua. With a lot of work, you could make Lua do something dynamic to check borrows, but that’s basically reimplementing the Rust borrow checker as a dynamic graph-walking thing, and that’s going to be painful.
My intuition is that while there’s no impedance mismatch between C++ and Rust’s data models, you have to duplicate all of the interop work that has to do with the mapping between the languages, not the data. For C++ this involves parsing headers, not object files. Rust can produce C-headers and a C-compatible FFI but consuming that directly loses the Rust data model, for instance. You could reconstruct it on the other end if you know exactly how Rust maps its types to C’s, but round-tripping like that is bound to be error-prone. Lastly, Rust’s runtime and C’s runtime are actually different; Rust’s panic unwinding is a footgun you have to manually avoid when writing FFIs.
So, seamless interop? I don’t think so. You have to redo most of the interop work and be able to consume the objects Rustc produces.
I’d like to hear people’s experience with and ideas for using Rust from other languages. I agree with jeaye that at this point it’s a lot harder to do than C++ or C.
I took a cursory look at using it from Common Lisp and didn’t get very far. My ground rules were no going through C, and no modification of Rust code (i.e. no adding #[pyfunction] like PyO3).
Basically, I want cl-autowrap, but instead of (autowrap:include “something.h”), I want (rffi:use “some.crate”). I’d have something like “import ”, and then have the functions and data types available in a package named after the crate, like (crate-name:functionname …). For bonus points, I’d like to do it in such a way that users of my Lisp library only need to install the Rust binary, and not the whole Rust build system (in C and Debian terms, they need libfoo, but not libfoo-dev).
Previously, on Lobsters there was discussion of Vale’s exploration of seamless Rust interop.
Some bike lane advocates claim that a city must install bike lanes to generate the demand that justifies them.
It seems Servo is going to be something similar: by manifesting an engine, they hope to generate the demand that keeps this project going.
The most celebrated aspect of Servo, in my opinion, is that it uses Rust and somehow that’s a Good Thing (TM).
It feels like someone celebrating that the pipes in their house are great because they used Knipex pliers to put them together.
Using Rust is alluring to create developer engagement (and it shows), but it’s not what would drive the adoption of Servo - being used is what will drive adoption. Rust is at best neutral in this regard, and waving it as if it were some kind of strong benefit feels odd to me. In any case, better Rust than historical C++ for sure.
Are you saying that you don’t understand the bike lane argument? The classic explanation goes like this… say you have a river, and pedestrians aren’t crossing it, because the only way to cross is to swim the river, so like 2 weirdos do this each year. Naysayers argue “there’s no point in building a bridge across this river, because only 2 people cross it each year!” But surprise, if you let people walk across the river instead of swimming, you’ll get a lot more foot traffic on the other side.
If we bring this analogy to the browser, it seems possible that the situation is similar. Servo being usable would result in it being used. If Igalia can actually manifest a working browser engine, and it’s not too much of a pain to integrate into other programs (such as a full web browser), then it’ll get used a lot more than a browser engine that doesn’t render web pages properly (or can’t be easily embedded). It probably helps if you partner with someone who definitely wants to use your engine, such as the partnership they have with Tauri, so you can be sure at least one group will use it and demand isn’t completely imaginary.
Yeah bike lanes is the worst analogy to make, because it’s obvious that they have utility, and there is demand
I’ve lived in some of the most crowded cities (SF and NYC), and you absolutely need bike lanes.
For one, there are A LOT more people than there were 20 or 30 years ago. And two, the status quo 20 or 30 years ago wasn’t sufficient either
A parked car is objectively a bad economic use of space, compared to a bike lane
Also, two new browsers are moving to Swift - the one by the Browser Company, and LadyBird
C++ isn’t good enough for a highly concurrent program that’s more than 10M lines, with hundreds of engineers working in it
And I say this as someone who’s worked with the Chrome team, sat with them, etc. The quality of the code varies immensely, as you would expect for anything with hundreds of engineers in it at a given time.
(not saying Swift is necessarily the best, I don’t know that much about it)
Isn’t The Browser Company’s Arc browser mostly a Swift shell around Chromium?
Yup, iirc the core is called “CDK” (Chromium Dev Kit?), written in Swift & cross-platform.
I don’t know much about it, never used it
It is based on Chrome, though they have hired some original Chrome contributors/architects, so I assume they are doing a pretty deep reworking
i.e. starting with the shell is the logical thing to do, but not being afraid to fork it and rework the internals
Chrome definitely grew a lot of cruft and bugs over the years. When it was launched, it was a lean and focused program
I should add that my preference is not to have these huge browsers that require hundreds of engineers to build, but if you take that as a constraint, then there are downsides to doing it in C++ …
A small team can definitely write good C++. Large teams have problems in any language, but with C++ the problems can have pretty bad consequences that are not easy to work your way out of … it’s sort of a “one-way door”
I personally would be thrilled to have a browser that wasn’t a pile of CVEs in a trenchcoat. Having one that isn’t developed by an advertising company would be great too!
Me too. And having an embeddable browser engine that can be used in browsers that cater to specific accessibility needs not served by Firefox would be great. The only realistic options currently, as far as I can tell, are Chromium and WebKit, both by advertising companies.
If in 99% of houses flushing a turd of a particular size gave someone else remote code execution, you’d be pretty proud if your plumbing wasn’t vulnerable.
The Servo project has been started to replace crufty non-thread-safe C++ in Gecko, and Rust has been created specifically for Servo.
Literally, the first presentation introducing Rust is called “Project Servo”: http://venge.net/graydon/talks/intro-talk-2.pdf
Rust/Servo did succeed in replacing selector matching and GPU rendering in Gecko where previous attempts in C++ have failed.
I don’t know if there’s demand for Servo specifically (web compat is hard and there’s more to engines than safety), but all major engines have issues with memory safety of C++, and are looking for a solution. Blink and Gecko are replacing parts with Rust. Apple is working on making a lower-level Swift to replace their C++.
I think it would be quite nice to have an alternative to Blink and WebKit for embedded / Electron-like use cases. Gecko isn’t embeddable, and even using it as an application framework isn’t exactly a well supported use anymore.
I hope you don’t mind, but a minor grammar nit: “was”, rather than “has been” in this case in English.
Yes, the difference in meaning is that “was” describes an event in the past, and “has been” describes something that was true for some period in the past.
I love having more diversity in the browser engine market - and maybe they can show what a CVE-less browser looks like. If Rust makes it easy for them, I’m not gonna complain. (Mozilla already showed that they got speedups from using Rust, by allowing them to multithread worry-free.)
All I have to do is sit and watch.
Zero security bugs seems unrealistic. Rust protects you against a few categories of bugs, and that’s great, but it’s not a panacea.
That is why I said CVE-less, not CVE-free.
I, and I imagine most English speakers of my acquaintance, would assume both of these mean the same thing. “Smokeless” and “smoke free” both mean “without smoke” (though in different senses). Ditto “childless” and “child free” (with different valences). Etc.
Oh, shoot. I could have recognised it with ‘childless’ - thanks!
Ever stared wordlessly at someone while speaking?
I bet you can enlighten me on that - apart from being unkind.
“X-less” and “X-free” generally mean the same thing; without any X. There might be some exceptions but none are coming to mind, so your response looked a bit like pulling a weird technicality rather than a misunderstanding. Sorry for any rudeness!
A web engine is by nature a CVE-magnet, even if you don’t have any memory issue. But being free of most memory issues allows you to concentrate on everything else, and Rust has other correctness tricks up its sleeve that have nothing to do with memory or data races.
Performance-wise, beside Mozilla’s well-known “we failed to parallelize this in C++ but succeeded in Rust” return on experience, there’s Google’s “we isolate C++ libs in their own process but use Rust libs directly” to consider. Servo isn’t full-featured, fast, or battle-tested yet, but I’m hopeful.
Does Servo being a Rust project make it easier to embed in other applications?
I don’t speak from a whole lot of experience here, but my perception is that C/C++ dependencies can be a pain to set up, and one of the benefits of the Rust ecosystem is the ease of installing dependencies. Need to render an HTML document in your app? Maybe someday the answer will be to import servo-whatever, and you won’t have to spend mental effort on stuff like (I’m making this up) installing the correct versions of libjpeg and cairo.
I’m not sure about Rust per se, but the post and the additional content in linked posts highlight how their current development goals and partnership with Tauri have led them to make changes that make Servo specifically more ergonomic to embed in other applications.
Rust doesn’t make embedding easier (unless the host is in Rust), at best it makes compiling more straightforward. What matters for embedding is the architecture and intent of the project.
Servo is designed to be embeddable, and judging by how fast Tauri was able to integrate it, they do a good job. None of the other engines fare well in that regard: Gecko is bound to its browser, Blink is marginally better but has problematic leadership, the various webkits are lagging behind development, Ladybird is (I’m guessing) not interested in the outside world, Flow is proprietary… Servo is an enticing project, whether you care about Rust or not.
I think in theory Rust could be easier to embed because the contracts at interfaces are more explicit.
With C or C++ it can be unclear who is responsible for freeing an allocation or what preconditions need to hold about the environment and objects crossing boundaries. I guess this is true for any library though, not just embeddable things.
I suspect it depends a lot on the API. Most other languages allow more aliasing than Rust permits, so you’re likely to end up with a lot of things where the Rust type system prevents certain API misuse but a foreign API doesn’t. If I pass a borrowed Rust pointer to Lua, for example, ensuring that it isn’t captured is not trivial (I can probably wrap it in an object that is explicitly nulled at the end of the call, but then I have a thing that suddenly goes away from Lua at surprising times). Owning references are easier, but then borrows from them may be difficult.
You can almost certainly design an API that uses a narrow set of Rust that’s easy to bridge, but it requires some careful thought. From the sounds of it, the Igalia folks are doing that thinking, which is probably more important than the underlying language.
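As a concrete sketch of the “wrapper that is explicitly nulled at the end of the call” idea mentioned above: the host hands the guest language a handle, and invalidates it when the borrow ends, so captured copies become runtime errors instead of dangling pointers. `ExpiringHandle` is an invented name; this illustrates the pattern, not any real embedding API.

```rust
use std::sync::{Arc, Mutex};

// A handle the host could expose to an embedded scripting language.
// The value lives in a shared, lockable slot; the host clears the slot
// when the borrow is over.
struct ExpiringHandle<T> {
    slot: Arc<Mutex<Option<T>>>,
}

impl<T: Clone> ExpiringHandle<T> {
    fn new(value: T) -> Self {
        ExpiringHandle { slot: Arc::new(Mutex::new(Some(value))) }
    }
    // What the guest language calls to read the value.
    fn get(&self) -> Result<T, &'static str> {
        self.slot.lock().unwrap().clone().ok_or("handle expired")
    }
    // What the host calls at the end of the borrow.
    fn expire(&self) {
        *self.slot.lock().unwrap() = None;
    }
    // The guest may stash extra copies; they all share the same slot.
    fn clone_handle(&self) -> Self {
        ExpiringHandle { slot: Arc::clone(&self.slot) }
    }
}

fn main() {
    let h = ExpiringHandle::new(42);
    let captured = h.clone_handle(); // guest captured a copy
    assert_eq!(captured.get(), Ok(42));
    h.expire(); // host: borrow is over
    assert_eq!(captured.get(), Err("handle expired"));
    println!("ok");
}
```

This is exactly the “suddenly goes away from Lua at surprising times” behavior: memory-safe, but surprising to the guest, which is why the careful API-design work matters more than the language.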
+1
Programming language was never a feature. But we see again and again people using it as an argument. Quite funny.
It’s always been a feature because it influences what runtime you need to depend on. This comes up all the time in managed languages, whether you need .NET installed, or JDK, or Python, etc.
It is a cost rather than a feature. Software has some benefits (functions, features, qualities…) and brings some costs (price, time to learn, dependencies to install, complexity to deal with…). Users start with the benefits (is this software useful for me?) and then, only if significant benefits are present, they evaluate the costs (are they reasonable? lower than the benefits?).
(of course, there are also users or customers that do not think this rational way)
P.S. I use the word „feature“ as some positive value, reason to buy or use. If you define it as an general characteristic that could be both positive or negative, then I agree that programming language is such a characteristic.
P.P.S. Similar discussion: Is price a selling point? If you offer just „cheap something“, will people buy it? Probably not. But if you offer some useful product for a reasonable price, or cheaper than your competitors, then yes. However, the main selling point is that usefulness, not the cheapness itself.
Loving the work so far! Keep up the updates!
tbh I saw this coming miles away (and thought I left a comment indicating such but I am failing to find it now). I hope the design comes out well, and I think Garnet will be better off with HKTs being added this “early”.
I’ve also read the abstract for 1ML like a dozen times… I should really sit down and read the code. Though I might wait for your Rust impl because, for me, ‘actually [reading] Rust is so much nicer than OCaml’. In my head this is somewhat related to Zig’s modules “just being structs”? I’m not comfortable enough with ML modules to understand the distinction.
I wrote a similar example in Zig of Pony’s deny capabilities embedded as a comptime var in a Ref structure. I wonder what the limits are here - what is the extent of type checking that can be implemented in comptime Zig? Add some system for adding syntax sugar and you’d get a Racket-like system for systems languages.
This is a list of things that are not trivial, with a dismissive paragraph for each. I don’t think there’s much insight to be had here, even if it’s not wrong.
I think you’re being a bit dismissive here. Some good points are made that can generate some insightful conversation.
Stone soup of discourse https://en.m.wikipedia.org/wiki/Stone_Soup
It’s more like the article is potatoes, there’s at least more substance than a stone.
agree, plenty of discussions to be had on these topics but they’re not present in the article at all. the linked post on cross-platform is similarly hollow.
Not sure what this means but it sure is weird.
From eyeballing the graph, you can see it’s about $3500 per query for the high-compute option.
For a bunch of problems that can be solved in 5 minutes by a competent human, that’s a tad expensive.
It probably means that a task is run on multiple GPUs at once, and not just a couple; it looks like a few racks’ worth of top-tier GPUs. I’m impressed they managed to achieve this sort of distributed computation. Transformer-based models are not very conducive to network distribution, which they most likely needed here. I don’t know how else they could keep enough GPUs busy for only 1.3 minutes at a cost of $2k.
Mixture of Experts models can be partially trained and executed in parallel, where each “expert” (effectively a sliver of FFN layers) could be on a different gpu. GPT4 is rumored to be using this architecture and they might have applied these learnings to the o series.
It’s interesting to see how scaling is pretty much exclusively on inference at the moment. That is interesting and probably will provide lots more variation. It’ll be interesting to see when that runs out of steam.
It could also be a sequential search type of setup with one or more LLMs generating and one or more serving as “evaluators”.
They mention “samples”, and that the low-compute setup used 6 of them and the high-compute used 1024.
So one “brute-forceish but not quite” way to get this parallelism given what we know about O1 (which may or may not also be true for O3): Run several “chains of thought” in parallel, and then when time’s up (or all of them are finished) pick whatever solution most of them got. Or use “tree of thought” and have them branch off at decision points, so they can share/reuse earlier parts of the token stream.
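The majority-vote step of that scheme is simple to sketch. Here `answers` stands in for the final answer produced by each sampled chain of thought; this is purely illustrative, not OpenAI’s actual setup:

```rust
use std::collections::HashMap;

// Count how often each final answer appears across the sampled chains
// and return the most common one (ties resolved arbitrarily).
fn majority_vote(answers: &[&str]) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for &a in answers {
        *counts.entry(a).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .max_by_key(|&(_, c)| c)
        .map(|(a, _)| a.to_string())
}

fn main() {
    // Six sampled chains; "blue" wins with 3 votes.
    let samples = ["blue", "red", "blue", "blue", "green", "red"];
    assert_eq!(majority_vote(&samples), Some("blue".to_string()));
    println!("ok");
}
```

With 1024 samples the expensive part is obviously generating the chains, not this aggregation; the vote is just how the parallel compute gets collapsed into one answer.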
It seems more significant that they’ve managed such an improvement on FrontierMath and SWE-Bench.
(Personally, I’m terribly concerned that the Unemployment Machine might actually work this time, even with all the previous false starts.)
I’d postpone the doomsaying at least 6 months. I don’t have any insight to who owns and runs ARC, but it’s a small enough world and there’s certainly a surfeit of economic incentives to suspect that ARC and OpenAI are in cahoots in an attempt to boost OpenAI’s standing. In other words, assuming this is a rigged benchmark is applying Occam’s razor.
The ARC test itself is about 5 years old, and has mostly been notable for how very hard it’s been to get deep learning systems to do well on it. It was made by Francois Chollet, who has been highly skeptical of the LLM approach and has been a very vocal OpenAI critic right up until this. He worked at Google until recently (I don’t know where he went after he quit, though).
It should be noted that he himself points out that beating ARC doesn’t mean that a system is an AGI - only that a system that can’t beat ARC definitely isn’t.
Thanks for clarifying.
I have also encountered people (online) who didn’t know that you could render web pages on the server.
React was their default; I saw some static websites that were just static JSX in React. Like if you wanted to put up some photos of your apartment or vacation on your own domain, you would create a React app for that.
People learn by imitation, and that’s 100% necessary in software, so it’s not too surprising. But yeah this is not a good way to do it.
The web is objectively worse that way, i.e. if you have ever seen a non-technical older person trying to navigate the little pop-out hamburger menus and so forth. Or a person without a fast mobile connection.
If I look back a couple decades, when I used Windows, I definitely remember that shell and FTP were barriers to web publishing. It was hard to figure out where to put stuff, and how to look at it.
And just synchronizing it was a problem
PHP was also something I never understood until I learned shell :-) I can see how JavaScript is more natural than PHP for some, even though PHP lets you render on the server.
To be fair, JSX is a pleasurable way to sling together HTML, regardless of if it’s on the frontend or backend.
Many backend server frameworks have things similar to JSX.
That’s not completely true. One beautiful thing about JSX is that any JSX HTML node is a value, so you can use the full language at your disposal to create that HTML. Most backend server frameworks use templating instead. For most cases both are equivalent, but sometimes being able to put HTML values in lists and dictionaries, and to have the full power of the language, does come in handy.
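To illustrate the values-over-templates point, here is a minimal sketch in plain JavaScript (a hypothetical h() helper, not any particular library) of HTML nodes as ordinary values that you can map over and store in data structures:

```javascript
// Minimal sketch: HTML nodes as plain values, built with an ordinary
// function instead of a template language. Hypothetical helper, not a real library.
function h(tag, attrs = {}, children = []) {
  const attrStr = Object.entries(attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const body = children
    .map(c => (typeof c === "string" ? c : c.html))
    .join("");
  return { html: `<${tag}${attrStr}>${body}</${tag}>` };
}

// Because nodes are values, the whole language is available:
// map over data, keep fragments in arrays, pass them to functions.
const items = ["home", "about", "contact"];
const nav = h("ul", { class: "nav" }, items.map(i => h("li", {}, [i])));

console.log(nav.html);
// <ul class="nav"><li>home</li><li>about</li><li>contact</li></ul>
```

A real library would also handle attribute escaping and nesting ergonomics, but the core idea is just this: nodes are first-class values, not strings interpolated by a template engine.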
Well, that’s exactly what your OP said. Here is an example in Scala. It’s the same style. It’s not like this was invented by react or even by frontend libraries. In fact, Scalatags for example is even better than JSX because it is really just values and doesn’t even need any special syntax preprocessor. It is pure, plain Scala code.
Maybe, but then just pick a better one? OP said “many” and not “most”.
Fine, I misread many as most as I had just woken up. But I’m a bit baffled that a post saying “pick a better one” for any engineering topic has been upvoted like this. Let me go to my team and let them know that we should migrate all our Ruby apps to Scala so we can get ScalaTags. JSX was the first such exposure of a values-based HTML builder for mainstream use; you and your sibling comment talk about Scala and Lisp as examples, two very niche languages.
When did I say that this was invented by React? I’m just saying that you can use JSX both on front and back, which makes it useful for generating HTML. Your post, your sibling and the OP just sound slightly butthurt at JavaScript for some reason, and it’s not my favourite language by any stretch of the imagination, but when someone says “JSX is a good way to generate HTML” and the response is “well, other languages have similar things as well”, I just see that as arguing in bad faith and not trying to bring anything constructive to the conversation, same as the rest of the thread.
But the point is that you wouldn’t have to - you could use a Ruby workalike, or implement one yourself. Something like Markaby is exactly that. Just take these good ideas from other languages and use it in yours.
Anyway, it sounds like we are in agreement that this would be better than adopting JavaScript just because it is one of the few non-niche languages which happens to have such language-oriented support for tags-as-objects like JSX.
I found that after all I prefer to write what is going to end up as HTML in something that looks as much like HTML as possible. I have tried the it’s-just-pure-data-and-functions approach (mostly with elm-ui, which replaces both HTML and CSS), but in the end I don’t like the context switching it forces on my brain. HTML templates with checks as strong as possible are my current preference. (Of course, it’s excellent if you can also hook into the HTML as a data structure to do manipulations at some stage.)
For doing JSX (along with other frameworks) on the backend, Astro is excellent.
That’s fair. There are advantages and disadvantages when it comes to emulating the syntax of a target language in the host language. I also find JSX not too bad; however, one has to learn it first, which is definitely some overhead. We just tend to forget that once we have learned and used it for a long time.
In the Lisp world, it is super common to represent HTML/XML elements as lists. There’s nothing more natural than performing list operations in Lisp (after all, Lisp stands for LISt Processing). I don’t know how old this is, but it certainly predates React and JSX (Scheme’s SXML has been around since at least the early aughts).
Yeah, JSX is just a weird specialized language for quasiquotation of one specific kind of data that requires an additional compilation step. At least it’s not string templating, I guess…
I’m aware; I wrote such a lib for Common Lisp. My point was that the frameworks most people actually use are still in the templating world.
It’s a shame other languages don’t really have this. I guess having SXSLT transformation is the closest most get.
Many languages have this, here’s a tiny sample: https://github.com/yawaramin/dream-html?tab=readme-ov-file#prior-artdesign-notes
Every mainstream as well as many niche languages have libraries that build HTML as pure values in your language itself, allowing the full power of the language to be used–defining functions, using control flow syntax, and so on. I predict this approach will become even more popular over time as server-driven apps have a resurgence.
JSX is one among many 😉
I am not a web developer at all, and do not keep up with web trends, so the first time I heard the term “server-side rendering” I was fascinated and mildly horrified. How were servers rendering a web page? Were they rendering to a PNG and sending that to the browser to display?
I must say I was rather disappointed to learn that server-side rendering just means that the server sends HTML, which is rather anticlimactic, though much more sane than sending a PNG. (I still don’t understand why HTML is considered “rendering” given that the browser very much still needs to render it to a visual form, but that’s neither here nor there.)
The Bible says “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s”. Adapting that to today’s question: “Render unto the Screen the things that are the Screen’s, and unto the Browser the things that are the Browser’s”. The screen works in images, the browser works in html. Therefore, you render unto the Browser the HTML. Thus saith the Blasphemy.
(so personally I also think it is an overused word and it sounds silly to me, but the dictionary definition of the word “render” is to extract, convert, deliver, submit, etc. So this use is perfectly inline with the definition and with centuries of usage irl so i can’t complain too much really.)
You can render a template (as in, plug in values for the placeholders in an HTML skeleton), and that’s the intended usage here I think.
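In that sense, “rendering a template” is just placeholder substitution. A toy sketch in JavaScript (the {{name}} syntax here is hypothetical, loosely Mustache-like, not any specific engine):

```javascript
// Toy template renderer: replace {{key}} placeholders with values.
// Hypothetical {{ }} syntax; real engines add escaping, loops, partials, etc.
function render(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in values ? String(values[key]) : ""
  );
}

const page = "<h1>{{title}}</h1><p>Hello, {{user}}!</p>";
console.log(render(page, { title: "Home", user: "Ada" }));
// <h1>Home</h1><p>Hello, Ada!</p>
```

Whether this runs on the server or in the browser, the operation is the same; “server-side rendering” just means the substitution happens before the HTML hits the wire.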
I don’t think it would be a particularly useful distinction to make; as others said you generally “render” HTML when you turn a templated file into valid HTML, or when you generate HTML from another arbitrary format. You could also use “materialize” if you’re writing it to a file, or you could make the argument that it’s compilers all the way down, but IMO that would be splitting hairs.
I’m reminded of the “transpiler vs compiler” ‘debate’, which is also all a bit academic (or rather, the total opposite; vibe-y and whatever/who cares!).
It seems that was a phase? The term transpiler annoys me a bit, but I don’t remember seeing it for quite a while now.
Worked very well for Opera Mini for years. Made very low-end web clients far more usable. What amazed me was how well interactivity worked.
So now I want a server side rendering framework that produces a PNG that fits the width of my screen. This could be awesome!
There was a startup whose idea was to stream (as in video stream) web browsing similar to cloud gaming: https://www.theverge.com/2021/4/29/22408818/mighty-browser-chrome-cloud-streaming-web
It would probably be smaller than what is being shipped as a web page these days.
Exactly. The term is simply wrong…
ESL issue. “To render” is a fairly broad term meaning to provide/concoct/actuate; it has little to do with graphics in general.
Technically true, but in the context of websites, “render” is almost always used in a certain way. Using it in a different way renders the optimizations my brain is doing useless.
The way that seems ‘different’ to you is the way that is idiomatic in the context of websites 😉
Unless it was being rendered on the client, I don’t see what’s wrong with that. JSX and React were basically the templating language they were using. There’s no reason that setup cannot be fully server-generated and served as static HTML, and they could use any of the thousands of react components out there.
Yeah if you’re using it as a static site generator, it could be perfectly fine
I don’t have a link handy, but the site I was thinking about had a big janky template with pop-out hamburger menus, so it was definitely being rendered client side. It was slow and big.
I’m hoping tools like Astro (and other JS frameworks w/ SSR) can shed this baggage. Astro will render your React components to static HTML by default, with normal clientside loading being done on a per-component basis (at least within the initial Astro component you’re calling the React component from).
I’m not sure I would call a custom file type, .astro, that renders TS, JSX, and MD/X split by a frontmatter “shedding baggage”. In fact, I think we could argue that Astro is a symptom of the exact same problem you are illustrating from that quote.

That framework is going to die the same way that Next.js will: death by a thousand features.
huh, couldn’t disagree more. Astro is specifically fixing the issue quoted: now you can just make a React app and your baseline perf for your static bits is now the same as any other framework. The baggage I’m referring to would be the awful SSG frameworks I’ve used that are more difficult to hold correctly than Astro, and of course require plenty of other file types to do what .astro does (.rb, .py, .yml, etc.).

The State of JS survey seems to indicate that people are sharing my sentiments (Astro has a retention rate of 94%, the highest of the “metaframeworks”). I don’t know if I could nail what dictates the whims of devs, but I know it isn’t feature count. If Next dies, it will be because some framework with SSG, SSR, ISR, PPR, RSC, and a dozen more acronym’d features replaced it. (And because OpenNext still isn’t finished.)
Astro’s design is quite nice. It’s flexing into SSR web apps pretty nicely I’d say. The way they organize things isn’t always my favorite, but if it means I can avoid ever touching NextJS I’m happy.
Well that’s one way to look at it. Another is that popular developer culture doesn’t need to be unified. There can be one segment for whom “computer science” means “what does this javascript code snippet do”; I am not a linguistic prescriptivist so this does not bother me. It doesn’t put us in danger of, like, forgetting the theory of formal languages. It also isn’t really a new sensibility; this comic has been floating around for at least 15 years now. People tend to project a hierarchy onto this where the people who care about things like formal languages and algorithms are more hardcore than the javascript bootcamp world. But this is a strange preoccupation we would be better to leave behind. If you want to get in touch with “hardcore” popular developer culture you can participate in Advent of Code! It’s a very vibrant part of the community! Or go to one of the innumerable tech conferences on very niche topics that occur daily throughout the developed world.
I wouldn’t say I’m a prescriptivist either, but the purpose of language is to communicate and it seems very odd to me that there are people for whom “computer science” is semantically equivalent to “javascript trivia”. I think that seemingly inexplicable mismatch in meaning is what the article is remarking upon; not being judgemental about Javascript qua Javascript, but that there’s such a gulf in ontologies among people in the same field.
I have heard this phrase many times when defending a prescriptivist perspective and it is never actually about the words confusing people, it’s about upholding the venerability of some term they’ve attached to themselves.
Ehhhh strong disagree on that one! There’s a difference between “that’s not a word” and “that word choice is likely to cause more confusion than clarity”. The former is a judgement, the latter is an observation.
I have not attached the term “computer science” to myself, I truly think it’s just confusing to use that phrase to refer to minutiae of a particular programming language. I’m not saying that’s “wrong”, nor that javascript in particular is bad or anything like that, just that it is very contrary to my expectations, and so saying “computer science” in that manner communicates something very different to me than whoever is using it in that way would intend.
I would say that it’s wrong and bad to impugn my motives and claim that I actually mean the exact opposite of what I said though!
How are you confused? Does it really call into question your own understanding of what the term “computer science” means?
I’m confused because when someone says something is regarding “computer science” I expect to see things related to the branch of mathematics of that name. In the linked article, they mention seeing the term used in a game show: If I was told I was going to be on a game show and the category was “computer science” I would prepare by studying the aforementioned field and if I then was presented with language-specific trivia instead, I would be a bit miffed.
What proportion of an undergraduate computer science degree would you say is dedicated to the branch of mathematics and what proportion is dedicated to various language trivia?
When I was in school, the computer science courses I took were pure math, but I can’t speak to what other programs at other schools at other times are like. I don’t really understand why you’re so vexed here; I’m not trying to “project a hierarchy” or say that one thing is better than another (I taught at a javascript bootcamp!), merely that I think the point the article is making is that it feels strange to discover one’s peers use familiar words to mean very different things. I’m just trying to explain my perspective; you’re the one who is repeatedly questioning my honesty.
90% mathematics, from my experience doing such a degree in Swansea in the early 2000s and teaching at Cambridge much later. Programming is part of the course, but as an applied tool for teaching the concepts. You will almost never see an exam question where some language syntax thing is the difference between a right or wrong answer and never something where an API is the focus of a question.
Things like computer graphics required programming, because it would make no sense to explore those algorithms without an implementation that ends up with pixels on a screen (though doing some of the antialiased line-drawing things on graph paper was informative).
Dijkstra’s comment was that computer science is as much about computers as astronomy is about telescopes. You can’t do much astronomy without telescopes, and you need to understand the properties of telescopes to do astronomy well, but they’re just tools. Programming is the same for computer science. Not being able to program will harm your ability to study the subject, even at the theoretical extremes (you can prove properties of much more interesting systems if you can program in Coq, HOL4, TLA+, or whatever than if you use pen and paper).
Even software engineering is mostly about things that are not language trivia and that’s a far more applied subject.
When I got my undergraduate degree in 1987 it was literally a B.S. Applied Mathematics (Computer Science), and when we learned language trivia it was because the course was about language trivia (Comparative Programming Languages, which was a really fun one involving very different languages like CLU, Prolog, and SNOBOL).
If kids nowadays are learning only language trivia to get a CS degree, somebody is calling it by the wrong name.
In my experience, only one in four courses was even related to applied programming. The other courses were not language specific at all.
Even in the courses that related to applied programming, usually less than a week per semester was spent on language trivia.
Overall I’d say about 5%?
“I truly think it’s just confusing to use [the phrase “computer science” to] refer to minutiae of a particular programming language” -> “How are you confused?”
This is a bit underhanded. It’s confusing when someone says “computer science”, and then later it turns out they mean “Array.map returns an array and Array.foreach doesn’t” and not what the term has been used to refer to for a long time; algorithms, data structures, information theory, computation theory, PLT, etc. I am absolutely a linguistic descriptivist, but to claim that using a term in a way that doesn’t match the established usage is not cause for confusion is not actually a descriptivist take. It’d become less confusing once that becomes the established usage, but until we’re there, it’s still going to fail to communicate well and cause — say — confusion.
-Edsger Dijkstra
Words should mean specific things in technical fields.
Funny you should mention Dijkstra. Like Naur, he had a hard time getting on board with the term, “computer science.” Dijkstra favored the subtly but importantly different “computing science.” Naur preferred “dataology.”
In some countries the field is called informatics, I wonder why that didn’t happen in English speaking ones (or if there’s a difference I’m not aware of).
That term is used in various places with different meaning, including in the US, where it’s a term for “information science”. I.e. studies of things like taxonomy and information archiving. In other words that’s already used for several purposes so if you want to use it for clarity you need to look elsewhere.
some schools use the term, at my uni we had
I wish we had cybernetics instead.
Informatics, just like dataology, seemingly focuses on data (information). Of course, processing data is at the core of computing, but, IMO, this misses the point. The point is the computing process, not what the process is applied to. Computing science is great, but cybernetics is cool, and also wider: it encompasses complex systems, feedback, and interactions.
Taking Naur’s side for a moment, though, the really science-y parts of computer science curriculum (e.g. time complexity, category theory, graph theory) are basically just mathematics applied to data/information, so perhaps “datology” and “informatics” aren’t that far off the mark. Everything else that’s challenging about a CS undergrad curriculum (e.g. memory management, language design, parallel computation, state management) is just advanced programming that has accumulated conventions that have hardened over the years and are often confused with theory.
Interesting because I feel like cybernetics is both a subset of computing science (namely one concerned with systems that respond and adapt to their environment) and trans-disciplinary enough to evade this categorization.
To me it feels like computer science is a subset of cybernetics. Or, to be more precise: Computer science + Systems Science = Cybernetics.
Datology was incidentally what those courses were called when I took them as part of studying computational linguistics here in Sweden.
Well, Dijkstra would certainly not endorse committing a category error and conflating terms about a field with terms of a field.
After reading some of the author’s replies, it really does come off as bloviating about dictionary terms and real Computer Scientists.
It’s kind of an odd category, isn’t it? There are plenty of people who drive cars, but we generally don’t categorize daily commuters as Drivers.
I feel like we do? It becomes especially apparent whenever one is where cars are but is not in a car themselves (pedestrian, cyclist, motorcyclist) — there is a strong sense of “drivers” and “everyone else”.
I think I would characterize that as a lack of awareness of non-drivers among drivers, at least in the USA. My point is that we don’t expect a sense of shared tribal affiliation between drivers like the author seems to expect between programmers.
It’s a continuum. JS syntax is somewhat CS (I don’t know, 0.2?), but likely it’s more “engineering” (0.9?). Saying that JS syntax is more engineering than CS is NOT negative towards JS. Neither engineering nor CS is superior or lesser with respect to the other!
On the other hand, although having clearly defined terms is probably a lost battle, words are needed to communicate, and it’s definitely worthwhile to try to have stable, useful words that we can use to be able to explain and discuss things.
I think in some cases it would have been great if terms hadn’t lost their original meaning (I’m thinking devops, for example), but likely if I thought about it, I’d find terms where I would think their drift from their original meaning has been good.
…
I like the article, even if I disagree with it. I feel this generational gap with the newer crop of programmers. Likely my elders feel a similar gap with my cohort!
But we are always quick to judge that we are on a downhill slope. We’ve been so long on this downhill slope, yet we are still alive and kicking, so maybe it’s not so downhill.
(A local tech journalist dug a quote from Phaedrus [so over 2300 years old] where some old man yells at… writing! Search https://standardebooks.org/ebooks/plato/dialogues/benjamin-jowett/text/single-page#phaedrus-text for “At the Egyptian city of Naucratis” and enjoy!)
The new crop of programmers is just new. They haven’t specialized yet! They come with, at most, an undergrad education that hopefully exposed them to Context-Free Grammars and Turing Machines and a moderately-complicated data structure like a red-black tree and a basic misunderstanding of what the halting problem is (they think it means you can’t write a program to tell whether a program will halt). Or maybe they come from a bootcamp and learned none of that. The career path of a programmer goes many places. Sure, some of this cohort will stay at the JS-framework-user level forever. That’s fine, it’s a job. Others will grow and learn and become specialists in very interesting areas, pushed by the two winds of their interests and immediate job necessities.
Yes and… the new crop of programmers likely includes a ton of people who do know. And also, the “new” crop of programmers has always included such folk.
What I’m saying is, in every generation there are always some people who say the next generation will be a disaster. Hasn’t happened yet :D
A fresh-out-of-school colleague of mine recently used Math.log2() to test whether a bit was set. They did study IPv4 and how e.g. address masks work. Not sure what to think about that.
I understand they spend most of their time coding pretty complex Vue apps, but still.
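For reference, the idiomatic bit test is a bitwise AND; Math.log2 only tells you the position of the highest set bit, and only cleanly for exact powers of two:

```javascript
// Testing whether bit k is set: shift a mask into place and AND,
// rather than going through floating-point logarithms.
const isBitSet = (value, k) => (value & (1 << k)) !== 0;

console.log(isBitSet(0xff, 7)); // true  (e.g. an all-ones netmask octet)
console.log(isBitSet(0x00, 7)); // false

// Math.log2 is only well-behaved here for exact powers of two:
console.log(Math.log2(8));  // 3
console.log(Math.log2(12)); // 3.58496... — useless as a bit test
```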
Largely agree, but after having spent the last year or so getting up to speed with modern frontend dev, I’m cranking JS syntax alone to like a 0.7 on the CS scale. Half of the damn language is build-time AST transformations + runtime polyfill. Learning Babel is practically a compilers course.
I’m not sure I understand you correctly, but let me reiterate on something.
Something being CS or not does not make it harder or more deserving of merit. There are things that require different levels of intelligence in CS; there are plenty of easy things.
And there are a lot of very difficult things in life which are not CS. Beating the world chess champion would make me much prouder than passing my first subjects in CS! As a less far-fetched example, there is plenty of software which is an engineering feat, but really is not a CS feat. (And vice versa, there’s a ton of “CS code” around which is rather simple in engineering terms.)
I’m not saying that your JS code was not CS-y- I would have to see it to give my opinion. But doing something hard with a computer does not mean you are doing CS (but it can!). And like, figuring out a complex CS proof frequently does not require any coding at all (but it can!).
I feel that if the teacher were instead explaining Nyquist-Shannon sampling theorem, the joke would work much better and be somewhat more balanced.
While analyzing algorithmic complexity can be useful, modern machines are quite unintuitive with register renaming and caches to the point that trying rough solutions and benchmarking often works better.
I got a similar impression initially, but it flipped to the opposite couple of months in. The big thing is that std in Zig is not special. There are no lang items. So, builtins are the entire interface between compiler and user code, it’s just that this interface isn’t particularly well-hidden in the guts of standard library, but is just there to be used.
In a similar vein, the fact that @import is reverse syntax sugar is cute. Import is syntax in the sense that the argument must be a string literal, and you can’t alias the import “function”, even at comptime. It fully behaves as a custom import “path” syntax, except that there’s no concrete syntax for it and a function call is reused.

Consider this term yoinked; I had been wondering what to call this lang-item-with-a-mustache.
Not to be confused with syntax salt, which is about having dedicated syntax which deliberately induces friction!
I hadn’t realized that ABI stability was a goal of C++ until there was a showdown over it. I’d always known people to use pure C ABIs between C++ binaries. I remember KDE folks padding their classes so that they could add members in micro-versions, but I don’t think they attempted compatibility across major versions.
There are three kinds of ABI here, and it’s worth unpicking them. Especially since none of them are actually part of the C++ standard.
First, there’s the lowering C++ language constructs to machine code and data layouts. This is the C++ ABI for a given platform. It defines things like how vtables and RTTI are organised, how exception handling works, and so on. There are basically two that anyone cares about: the Itanium one that almost everyone uses (with a few minor tweaks, such as the 32-bit Arm or Fuchsia variants) and the Visual Studio one that Visual Studio uses. The Itanium C++ ABI has been stable (with additions for new features) for over 20 years. Visual Studio reserves the right to change their ABI every release of Visual Studio, but in practice they don’t very often.
This ABI allows you to compile two C++ source files with different compilers and link them together. You can mix clang and gcc (or, these days, clang and cl.exe) in individual files and link the result together. This used to churn a lot in all C++ compilers, which was one of the reasons Linus didn’t like C++, but everywhere except Visual Studio is stable now.
Next, there’s the C++ standard library ABI. In libc++, for example, a bunch of things are hidden behind macros to avoid breaking the ABI. The C++ standard tries quite hard to avoid requiring these (sometimes it fails), they’re mostly for places where people figured out a more efficient (but not binary compatible) implementation at some point and want to allow people who don’t care to opt in. If a standard-library function changed in such a way that its name mangling changed, for example, or if a library function that’s inline required a modification that would cause an ODR violation, that would break the standard-library ABI.
As I recall, libstdc++ changed its ABI a few times in the early 2000s and then once for C++11. libc++ doesn’t support any pre-C++11 standards, so hasn’t had to yet. If you link something against the standard library, you want to have very strong backwards compatibility guarantees because everything links the standard library and so if two libraries (or a library and a program) expect different standard library ABIs then they can’t coexist in the same process.
Finally, there’s the ABI for individual C++ libraries. As with C, it’s entirely up to the library vendor how strong they make their ABI-compatibility guarantees. You can expose type-erased interfaces from a shared library and provide versioned inline wrappers in headers that do the type erasure. If you do this, you can build long-term stable ABIs. Alternatively, you can put all class definitions in headers and end up with something that breaks ABIs every minor release. Both C and C++ codebases have done the thing where they add padding fields to allow future expansion without breaking ABIs. In C++, you need to make sure that you have out-of-line constructors (including copy / move constructors) so that replacing these with a type that needs copying / initialising differently will work. You probably need out-of-line comparison operators as well. You can often avoid this if you have types that don’t need subclassing or stack allocation by making constructors private and providing factory methods that return shared / unique pointers.
The C++ standards committee is primarily concerned with the first two. New things in the standard shouldn’t require backwards-incompatible changes to either of these (new features requiring new bits of ABI are fine).
fwiw, MSVC 2015 and up at least promise binary compatibility, 2013 and earlier did not: https://learn.microsoft.com/en-us/cpp/porting/binary-compat-2015-2017?view=msvc-170
I assume name mangling is in the C++ ABI.
yup, here’s the Itanium spec
Yes, it’s quite a large part of the Itanium ABI spec.
Minor amendment about C++ ABI:
While https://libcxx.llvm.org/ mentions “targeting C++11 and above”, C++03 is still supported.
It’s a maintenance headache, and maintainer philnik recently proposed a patch to freeze the C++03 headers in a subdirectory: https://discourse.llvm.org/t/rfc-freezing-c-03-headers-in-libc/77319
I have some notes about libc++’s ABI compatibility and its implementation strategy https://maskray.me/blog/2023-06-25-c++-standard-library-abi-compatibility
libstdc++ has https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html
Absolute must read. Previously, on Lobsters there was discussion about feminist philosophy in the context of programming language design & implementation. Ideas surrounding the cultural context that langdev exists in have been rattling around in my brain for a while, especially after Evan Czaplicki’s talk on The Economics of Programming Languages. It is exceptionally rare to find engineers that are well versed in langdev, let alone this in tune with its societal backdrop. Sadly, it is excruciatingly obvious that coming to this understanding was painful to the author and I wish them and anyone else pushing C++ forward the best.
I’m concerned that Bluesky has taken money from VCs, including Blockchain Capital. The site is still in the honeymoon phase, but they will have to pay 10x of that money back.
This is Twitter all over again, including the risk of a hostile takeover. I don’t think they’re stupid enough to just let the allegedly-decentralized protocol take away their control when billions are at stake. They will keep users captive if they have to.
Hypothetically, if BlueSky turned evil, they could:
This would give them more or less total control. Other people could start new BlueSky clones, but they wouldn’t have the same network.
Is this a real risk? I’m not sure. I do know it’s better than Twitter or Threads which are already monolithic. Mastodon is great but I haven’t been able to get many non-nerds to switch over.
Hypothetically, the admins of a handful of the biggest Mastodon instances, or even just the biggest one, could turn evil and defederate from huge swathes of the network, fork and build in features that don’t allow third-party clients to connect, require login with a local account to view, etc. etc.
Other people could start clones, of course, but they wouldn’t have the same network.
(also, amusingly, the atproto PDS+DID concept actually enables a form of account portability far above and beyond what Mastodon/ActivityPub allow, but nobody ever seems to want to talk about that…)
The two situations are not comparable. If mastodon.social disappeared or defederated the rest of the Mastodon (and AP) ecosystem would continue functioning just fine. The userbase is reasonably well distributed. For example in my personal feed only about 15% of the toots are from mastodon.social and in the 250 most recent toots I see 85 different instances.
This is not at all the case for Bluesky today. If bsky.network went away the rest of the network (if you could call it that at that point) would be completely dead in the water.
While I generally agree with your point (my timelines on both accounts probably look similar), just by posting here we’ve probably disqualified ourselves from the mainstream ;) I agree with the post you replied to in that joe random (not a software developer) who came from Twitter will probably end up on one of the big instances.
For what it’s worth, I did the sampling on the account where I follow my non-tech interests. A lot of people ended up on smaller instances dedicated to a topic or geographical area.
While it’s sometimes possible to get code at scale without paying – via open source – it’s never possible to get servers and bandwidth at scale without someone dumping in a lot of money. Which means there is a threshold past which anything that connects more than a certain number of people must receive serious cash to remain in operation. Wikipedia tries to do it on the donation model, Mastodon is making a go at that as well, but it’s unclear if there are enough people willing to kick in enough money to multiple different things to keep them all running. I suspect Mastodon (the biggest and most “central” instances in the network) will not be able to maintain their present scale through, say, an economic downturn in the US.
So there is no such thing as a network which truly connects all the people you’d want to see connected and which does not have to somehow figure out how to get the money to keep the lights on. Bluesky seems to be proactively thinking about how they can make money and deal with the problem, which to me is better than the “pray for donations” approach of the Fediverse.
Your point is valid, though a notable difference with the fediverse is the barrier to entry is quite low - server load starts from zero and scales up more or less proportionally to following/follower activity, such that smallish but fully-functional instances can be funded out of the hobby money of the middle+ classes of the world. If they’re not sysadmins they can give that money to masto.host or another vendor and the outcome is the same. This sort of decentralisation carries its own risks (see earlier discussion about dealing with servers dying spontaneously) but as a broader ecosystem it’s also profoundly resilient.
The problem with this approach is the knowledge and effort and time investment required to maintain one’s own Mastodon instance, or an instance for one’s personal social circle. The average person simply is never going to self-host a personal social media node, and even highly technical and highly motivated people often talk about regretting running their own personal single-account Mastodon instances.
I think Mastodon needs a better server implementation, one that is very low-maintenance and cheap to run. The official server has many moving parts, and the protocol de-facto needs an image cache that can get expensive to host. This is solvable.
Right! I’ve been eyeing off GoToSocial but haven’t had a chance to play with it yet. They’re thinking seriously about how to do DB imports from Mastodon, which will be really cool if they can pull it off: https://github.com/superseriousbusiness/gotosocial/issues/128
Worst case one moves off again. That’s a problem for a future date.
That’s true, but I’ve been hooked on Twitter quite heavily (I was an early adopter) and invested in having a presence there. The Truth Social switcheroo has been painful for me, so now I’d rather have a smaller network than risk falling into the same trap again.
Relevant blog post from Bluesky. I’d like to think VCs investing into a PBC with an open source product would treat this differently than Twitter, but only time will tell.
OpenAI was a “non profit” until it wasn’t.
OpenAI never open-sourced their code, so Bluesky is a little bit different. It still has risks, but the level of risk is quite a bit lower than OpenAI’s was.
OpenAI open sourced a lot and of course made their research public before GPT3 (whose architecture didn’t change much[1]). I understand the comparison, but notably OpenAI planned to do this pseudo-non-profit crap from the start. Bluesky in comparison seems to be “more open”. If Bluesky turned evil, then the protocols and software will exist beyond their official servers, which cannot be said for ChatGPT.
[1]: not that we actually know that for a fact since their reports are getting ever more secretive. I forget exactly how open that GPT3 paper was, but regardless the industry already understood how to build LLMs at that point.
There’s some irony in a post about “digital feudalism” being posted on Medium…
Also very strange to see Tor lumped in with billionaire-owned Twitter; what a bizarre comparison.
I also find it strange that the author only claims Twitter became a propaganda machine after the Musk acquisition, rather than it simply always having been the case. Twitter has always been the most egregious example of a political battlefield.
And crossposted to Substack.
I don’t think irony is the right word. Perhaps congruity?
@mhatta it would be nice to read your posts in another place like write.as (which is open and supports ActivityPub, Markdown, etc.) ; )
Just self host your own blog. There is zero reason to write any content on a blog platform.
In fact, this is literally the cure to technofeudalism. Just don’t use these platforms. The web is open, anyone can publish there. You can market via other channels than SEO via search engines.
“Just”.
That’s a lot of things wrapped up in “just”
The web is open, if you have a credit card that US companies accept.
I can think of plenty of reasons not to self host.. I think the golden rule should simply be; use your own domain name. And if you use a service you don’t own, frequently export your content.
What are the reasons to not self-host?
I think the use of “just” in your post is not a good idea: https://www.tbray.org/ongoing/When/202x/2022/11/07/Just-Dont .
I agree that self-hosting is a good thing! But I also think it is not that easy to set up and maintain for everyone; and more importantly, threads like this one make it look like self-hosting is a prerequisite for writing about technofeudalism (or any other topic, actually). It would be a shame if people held back from writing only because they think they aren’t publishing their content in the “appropriate” way. Sure, a self-hosted website might be nicer. But publishing on Medium is still much better than not publishing at all.
see also https://www.todepond.com/wikiblogarden/better-computing/just/
& I agree with @amw-zero here, we could probably just reword it to “educating the masses to ‘host’ their own blog is a practical cure to technofeudalism” for some spectrum of ‘host’
When you say “self host”, did you mean buy a domain and set up a VPS with a web server? Or point DNS to a server on your personal internet connection? Or sign up with some free hosting service?
I’m curious about the easiest way to get started without relying on some large entity. Where should the line be drawn?
Yeah, for sure, but I decided to recommend write.as as a “middle option” after seeing his linked profile on about.me. Maybe he is awakening (hopefully not cognitive dissonance); he is a professor and researcher, and I have friends like him who don’t have the time or inclination to use a static site generator and self-host it yet…
I think I’ve seen this project before. It looks like it has some interesting ideas, but the custom EULA gives off massive red flags. Quoting from a random section I scrolled to:
No thanks!
Similar work is being done in salsa which is inspired by rustc’s internals.
…which is quickly followed by:
The “Assignment of Rights” seems no different from the myriad CLAs developers sign all the time.
Oh, good point, I completely missed that one; it definitely gives off red-flag vibes.
I’m in love with this proposal tbh. I understand the concerns that autoclaim loses “explicitness” (the word already appears over 20 times in these comments), but I view this change in behavior as a huge ergonomic win. I was just reading @withoutboats’ post Not Explicit (2017) earlier this week and find its splitting of “explicit” into a few different buckets quite useful here. Allow me to quote a bit:
Now allow me to juxtapose this with a snippet from Niko’s post:
This is a very manual ritual we are all familiar with. Manual actions can be great for making expensive operations clear to readers, something Rust users are keenly aware of, but these `.clone()` calls are cheap operations that do not contribute to any meaningful insight of your code. You know what’s going on here: a(n atomic ref to a) number has been incremented. This is not a cost that I believe should be manual.

Even worse, if these types weren’t `Rc`/`Arc`, the `.clone()` ritual doesn’t communicate if these clones are cheap or expensive, and our existing mental pattern matching on this lambda ceremony may let us erroneously assume these are `Rc`/`Arc`. Further, refactors to these types won’t require updating usages. With this proposal, if `ctx.disk` is replaced with a non-`Claim` type that is expensive to copy, it could not be autoclaimed and this usage would be a compilation error (iiuc).

On top of removing the auto-copying of arrays, this proposal looks great! I hope it goes far.
That’s not necessarily true. When I see this code, there’s no guarantee that the nearest indication that those values are `Rc`/`Arc`’d will be close by. Unless it is, that’s a loss of locality.

The explicit reading of it, currently, is that there will be no aliasing going on, lock-guarded or otherwise, because they’ll be moved unless they’re `memcpy`-safe.

That said, I wouldn’t have anything against something like this hypothetical syntax, which relies on some hypothetical `RefCount` trait that non-`Rc`/`Arc` `Clone`-ables don’t implement (thus addressing the “One-clone-fits-all creates a maintenance hazard” aspect of the proposal):