1. 35

If there are any questions or remarks, I am right here!

1. 15

I wish I could upvote this story multiple times. The perfect combination of being approachable, while still being packed with (to me) new information. Readable without ever being condescending.

One thing I learned was that DNA printers are a thing nowadays. I had no idea. Are these likely to be used in any way by amateur hackers, in the sense that home fusion kits are fun and educational, while never being useful as an actual energy source?

1. 14

So you can actually paste a bit of DNA on a website and they’ll print it for you. They ship it out by mail in a vial. Where it breaks down is that before you inject anything into a human being, you need to be super duper extra totally careful. And that doesn’t come from the home printer. It needs labs with skilled technicians.

1. 7

Could any regular person make themselves completely fluorescent using this method? Asking for a friend.

2. 5

You may be interested in this video: https://www.youtube.com/watch?v=2hf9yN-oBV4 Someone modified the DNA of some yeast to produce spider silk. The whole thing is super interesting (if slightly nightmarish at times if you’re not a fan of spiders).

1. 2

So that’s going to be the next bioapocalypse then. Autofermentation but where as well as getting drunk, you also poop spider silk.

3. 8

Love the article. Well done.

1. 5

Thanks for the awesome article! Are there any specific textbooks or courses you’d recommend to build context on this?

1. 12

Not really - I own a small stack of biology books that all cover DNA, but they cover it as part of molecular biology, which is a huge field. At first I was frustrated about this, but DNA is not a standalone thing. You do have to get the biology as well. If you want to get one book, it would have to be the epic Molecular Biology of the Cell. It is pure awesome.

1. 2

You can start with molecular biology and then a quick study of bio-informatics should be enough to get you started.

If you need a book, I propose this one, it is very well written IMO and covers all this stuff.

2. 2

Great article! I just have one question. I am curious why this current mRNA vaccine requires two “payloads”? Is this because it’s so new and we haven’t perfected a single shot, or is there some other reason?

1. 2

As I understand it[1] a shot of mRNA is like a blast of UDP messages from the Ethernet port — they’re ephemeral and at-most-once delivery. The messages themselves don’t get replicated, but the learnt immune response does permeate the rest of the body. The second blast of messages (1) ensures that the messages weren’t missed and (2) acts as a “second training seminar”, refreshing the immune system’s memory.

[1] I’m just going off @ahu’s other blogs that I’ve read in the last 24 hours and other tidbits I’ve picked up over the last 2 weeks, so this explanation is probably wrong.

1. 2

It’s just the way the two current mRNA vaccines were formulated, but trials showed that a single shot also works. We now know that two shots are not required.

1. 2

The creators of the vaccine say it differently here: https://overcast.fm/+m_rp4MLQ0 If I remember correctly, they claim that one shot protects you but doesn’t prevent you from being infectious, while the second makes sure that you don’t infect others.

2. 1

Not an expert either, but I think this is linked to the immune system response: like with some other vaccines, the system starts to forget, so you need to remind it what the threat was.

3. 1

Is there any information on pseudouridine and tests on viruses incorporating it in their RNA?

The one reference in your post said that there is no machinery in cells to produce it, but the wiki page on it says that it is used extensively in the cell outside of the nucleus.

It seems incredibly foolhardy to send out billions of doses of the vaccine without running extensive tests since naively any virus that mutated to use it would make any disease we have encountered so far seem benign.

1. 1

Pseudouridine is an RNA modification that is made post-transcription, so after the RNA is formed.

That seems to mean (to me, who is not a biologist) that a virus would have to grow the ability to do/induce such a post-processing step. Merely adding Ψ to sequences doesn’t provide a virus with a template to accelerate such a mutation.

1. 1

And were this merely a nuclear reactor or adding cyanide to drinking water I’d agree. But ‘I’m sure it will be fine bro’ is how we started a few hundred environmental disasters that make Chernobyl look not too bad.

‘We don’t have any evidence because it’s obvious so we didn’t look’ does not fill me with confidence given our track record with biology to date.

Something like pumping rats with pseudouridine up to their gills, then infecting them with rat HIV for a few dozen generations and measuring whether any of the virus starts incorporating pseudouridine in its RNA, would be the minimum study I’d start considering as proof that this is not something that can happen in the wild.

1. 2

As I mentioned, I’m not a biologist. For all I know they did that experiment years ago already. Since multiple laymen on this forum came up with that concern within a few minutes of reading the article, I fully expect biologists to be aware of the issue, too.

That said, in a way we have that experiment already going on continuously: quickly evolving viruses (such as influenza) that mess with the human body for generations. Apparently they encountered pseudouridine regularly (and were probably at times exposed to PUS1-5 and friends that might have swapped out an U for a Ψ in a virus accidentally) but still didn’t incorporate it into their structure despite the presumed improvement to their fitness (while eventually leading our immune system to incorporate a response to that).

Which leads me to the conclusion that

1. I’d have to dig much deeper to figure out a comprehensive answer, or
2. I’ll assume that there’s something in RNA processing that makes it practically impossible for viruses to adopt that “how to evade the immune system” hack on a large scale.

Due to lack of time (and a list of things I want to do that already spans 2 or 3 lifetimes) I’ll stick to 2.

2. 1

I enjoyed the article, reminded me of my days at the university :-)

So here are some quick questions in case you have an answer:

• Where does the body store info about which proteins are acceptable vs not?
• How many records can we store there?
• Are records indexed?
• How does every cell in the body get this info?
1. 12

It is called negative selection. It works like this:

1. The body creates lots of white blood cells by random combination. Each cell has random binding sites that bind to specific proteins, and it will attack whatever it binds.
2. Newly created white blood cells are set loose in a staging area, which is presumed to be free of threats. Any cell that triggers an alarm in the staging area kills itself.
3. The white blood cells, negatively selected not to react to self, mature and are released to production.
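
A toy sketch of that negative-selection loop (everything here - the shape space, the “self” set, the counts - is made up purely for illustration):

```python
import random

random.seed(0)

SHAPES = list(range(100))              # toy universe of binding shapes
SELF = set(random.sample(SHAPES, 20))  # shapes belonging to the body itself

# 1. Create lots of cells, each binding one random shape
naive_cells = [random.choice(SHAPES) for _ in range(1000)]

# 2. Staging area: any cell that binds a "self" shape kills itself
survivors = [c for c in naive_cells if c not in SELF]

# 3. Survivors mature and are released: none reacts to the body's own shapes
assert all(c not in SELF for c in survivors)
```
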
1. 1

Interesting, thanks for sharing!

2. 5

How does info spread through the body

I came across this page relatively recently and it really blew my mind.

glucose is cruising around a cell at about 250 miles per hour

The reason that binding sites touch one another so frequently is that everything is moving extremely quickly.

Rather than bringing things together by design, the body can rely on high-speed stochastic events to find solutions.

This seems related, to me, to sanxiyn’s post pointing out ‘random combination’ - the body:

• Produces immune cells which each attack a different, random shape.
• Destroys those which attack bodily tissues.
• Later, makes copies of any which turn out to attack something that was present.

This constant, high-speed process can still take a day or two to come up with a shape that’ll attack whatever cold you’ve caught this week - but once it does, that shape will be copied all over the place.
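
A crude way to picture that random-search-then-copy process (the numbers are invented; the real shape space is unimaginably larger):

```python
import random

random.seed(1)

SHAPE_SPACE = 100_000                  # distinct binding shapes (made-up scale)
pathogen = random.randrange(SHAPE_SPACE)

# Constantly produce cells with random receptors until one happens to match.
tries = 0
match = None
while match != pathogen:
    match = random.randrange(SHAPE_SPACE)
    tries += 1

# Once a match is found ("this shape attacks something present"), that one
# lucky shape gets copied all over the place - clonal expansion.
army = [match] * 10_000
assert all(cell == pathogen for cell in army)
```
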

1. 2

I did some projects in grad school with simulating the immune system to model disease. Honestly we never got great results because a lot of the key parameters are basically unknown or poorly characterized, so you can get any answer you want by tweaking them. Overall it’s less well understood than genetics, because you can’t study the immune system in a petri dish. It’s completely fascinating stuff though: evolution built a far better antivirus system for organisms than we could ever build for computers.

1. 11

Surprised they don’t mention the singular value decomposition at all for dimensionality reduction. A friend also pointed me to the Johnson-Lindenstrauss Theorem, which “shows that a set of n points in high dimensional Euclidean space can be mapped into an O(log n/ε²)-dimensional Euclidean space such that the distance between any two points changes by only a factor of (1 ± ε)”.

Maybe it’s because their solution needs to run online? But they don’t mention that in their blog post.

EDIT: Sorry, didn’t want to come across as totally negative. I definitely think the solution they ended up going with is clever. I just think it would be more interesting if they’d talked about more prior art.
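
For what it’s worth, the Johnson-Lindenstrauss idea is easy to try out with a plain Gaussian random projection (point counts, dimensions, and the ε = 0.5 target below are chosen arbitrarily for the demo):

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, k = 50, 1_000, 200               # 50 points, 1000 dims, project to 200

X = rng.standard_normal((n, d))
R = rng.standard_normal((d, k)) / np.sqrt(k)   # random Gaussian projection
Y = X @ R

def pairwise(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

D_hi, D_lo = pairwise(X), pairwise(Y)
mask = ~np.eye(n, dtype=bool)                  # ignore zero self-distances
distortion = np.abs(D_lo[mask] / D_hi[mask] - 1)
assert distortion.max() < 0.5                  # every distance within ±50%
```

In practice with k = 200 the observed distortion is far below the ε = 0.5 bound; the striking part is that k depends on the number of points, not on the original dimension.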

1. 4

My understanding of SVD is that for it to make sense the dimensions need to be correlated in some way. This is not the case for our workload - for all intents and purposes the dimensions are pretty random.

Johnson-Lindenstrauss Theorem - amazing. I’m not sure what the implications are, would need to think about it. Thanks for the link!

1. 5

This is hilarious. Can’t wait for the next generation of Wall Streeters–this takes trying to exploit each other with deceptively written contracts to the next level. “Code is law!”

1. 6

Another rule of thumb I’ve heard is that humans have a lot of trouble keeping track of more than two quantifiers in a sentence. This was to introduce the Pumping Lemma, which uses three :P

1. 8

My favorite abuse of prime numbers is to solve interview questions that ask you to detect whether two strings are palindromes: Assign a prime to each letter of the alphabet the strings are made out of and multiply. If the numbers you get are the same, the strings are palindromes.

1. 4

Do you mean anagrams? :p

1. 1

Oh I do! orz
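
For the record, the trick detects anagrams because multiplication is commutative and prime factorization is unique, so letter order drops out. A quick sketch:

```python
import math

# One prime per letter a-z
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
          53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]

def signature(word):
    # Product of one prime per letter; order of letters doesn't matter
    return math.prod(PRIMES[ord(c) - ord('a')] for c in word.lower())

assert signature("listen") == signature("silent")        # anagrams
assert signature("stressed") == signature("desserts")    # anagrams
assert signature("hello") != signature("world")
```
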

1. 1

Interesting approach, but I’m curious in what ways this is better than a prefix tree.

1. 2

This is just a fun approach. As another comment mentions, use INTEGER[]. Having to do factorization for every comment whose parents you want to display is perhaps not expensive, but definitely not free.

1. 2

Recursive CTE is the solution in this case. Solves both “would need N joins” and “can’t count(*) comfortably”.
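
For instance, with SQLite (table and data invented for illustration), a recursive CTE walks a comment’s ancestry in a single query, with no N joins and no factorization:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE comments (id INTEGER PRIMARY KEY, parent INTEGER, body TEXT);
INSERT INTO comments VALUES
  (1, NULL, 'root'),
  (2, 1,    'reply'),
  (3, 2,    'reply to reply'),
  (4, 1,    'other reply');
""")

# Walk from comment 3 up to the root, then print root-first.
rows = con.execute("""
WITH RECURSIVE thread(id, parent, body, depth) AS (
  SELECT id, parent, body, 0 FROM comments WHERE id = 3
  UNION ALL
  SELECT c.id, c.parent, c.body, t.depth + 1
  FROM comments c JOIN thread t ON c.id = t.parent
)
SELECT id, body FROM thread ORDER BY depth DESC;
""").fetchall()

assert rows == [(1, 'root'), (2, 'reply'), (3, 'reply to reply')]
```
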

2. 2

I don’t think it is better than a prefix tree. Using prime numbers is kind of funny and original, but I’ve never used this approach in a real product and I normally wouldn’t!

1. 1

Ah, okay. Sorry for being confused :o

1. -1

Apparently the answer has a lot to do with the fact that .concat creates a new array while .push modifies the first array.

Does the fact that this is slow actually merit an article and benchmarks these days? What next, quickselect is 1000x faster than going through the whole list?

1. 15

Well, every programmer in a high-level PL has to learn where its abstractions leak at some point. Like, “allocations and copies are slow”. And since we keep making new people, articles like these are going to be written again and again. That’s a good thing.
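
The leak is easy to demonstrate in any high-level language; here’s a Python analogue of concat-vs-push (sizes picked arbitrarily):

```python
import timeit

def build_concat(n):
    xs = []
    for i in range(n):
        xs = xs + [i]    # allocates and copies a brand-new list each time: O(n^2)
    return xs

def build_append(n):
    xs = []
    for i in range(n):
        xs.append(i)     # amortized O(1) per element: O(n) overall
    return xs

n = 3_000
t_concat = timeit.timeit(lambda: build_concat(n), number=3)
t_append = timeit.timeit(lambda: build_append(n), number=3)

assert build_concat(200) == build_append(200)   # same result either way
assert t_concat > t_append                      # the copying version loses badly
```
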

1. 2

I’ve programmed in both JavaScript and C++ professionally at different times and I did not know that there was that much overhead associated with the new array creation. It’s a lot more than I would have guessed.

1. 1

You don’t really have to know unless you’re writing code that has to run really fast or very often. I learnt a lot about how expensive allocation is in Ruby by solving some Project Euler problems with it.

2. 11

The sheer effort and polish of the post makes it worthwhile. I’ll happily read about something I already know if it’s presented well.

1. 50

I’m not suggesting that we give up React and es7 and go back to writing server-templated web-apps like it’s 2012 again

ok. I’ll be the one to suggest going back to server templates. They’re low fat and gluten free, and come in all your favorite flavors. Try one today!

1. 8

The amazing thing is we managed to let a company whose application consists almost entirely of static content and buttons decide for us the best way to program rich and interactive web applications.

1. 2

Who?

1. 1

Facebook I presume, since this is in response to a comment about React.

2. 1

Phoenix LiveView comes to mind. Maybe not very usable for mobile, but it felt like a breath of fresh air to me, and at the same time it can be quite low-latency from the user’s perspective.

1. 3

My favorite bit:

Yagni is not a justification for neglecting the health of your code base. Yagni requires (and enables) malleable code.

I also argue that yagni only applies when you introduce extra complexity now that you won’t take advantage of until later. If you do something for a future need that doesn’t actually increase the complexity of the software, then there’s no reason to invoke yagni.

1. 5

Very nice! Normally I find ASCII text is pretty easy to recognize anyways in split-view hex viewers, but this sounds like a pretty solid idea and I’m looking forward to trying it out next time I need it.

1. 7

The author has come up with a fancy name for systems allowing communication (well, they actually just focus on shared memory). The author uses the power of this abstraction to whine about WebExtensions and webpages that require JavaScript. I regret reading this because there is no payoff.

1. 2

I regret reading this because there is no payoff.

Delightfully savage.

1. 1

He whines about requiring JavaScript in exactly one parenthetical.

1. 3

Sadly MathML is essentially unusable because Google implemented and then removed MathML support from Chrome. Apparently their implementation was so poorly written that it introduced security issues. This caused a lot of anger, and is why MathJax etc. ended up becoming the way to display mathematics on the web. Nowadays MathJax isn’t so bad, but the whole situation is pretty silly.

https://bugs.chromium.org/p/chromium/issues/detail?id=152430#c43

1. 8

The browser vendors weren’t willing to change their products to be compliant with the RFC, because websites already rely on the non-compliant behavior. They wrote their own spec that matches the behavior the web relies on. That behavior apparently cannot be modeled using EBNF.

If you can write a grammar that matches the behavior of Internet Explorer, Chrome, and Firefox well-enough to not break web pages that currently work with all of them, I’m sure they’d love to see it.

1. 2

Yea, I find the sentiment being expressed here kind of silly. “Browser spec authors should give up on actually caring about writing specs that (a) actually standardize behavior, as opposed to describing some ideal that no one implements, and (b) maintain backwards compatibility with the actual web because it makes it hard to implement URI parsing for my hobby project!”

The implication is also that the spec writers are either dumb or purposely writing bad specs.

Maybe your hobby project doesn’t need to work with real world data, but if a spec writer decides things should just stop working, no browser will implement the spec. And if a browser decides the same, people will just stop using that browser.

It’s unfortunate that the state of things is like this. But people still write browser engines and spec compliant URI parsers from scratch in spite of this, because their goals are aligned with that of the spec—to work with real world data.

If you don’t care about parsing all the URIs browsers need to parse, there’s absolutely no shame in only parsing a restricted syntax. In fact, depending on your problem domain, this might be better engineering.

1. 2

I would say that this is a good use-case for warnings. Browsers (especially Chrome) have been discouraging poor practices for a while now. For example, HTTP pages don’t get the “padlock” that a significant proportion of users have been trained to look for; then HTTP resources on HTTPS pages were targeted; and now HTTP itself gets flagged as “not secure”.

If it makes sense to flag the entire HTTP Web, then I’d find it perfectly acceptable to (gradually) roll this out for malformed data too; e.g. if a page’s URL doesn’t follow a formal spec, if its markup or CSS is malformed, etc. This is justifiable in a few ways:

• As the article’s langsec argument states: if data isn’t validated against a (decidable) formal language before getting processed, there is potential for exploits.
• In the case of backwards compatibility, we can consider “legacy” content (created before the relevant specs) as if it complies with some hypothetical previous spec, like “Works for me in Netscape Navigator”. In which case that hypothetical spec is out of date, in the same way that e.g. Windows 98 is out of date: it’s no longer maintained, and won’t receive any security patches. Hence it’s legitimate to consider such legacy formats as not secure.
• In the case of forwards compatibility, it can still be considered as not secure, since we’re skipping parts of the data we don’t understand, which were presumably added for a reason and may be needed for security.
1. 2

I agree with you in principle, but I don’t think this is a good idea in practice.

As the article’s langsec argument states: if data isn’t validated against a (decidable) formal language before getting processed, there is potential for exploits.

Using yacc, say, instead of writing your own recursive descent parser, doesn’t magically make your parser safe—in fact, far from it. I don’t buy the safety argument at all, having had the joy of encountering bizarre parser generator bugs.

Moreover, the point is that the URI grammar here can’t be described as a CFG. So forget about using a nice standard LR parser generator that outputs to a nice automaton; the idea of gaining security by relying on parser generators used by lots of other people is moot.

In the case of backwards compatibility, we can consider “legacy” content (created before the relevant specs) as if it complies with some hypothetical previous spec, like “Works for me in Netscape Navigator”. In which case that hypothetical spec is out of date, in the same way that e.g. Windows98 is out of date: it’s no longer maintained, and won’t receive any security patches. Hence it’s legitimate to consider such legacy formats as not secure.

And now you have two parsers, so not only are any potential security exploits still there, but it is now even more likely that you’ll forget about some edge case.

1. 1

I agree, but note that I’m not arguing that something or other is secure. Rather, I’m saying that adding a “Not Secure” warning might be justifiable in some cases, and this lets us distinguish between the extreme positions of “reject anything that disagrees with the formal spec” and “never remove any legacy edge-cases”.

The usual response to warnings is that everybody ignores them, but things like “Not Secure” seem to be taken seriously by Web devs. Google in particular has used that to great effect by incentivising HTTPS adoption, disincentivising HTTP resources on HTTPS pages, etc.

And now you have two parsers, so not only are any potential security exploits are still there, but it is now even more likely that you’ll forget about some edge case.

That is indeed a problem. Splitting parsing into a “strict parser” and “lax parser” can obviously never remove exploits, since we’re still accepting the old inputs. It can be useful as an upgrade path though, to “ratchet up” security. It gives us the option to disable lax parsing if desired (e.g. a ‘hardened mode’), it allows future specs to tighten up their requirements (e.g. in HTTP 3, or whatever), it allows user agents which don’t care about legacy inputs to avoid these problems entirely, etc.

I definitely agree that having backwards-compatible user agents using multiple parsers is bad news for security in general, since as you say it increases attack surface and maintenance issues. That burden could be reduced with things like parser generators (not necessarily yacc; if it’s sufficiently self-contained, it could even be formally verified, like the “fiat cryptography” code that’s already in Chrome), but it’s certainly real and is probably the biggest argument against doing this in an existing general-purpose user agent.

1. 2

I see. Yea, it seems like some way to deprecate weird URLs would be best. Just not sure how to do it. “Not Secure” seems like a reasonable idea, but I wonder if it’d become overloaded.

1. 16

This started out as a total rant about the current state of the web, insecurity and the lack of proper rigidity in specifications. I decided not to post it while I was all riled up. The next day I rewrote it in its current form. It’s still a bit one-sided as I’m still having trouble understanding their reasoning. I vainly hope they’ll either give me a more coherent explanation why they dropped the formal grammar, or actually fix it.

1. 16

The formal grammar doesn’t reflect reality. The browsers started diverging from it years ago, as did the server authors. Sad, but true of many, many similar specifications. The WHATWG spec is a descriptive spec, not a prescriptive one: it was very carefully reverse engineered from real behaviours.
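
One small, concrete instance of the divergence, using Python’s urllib (which follows the RFC-style grammar). Per the WHATWG URL Standard, browsers normalize “\” to “/” for http(s) URLs, so they parse the same string completely differently:

```python
from urllib.parse import urlsplit

# The raw string here is  http:\\example.com\page  (backslashes, not slashes).
p = urlsplit("http:\\\\example.com\\page")

# An RFC-3986-flavored parser only recognizes an authority after a literal
# "//", so there is no host and the backslashes stay in the path:
assert p.netloc == ""
assert p.path == "\\\\example.com\\page"

# A WHATWG-conformant parser (every major browser) would instead yield
# host "example.com" and path "/page".
```
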

1. 8

You can model that, too. Specs trying to model C with undefined behavior, or protocol operation with failure modes, just add the extra stuff in there somewhere - preferably outside of the clean, core, expected functioning. You still get the benefits of a formal spec; you just have to cover more ground in it. It’s also good to do spec-based test generation, run against all those clients, servers, or whatever, to test the spec itself for accuracy.

1. 2

… that’s exactly what these modern bizarro algorithmic descriptions of parsers are—rigorous descriptions of real behaviors that have been standardized. “Just add the extra stuff” and this is what you get.

It sounds like by a “formal spec” you mean a “more declarative and less algorithmic” spec, which definitely seems worthwhile. But be clear about what you want and how it’s different from what people have been forced to do by necessity in order to keep the web running.

1. 1

By formal spec, I mean formal specification: a precise, mathematical/logical statement of the standard. A combo of English and a formal spec (especially an executable one) will remove ambiguities, highlight complexities, and aid correct implementation.

Certain formal languages also support automatic test generation from specs. That becomes a validation suite for implementations. A formal spec also allows for verified implementations, whether partial or full.

1. 2

I am exceedingly familiar with what a formal specification is. I am pretty sure you are confused about the difference between rigor and a declarative style—the two are entirely orthogonal. It is possible to specify something in an algorithmic style and be entirely unambiguous, highlight complexities, aid correct implementation, and support automatic test generation; moreover, this has been done and is done extremely often—industry doesn’t use (or get to use) parser generators all the time.

1. 1

Ok, good that you know it. It’s totally possible I’m confused on rigor. I’ve seen it used in a few different ways. How do you define it?

1. 2

Sorry for the delay, renting a car :(

I would define rigor as using mathematics where possible, and extremely precise prose when necessary, to remove ambiguity, like you pointed out. Concretely, rigor is easier to achieve when the language you are writing in is well defined.

If you write using mathematical notation, you get the advantage of centuries of development in precision—you don’t have to redefine what a cross product or set minus or continuity is, for example, which would be very painful to do in prose.

Specs try to achieve the same thing by using formalized and often stilted language and relying on explicit references to other specs. Because mathematicians have had a much longer time to make their formalisms more elegant (and to discover where definitions were ambiguous—IIRC Cauchy messed up his definition of convergence and no one spotted the error for a decade!) specs are often a lot clunkier.

For an example of clunkiness, look at the Page Visibility API. It’s an incredibly simple API, but even then the spec is kind of painful to read. Sorry I can’t link to the specific section, my phone won’t let me. https://www.w3.org/TR/page-visibility/#visibility-states-and-the-visibilitystate-enum

Separately, for an example of formal methods that looks more algorithmic than you might normally expect, see NetKAT, which is a pretty recent language for programming switches. https://www.cs.cornell.edu/~jnfoster/papers/frenetic-netkat.pdf

Certainly web spec authors have a long way to go until they can commonly use formalisms as nice as NetKAT’s. But they still have rigor, just within the clunky restrictions imposed by having to write in prose.

2. 5

I have to parse sip: and tel: URLs (RFC-3261 and RFC-3966) for work. I started with the formal grammar specified in the RFCs (and use LPeg for the parsing) and even then, it took several iterations with the code to get it working against real-world data (coming from the freaking Monopolistic Phone Company of all places!). I swear the RFCs were written by people who never saw a phone number in their life. Or were wildly optimistic. Or both. I don’t know.
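
As a flavor of the problem (grossly simplified - these are hypothetical toy patterns, nowhere near the full RFC 3966 ABNF, with no parameters or local numbers), real-world numbers arrive with visual separators that a naive grammar rejects:

```python
import re

# Toy patterns for illustration only, not the real grammar from the RFC.
STRICT = re.compile(r"^tel:\+[0-9]+$")          # digits only after "+"
LENIENT = re.compile(r"^tel:\+?[0-9().\- ]+$")  # tolerate common separators

assert STRICT.match("tel:+15551234567")
assert not STRICT.match("tel:+1-555-123-4567")  # real-world data breaks it
assert LENIENT.match("tel:+1-555-123-4567")
assert LENIENT.match("tel:+1 (555) 123.4567")
```
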

1. 8

I may hazard a guess… I watched the inception of WHATWG and used to follow their progress over several years, so I have a general feeling of what they’re trying to do in the world.

WHATWG was born as an antithesis to the W3C’s effort to enforce a strict XHTML on the Web. XHTML appealed to developers, both of Web content and of user agents, because, honestly, who doesn’t want a more formal, simpler specification? The problem was that the world at large is not rigid and poorly lends itself to formal specifications. WHATWG realized that and attempted to simply describe the Web in all its ugliness, complete with reverse-engineered error handling of non-cooperative browsers. They succeeded.

So I could imagine the reasoning for dropping the formal specification is due to admitting the fact that it can’t be done in a fashion compatible with the world. Sure, developers would prefer to have ABNF for URLs, but users prefer browsers where all URLs work. Sorry :-(

1. 3

This is my understanding too, but you still need to nail down some sort of “minimally acceptable” syntax for URLs to prevent further divergence and to guide new implementations.

1. 6

Meanwhile, outside of Google, JavaScript also continued to evolve and surprisingly even became popular. In part to work around those IE garbage collection bugs we built Chrome which led to v8 which led to nodejs which means most web tools today are themselves written in JavaScript, unlike the Java used to build this sort of tool at Google.

This is not the right history. Google built Chrome because they became frustrated with having to cooperate with Mozilla; previously they had assigned engineers to work on Firefox. Chrome was almost certainly built so that Google would not have to worry about Mozilla changing their default search engine, or about Mozilla implementing/refusing to implement features that Google didn’t want/wanted.

1. 5

The author was an early engineer on the Chrome project and the former TL of the Linux port. He says “in part”, fully understanding that Chrome was built for a number of reasons and that some of them are relevant to his post while others, frankly, aren’t.

1. 2

Why do Medium posts about front-end development always make it seem like such a soap opera? The actual people I know who are into front-end development are all reasonable smart people.

I think there must just be some toxic mutual appreciation subsubculture of front-end “engineers” who like to blog drama.

The actual content of this post when you strip out all of the “I’ll support my French bros” and expletives is minuscule and entirely unoriginal.

1. 22

Well done EU! Google fully deserves this.

Personal anecdote: I worked at Samsung on the team working on its own browser. In 2012, Android WebKit was old and problematic. Google pushed Chrome for Android and my team was disbanded “due to pressure from Google”.

1. 4

You’d think Google would be more confident in their own browser. I hadn’t heard your story before, that is probably the worst one I’ve heard yet. They also do things like accidentally forget to test their websites on other browsers.

1. 2

I think Google was correct to be afraid. (But I would think so, wouldn’t I?) Historical case of Microsoft and Internet Explorer comes to mind.

1. 12

I understand that this might be serious, but it seriously reads like a parody of techie gear-obsession. GUIs (including vi) were invented for a reason, and though you don’t need to like them, when I read that “the file system is likewise adequate for organizing ones work if the user works out a reasonable naming convention” I can’t help but think of someone who exclusively uses a typewriter or someone else who uses only paper and fountain pen saying the exact same thing. And of course such people do exist, which makes the entire idea of claiming something extremely high up on the ladder of relative complexity is “adequate if the user is reasonable” rather silly.

1. 3

Reading through the original message, I found myself wondering if this were real or not as well. It seems like it’s a different form of hipsterism, based in computers instead of something more analogue.

I guess it’s nice if he actually enjoys that flow, but I find it hard to believe it’s more productive than opening a modern text editor.

1. 3

I mean, there have been other posts about how a lot of big authors still use WordStar. Maybe this was a parody of some kind? It kinda gets into Poe’s Law territory.

2. 2

Neal Stephenson wrote ‘Cryptonomicon’ and the Baroque Cycle books with a fountain pen for the first drafts and then used Emacs for the revisions and polishing up.

Edit: Neil Gaiman uses fountain pens exclusively for his writing, and has said that using computers actually reduces his productivity.

1. 2

My favorite quote by Patrick O’Brian was when he was asked what word processor he used:

I use pen and paper, like a Christian.

1. 5

Also known as McNamara’s fallacy.

https://en.wikipedia.org/wiki/McNamara_fallacy

TBH mathwashing doesn’t seem like a very good name.

1. 3

The advice I’ve seen the LessWrong people give is to take the time and do all the math you reasonably can, but if despite all the calculations you still feel like it’s telling you to do the wrong thing, just do the right thing anyway. Doing the math is important for influencing that gut feeling, but you shouldn’t ignore the feeling. I hadn’t known there was a name for doing the opposite, though.

1. 1

Well I’d say it’s more descriptive than “artificial intelligence”, given we usually speak of cybernetics instead.

Do you have an alternative term to propose?