My take on the software verification prediction (granting the economic assumptions): after a few disastrous attacks, it turns out that inexperienced people are bad at specifying what exactly to verify, and LLMs still hallucinate sometimes, in exploitable ways.
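To make the under-specification point concrete, here’s a toy sketch (mine, in Python with the Hypothesis property-testing library; nothing like it appears in the story): a spec that demands the output be sorted but forgets to demand it be a permutation of the input, so a useless or adversarial implementation “verifies” just fine.

```python
# Toy illustration of an incomplete specification: the property below
# only checks sortedness, so an implementation that throws the data
# away still passes "verification".
from hypothesis import given, strategies as st

def broken_sort(xs):
    # A hallucinated/adversarial implementation: it satisfies the
    # stated property while discarding every element.
    return []

@given(st.lists(st.integers()))
def test_output_is_sorted(xs):
    out = broken_sort(xs)
    # Incomplete spec: checks ordering only, never that `out`
    # contains the same elements as `xs`.
    assert all(a <= b for a, b in zip(out, out[1:]))
```

Run that under pytest and it passes; the missing clause (that the output is a rearrangement of the input) is exactly the kind of thing an inexperienced specifier omits, and exactly the kind of gap an attacker exploits.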
Another tired dream-fantasy from another tedious True Believer who has never stopped to question how one goes from “spicy autocomplete” to “deus ex machina”.
LLMs do not think and do not reason, but they do squander vast amounts of energy and resources to do an extremely poor imitation of thinking and reasoning by remixing human-written text that contains thinking and reasoning. Sadly, many people are unable to discern this vital difference, just as they can’t tell bot-generated images from reality.
Very elaborate prompting can embed human-created algorithms, in effect making LLM bots into vastly inefficient interpreters: multiple orders of magnitude less efficient even than the bloated messes of 202x code such as interpreters in Wasm running inside 4 or 5 nested levels of virtualisation.
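As a toy illustration of that interpreter point (mine, not part of the original comment): the algorithm below lives entirely in a human-written prompt, so a model following it would merely be executing human-created steps, token by expensive token, where a few lines of ordinary code do the same job in microseconds.

```python
# Hypothetical prompt embedding a human-created algorithm (bubble sort).
# An LLM following these steps acts as a wildly inefficient interpreter
# for instructions a human already worked out.
prompt_template = """You are given this list of numbers: {numbers}.
Repeat until a full left-to-right pass makes no swaps:
  1. Scan the list left to right.
  2. Whenever an adjacent pair is out of order, swap it.
Output only the final list."""

# The same human-created algorithm as ordinary code, executed directly:
def bubble_sort(xs):
    xs = list(xs)
    for end in range(len(xs) - 1, 0, -1):
        for j in range(end):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

print(bubble_sort([3, 1, 2]))  # [1, 2, 3]
```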
The fundamentalist believers who have failed to grasp the point in paragraph 2 then use paragraph 3 to convince themselves of transubstantiation: that if you ignore the embedded code in a Sufficiently Complex prompt, then the result is magic.
(I am intentionally referencing Clarke’s Third Law.)
You can’t lift yourself by your own bootstraps, contrary to Baron Münchhausen. You can’t bootstrap intelligence by remixing text that contains the result of intelligence: you can merely fake it. No matter how good the fake, it never magically becomes real. There is an implicit leap in here which the fundies try to paper over.
This is the “and then a miracle occurs” step.
The emperor has no clothes.
Even the best fake gold is not gold. Yes you can make gold, but transmutation is very very hard and the real thing remains cheaper, however precious the result.
FWIW I find stories like this a welcome counterpoint:
https://www.wheresyoured.at/wheres-the-money/
The emperor might not have clothes, but the knock-on effects of invisible thread speculators in the marketplace cannot be ignored.
How much of this submission did you read, out of curiosity? Your quip of “never stopped to question” makes me wonder–given that the whole purpose of the series is questioning and speculation.
If there are specific bits and predictions it makes, please quote and question them here instead of giving the exact lame dismissal I warned about when I submitted the story.
I read the entire thing, of course.
I confess that I’ve not yet finished part 2 or even started part 3, but they seem to be getting progressively harder work for me as they descend further into fantasy.
I have spelled out my objection clearly and distinctly.
The entire article is built on a single assumption which is unquestioned: that LLM bots are intelligent, and that this intelligence is steadily increasing, and it’s only a matter of time until they equal and then exceed human intelligence.
I feel strongly that this assumption is false.
They aren’t, it isn’t, and they won’t.
As I said: they can’t think and they can’t reason. They can’t even count. There is no pathway from very clever text prediction, however vast the model, to reasoning.
The effect of speculation? A financial bubble, followed by a collapse, which will destroy a lot of people’s careers and livelihoods, and squander a vast amount of resources.
LLM is not AI and there is no route from LLMs to AGI. But the tech bros don’t care and some will get rich failing to make anything useful.
This is not a victimless crime. All humanity and the whole world are victims.
I have now finished all of parts 2 and 3.
It was not time well spent.
There is zero questioning of the central dogma. The miracle machines just keep miraculously getting better and better.
There is no consideration of resource limitations, of efficiency, of cost, of input corpus poisoning, of negative feedback loops, of the environmental degradation this would cause, nothing.
All there is is more and better miracle machines, and given those continual cost-free miracles, with no impediments, no side effects, no costs, speculation about what they might do to people… which is mostly positive until it’s too late.
The “and then a miracle occurs” step is never examined. When it’s convenient, more miracles occur. Nobody seems to mind much.
There is nothing interesting here, to me or for me.
In essence it’s masturbatory fantasy, ad absurdum.
The emperor never did have any clothes, but he magically never catches cold, never gets sick, and the invisible clothes just keep getting better and better.
We don’t really have “cautionary tale” as a tag, so I did what I could.
I’m posting this because I think the extrapolation (and thank God it’s just extrapolation and that means it can be wrong!) is a good thought experiment, and though the stuff a decade or two out is floppier, the predictions and analyses for this decade seem prescient.
For discussion purposes, I’d suggest posting only to either identify, agree, or disagree with the presented predictions and their underlying assumptions–otherwise, we’re probably just gonna get a bunch of uninformed and poorly articulated “I hate AI”, “sama is a psychopath”, “LLMs can’t code”, “xAI is bad because Musk is bad”, etc.
It was a good read. Definitely some plausible scenarios in there. Yet, I wish the author would drop the USA-tinted glasses (the “machinations” of the CCP, the Chinese “surveillance state” and Xi Thought Tutors, with no mention of the US use of genAI as propaganda and censorship in social media and, by extension, elections). With the current trend of twitter becoming C-SPAN (and social media brokering all media thanks to generative sugar-coating, thereby completing the transformation), the only real difference between the two superpowers will be the colors on the flag.
While I disagree on your last assertion (for historical reasons, not taking a side), I think you’re completely correct that the story does have a lot of the reflexive (and lazy!) Sinocriticism common these days.