Brace yourselves for the coming AI winter. My popcorn is ready.
Self-driving fails. NVIDIA falls. Existing applications (computer vision, face detection/recognition, background separation, thumbnail selection, incremental improvements to targeting/recommendation, machine translation, learning to rank, etc.) are unaffected. This generation's crop of AI technologies really does work, and it does enable previously impossible applications. It just doesn't work as well as some people think.
Maybe the high profile “AI” failures will make it less trendy to refer to all sorts of algorithmic processing as “Artificial Intelligence”… or at least one can hope.
I mean, it is artificial intelligence; it's just not artificial general intelligence. Narrow algorithms that can "learn" and adapt within a bounded context are a kind of AI. If anything, the term should stop meaning artificial general intelligence, because that usage is useless and may always be. An AI can beat you at chess or Go in the same way a cockroach can beat you at tag: both rely on predefined decision heuristics shaped by epochs of evolution; we're just fortunate that our neural nets settle in a relatively brief amount of time.
This is the first time I have heard “ethics theatre” as a term.
I am going to be lifting that for future use.
What a marvelous term, and it explains so much.
Isn’t that synonymous with “virtue signalling”?
The linked article published by The Royal Society seems worth reading.
I also suggest this reading to anybody interested in this topic: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3078224
It tracks the history of this narrative and explains who profits from it.
This phenomenon has clear political and ideological roots, and journalists, politicians, and engineers should fight against it together.
If you want to explore related topics, there's plenty of material in a reading list I've been developing over the last few months: https://github.com/chobeat/awesome-critical-tech-reading-list
I really enjoyed that paper. It's a shame that AI is being dishonestly advertised; the researchers and companies paving the way have a responsibility to set reasonable expectations. We could have a more productive dialogue if we discarded the buzzwords and pop-sci explanations. This most recent "AI" phase, like past ones, has yielded advances we should be proud of, and unfortunately most of them will be underappreciated.
Instead of preaching "AI", why not take this opportunity to impress the effectiveness of math upon a broader audience?
At the moment I don't have time to find good, short examples and send a PR, but I think Evgeny Morozov and Mark Fisher could be added to that list (Fisher being tangentially related, in the sense that anyone interested in this kind of list would almost certainly also be interested in Capitalist Realism for the broader picture).
Mark Fisher is next in my backlog, but I want to read his work before adding it. The reading list already includes Inventing the Future, which is clearly influenced by Fisher's work.
What of Morozov's do you suggest?
Inventing the Future
That's almost certainly relevant as well, but sadly I haven't read it. I wanted to get into accelerationism but got completely derailed once I started, which was awesome. It's been a very long journey so far, but I do plan to circle back eventually. Kind of ironic for lobste.rs to be so full of people drawing on Deleuze :D I know @steveklabnik likes him.
By the way, one word of advice: I've always found it useful to read leftcom critiques of whichever political bent catches my attention. It's great for a dose of sobriety, although YMMV and I might just be reflecting my preexisting sympathies. https://libcom.org/blog/back-future-rebranding-social-democracy-12042018
but I want to read it first
What were you referring to here? Inventing the Future?
No, I was referring to Capitalist Realism.
And oh so many people believe it’s the end of days.
It’s possible that it’s very hard to get a computer to perform any of these tasks, but once it can perform one of them: FOOM!
It might actually be less threatening if we could get AI to do vision and NLP with human accuracy and find that it couldn't automatically do these other hard things as well.
Yeah, so just don't put all your eggs in one basket. Learn another skill too.
I mean both can be true… Even small improvements in automation can have drastic impacts on the workforce.
It would be nice if we got rid of most administrator jobs; just an idle musing.
I was thinking of hospital administrative staff who are there to help with regulatory compliance rather than to heal people.
Any organization larger than a dozen people needs some sort of specialization.
Modern hospitals are complicated workplaces. You need people who can clean up (including handling biohazard materials), maintain complex equipment, and meet customers/patients (in person or on the phone).
These people need to be paid, trained, and scheduled.
“Regulatory compliance” includes making sure staff doesn’t steal controlled substances and sell them for a profit (as an extreme example) or engage in testing and research that’s illegal or unethical.
There’s no doubt there’s a lot of mismanagement in modern healthcare, and outright perverse incentives in some cases, but dismissing anyone who’s not involved in “healing” is a bit disingenuous.
I think the people who keep machines and hospitals from decaying or malfunctioning are indeed necessary for healing, but they are rarely administrative in nature.
The advancements are very real and are definitely going to shake up the economy. However, the article is exactly right in pointing out that the narrative of powerlessness and inevitability plays to the god-like fantasies of Silicon Valley chatter. Predicting the outcome isn't easy (look at digital music), but there are many, many futures to choose from.