Fortunately, at least this AI Winter won’t kill Lisp.
And Prolog. And OPS5. And Rete. And agent-oriented programming with languages like Obliq. And… what am I forgetting? We might be better off if all that stuff had been marketed for things other than AI or agents. Prolog is still doing decently in academia and some commercial settings, since it was as much a data- and knowledge-manipulation language (i.e., database stuff) as an NLP or AI one. Either way, quite a few promising paths were drained of funding by hype in the past.
Now, they’re throwing hype at many other cool technologies. Hopefully it’s a recession this time instead of a depression. The best might survive in usable form.
Good CS expert says: Most firms that think they want advanced AI/ML really just need linear regression on cleaned-up data.
I see a lot of this at my work currently. A handful of my coworkers talk about how we could be using ML and AI to analyze a bunch of our data and help us monitor and triage what we're looking at. In reality, I think we could gain more value with less effort by applying some simple statistical analysis to the data, even without cleaning, merging, and other normalization work. To me, there are two things to be gained:
It’s so cheap. The educational material is out there for free or cheap, and the educational requirements are fairly low. There are a lot of libraries to help you out, and many of them are very easy to use.
It’s really easy to understand for both engineers and non-engineers. You can spend an hour reading Wikipedia on chi-squared tests, or crack open any statistics and probability book and spend a week reading through the relevant topics. When a higher-up comes down asking why you didn’t detect something, you can say definitively, “this was our data, and the math says the difference is insignificant.” It’s fairly easy to see what the math concludes, which can lead us toward tweaking it and getting better results out of it. Alternatively, it’s very easy to see whether the data was at fault and better data cleaning needs to be employed. Testing the data is super easy and the computation is extremely fast, even on large data sets. All of this seems incredibly hard with ML/AI.
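To make the chi-squared point concrete, here's a minimal sketch of the kind of check I mean, in plain Python with made-up monitoring numbers (the services and counts are hypothetical): did service B's error rate change significantly relative to service A's? The raw counts answer it instantly, with no cleaning or model training.

```python
# Hypothetical monitoring example: compare error counts between two
# services over the same window using a 2x2 chi-squared test.

def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 contingency table
    [[a, b], [c, d]] (no Yates correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: services A and B; columns: [errors, successes]
stat = chi_squared_2x2(30, 9970,   # service A: 30 errors in 10,000 requests
                       45, 9955)   # service B: 45 errors in 10,000 requests

# The critical value for df=1 at the conventional 0.05 level is 3.841.
print(f"chi2 = {stat:.2f}")
print("significant" if stat > 3.841 else "not significant")
```

Here the statistic comes out around 3.01, under the 3.841 cutoff, so despite B having 50% more errors the math says the difference is plausibly noise. That's the "definitive answer for the higher-ups" in a dozen lines.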
I really appreciated this. So much sense about fashion & fad in tech that applies way beyond the current ML wave.
Re: this applying beyond the current ML wave, the quip from the article reminds me a lot of quips about Big Data:
Most firms that think they want advanced AI/ML really just need linear regression on cleaned-up data.
I’d venture that companies really want to believe they have an advanced AI/ML problem for similar reasons to why they really want to believe they have a Big Data problem. Previously Big Data was the bar, now Big Data With Next-Gen Intelligent Processing is the bar. When in reality they might well have smallish data where boring methods work just fine.
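For what it's worth, the "boring method" in that quip fits in a few lines of plain Python. This is a sketch with invented numbers (say, weekly ad spend vs. signups, already cleaned up), not anyone's real data:

```python
# Ordinary least squares on a single cleaned-up feature -- the
# "linear regression on cleaned-up data" most firms actually need.

def linear_fit(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

spend   = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical ad spend (k$/week)
signups = [12.0, 19.0, 33.0, 41.0, 48.0]  # hypothetical weekly signups

slope, intercept = linear_fit(spend, signups)
print(f"signups ~= {slope:.1f} * spend + {intercept:.1f}")
```

On smallish data like this the whole "pipeline" is one function you can explain to anyone in the room, which is rather the point.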
Incidentally this is one part of IBM’s Watson branding strategy that I don’t totally mind, although a lot of people hate it. Watson and its associated buzzwords like Cognitive Computing are umbrella brands for any IBM product plausibly related to decision-making, data analysis, etc., tied together with duct tape. Many AI people dislike it because they see it as empty business buzzwords, with these ads claiming a personified integrated intelligence suite when it’s really just 35 APIs doing random things. The plus side of that, though, is that a bunch of very solid traditional ML and stats is shoved in various corners of “IBM Watson”, so someone can use a boring old method under the cover of Integrating Watson Next-Generation Cognitive Intelligence.
So, the OP correctly notes that it’s not really an “AI Boom”. It’s more of a swelling in bullshit corporate “data science”. Most of these machine learning jobs are disappointing, insofar as the companies claim to be using “big data machine learning analytics” to appease investors. The truth is that the only real contribution of private-sector, non-research software engineers to the world over the past 10 years has been to unemploy people. I note that I’m damning myself as much as anyone else. We do the work but we don’t have the power to direct how our energies are actually used and the net effect is socially negative: unemploying people, electing Trump via fake news, getting people (“whales”) addicted to freemium web games, etc.
Real AI/R&D jobs are still thin on the ground, and it’s criminal, but it won’t get much worse because it never got better. If you’re the kind of person who wants to do research, you’re not going to be satisfied with the typical bullshit that gets passed off as “data science”. The only real story there is that data analyst salaries were too low relative to demand, but companies didn’t want to raise all analyst salaries, so they bifurcated the job into a prestige title and a regular title and let the former’s salaries rise a bit.
I can’t decide whether the AI hype is being driven by Silicon Valley hubris, or if the hype is just a feedback loop in which companies launch AI-based products to appear “cutting edge” because they fear being left behind.
Facebook firing their content moderators just as they launched their AI-based content moderation system is utterly baffling. Why not keep the humans on board for a quarter or so just to make sure the thing doesn’t go nuts and to provide a little extra training?
Ordinarily I would blame it on hubris and move on. But in this case, it almost feels like the act of purging the moderators was PR theater designed to demonstrate confidence in their tech (because everyone knows that good tech replaces humans). I really can’t decide. I eagerly await the inevitable retrospective interviews once the whole thing blows up.
I had a job interview with a big-data group. What I found was that they were on the verge of poorly re-inventing actuarial science. Of course they thought they were on the bleeding edge and had all this amazing IP they were going to save the world with. In reality, boring analysts at non-tech firms, with real university degrees in statistics, had been doing this kind of work on legacy databases since the ’80s.
I always thought I was missing something WRT AI’s hype. Part of the problem is I don’t know anything about AI/ML. From my uneducated perspective:
computing power has advanced sufficiently that things like self-driving cars are within reach of [wealthy] consumers. Computer vision + AI-like techniques make this possible.
new modes of interaction with computers, such as mobile devices and AR, expand the possibilities of what AI could be applied to.
Please correct me if I’m wrong: AI’s hype is largely built around finding more things to apply it to due to the pervasiveness of technology, rather than a quantum leap in AI itself.