This is all sorts of wrong and walks into a common and terrible way to think about AI: “AI needs to directly solve the problem I want it to solve in a way that I recognize”.
AI has been a remarkably successful field because it throws this idea out. The field of AI is all about discovering what problem we really want to solve and then solving that new problem in a way that has nothing to do with our original intuitions.
You want accurate web search? Maybe people are more likely to link to good results; that really means looking at link connectivity, and here’s a cool algorithm to do it. You want good models of human language? Maybe words that occur in similar contexts have similar meanings (the distributional hypothesis); that really means learning to represent the co-occurrence structure as some kind of object (this is how all modern natural language processing works). This is AI. We don’t solve the original problem. We figure out what the original problem was from a different perspective, which unlocks a new set of algorithms and, of course, a new kind of data that helps solve the problem.
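To make the “connectivity” reframing concrete, here is a minimal sketch (mine, not the commenter’s) of a PageRank-style power iteration over a toy link graph; the graph, damping factor, and iteration count are all illustrative assumptions.

    # Rank pages purely by who links to whom: a PageRank-style power iteration.
    def rank_by_links(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outgoing in links.items():
                if not outgoing:
                    # Dangling page: spread its rank evenly over all pages.
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outgoing)
                    for target in outgoing:
                        new_rank[target] += share
            rank = new_rank
        return rank

    toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(sorted(rank_by_links(toy_web).items(), key=lambda kv: -kv[1]))

Nothing in the sketch needs to know what a “good” page is; the quality judgment has been reframed entirely as graph structure.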
I’m sure there’s someone out there working tirelessly to perfect all the disparate technologies - computer vision, control systems, depth perception, etc. - required for a Tesla to successfully navigate a McDonald’s drive-through. Just as they get it sorted and demonstrate its utility, McDonald’s will probably just calculate those routes and publish them as public information. After all, why bother with the maths and machine vision when you can just write it down in an XML file?
This is how you go down the rabbit hole of absurdity. Why don’t people do that right now? There is so much value to this. Trillions of dollars of value in automated cars. Any amount of data effort is cheap by comparison. You could literally have tens of millions of people working for years on data entry and it would be a minor rounding error. Seriously, why don’t we do this?
Because we have no idea what should be in those XML files. Once we develop the right perspective and build the right algorithms, we will reduce this problem to something we can state cleanly, and then we will know what goes into those files. And yes, in retrospect it will look like: “This AI stuff is worthless; look, all I had to do was run this ‘metadata’ algorithm and give it some annotations. Why didn’t we do that before?”
Or, to put it another way: when you don’t know how to do something and you see the results, it’s amazing AI. In retrospect, it’s just a bunch of clever ways to get around what you feel might be the real problem (which is pretheoretic; you can’t state it mathematically) in order to get cool results.
The article is about Google search. Perhaps the claim could be generalized, but the article doesn’t do so convincingly. Therefore, “Google” should be prominent in the title. A more apropos title would be: “Google search results rely on metadata more than some realize.”
Like many articles, this one is mired in the ambiguity of the term “AI” and in different people’s expectations. I don’t see much point in feeding off that ambiguity or in claiming that “AI was promised”. Promised by whom?
There are many false dichotomies in the article. AI and metadata are not a zero-sum game; they can and do work together. Metadata features, content analysis, and graph analysis are all useful inputs to AI. (As I said above, the term AI is debatable, but by many definitions machine learning and optimization are included.)
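To make “they work together” concrete, here is a toy sketch (my illustration; the feature names and weights are invented) of a relevance score that mixes a metadata signal, a content-match signal, and a link-graph signal:

    # Toy relevance score mixing metadata, content, and graph signals.
    def score(doc, query, weights=(0.4, 0.4, 0.2)):
        w_meta, w_content, w_graph = weights
        # Metadata feature: e.g. an explicit "official page" annotation.
        meta = 1.0 if doc.get("is_official") else 0.0
        # Content feature: crude term overlap between query and body text.
        terms = set(query.lower().split())
        body = set(doc["text"].lower().split())
        content = len(terms & body) / max(len(terms), 1)
        # Graph feature: a precomputed link-based score (e.g. PageRank-like).
        graph = doc.get("link_rank", 0.0)
        return w_meta * meta + w_content * content + w_graph * graph

    docs = [
        {"text": "opening hours and menu", "is_official": True, "link_rank": 0.3},
        {"text": "my visit to the restaurant", "is_official": False, "link_rank": 0.7},
    ]
    print(sorted(docs, key=lambda d: -score(d, "opening hours")))

A learned ranker would tune those weights (and far richer features) from data, but the point stands: the metadata doesn’t replace the learning, it feeds it.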
A McDonald’s drive-through XML is like a rainbow table for password cracking: nice to have, and you should check it first, but relying on it undercuts the claim that AI is doing the navigating, much as looking a password up in a precomputed table doesn’t mean you’ve cracked the hash.
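The lookup-versus-compute distinction behind that analogy can be shown with a minimal sketch (mine; the word list and hashes are illustrative, and a real rainbow table uses hash chains rather than a flat dictionary):

    # Precomputed lookup vs. actually doing the work.
    import hashlib
    import itertools
    import string

    def brute_force(target_hash, max_len=3):
        """Try candidate lowercase passwords until one hashes to the target."""
        for length in range(1, max_len + 1):
            for chars in itertools.product(string.ascii_lowercase, repeat=length):
                candidate = "".join(chars)
                if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                    return candidate
        return None

    # The "table": answers written down ahead of time for a fixed set of inputs.
    table = {hashlib.sha256(p.encode()).hexdigest(): p for p in ["cat", "dog", "owl"]}

    target = hashlib.sha256(b"dog").hexdigest()
    print(table.get(target))    # instant, but only for entries someone precomputed
    print(brute_force(target))  # slower, but doesn't depend on the table existing

The table is worth consulting first, but having it says nothing about whether you can solve the general problem.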