I think the argument is correct but not particularly well-argued. I found Charles Stross’s article on the same topic to be much more insightful: http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html
If Facebook-scale companies are AIs, then so is any reasonably large human bureaucracy, and AIs are thousands of years old - you could call the Achaemenid Empire in 500 BC an AI by this line of reasoning.
There is a nugget of truth in this argument, but I wouldn’t go as far as classifying companies as AI agents. The term is already almost meaningless, and such an expansion would dilute it even further.
For example, you could use the same logical steps to “prove” that a married couple is an AI! The social custom and the related legislation are definitely man-made, and (hopefully) the couple “fulfills its purpose in smart ways”. Whatever that purpose may be :)
I think a more fruitful line of thinking would be to consider how you could tell whether a company was run by an AI agent. At least to me, the panic about superhuman AIs seems like a projection of fears about current multinational companies.
I would love to see a definition of “intelligence” which includes humans and Deep Learning but excludes companies.
I’d like to see a definition of “intelligence” which includes humans and Deep Learning.
In my view, {deep|machine} learning isn’t intelligence, it’s applied statistics.
Isn’t the original statement of ‘I would love to see a definition of “intelligence” which includes humans and Deep Learning but excludes companies.’ inclusive of ‘I’d like to see a definition of “intelligence” which includes humans and Deep Learning’?
They are semantically and logically equivalent but I was using my statement as a rhetorical device.
From my understanding, @qznc uses a definition of the term “intelligence” that encompasses human intelligence and Deep Learning (and companies, too).
That’s not my definition of “intelligence”, but I admit my definition cannot really be articulated beyond “I know it when I see it”. Therefore, I would like to inspect the other definition to compare it against mine on a case-by-case basis.
For example, I have two tasks immediately at hand in space and time: I need to fix a sagging front porch stair (made of wood), and I need to prepare a car to turn in to a dealership and receive a new one.
Neither of these tasks requires any sort of advanced training in the form of formal education, but they are far beyond the scope of any machine to perform without the guidance of a human.
Of course none of us have any strong proof that we’re not also a process of applied statistics.
I assume you’re referring to the Simulation argument.
There are no (scientific) proofs for or against it, because it’s a metaphysical concept.
no no, not referring to the simulation argument. Just that we don’t actually know that we are anything other than a biological mechanism that approximates gradient descent.
Ah, that’s much more interesting :D
The contention that companies are intelligences plays directly into Searle’s Chinese Room argument.
This video illustrates some arguments for why companies may or may not be AIs: https://www.youtube.com/watch?v=L5pUA3LsEaw
I just can’t agree that the decisions of companies are artificial by default, as they are defined by the choices of employees.
Unless these employees are AI or listen to AI, the companies are not representative of AI.
This sounds borderline like an attempt to let employees further scapegoat their immoral behavior, and that’s a big enough problem without us pretending they aren’t responsible for the choices that their companies make.
Hi, I just wandered in here. I haven’t even read the article yet. But, “each company is basically an AI” is something that I’ve said for years. One of my pet theories. (I didn’t invent it, I read it somewhere and adopted it.)
I don’t think it’s an attempt to scapegoat immoral behavior on behalf of the employees. I think it’s an origin tale for the world we live in. I’ll tell you why.
The employees follow corporate policy. The rules are the AI; the employees are the CPUs/RAM/servos/actuators that the policy runs (executes) on. That is, the policy is the code, and the employees are the platform. Note that we’re talking about old-school symbolic-logic AI here, not that new “machine learning” stuff.
Within this pet theory, the phenomenon has been going on for hundreds of years.
In order for the theory to make sense, “Corporate policies” should be defined pretty broadly… It may be corporate policy to “buy low, sell high” or “increase brand recognition” (note the lack of a termination condition on that one!). It may have been corporate policy at Smith Co., some years ago in some port city, to buy all available cottonseed oil, full stop. Maybe this policy was put in place after some conflict with the Jones Co. Doesn’t matter why: the employees of Smith Co. know that they’ll be rewarded if they bring home a wagon-load of cottonseed oil.
The “policy as AI” theory applies to every size and kind of organization–well, anything large enough to create a policy and compel employees to follow it even when the original authors aren’t present anymore. Law firms (all of them?). Automobile mechanics’ shops (only some of them!). Clubs. Tribes. Religions. Governments.
And that’s where we live. In a world where written words (and now code) have real power.
Corporate policy? In most cases, it is defined by humans. Implemented by humans. Can only be changed by humans. Sometimes humans are afraid to stand up against bad practices and such, but those are still people’s choices.
In order for this theory to work, we need to define human thought as artificial intelligence as well. In that case, why are we referring to one as artificial?
Companies may be intelligent, but that intelligence isn’t artificial for the most part. It still sounds like people wanting a scapegoat to blame for something they can’t fix.
As for “we can’t fix it”, well… I think we can do anything. Some things require monumental effort and present monumental risk. It is after those kinds of changes happen that monuments are built!
The whole suggestion you’ve made in your other post that I’m “overestimating the autonomy of most corporate workers” is exactly the kind of “can’t fix it” attitude that I’m referring to. “Companies are AI” is a scapegoat for dodging social responsibility, because people are afraid to take monumental risk.
For those of us who do take those risks, it doesn’t go very far, because it’s not even slightly close to being a norm. People can just shrug it off, which makes things more disconcerting because the risk becomes uneventful or even more monumental.
Yeah - the largest and most important job of those of us who take risks to make things better is very much to normalize what we’re doing and encourage others to do it too. We have the amount of autonomy that we decide to take.
Yeah, totes… If you just let the world be shitty then it’ll never change :O
<3
I finally thought of the right way to phrase my response, maybe.
The “companies are AI” idea is not a way to scapegoat anything, and it’s not a “we can’t fix it” attitude. It’s a call to arms.
“AI” isn’t the right word really. It’s certainly distracting. Companies are word-golems and they eat people. Sorry, they are emergent properties of mindbogglingly complex systems of commerce and law and they eat people.
What we (myself and the author and the other author linked in the comments) are trying to say is that if we want to “fix it”, we’re gonna have to dig deep, because our opponents are incorporeal, invisible geniuses without natural lifespans. Let me put a point on that in case you missed it: our opponents are not people. Not even bad/selfish people. They just live in this haunted world with the rest of us. Maybe selfish people would do less damage in a different world. If I sound like I’m trying to forgive some people who have made the world worse, it’s because I am. I believe in forgiveness. But that’s just like, my opinion, man. Don’t let it distract you from this technical discussion about the mechanics of commercial entities.
I think that the type of AI that corporations are is entirely blind to lots of types of information. This thread, for example. They are also dramatically slower than an individual human. They are 99% dependent on humans to carry out their “metabolic” functions. (That used to be 100%. Watch out for a downward spike in this number.)
…They have genes (or memes, if you recall what that word actually means) and lateral transfer happens constantly, but maybe not bidirectionally. Company B tends to become more like Company A over time as policy makers jump ship, but it’s more common for those corporate types to move from a less successful company to a more successful one, right?
People can (and do, every day) retard or reverse the growth of a corporation by knowingly acting in the interest of people instead of doing whatever thing the company needed them to do. Perhaps this practice could be made more popular by, you know, paying people to do it.
Perhaps you’re overestimating the autonomy of most corporate workers?
There are a lot of mechanical operations… Picture the “inbox” on the desk (or screen) of a “buyer”. (Actual job title.)
Let’s say that each item in the inbox is a list of objects. Maybe a BOM (bill of materials). There’s probably also some metadata, like some calendar date before which the items must arrive.
“Policy” tells the worker to find a source for each item on the list and buy it. Policy says to look over your list of known sources and call each one (or connect to their API via some application you already have). Maybe policy says that if none of your known sources have the item, to start doing web searches. Policy says to check estimated delivery time against the aforementioned date. Etc.
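To make that concrete, here is a rough sketch of what that buyer’s “policy” might look like if you wrote it down as literal code. Every name in it (Supplier, Order, process_inbox, and so on) is invented for illustration; this isn’t any real procurement system, just the shape of the procedure. The employee is the machine it runs on.

```python
# A hypothetical sketch (all names invented, not from any real system) of the
# buyer's "policy" written as literal code. The employee is the platform it runs on.

from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class Supplier:
    name: str
    lead_times: dict[str, int] = field(default_factory=dict)  # item -> days to deliver

    def stocks(self, item: str) -> bool:
        return item in self.lead_times

    def estimated_delivery(self, item: str) -> date:
        return date.today() + timedelta(days=self.lead_times[item])

    def buy(self, item: str) -> None:
        print(f"ordering {item} from {self.name}")


@dataclass
class Order:
    items: list[str]   # the BOM: parts to source
    needed_by: date    # metadata: everything must arrive before this date


def process_inbox(inbox: list[Order], known_sources: list[Supplier]) -> None:
    for order in inbox:
        for item in order.items:
            # Policy: look over your list of known sources first.
            source = next((s for s in known_sources if s.stocks(item)), None)
            if source is None:
                # Policy: nobody you know has it, so go do web searches.
                print(f"no known source for {item}; searching the web...")
                continue
            # Policy: check estimated delivery time against the deadline.
            if source.estimated_delivery(item) <= order.needed_by:
                source.buy(item)
            else:
                print(f"{item} would arrive too late; flag for a human to decide")


# One order in the inbox, two known suppliers.
inbox = [Order(items=["hinge", "bracket"], needed_by=date.today() + timedelta(days=14))]
sources = [Supplier("Smith Co.", {"hinge": 5}), Supplier("Jones Co.", {"bracket": 30})]
process_inbox(inbox, sources)
```

The worker executing this by hand doesn’t need to know why the deadline exists or who decided that web searches come last; the decision-making lives in the procedure itself, which is the point of the metaphor.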
I’m implying that “corporate policy” includes all that on-the-job training.
I think that without all that ‘code’ (the rules, policies, workflows, etc.), large corporations could not function.
This “corporations are AI” theory calls upon us to think of the policies and rules as entities unto themselves that exhibit emergent properties.
Finally, why indeed are we referring to anything as ‘artificial’? (Everything in this world is ‘natural’ except ghosts. It is as natural for humans to wear polyester as it is for hermit crabs to wear shells.) “Machine intelligence” is probably a better term in the long run.
You are ignoring the people above that employee by suggesting that it’s “policy”.
No, it’s a manager who thinks that’s a good idea.
I’ve made it a habit to identify examples of non-sequiturs, and this is a good one.
If you mean non sequitur in the sense of literary device, then I agree. In the sense of logical fallacy, I disagree because there is no deduction in the sentence.
I was torn between calling it a non-sequitur and a circular argument.
Thanks for pointing me to the distinction.