1.

I figure this is relevant because it goes along so nicely with “Your brain does not process information and it is not a computer”. I know very well that techies tend not to understand much about continental philosophy, and they would be surprised to learn that philosophers have been dealing for decades with questions about intellect and knowledge that computer science is only barely starting to ask. Dreyfus' critique is a perfect example: the philosophers cited in the intro (Merleau-Ponty and Heidegger) were writing almost 100 years ago, the critique itself was written 50 years ago, and yet it is still largely relevant.

I often wonder what computing would look like if it embraced the ideas of continental philosophy. I think it would be entirely different, and probably more advanced. I’m thinking in particular of ideas such as Deleuze and Guattari’s rhizome, which is awfully close to a graph structure, or their notion of virtuality.
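
To make the rhizome/graph analogy concrete, here is a minimal sketch (my own construction, not anything from Deleuze and Guattari) of a rhizome as an undirected graph: no root, no hierarchy, and any node may connect to any other.

```python
from collections import defaultdict

# A rhizome read as a data structure: an undirected graph with no
# designated root, where any node can connect to any other node.
class Rhizome:
    def __init__(self):
        self.links = defaultdict(set)

    def connect(self, a, b):
        # Connections are symmetric and non-hierarchical.
        self.links[a].add(b)
        self.links[b].add(a)

    def neighbors(self, node):
        return self.links[node]

r = Rhizome()
r.connect("wasp", "orchid")   # D&G's own example of a rhizomatic coupling
r.connect("orchid", "botany")
r.connect("wasp", "botany")
print(r.neighbors("orchid"))  # {'wasp', 'botany'}: no parent, no child
```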

  1.

    An interesting, more recent paper of his, which I like as much for the doubling-down title as for anything else, is the critical retrospective “Why Heideggerian AI Failed And How Fixing It Would Require Making It More Heideggerian”.

    1.

      Ooh, I was not aware of this paper. Thanks!

    2.

      Choice quote from Dreyfus:

      In 1963, when I was invited to evaluate the work … on physical symbol systems, I found to my surprise that, far from replacing philosophy, these pioneering researchers had learned a lot, directly and indirectly, from us philosophers: e.g., Hobbes’ claim that reasoning was calculating, Descartes’ mental representations … Kant’s claim that concepts were rules … and Wittgenstein’s postulation of logical atoms in his Tractatus. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program. (“Why Heideggerian AI Failed And How Fixing It Would Require Making It More Heideggerian” 1).

      However, even Dreyfus admits that…

      In general, by accepting the fundamental assumptions that the nervous system is part of the physical world, and that all physical processes can be described in a mathematical formalism which can in turn be manipulated by a digital computer, one can arrive at the strong claim that the behavior which results from human “information processing,” whether directly formalizable or not, can always be indirectly reproduced on a digital machine. (“What Computers Can’t Do” 194-95)

      1.

        Without context, it isn’t clear from your second quote that Dreyfus actually agrees with that position; he could well merely be pointing out that if you take those axioms as true, it follows that digital machines can reproduce human behavior.

        There is also a decent argument to be made that while the behavior may be reproduced, the reasoning behind it could be vastly different or even nonexistent. A monkey smoking a cigarette is not a human.

        1.

          I should have elaborated more.

          Dreyfus seems to be saying that human intelligence is not directly symbol manipulation / calculation.

          This is a lot like how neural networks are implemented by computer programs following discrete rules, yet it is difficult or impossible to explain their “reasoning” for classifying inputs as a discrete set of rules (see the toy illustration at the end of this comment).

          It may be possible to indirectly run human intelligence on a computer by simulating the brain at the chemical level. However, it does not seem possible to extract the “software” out of the brain and run it directly on a computer.
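
          A toy illustration of that first point, with a made-up dataset and a single artificial neuron (the simplest possible case): every training step below is a discrete, rule-following calculation, yet the learned behavior ends up living in a handful of opaque numbers. With one neuron you can still squint at the weights and recover the underlying rule; with millions of weights, you cannot.

          ```python
          import random

          random.seed(0)

          # Toy data: points (x, y) in [0, 1]^2, labeled 1 when x + y > 1.
          data = [((x / 10, y / 10), int(x + y > 10))
                  for x in range(11) for y in range(11)]

          # A single neuron: two weights and a bias, randomly initialized.
          w = [random.uniform(-1, 1), random.uniform(-1, 1)]
          b = 0.0

          # Perceptron learning rule: nudge the weights on every mistake.
          for _ in range(50):
              for (x1, x2), label in data:
                  out = int(w[0] * x1 + w[1] * x2 + b > 0)
                  err = label - out
                  w[0] += 0.1 * err * x1
                  w[1] += 0.1 * err * x2
                  b += 0.1 * err

          # The trained "knowledge" is just three floats. Nothing in them
          # is an explicit symbolic rule like "x + y > 1", even though
          # that is what the neuron has learned to compute.
          print(w, b)
          print(int(w[0] * 0.9 + w[1] * 0.9 + b > 0))  # (0.9, 0.9) -> 1
          ```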

      2.

        I think Dreyfus' critique was less absolute than people are imagining.

        For example, Dreyfus seems open to the possibility of “Heideggerian AI”; see “Why Heideggerian AI Failed And How Fixing It Would Require Making It More Heideggerian”: http://cid.nada.kth.se/en/HeideggerianAI.pdf

        1.

          I read about a third of Understanding Computers and Cognition (http://www.amazon.com/Understanding-Computers-Cognition-Foundation-Design/dp/0201112973/) at a coffee shop one night; it is a direct attack on analytic philosophy as implemented in computers. Frankly, I didn’t care for it: the theory of mind and model building, a known and understood vital part of rational life, was tossed in the trash. While that might make a good basis for stochastic AI engineering (genetic algorithms, machine learning, and so on; note what the 90s and 2000s were all about!), it seems to make for really poor science if you want to talk about getting a computer to understand something.

          It’s worth noting that the current trend in highly publicized AI seems to be a synthesis of those stochastic algorithms with some level of GOFAI-derived techniques, as sketched below.
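
          A minimal, entirely hypothetical sketch of that synthesis: classic symbolic game-tree search (the GOFAI half) with leaf positions scored by a stand-in for a learned model (the stochastic half), roughly the pattern behind systems like AlphaGo. The game, the evaluator, and all numbers here are invented for illustration.

          ```python
          import random

          def learned_score(state):
              # Stand-in for a trained evaluator (e.g., a neural network).
              # Here it just returns a deterministic pseudo-random score.
              return random.Random(state).uniform(-1, 1)

          def moves(state):
              # Stand-in for a game's move generator: 3 children per state.
              return [state * 3 + i for i in range(1, 4)]

          def minimax(state, depth, maximizing):
              # GOFAI half: exhaustive symbolic search over the game tree.
              if depth == 0:
                  # Stochastic half: learned evaluation at the leaves.
                  return learned_score(state)
              scores = [minimax(s, depth - 1, not maximizing)
                        for s in moves(state)]
              return max(scores) if maximizing else min(scores)

          best = max(moves(0), key=lambda s: minimax(s, 2, False))
          print("best opening move:", best)
          ```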