
  2. 14

    I hadn’t actually thought of this as on-topic here, until I realized we have the cogsci tag… :)

    I have a longstanding disagreement with this position, just to admit my bias up front. And I’m aware that the below views are controversial. I don’t intend to get into any fights about them, but I want to offer them.

    Also, I’ve met people who find the brain-as-computer metaphor to be deeply upsetting, emotionally. If that’s you, fair warning that I go into some detail below, and I do advise skipping the rest of this or at least making sure you feel prepared.

    Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

    Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?

    But then we keep seeing studies like the one described in this SciAm article that begin to investigate exactly that kind of symbolic representation. I can’t find them right now, but I’ve seen similar news items go by with regard to visual and spatial memory.

    Certainly I’ll readily agree that “algorithm” suggests a system that works more often than it fails, and that’s not how I’d describe the brain. I’ll give this author that part of the discussion.

    The brain is architecturally very different from a silicon computer. It has far more in common with a distributed system than with a single CPU core; all sorts of neurological phenomena come down to things happening out of sync with each other, and I’m not sure any have ever come down to an invalid pointer. (Glossolalia is interesting to think about in that context, but there’s no reason to suspect that there’s any analogue of pointers or addresses. But surely I don’t need to explain how suggestive the nature of neurons is when it comes to possible mechanisms for structured information.)

    Humanity understands the brain even less than it understands the security implications of Intel’s management engine. That present lack of understanding makes it easy to get away with claims that it’s fundamentally incomprehensible, that it does do certain things that it manifestly doesn’t, and that it doesn’t do things that it manifestly does.

    Surely, processing information is the one thing we can be certain the brain actually does!

    1. 7

      In the Scientific American article you link, people were shown dots, and depending on the number of dots, a different part of a “numerosity representation” responded.

      I think whether the neural response is indicative of a representation is the sort of thing the aeon article is questioning. On the one hand we have empirical data - a stimulus elicits a neural response. On the other hand we have a theoretical framework - the researchers call this correspondence between stimulus and brain state a representation. Certainly there is a correspondence between the dots and brain activity. But is this a representation?

      As an aside, one of the main reasons we talk about representation in the first place is to explain how we make inferences. To explain this, we say we have representations of objects in our mind, we can manipulate these representations, making them interact with each other according to their properties, and in this way we can infer something about the world.

      Normally, when we program, to represent something we think about a type and properties of a type. What are the properties of numbers shown by this experiment? How do these number representations interact with other representations to produce reasoning? How do they interact with each other to produce addition or multiplication and so on? How do they let us reason about larger numbers? Are these correspondences present for all interactions with all numbers, or just some?

      There are no empirical answers to these questions in the article; what’s offered on the empirical side is a raw correspondence between stimulus and brain state. To call this a representation is a theoretical move on the researchers' part to explain what they’ve found. This is okay I guess, but they aren’t actually testing whether the correspondence they’ve found is a representation in the computational sense or not.
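
      To make that concrete, here is a minimal sketch (Python, my own toy; the Numerosity type is invented for illustration and isn’t anything from the article or the experiment) of what “representation as a type with properties” usually means to a programmer:

      ```python
      # A toy "representation" in the programmer's sense: a type whose
      # properties support the operations we want to reason with.
      from dataclasses import dataclass

      @dataclass
      class Numerosity:
          count: int  # the property this representation exposes

          def __add__(self, other: "Numerosity") -> "Numerosity":
              # interactions between representations are defined on the type
              return Numerosity(self.count + other.count)

      # Two groups of dots, combined by manipulating their representations:
      print(Numerosity(3) + Numerosity(4))  # Numerosity(count=7)
      ```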

      1. 2

        Sorry for taking so long to reply; this deserved an answer this morning, but work…

        I think the heart of what you’re saying really comes down to the same question of “what is a representation, and why does this particular definition of it matter to the original question”, which has meanwhile been discussed insightfully elsewhere in the thread. I can see that we differ on it, but I don’t have a reply that hasn’t already been made, so I’ll let this thread stay high-signal. :)

      2. 2

        One problem is that we don’t have consensus among the various participants on what “processing” or “information” means.

      3. 15

        It’s almost as if human brains are some kind of neural network or something.

        1. 12

          This article strikes me as wholly unconvincing. I couldn’t even make myself read it all, because the first two-thirds or so seemed to be just a lot of assertions with no justification. At best it seems that this may be vacuously true in a pedantic / overly strict sense. “The brain doesn’t store representations of dollar bills” or what have you. That’s probably true. There’s no reason to think that you have an exact image of a dollar bill in your head at all times. That seems pretty irrelevant to me, as it appears that we must store at least some fuzzy representation of the dollar bill in order to recognize it, or to describe it - from memory - to the amount of detail that we can.

          But digital computers don’t necessarily have to work with exact representations either; witness all the recent successes we’ve seen in image recognition using artificial neural networks, etc.

          Personally I suspect that the brain is a biological implementation of a sort of Bayesian pattern matching system, which does, indeed, share quite a lot with computers - unless you just define that away by saying “a computer is something that works differently from the way the brain does”.
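
          To be clear about what I mean by “Bayesian pattern matching”, here’s a toy sketch; the categories and numbers are completely made up, and I’m not claiming neurons literally implement this:

          ```python
          # Toy Bayesian pattern matcher: which stored pattern best explains
          # a noisy observation? Posterior is proportional to likelihood * prior.
          priors = {"dollar_bill": 0.5, "monopoly_money": 0.5}
          likelihood = {  # P(observed fuzzy features | pattern) - made-up numbers
              "dollar_bill": 0.8,
              "monopoly_money": 0.1,
          }
          unnormalized = {p: likelihood[p] * priors[p] for p in priors}
          total = sum(unnormalized.values())
          posterior = {p: v / total for p, v in unnormalized.items()}
          print(posterior)  # {'dollar_bill': 0.888..., 'monopoly_money': 0.111...}
          ```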

          1. 8

            As if a computer has an exact representation of a dollar bill. An EXACT representation would take an astronomical amount of resources.

            1. 5

              This was my main contention with this article. No such “exact” representation of data occurs on a computer, either. It’s not as if people are saying our brains work exactly like computers, anyhow.

              1. 2

                @mindcrime, the question of “representation” in the brain of course is very interesting and complicated, but as a starting point, I find the concept of complex feature-selective neurons in the brain very fascinating and illuminating. A quick rundown is here https://en.wikipedia.org/wiki/Grandmother_cell#Face_selective_cells . This topic was made very popular by the “Jennifer Aniston” cell. The who cell, you ask? That’s the risk you run when you try to popularize your research by tying it to the fickle stars of popular culture :)

              2. 6

                I’ve always tended to see the “brain is an information-processing computer” view as more of a loose metaphor, or heuristic guide for a certain type of cognitive science (the kind that makes heavy use of computer modeling), or inspiration for a certain kind of AI. If taken as a strong philosophical view it has a lot more problems, yes. There are a handful of philosophers who argue for an even stronger position, that everything is fundamentally information and information-processing; a kind of alternative monism to the more classical one, atomism, which held that everything is fundamentally atoms and atomic interactions. But many people explicitly or implicitly have more of an in-between view, that the brain is an information-processing entity in a way that rocks aren’t. Drawing that line in a really rigorous way, versus a kind of heuristic “you know what I mean” way, gets trickier. And while Ray Kurzweil is a smart guy, careful, rigorous, nuanced analysis isn’t really his thing, so I wouldn’t read too much into the fact that Kurzweil has a particular position sketched in really broad strokes.

                I find it at least interesting to think about as an AI researcher. Since we’re constructing something artificial, if we do so on the basis of a seriously mistaken understanding of what we’re trying to construct it could pose problems. Although it also might not: it’s at least in principle possible to model things to a certain degree of accuracy without the models being fundamentally “correct” in the sense of deeply aligning with reality (vs. being merely useful approximations). But it’s a line of investigation I’m glad some people pursue. There’s a minor sub-current you might call “philosophical critiques of AI assumptions” that goes in that vein, like Hubert Dreyfus and Phil Agre drawing out how a lot of early AI work implicitly had a Cartesian dualist view of the mind.

                1. 6

                  This should be obvious to anyone with a cursory knowledge of brain function; a cursory knowledge of computer history will reveal that von Neumann modelled his architecture on his own model of the brain. Hence the old idea of computers as electronic brains.

                  Unfortunately - and this is the point the author is raising - our paradigm has conflated the brain and the computer. It is important to attempt to step outside the paradigm and the current operative model to review how well each really holds up.

                  1. 11

                    Yes it does. Yes it is. The author’s definition of computer is incredibly narrow.

                    1. 6

                      Given that a human brain can fully emulate the behavior of a computer (I mean, we did invent the damn things), wouldn’t it be vacuously true that the human brain does everything a computer can do, and more, hence it is a computer of sorts? Not (e.g.) a von Neumann architecture computer, but a computer nevertheless? I’m sure I’m missing something…

                      I guess this gets into arguments over the semantics of words, like whether a computer is also a calculator when it’s running a calculator program; I suspect most people would say yes for a smartphone but no for a desktop computer, for no really good reason.

                      Maybe another objection is that computers operate according to well-defined algorithms programmed into them by humans, but humans don’t have a clean equivalent. It’s true that humans have reflexes and suffer from optical illusions, but they are also (occasionally) capable of consciously exercising volition in complex, well-defined forms in order to accomplish their goals. I think at this point we venture into the territory of the nature of free will and other philosophical wastelands.

                      1. 1

                        Modern chips are so complicated that we need algorithms to design them. No one person can keep the design “in their head” anymore. The same is true of computer programs.

                      2. 5

                        The author’s definition of computer is incredibly narrow.

                        This I think is the crux of many of the disagreements brought up here. We, in our field, have (as is reasonable) a somewhat more nuanced understanding of what a “computer” can be. The author seems to be operating under the assumption that “computer” == “wikipedia-page-level understanding of a single-core processor” or something.

                        1. 7

                          I think it would be interesting to make a study of what exactly “a computer” and “not a computer” means to those who make this common “the brain is not a computer” claim (or “not a computer program” claim etc).

                          It seems to be something like: a thing that works in a series of discernible steps, and that involves explicit, articulable logic akin to natural language.

                          The thing about this kind of definition is that it tends to underestimate what even this structure is capable of. A lot of it seems like a confusion between what computers or programs constructed with explicit logic/steps can do as programmed by humans in the here and now, and what computers are broadly, theoretically capable of.

                          And the thing is, this distinction has some validity as an intuitive one that can hold for a while. But the key point is that it probably won’t hold indefinitely, or even for that long.

                        2. 2

                          No it doesn’t. No it isn’t. Your definition of knowledge is incredibly narrow. See, for example, knowing-how vs. knowing-that.

                        3. 4

                          “Stuff goes in, but where does it go? You can’t explain that.”

                          1. 4

                            Oh, by the way, reading the tagline again, about not finding the 5th symphony in the brain. Try this on for size: by electrically stimulating (rather crudely, too) different parts of the temporal lobe and proximal to the hippocampus in conscious humans, early surgeon-researchers were able to evoke melodies and complete childhood memories, recalled extremely vividly (see the Wikipedia article).

                            Microstimulation, as this is called, is a way to futz with the operation of the brain by injecting artificial signals into the middle of neural circuits; if you do it right, you can figure out what the person perceives of that futzing. It’s a trip!

                            1. 8

                              This article is stupid.

                              The very subtitle:

                              Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

                              is blatantly false. Brain injuries can cause amnesia or cognitive deficits; is the contention that recollection is not the retrieval of knowledge or that problem-solving is not the processing of information? Because that’s stupid.

                              The example with the dollar bill in fact proves that the student’s brain does store a representation of the bill! Their drawing from memory has digits in each corner and a face in the middle, as the actual bill does. It’s missing a boatload of irrelevant detail, because the brain is smart enough to parse out and store only important details, but if they weren’t storing anything, they wouldn’t be able to draw anything at all.

                              …neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.

                              This is self-contradictory on the face of it! How would the author describe storing a file to a hard drive? Surely the drive is “changed in an orderly way” that allows us to retrieve the file from it? And surely singing a song counts as “retrieval”? What is the author trying to say? Any interpretation I can come up with is either pointless or wrong.

                              I could go on tearing it apart, but I get really frustrated by stupid writing such as this. If the author’s contention is semantic quibbling with the words “store”, “retrieve” and “process”, that’s pointless and not worth an article; otherwise, the article is just self-contradictory.

                              I’ll leave off with this: the brain is an organ for the processing of information. It does not work in the same way as engineered computers because evolution is the great grandmaster of finding obtuse solutions to problems, but it does many of the same things: it takes input (from sensory neurons), sends output (to motor neurons), and performs complicated transformations (“processing”) with historical feedback (“memories”) from one to the other. Since computers also do all of those things, they form a useful metaphor. Anyone saying any more (or less!) outside the context of a peer-reviewed neuroscience paper is almost certainly talking out their ass.

                              1. 11

                                Anyone saying any more (or less!) outside the context of a peer-reviewed neuroscience paper is almost certainly talking out their ass.

                                I think you’re dismissing the real problem. Admittedly it’s common in neuroscience articles to talk about the brain as a computer. But this is a theoretical framework that many neuroscientists use to understand empirical data. Some neuroscientists do call it into question. Since the mind is still an unsolved problem for science, the question is whether this theoretical framework that we use to understand the empirical data is the right one. In philosophy of mind this concept is more openly controversial. The homunculus problem, the Chinese room, and the symbol grounding problem are all examples of objections people have raised over the years to this theory, which can broadly be called representationalism.

                                The example with the dollar bill in fact proves that the student’s brain does store a representation of the bill! Their drawing from memory has digits in each corner and a face in the middle, as the actual bill does. It’s missing a boatload of irrelevant detail, because the brain is smart enough to parse out and store only important details, but if they weren’t storing anything, they wouldn’t be able to draw anything at all.

                                If you look at a stomach, the stomach’s state changes when food enters. But is it storing information? We can think about the stomach organ in terms of information. We can say that it stores information when its state changes. But that might not be the best metaphor to understand the stomach.

                                In the dollar bill example, a person sees a dollar bill, a person is asked to draw it, and they draw it. What is stored in the person’s mind? Is it a copy of the dollar bill? Why is the model incomplete? Why are some details deemed more important? If it’s truly a copy, why should “importance” matter, and how do we define importance? We know there is some internal state to the person’s mind. Is this internal state best described as being a copy of a dollar bill? Perhaps these “holes” or deficiencies in the “representation” point to some other mechanism than representation taking place.

                                Let’s shift gears. In machine learning there is the concept of model-free reinforcement learning. Model-free learning is used to create agents that can learn to adapt to environments without recourse to a model. An example is Q-learning. Instead of modeling its environment, an agent just works by navigating through a state space according to the rules of the Q-learning algorithm. To each state-action pair in the MDP it assigns a value; when the agent experiences reward, it retroactively updates the values of prior states with a discounted reward. Over time a Q-learning agent is able to find a way of acting in its environment that maximizes reward.

                                It’s a pretty interesting approach. You can apply Q-learning to all sorts of different environments, with all sorts of different conditions, and the agent tends to find an optimal way of acting (“policy”). But what’s interesting is it does so without storing models of each of its environments - instead agents learn reward values associated with states.

                                For example, one problem is where you have a car in a ditch and the agent tries to drive the car, building up momentum, to get itself out of the ditch. A Q-learning agent is able to do this without a representation of car or ditch. Another problem is finding the way through a maze. Again a Q-learning agent can find an optimal policy for the maze without relying on a representation of “maze”. It doesn’t have a maze-type with properties called locations (or whatever). It uses the same algorithm for mazes as it does for cars. In this sense it’s “model-free.”
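
                                For the curious, here’s a minimal tabular Q-learning sketch (my own toy, a one-dimensional corridor rather than the car or maze problems above). Note that the only thing the agent ever stores is a table of state-action values:

                                ```python
                                import random

                                # Minimal tabular Q-learning on a toy corridor: states 0..4, reward
                                # only at state 4. No model of the corridor is ever stored - just
                                # Q(state, action) values, updated by the Bellman rule.
                                N_STATES, ACTIONS = 5, (-1, +1)  # actions: step left or step right
                                alpha, gamma, epsilon = 0.5, 0.9, 0.1
                                Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

                                for episode in range(200):
                                    s = 0
                                    while s != N_STATES - 1:
                                        a = (random.choice(ACTIONS) if random.random() < epsilon
                                             else max(ACTIONS, key=lambda act: Q[(s, act)]))
                                        s2 = min(max(s + a, 0), N_STATES - 1)
                                        r = 1.0 if s2 == N_STATES - 1 else 0.0
                                        # propagate discounted reward back to the prior state-action pair
                                        best_next = max(Q[(s2, act)] for act in ACTIONS)
                                        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
                                        s = s2

                                # The learned policy falls out of the table alone; there is no data
                                # structure anywhere describing the corridor itself.
                                print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)])
                                ```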

                                In this case we could say that the agent is storing information about its environment. And that’s one way to think about it. But what the author of the article is criticizing isn’t simply the concept of information as applied to the brain, but the entire computational metaphor for the brain, and in that metaphor, the role that information has, and that role is to feed into representations.

                                Q-learning is a simple example of learning that is not dependent on representation. That’s not to say the brain works via Q-learning, but it shows that alternatives to representationalism are conceivable.

                                I admit that thinking about the mind in terms of representations seems very intuitive to a lot of people. But representationalism is not the only way to think about the mind. Thinking about the mind as representational carries with it assumptions that limit the theoretical alternatives we’re willing to explore. Personally I’m of the mind that our limited theoretical framework is the reason we haven’t made more progress in this area, and why true AI is a long way away.

                                1. 4

                                  Q-learning is a simple example of learning that is not dependent on representation. That’s not to say the brain works via Q-learning, but it shows that alternatives to representationalism are conceivable.

                                  If someone exhibited a process for turning a Q-learning policy into a rough picture of the maze (at the “first dollar bill picture” level of accuracy), would you say that the policy was a representation of the maze? (Stipulate that such a process would have to work on more than one maze, so the process itself doesn’t somehow encode the maze.)

                                  I would, I think the author of the article wouldn’t. I think this semantic interpretation of “representation” is one of the main points of disagreement.
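
                                  Concretely, the kind of process I have in mind might look something like this (an entirely hypothetical sketch; the function and its assumed Q-table layout are my own invention): walk the learned table and mark the cells whose values say they lie on a path to reward, recovering a crude floor plan:

                                  ```python
                                  # Hypothetical decoder: recover a crude maze picture from a learned
                                  # Q-table alone. Assumes Q maps ((row, col), action) -> value.
                                  def sketch_maze(Q, shape):
                                      rows, cols = shape
                                      picture = [["#"] * cols for _ in range(rows)]
                                      for (cell, _action), value in Q.items():
                                          if value > 0:  # learning found this cell on some path to reward
                                              r, c = cell
                                              picture[r][c] = "."
                                      return "\n".join("".join(row) for row in picture)
                                  ```

                                  The same decoder works on any maze’s Q-table, so whatever maze-specific detail shows up in the picture must have been carried by the policy, not by the decoder.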

                                  1. 4

                                    Can we nail down what is meant by “representation”? Because in reading over your comment, that seems to be a major disagreement here. By “representation” I mean any encoding of particular information, and I think it’s essentially axiomatic that the brain encodes and stores information because it can recover it at will in the absence of relevant input stimuli. (Surely it is agreeable that the request “draw a dollar bill” does not in and of itself contain enough information to reconstruct an oblong rectangle with numbers in the corners and a face in the middle.) I do not mean “a jpeg of a dollar bill”, or anything directly comparable. The encoding of information in the brain is complicated and, to the best of my knowledge, not at all understood (cf. evolution being a crafty bastard that puts little value on simplicity), but I don’t think it’s arguable that information encoding takes place.

                                    Similarly, a neural network clearly encodes information about whatever it’s being trained on in its internal state. It doesn’t store it as, for instance, a graph of maze nodes with pointers representing edges, obviously; but it contains information about the object of its training, because it is more effective after being trained than before.
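
                                    That “more effective after being trained than before” point is directly measurable. A rough sketch using scikit-learn (assuming it’s available; the dataset and network size are my choices):

                                    ```python
                                    # The trained network's weights encode information about the digits:
                                    # untrained it can only guess (~10% for 10 classes), trained it
                                    # scores far better on images it has never seen.
                                    from sklearn.datasets import load_digits
                                    from sklearn.model_selection import train_test_split
                                    from sklearn.neural_network import MLPClassifier

                                    X, y = load_digits(return_X_y=True)
                                    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

                                    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
                                    net.fit(X_train, y_train)  # the information now lives in net.coefs_
                                    print(net.score(X_test, y_test))  # typically around 0.95
                                    ```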

                                    If the disagreement is whether or not this encoding counts as “representation”, I think that’s semantic quibbling and more or less pointless. Maybe there’s a neuroscientific jargon definition which is more in line with “jpeg” than “any encoding”, but then that definitional tension has no place in a popular science article.

                                    If neuroscientists actually talk about the brain in terms of “this is the CPU lobe, this is the RAM lobe”, etc., that’s obviously false. The internal structure of the brain bears no resemblance to anything an intelligent engineer would ever design. But, while I am certainly not at all versed in the state of neuroscience commentary and research, no neuroscientist I’ve ever talked to has expressed that view.

                                    1. 4

                                      Can we nail down what is meant by “representation”?

                                      Try this: http://plato.stanford.edu/entries/mental-representation/

                                2. 3

                                  A breath of fresh air.

                                  1. 2

                                    Information processing is digital. Signal processing is analog.

                                    I agree with the author: we aren’t information processors.

                                    1. 5

                                      This article on analog computers may be interesting to you. I loved playing with analog computers as a freshman. You could do cool stuff like integrals, solve coupled partial differential equations, all with a patch board, a box of resistors, some amps and an oscilloscope!
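
                                      For anyone who never got to play with one: what the patch board does amounts to wiring up a differential equation, and you can mimic it digitally. A rough sketch (mine, using naive Euler steps) of an analog integrator solving dx/dt = -x:

                                      ```python
                                      # Digital mimicry of an analog integrator solving dx/dt = -x: the
                                      # op-amp integrates continuously; here we take small Euler steps.
                                      dt, x, t = 0.001, 1.0, 0.0
                                      while t < 5.0:
                                          x += -x * dt  # the integrator's output feeds back into its input
                                          t += dt
                                      print(x)  # close to exp(-5) ~ 0.0067
                                      ```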

                                      1. 2

                                        There is, in mathematics, a formal definition of information; see the work of Claude Shannon (Wikipedia). It includes both analog and digital signals.
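
                                        For the discrete case the definition is just H(X) = -Σ p(x) log₂ p(x); a tiny illustration (my own, not from Shannon):

                                        ```python
                                        # Shannon entropy: the average information content of a distribution,
                                        # in bits. Differential entropy extends the idea to analog signals.
                                        from math import log2

                                        def entropy(dist):
                                            return -sum(p * log2(p) for p in dist if p > 0)

                                        print(entropy([0.5, 0.5]))  # 1.0 bit: a fair coin
                                        print(entropy([0.9, 0.1]))  # ~0.47 bits: a biased coin tells you less
                                        ```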

                                        Whether that’s applicable to philosophy of mind is precisely what’s at issue here.

                                      2. 1

                                        The author really needs to discover mnemotechnics and be free.