1. 22
  1.  

  2. 5

    The xor swap thing is a hack, but it rests on a more fundamental truth, which is that XOR is a reversible operation in a way that AND and OR aren’t. If a XOR b == c, then given c and b, I can get back to a. In fact, given any two of the three, I can get the third. The same applies to an operation like +. But it’s not true for AND or OR. If a AND b == c, then c only gives me information about a when b is true; when b is false, I know nothing about a. OR is the same with the roles reversed: it loses information about a when b is true, while AND loses it when b is false. Of the 16 possible logical functions on two variables (one for each truth table), only XOR and its inverse XNOR (aka logical equality, or iff) preserve information, because they’re sensitive to a change in either variable in every state.
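
    To see this concretely, here’s a quick Python sketch (nothing here is specific to Python; any language with bitwise operators behaves the same way): the asserts show XOR recovering either operand from the result, AND/OR collapsing distinct inputs, and the swap hack itself.

    ```python
    # XOR is invertible: given any two of (a, b, c) where c = a ^ b,
    # you can recover the third.
    a, b = 0b1011, 0b0110
    c = a ^ b
    assert c ^ b == a  # recover a from c and b
    assert c ^ a == b  # recover b from c and a

    # AND and OR are not invertible: distinct values of a can map to
    # the same result, so information about a is lost.
    assert (0b1010 & 0b0101) == (0b0000 & 0b0101)  # AND collapses where b's bit is 0
    assert (0b1010 | 0b0101) == (0b1111 | 0b0101)  # OR collapses where b's bit is 1

    # The same reversibility is what makes the XOR swap hack work.
    x, y = 3, 5
    x ^= y; y ^= x; x ^= y
    assert (x, y) == (5, 3)
    ```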

    1. -1

      XOR also set thousands of cost-cutting MBAs upon us.

      Marvin Minsky showed that a single perceptron could not compute the XOR function. This was never doubted; it’s a trivial result, and Minsky probably didn’t think much of it. It was an example for which a very simple algorithm (guaranteed to converge if the data were linearly separable) would not work.
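
      To make that (well-known) result concrete, here’s a small Python sketch: a brute-force search over a grid of candidate weights finds no single threshold unit that fits XOR (not a proof, just an illustration that the outputs aren’t linearly separable), while wiring two layers together by hand solves it immediately.

      ```python
      # No single threshold unit step(w1*x + w2*y - t) matches XOR.
      import itertools

      def step(z):
          return 1 if z > 0 else 0

      XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

      grid = [i / 4 for i in range(-8, 9)]  # candidate weights and thresholds
      fits = any(
          all(step(w1 * x + w2 * y - t) == out for (x, y), out in XOR.items())
          for w1, w2, t in itertools.product(grid, repeat=3)
      )
      print("single unit fits XOR:", fits)  # False

      # Two layers suffice: XOR(x, y) = AND(OR(x, y), NAND(x, y)).
      def xor2(x, y):
          h_or = step(x + y - 0.5)     # OR unit
          h_nand = step(1.5 - x - y)   # NAND unit
          return step(h_or + h_nand - 1.5)  # AND of the two

      assert all(xor2(x, y) == out for (x, y), out in XOR.items())
      ```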

      However, it brought on the AI winter when cost-cutting MBAs started using it to argue that “AI” and “neural networks” were a failure. “Neural networks” couldn’t even learn XOR! That’s not true, of course: a single perceptron can’t learn XOR, but multi-layer networks can, and no one thought much of the single-perceptron limitation.

      Of course, we know a lot about neural nets that we didn’t know in the ‘60s. We know where gradient descent (with back-propagation) works well and what its failure modes are. We know about ReLU units. We know about the importance of validation data. We know that neural nets generally aren’t the best approach to exact binary operations.
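
      As a sketch of that point: a tiny two-layer net trained by plain gradient descent (with hand-written back-propagation, using NumPy) does learn XOR, though with so few units it can occasionally stall in a bad local minimum, which is exactly the kind of failure mode mentioned above. The hidden size and learning rate here are arbitrary choices, not anything canonical.

      ```python
      # A 2-4-1 network trained on XOR with hand-written gradient descent.
      import numpy as np

      rng = np.random.default_rng(0)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)

      W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
      W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
      lr = 0.5

      for _ in range(5000):
          h = np.tanh(X @ W1 + b1)                 # hidden layer
          out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
          d_out = (out - y) * out * (1 - out)      # gradient at the output pre-activation
          d_h = (d_out @ W2.T) * (1 - h ** 2)      # back-propagate through tanh
          W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
          W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

      print(np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0]
      ```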

      1. 14

        However, it brought on the AI winter when cost-cutting MBAs started using it to argue that “AI” and “neural networks” were a failure. “Neural networks” couldn’t even learn XOR!

        I really, really doubt that. I read lots of books from that era when I studied AI, and I played with some of the tools, too. The AI Winter came from a cycle of over-promising what an AI method or company would deliver, large-to-massive investment of time and money on the strength of those promises, and then failure to get those results, on top of new problems. A big example would be Japan’s Fifth Generation project, which probably under-performed plain rules engines or stochastic search on dumb, fast processors despite the huge amounts of money and talent put into it. This led to the AI Winter that took down LISP and Prolog in industry in general. Rules engines lived on in languages such as Java under the banner of Business Process Management.

        1. 9

          You’re confusing the decline in perceptron research in the 1970s (which did follow the Minsky book) with the ‘AI Winter’ of the 1990s. The latter was mostly due to the failure of expert systems and knowledge engineering.

          The ‘cost-cutting MBAs’ didn’t have anything to do with either.

          1. 3

            I don’t know why this is so down-voted without explanation!

            Minsky killed neural networks (not the AI winter), and he did it on purpose to get more ARPA funding. When he and Papert published their book, they were right that single-layer feed-forward perceptrons could not manage XOR and that multi-layer perceptrons were almost impossible to train, even though they already knew multi-layer networks were able to compute XOR.

            This book was so influential it made it impossible for Rosenblatt (the father of the perceptron) to obtain any military funding. Minsky & Papert never wanted to debate the merits of neural networks as a whole, but concentrated on the limitations of the Perceptron (a small part of the neural-network idea) in solving what they called a ‘group of interesting problems’.

            The book destroyed Rosenblatt’s work in the AI world, delaying the use of neural networks for 20 to 30 years.