  2. 12

    Yes. The curse of dimensionality gets in the way of explaining high-dimensional models. Historically, this sort of opaque classifier has been used to discriminate. We have discussed this stuff before.
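
    A minimal sketch of what I mean, assuming numpy is available (the point counts and dimensions are arbitrary): as the number of features grows, random points become nearly equidistant from any query point, which is part of why “which features drove this decision” gets so hard to answer.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    for d in (2, 10, 100, 1000):
        points = rng.random((1000, d))   # 1000 random points in the unit cube [0, 1]^d
        query = rng.random(d)            # one query point
        dists = np.linalg.norm(points - query, axis=1)
        # The relative contrast between the farthest and nearest point shrinks
        # as d grows, so "near" and "far" stop meaning much.
        print(d, (dists.max() - dists.min()) / dists.min())
    ```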

    1. 3

      This is something that I think a lot about, because on the one hand opaque models can be discriminatory… but on the other hand, humans are the ultimate opaque model, and humans can lie, even to themselves, about why they made a given decision.

      1. 4

        Humans are compassionate more often than not, and can be reasoned with, and we try to design our societies to mitigate our biases with checks and balances (even though we often fail at this). Machine learning models, on the other hand, can be deployed at scale, with the veneer of being more ‘rational and objective’ than humans. So I’m not sure it’s good enough to say ‘humans are opaque, so we should be fine with machine learning models being opaque too’, which is what many comments in this thread are suggesting.

    2. 7

      In general, I think we should answer this the same way we would answer it for a human expert who was being asked to make the same decisions with the same raw data.

      In some cases, you really do need the expert to be able to tell you how they arrived at their conclusions. But in other cases, the whole point of using an expert is that you want decisions that can’t be reduced to a simple chain of concrete calculations. You want their intuition, which by definition can’t be fully explained.

      On some level, “An AI should be able to explain every decision,” is another way of saying, “We should be able to enumerate a list of rules that arrives at the same decisions as the AI, and thus eliminates the need to use an AI at all.”

      1. 1

        But that could be useful. What if there were a meta-AI whose function is to discover those rules and make those associations plain?
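
        A rough sketch of how that could look, assuming a scikit-learn setup (the synthetic data, the random forest standing in for the opaque model, and the depth-3 tree are all illustrative choices, not anyone’s actual method): train a small, interpretable surrogate on the opaque model’s own predictions and read the rules off it.

        ```python
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Synthetic stand-in for the data the opaque model is trained on.
        X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

        # The opaque model whose decisions we want rules for.
        black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

        # The "meta-AI": a shallow tree trained to imitate the black box's
        # predictions, not the ground-truth labels.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X, black_box.predict(X))

        print(export_text(surrogate))  # a plain-text list of if/then rules
        print("fidelity to the black box:", surrogate.score(X, black_box.predict(X)))
        ```

        The fidelity number is the catch, of course: if a depth-3 tree agreed with the black box everywhere, you wouldn’t have needed the black box in the first place, which is exactly the tension the parent comment describes.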

        1. 2

          Sounds like a sufficiently-smart compiler to me.

      2. 6

        Are we concerned that the decisions of humans are inscrutable?

        1. 3

          No. This has been a known problem for a while (though maybe not as widely known as it should be). Off the top of my head, DARPA has a project focusing on explainable AI.

          This headline is, surprisingly, not a victim of Betteridge’s law of headlines.

          1. 3

            Whether this is a good article or not, the answer here is pretty clearly “yes”, for “AI” as generally understood.

            1. 3

              Reading your and Corbin’s statements, I think I was inserting a “current” before the AIs. I think most people who have thought about the implications realize that explanations will eventually become a necessary precondition for using them in greater depth. But that’s going to require some heavy technical lifting.

              So, I’ll amend my answer from “No” to “No in their current state/applications, but eventually yes”.

          2. 3

            I understood the question as asking “Should we be concerned that people think the decisions of AIs are inscrutable?”. With that in mind, here’s my take:

            I don’t think this is a technical question. For me, it highlights that a machine is just an extension of a human.

            It’s like a fork. A fork only works well when you’re holding it. If it’s sitting on the floor, someone might hurt themselves. It’s not the machine that needs improvement. The human must act with the understanding that AIs are not all-knowing gods.

            1. 3

              The decisions of natural intelligences are inscrutable. Quite often, they don’t understand their own decision processes. If they do, it’s a challenge for them to explain it in a way that is actually useful to anyone else. And even if they are able to understand and explain, they also possess the intelligence to tell plausible lies!

              People should understand that, in all probability, the more “genuinely intelligent” AI becomes, the more these statements will be true of AI as well. And, sure, there are some countervailing factors (an intelligence embodied in a computer can be saved, loaded, forked, experimented on, given different stimuli, have its brain dissected, then reset), but there does come a time when you have to ask if it’s even ethical to think about doing that kind of thing to a system that’s capable of telling you a story about its own thought processes, or even one that’s capable of telling you lies you want to hear.