1. 3

A radiologist takes a model that classifies hip fractures, augments the dataset with one-sentence clinical descriptions (consistent structure, 26-word vocabulary), and builds a recurrent (recursive?) neural network that generates descriptions to go with the predictions.
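The "simple description alongside a prediction" idea can be illustrated with a toy sketch. Everything here — the labels, the template sentences, and the `describe` helper — is invented for illustration; the actual work trains a recurrent network on radiologists' own sentences rather than filling templates:

```python
# Toy illustration: pair a classifier's label with a one-sentence
# description built from a tiny fixed vocabulary. Labels, templates,
# and slot values are hypothetical, not from the paper.

TEMPLATES = {
    "fractured": "There is a {site} fracture of the {side} femoral neck.",
    "normal": "No fracture is seen in the {side} hip.",
}

def describe(label, side="left", site="displaced"):
    """Return a one-sentence description for a predicted label."""
    return TEMPLATES[label].format(side=side, site=site)

print(describe("fractured", side="right"))
print(describe("normal"))
```

Even this crude version shows why the constrained-vocabulary setting is tractable: with a consistent sentence structure and a 26-word vocabulary, the space of possible outputs is tiny compared to open-ended captioning.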

From the tl;dr:

  1. Humans explain their decisions to each other through language, and this is considered sufficient to be trustworthy.
  2. Language explanations of visual decisions are usually very simple, because we can rely on the inbuilt knowledge of our audience to interpret the basics. We can do the same with AI, training systems to produce very simple text that is convincing to humans.
  3. In our work, human doctors found this approach trustworthy, and much preferred it to saliency maps (the dominant approach in interpretability methods).

  2. 2

    This is pretty neat. It goes to show that to make an effective AI, you don’t have to have it know everything or do everything; if you take a step back and narrow the focus and come up with some simple rules (in this case relying on the fact that medical experts can understand jargon) then you can still make something that’s quite powerful. Although not AI in the true sense of the word, the boids algorithm is another example of this. If you wanted to simulate a flock of birds as they move through the air, you could probably come up with a way to do it that takes into account speed, line of sight, how many others are in proximity, etc. And yet boids is only three rules and does a pretty damn good job.
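The three boids rules mentioned above (separation, alignment, cohesion) fit in a few dozen lines. This is a minimal sketch; the radii and rule weights below are illustrative guesses, not tuned values:

```python
import math
import random

class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

def step(boids, radius=50.0, sep_dist=10.0,
         sep_w=0.05, align_w=0.05, coh_w=0.005, max_speed=4.0):
    updates = []
    for b in boids:
        neighbors = [o for o in boids if o is not b
                     and math.hypot(o.x - b.x, o.y - b.y) < radius]
        ax = ay = 0.0
        if neighbors:
            # Rule 1: cohesion -- steer toward the neighbors' center of mass.
            cx = sum(o.x for o in neighbors) / len(neighbors)
            cy = sum(o.y for o in neighbors) / len(neighbors)
            ax += (cx - b.x) * coh_w
            ay += (cy - b.y) * coh_w
            # Rule 2: alignment -- match the neighbors' average velocity.
            avx = sum(o.vx for o in neighbors) / len(neighbors)
            avy = sum(o.vy for o in neighbors) / len(neighbors)
            ax += (avx - b.vx) * align_w
            ay += (avy - b.vy) * align_w
            # Rule 3: separation -- steer away from boids that are too close.
            for o in neighbors:
                d = math.hypot(o.x - b.x, o.y - b.y)
                if 0 < d < sep_dist:
                    ax += (b.x - o.x) / d * sep_w
                    ay += (b.y - o.y) / d * sep_w
        vx, vy = b.vx + ax, b.vy + ay
        speed = math.hypot(vx, vy)
        if speed > max_speed:  # cap speed so the flock stays stable
            vx, vy = vx / speed * max_speed, vy / speed * max_speed
        updates.append((b.x + vx, b.y + vy, vx, vy))
    for b, (x, y, vx, vy) in zip(boids, updates):
        b.x, b.y, b.vx, b.vy = x, y, vx, vy

random.seed(0)
flock = [Boid(random.uniform(0, 100), random.uniform(0, 100),
              random.uniform(-1, 1), random.uniform(-1, 1))
         for _ in range(30)]
for _ in range(200):
    step(flock)
```

No single boid knows anything about "flocking"; the group behavior emerges from three local rules, which is the same narrow-focus point made above.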

    1. 1

      Great TL;DR!!!