1. 16

  2. 3

    computers are less than excellent at detecting objects or identifying faces in sideways images

    This surprises me. Taking rotation into account in an object-recognition system is literally textbook stuff; see, e.g., Russell and Norvig, “Artificial Intelligence: A Modern Approach”, Third Edition:

    To search rotations as well, we use two steps. We train a regression procedure to estimate the best orientation of any [target object] present in a window. Now, for each window, we estimate the orientation, reorient the window, then test whether a vertical [target object] is present with our classifier.
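    The two-step idea is easy to sketch. Here's a toy numpy version for illustration only — `estimate_orientation` and `upright_classifier` are trivial stand-ins I made up (a real system would use trained models), and only quarter-turn rotations are handled:

    ```python
    import numpy as np

    def estimate_orientation(window):
        # Toy "regressor": guess which quarter-turn was applied by finding
        # the brightest half of the window (stands in for a trained model).
        h, w = window.shape
        scores = [window[:h // 2, :].mean(),  # k=0: upright (bright top)
                  window[:, :w // 2].mean(),  # k=1: rotated 90° CCW (bright left)
                  window[h // 2:, :].mean(),  # k=2: upside down (bright bottom)
                  window[:, w // 2:].mean()]  # k=3: rotated 90° CW (bright right)
        return int(np.argmax(scores))

    def upright_classifier(window):
        # Toy classifier that only recognizes *upright* objects:
        # "object present" means the top half is brighter than the bottom.
        h = window.shape[0] // 2
        return window[:h].mean() > window[h:].mean()

    def detect_with_rotation(window):
        k = estimate_orientation(window)       # step 1: estimate the orientation
        reoriented = np.rot90(window, -k)      # step 2: reorient the window
        return upright_classifier(reoriented)  # then test for an upright object
    ```

    With this wrapper, a window the upright-only classifier would reject (the same object rotated 90°) gets detected anyway, because it is reoriented first.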

    And the tutorials I’ve seen tell you to apply random scaling, cropping, and rotation to the training images so that the classifier doesn’t only work on ideal images.
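    That trick is usually called data augmentation. A minimal numpy sketch, assuming square grayscale images (tutorials typically use a library's built-in augmentation pipeline instead, which also handles arbitrary angles and proper interpolation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def augment(image):
        """Randomly rotate, crop, and rescale one training image so the
        classifier sees more than perfectly framed, upright examples."""
        # random quarter-turn rotation
        image = np.rot90(image, k=int(rng.integers(0, 4)))
        # random crop to 80% of the image
        h, w = image.shape
        ch, cw = int(h * 0.8), int(w * 0.8)
        top = int(rng.integers(0, h - ch + 1))
        left = int(rng.integers(0, w - cw + 1))
        image = image[top:top + ch, left:left + cw]
        # crude nearest-neighbor rescale back to the original size
        rows = np.arange(h) * ch // h
        cols = np.arange(w) * cw // w
        return image[np.ix_(rows, cols)]
    ```

    Applying a fresh random `augment` to each image on every epoch effectively multiplies the training set without collecting new data.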

    1. 1

      Thanks for sharing this. It’s something I hadn’t considered, but it isn’t the first gotcha I’ve seen that nobody talks about. I once spent a week working with an ImageNet classifier, wondering why it was getting 0% accuracy, only to discover that the ordering of the class labels in the original ImageNet data is totally different from the ordering everyone actually trains on, because Caffe decided to order them alphabetically.
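      For anyone who hits the same thing: once you have both label lists, the fix is just an index remap between the two orderings. A sketch — the three synset IDs below are fake, and `build_remap` is a name I made up:

      ```python
      def build_remap(original_order, model_order):
          # Map an output index from the model (trained on model_order)
          # to the index the original metadata uses for that synset.
          original_index = {synset: i for i, synset in enumerate(original_order)}
          return [original_index[synset] for synset in model_order]

      # Toy demonstration with fake synset IDs.
      original = ["n02", "n01", "n03"]  # ordering in the original metadata
      alphabetical = sorted(original)   # ordering the model was trained with
      remap = build_remap(original, alphabetical)
      # the model's class 0 is "n01", which is index 1 in the original metadata
      print(remap)  # [1, 0, 2]
      ```

      The accuracy computation then scores `remap[prediction]` against the original labels instead of the raw prediction.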

      I’m always glad to see this kind of tradecraft get mentioned; it has a tendency to get lost in discussion of models and abstract concepts, maybe because it’s an arbitrary implementation detail, but it’s critical for replicating results.