1. 28

  1. 8

    Just as a heads-up: the ML tag refers to the ML programming language and its relatives like OCaml. In fact, I don’t think there is a tag in the tag list for data analysis/statistics/machine learning – these stories usually get tagged under either AI or maths, as far as I can tell.

    [Edited to linkify the tags]

    1. 4

      Oops, thanks for pointing that out. Fixed!

    2. 1

      Medium-term, I think we’re going to need some new benchmarks for what constitutes “performance” in the image-recognition domain, at least for open domains like self-driving cars, where robustness is probably as important as headline classification accuracy (this may matter less if your domain is more controlled, like object recognition in Amazon warehouses or something).

      Classification accuracy on ImageNet is the main benchmark used so far, and it’s been useful in driving progress, but it doesn’t include any measure of robustness, either to noise or (especially) to adversarial examples. It’s not yet entirely clear what the right metric should be, though, which is why I think research is currently at the case-study stage: essentially demonstrating existence proofs of various phenomena before trying to generalize.
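
      For a concrete sense of what measuring robustness alongside accuracy might look like, here’s a minimal sketch (assuming PyTorch; the tiny untrained CNN and the random tensors are hypothetical placeholders for a real trained model and evaluation set) that reports clean accuracy next to accuracy under Gaussian noise and under FGSM adversarial perturbations:

      ```python
      import torch
      import torch.nn as nn

      # Placeholders: a tiny untrained CNN and random "images"/labels.
      # In practice you'd load a trained ImageNet model and a real eval set.
      torch.manual_seed(0)
      model = nn.Sequential(
          nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
      )
      model.eval()
      images = torch.rand(32, 3, 32, 32)
      labels = torch.randint(0, 10, (32,))
      loss_fn = nn.CrossEntropyLoss()

      def accuracy(x):
          # Fraction of examples classified correctly.
          with torch.no_grad():
              return (model(x).argmax(dim=1) == labels).float().mean().item()

      def fgsm(x, eps):
          # Fast Gradient Sign Method: one gradient step on the input,
          # clamped back to the valid pixel range.
          x = x.clone().requires_grad_(True)
          loss_fn(model(x), labels).backward()
          return (x + eps * x.grad.sign()).clamp(0, 1).detach()

      noisy = (images + 0.05 * torch.randn_like(images)).clamp(0, 1)
      print(f"clean accuracy:     {accuracy(images):.3f}")
      print(f"gaussian-noise acc: {accuracy(noisy):.3f}")
      print(f"FGSM (eps=4/255):   {accuracy(fgsm(images, 4 / 255)):.3f}")
      ```

      The point of the sketch is just that a “robustness benchmark” could report the whole row of numbers rather than only the first one; a single headline scalar hides exactly the failure modes the adversarial-example case studies keep demonstrating.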