1. 9

    This could be seen as an interesting precedent for the legal aspect of using cameras in public:

    If the image quality is high enough to allow facial recognition, you are in breach of the GDPR.

    1. 3

      Did I miss something from the article, or are these two points correct:

      • the author uploaded one face into Rekognition (so she has a Rekognition DB of 1 face)
      • the author then selected a picture containing two people, and Rekognition detected that one of them is her (with 90% certainty)

      Can somebody who knows Rekognition elaborate on what Rekognition uses in this case to decide on 90% certainty? What would change if she created a Rekognition database with 100 different faces? Would the certainty be different, or still the same because Amazon internally might use their knowledge about millions of faces?
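
      For concreteness, here is roughly what I understand the two steps to look like against the real API (a minimal boto3 sketch; the collection name and file paths are made up). Note that SearchFacesByImage only compares the largest face it detects in the probe image:

      ```python
      import boto3

      rekognition = boto3.client("rekognition", region_name="us-east-1")

      # Step 1: create a collection and index a single face into it.
      rekognition.create_collection(CollectionId="author-faces")
      with open("author_selfie.jpg", "rb") as f:
          rekognition.index_faces(CollectionId="author-faces",
                                  Image={"Bytes": f.read()})

      # Step 2: search a second photo (the one with two people) against
      # the collection. Only the largest detected face is used as the probe.
      with open("photo_with_two_people.jpg", "rb") as f:
          response = rekognition.search_faces_by_image(
              CollectionId="author-faces",
              Image={"Bytes": f.read()},
              FaceMatchThreshold=80,  # only return matches above this score
          )

      # Each match carries the Similarity score -- the "90%" in question.
      for match in response["FaceMatches"]:
          print(match["Face"]["FaceId"], match["Similarity"])
      ```

      As far as I can tell, the Similarity score is a pairwise comparison between the probe face and each indexed face, so a collection of 100 faces should give the same score for her face, just with more candidates to rank. I'd be happy to be corrected on that.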

      I am trying to understand whether this can really be used for mass surveillance or not. The German police have tested similar systems and the precision was extremely bad: you just get too many false positives (or false negatives, which in their case they wanted to avoid, because they wanted to show they could detect criminals). So my assumption is that Rekognition would also produce a lot of false positives for the face if given a video stream of a whole day.
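
      A back-of-the-envelope base-rate calculation shows why field precision collapses even with a seemingly good classifier (all numbers below are made-up assumptions, not figures from the German trial):

      ```python
      # All numbers are illustrative assumptions.
      faces_scanned_per_day = 100_000  # e.g. a busy train station
      wanted_persons_passing = 10      # people actually on the watchlist
      false_positive_rate = 0.001      # 0.1%, optimistic for field conditions
      true_positive_rate = 0.90        # sensitivity of the matcher

      false_alarms = (faces_scanned_per_day - wanted_persons_passing) * false_positive_rate
      true_alarms = wanted_persons_passing * true_positive_rate
      precision = true_alarms / (true_alarms + false_alarms)

      print(f"false alarms/day: {false_alarms:.0f}")  # ~100
      print(f"true alarms/day:  {true_alarms:.0f}")   # 9
      print(f"precision:        {precision:.1%}")     # ~8%: most alerts are wrong
      ```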

      1. 2

        I think it comes down to the number of features. You can see there are 32 or more features, each with a gazillion possible values. This is enough to create a “fingerprint” of your face. Information is crazy.
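
        Back-of-the-envelope (the 32-feature figure is from above; the values-per-feature count is just an assumption):

        ```python
        features = 32             # figure from the comment above
        values_per_feature = 100  # assumption: ~100 distinguishable levels each

        distinct_fingerprints = values_per_feature ** features  # 100**32 == 1e64
        world_population = 8e9

        print(f"{distinct_fingerprints:.0e} possible fingerprints "
              f"vs {world_population:.0e} people")
        ```

        Even with coarse quantization the space of fingerprints dwarfs the population; in practice accuracy is limited by how noisy the measured features are, not by the size of the space.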

      2. 0

        Compare this to the more thoughtful approach Google has taken for its “celebrity recognition API”:

        1. The API only recognizes celebrities, who may opt out of being available through the API. Customers can’t add more celebrities.
        2. It’s only available to manually-screened organizations in media/entertainment.

        I wonder at what point this API won’t be useful any more if enough celebrities opt out.

        Disclaimer: I work at Google, but not on this API.

        1. 2

          > It’s only available to manually-screened organizations in media/entertainment.

          I think Google picking and choosing which organisations and industries get to use their AI systems is a bug, not a feature.

          1. 1

            Having a moral compass and exercising discretion is the bare minimum for behaving ethically.

            1. 1

              I don’t think so. Google should be able to decide a company is not going to use this product ethically and refuse service.

              A devious example: a company analyzes CCTV feeds, recognizes celebrities, and sells the sightings to anyone who pays (paparazzi or stalkers). Not good.

              1. 1

                I agree. But that’s not what is happening here: they are only allowing a very small set of customers in, not banning those who abuse the system.

                (And in this specific case I also question their judgement: media/entertainment doesn’t have a great reputation for ethics, because I think they’re talking more about gossip sites than journalism.)

                Edited to add: Google also has a bad track record in this area. Just today: https://www.bleepingcomputer.com/news/google/google-now-bans-some-linux-web-browsers-from-their-services/