We do not support the use of this project in applications that violate privacy and security. We are using this to help cognitively impaired users to sense and understand the world around them.
Shame they didn’t put such a stipulation on their license.
What, people think that this kind of oppression-tech just magically appears? If you work on these things, you’re going to contribute to the problem. We need to take more responsibility for our code.
How do you propose they go about enforcing this via their license?
The boundaries of what face-recognition means to privacy are definitely still being figured out - it’s a conversation society has been reluctant to have. Personally, I’m immensely happy to see it, for a personal reason: I often feel like I’m at a privacy disadvantage because almost anyone who’s ever met me has filed my face in their brain’s database, but prosopagnosia prevents me from doing the same for theirs. And the researcher’s statement seems to recognize the importance of that use-case, which I don’t usually see acknowledged as a thing worth caring about.
Note that this is a single CMU researcher implementing a published algorithm; I don’t think this particular software release is enabling anything that wasn’t reasonably possible already, and clearly an engineer working alone isn’t in a position to formulate the guiding principles for a privacy area nobody’s feelings are clear on. And then it’s a very long way from those guiding principles to something legally meaningful. I definitely believe this conversation needs to happen, and perhaps efforts like this will motivate it.
I’m not sure that the privacy disadvantage here is well-solved by removing other people’s privacy. :(
I do agree with that.
I mean, there are plenty of commercial vendors of face-recognition software designed with features that aid surveillance. I’m sure they’re already benefiting from the upstream research. And then there’s photo tagging on social networks; those companies have the resources to do this without help. These things are happening with or without open-source efforts.
I’ve never seen even a single piece of software that uses face-recognition for accessibility purposes, which is the only use-case where an open-source implementation is likely to make a difference to what gets written.
The other possible outcome, getting everyone to stop using Facebook and Google+, is unfortunately never going to happen.
I wonder if there have been legal cases for people trying to be untagged in photos. Probably. “Right to be untagged”?
Well, Crockford has a “Do No Evil” sort of clause in the JSON license ( http://www.json.org/license.html ).
IBM one day called him up and asked for a specific relicensing of it without that clause, so he dutifully elided it for them only.
One can imagine something similar happening here.
IBM being uniquely permitted to do evil because of a quirk of software licensing sounds like a premise for a hokey cyberthriller!
This led to problems for other projects with Crockford’s license though…
I’m not a lawyer and couldn’t possibly comment on whether such vague wording will accomplish anything close to its intent. :)
Arguably. While the purview of cogsci absolutely includes studying what computers can tell us about the mind, I’m not sure anyone is specifically thinking about deep neural networks and what they mean for cogsci; certainly this project doesn’t, nor does the paper it refers to.