1. -5
    Google's plan to destroy free speech online law twitter.com

Comment sections on various major websites are using this “Perspective” API.

The word “Islam” is classified as “70% toxic”, while “Christianity” is classified as “32% toxic”.

“Clinton is good” is classified as “8% toxic”, while “Clinton is bad” receives a “toxic” rating of “67%”.


  2. 10

    This tastes less to me of “Google hates free speech” and tastes more of “optimistic software people think they can solve social problems by throwing technology at them.” Which is usually tragic or hilarious, depending on how far you are from the explosion.

    1. 1

      Yeah, it’s this. Good intentions, cleverness, and naïveté run smack into the real world.

      I find it interesting, though, that they’re willing to just throw something like this out on the web for people to play with. For sure it’s a way to collect data and improve the system, but these kinds of topics are magnets for outrage (and fauxrage). And let’s not forget that people have no idea how machine learning works. They haven’t even heard the term. Statistics, inference, etc. are Greek to them. To the extent they know what “AI” is, they’re thinking of IBM’s Watson ads, or the Terminator. The AI portrayed by those ads seems smart enough that if it came up with the kind of results people are getting from this API, you’d be forgiven for thinking the machine was crazy, or evil. Especially when the story is filtered through multiple media outlets. And then this thing comes out with Google’s name on it…

      I think we, as engineers, need to be careful with this stuff. There’s a lot of good that has come, is coming, and can still come from machine learning, but a few missteps could irreparably taint public perception of the field.

      1. -4

        Nazis didn’t know they were Nazis either.

        1. 5

          Their hats had skulls on them, they probably knew they were Nazis

        1. 4

          I didn’t click all the way down the Twitter thread, but these examples seem pretty accurate. If you see a page seriously considering that “9/11 was an inside job”, there’s probably a hell of a lot going wrong. The Islam/Christianity example isn’t about the religions, it’s about the popular depiction of those religions in the comment section of YouTube. The Clinton one especially - even if you assume both sides of America’s political divide are perfectly equal in their attitudes towards appropriate online discourse - a discussion where someone says ‘X is good’ is a lot more likely to be positive and upbeat than one that starts from ‘X is bad’, for any value of X.

          And even if you think Google has their thumb on the scale trying to censor discussions that don’t fit their political worldview… there’s still no plan here. This title is hysterical clickbait.

          1. 0

            Only an insane or completely delusional person would justify having a single company control the comments sections across various major websites online.

            1. 2

              I’ll keep an eye out for a person justifying that.

              1. -5

                Shouldn’t be hard to find a mirror.

                1. 11

                  I saw a devilishly handsome and steadfastly correct man. Thank you for recommending such a wonderful experience.

              2. 1

                They already tried it the other way. They’re outsourcing moderation because it’s too expensive to do it themselves.

            2. 1

              Looking at the errors, it seems to be detecting negative sentiment and topics that happen to come up a lot in abusive rants. Like a lot of naïve models, it doesn’t necessarily grasp how the words of a given sentence relate to each other, and it doesn’t understand phrasings it wasn’t trained on.

              I can see how imperfect filters could be useful to moderators, just like a human moderator might periodically search for curse words and a few other words like “idiot” to find potentially concerning posts. The sprinkling of “machine learning” jargon on it, and the possibility that people will thus believe in it too much, seem unhelpful.

              Also agree with @pushcx that the title’s overblown. It’s so-so at its job, but not as if that’s a nefarious scheme to target non-Google-friendly ideas, and the idea that this API is on its way to world domination is questionable, too.
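              The failure mode described above, a model scoring words individually without grasping how they relate, can be sketched with a toy bag-of-words scorer. Everything here is invented for illustration; these are not Perspective’s actual model or weights:

```python
# Toy bag-of-words "toxicity" scorer, invented for illustration only.
# Per-word weights stand in for statistics learned from abusive comments;
# a neutral topic word can pick up a high weight just by co-occurring with abuse.
WORD_TOXICITY = {
    "bad": 0.8,
    "good": 0.1,
    "clinton": 0.3,
    "islam": 0.7,   # a topic word, not an insult, but common in hostile threads
}

DEFAULT = 0.2  # score for words the model has never seen

def toxicity(sentence: str) -> float:
    """Average the per-word weights; word order is ignored entirely."""
    words = sentence.lower().split()
    return sum(WORD_TOXICITY.get(w, DEFAULT) for w in words) / len(words)

# Because words are scored independently, negation and context are invisible:
print(toxicity("clinton is good"))     # low
print(toxicity("clinton is bad"))      # high
print(toxicity("clinton is not bad"))  # still scores higher than the positive sentence
```

              A model like this reproduces the thread’s examples exactly: “X is bad” outscores “X is good” for any X, not because of politics but because of the word-level statistics.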