1. 11

  2. 16

    Computers allow us to measure objectively the properties of text.

    I sort of cringe whenever I see this sort of statement. The computer is calculating a number objectively, sure, in that it implements some algorithm to do so, can run it repeatably, can apply the same algorithm to different texts, etc. But if you call that number “sentiment polarity”, well, the computer is not objectively calculating sentiment polarity. Someone has, very subjectively, written an algorithm that they claim measures this concept, and I think we shouldn’t be too credulous about accepting such claims.
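    To make that concrete, here’s a toy lexicon-based “sentiment polarity” scorer (the word lists and weighting are invented for illustration, not any real library’s method). The arithmetic is perfectly objective and repeatable; the claim that the output measures “sentiment” lives entirely in the hand-picked word lists.

    ```python
    # Toy lexicon-based sentiment scorer. The word sets below are the
    # subjective part: someone chose them, and the score inherits that choice.
    POSITIVE = {"great", "good", "love", "excellent"}   # subjective choice
    NEGATIVE = {"bad", "terrible", "hate", "awful"}     # subjective choice

    def sentiment_polarity(text: str) -> float:
        """Return (positive hits - negative hits) / word count."""
        words = text.lower().split()
        if not words:
            return 0.0
        score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
        return score / len(words)

    # Deterministic: the same input always yields the same number.
    print(sentiment_polarity("I love this great product"))  # 0.4
    ```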

    1. 4

      Eh, you can objectively measure the absorption spectra of human skin–that doesn’t mean we should automatically assume that’s a good way of driving policymaking.

      The nice thing about having these objective (really: deterministic) measures of text is that we can apply them over and over and get the same results, and if we take two sets of text (hypothetically, Roosevelt and Trump speeches) and the metrics come out similar but our intuitions disagree, we can conclude one of two things:

      • the metrics are failing to account for something
      • the metrics are correct, and it’s our own biases that are wrong

      That’s what makes algorithmic analyses useful…they let us eliminate human error and bias in the drudgery.

      Oh, and they’re what drive a lot of advertising and marketing these days–so, somebody thinks that they’re useful.

      1. 17

        Yeah, I’m not saying we shouldn’t do NLP or quantitative text metrics; I just have a minor allergy to variations of “this was objectively determined by an algorithm”. The more low-level and “technical” the metric, the less objectionable it is (to me). I can believe that you can objectively determine the frequency of pronoun usage in a text, but it’s a higher bar to objectively determine the “sentiment” of a text. I have an extra bone to pick with that one in particular, because sentiment-analysis algorithms are mostly pretty bad–to the point where I’m not sure the label “sentiment” is helpful for anything except encouraging journalists to over-interpret the results. I’d rather they’d invented some new NLP-specific jargon term; then I’d have no objections to objectively measuring the “Jacobsen Frn3 metric” or something.
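        The low-level end of that spectrum really is just counting. A minimal sketch (the pronoun list here is an assumption for illustration, not a standard inventory):

        ```python
        import re

        # Hypothetical pronoun list for illustration; a real analysis would
        # pick an inventory deliberately, and that pick is itself a choice.
        PRONOUNS = {"i", "me", "my", "we", "us", "our", "you", "your",
                    "he", "him", "she", "her", "it", "they", "them"}

        def pronoun_frequency(text: str) -> float:
            """Fraction of word tokens that are pronouns."""
            words = re.findall(r"[a-z']+", text.lower())
            if not words:
                return 0.0
            return sum(w in PRONOUNS for w in words) / len(words)
        ```

        There’s no interpretive layer here to argue about–only the tokenization and the word list, both of which are stated up front.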

        (Neuroscientists are frequently guilty of this too: doing perfectly competent measurements, but then adding a subjective layer of wild overinterpretation of the results, claiming they’ve shown something about “mood” or “free will” or “love” or whatever, where what they actually objectively measured is not connected to these concepts in any really solid way.)

        1. 1

          Ah, fair enough. Thanks for clarifying. :)

    2. 3

      I think the findings make sense if you consider that he used a speechwriter, like every President before him. Improvisational Trump is a wildly different orator than this Trump, even if his delivery is similar.

      1. 2

        I’d love to see someone run all of the inauguration speeches through Watson’s Personality Insights service.

        disclosure: I’m an IBMer, too busy to do it myself