A friend just emailed me this story, which discusses this project by David Skillicorn and Ayron Little of Queen's University. Their project is also discussed here. The project is, in essence, an attempt to quantify the degree of spin in the speeches of electoral candidates, in particular those of today's Canadian federal election.
The theory behind it is based on work by James Pennebaker at UT Austin. Pennebaker claims that the language of deceivers differs in measurable ways from the language of truth tellers: for example, liars use fewer 1st-person pronouns, more negative-emotion words, and more action verbs. Note that what counts as a 1st-person pronoun is uncontroversial, but what counts as a negative-emotion word or an action verb is more subjective.
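To make the flavour of such a measure concrete, here is a minimal sketch of what marker counting might look like. The word lists below are invented, drastically abridged stand-ins; Pennebaker's actual work relies on large hand-built dictionaries (the LIWC lexicon), and nothing here reflects the real scoring used by Skillicorn and Little.

```python
import re

# Invented, drastically abridged stand-ins for the real dictionaries;
# Pennebaker's work uses large hand-built word lists (e.g. LIWC).
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIVE_EMOTION = {"hate", "angry", "afraid", "terrible", "worried"}
ACTION_VERBS = {"go", "run", "take", "make", "fight"}

def spin_markers(text: str) -> dict:
    """Count each marker class as a rate per 1,000 words."""
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words) or 1  # avoid division by zero on empty input
    rate = lambda vocab: 1000 * sum(w in vocab for w in words) / n
    return {
        "first_person": rate(FIRST_PERSON),
        "negative_emotion": rate(NEGATIVE_EMOTION),
        "action_verbs": rate(ACTION_VERBS),
    }

# On the model's reading, a "spinning" speech would show a lower
# first_person rate and higher negative_emotion and action_verbs rates.
print(spin_markers("I promise we will fight for what we believe in."))
```

Even this toy version makes the subjectivity problem visible: change the membership of NEGATIVE_EMOTION or ACTION_VERBS and the scores move accordingly.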
OK, suppose these findings about usage are empirically sound (which is not immediately clear to me). Several questions remain unanswered:
First, are the "reasons" for each usage properly attributed? For example, do liars really avoid 1st-person pronouns in order to avoid responsibility, or for some other reason?
Second, can you get a false positive? That is, while a liar might use relatively few 1st-person pronouns, does it follow that everyone who avoids 1st-person pronouns is a liar? Put another way, the model claims that liars underuse 1st-person pronouns; to detect underuse of such pronouns and then infer deception on the part of the speaker is to affirm the consequent.
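To see how the false-positive worry plays out numerically, here is a toy base-rate calculation. All three probabilities are invented purely for illustration and have nothing to do with Pennebaker's actual findings.

```python
# Toy numbers, invented for illustration only.
p_liar = 0.10                  # prior: 10% of speakers are lying
p_underuse_given_liar = 0.90   # P(underuses pronouns | liar)
p_underuse_given_truth = 0.30  # P(underuses pronouns | truth teller)

# Bayes' rule: P(liar | underuses pronouns)
p_underuse = (p_underuse_given_liar * p_liar
              + p_underuse_given_truth * (1 - p_liar))
p_liar_given_underuse = p_underuse_given_liar * p_liar / p_underuse

print(f"P(liar | underuses pronouns) = {p_liar_given_underuse:.2f}")
# -> 0.25: even on these generous assumptions, three out of four
#    pronoun-avoiders are telling the truth.
```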
Third, is "spin" the same as "deception"? Both are misleading, but it may be that the theory of language and lying applies only to deception in the sense of knowingly uttering something false. Spin, on the other hand, is more a matter of framing than of outright falsehood.
Fourth, one could easily load a model of spin to fit one's expected results; this is what I mean by the title of this post. I don't think that Skillicorn and Little did this, but hypothetically one could detect a greater occurrence of a particular structure in the language of a particular candidate, and then design a model post hoc that associates that structure with whatever trait the analyst wants to attribute to that candidate. Similarly, someone could come along and propose that each of these measures actually detects happiness, or existential turmoil, or whatever.
The only thing the measures show is that the candidates in question exhibit different linguistic patterns. What's unfortunate is that, despite the authors' own admonition that the results "be taken with a grain of salt", the algorithm ends up looking like an objective and credible measure of spin. A real test of the model would require an independent measure of each candidate's truthfulness. I'm all for detecting spin wherever it occurs, but I'd prefer not to see someone falsely portrayed as a spinner on the strength of an unproven methodology.