Ever get a call from a number you don’t recognize?  Ever make the mistake of answering it? I know I have.

Recently, the people who own and operate the site were given the rare opportunity to preview an advance copy of a book!  Since the subject will be a highly relevant topic around its release date, I took the bait.  My problem, however, was that I was unsure how to approach such an article. I will say upfront this is well researched; all the arguments made in the book flow logically and are diligently supported by citations to respected academic sources.  Do I do this right and feature a worthy beer, or do I do this right and generate as much interest as possible? In the spirit of the book’s subject, I reviewed comment totals as a proxy for interest in my past articles and determined Glibs are much more interested when I drink something terrible.

This is my review of Honey Brown

The book is titled Data in Decline by Steve A. Wood

Given recent headlines predicting blue waves, crimson rushes, brutal mobs, silent majorities, et cetera, all coupled with standard internet tough-guy talk on all sides, the book seems all too timely in its release.  Everyone in the media is driving narratives based on polls that purport to capture national or local political sentiment. The problem, of course, is that in several recent elections the polls were wrong, most notably the 2016 Presidential Election.  We can speculate about how these broken polls affected current political discourse, given that both sides insist they are in the majority, that agendas should be set accordingly, and that the other side can just shut up. The truth is we really don’t know, because there is no reliable way of determining that outside of election day, and quite frankly even then it shouldn’t matter, because our system of government is designed to respect the opinions of the minority.

Still, there must be a better way of conducting these polls, but finding it requires first identifying what is going wrong with present methodology.  Because of the complexity of the subject at hand, this is not a book that should be reviewed in a single article. Today I am going to focus on excerpts about the internal biases that arguably drive poor polling results.

A cliché that comes up when right-of-center circles discuss polls is that nobody in the comment section ever claims to have been contacted by a pollster.  Personally, I have been, on multiple occasions, during the campaign season of nearly every election since I was old enough to vote. The only respite was 2008, but I was in the Middle East at the time.  I will let everyone here speculate as to why they keep calling me, but sampling bias is the reason most often cited. One interesting point Wood makes is that the bias being revealed may be not the pollster’s but the pollee’s.

Canvassing also creates a self-selection bias, for the simple fact that people don’t often like stopping to talk to people on the street. A canvasser’s cause is generally readily apparent, so individuals with a particular interest in a given subject are thus far more likely to stop and talk to the canvasser. In contrast, others may project their negative biases onto the canvasser and deliberately ignore them as a result. While this can help researchers reach certain quotas, it skews the perceived level of support because little information is gathered from those with less substantial interest in the subject matter.

Over the last few weeks I was contacted four separate times by people working for campaigns, all of whom were looking for information from me along with gauging my interest in voting.  For those interested in knowing: yes, all were from Democrat campaigns. Two attempts were from actual volunteers who came to my door.  While I do not believe I am an intentionally sour person to speak with, it is something I have been accused of in the past.  I made no attempt at hiding my distaste for their being at my door while maintaining as polite a tone as possible.  At least that is my side of the story; it is not like I pulled a gun, or that they can prove in court I wasn’t under duress at the time.

One simply wanted me to register to vote in the Democrat primary.  The conversation took about three minutes, in spite of my having to explain that not being a Republican does not make me a Democrat.  The other actually did ask me what issue I cared about most, and instead of the standard Glib retort (gay, pot-smoking Mexicans) I asked if he had a list on the tablet he was carrying; I thought it would help reveal who he worked for.  The canned response, “not trusting Republicans in power,” with no analogue for the other side, suggested who was paying this volunteer.  In the end my only response was, “the economy.”  He then left me alone.

Continuing further into the book, Wood discusses potential reasons why the polling data itself may be subject to sampling bias.  He provides thoughtful suggestions as to why this is the case and presents examples with citations to corroborate his claims, such as:

If strongly partisan Democrats are far more likely to respond to an opinion poll than strongly partisan Republicans—which is arguably the case since these same polls indicate 52% of strong Democrats trust polls compared to 27% of strong Republicans14—the results of those polls are likely to contain bias. The effect is comparable to Literary Digest’s oversampling of Republicans in 1936 by drawing respondents from populations made up of voters who tended to be more Republican than the overall electorate.

That this disparate impact comes at the same time as the rise in narrowcast media, which allows individuals to curate and filter which information makes its way into their consciousness, makes obtaining participatory buy-in from study population members much more difficult than it has been in the past. People are becoming far more accustomed to actively filtering what information they take in. Everything from ad blockers to phone call filters have allowed confirmation bias, “the seeking or interpreting of evidence in ways that are partial to existing beliefs,”15 to flourish in our daily lives.

True.  We all live in a bubble of our own creation.  Don’t think you live in a bubble?  Guess what this website is.  If past discussions here and in other dark corners of the internet are indicative, the overall sentiment toward polling is that polls are as trustworthy, or more appropriately as untrustworthy, as the media outlet reporting them.   It’s gotten to the point that some will simply cite European betting odds as more trustworthy, or even use crude methods to neutralize the bias in the data (i.e., just add 5 points to the Republican’s result).
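
To make that mechanism concrete, here is a minimal sketch of differential nonresponse, mine rather than Wood’s. It assumes a perfectly even 50/50 electorate and, purely for illustration, presses the excerpt’s trust figures (52% and 27%) into service as response rates; the run_poll helper and all the numbers are assumptions, not anything from the book.

```python
import random

random.seed(42)

TRUE_DEM_SHARE = 0.50                    # assumed: a perfectly even electorate
RESPONSE_RATE = {"D": 0.52, "R": 0.27}   # assumed: trust figures as response propensities

def run_poll(n_contacts=100_000):
    """Contact n_contacts random voters; only some agree to respond."""
    responses = []
    for _ in range(n_contacts):
        voter = "D" if random.random() < TRUE_DEM_SHARE else "R"
        if random.random() < RESPONSE_RATE[voter]:  # differential nonresponse
            responses.append(voter)
    return responses.count("D") / len(responses)

observed = run_poll()
print(f"True Democratic share:        {TRUE_DEM_SHARE:.1%}")   # 50.0%
print(f"Polled Democratic share:      {observed:.1%}")         # ~65.8%
print(f"After 'add 5 to the GOP' fix: {observed - 0.05:.1%}")  # still ~60.8%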

Another example cited as a reason the data is subject to bias:

Facebook defines advertising fatigue as “[w]hen everyone in your target audience has already seen your ad many times, it becomes more expensive to achieve desirable results.”35 More broadly, over-tasking human awareness with frequent interruptions and distractions substantially reduces people’s overall functionality;36 populations which have been inundated for extended periods are already operating at a base capacity of 60% at best.37 As audiences become saturated with ads, it becomes increasingly expensive and difficult to reach them, capture their focus, and engage them by any means.

Indeed, I ignore things on my screen as I tire of reading them.  It certainly helps that many web pages put their ads in the same places, which allows me to ignore them more efficiently.  These ads sometimes lead to a survey.  This is not the only bias suggesting that the only people responding to a poll are people who actually want to respond.

Although the Bradley Effect has largely been written off by social scientists, the term has evolved to essentially cover all cases in which respondents lie or otherwise deliberately provide false data to pollsters. The concept continues to live on because the general principle of survey respondents misinforming interviewers has seemingly manifested in other forms.

The Shy Tory Factor is one of those manifestations, one which focuses on political parties and philosophies in general rather than specific individuals. This phenomenon was first discovered in Great Britain, where it was found that Conservative voters may refuse to answer pollsters honestly, indicating that they supported the Tory party less than they did. This effect has also been found to understate support for the Republican Party in the United States.66

[…]

However, due to the already questionable nature of polls, it is possible that the Shy Tory Factor as it is observed is in truth a manifestation of compounded sampling bias and self-selection bias.67 This is difficult to reconcile with the fact that the effect seems to be more pronounced in surveys where the respondents have higher levels of personal contact with the research team, but is worth considering.68

Sounds like there is some level of truth to the theory that in 2016 people were not willing to tell somebody outside their inner circle that they supported Donald Trump. To be perfectly fair, I only mention this because it does confirm my own biases.
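
For the curious, the Shy Tory mechanism itself is just as easy to sketch. Again, this is my own toy model, not Wood’s: every voter responds, but an assumed fraction of Conservative supporters tell the pollster they support someone else.

```python
import random

random.seed(7)

TRUE_CON_SHARE = 0.45   # assumed: true Conservative support
SHY_RATE = 0.10         # assumed: share of Con supporters who misreport

def shy_poll(n_respondents=100_000):
    """Every voter answers, but some Conservatives claim the other side."""
    reported_con = 0
    for _ in range(n_respondents):
        is_con = random.random() < TRUE_CON_SHARE
        if is_con and random.random() >= SHY_RATE:  # the honest majority
            reported_con += 1
    return reported_con / n_respondents

print(f"True Conservative support:   {TRUE_CON_SHARE:.1%}")  # 45.0%
print(f"Polled Conservative support: {shy_poll():.1%}")      # ~40.5%
```

A misreporting rate of just ten percent shaves roughly four and a half points off the polled figure, comfortably larger than most margins of error.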

If there are so many problems within the polling data, problems that seem so obvious once they are spelled out logically like this, why has there not been any drive to update polling methods?

Just as politicians can suffer from record low approval ratings yet are continually re-elected, pollsters’ clients keep committing themselves to the same groups and practices which have increasingly failed in the first decades of the 21st century. Congressional representatives and senators who keep their jobs despite their track records have about as much of a reason to change as researchers who keep their jobs despite theirs.

Right.  There is no incentive to change anything if the desired result of staying in power continues to be achieved.


Data in Decline, by Steve A. Wood, will be available on Kindle on 15 October 2018.  Stay tuned next week for part two, where I will provide more excerpts discussing the problems professional polls encounter when accounting for sampling biases, and their failure to address them.

As for the beer…Honey Brown is terrible. It tastes like adult-onset diabetes in a can, and I cannot in good conscience recommend it.  I would almost rather have purchased another Earthquake in its stead. Almost. Honey Brown: 1.8/5.