“3 out of 4 Americans believe that killing animals for meat is immoral, according to an MSNBC-MediaMatters-PETA poll. Based on this information, Congress is debating new rules put forth by the FDA to tax meat production at a higher rate than vegetable production.”

Insert dead babies joke here.

The infant mortality rate in Mississippi and Alabama is nearly double what it is in New York and California!!!! Oh wait, it is 0.4% in NY and CA and 0.8% in MS and AL. And wait, there’s more! There’s a longstanding and well-known correlation between poverty and higher infant mortality. CA and NY are two of the richest states and MS and AL are two of the poorest. What, exactly, is the point of this article except to poor-shame the South?

Every media outlet has published a metric ton of articles that start just like this. It’s lazy writing, but it’s also the platinum standard for crafting a narrative. See, people are social animals, primed to go with the crowd. When that subconscious impulse is manipulated through polling, people’s behavior becomes malleable. With a basic understanding of the strata of voters and their belief systems, you can get them to do your bidding without them even knowing it.

Three basic concepts make public opinion polling an irresistible tool for biasing an audience: herd behavior, identity politics, and the aura of authority. The formula is simple: take a favorable polling result, generalize the findings so that a majority of an identity group appears to believe a certain way, and rely on herd behavior to solidify support for the belief within that identity group.

The ironic part is that none of what makes public opinion polling such a strong tool is based in reality. The herd behavior rests on an illusion. By and large, support for a politically controversial position sits somewhere between 40% and 60%, meaning that nearly half of people oppose said controversial position. Further, polling doesn’t allow for enough nuance to differentiate between someone opposed to legalizing machine guns and someone who wants the 2nd Amendment repealed. Identity politics is subtler than it looks, too. Take, for example, the approval/disapproval ratings of prominent politicians. If identity politics were the primary driver of public opinion, the surges and drops in approval ratings would be far more attenuated than what we actually observe.

However, the illusion of universal agreement is very powerful.

Social Science is Modern Day Astrology

The holy grail of science is replicability. If you can produce an effect in one study, you should be able to replicate the conditions and achieve the same effect in a successive study. In physics or chemistry, this is usually fairly straightforward. Barring some unknown environmental variable affecting the experiment, the bowling ball and the feather land on the ground at the same time in a vacuum. The sodium and the water produce a highly exothermic reaction when combined.

Social science is much squishier, both in methodology and in result. When you’re working with people, they don’t behave like molecules in a vacuum. They lie, they are affected by minor biases in your methodology, they are subject to many weird psychological effects like the placebo effect, and they don’t take kindly to being locked in a laboratory for 15 years for a longitudinal study.

As a result, more social science is done by “poll” than by “experiment.” Not that the experimental method is any better. I experienced the infamous psychological experiment (the implicit association test) where they flash pictures of people of different races and then time how fast you click on the good word or the bad word.

This has led many skeptics to put scare quotes around social “science,” which more and more resembles phrenology than physics. Adding more fuel to the fire is the “replicability crisis,” which affects both experimental and poll-based studies. Essentially, social science can’t find the same effect two times in a row. Not only that, but researchers can make a study say that almost any effect exists (such as that listening to songs about old people makes you younger).
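If you want to see why the same effect keeps failing to show up twice, here’s a minimal simulation, in Python with invented numbers, of a small, noisy study run over and over. It isn’t modeled on any real experiment; it just shows what low statistical power does to replication.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A small true effect measured with a small, noisy sample -- a common setup
# in social science. How often does a single run reach p < 0.05?
true_effect, n = 0.2, 30  # invented effect size and per-group sample size

def one_study():
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    return stats.ttest_ind(treated, control).pvalue

pvals = [one_study() for _ in range(2000)]
power = np.mean([p < 0.05 for p in pvals])
print(f"single runs that 'find' the effect: {power:.0%}")  # roughly 10-15%
# Even though the effect here is real, a study that got lucky the first time
# will usually fail to replicate -- and that's before any p-hacking starts.
```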

However, in a world that fucking loves science and decides social policy by sound bite, the internal crisis in social science becomes a very public issue. As discussed in Part 1, science journalism is a farce. When an ethically compromised journalism industry interacts with an ethically compromised social science industry, you get science journalism that is a slave to the agenda of the media. We live in a world where science is subservient to the state. If you publish something that aligns with the state’s goals, you get media coverage and additional grant funding. If you try to publish something that goes against the state’s goals, you get undermined at every step.

Manipulating the Results: Bias in the Experiment

People are quite malleable, as I’ve already said, and this is evident in the results of studies. Wording is very important. Want an anti-abortion poll result? Mention “mother” and “convenience.” Want a pro-abortion poll result? Mention “choice” and “woman.” I’ll let the next example speak for itself.

An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There are quite a few known phenomena that influence studies, as well.

Acquiescence Bias – Making a statement and asking the poll taker to agree or disagree. Folks with lower education will usually agree with the statement at a disproportionately higher rate than when the same issue is asked in a question format.

Social Desirability Bias – We saw a bunch of this last election cycle. People don’t like telling others about their illegal behavior or unpopular opinions, so they’ll simply lie to make the poll giver like them.

Question Order Bias (“Priming the Pump”) – Ask a question that will likely get a positive or negative reaction, then follow it with the question you want to influence in that positive or negative direction. For example, if I were to ask y’all whether you like the current spending levels of the federal government and then followed it up with a question about whether you like deep dish pizza, the pizza answers would skew negative.

Interviewer Effect – Related to the Social Desirability Bias. The poll taker changes their responses based on characteristics of the poll giver. For example, if a woman is giving a poll on equal pay, the poll taker may respond more favorably than if a man gives the poll.

Observer Effect – The poll taker is subtly affected by the poll giver’s unconscious cues, resulting in their responses being biased toward the poll giver’s expectations. For example, if the poll giver expects that black people will answer a question a certain way, they may change their inflection when asking the question in a way that influences a black poll taker to answer in that way.

This still ignores the cognitive biases that we have talked about in Parts 1 and 2.

How do you sort through all this crap and get to a real, measurable effect? You design a good experiment. How do you design a good experiment when taking a survey? You don’t.

Manipulating the Results: Playing with the Data

Okay, so we have highly questionable data from a shit survey, but at least we’re now in the realm of math. Nothing can go wrong here!

I’m going to start with a book recommendation: How to Lie with Statistics by Darrell Huff.

A core requirement of legitimate polling is “randomization.” Taking a random sample of the group you’re trying to study is what allows you to generalize the results to the group as a whole. If you do something to disrupt the random sample, you weaken that ability to generalize.

How do people screw with the random sample?

Weighting – Let’s say you’ve done a 1,000-person survey, but you’re concerned that your relatively small (but random) sample isn’t actually representative of the world. See, you’re a savvy pollster and you know that a recent poll showed that there are 41% Democrats, 37% Republicans, and 22% Independents in the locality of your poll, while your sample has 39% Democrats, 40% Republicans, and 21% Independents. We’ll just inflate the results of the Democrats to reflect 41%, deflate the results of the Republicans to reflect 37%, mildly inflate the results of the Independents to 22%, and do our further analysis based on this massaged data. Of course, this assumes that the pollster’s understanding of reality is correct, and it screws with the randomization of the data, creating a real danger that the data no longer reflects reality.
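Here’s a minimal sketch of that weighting step in Python, using the made-up party numbers above plus an invented approval question, just to show how much the “correction” moves the topline and how completely it depends on the pollster’s assumed party mix.

```python
# Post-stratification weighting, sketched with the hypothetical numbers above.
# Each group's responses get scaled so the sample's party mix matches what the
# pollster *believes* the true population mix to be.

sample_share = {"D": 0.39, "R": 0.40, "I": 0.21}  # party mix in the raw sample
assumed_pop  = {"D": 0.41, "R": 0.37, "I": 0.22}  # mix the pollster assumes is real

weights = {p: assumed_pop[p] / sample_share[p] for p in sample_share}
print(weights)  # D ~1.05, R ~0.93, I ~1.05

# Invented approval rates observed within each party group
approval = {"D": 0.80, "R": 0.10, "I": 0.45}

raw      = sum(sample_share[p] * approval[p] for p in sample_share)
weighted = sum(assumed_pop[p]  * approval[p] for p in sample_share)
print(f"raw topline: {raw:.1%}, weighted topline: {weighted:.1%}")  # ~44.7% vs ~46.4%
# If the assumed party mix is wrong, the "corrected" number is wrong too --
# and that error is invisible in the published result.
```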

Margin of Error – You survey 1,000 people, and 44% love Trump and 46% hate him. Therefore, Trump is unpopular on net. Well, except for the margin of error. For a 1,000-person survey in a country of ~300 million, the results are only roughly correct. “Roughly correct” means that your poll (and others designed the same way) will land within about 3% of reality 95% of the time. This, of course, assumes a representative (read: random) sample.
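That ±3% isn’t pulled out of thin air; it falls out of the standard 95% confidence interval formula for a proportion, which at n = 1,000 works out to about 3.1 points. A quick sketch, assuming a simple random sample (exactly the assumption real polls struggle to meet):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a 1,000-person poll -- note that the ~300 million
# population size doesn't even appear in the formula:
print(f"{margin_of_error(0.5, 1000):.1%}")  # ~3.1%

# So 44% vs 46% is a 2-point gap with ~3-point error bars on *each* number:
# statistically, the poll can't tell you which side is actually bigger.
```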

Data dredging – Let’s do a huge survey asking a zillion questions. Then let’s go fishing for correlations between variables. We’ll just ignore that correlation does not imply causation, because who actually believes in that? It actually makes for some amusing reading.
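To see how well the fishing expedition works, here’s a toy simulation: 40 survey questions answered with pure random noise, every pair tested for correlation at the usual p < 0.05 threshold. There are no real relationships anywhere in this data, yet dozens of “significant” correlations fall out anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_respondents, n_questions = 1000, 40
# 40 survey "questions" answered completely at random -- no real relationships.
answers = rng.normal(size=(n_respondents, n_questions))

significant, pairs = 0, 0
for i in range(n_questions):
    for j in range(i + 1, n_questions):
        r, p = stats.pearsonr(answers[:, i], answers[:, j])
        pairs += 1
        significant += p < 0.05

print(f"{significant} 'significant' correlations out of {pairs} pairs of pure noise")
# Roughly 5% of 780 pairs -- call it 40 publishable-looking "findings" from nothing.
```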

Fudging the data – How about we do 15 runs of the survey, pick the 3 that most support our hypothesis, and publish a paper with the results of those 3 data runs?

A more technical issue is highlighted by Anscombe’s quartet: four completely different sets of data that are statistically identical by the usual summary measures. Why does that matter? Let me tell a story from Poli Sci 300-something, Statistics for Political Science. One of the main statistical analyses performed by Poli Sci statisticians is linear regression. Linear regression (which you may remember from 5th grade math) tries to fit data to a straight line (technically you can fit other curves, too). The problem is that you have to predetermine the type of curve you’re fitting; it doesn’t self-tailor. If there’s an exponential relationship between libertarianism and small-government views, it won’t fit well to a straight-line regression. It struck me, sitting in that class, how much statistical analysis is an art, not a science. If you don’t have the math and the conceptual understanding behind the numbers (as most social science students don’t), you’re going to come to somewhat worthless results when doing statistical analysis.
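Here’s the libertarianism example as a toy calculation (synthetic data, nothing measured): fit a straight line to a relationship that is actually exponential, and the regression still hands back a tidy slope and a respectable-looking R², with nothing in the output warning you that the functional form is wrong unless you bother to look at the residuals.

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(0, 5, 50)
y = np.exp(x) + rng.normal(scale=5, size=x.size)  # the true relationship is exponential

# Ordinary least squares fit of a straight line, y = a*x + b
a, b = np.polyfit(x, y, 1)
y_hat = a * x + b
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"slope={a:.1f}, intercept={b:.1f}, R^2={r_squared:.2f}")  # R^2 lands around 0.7-0.8
# The line "fits" by the numbers but systematically misses the curve at both
# ends -- the analyst picked the wrong functional form, and the regression
# output won't complain unless they check the residuals.
```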

The Results Are Garbage in the First Place: The Telephone Problem

Garbage in, garbage out. It’s pretty much my motto. It’s especially true with public opinion polling. Let’s quickly mention two issues so you get a sense for the type of garbage being used in modern public opinion polls. No need to linger on this issue.

1) Self-selection bias – This has always been there. Who is likely to answer a telephone poll? Is there some inbuilt bias caused by those who decline to participate? Is the randomness of the sample destroyed if it takes 3,000 phone calls to get 1,000 poll takers?

2) The shift away from landlines – This is new. Currently, less than 50% of people still have landlines. Cell phones really screw up some of the assumptions behind the methodology of telephone polling. For example, if a pollster wanted to survey people in central Indiana about some local issue, it’s possible that I would get a phone call. I don’t live in central Indiana, and haven’t for over 5 years. What does it mean for the poll that I’m not in the expected cohort? Nothing good. Sure, it’s easy enough to ask where I live at the beginning. But what about the other way around? A pollster trying to survey northern Virginians about some local issue effectively disenfranchises me, because my area code is central Indiana. Further, cell phones make it really easy to block unknown numbers, resulting in even fewer “hits” per phone call.

Knowing the Public: What Motivates Voting Behavior?

Now that I’ve thoroughly shattered your trust in the public opinion poll, let me shatter your trust in the people being polled. Let’s talk about a classic, genuinely rigorous social science study of beliefs and voting patterns.

“The Nature of Belief Systems in Mass Publics” (1964), Philip Converse. (His earlier work, The American Voter, is a good read, too.)

The interesting result of this study of people’s beliefs and voting habits is this:

There are 5 different types of voters:

  1. Ideological – Able to abstract their issue positions into larger conceptualizations (principles) and set those conceptualizations relative to other ideologies.
  2. Near Ideological – Have awareness of an ideological spectrum, but their positions don’t particularly rely on an ideology.
  3. Group Interest – Good ol’ identity politics. I’m black therefore I vote Democrat.
  4. Nature of the Times – Something bad happened in the world when Republicans were in power so I’m voting Democrat.
  5. No Issue Content – I vote because…well… argle bargle, incoherent rambling, no making sense. Seriously, this is the category where the pollster couldn’t make any sense of their motivations for their beliefs.

There must be a bunch of people in groups 1 and 2, a ton in group 3, and a smaller number in groups 4 and 5, right? That would be the sort of society I want to live in.

Sorry to disappoint.

Group 1 (Ideologues) – 2.5%

Group 2 (Near Ideologues) – 9%

Group 3 (Identititarians) – 42%

Group 4 (Idiots who can rationalize their opinions) – 24%

Group 5 (Idiots who can’t even coherently explain the reason for their opinions to a pollster) – 22.5%

This was taken in the early 1960s. Wanna bet it’s even worse today? Identity politics wins because that’s how a plurality of people think. Principals over principles is a thing because 42% of people care about principals and 11.5% (generously) care about principles.

The sickening part is that groups 4 and 5 vote. (Table 1 of the study shows the percentages for the sample as a whole and for likely voters, with only marginal changes to the percentages.)

I could type more about the horrifying prospects of society based on this study, but I think it’s more impactful to let the data sink in. 89% of people base their worldview/politics/beliefs on something other than a set of principles/ethics/morals. Almost 50% have blatantly idiotic reasons for holding their opinions.

As a final note, 35% of respondents randomly varied across opposing positions on issues in successive interviews. There wasn’t a trend in these changes, which led the researcher to conclude that these people simply couldn’t come to the same opinion two interviews in a row.

Quick Takeaways from the Series of Articles

  • The media is untrustworthy, and not just in the obviously biased ways
  • Gell-Mann amnesia is real
  • Science journalism is neither about science nor is it good journalism
  • Any conclusion drawn from social science should be viewed with great skepticism
  • Anything being pushed on majoritarian or poll-tested grounds is probably shit
  • By thinking in terms of principles, you’ve elevated yourself into rarified air. Most people struggle to even rationalize their opinions.