Part 1: How public opinion polls lie,
either intentionally or by accident.
"There are ... lies, damn lies, and
Most results announced by public opinion polling agencies in recent years are quite meaningful, easy to understand, helpful, and of known accuracy. Most are simple attempts to assess the public's opinion on one or more topics. They are particularly useful to politicians, who often gear their voting decisions to match the majority opinion of the public, the majority opinion of the voters within their party, or whatever opinion they feel will maximize their chance of being re-elected. Polling data is also very useful to advocacy groups, particularly if it shows the responses of persons of different ages, educational attainments, genders, political affiliations, etc. Advocacy groups often use polling data to design campaigns that influence public opinion more effectively over time.
However, when the topic of a poll is very controversial, or the stakes are very high, some polls may end up with distorted numbers. Sometimes the polling agency's goal is not to estimate public opinion accurately, but to convince the public to change its mind, or to convince movers and shakers that public opinion is very different from what it really is. On the other hand, distorted results can arise from the designers' carelessness, lack of knowledge, or even random chance during the polling design process.
Numerous U.S. national and state polls have been conducted on same-sex marriage (SSM) by secular, political, and religious polling groups. Some have been sponsored by lesbian, gay, bisexual, and transgender (LGBT) groups; others by religious or secular groups. There are often great variations in the results -- to the point that some wonder whether fraud has taken place.
Some "cooking" of the results may have happened. For example, we have noticed one political polling group that asks their customers what their goals are, and then designs a poll and a subsequent explanation of the data to meet those goals. However, we suspect that fraud may not be the most common cause of differences among polls. Sometimes, it can be the result of carelessness, ignorance and/or chance.
The following techniques can warp polling results; we should all be aware of them:
The exact question asked: For example, consider the following questions:
"Should marriage be redefined to allow homosexual couples to marry?" Even though the rules regarding who can marry and who cannot have been redefined many times in the history of the United States, the term "redefine marriage" has a very negative connotation to many voters.
"Do you believe that marriage should be between one man and one woman?" The respondent might answer the question differently depending upon whether they are answering with regard to themselves only, or their family, or their faith group, or the entire country.
"Do you believe that same-sex couples should be able to marry?" This is relatively neutral way to ask the question.
"Do you believe that loving, committed same-sex couples should be allowed to marry?" This phrasing of the question might bias results towards an positive answer.
"Should same-sex couples have the right to marry?" Human rights are great importance to many Americans. This question might give the most positive responses of all.
It would be fascinating to see a poll that randomly selects a question on SSM from a group of, say, five alternatives like the ones above, and then reports on how the responses differ among the questions asked. We have never seen such a poll. They probably don't exist, because they would reveal to the public how much the exact wording of the question does matter. That might well lower the credibility of the entire polling industry in the public's eyes.
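The split-sample design described above can be sketched in a few lines of Python. This is a hypothetical illustration, not an existing poll: the question wordings are the five examples from this article, and the random assignment is the only mechanism being demonstrated.

```python
import random

# Hypothetical split-sample design (a sketch, not a real poll): each
# respondent is randomly assigned one of several question wordings, so any
# systematic difference in the answers can be attributed to the wording.
wordings = [
    "Should marriage be redefined to allow homosexual couples to marry?",
    "Do you believe that marriage should be between one man and one woman?",
    "Do you believe that same-sex couples should be able to marry?",
    "Do you believe that loving, committed same-sex couples "
    "should be allowed to marry?",
    "Should same-sex couples have the right to marry?",
]

def assign_wording(rng: random.Random) -> str:
    """Pick one wording uniformly at random for a single respondent."""
    return rng.choice(wordings)

# Assign wordings to ten simulated respondents (seeded for repeatability).
rng = random.Random(42)
sample = [assign_wording(rng) for _ in range(10)]
```

Grouping the responses by the wording each respondent received would then show directly how much the exact phrasing shifts the results.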
Questions asked prior to the question on SSM can set up the respondent to answer in a desired way. See the two examples below.
Phoning only land lines, or only cell phones, or some ratio of the two that does not match actual usage, can result in an entirely different age distribution of respondents. This may shift the end results: young adults, on average, tend to be far more supportive of SSM than the elderly, and they also tend to make greater use of cell phones.
Simply phoning land lines during normal working hours on weekdays will shift the age distribution towards older, retired adults, and thus shift the end results without the bias being too obvious.
Missing data: Some polls are reported without the number of respondents or the margin of error. 2,3,4 This makes the data essentially meaningless. See the discussion below.
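Why does withholding the number of respondents matter so much? The margin of error depends directly on it. The following minimal sketch uses the standard normal approximation for a proportion at 95% confidence; the numbers are illustrative, not from any poll cited here.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a reported proportion p based on
    n respondents, at 95% confidence (z = 1.96), using the normal
    approximation z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll reporting 52% support:
for n in (100, 1000, 10000):
    moe = margin_of_error(0.52, n)
    print(f"n = {n:>5}: +/- {moe * 100:.1f} percentage points")
```

With 100 respondents the margin is roughly +/- 10 points, so "52% support" could just as easily be 42% or 62%; with 1,000 respondents it narrows to about +/- 3 points. Without knowing n, a reader cannot tell which situation applies.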
A polling group can simply propose many alternatives to the respondents. For example, ask them which arrangement they would prefer:
For same-sex couples to be allowed to marry, or
To enter into a civil union with all of the benefits, protections, and responsibilities of marriage except that they can't refer to their relationship as a marriage, or
To enter into a domestic partnership with some of the benefits of marriage, or
No state recognition of the relationship at all.
Giving many options would almost certainly result in only a minority of respondents favoring same-sex marriage. The polling group could then accurately, but deceptively, describe the poll as indicating that most adults oppose SSM. What they don't reveal is that each of the options is opposed by most of the respondents.
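The arithmetic behind this trick can be shown with made-up numbers. The response shares below are purely hypothetical, chosen only to illustrate how splitting support across four options lets every single option be "opposed" by a majority, even when SSM draws the largest share.

```python
# Hypothetical response shares (not real polling data) for the four
# arrangements listed above. They sum to 1.0.
responses = {
    "same-sex marriage": 0.40,
    "civil union": 0.30,
    "domestic partnership": 0.20,
    "no recognition": 0.10,
}

# For each option, everyone who picked a different option can be
# counted as "opposing" it.
for option, share in responses.items():
    opposed = 1 - share
    print(f"{option}: {share:.0%} chose it, {opposed:.0%} chose another option")
```

Here SSM wins a 40% plurality, yet "60% oppose same-sex marriage" is a technically accurate headline, and the same majority-opposed claim could be made about every other option too.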
One example of bias in the design of a public opinion poll:
The first example of an apparently bad design is a poll commissioned by the Family Research Council (FRC) during 2014-MAY-01 to 04. The FRC is a conservative Christian para-church organization, which has been designated by the Southern Poverty Law Center as an anti-gay hate group. 5
In order to understand the problems within this survey, it is first necessary to discuss how Plan B and other forms of emergency contraception (EC) work -- as understood by both scientists and by many pro-life advocates. EC is taken after unprotected sex in order to avoid becoming pregnant. The medication is often called by the misleading term "morning after pill"; it should be taken as soon as possible, although it can be taken later than the morning after.
At first, researchers thought that it might work in some combination of three ways:
By suppressing or delaying ovulation. This is of little or no concern for most adults because one is only dealing with an ovum -- an egg -- at the time. A human egg is not a person.
By preventing conception if ovulation had occurred. This is also of little or no concern because preventing conception by keeping the ovum and spermatozoa apart is what condoms do.
By preventing implantation of the blastocyst -- the early embryo that develops from the fertilized ovum -- in the wall of the uterus. If EC works in this way, then it raises a massive concern among many pro-life adults and many others. They generally believe that pregnancy begins during the process of conception, and that the resulting fertilized ovum is a human person. To them, taking a pill that prevents implantation in the uterus is the equivalent of having an abortion, which in turn is equivalent to murdering the blastocyst -- in their belief, a human being.