This is a look at an example of survey research bias, the survey tactics and methodology behind it, and how they can affect your brand perception. I’ve tried to write it without political judgment or taking sides.

Have you ever seen this type of poll (below)? I received this one a few months ago, and then again recently in an email from the Republican National Committee. Just a quick look shows key problems in the survey’s execution:

[Image: the one-question poll email]

Simply based on what I’m seeing in my inbox and on social media, this kind of survey is becoming more popular: one question, multiple choice, I only need 15 seconds of your time…

I don’t know the ultimate objective of this particular survey (I’ll get to that point later), but I find three big problems with a survey constructed this way:

  1. Researcher bias
  2. Respondent bias
  3. Inconsistent messaging

In this post I examine all three of these problems, with examples and suggested quick fixes for each. For more information on how our Networked Survey™ technology helps limit researcher and respondent bias, take a look at Agreeable Research solutions.

Addressing researcher bias

Researcher bias comes in many forms, but at a high level it occurs when survey creators knowingly or unknowingly influence the answers respondents provide (through survey design or methodology), or steer the subsequent analysis of the survey data toward desired results.

In short, it’s how question-askers influence question-takers, or how survey data analysts guide the creation of insights.

Looking at this example, if respondents are given only four options to rate the president’s performance—Great, Good, Okay or Other—the results will be heavily skewed toward the questioner’s apparent desired result (i.e., that the president’s job performance so far has been “Okay” or better). Even if enough respondents answer “Other” and explain why, the positively weighted default answers reveal the result this survey is designed to produce.

This issue is known as “wording bias”: the words or options provided shape respondents’ answers and, ultimately, the survey results. It’s therefore difficult to treat the results of a survey like this as wholly accurate, or as representative of the audience being surveyed.

QUICK FIX: The survey would be better constructed with a balanced range of answers, such as a Likert-type scale where 1 = Poor and 5 = Great. This limits the influence of the survey language on respondents’ answers, and the survey creator can still allow participants to submit verbatim answers.
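
To make that fix concrete, here is a minimal sketch of how a balanced Likert-type question might be tallied. It is purely illustrative: the intermediate labels and the sample responses are invented, since only the 1 = Poor and 5 = Great endpoints appear above.

    from collections import Counter

    # A balanced Likert-type scale, unlike the email's Great/Good/Okay/Other
    # set, offers as many negative options as positive ones. Only 1 = Poor
    # and 5 = Great come from the quick fix; the middle labels are assumptions.
    likert_labels = {1: "Poor", 2: "Fair", 3: "Okay", 4: "Good", 5: "Great"}

    # Hypothetical responses, invented purely for illustration.
    responses = [5, 4, 2, 3, 1, 4, 5, 2, 3, 3]

    counts = Counter(responses)
    mean_rating = sum(responses) / len(responses)

    # Report the full distribution rather than a single skewed headline number.
    for value in sorted(likert_labels):
        print(f"{value} ({likert_labels[value]}): {counts.get(value, 0)} responses")
    print(f"Mean rating: {mean_rating:.1f} out of 5")

Reporting the full distribution alongside the mean keeps any single wording choice from dominating the headline result.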

Addressing respondent bias

When thinking ahead to the ultimate value of this survey’s data, there are two major considerations on the side of the respondent (question taker). The first is the respondent pool itself. The email I received begins with this message: “The President has asked us to reach out to some of our top supporters for a one-question poll, and as one of our best, you’ve been chosen to participate.”

Again, no political judgments, but the only reason I got this email is that I’m on the RNC email list. That fact in itself does not determine my level of support for the organization (remember this when I discuss the objectives of the survey).

However, the email shows that the RNC wants to measure the president’s job performance rating only among its top supporters. This is not necessarily an issue, so long as the results are accurately positioned as such.

[Image: the “Official Presidential Job Performance Poll” landing page]

But when you click on any answer in the email, you’re taken to a page titled “Official Presidential Job Performance Poll” (pictured). This title insinuates a level of authority, definitiveness and objectivity—it is “official,” after all—that’s inconsistent with the limited respondent pool.

While this issue can be addressed with better messaging and positioning, the issue of “sponsor bias” cannot. Sponsor bias occurs when respondents know (or think they know) who’s asking the survey questions. This often results in respondents providing answers that are skewed by their feelings toward the survey sponsor.

Since I received this email from the RNC, and it ends with the sentence, “We’ll be sure to pass it along to President Trump,” my responses may be influenced by my sentiment toward both the RNC and the president.

QUICK FIX: If continuing with this methodology, position the survey and its results as a study of RNC supporters. If looking to create an “official” performance rating, work with a third party to administer the survey anonymously so respondents don’t know who’s gathering the information.

Addressing inconsistent survey messaging

As I mentioned, the email I received begins with, “The President has asked us to reach out to some of our top supporters…” and ends with, “We’ll be sure to pass it along to President Trump.” All of this leads me to believe the RNC is working to better inform and provide research to the president.

However, the footer of the email (pictured) contradicts this thought entirely, stating this email is “Not authorized by any candidate or candidate’s committee.”

[Image: the email footer disclaimer]

For context, it should be noted that on Inauguration Day, President Trump filed the paperwork to become an official candidate for re-election. So it’s all the more confusing that when I reply to the email, my message is directed to “donations@donaldtrump.com.”

If the RNC’s email is not authorized by any candidate or candidate’s committee, then why is the president requesting this survey outreach, and how is the RNC authorized to direct funds to “Donald J. Trump for President”? This could be a semantic argument along political, legal or regulatory lines. Regardless, these points of inconsistency would make me suspicious as to the motives of the survey issuer, the objectives of the survey, and any findings coming out of the research.

QUICK FIX: Draft clearer language as to the motivations and objectives of the survey research, what it will be used for, and how its results will be shared. In addition, ensure coordination between the sponsor of the survey, the email communications and the response mechanisms.

The objectives of the survey

Many of the issues I brought up in this post can be addressed or cleared up by determining the ultimate objectives of the survey itself.

  • The “wording bias” could be a deliberate tactic to generate a study with more favorable results to share within the Republican Party.
  • The limited respondent pool could be specifically used for a survey designed to provide insights into RNC supporters.
  • The “sponsor bias” could be a concerted effort by the RNC to be more transparent in its data gathering initiatives.

All of these are completely valid.

One consideration I didn’t address is the valuable information this kind of survey would provide the RNC about its email list. Remember, the only reason I got this email is that I’m on the RNC email list; beyond that, the RNC likely had very little information on my level of support for the organization or the president.

By having respondents select among the options—Great, Good, Okay, Other—the RNC gets the members of its email list to self-select into categories based on their approval of the president’s job performance. With this information, the RNC can target email communications to each category more effectively, and even use these email addresses to create custom audiences on Facebook and Twitter for better ad targeting.
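
As a rough sketch of how that self-selection could translate into targeting segments, consider the following; the addresses and answers are hypothetical, and this is not a description of the RNC’s actual tooling.

    from collections import defaultdict

    # Hypothetical (email address, selected answer) pairs from the poll.
    poll_responses = [
        ("alice@example.com", "Great"),
        ("bob@example.com", "Okay"),
        ("carol@example.com", "Other"),
        ("dave@example.com", "Good"),
    ]

    # Bucket each address by its answer; each bucket becomes a segment
    # for tailored email messaging or a custom ad audience.
    segments = defaultdict(list)
    for email, answer in poll_responses:
        segments[answer].append(email)

    for answer, emails in sorted(segments.items()):
        print(f"{answer}: {len(emails)} address(es)")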

If this is the ultimate objective of the survey outreach, then the RNC will most likely gain the kind of data and insights it’s looking for. If not, the RNC should reevaluate its research tactics and methodology to produce more accurate and defensible findings.
