3 Consumer Insights Lessons From Recent Political Polling Failures

Last year was a bad one for the polling profession. While economists, along with weather forecasters, aren’t expected to be right very often, pollsters are supposed to be able to predict election outcomes. The two big “oops” stories of last year, Brexit and the election of Donald Trump, created much hand-wringing and soul-searching among pollsters – along with, of course, the even more extreme hand-wringing among EU and Clinton fans.

The conventional wisdom is that standard polling protocols produced the wrong results for two reasons: They failed to weight the data properly, and – more importantly – they failed to recognize and account for people’s reluctance to admit their true voting intentions. While I don’t necessarily disagree with these two observations, there are important lessons to be learned beyond them.

In the case of the US election, we can study the one poll that got it right. The outlier, ridiculed every single day until it was proved to be correct, was the LA Times/USC Dornsife Daybreak poll.

The Daybreak poll took a very different approach to sampling and to what they asked of respondents. Let’s take a look at how it differed, and why this might have worked so well.

First, rather than finding a fresh group of people for each survey and applying careful weighting schemes to make those samples comparable, the LA Times/USC survey identified a single group of people whose attitudes and intentions it tracked over time. As with any longitudinal panel, once you’ve satisfied yourself that you have a truly representative group, you eliminate the variability that comes from matching samples across subsequent research waves.

But more importantly, with longitudinal samples you can delve deep and identify what’s changing, among whom, and why. Typical tracking data will tell you that you’re up two points or down three points. Longitudinal research also reveals the churn beneath that net figure – perhaps 10 to 12% of people have shifted in one direction, offset by 8 to 14% who have shifted in the opposite direction. The ability to identify and diagnose these underlying changes is what makes longitudinal panel data so powerful – whether used for polling or to understand how advertising may be influencing consumer behavior.
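To see why this matters, here is a minimal sketch (with invented numbers, not actual Daybreak data) of how individual-level panel records separate net movement from the gross churn underneath it:

```python
# Minimal sketch with invented numbers: the same panelists measured in two
# waves. Independent samples would only reveal the net figure; a panel
# exposes the offsetting individual shifts beneath it.
wave1 = {"p01": "A", "p02": "B", "p03": "A", "p04": "B", "p05": "A"}
wave2 = {"p01": "B", "p02": "B", "p03": "A", "p04": "A", "p05": "A"}

to_b = sum(1 for pid in wave1 if wave1[pid] == "A" and wave2[pid] == "B")
to_a = sum(1 for pid in wave1 if wave1[pid] == "B" and wave2[pid] == "A")
n = len(wave1)

print(f"Net change for A: {(to_a - to_b) / n:+.0%}")  # +0% - looks like nothing happened
print(f"Gross churn: {(to_a + to_b) / n:.0%}")        # 40% of panelists actually moved
```

Here the topline is flat, yet 40% of the panel changed sides – exactly the kind of movement that wave-over-wave tracking with fresh samples can never attribute to individuals.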

The second difference between the LA Times/USC Daybreak poll and the others is in how the questions were asked. Instead of being asked how likely they were to vote and whom they planned to vote for, Daybreak respondents were asked the same questions week after week: What is the percent likelihood that you’ll vote for Hillary Clinton? What’s the percent likelihood that you’ll vote for Donald Trump? What’s the percent likelihood that you’ll vote at all? By analyzing how these numbers shifted at the individual level, the pollsters were able to track shifts in candidate preference very accurately over time and – ultimately – to predict the election outcome.
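As a rough illustration of how such answers roll up into a forecast, here is a minimal sketch (hypothetical respondents and numbers; the Daybreak poll’s actual weighting was more elaborate): each candidate’s expected share is the average of individual candidate likelihoods, weighted by each person’s likelihood of voting at all.

```python
# Hypothetical respondents: (percent likelihood of voting,
# of voting for Clinton, of voting for Trump).
respondents = [
    (90, 80, 10),
    (60, 20, 70),
    (100, 50, 50),
]

turnout_weight = sum(v for v, _, _ in respondents)
clinton = sum(v * c for v, c, _ in respondents) / turnout_weight
trump = sum(v * t for v, _, t in respondents) / turnout_weight

print(f"Clinton: {clinton:.1f}%  Trump: {trump:.1f}%")  # Clinton: 53.6%  Trump: 40.4%
```

Because each person contributes a probability rather than a yes/no answer, small week-to-week movements in individual confidence show up in the topline long before anyone formally “switches” candidates.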

When it comes to advertising research, the obvious questions that typical tracking research might ask include whether the respondent has seen any ads for brand X and whether they are considering buying the brand. The more nuanced, longitudinal approach is to identify those who actually remember engaging with specific ads, and to compare changes in brand predispositions among those who’ve seen the ads with the changes among those who haven’t. A longitudinal design overlaid with the right questioning provides an unparalleled means of isolating and illuminating the whys behind subtle shifts in attitudes and behavior.
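A minimal sketch of that comparison, using hypothetical panel records and an invented 0–10 purchase-consideration score (not a Communicus protocol), might look like this:

```python
# Hypothetical panel records: (remembers engaging with the ads,
# consideration before the campaign, consideration after), scored 0-10.
panel = [
    (True, 5, 7),
    (True, 6, 6),
    (False, 5, 5),
    (False, 6, 7),
]

def avg_change(records):
    """Average within-person change in consideration."""
    return sum(after - before for _, before, after in records) / len(records)

engaged = [r for r in panel if r[0]]
not_engaged = [r for r in panel if not r[0]]

# Difference-in-differences read: change among ad-engagers minus the
# background change among everyone else.
lift = avg_change(engaged) - avg_change(not_engaged)
print(f"Ad-attributable lift in consideration: {lift:+.2f} points")  # +0.50
```

The unexposed group’s movement acts as the baseline, so seasonality, pricing, and other brand-wide forces are netted out of the estimate.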

There’s a third difference between the LA Times/USC poll and all the rest: The producers of this data weren’t afraid to state their findings clearly and unequivocally. It’s not easy to be the one voice that sees a different reality and has the courage of its convictions to stand up and say what it believes. We salute the team that produced and publicized these findings. From experience, we know it’s not always easy to be the one source that uses a different methodology and develops insights that are unique and newsworthy – insights that can be discounted by those who’ve always looked at things in more conventional ways.

For the Communicus team, the three lessons are clear:

  1. Appreciate the unique power of longitudinal research.
  2. Develop questioning protocols that are highly accurate when used longitudinally.
  3. Be bold and unafraid to proclaim with confidence the insights that emerge.

 

Photo by Flickr user Vox Efx, licensed under Creative Commons

 

How's Your Advertising Working?

Author: Jeri Smith

After joining the firm in 1992, Ms. Smith was named CEO of Communicus in 2009. She provides research-based consulting and helps clients better understand the effects of their advertising campaigns. After analyzing how effectively a client’s marketing communications platform is working, Smith identifies strategies that will produce better returns on a brand’s marketing communications investments.
