
What’s in a number? The Reuters poll on Trump’s immigration ban (and understanding polls)

The Reuters story claiming that half of Americans support Trump’s executive order on immigration is flawed, but the headline hides its weakness.

I’ve been skeptical of news reports about political polling for most of my adult life. That skepticism was triggered Tuesday by a Reuters-Ipsos poll: Trump’s travel ban polarizes America.

The Jan. 30-31 poll found that 49 percent of American adults said they either “strongly” or “somewhat” agreed with Trump’s order, while 41 percent “strongly” or “somewhat” disagreed and another 10 percent said they didn’t know.

Really? Really?

It didn’t take long to discover that my initial skepticism was valid.

Do half of Americans really support Trump’s executive order banning people from seven Muslim-majority countries?

Maybe. Maybe not.

It’s not possible to tell from this poll.

I kid you not.

Let’s begin with the fact that 44% of Americans are neither Republican nor Democrat in their political affiliation; these independents account for only 12% of this survey sample. The sample is overly partisan. Then there’s the use of a questionable “measure of accuracy” (credibility interval) for this online poll. The sad fact is that this poll/news story does more to stoke division than it does to explain our differences.

Let’s delve into the data (not what the journalists said about the data).

First, 14% of poll respondents said that they were unaware of the executive order. If you’re unaware, shouldn’t your answers about the EO be tossed out the window?

But when we get to the focus of the story, question TM1139Y17 (number 10), only 8% said that they were unfamiliar with the executive order. How can 6% of the survey respondents be simultaneously “unaware” and yet “familiar”? (Yes, I’m assuming the 8% is a subset of the 14%.)

In addition, 26% of the 1,201 respondents had “heard of it but [did] not know any details.”

Summary: one in three of those surveyed knew nothing about the executive order, and yet their “opinions” form the basis of the news story’s headline and lede.
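
A quick back-of-the-envelope check, using the percentages reported in the topline (the arithmetic is mine, not Ipsos’):

```python
# Rough respondent counts implied by the reported percentages;
# my arithmetic, not figures published by Ipsos.
n = 1201  # total respondents

unaware = round(0.14 * n)      # said they were unaware of the order
unfamiliar = round(0.08 * n)   # "unfamiliar" on question TM1139Y17
no_details = round(0.26 * n)   # "heard of it but do not know any details"

print(unaware, unfamiliar, unaware - unfamiliar)  # 168 96 72: the puzzling 6%
print((unfamiliar + no_details) / n)              # ~0.34, roughly 1 in 3
```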

Here’s that question (number 11):

Do you agree or disagree with the Executive Order that President Trump signed blocking refugees and banning people from seven Muslim majority countries from entering the U.S.?

And here’s the response:

[Chart: Ipsos poll responses to the executive order question]

Second, the poll posed a broader question about immigration:

Which of the following is closer to your opinion: Banning people from Muslim countries is necessary to prevent terrorism or The United States should continue to take in immigrants and refugees?

No news here: 43% support a ban, 44% do not support a ban, and 14% did not respond.

[Chart: Ipsos poll responses to the ban question]

Fewer people say that they support a ban to prevent terrorism than support the executive order. 

That cognitive dissonance is jarring.

But the answers are revealing in another way.

This is an overly partisan sample, where the respondent ratio does not represent the political dynamics of the country; the opinions here demonstrate an extreme partisan divide. If the relative handful of independents in this poll accurately represent the 44% of Americans who told Gallup that they are independent voters, then this is simply a story of partisanship.
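
For what it’s worth, pollsters have a standard fix for a lopsided sample: post-stratification weighting. Here’s a minimal sketch. The 44% independent share is Gallup’s, and the 12% comes from this poll; the Republican and Democratic shares below are illustrative placeholders, not numbers from the Ipsos topline.

```python
# Post-stratification sketch: weight respondents so the sample's party
# mix matches the population's. The 44% independent figure is Gallup's
# and 12% is from the topline; the R/D splits are placeholders.
population_share = {"Rep": 0.28, "Dem": 0.28, "Ind": 0.44}
sample_share = {"Rep": 0.44, "Dem": 0.44, "Ind": 0.12}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # each independent counts ~3.7x; each partisan ~0.64x
```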

News flash! Republicans think the GOP president did the right thing; Democrats do not. Republicans think America should ban Muslim immigrants; Democrats do not. 

Do you really think that lede would have generated clicks and shares and likes? Me, neither.

Political polls, and how they are reported, got a black eye in June in the UK (the Brexit referendum) and in November in the US (the presidential election), when election results and public opinion polling appeared to be at odds. Jon Cohen, formerly at The Washington Post and now at SurveyMonkey, said in late 2016 that political polling is “facing a moment of reckoning.”

This poll, and the Reuters article pushing it, suggests that neither the pollsters nor the news service agrees.

 

Why does this matter?

Because, group polarization. 

Social psychologists have determined that discussions among like-minded individuals (the group) leads members to hold a more extreme position than that “indicated by the members’ predeliberation tendency” (pdf).

Group polarization is linked to confirmation bias, which is our unconscious tendency to seek out and interpret evidence that reinforces – not challenges – our current beliefs.

Thus Reuters’ flamboyant headline leads to sharing by those who agree, which further reinforces that belief in others when they see it.

Researchers at the University of Colorado have shown that “people often underestimate the effects of those conversations” on their opinions. Jessica Keating, a graduate student in CU’s psychology and neuroscience department, told reporters:

We argue that it’s basically impossible to do anything about the problem if you are not first aware of it. There is little incentive to seek out a diverse array of sources for information if you don’t know that having a less diverse array of sources is going to cause these effects…. If you only watch Fox News or you only watch MSNBC or you only talk to people who have similar ideas to you, our research suggests that you will, over time, become more extreme in your beliefs.

Ten years before Keating and her professors conducted their research, faculty at the University of Chicago Law School found similar results.

The result of deliberation was to produce extremism — even though deliberation consisted of a brief (15 minute) exchange of facts and opinions… The division between liberals and conservatives became much more pronounced… After deliberation, members of nearly all groups showed, in their post-deliberation statements, far more uniformity than they did before deliberation.

 

We are innumerate

In addition to being unaware of confirmation bias or how easily we might be influenced by a friend’s opinion, we’re not very good when it comes to understanding risk or probability or statistics. From David Spiegelhalter, the Winton professor for the public understanding of risk at Cambridge University:

We know that people think 30 out of 1,000 is bigger than 3 out of 100. We know that we make numbers look bigger by manipulating the denominator… I thought people would know that 3 out of 100 is equal to 3% is equal to 0.03. But they are very different!

… humans are very bad at understanding probability. Everyone finds it difficult, even I do. We just have to get better at it. We need to learn to spot when we are being manipulated.
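
Spiegelhalter’s point about denominators is easy to verify; a trivial check (mine, not his):

```python
# The three expressions are the same number, even though "30 out of
# 1,000" feels bigger than "3 out of 100".
print(30 / 1000)  # 0.03
print(3 / 100)    # 0.03
print(30 / 1000 == 3 / 100 == 0.03)  # True
```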

Journalists need to do better, too.

One of the best books to make this argument is A Mathematician Reads the Newspaper (1995) by John Allen Paulos. Pick up a copy (or two or three – give them to friends and family).

Being able to critically assess “the news” in all its glory is a foundational 21st century skill we must master if we are to retain a democratic system of government.

 

Finally, a word about “who benefits”

It’s the first question the inspector asks in a murder mystery and it should be the first question we ask when a “news story” jerks our chain.

So who is Reuters?

The Reuters news agency was established in 1851 in London. In 2007-2008, the Thomson Corporation acquired its parent, the Reuters Group. At that time, Thomson controlled about 53% of the new company, Thomson Reuters, which required a waiver of the longstanding Reuters principle limiting maximum ownership by any one person or group to 15%.

As of March 3, 2016, The Woodbridge Company, a Canadian private holding company based in Toronto, owned approximately 59.6% of Thomson Reuters (pdf). Woodbridge is the primary investment vehicle for family members of the late Canadian newspaper mogul Roy Thomson.

 

What follows is a primer on polls.

1. What about this poll’s reliance on “credibility intervals”?

The American Association for Public Opinion Research has a stern warning about “online surveys and other types of nonprobability-based polls … that … measure the theoretical accuracy of nonprobability surveys.”

The public should not rely on the credibility interval in the same way that it can with the margin of sampling error. Moreover, the Association continues to recommend the use of probability based polling to measure the opinions of the general public (emphasis added).

Nevertheless, there is nothing in the Reuters article that tells the reader (or editor) that this story is based upon an online poll, a non-probability survey. To discover that, you have to search for the Ipsos news release (not linked in the Reuters article) and then click the sidebar link called “Topline.” Because “Topline” just screams “here are the poll details,” doesn’t it?

Again, from the American Association for Public Opinion Research:

Credibility intervals are explicitly dependent on underlying assumptions tied to the statistical model chosen for the study, whereas classical margins of sampling error depend only on sampling design (as well as implicit assumptions underlying the weighting adjustments)…

The credibility interval depends on the statistical model that the researcher chose for the study. If the underlying assumed model fails to hold, so too does the validity of the credibility interval… the underlying error associated with such polls remains a concern. Consequently, AAPOR urges caution when using credibility intervals or otherwise interpreting results from electoral polls using non-probability online panels (emphasis added).

We’re supposed to take at face value an unknown model, built with questionable methodology, that relies on a sample we know does not represent the make-up of the American public.

Really?
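
To make the distinction concrete, here’s a rough sketch of the two kinds of interval. The Beta-Binomial model below is a textbook choice for illustration only; Ipsos does not publish its model, which is exactly the problem.

```python
# Classical margin of sampling error vs. a model-based credibility
# interval. The Beta-Binomial model here is a textbook illustration;
# Ipsos does not disclose the model it actually used.
from math import sqrt

from scipy import stats

n, p = 1201, 0.49  # sample size and reported support for the order

# Classical 95% margin of error (assumes a probability sample)
moe = 1.96 * sqrt(p * (1 - p) / n)
print(f"classical MoE: +/-{moe:.3f}")  # ~ +/-0.028

# Bayesian 95% credibility interval with a uniform Beta(1, 1) prior
agree = round(n * p)
posterior = stats.beta(1 + agree, 1 + n - agree)
lo, hi = posterior.interval(0.95)
print(f"credibility interval: ({lo:.3f}, {hi:.3f})")  # ~(0.461, 0.518)

# The two look similar here, but the credibility interval is only as
# trustworthy as the model and prior the researcher chose.
```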

2. What makes poll results reliable (for anything other than clickbait)?

The relationship between poll results and accuracy is complex and rests on at least three factors under the pollster’s control.

The first is a public statement of reliability: sampling error, a function of how many people you talk to in your target population.

The second, sample composition: who you talk to. How was the sample selected, and how was it statistically adjusted, or “weighted,” to conform to the sociographics and demographics of the target population?

Finally, how well-designed are the questions?

A. Margin of Error

When I started writing about US politics in 2004, I chafed at how frequently news articles failed to include the margin of error (MoE). To a point, you can reduce the MoE in a poll by increasing the sample size, the number of people you talk to (face-to-face, by phone, or via an Internet poll). But diminishing marginal returns kick in quickly after a sample size of 1,000.

[Chart: margin of sampling error by sample size, via the American Association for Public Opinion Research]

What this means is that there isn’t a huge increase in poll accuracy even if you double the number of people from 2,500 to 5,000. So it’s really important for journalists and those of us who read (or listen) to reports about polls to understand the limits of sample size.

For a sample size of 1,000, the margin of error is +/-3% at the 95% confidence level, a measure of probability. This measure of accuracy means that if the pollster were to run the same survey 100 times, then about 95 times the results should fall within 3 percentage points of the true beliefs of the population.
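
You can check that claim yourself. The sketch below computes the standard formula and then simulates repeated surveys; the 49% “true” support figure is simply an assumption for the simulation.

```python
# Check the "+/-3% at 95% confidence for n = 1,000" claim: compute the
# standard formula, then simulate repeated surveys. The 49% "true"
# support figure is an assumption, not a known fact.
import random
from math import sqrt

n = 1000
true_p = 0.49

# Worst-case formula: MoE = 1.96 * sqrt(p * (1 - p) / n) with p = 0.5
moe = 1.96 * sqrt(0.25 / n)
print(f"MoE for n={n}: +/-{moe:.3f}")  # ~ +/-0.031

# Run 2,000 simulated surveys; count how often the sample result
# lands within the margin of error of the true value.
random.seed(1)
trials = 2_000
hits = sum(
    abs(sum(random.random() < true_p for _ in range(n)) / n - true_p) <= moe
    for _ in range(trials)
)
print(f"within MoE: {hits / trials:.1%}")  # ~95%
```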

In addition, the survey margin of error applies only to the sample as a whole. But most news stories delve into the differences between subgroups (Republicans versus Democrats, men versus women, etc.). Those subgroups have a much greater margin of error, as the chart above shows.

What does this mean?

If a sub-group sample size is 200, the MoE is +/-6.9%. That means any difference between group A and group B needs to be 13.8 percentage points in order for there to be a 95% chance that the two groups are truly different, that the difference isn’t the result of sampling error. But even then, there is a 5% chance that the two groups are the same.
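
The same arithmetic, sketched out. One hedge: summing the two margins (the 13.8% figure) is the conservative rule of thumb; the standard two-proportion test combines the errors in quadrature and yields a somewhat smaller threshold.

```python
# Subgroup margin of error, and two thresholds for calling a gap real.
from math import sqrt

n_sub = 200
moe_sub = 1.96 * sqrt(0.25 / n_sub)
print(f"subgroup MoE: +/-{moe_sub:.3f}")  # ~ +/-0.069

# Conservative rule of thumb: the gap must exceed the two margins summed.
print(f"sum of margins: {2 * moe_sub:.3f}")  # ~0.139, the 13.8% above

# Standard two-proportion test combines the errors in quadrature,
# giving a somewhat smaller (less conservative) threshold.
print(f"quadrature threshold: {1.96 * sqrt(2 * 0.25 / n_sub):.3f}")  # ~0.098
```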

In November, pollsters were reporting “confidence” in their projections (not their polls) that was considerably less than 95%. But that’s not what the headlines or the talking heads said. Boring!

B. Sample composition

In the universe of political opinion surveys, it’s unlikely we’ll see random probability sampling of the entire population. In other words, a survey designer doesn’t reach into the bucket of all Americans registered to vote, for example, and question the first 1,000 people pulled out of the proverbial hat.

Instead, survey designers usually craft a representative sample of adults. Often this is a representative sample of adults registered to vote.

These methods are proprietary. You won’t find them disclosed in a news article or the public statement of questions and responses (assuming that such a beast exists).

These methods are also riddled with potential potholes. For example, once upon a time, political pollsters used the telephone directory as their “bucket”. But that choice no longer comes close to providing a representative sample of the American electorate.

C. Question design

There are books devoted to the art and science of designing survey questions. The short advice: questions need to be phrased in such a way as not to “lead” the respondent to an answer. You’re more likely to find well-phrased questions in polls conducted by organizations like Gallup than in those conducted by a political campaign (or political party), which are used to trumpet a desired result.

Question order (framing) matters, too.

Other writing on political polls: Pew/WaPost headlines overstate partisan opinion over NSA eavesdropping

By Kathy E. Gill

Digital evangelist, speaker, writer, educator. Transplanted Southerner; teach newbies to ride motorcycles! @kegill
