Monday, May 05, 2008

How much does the Pollster matter for Trend?

[Figure: North Carolina — Obama vs. Clinton poll results since April 1 (red whiskers), the trend estimate using all polls (blue vertical line), and the estimates obtained when individual pollsters are excluded]

One of the things we think about a lot at Pollster.com is the quality of polling. Mark Blumenthal's post on the North Carolina poll demographics here is a great example of how much variability we see among polls, all trying to hit the same target population.

This issue is also raised by those who would like to exclude some polls from our trend estimates. If one "bad apple" spoils the barrel, then this is a serious issue for our efforts to estimate the state of the races here.

We've stuck to our principle of including all available polls without cherry-picking (to shift the fruit metaphor!), but we don't do that out of blind faith. Rather, we do it because the empirical evidence shows that the effects of single pollsters are generally small, certainly small compared to the other sources of uncertainty about the state of the race.

Here I take a look at this issue for North Carolina and Indiana.

There are four elements that affect how much a pollster influences our trend estimate (a rough sketch in code follows the four points below).

First, the pollster's results must be "different" from the trend we'd estimate without them. If a pollster happened to hit our trend dead on every time, their influence would reinforce our trend estimate, but not change it. So for a poll to affect the trend, it needs to be different from what we'd otherwise estimate.

Second, the pollster needs to produce results that are systematically different from the trend. If a pollster bounces around the trend, some high and some low, then the net effect is small, even if individual polls are rather far off the trend.

Since the trend is determined across all pollsters, these first two points are another way of saying that the pollster must differ from what other pollsters are getting.

Third, volume matters. In some states a single pollster accounts for a substantial proportion of all polling, while other pollsters contribute only a single poll. The former obviously have more potential influence than the latter. But a high volume of polls doesn't matter if they are consistently close to (and scattered around) the trend estimate based on other polling. The problem comes when the prolific pollster is also rather different from the others, and especially when few other pollsters are active in the state.

Fourth, polls late in the game can have more leverage on the "current" trend estimate. So a pollster that does several polls but only in the last week before election day can have more influence on the current estimate than they would if those polls were spread over the entire pre-election period. Again, such an effect is only visible if the late polls are different from other polling.
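
The first three elements are easy to check against raw data. Below is a minimal sketch in Python, assuming a hypothetical polls.csv with pollster, obama, and clinton columns (this is illustration, not our production code); it tabulates each pollster's volume and how far its average margin sits from everyone else's:

```python
# Rough check of elements one through three: how many polls has each
# pollster released, and how far does its average Obama-Clinton margin
# sit from the average of everyone else's polls?
# The file and column names are hypothetical, not Pollster.com's format.
import pandas as pd

polls = pd.read_csv("polls.csv")                     # hypothetical file
polls["margin"] = polls["obama"] - polls["clinton"]  # Obama minus Clinton

for name, own in polls.groupby("pollster"):
    others = polls.loc[polls["pollster"] != name, "margin"]
    print(f"{name:20s} n={len(own):2d}  "
          f"offset from others: {own['margin'].mean() - others.mean():+.1f}")
```

A large offset on a large n is the combination with the potential to move the trend; the fourth element, timing, depends on where those polls fall in the sequence.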

Having an effect on the trend could be a very good thing if the pollster is right while others are wrong. The problem is that there is no way to know a priori which pollster will be right THIS TIME. Experience this year demonstrates that a pollster's good day can be followed by a bad day, or that both can come on the same day.

It is also important to put these effects in perspective across all the polls we see in a race. Individual polls are highly variable. We often find polls spread plus or minus 5, 6, or even 7 points around our estimated trend for an individual candidate, and double that for the margin between two candidates. There is a lot of noise out there, and the whole point of our trend estimator is to extract the signal from that noise. Our estimator (especially the "standard" estimator I'm using here, as opposed to the "sensitive" estimator we also check) is designed to resist polls that are "way off" (i.e., outliers) while still following the common trend across polls. (I'm not going to go into the details of our local regression estimator here, except to say it is not a simple rolling average. Let's hold that for another day; the FAQ on this is coming.)
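
For readers who want to experiment, here is a minimal sketch of a robust local regression trend in that spirit. To be clear, this is not our estimator: statsmodels' lowess is a stand-in, its robustifying iterations do the outlier-resisting, and the file name, column names, and bandwidth are all hypothetical.

```python
# Minimal sketch of a robust local-regression trend over poll margins.
# NOT the Pollster.com estimator; statsmodels' lowess is a stand-in.
# File name, column names, and the frac bandwidth are hypothetical.
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

polls = pd.read_csv("polls.csv", parse_dates=["end_date"])
polls["margin"] = polls["obama"] - polls["clinton"]
days = polls["end_date"].map(pd.Timestamp.toordinal)  # dates as plain numbers

# frac sets the bandwidth (a larger frac behaves like our less sensitive
# "standard" estimator); it=3 reweights the fit to resist outlying polls.
trend = lowess(polls["margin"], days, frac=0.6, it=3)

print(f"current trend estimate: {trend[-1, 1]:+.1f}")  # endpoint of the fit
```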

So let's take a look at the North Carolina plot way up there at the top of this post. The horizontal axis is scaled to show the range of poll results we've seen in the state since April 1. This provides perspective on how much variation you see from poll to poll in the raw results.

The red "whiskers" at the bottom of the plot are the individual polls taken over this time. There is a bit more than a 25-point range in the Obama-Clinton margin during this period. Since the trends in the state have been relatively flat, only a little of this variation is due to "real change".

Our trend estimate based on all polls is the vertical blue line, which as of Monday afternoon is +8.6 points in Obama's favor.

How much do individual pollsters matter for this estimate? PPP has done the most polling in the state. If we take them out, the trend estimate drops to 7.0, a shift of 1.6 points on the difference (or an average of 0.8 points per candidate, moving in opposite directions, of course).

At the opposite extreme, removing Insider Advantage from our estimator produces a 10.7 point Obama lead, a shift of 2.1 points on the difference, or 1.05 points per candidate.

For most other pollsters, the effect is far smaller, even for relatively frequent pollsters such as SurveyUSA and ARG.

So the maximum effect of removing a single pollster is a shift between a 7.0 and a 10.7 point Obama lead. A shift of 3.7 points on the difference can matter in a close race, but it is relatively small compared to the variation we see in individual polls. Indeed, the four polls completed on 5/4 show a range of +3 to +10 for the Obama margin. (They average +7.25, compared to our trend estimate of +8.6.)
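
The exclusion exercise is easy to replicate. Continuing the hypothetical sketch above, the loop below drops one pollster at a time, re-fits the trend, and reports how far the endpoint moves:

```python
# Leave-one-pollster-out: re-fit the trend without each pollster and
# compare the endpoint to the all-polls estimate. Continues the
# hypothetical polls.csv / lowess sketch above.
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

def endpoint(df):
    """Endpoint (current value) of a robust lowess trend of margin over time."""
    days = df["end_date"].map(pd.Timestamp.toordinal)
    return lowess(df["margin"], days, frac=0.6, it=3)[-1, 1]

polls = pd.read_csv("polls.csv", parse_dates=["end_date"])
polls["margin"] = polls["obama"] - polls["clinton"]
baseline = endpoint(polls)

for name in polls["pollster"].unique():
    est = endpoint(polls[polls["pollster"] != name])
    print(f"without {name:20s} {est:+.1f}  (shift {est - baseline:+.1f})")
```

Whether a shift of a point or two matters is then judged against the much wider scatter of the individual polls, which is exactly the comparison made above.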

There is less polling in Indiana, so we might expect more influence since there are fewer polls to stabilize the trend estimator.

[Figure: Indiana — Obama vs. Clinton poll results since April 1, with the trend estimate using all polls and the estimates obtained when individual pollsters are excluded]

Here the current estimate using all polls is -6.2, a lead for Clinton. The range of results we get from excluding pollsters runs from -4.1 (excluding SurveyUSA) to -8.7 (excluding Zogby). That is a bit larger than in North Carolina, as expected. But put this in the perspective of the range of raw poll results for Indiana, which is -16 to +5 in polls taken since April 1. The six latest polls as of Monday, all ending on 5/4, range from -12 to +2.


To sum up: which polls we include affects our results. That both has to be and should be the case. We WANT the data to matter, and of course it does. What we don't want is for individual polls to make such large differences that inclusion or exclusion decisions become critical. The results here show that we SHOULD be somewhat uncertain about the trend, since it depends on which individual pollsters are included. What is somewhat different in our approach at Pollster.com is that we want to emphasize this uncertainty and put it in perspective, rather than produce a single number and treat it as if it were "certain". That is why we always show the individual polls spread around our trend estimate in the charts. All estimates have uncertainty. We need to understand both the value of the estimate and the uncertainty inherent in it. Pollster effects are part of that story.

What is crucial, however, is that these effects on the trend estimate are small compared to the range of variability we see across individual polls. The goal of our trend estimator is to produce a better estimate than any single poll (or pollster) can provide. By that standard, pollster effects on the trend are modest.

Evaluating the accuracy of the polls is a different topic, one we'll take up on Wednesday.