Far be it from lil ole me to disagree with Nate Silver, the polling guru who has gained even more celebrity this cycle because he correctly foretold President Obama's victory.
But Silver, in his latest post, criticizes how internal numbers can often mislead the media -- and the candidate. Although that is true -- and it certainly was in Mitt Romney's case, as Silver points out -- the real story here is one that is general to the media (reporters should know which pollsters to trust) and specific to Nevada (very few pollsters should be trusted here).
I have found in covering politics for 26 years that there is a lot of bad polling out there, and it seems to be getting worse with the explosion of robopollsters and the underrepresentation of minorities. But there also seems to be a prevalent willingness among political reporters to publish polls, including internal numbers, without first grilling the leaker about the instrument -- or, better, demanding to see the whole survey to gauge its credibility, even if some of the results are off the record.
That's called, you know, reporting -- as opposed to stenography.
I was surely guilty in my earlier years of gullibly reporting polls, internal and otherwise, without taking the time to consider motivations and demand more information. But it should be a rule among political journalists not to simply publish numbers without asking at least a few basic things about question order and demographics.
Not all polls are created equal, nor are all internal numbers created equal. But some internal polling can be very useful, as it was two years ago.
In 2010, almost every public poll indicated the U.S. Senate race here was going Sharron Angle's way. Few thought Harry Reid had any chance to survive. Silver, who I thought was averaging in too much bad data (and I told him so), gave Angle a more than 80 percent chance of winning and predicted she would defeat Reid by 3 percentage points. She lost by 6.
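For readers wondering how a 3-point predicted margin becomes a "more than 80 percent" chance: under a simple normal-error assumption, it falls out of the math. A minimal sketch -- the 3.5-point error figure is my own round number, not anything from Silver's actual model:

```python
from math import erf, sqrt

def win_probability(margin: float, stderr: float) -> float:
    """P(true margin > 0), assuming forecast error is normally distributed."""
    return 0.5 * (1 + erf(margin / (stderr * sqrt(2))))

# A +3 predicted margin with roughly 3.5 points of forecast error
# works out to about the 80-percent-plus figure Silver published.
print(f"{win_probability(3.0, 3.5):.0%}")  # -> 80%
```

The point of the exercise: an "80 percent chance" is only as good as the margin estimate feeding it, and that margin came from the averaged polls.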
Nevada is notoriously difficult to poll, with three shifts of workers, many cell-phone-only voters and a burgeoning Latino population, many of whom don't speak great English. I thought Silver was relying on surveys that marginalized the Hispanic vote and had nothing close to an accurate model of what the turnout would be -- especially those conducted by a certain "newspaper," which were way off the mark.
Why did I think so? Because I analyzed the internals of most of the polling done here, and I was privy to polling from the Reid campaign by Mark Mellman that seemed to have the best model for what the electorate would look like.
And that, of course, is the key: Pollsters are only as good as the model they have built for Election Day. If they have the wrong mix of Democrats and Republicans or the wrong percentages of minorities, their polls are likely to be very poor predictors of what will happen.
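To see how much the turnout model matters, here is a minimal sketch -- the support levels and electorate shares are invented for illustration, not taken from any actual Nevada survey. The same raw responses, weighted under two different assumptions about the Latino share of the electorate, produce toplines nearly 3 points apart:

```python
# Illustration only: all support levels and turnout shares below are invented.

# Candidate support within each demographic group, from the raw sample.
support = {"white": 0.42, "latino": 0.68, "other": 0.55}

def topline(turnout_model: dict) -> float:
    """Weight group-level support by each group's assumed share of the electorate."""
    return sum(support[group] * share for group, share in turnout_model.items())

# A model that marginalizes the Latino vote...
low_latino = {"white": 0.75, "latino": 0.12, "other": 0.13}
# ...versus one closer to what the electorate actually looked like.
high_latino = {"white": 0.65, "latino": 0.22, "other": 0.13}

print(f"Low-Latino model:  {topline(low_latino):.1%}")   # 46.8% -- race looks lost
print(f"High-Latino model: {topline(high_latino):.1%}")  # 49.4% -- a dead heat
```

Same interviews, same answers, different election -- all depending on who the pollster believes will show up.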
In Nevada two years ago, only Mellman -- and perhaps Glen Bolger -- had models that made any sense. So Silver's mistake was averaging together garbage, which produces...garbage.
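Why averaging garbage yields garbage: averaging independent polls cancels their random sampling noise, but if they all share the same flawed turnout model, the shared bias survives the average intact. A toy demonstration, with invented bias and noise figures:

```python
import random

random.seed(0)

TRUE_RESULT = 0.503   # the candidate's actual share (invented for the example)
SHARED_BIAS = -0.025  # every poll underweights Latino turnout the same way

# Ten polls: identical systematic bias, independent sampling noise.
polls = [TRUE_RESULT + SHARED_BIAS + random.gauss(0, 0.015) for _ in range(10)]

# The noise largely averages out; the 2.5-point shared bias does not.
average = sum(polls) / len(polls)
print(f"Average of ten biased polls: {average:.1%}")
```

Averaging only helps when the polls' errors point in different directions. In Nevada in 2010, they mostly pointed the same way.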
There was a similar phenomenon this cycle in Nevada. Many pollsters had Nevada going to Obama, but few had the margin as large as the eventual 6 points -- except Mellman, who also foretold the U.S. Senate race being very, very close.
I think Silver is on the money when he writes today: "Some reporters make the mistake of assuming that information is valuable simply because it is private or proprietary. But the information that makes it to the reporter’s ears, or into his in-box, may be something that the campaign wants him to hear or see."
But maybe not for the reason Silver thinks. Yes, campaigns always release polling numbers for a reason -- usually either to parry a growing narrative they don't like or, more often, to help the candidate raise money.
Silver exaggerates when he continues: "The release of internal campaign polls is not that different from any other form of spin in this respect — but that is precisely the point. Internal numbers that a campaign releases to the public should be thought of less as scientific surveys and more as talking points."
Why can't they be both? They often are.
I think the Timesman is closest to the mark when he writes: "Nonetheless, the seeming inaccuracy of Mr. Romney’s internal polls ought to present a warning to future campaigns. The problems with internal polls may run deeper than the tendency for campaigns to report them to the public in a selective or manipulative way. The campaigns may also be fooling themselves."
It is clear that Romney's folks here (they were convinced he was within the margin of error and lost by 6) were deluded by their internal numbers. That was true nationally as well: Why else would Karl Rove have told folks that the other polls were overestimating President Obama's base loyalty (they weren't) and that Romney was likely to win Ohio (not so)?
But the real lessons about internal polls are these: For the candidates, hire someone who will truly measure where the race is. For the media, don't trust any poll, internal or otherwise, if the person giving you the results won't let you see the full survey or won't give you much information about it.