What do close polls actually tell us?
On Saturday, I reported here on the blog about Siena’s poll of the 48th Senate District. Patty Ritchie was up 4 points over Darrel Aubertine. My headline was “Ritchie up 4% in 48th Senate poll”. I also reported in the post that the poll’s margin of error was 4.7%, meaning the lead fell within the margin of error.
In response to the post, commenter notinthevillage wrote this:
If the confidence intervals overlap, as is the case here, nobody is in the lead. The reported means (47% and 43%) are no more likely to be the correct numbers than any other number within the confidence interval.
This has raised a couple of questions in the newsroom.
First, for statisticians: is this true? If a result is within the margin of error, does it really mean it’s as likely as any other outcome within the margin of error? Is there any statistical difference between Ritchie up 4% and Ritchie up 2% or Aubertine up 2%? Do those different outcomes tell us anything at all about the relative momentum in the race?
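For the statistically inclined, the arithmetic behind the question can be sketched. The assumptions here are mine, not Siena’s published methodology: that the reported 4.7% margin of error reflects a 95% confidence level, which implies the sample size, and that a “true tie” means 45–45 with 10% undecided. On those assumptions, here is how often a genuinely tied race would produce a 4-point Ritchie lead by sampling luck alone:

```python
from statistics import NormalDist
from math import sqrt

# Assumption: 95% confidence level behind the reported 4.7% margin of error.
moe = 0.047
z95 = NormalDist().inv_cdf(0.975)          # ~1.96
n = (z95 * 0.5 / moe) ** 2                 # worst-case p=0.5 gives n ~ 435

# The standard error of the *lead* (Ritchie minus Aubertine) is larger than
# the per-candidate margin of error, because the two shares move in
# opposite directions within the same sample.
p1, p2 = 0.47, 0.43
se_lead = sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

# How likely is a 4-point (or bigger) observed lead if the race is dead even?
prob = 1 - NormalDist(mu=0, sigma=se_lead).cdf(0.04)
print(f"implied sample size ~ {n:.0f}")
print(f"SE of the lead ~ {100 * se_lead:.1f} points")
print(f"P(observed lead >= 4 pts | true tie) ~ {prob:.2f}")
```

The result, roughly one chance in five, answers the question in the middle: a 4-point lead is meaningfully more suggestive than a 0-point one (the outcomes within the margin of error are not all equally likely), but it is far from conclusive.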
Nationwide, polls are most often reported results-first (and in the headline). Whether those results fall within the margin of error is usually noted within the first paragraph or two.
Siena itself headlines this poll “Ritchie has slim 4-point edge over Aubertine”. Obviously, Siena wants to make news. They wouldn’t if every poll within the margin of error was reported as “neck-and-neck” or “dead heat” or “tied”.
How would you word the headline?
Second, for everyone, what do you think of the newsworthiness of polls? Are they important snapshots of the race, or horserace reporting that distracts from the issues?
Tags: election10
Distracting and not really newsworthy.
I rank polls right up there (or should it be down there?) with how the stock market is doing.
Maybe add long range weather forecasts to the mix of useless news.
My problem with the polls is that I believe they are becoming less and less meaningful, with more and more people not answering their phones to be polled, and with some people influenced by what the polls are claiming. They have become a form of ads for some people.
I think reporting the actual results and the margin of error is a good way to do it.
Most people are used to those terms and understand the margin-of-error thing.
It’s down to the wire, and pollsters like to be accurate. (At least the successful ones are concerned with getting it right.)
It will be interesting to see who got what right.
Dunno about polls, but these photos tell us Gov.-elect Paladino is very confident:
http://wnymedia.net/smith/2010/10/carl-paladinos-halloween-drunktacular/
What all polls tell us is this: “Ignore me and make up your own mind.”
Tell Nancy Pelosi, “we deem the election to have already taken place, and she can go now”.
I always lie to pollsters. Doesn’t everyone?
I think the only true value in polls is the possibility they will increase voter turnout.
IMO until there is a substantial lead well outside the margin of error, who cares? And even then, who is getting polled? I know I hung up on at least 4 pollsters these last couple months, so who is taking the time to answer while trying to get supper ready, the kids’ homework done and chores finished?
Polls are an attempt to gauge status, true. But reporting on polls is desperation in a newsroom. Reporting on exit polls before the polls close should be criminalized.
In a former life as an insurance manager, we used to say that figures don’t lie…liars figure. You can make numbers say whatever you want them to say.
I don’t think polls are news, I think they affect outcomes in several ways, and I don’t think they’re very honest. Even if the pollsters have no bias (unlikely, IMHO), campaigns pick and choose which poll they’re going to report/support. The fact that polls have such different results is the best indicator of their (lack of) value.
They should not be allowed. Some people choose whether or not they’re going to vote based on reported poll results.
Maybe there should be a poll to determine how many people don’t vote because the polls convince them the race is over. Wonder what the margin of error would be.
Like economists, statisticians and people who use statistics for a living are a contentious lot. However, all would agree, I think, that your headline was wrong and Siena (like most polling firms) was being intentionally deceptive. Siena was gaming NCPR (and the rest of the press), because they need the press exposure to keep the funds streaming in.
All statisticians would agree that the best conclusion you can make is that Ritchie – Aubertine is a toss-up in the Siena Poll. Some statisticians would argue that, in addition, it is OK to conclude that Ritchie is somewhat more likely to be in the lead than Aubertine. However, there is also a reasonable chance that Aubertine is in the lead. This is in no way the same thing as saying that Ritchie has a slight lead (or even more nonsensical: ‘a slight lead that is within the margin of error’). This is not being picky – there is a fundamental logical difference between stating that someone ‘is slightly more likely to be in the lead’ than stating someone ‘is in a slight lead’. Note that others (including myself) think it is not even helpful to state that Ritchie ‘is slightly more likely to lead’ – it would be better to call it a toss-up and leave it at that. It’s not sexy, but it is correct.
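TomL’s “somewhat more likely to be in the lead” can be put into rough numbers with a quick simulation, again under assumptions of my own: a sample size of about 435 (implied by the 4.7% margin of error), the poll’s 47/43/10 split treated as raw counts, and a flat prior over the true shares.

```python
import random

random.seed(0)
n = 435
# Ritchie, Aubertine, other/undecided -- the poll split treated as counts.
counts = [round(n * s) for s in (0.47, 0.43, 0.10)]

def posterior_draw(counts):
    """One plausible set of true vote shares, given the observed counts.

    Draws from a Dirichlet posterior (flat prior) by normalizing
    independent Gamma draws.
    """
    g = [random.gammavariate(c + 1, 1) for c in counts]
    total = sum(g)
    return [x / total for x in g]

trials = 50_000
ritchie_leads = sum(1 for _ in range(trials)
                    if (d := posterior_draw(counts))[0] > d[1])
share = ritchie_leads / trials
print(f"P(Ritchie actually ahead) ~ {share:.2f}")
```

The answer comes out around four chances in five that Ritchie is really ahead, which matches TomL’s framing exactly: more likely leading than not, with a very real chance he isn’t.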
As for your second question, this year has been worse than ever for its focus on polls. One problem is the mysterious alchemy that seems to go into ‘likely-voter’ models. Worse is the fact that both parties and their allies now use polls as a method to rouse the supporters, demoralize the opponents, and create a fake ‘consensus’ around some issue so that voters will be convinced to go with the (constructed) majority. Look at the blizzard of polls – many from organizations that appear to be new to the business. I am hopeful that voters are becoming cynical about the whole thing. I am interested in issues, not ‘horse-races’ and fake data. More likely, it will just become another standard tool to influence the gullible.
I turn to reporting from NCPR to inform and enlighten me on matters that affect me and my life in the North Country. Your coverage of polls does neither. What does it mean to me, really, to hear that Aubertine or Ritchie may or may not be “in the lead” “down to the wire”? It is nonsense. What I need to know about Aubertine or Ritchie I can glean from their words or from other sources that I trust. What the polls say is trivial and has no influence on what I think of either of them, or on whether or not I vote, or how I vote.
Granted, your coverage of the polls may tell me something I didn’t know about statistics and perhaps you may even enlighten me further about how far removed the art of polling is from science, but otherwise I think polls are a very low value distraction and I think your time would be better spent on more substantial matters.
So, in other words, polls are useless to the electorate at large, and are only valuable as propaganda tools. About as meaningful as push polls.
Come to think of it, they are more or less asking, “Would you bother to vote if you knew most likely voters are going to vote for Candidate A?”
I sent the link to this post to my daughter, who is studying statistics in grad school. Her reply to me: “The comment by TomL is pretty good – it really is a toss-up.”
Polls are fodder for the 24/7 news cycle – they give reporters & commentators something “new” to report & discuss. The trouble is that most reporters and commentators do not understand statistics well enough to tell us what TomL and my daughter said. Given these comments, would NCPR be willing to lead the way to accurate reporting of poll results by boning up on the math?
I think the results are in. Most of us, according to the latest poll, would like you to promise to never report what polls are saying again. Never, ever again.