# Any landline is a source of bias
in-sightjournal · 5 months ago
Text
Ask A Genius 979: The Never-Book "Dumbass Genius"
Scott Douglas Jacobsen: A question from someone else: When is your book coming out? Rick Rosner: Yeah, well, I don’t have an autobiography coming out. I semi-abandoned that. It’s still there if somebody wants to pay me to write it. But before that book comes out, I’m writing this other book that’s a more fictionalized version of what it’s like to be competent in the world with a whole different…
View On WordPress
0 notes
theliberaltony · 5 years ago
Link
via Politics – FiveThirtyEight
Beware: Reading polls can be hazardous to your health. Symptoms include cherry-picking, overconfidence, falling for junky numbers and rushing to judgment. Thankfully, we have a cure. Building on an old checklist from former FiveThirtyEight political writer Harry Enten, here are some guidelines you should bear in mind when you’re interpreting political polling — in primary season and beyond.
What to watch for during the primaries
People who try to discredit early primary polls by pointing out that, say, Jeb Bush led early polls of the GOP field in 2016 are being disingenuous. Should these polls be treated with caution? Sure, but national primary polls conducted in the calendar year before the election are actually somewhat predictive of who the eventual nominee will be. Earlier this year, fellow FiveThirtyEight analyst Geoffrey Skelley looked at early primary polling since 1972 and found that candidates who polled better in the months before the primaries wound up doing better in the eventual primaries. In fact, those who averaged 35 percent or higher in the polls rarely lost the nomination.
High polling averages foreshadowed lots of primary votes
Candidates’ share of the national primary vote by average polling level in the first half of the year before the presidential primaries and polling average in the second half of that year, 1972-2016
| Poll avg. | First half: share who became nominee | First half: avg. primary vote share | Second half: share who became nominee | Second half: avg. primary vote share |
|---|---|---|---|---|
| 35%+ | 75% | 57% | 83% | 57% |
| 20%-35% | 36 | 27 | 25 | 25 |
| 10%-20% | 9 | 8 | 9 | 12 |
| 5%-10% | 3 | 7 | 10 | 10 |
| 2%-5% | 5 | 5 | 0 | 4 |
| Under 2% | 1 | 2 | 1 | 1 |
We included everyone we had polling data for, no matter how likely or unlikely they were to run. If a candidate didn’t run or dropped out before voting began, they were counted as winning zero percent of the primary vote.
Sources: Polls, CQ Roll Call, Dave Leip's Atlas of U.S. Presidential Elections
And if we go one step further and account for a candidate’s level of name recognition, early national primary polls were even more telling of who might win the nomination. As you can see in the chart below, a low-name-recognition candidate whose polling average climbed past 10 percent in the first half of the year before the primaries had at least a 1 in 4 shot at winning, which actually put them ahead of a high-name-recognition candidate polling at the same level.
This is why we believe that national primary polls are useful (even this far out) despite the fact that they are technically measuring an election that will never happen — we don’t hold a national primary. For this reason, early-state polls are important, too, especially if they look different from national polls. History is littered with examples of national underdogs who pulled off surprising wins in Iowa or New Hampshire, then rode the momentum all the way to the nomination. And according to analysis from RealClearPolitics, shortly after Thanksgiving is historically when polls of Iowa and New Hampshire start to come into closer alignment with the eventual results.
But don’t put too much faith in early primary polls (or even late ones — they have a much higher error, on average, than general-election polls). Voters’ preferences are much more fluid in primaries than they are in general elections, in large part because partisanship, a reliable cue in general elections, is removed from the equation. And voters may vacillate between the multiple candidates they like and even change their mind at the last minute, perhaps in an effort to vote tactically (i.e., vote for their second choice because that candidate has a better chance of beating a third candidate whom the voter likes less than their first or second choice).
On the flip side, early general-election polls are pretty much worthless. They are hypothetical match-ups between candidates who haven’t had a chance to make their case to the public, who haven’t had to withstand tough attacks and who still aren’t on many Americans’ radar. And these polls aren’t terribly predictive of the eventual result either. From 1944 to 2012, polls that tested the eventual Democratic and Republican nominees about a year before the election (specifically, in November and December of the previous year) missed the final margin by almost 11 percentage points, on average — though it’s worth noting that they were more accurate in 2016, missing by around 3 points.1
Early general-election polls are usually way off the mark
Average error in general-election polls that tested the two eventual nominees in November and December of the year before the election, for presidential elections from 1944 to 2012
Polling accuracy a year before the election

| Election | Average GOP poll lead | GOP election margin | Absolute error |
|---|---|---|---|
| 1944 | -14.0 | -7.5 | 6.5 |
| 1948 | -3.8 | -4.5 | 0.7 |
| 1956 | +22.0 | +15.4 | 6.6 |
| 1960 | +3.0 | -0.2 | 3.2 |
| 1964 | -50.3 | -22.6 | 27.7 |
| 1980 | -15.5 | +9.7 | 25.2 |
| 1984 | +7.2 | +18.2 | 11.0 |
| 1988 | +18.0 | +7.7 | 10.3 |
| 1992 | +21.0 | -5.6 | 26.1 |
| 1996 | -13.0 | -8.5 | 4.5 |
| 2000 | +11.9 | -0.5 | 12.4 |
| 2004 | +8.7 | +2.5 | 6.2 |
| 2008 | -0.3 | -7.3 | 6.9 |
| 2012 | -2.8 | -3.9 | 1.0 |
| Average | | | 10.6 |
No odd-year November-December polling was available for the 1952, 1968, 1972 and 1976 elections.
Source: Roper Center for Public Opinion Research
In other words, at this stage in the cycle, primary polls can be useful but are by no means infallible, while general-election polls can safely be ignored. That may seem frustrating, but just remember that pollsters aren’t trying to make predictions; they’re simply trying to capture an accurate snapshot of public opinion at a given moment in time.
What to keep in mind generally
There are some guidelines you should remember at any time of the year, however. First, some pollsters are more accurate than others. We consider the gold standard of polling methodology to be pollsters that use live people (as opposed to robocalls) to conduct interviews over the phone, that call cell phones as well as landlines and that participate in the American Association for Public Opinion Research’s Transparency Initiative or the Roper Center for Public Opinion Research archive. That said, the polling industry is changing; there are some good online pollsters too. You can use FiveThirtyEight’s Pollster Ratings to check what methodology each pollster uses and how good its track record has been. (And if a pollster doesn’t show up in our Pollster Ratings, that might be a red flag.)
Another reason to pay attention to the pollster is for comparison purposes. Because pollsters sometimes have consistent house effects (their polls overestimate the same party over and over), it can be tricky to compare results from different pollsters. (For this reason, FiveThirtyEight’s models adjust polls to account for house effects.) When looking for trends in the data over time, it’s better to compare a poll to previous surveys done by that same pollster. Otherwise, what looks like a rise or fall in the numbers could just be the result of a different methodological decision or, especially for non-horse-race questions, the way the question is worded. The order in which questions are asked can matter too; for example, asking a bunch of questions about health care and then asking for whom respondents would vote might bias them to pick the candidate they think is best on health care.
In addition, note who is being polled and what the margin of error is. Polls conducted among likely voters are the best approximation of who will actually cast a ballot, although when you’re still several months away from an election, polls of registered voters are much more common, and that’s fine. For non-electoral public opinion questions, like the president’s approval rating, many polls use a sample that will try to match the demographic profile of all adults in the U.S., and that’s fine, too. As for margin of error … just remember that it exists! For example, if a poll of the 2018 Florida governor race showed former Tallahassee Mayor Andrew Gillum ahead of former Rep. Ron DeSantis 47 percent to 46 percent with a margin of error of plus or minus 4 points, you’d want to keep in mind that DeSantis may actually have been leading at the time. Remember, too, that the margin of error applies to each candidate’s polling number, not to the difference between the candidates. So if both numbers are off by the margin of error, the difference between them could be off by twice as much. In this case, that could mean Gillum dropping to 43 percent and DeSantis jumping up to 50 percent, going from a 1-point deficit to a 7-point lead.
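To make that arithmetic concrete, here is a minimal Python sketch. The formula is the standard normal-approximation margin of error for a single proportion, and the poll numbers and sample size are hypothetical stand-ins chosen to roughly match the Gillum/DeSantis example above; this is not FiveThirtyEight's code.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from a
    simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical figures echoing the example above: Gillum 47%, DeSantis 46%,
# with a sample size chosen so the margin of error is about 4 points.
n = 600
gillum, desantis = 0.47, 0.46

moe = margin_of_error(0.5, n)   # about 0.04, i.e. +/- 4 points per candidate
lead = gillum - desantis        # nominal 1-point lead

# Following the article's rule of thumb, the gap between the candidates
# can be off by roughly twice the per-candidate margin of error.
print(f"Per-candidate margin of error: +/-{moe:.1%}")
print(f"Nominal lead: {lead:.1%}; plausible range: "
      f"{lead - 2 * moe:.1%} to {lead + 2 * moe:.1%}")
```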
Sample size is important too — a smaller sample means a larger margin of error — but good polling is expensive, so the best pollsters may wind up with smaller samples. And that’s OK. As long as you heed the margin of error, a poll with a sample size of, say, 300 isn’t inherently untrustworthy. That said, don’t dive too much into one poll’s crosstabs — that’s where sample sizes do get unacceptably small and margins of error get unacceptably big. This is one reason not to trust commentators who try to “unskew” a poll by tinkering with its demographic breakdown, or who say that a poll’s results among, say, black voters are unbelievable and therefore the whole poll is too. These people are usually trying to manufacture better results for their side, anyway.
Speaking of which, consider the motive of whoever is sharing the survey. Polls sponsored by a candidate or interest group will probably be overly favorable to their cause. You should be especially suspicious of internal polls that lack details on how they were conducted (e.g., when they were conducted, who was polled, their sample size and their pollster). If you get your news from a partisan outlet, it may also selectively cover only polls that are good for its side. And even the mainstream media might be inclined to overhype a poll as “shocking” or a margin as “razor-thin” because it makes for a better headline.
Next, beware of polls that have drastically different results from all the others. They often turn out to be outliers — although not always (every new trend starts with one poll), which is why you shouldn’t throw them out either. Instead, just use a polling average, which aggregates multiple polls and helps you put the outlier into proper context. We at FiveThirtyEight use averages for that very reason.
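As a toy illustration of why an average blunts an outlier, here is a sketch with invented poll numbers; real polling averages, including FiveThirtyEight's, also weight and adjust the individual polls.

```python
# Invented polls of the same race: one candidate's support, in percent.
polls = [44, 45, 43, 46, 44, 52]   # the 52 looks like an outlier

avg_all = sum(polls) / len(polls)
avg_without_outlier = sum(polls[:-1]) / len(polls[:-1])

print(f"Average including the outlier: {avg_all:.1f}")              # 45.7
print(f"Average excluding the outlier: {avg_without_outlier:.1f}")  # 44.4
# One surprising poll moves the average by about a point rather than
# redefining the race on its own - and if it really is the start of a
# new trend, later polls will pull the average in that direction anyway.
```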
And even if a new trend does emerge, wait a bit before declaring it the new normal. Big events — candidate announcements, debates, conventions — can have dramatic effects on the polls, but they are often fleeting.
Finally, come to terms with the fact that polls won’t perfectly predict the final results. Polls are a lot more accurate than people sometimes give them credit for, but polling error is real. Since 1998, polls conducted within a few weeks of the election have missed by an average of 3-10 points, depending on the type of campaign. So trust the polls, but hold onto some uncertainty right up until the moment election results start rolling in.
1 note · View note
jacobbellwoar · 6 years ago
Text
Biases and Surveys
One of the topics I found most interesting in my high school statistics class was survey bias. We spent entire classes discussing the fact that creating a perfectly neutral statistical sample is essentially impossible, short of conducting a census. There is good reason for this, too - the level of care required to eliminate bias is both fascinating and practically unattainable. I found it quite easy to fall down the rabbit hole of identifying sources of bias.
Take, for example, a standard and common survey topic - predicting elections. People want to know who's going to vote, and who they plan to vote for, as a natural part of the election process. But the question is - who should we ask? It is both highly impractical and extremely inefficient to ask the entire voting population - it's the reason the US census is only taken once every ten years. So, we conclude, we should select a small sample of the population that perfectly represents the voter base as a whole - a simple enough solution. The difficulty lies in HOW we select those people and which questions we ask them. For now, I'll only look at the former, as there is enough in the latter question to constitute a post of its own.
For starters, we have to choose the best way to reach a sufficient variety of people. And, right from the start, we can conclude that any form of selection that relies on voluntary response is more or less useless. After all, who would be more likely to respond to a political survey of their own volition - someone who couldn't care less about the topic, or someone who feels very strongly one way or the other? This might lead us to conclude that a candidate who has a significant level of support among a large but uninspired group of people is losing to a candidate who can rally a small number of incredibly vitriolic people to their cause.
So, we may conclude, we have to contact people directly. But how should this be done? Traditionally, polls were conducted over landline telephones, but, in a modern society, this would introduce a heavy selection bias in favor of older voters, who are far more likely to still use traditional telephones over smartphones or other mobile devices. However, if we were to move to cell phones or emails as our form of contact, we would find the inverse to be true, and we would receive a disproportionate number of young voices in our poll. We could, perhaps, mix and match several different methods of data collection, but even then we would have to consider how different question delivery methods may affect responses.
Furthermore, for the data that we do collect over the phone, timing is also a key element. Calling during work hours may yield a higher number of unemployed or college-aged people, while calling during the evening may select against working-class populations who tend to have later hours. Calling on weekends may select against young people or frequent churchgoers. Because we cannot pick a single time that does not disadvantage some group, we may conclude that we should spread our calls out to account for different schedules, time zones, and lifestyles. The exact timings and proportions, however, are impossible to tune perfectly and will always introduce some unavoidable bias.
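A quick simulation makes the contact-method problem concrete. The age groups, population shares, and landline-ownership rates below are invented for illustration; the point is only that sampling through a channel whose reach varies by age changes the age mix of the sample.

```python
import random

random.seed(0)

# Invented population: age groups, their shares, and an assumed chance that
# someone in each group lives in a household with a landline.
age_groups = ["18-29", "30-44", "45-64", "65+"]
population_share = [0.21, 0.25, 0.33, 0.21]
landline_rate = {"18-29": 0.15, "30-44": 0.30, "45-64": 0.55, "65+": 0.80}

# Draw a large synthetic population, then keep only the people a
# landline-based poll could actually reach.
people = random.choices(age_groups, weights=population_share, k=100_000)
landline_sample = [age for age in people if random.random() < landline_rate[age]]

def shares(group):
    return {g: f"{group.count(g) / len(group):.0%}" for g in age_groups}

print("Population:      ", shares(people))
print("Landline sample: ", shares(landline_sample))
# The 65+ group ends up heavily overrepresented in the landline-only sample,
# so any opinion that correlates with age gets skewed along with it.
```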
1 note · View note
canna-base · 7 years ago
Text
Survey says veterans strongly back legalizing medical marijuana
WASHINGTON – Veterans tend to be an older, more conservative group, but that doesn’t stop them from advocating for a radical change in direction from the nation’s outdated policies on marijuana.
The American Legion represents many conservative veterans, but that doesn’t stop it from taking progressive positions on certain key issues.
A survey released by the American Legion, the nation’s largest wartime veterans’ service organization, indicates near-unanimous support among veterans for research on medical marijuana, with an overwhelming majority supporting legalization of the drug for medical purposes.
The Legion adopted a resolution last year calling on the Drug Enforcement Administration to reclassify marijuana from its current illicit status to a category “that, at a minimum, will recognize cannabis as a drug with potential medical value.” The resolution also called on the DEA “to license privately funded medical marijuana production operations.”
The Legion made it clear, however, that its support of marijuana ends where fun begins.
“It is very important to note that The American Legion is NOT advocating for recreational use of marijuana,” its statement said.
This is one of several progressive moves for an organization that represents voters who largely voted for President Donald Trump, according to exit polls.
Last month, the Legion urged Trump to veto Republican legislation overturning a Consumer Financial Protection Bureau rule prohibiting financial firms from imposing arbitration on consumers and barring class-action lawsuits.
When Trump moved to ban transgender people from the military, the Legion rebuffed him, saying: “Any requirement that disqualifies an able-bodied person from serving in our armed forces should be based solely upon its proven adverse effect on readiness, and nothing else. . . . Should that standard become questionable, The American Legion relies on the judgment of the senior leadership of the military” and not, notably, the president.
After the August white supremacist violence in Charlottesville, the Legion said it considers that those who display the Confederate flag are “by very definition anti-American,” because the Confederates waged war against the United States.
A new survey shows near-unanimous support for the right to use medical marijuana among members of the American Legion. (Doug Duran/Cannifornian file)
By the way, the Legion allowed women members from its beginning in 1919, not a common practice then, according to Joseph M. Plenzler, the organization’s spokesman. They could vote for the organization’s national commander before they could vote for the president of the nation.
Regarding medical marijuana, the key finding from the survey of 802 people (513 veterans and 289 of their caregivers) is that 9 in 10 favor medical research on it and four-fifths support legalizing medical marijuana.
“Support for medical cannabis, and research on medical cannabis is high across veterans and caregivers, all age ranges, gender, political leanings and geography,” said a memorandum from Five Corners Strategies, which conducted the survey for the Legion.
But two important points: The survey was commissioned by an organization that supports the research, and the survey has limitations. The most important limitations are that the sample was limited to a list of landline phones, a method which excludes veterans in wireless-only households as well as those whose households were incorrectly identified on the list. Also, the survey recruited respondents using an automated recorded voice. That has the benefit of eliminating human bias in those asking survey questions, but it achieves lower cooperation rates than live interviewers.
There is broad support for medical cannabis across various categories, according to the survey:
Geography: “The support for research and legalization is spread across the country, in states where medical cannabis is currently legal and in states where it is not.”
Political orientation: Among veterans and caregivers, federally legalized medical marijuana is supported by about 9 in 10 self-identified liberals, almost as many conservatives, and 7 in 10 independents.
Age: While support for medical cannabis research and legalization declines as veterans and their caregivers get older, it remains strong even among senior citizens. Among those 60 and older, more than three-quarters favored medical marijuana. Everyone in the 18 to 30 age group agreed.
It’s worth noting that 60 percent of those surveyed were 60 or older. That’s one reason for the finding that “the majority of veterans surveyed that are using cannabis are over the age of 60,” according to the memo.
“It is also clear from the survey that veterans are accessing cannabis to assist them in states with and without medical marijuana programs,” Five Corners added.
Veterans and their caregivers are not alone in favoring a more sensible approach to medical marijuana. A Yahoo News/Marist poll found 84 percent supported it in March, including majorities of Americans of all ages. The lowest level was 65 percent among those ages 69 and older.
The Legion is making progress in its quest to change Uncle Sam’s old ways. Last week, all 10 Democrats on the House Committee on Veterans’ Affairs urged VA to research the impact of medical marijuana on chronic pain and post-traumatic stress disorder among veterans.
So far, however, the Department of Veterans Affairs has not changed its policy. VA Secretary David Shulkin, however, has indicated a willingness to examine medical marijuana when federal law allows.
“There may be some evidence that this (medical marijuana) is beginning to be helpful,” Shulkin said when asked about it in May, the last time he commented on it. “And we’re interested in looking at that and learning from that. But until the time that federal law changes, we are not able to be able to prescribe medical marijuana for conditions that may be helpful.”
source https://canna-base.com/survey-says-veterans-strongly-back-legalizing-medical-marijuana/
0 notes
polixy · 7 years ago
Text
Q&A: Pew Research Center’s president on key issues in U.S. polling
Even as the polling industry tries to recover from real and perceived misses in U.S. and European elections in recent years, new studies have provided reassuring news for survey practitioners about the health of polling methodology.
In this Q&A, Michael Dimock, president of Pew Research Center, talks about recent developments in public opinion polling and what lies ahead.
There’s a widespread feeling that polling failed to predict the 2016 election results. Do you agree?
President Trump’s victory certainly caught many people by surprise, and I faced more than one Hillary Clinton supporter who felt personally betrayed by polling. But the extent to which the expectation of a Clinton victory was based on flawed polling data – or incorrect interpretation of polling data – is a big part of this question.
Polling’s professional organization, the American Association for Public Opinion Research (AAPOR), spent the last several months looking at the raw data behind the pre-election polls in an attempt to answer this question. (Note: The leader of the AAPOR committee tasked with this inquiry is Pew Research Center’s director of survey research, Courtney Kennedy.) While it might surprise some people, the expert analysis found that national polling in 2016 was very accurate by historical standards. When the national polls were aggregated, or pulled together, they showed Clinton winning among likely voters by an average of 3.2 percentage points. She ended up winning the popular vote by 2.1 points – a relatively close call when looking at presidential polling historically, and significantly closer than what polls suggested in 2012.
Of course, as we all well know, the president isn’t chosen by the national popular vote, but by the Electoral College – so it is the state poll results, rather than the nationwide surveys, that were particularly relevant for those trying to project election results. And the state polls, according to the report, “had a historically bad year.” In particular, in several key Midwestern states with a history of voting Democratic at the presidential level – including Wisconsin, Michigan and Pennsylvania – Clinton was narrowly ahead in pre-election polls, only to be edged out on Election Day. So what happened?
The AAPOR committee report offers at least two factors that were at play. First, data suggests that a number of voters made up their minds in the final days of the campaign. And those late deciders broke for Trump by a significant margin. In the battleground of Wisconsin, for example, 14% of voters told exit pollsters that they had made up their minds only in the final week; they ultimately favored Trump by nearly two-to-one. Yet nearly all of the polling that drove expectations of a Clinton victory in Wisconsin was conducted before the final week of the campaign, missing this late swing of support.
Secondly, unlike in other recent elections, there was a stark education divide in candidate support that some state-level polls missed. Pollsters have long talked about the importance of gender, religion, race and ethnicity as strong correlates of voter preference. Last year, education was also a strong correlate. A number of pre-election polls that didn’t account for it – by adjusting, or “weighting,” their samples to better reflect the full population – were off if they had too many highly educated voters, who tended to vote for Clinton, and too few low-education voters, who tended to vote for Trump.
What does that all mean? Well, polls may have been an accurate representation of voter preferences at the time they were taken. But preferences can change, particularly in a fast-moving campaign. Combine that with an education gap that wasn’t apparent in other recent elections – and wasn’t reflected in some state-level surveys – and you can see why some of those state polls did a poor job of projecting the ultimate outcome. The key for survey practitioners is that both these types of errors can be addressed by known methods.
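To illustrate the kind of fix that report points to, here is a minimal sketch of weighting a sample to a known education distribution. The two-category split, the shares, and the support numbers are all invented; real pollsters weight on many variables at once (often by raking), and this is not Pew's or any specific pollster's procedure.

```python
# Invented raw sample: 70% college graduates vs. an electorate assumed to be 40%.
sample_share = {"college": 0.70, "no_college": 0.30}
target_share = {"college": 0.40, "no_college": 0.60}

# Assumed candidate support within each education group (illustrative only).
support = {"college": 0.55, "no_college": 0.40}

# An unweighted estimate simply mirrors the lopsided sample...
unweighted = sum(sample_share[g] * support[g] for g in sample_share)

# ...while weighting each group by (target share / sample share) restores the
# electorate's education mix before averaging.
weights = {g: target_share[g] / sample_share[g] for g in sample_share}
weighted = sum(sample_share[g] * weights[g] * support[g] for g in sample_share)

print(f"Unweighted estimate: {unweighted:.1%}")  # 50.5% - overstates the candidate
print(f"Weighted estimate:   {weighted:.1%}")    # 46.0%
```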
If polls can get election outcomes wrong, doesn’t that mean polling in general is unreliable?
No. There are important differences between election polling and other kinds of survey work.
Forecasting elections doesn’t just involve asking people whether they support candidate A or candidate B. It also involves trying to determine whether respondents will act on their preferences by casting a ballot at all. And that extra step of identifying “likely voters” has long been one of the most challenging things for pollsters to do. Survey respondents generally do a better job of telling you what they think than what they are going to do, especially when it comes to voting. It is this extra step – where a lot of assumptions about factors associated with turnout need to be made – that is quite distinct from the principles of random sampling and good question design that make survey research valid and reliable.
Another wrinkle when it comes to election polling is that we’re now in an era when the aggregation of polls emphasizes their use specifically as a forecasting tool, and asserts degrees of certainty to those forecasts. This is much like the weatherman using a variety of tools to forecast the weather. But I don’t think this is the primary, or even a good, use of polling.
In fact, most survey work is not engaged in election forecasting. Instead, it’s meant to get beyond the surface and into people’s heads – to truly understand and explain their values, beliefs, priorities and concerns on the major issues of the day. These kinds of surveys are aimed at representing all citizens, including those who might not vote, write to their member of Congress or otherwise participate in the political process.
After each election, there is a tendency for the winning candidate to claim a mandate and point to the results as evidence of the public’s will. But given that so many citizens don’t vote, and that many who do vote don’t like either of the options before them, elections aren’t necessarily reflective of the will of all people. A deep, thoughtful survey can help address this disconnect by presenting the voice of the public on any number of issues.
Let’s talk about response rates. In recent decades, fewer people have been responding to telephone surveys. Does that mean poll results are becoming less accurate?
We spend a lot of time worrying about declining response rates. There is no doubt that the share of Americans who respond to randomized telephone surveys is low and has fallen over time – from 36% of those called in 1997 to 9% in 2016. A low response rate does signal that poll consumers should be aware of the potential for “nonresponse bias” – that is, the possibility that those who didn’t respond may be significantly different from those who did participate in the survey.
But a Pew Research Center report released last month shows that survey participation rates have stabilized over the past several years and that the negative consequences of low response rates for telephone survey accuracy and reliability are limited. In particular, there’s no evidence that Democrats or Republicans are systematically more or less likely to participate in telephone surveys.
There’s also a misperception that the rise of mobile phones is a problem for survey accuracy, but it’s not. To be sure, mobile phones are a factor: It’s now estimated that roughly 95% of U.S. adults own a mobile phone, and more than half live in households that have no landline phone at all.
To meet this reality, the Center conducts the majority of interviews – 75% – via mobile phones. And we’ve actually found that it has improved the representativeness of our surveys by improving our ability to reach lower-income, younger and city-dwelling people – all of whom are more likely to be mobile-only.
Has “big data” reduced the relevance of polling, or will it in the future?
Calling a sample of 1,000 to 1,500 adults may seem quaint in the new world of big data. Why even collect survey data when so much information already exists in the digital traces we leave behind as part of our daily lives?
While it’s possible that some of the more straightforward tasks that polling had traditionally provided – like tracking candidate popularity, consumer confidence or even specific brand images – might eventually be tracked by algorithms crunching massive public databases, for the foreseeable future getting beyond the “what” to the “why” of human behavior and beliefs requires asking people questions – through surveys – to understand what they are thinking. Big data can only tell you so much.
And the existence of big data doesn’t equal access to data. While researchers could potentially learn a lot about Americans’ online, travel, financial or media consumption behaviors, much of this data is private or proprietary, as well as fragmented, and we don’t yet have the norms or structure to access it or to make datasets “talk” to other sources.
So rather than feeling threatened by big data, we see it as a huge opportunity and have made some big investments in learning more. We’re particularly interested in work that tries to marry survey research to big data analytics to improve samples, reach important subpopulations, augment survey questions with concrete behaviors and track changes in survey respondents over time. The future of polling will certainly be shaped by – but probably not replaced by – the big data revolution.
What is the future of polling at Pew Research Center?
We were founded by one of the giants of the field – Andy Kohut – and have built much of our reputation on the quality of our telephone polls. But we are also an organization that has never rested on its laurels. We spend a lot of time and money taking a hard look at our methods to make sure they remain accurate and meaningful.
I remain confident that telephone surveys still work as a methodology, and the Center will keep using them as a key part of our data collection tool kit. And I’m particularly proud that the Center is at the forefront of efforts to provide the data to test their reliability and validity, with no punches pulled.
But we aren’t stopping there. For example, we now have a probability-based online poll that accounts for about 40% of our domestic surveys. We tap into lots of government databases in the U.S. and internationally for demographic research. We’ve got a Data Labs team that is experimenting with web scraping and machine learning. We’ve used Google Trends, Gnip and Twitter, ComScore, Parse.ly, and our own custom data aggregations to ask different kinds of questions about public behaviors and communication streams than we ever could with polls alone.
All in all, this is an interesting time to be a social scientist. There are lots of big changes in American politics, global relationships, media and technology. Our methods will continue to change and evolve in response. At the end of the day, though, our obligation remains the same: to gather the public’s opinions reliably and respectfully, to analyze and assess what people tell us with the utmost care, and to share what we learn with both transparency and humility.
Topics: 2016 Election, Political Typology, Polling, Research Methods, Voter Participation
Source: Blog – Pew Research Center, June 16, 2017: http://www.pewresearch.org/fact-tank/2017/06/16/qa-pew-research-centers-president-on-key-issues-in-u-s-polling/
0 notes
in-sightjournal · 5 months ago
Text
Ask A Genius 978: Immediate Pre-Debate Thoughts on Trump and Biden
Scott Douglas Jacobsen: I’m looking at the 538 polls from June 25 to June 17. As far as I can tell from the favorability and unfavorability ratings, Trump is about 10 or 11% unfavourable. Rick Rosner: Trump’s net unfavourable rating is about 11.5%. That doesn’t look good. So, he’s in the hole by 11.5%, which seems good for Biden, except Biden’s in the hole by more than 17%. Jacobsen: Why do…
View On WordPress
0 notes
theliberaltony · 4 years ago
Link
via Politics – FiveThirtyEight
At FiveThirtyEight, we strive to accumulate and analyze polling data in a way that is honest, informed, comprehensive and accurate. While we do occasionally commission polls, most of our understanding of American public opinion comes from aggregating polling data conducted by other firms and organizations. This data forms the foundation of our polling averages, election forecasts and much of our political coverage.
In building our polling database, we aim to be as inclusive as possible. This means we will collect any poll that has been made publicly available and that meets a few basic standards:
The poll must include the name of the pollster, survey dates, sample sizes and details about the population sampled. If these are not included in the poll’s release, we must be able to obtain them in order to include the poll.
Pollsters must also be able to answer basic questions about their methodology, including but not limited to the medium through which their polling was conducted (e.g., landline calls, text, etc.), the source of their voter files and their weighting criteria.
However, there are some types of polls we don’t include, such as:
“Nonscientific” polls that don’t attempt to survey a representative sample of the population or electorate.
Polls like Civiqs or Microsoft News tracking polls that are produced using MRP (short for “multilevel regression with poststratification”), a modeling technique that produces estimates based on survey interviews that are then used to calculate probabilities; these probabilities are then projected onto the composition of the population in question. While this is a valid technique for understanding public opinion data, we exclude them because we consider them more like models than individual polls. (As an analogy, we think of this as using someone else’s barbecue sauce as an ingredient in your own barbecue sauce.)
An additional list of edge cases in which we may exclude polls can be found in this article. For instance, we exclude polls that ask voters who they support only after revealing leading information about the candidates.
We do include internal polls that are publicly available, except in one unusual circumstance (a general election poll sponsored by a candidate’s rival in the primary1). Internal and partisan polls have an asterisk next to the pollster’s name. The asterisk does not indicate whether the pollster itself is partisan, but whether the money that paid for the poll came from a partisan source. Polls are considered partisan if they’re conducted on behalf of a political party, campaign committee, PAC, super PAC, 501(c)(4), 501(c)(5) or 501(c)(6) that conducts a large majority of its political activity on behalf of one political party.
Additionally, if we find that a sponsor organization is selectively releasing polls favorable to a certain candidate or party, we may also categorize that organization as partisan. We generally go out of our way to not characterize news organizations as partisan, even if they have a liberal or conservative view. But selectively releasing data that favors one party is a partisan action, and such polls will be treated as partisan. These classifications may be revisited if a sponsor ceases engaging in this behavior.
Polls we suspect are fake will also not be included until we conduct a thorough investigation and can confirm their veracity. We will permanently ban any pollster found to be falsifying data. Additionally, we reserve the right to ban polls sponsored by a particular organization that consistently engages in dishonest or nontransparent behavior that goes beyond editorializing and political spin.
Our guidelines for inclusion are intentionally permissive. We aim to capture all publicly available polls that are being conducted in good faith. While some polls have a more established track record of accuracy than others — and we do take that into account in our models and political analysis — we exclude polls from our dataset only in exceptional circumstances.
Below are some questions we’ve been asked often over the years about the types of polls we collect. Take a look and if you still have questions or find a poll we don’t have, please email us at [email protected].
Frequently Asked Questions
Q: Which races do you collect polls for? A: We collect polls for presidential, Senate, House and gubernatorial races in addition to presidential approval polls and congressional generic ballot polls at the national level. At this time, we do not collect primary polls other than for the presidency — except in cases of a “jungle primary,” as it’s possible for a candidate to win the seat outright. The latest polls page includes all polls publicly released within two years of an election. If we don’t have any polls for a race in a specific state, that means we weren’t aware of any polls there.
Q: Can I download this data? A: Yes! There are links at the bottom of the latest polls page, but you can also download this data and more from our data repository for all our polls, forecasts and other data projects. Unfortunately, however, we are not able to share historical data for presidents’ approval ratings. You can find additional information on historical presidential approval ratings, and guidelines for acquiring that dataset, on the Roper Center’s website.
Q: Why do you sometimes add old polls to the latest polls page? A: Polls are added to our latest polls page as soon as they are added to our database. If older polls show up on the polls page, it is because they have recently been added to our database — either because we were only just made aware of them or because they had previously been unreleased.
Q: What do the pollster grades mean? A: We calculate a grade for each pollster by analyzing its historical accuracy and methodology. You can read our full ratings and methodology to better understand how we calculate the grades.
Q: What does it mean when a pollster has a rating like “A/B” or a dotted circle around the rating? A: For pollsters with a relatively small sample of polling within three weeks of an election, we show a provisional rating (“A/B,” “B/C,” or “C/D”) rather than a precise letter grade and use a dotted circle to emphasize that the rating is provisional.
Q: Why do some pollsters not have a grade? A: For some pollsters, we do not have any polls from the last three weeks of an election cycle, which means we can’t evaluate their historical performance. We do include polls from these pollsters in our polling averages and models, but we are unable to assign them a rating based on their historical performance; therefore, they receive less weight in our averages and models.
Q: Do you weight or adjust polls? A: Yes. When we calculate our polling averages, some polls get more weight than others. For example, polls that survey more people or have a historical track record of accuracy get more consideration in calculating the average than polls with small sample sizes or polls that have been historically less accurate. Additionally, our polling averages apply adjustments for things like a pollster’s house effect (a measure for how consistently a pollster leans toward one party or candidate) or trends in polls from similar states. For more information on how we weight and adjust polls for polling averages, read this detailed explanation of our 2020 methodology.
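As a rough sketch of what weighting and adjusting can look like, the example below combines invented polls using a square-root-of-sample-size weight and subtracts an assumed house effect from each pollster; the actual FiveThirtyEight averages use more elaborate weights and adjustments described in the methodology linked above.

```python
import math

# Invented polls of one race: (pollster, sample size, margin for candidate A in points).
polls = [
    ("Pollster X", 1200, +3.0),
    ("Pollster Y",  500, +6.0),
    ("Pollster Z",  900, -1.0),
]

# Assumed house effects in points, estimated from each pollster's history
# (positive means the pollster tends to overstate candidate A).
house_effect = {"Pollster X": +0.5, "Pollster Y": +2.0, "Pollster Z": -1.0}

def adjusted_weighted_average(polls):
    weighted_sum, total_weight = 0.0, 0.0
    for name, n, margin in polls:
        adjusted = margin - house_effect[name]  # strip the pollster's typical lean
        weight = math.sqrt(n)                   # larger samples count more, with diminishing returns
        weighted_sum += weight * adjusted
        total_weight += weight
    return weighted_sum / total_weight

print(f"Adjusted, weighted average margin for candidate A: "
      f"{adjusted_weighted_average(polls):+.1f} points")
```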
Q: When do you show third-party candidates on the latest polls page or in your polling averages? A: We include third-party candidates in polls that ask about them. To find those polls, enter the candidate’s last name in the search box on our latest polls page. As for our averages, if enough pollsters ask questions that include a specific third-party candidate, we will include that candidate.
Q: Why are the sample sizes sometimes missing for polls? A: If a poll does not have a sample size listed, the pollster or sponsor did not report it and we are actively working to obtain it. These polls are still included in our averages and models with an imputed sample size until we obtain the actual sample size.
Q: Why do the values in some polls add up to more than 100 percent? A: Values in some polls may add up to more than 100 percent due to rounding. For example, if a pollster published a poll with an approval rating of 46.5 percent and a disapproval rating of 53.5 percent, then both options would automatically round up to 47 percent and 54 percent, respectively.
Q: Why do the margins in some polls not match what the pollster reports? A: This also boils down to rounding. For example, if a pollster puts one candidate at 45.2 percent and another at 45.6 percent, we’ll display these two candidates at 45 percent and 46 percent, respectively. That means we’ll display their margin as a difference of 0 even though the actual margin is 0.4.
Q: Why are there sometimes multiple versions of a specific poll question? A: There are several reasons for this, but here are three of the most common explanations. First, a pollster may release a question with multiple populations, such as all adults, registered voters or likely voters, and we include all these populations in our database and on our polls page. Second, a pollster may publish multiple results by estimating likely voter turnout in a few different ways; in such cases, we may include all of them if the pollster does not specify one result as its preferred estimate. Third, a pollster may ask a horse race question in different ways; for example, there may be one head-to-head question between two candidates, and a second question asking about every candidate running. We include all versions of this question in our database, averaging poll questions with identical populations for our averages and models.
Still have questions? Send us an email and we’ll do our best to sort it out.
0 notes