{{subpages}}


A '''public opinion poll''' is a questionnaire used to measure [[public opinion]], or the collective attitudes held by a population. Because of the impracticality of administering a questionnaire to all of a large population's members, public opinion polls assess the opinions of the total population by surveying a sample that is sufficiently large and representative of the population as a whole to produce statistically valid results.
 
Polling is by far the predominant means of measuring public opinion today, and poll administration practices have grown increasingly sophisticated and rigorous since the enterprise's inception in the 1930s. Nevertheless, the poll remains an imperfect instrument, whose accuracy is frequently compromised by a variety of factors over which even the most diligent pollsters exert limited control.


==History of opinion polls==


While [[straw poll|straw polling]], or the estimation of public opinion based on informal sampling and survey procedures, dates back at least to the early nineteenth century, the emergence of scientific public opinion polling is a much more recent development. The first known example of a straw vote was conducted by the newspaper ''The Harrisburg Pennsylvanian'' in 1824; it showed [[Andrew Jackson]] leading [[John Quincy Adams]] by 335 votes to 169 in the contest for the presidency.


=== The emergence of modern scientific public opinion polling ===
Prior to the 1930s, the most impressive attempt at public opinion polling was the [[Literary Digest poll]], a national-scale poll conducted by the now-defunct ''[[Literary Digest]]'' magazine as a means for forecasting U.S. presidential election results. Its first run in 1916 correctly predicted [[Woodrow Wilson]]'s reelection based on a simple tabulation of the returns of millions of postcard questionnaires that had been sent to Americans listed in telephone directories and state automobile registries. The exercise was repeated to good effect in the next four election cycles, in each case accurately predicting the presidential victor based on the returns of extraordinarily large numbers of questionnaires from Americans nationwide.
 
In the meantime, several up-and-coming public opinion researchers, including [[Archibald Crossley]], [[Claude Robinson]], [[Elmo Roper]], and [[George Gallup]], were making major breakthroughs in the development of statistical sampling methodology.
 
In 1935, Gallup founded the American Institute of Public Opinion, precursor to the still-extant [[Gallup Organization]], and challenged the Literary Digest poll's preeminence as a way of establishing the legitimacy of his [[quota sampling]] technique. Leading up to the 1936 presidential election, ''Literary Digest'' conducted its usual postcard survey. While its 2.3 million respondents constituted an extraordinarily large sample, its reliance on telephone and automobile listings, especially during the [[Great Depression]], yielded results that did not reflect the voting intentions of the public at large. A week before election day, ''Literary Digest'' predicted that [[Alf Landon]], the Republican Party candidate, would defeat the incumbent, [[Franklin D. Roosevelt]], by a large margin. Gallup, on the other hand, surveyed a much smaller but demographically representative sample and correctly predicted Roosevelt's landslide victory. The ''Literary Digest'' went out of business soon afterwards, while the polling industry started to take off.


[[Gallup]] launched a subsidiary in Britain, where it correctly predicted Labour's victory in the 1945 general election, in contrast with virtually all other commentators, who expected the Conservative Party, led by Winston Churchill, to win easily.
=== The 1948 crisis of confidence ===

The polling industry's best-known failure came in the 1948 U.S. presidential election, when the major polling organizations, including Gallup and Roper, predicted a landslide victory for [[Thomas Dewey]] over the eventual winner, [[Harry S. Truman]].

=== The polling industry's rebound ===

Soon after the 1948 election, the [[Social Science Research Council]] (SSRC) formed an independent, academic [[Committee on the Analysis of Pre-election Polls and Forecasts]] to investigate the pollsters' methods and pinpoint why they failed to predict Truman's victory.


By the 1950s, polling had spread to most democracies. Today polls are conducted in virtually every country, although in more autocratic societies they tend to avoid sensitive political topics. In Iraq, surveys conducted soon after the 2003 war helped measure the true feelings of Iraqi citizens toward Saddam Hussein, post-war conditions, and the presence of U.S. forces.
=== Recent developments ===


For many years, opinion polls were conducted mainly face-to-face, either in the street or in people's homes. This method remains widely used, but in some countries it has been overtaken by telephone polls, which can be conducted faster and more cheaply. Because telemarketers commonly sell products under the guise of a telephone survey, and because of the proliferation of residential call-screening devices and the growing use of cell phones, response rates for phone surveys have been plummeting. Mailed surveys have become the data collection method of choice among local governments that conduct a [[citizen survey]] to track service quality and manage resource allocation. In recent years, [[Internet]] and [[short message service]] (SMS, or text) surveys have become increasingly popular, but most of these draw on whoever wishes to participate rather than on a scientific sample of the population, and are therefore not generally considered accurate.


==Polling procedures==
 
===Design===


Designing a public opinion poll entails several steps, from determining the population of interest and adopting an appropriate method for recruiting a representative sample of that population to developing a questionnaire that is well-suited to obtaining unbiased results.


===Administration===


Public opinion polls may be self-administered, in which case respondents read and complete the questionnaire on their own. Internet polls are nearly always self-administered. Polls can also be administered by an interviewer, or an individual who reads the questionnaire to respondents and records their responses on their behalf. Interviewer-administered polling can take place in person, that is, in the format of face-to-face conversations between the interviewer and respondents. More commonly, though, interviewer-administered polls are conducted by telephone.


===Data analysis===


==Sources of inaccuracy==


Various factors affect the validity and reliability of public opinion poll results, that is, the extent to which they accurately reflect the true population opinions they are intended to measure. These factors fall into three broad categories: those stemming from the use of a sample to estimate the opinions of a whole population; those stemming from the design of the questionnaire used to gather information from respondents; and those stemming from the way in which the questionnaire is administered.


===Sampling error and bias===


All polls administered to population samples are subject to [[sampling error]], which refers to the extent to which the opinions expressed by the surveyed sample do not reflect the opinions of the population as a whole. Sampling error is typically expressed as a [[confidence interval]] of plus or minus some number of percentage points associated with a statistical [[confidence level]]. For example, the maximum sampling error (MSE) for a sample of 1050 drawn from a population of 1,000,000 is +/-3 percentage points at the 95% confidence level; this means that there is a 95 percent chance that the results of a survey administered to that sample fall within a 6-point range around the true opinion of the population as a whole.


Pollsters can reduce sampling error by administering a poll to a larger sample. For example, a sample of 10,000 drawn from the 1,000,000-member population would yield an MSE of +/-1% at the 95% confidence level, and a sample of 100,000 would reduce the MSE to just +/-0.3%. In practice, however, increasing a sample size enough to reduce sampling error substantially usually entails undue financial and logistical costs; a sample of roughly 500 to 1,000 is a typical compromise for political polls, and even obtaining 500 complete responses may require making thousands of phone calls.[http://www.publicagenda.org/polling/polling_error.cfm]
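The figures above follow from the standard formula for the maximum sampling error of a proportion, z * sqrt(p(1-p)/n), evaluated at p = 0.5 (the worst case) and z = 1.96 for the 95% confidence level. A quick sketch of the arithmetic:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Maximum sampling error for a proportion estimated from a simple
    random sample of size n, at the 95% confidence level (z = 1.96).
    p = 0.5 maximizes p*(1-p), giving the worst-case margin."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1050, 10000, 100000):
    print(f"n = {n:6d}: +/-{margin_of_error(n) * 100:.1f} percentage points")
```

Running this reproduces the +/-3, +/-1, and +/-0.3 point margins cited above, and makes the diminishing returns visible: cutting the margin by a factor of ten requires a sample one hundred times larger.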


Sampling error does not reflect other sampling-related sources of inaccuracy, including [[sampling bias]], which comes about when a poll is administered to a sample, however large, that is not representative of the population as a whole. A form of [[selection bias]], sampling bias can be the result of a variety of factors, including convenience sampling, the use of an inappropriate sampling frame, and non-response bias.


====Convenience sampling====


[[Convenience sampling]] is the practice of administering a poll to individuals who are easiest to recruit regardless of their representativeness of the population whose opinions are intended to be measured.  


====Sampling frame bias====


A sampling frame is a defined set of individuals within a population from which a sample is to be drawn. It may but does not necessarily consist of a literal list of all of the population's members. In fact, exhaustive population lists often do not exist or cannot be readily obtained by pollsters. When this is the case, pollsters use some sort of proxy frame, which may consist of a literal list, such as a directory of listed telephone numbers, or a figurative one, as in the case of [[random digit dialing]], which samples from a hypothetical "list" of all possible telephone number permutations. [[Coverage error]] is the discrepancy between such non-exhaustive sampling frames and the full population. To the extent that a sampling frame's non-coverage of the population systematically excludes some segments of the population, poll results will suffer from [[coverage bias]]. For example, random digit dialing excludes those population members who do not have telephones, a group that is not evenly distributed within the population since it is most likely composed of individuals at the lower end of the socioeconomic spectrum. A telephone directory sampling frame yields still more coverage error and bias, since it excludes not only those population members who do not have telephones, but also those who do but have unlisted numbers. This second excluded group is also unlikely to be evenly distributed within the population; for example, the burgeoning "cell-phone only" sector, whose phone numbers are unlisted by default, draws disproportionately from the younger segments of the population. Polling organizations have developed various weighting techniques to help overcome such coverage deficiencies, with varying degrees of success; several studies of mobile phone users by the Pew Research Center in the U.S. concluded that the absence of mobile users was not unduly skewing results, at least not yet.[http://pewresearch.org/obdeck/?ObDeckID=80]
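The figurative "list" behind random digit dialing can be pictured as a uniform draw over all possible local numbers. The sketch below is a toy illustration only (the area codes and the flat uniform draw are assumptions; real RDD designs stratify by working telephone exchanges):

```python
import random

def random_digit_sample(n, area_codes=("212", "312", "415"), seed=42):
    """Draw n ten-digit, U.S.-style phone numbers uniformly at random.
    Every possible 7-digit suffix is equally likely, listed or not --
    which is what lets RDD reach numbers a directory frame would miss."""
    rng = random.Random(seed)  # seeded for reproducibility
    sample = []
    for _ in range(n):
        area = rng.choice(area_codes)
        suffix = rng.randrange(10**7)  # any 7-digit local number
        sample.append(f"{area}-{suffix:07d}")
    return sample

numbers = random_digit_sample(5)
```

Note what the frame still excludes: households with no telephone at all remain unreachable, which is exactly the undercoverage described above.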


While coverage bias is typically associated with undercoverage, or the exclusion of one or more segments of the population, it is also possible for a sample to suffer from overcoverage -- that is, the inclusion of individuals who do not strictly belong in the population of interest. For example, pre-election polls in the United States sometimes use a sampling frame that includes all adult Americans regardless of whether they are registered or likely to vote. To the extent that the opinions of non-voters, who are disproportionately young, less educated, and non-affluent, differ systematically from voters', their inclusion skews the results and makes it difficult to use them for election forecasting and campaign strategy purposes. To avoid this problem, many polling organizations limit their pre-election poll samples to either registered voters or, increasingly, to "likely voters," whom they typically identify with a battery of questions at the start of the poll about past voting behavior and levels of political interest.<ref>See, e.g., Frank Newport, [http://www.gallup.com/poll/109135/Who-Likely-Voters-When-They-Matter.aspx/ "Who Are Likely Voters and Why Do They Matter?"] Gallup Organization, July 28, 2008 (accessed May 15, 2009).</ref>
 
====Non-response bias====
 
Whereas coverage bias stems from pollsters' sampling frame choices, [[non-response bias]] is caused by respondents' decisions about whether to participate in polls. Since some people do not answer calls from strangers or refuse to respond to polls, samples may lack population representativeness despite pollsters' best efforts to construct them appropriately. If the people who refuse or are never reached have the same characteristics as those who do respond, the results will be unbiased; to the extent that their opinions differ systematically, non-response bias ensues and contributes to inaccurate polling results. Unlike sampling error, this bias does not diminish with larger sample sizes, so polling firms apply their own weighting formulas to minimize it.[http://abcnews.go.com/images/pdf/responserates.pdf]
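A common corrective is to weight respondents so that the achieved sample matches known population shares, a technique often called post-stratification. A minimal sketch (the demographic groups, shares, and support rates below are illustrative, not real data):

```python
def poststratification_weights(population_share, sample_share):
    """Weight for each group = population share / sample share,
    so groups underrepresented among respondents count for more."""
    return {g: population_share[g] / sample_share[g] for g in population_share}

# Illustrative scenario: young adults respond less often than seniors.
population = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample     = {"18-34": 0.15, "35-64": 0.50, "65+": 0.35}
weights = poststratification_weights(population, sample)

# Weighted estimate of, say, support for a proposal, given each
# group's (hypothetical) support rate among respondents:
support = {"18-34": 0.60, "35-64": 0.50, "65+": 0.40}
weighted = sum(sample[g] * weights[g] * support[g] for g in sample)
```

Here the unweighted sample would understate support (young adults, the most supportive group, are underrepresented), while the weighted estimate recovers the population-level figure of 0.51. Weighting only helps, however, if non-respondents within each group resemble the respondents who stand in for them.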
 
===Nonattitudes and insincere opinions===
 
Also known as "pseudo-opinions," nonattitudes refer to the propensity for respondents to express an opinion despite not actually having one. First identified by political scientist [[Philip Converse]] in 1964,<ref>Philip E. Converse, "The Nature of Belief Systems in Mass Publics," in ''Ideology and Discontent'', David E. Apter, ed. (New York: Free Press, 1964) pp. 206-61; see also Converse, "Attitudes and Non-Attitudes: Continuation of a Dialogue," in ''The Quantitative Analysis of Social Problems'', Edward R. Tufte, ed. (Reading, MA: Addison-Wesley, 1970) pp. 168-89.</ref> the problem of nonattitudes is a constant source of vexation for public opinion researchers.
 
There are a variety of steps pollsters can take to minimize error stemming from nonattitudes. The simplest fix is to offer a "no opinion" option among the questionnaire's response alternatives. Pollsters might also include screening questions at the start of a poll to gauge which respondents are unlikely to have a true attitude, based on their self-reported level of passion or knowledge about the subject of the poll. Or, they might ask open-ended follow-up questions that push respondents to explain why they favor or oppose the position they have expressed. Finally, they can construct a "mushiness index" based on respondents' answers to a battery of questions regarding their level of interest, information, and opinion stability regarding the poll's subject.
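The last of these can be illustrated with a toy scoring scheme. Everything below, including the 1-5 scale, the equal weighting, and the cutoff, is hypothetical and not the actual index used by any polling firm:

```python
def mushiness_index(interest, information, stability):
    """Average of three self-ratings on a hypothetical 1-5 scale.
    Higher scores indicate a firmer opinion; low scores flag answers
    that may be 'mushy' (nonattitudes rather than true opinions)."""
    return (interest + information + stability) / 3

# A respondent with little interest, knowledge, or opinion stability:
respondent = {"interest": 2, "information": 1, "stability": 2}
score = mushiness_index(**respondent)
is_mushy = score < 3  # illustrative cutoff for flagging a nonattitude
```

Analysts could then report results separately for firm and "mushy" respondents, or down-weight the latter.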
 
A related source of inaccuracy in public opinion polling is respondent insincerity, or the expression of opinions that are not sincerely held. Respondents may deliberately try to manipulate a poll's outcome, for example by advocating a more extreme position than they actually hold in order to boost their side of the argument, or may give rapid and ill-considered answers in order to hasten the end of their questioning. More often, insincerity takes the form of [[social desirability response bias]] (SDRB), which refers to respondents' tendency to provide answers that, true or not, present them in the most socially acceptable light; respondents may be unwilling, for example, to admit to unpopular attitudes like racism or sexism, so that polls understate the true incidence of those attitudes in the population.
 
SDRB is frequently cited as a factor in the [[Bradley effect]] (also known as the Wilder effect or Bradley-Wilder effect) that is sometimes evident in elections featuring a black candidate running against a white opponent.<ref>For example, see Michael W. Traugott and Vincent Price, "A Review: Exit Polls in the 1989 Virginia Gubernatorial Race: Where Did They Go Wrong?," ''Public Opinion Quarterly'' 56:2 (1992) pp. 245-253.</ref>
 
===Question effects===
 
Another potential source of inaccuracy in public opinion polling is the content of the questionnaire itself. Specifically, the wording of questions, the order in which they are asked, and the response alternatives that are made available to respondents can all influence the results of public opinion polls.  
 
====Question wording====


There are various ways in which the wording of a poll's questions can affect the accuracy of its results.


Perhaps the most blatant way in which question wording may induce inaccurate polling results is through the inclusion of leading questions, or those whose wording leads respondents to answer a certain way regardless of their true opinions. On some issues, even legitimate differences in wording can produce quite pronounced differences between surveys. Pollsters attempt to minimize wording effects by asking the same set of questions over time in order to track changes in opinion, and by [[split-sample|split-sampling]], in which two different versions of a question are each presented to half the respondents. The most effective controls, used by [[attitude (psychology)|attitude]] researchers, are to ask enough questions to cover all aspects of an issue and to control for effects due to the form of the question (such as positive or negative wording), with the adequacy of the number established quantitatively through [[psychometrics|psychometric]] measures such as reliability coefficients, and to analyze the results with psychometric techniques that synthesize the answers into a few reliable scores and detect ineffective questions. These controls are not widely used in the polling industry.


====Question order====


Even when all of a poll's questions are optimally worded, the order in which they are asked can bias its results. In the typical question-order-bias scenario, respondents' answers to questions early in the poll affect their answers to subsequent questions, often because they want to avoid coming across as inconsistent or hypocritical. A common countermeasure is to rotate the order in which questions are asked across respondents.


====Response alternatives====


===Mode of interview and interviewer effects===

Poll results might also be skewed by the method used to administer the poll.

Polls that are not self-administered -- that is, those in which responses are recorded by an interviewer rather than by the respondent -- are also subject to interviewer effects, or inaccurate results due to the tendency for respondents to tailor their responses based on their perception of the interviewer's race, gender, or age.

== Bad polling examples ==

An oft-quoted example of opinion polls succumbing to such errors was the British general election of 1992. Despite the polling organisations using different methodologies, virtually all the polls in the lead-up to the vote (and [[exit poll]]s taken on voting day) showed a lead for the opposition Labour party, but the actual vote gave a clear victory to the ruling Conservative party.

In their deliberations after this embarrassment, the pollsters advanced several ideas to account for their errors, including:

* '''Late swing'''. The Conservatives gained from people who switched to them at the last minute, so the error was not as great as it first appeared.

* '''Non-response bias'''. Conservative voters were less likely to participate in the survey than in the past and were thus underrepresented.

* The '''spiral of silence'''. The Conservatives had suffered a sustained period of unpopularity as a result of economic recession and a series of minor scandals. Some Conservative supporters felt under pressure to give a more popular answer.

The relative importance of these factors was, and remains, a matter of controversy, but since then the polling organisations have adjusted their methodologies and have achieved more accurate predictions in subsequent elections. British polls had also failed to predict the Conservative victory of 1970 and Labour's victory in 1974, though their figures at other elections have been generally accurate.

==Polling organizations==

There are many polling organizations. The most famous remains the very first one, the [[Gallup Organization]], created by [[George Gallup]] in 1935.

Other major polling organizations in the U.S. include:
*The [[Pew Research Center]], which conducts polls concentrating on media and political beliefs.
*The [[Harris Poll]].
*The [[Roper Poll]].
*[[World Public Opinion]], which provides in-depth information and analysis on public opinion from around the world on international issues.
*[[Nielsen Ratings]], virtually always for television.
*Garin Hart Yang (Democratic)
*Ayres, McHenry & Associates (Republican)
*Penn, Schoen & Berland Associates (Democratic)
*Moore Information (Republican)
*Frederick Polls (Democratic)
*OnMessage Inc. (Republican)
*Hickman-Maslin Research (Democratic)
*The Tarrance Group (Republican)
*[[Greenberg Quinlan Rosner]] (Democratic)
*[[Public Opinion Strategies]] (Republican)
*[[Quinnipiac Polls]], run by [[Quinnipiac University]] in Hamden, Connecticut, and started as a student project.
*The [[National Opinion Research Center]].
*[[Public Agenda]], which conducts research bridging the gap between what American leaders think and what the public really thinks.

In Britain the most notable pollsters are:
*[[MORI]], notable for selecting only those who say that they are "likely" to vote, a practice that has tended to favour the Conservative Party in recent years.
*[[YouGov]], an online pollster.
*[[GfK NOP]]
*[[ICR/International Communications Research|ICR]]
*[[ICM (polling)|ICM]]
*Populus, official pollster of ''[[The Times]]''.

In [[Australia]] the most notable companies are:
*[[Newspoll]], published in [[News Limited|News Limited's]] ''[[The Australian]]'' newspaper
*[[Roy Morgan Research]], published in the [[Crikey]] email reporting service
*[[Galaxy Research|Galaxy Polling]], published in [[News Limited|News Limited's]] tabloid papers
*[[ACNielsen|AC Nielsen Polling]], published in [[Fairfax Media|Fairfax]] newspapers

In [[Canada]] the most notable companies are:
*[[Angus Reid Strategies]]
*[[Ipsos-Reid]]
*Environics
*Ekos
*Decima
*Leger
*CROP

In [[Nigeria]] the most notable polling organization is:
*[[NOI poll|NOI-Gallup poll]]

All the major [[television network]]s, alone or in conjunction with the largest [[newspaper]]s or [[magazine]]s, in virtually every country with elections, operate polling operations, alone or in groups.

Several organizations monitor the behaviour of pollsters and the use of polling data, including Pew and, in Canada, the Laurier Institute for the Study of Public Opinion and Policy.[http://www.wlu.ca/lispop/lispop]


The best-known failure of opinion polling to date in the U.S. was the prediction in 1948 that Thomas Dewey would defeat Harry S. Truman.  Major polling organizations, including Gallup and Roper, indicated a landslide victory for Dewey.


In Britain, most polls failed to predict the Conservative election victories of 1970 and 1992, and Labour's victory in 1974. However, their figures at other elections have been generally accurate.


==Bibliography==
* Asher, Herbert. ''Polling and the Public: What Every Citizen Should Know'', 4th ed. (1998), Washington, D.C.: CQ Press.
* Bourdieu, Pierre. "Public Opinion does not exist" in ''Sociology in Question'', London, Sage (1995)
* Bradburn, Norman M. and Seymour Sudman. ''Polls and Surveys: Understanding What They Tell Us'' (1988)
* Cantril, Hadley. ''Gauging Public Opinion'' (1944)
* Converse, Jean M.  ''Survey Research in the United States: Roots and Emergence 1890-1960'' (1987), the standard history
* [http://www.questia.com/PM.qst?a=o&d=8971691 Crespi, Irving.  ''Public Opinion, Polls, and Democracy'' (1989)]
* Gallup, George. ''Public Opinion in a Democracy'' (1939)
* [http://www.questia.com/PM.qst?a=o&d=100501261 Glynn, Carroll J., Susan Herbst, Garrett J. O'Keefe, and Robert Y. Shapiro. ''Public Opinion'' (1999)] textbook
* Irwin, Galen A. and Joop J. M. Van Holsteyn. ''Bandwagons, Underdogs, the Titanic and the Red Cross: The Influence of Public Opinion Polls on Voters'' (2000).
* [http://www.questia.com/PM.qst?a=o&d=28537852 Lavrakas, Paul J., et al., eds. ''Presidential Polls and the News Media'' (1995)]
* [http://www.questia.com/PM.qst?a=o&d=8540600 Moore, David W. ''The Superpollsters: How They Measure and Manipulate Public Opinion in America'' (1995)]
* [http://www.questia.com/PM.qst?a=o&d=28621255 Niemi, Richard G., John Mueller, Tom W. Smith, eds. ''Trends in Public Opinion: A Compendium of Survey Data'' (1989)]
* [http://www.questia.com/PM.qst?a=o&d=104829752 Oskamp, Stuart and P. Wesley Schultz; ''Attitudes and Opinions'' (2004)]
* Robinson, Claude E. ''Straw Votes'' (1932).
* Robinson, Matthew ''Mobocracy: How the Media's Obsession with Polling Twists the News, Alters Elections, and Undermines Democracy'' (2002)
* [http://www.questia.com/PM.qst?a=o&d=89021667 Rogers,  Lindsay. ''The Pollsters: Public Opinion, Politics, and Democratic Leadership'' (1949)]
* [http://www.questia.com/PM.qst?a=o&d=71288534 Traugott, Michael W. ''The Voter's Guide to Election Polls'']  3rd ed. (2004)
* Webster, James G., Patricia F. Phalen, and Lawrence W. Lichty. ''Ratings Analysis: The Theory and Practice of Audience Research'' (2000), Lawrence Erlbaum Associates.
* [http://www.questia.com/PM.qst?a=o&d=59669912 Young, Michael L. ''Dictionary of Polling: The Language of Contemporary Opinion Research'' (1992)]
==Primary sources==
* [http://www.questia.com/PM.qst?a=o&d=98754501 Cantril, Hadley and Mildred Strunk, eds. ''Public Opinion, 1935-1946'' (1951)], massive compilation of many public opinion polls from US, Britain, Canada, Australia, and elsewhere.
* Gallup, Alec M. ed. ''The Gallup Poll Cumulative Index: Public Opinion, 1935-1997'' (1999) lists 10,000+ questions, but no results
* Gallup, George Horace, ed. ''The Gallup Poll; Public Opinion, 1935-1971''  3 vol (1972)  summarizes results of each poll.
==External links==
* [http://www.angus-reid.com/index.cfm Angus Reid Global Monitor - The world's largest free-access online public opinion database]
* [http://www.worldpublicopinion.org WorldPublicOpinion.org - Online publication covering worldwide opinion on international policy issues]
* [http://www.publicagenda.org/ Public Agenda - Nonpartisan, nonprofit group that tracks public opinion data in the United States]
* [http://www.ncpp.org/?q=home National Council on Public Polls - An association of polling organizations in the United States devoted to setting high professional standards for surveys]
* [http://answers.vizu.com/ Vizu Answers - Online Opinion Polling]



Latest revision as of 11:00, 8 October 2024


A public opinion poll is a questionnaire used to measure public opinion, or the collective attitudes held by a population. Because of the impracticality of administering a questionnaire to all of a large population's members, public opinion polls assess the opinions of the total population by surveying a sample that is sufficiently large and representative of the population as a whole to produce statistically valid results.

Polling is by far the predominant means of measuring public opinion today, and poll administration practices have grown increasingly sophisticated and rigorous since the enterprise's inception in the 1930s. Nevertheless, polling remains an imperfect instrument, whose accuracy is frequently compromised by a variety of factors over which even the most diligent pollsters exert limited control.

History of opinion polls

While straw polling, or the estimation of public opinion based on informal sampling and survey procedures, dates back at least to the early nineteenth century, the emergence of scientific public opinion polling is a much more recent development.

The emergence of modern scientific public opinion polling

Prior to the 1930s, the most impressive attempt at public opinion polling was the Literary Digest poll, a national-scale poll conducted by the now-defunct Literary Digest magazine as a means for forecasting U.S. presidential election results. Its first run in 1916 correctly predicted Woodrow Wilson's reelection based on a simple tabulation of the returns of millions of postcard questionnaires that had been sent to Americans listed in telephone directories and state automobile registries. The exercise was repeated to good effect in the next four election cycles, in each case accurately predicting the presidential victor based on the returns of extraordinarily large numbers of questionnaires from Americans nationwide.

In the meantime, several up-and-coming public opinion researchers, including Archibald Crossley, Claude Robinson, Elmo Roper, and George Gallup, were making major breakthroughs in the development of statistical sampling methodology.

In 1935, Gallup founded the American Institute of Public Opinion, precursor to the still-extant Gallup Organization, and challenged the Literary Digest poll's preeminence as a way of establishing the legitimacy of his quota sampling technique. Leading up to the 1936 presidential election, Literary Digest conducted its usual postcard survey. While its 2.3 million respondents constituted an extraordinarily large sample, relying on telephone and automobile listings — especially during the Great Depression — yielded results that did not reflect the voting intentions of the public at large. A week before election day, Literary Digest predicted that Alf Landon, the Republican Party candidate, would win the election by a large margin. Gallup, on the other hand, surveyed a much smaller but demographically representative sample and correctly predicted Franklin D. Roosevelt's landslide victory. The Literary Digest went out of business soon afterwards, while the polling industry began to take off.

Gallup launched a subsidiary in Britain, where it correctly predicted Labour's victory in the 1945 general election, in contrast with virtually all other commentators, who expected the Conservative Party, led by Winston Churchill, to win easily.

The 1948 crisis of confidence

The polling industry's rebound

Soon after the 1948 election, the Social Science Research Council (SSRC) formed an independent, academic Committee on the Analysis of Pre-election Polls and Forecasts to investigate the pollsters' methods and pinpoint why they failed to predict Truman's victory.

By the 1950s, polling had spread to most democracies. Today polls are conducted in virtually every country, although in more autocratic societies they tend to avoid sensitive political topics. In Iraq, surveys conducted soon after the 2003 war helped measure the true feelings of Iraqi citizens toward Saddam Hussein, post-war conditions and the presence of US forces.

Recent developments

For many years, opinion polls were conducted mainly face-to-face, either in the street or in people's homes. This method remains widely used, but in some countries it has been overtaken by telephone polls, which can be conducted faster and more cheaply. Because telemarketers commonly sell products under the guise of a telephone survey, and because of the proliferation of residential call screening devices and cell phones, response rates for phone surveys have been plummeting. Mailed surveys have become the data collection method of choice among local governments that conduct citizen surveys to track service quality and manage resource allocation. In recent years, Internet and short message service (SMS, or text) surveys have become increasingly popular, but most of these draw on whoever wishes to participate rather than on a scientific sample of the population, and are therefore not generally considered accurate.

Polling procedures

Design

Designing a public opinion poll entails several steps, from determining the population of interest and adopting an appropriate method for recruiting a representative sample of that population to developing a questionnaire that is well-suited to obtaining unbiased results.

Administration

Public opinion polls may be self-administered, in which case respondents read and complete the questionnaire on their own. Internet polls are nearly always self-administered. Polls can also be administered by an interviewer, or an individual who reads the questionnaire to respondents and records their responses on their behalf. Interviewer-administered polling can take place in person, that is, in the format of face-to-face conversations between the interviewer and respondents. More commonly, though, interviewer-administered polls are conducted by telephone.

Data analysis

Sources of inaccuracy

Various factors affect public opinion poll results' validity and reliability -- that is, the extent to which they accurately reflect the true population opinions that they are intended to measure. These factors might be sorted into three broad categories: those stemming from the use of a sample to estimate the opinions of a whole population; those stemming from the design of the questionnaire used to gather information from respondents; and those stemming from the way in which the questionnaire is administered.

Sampling error and bias

All polls administered to population samples are subject to sampling error, which refers to the extent to which the opinions expressed by the surveyed sample do not reflect the opinions of the population as a whole. Sampling error is typically expressed as a confidence interval of plus or minus some number of percentage points associated with a statistical confidence level. For example, the maximum sampling error (MSE) for a sample of 1050 drawn from a population of 1,000,000 is +/-3 percentage points at the 95% confidence level; this means that there is a 95 percent chance that the results of a survey administered to that sample fall within a 6-point range around the true opinion of the population as a whole.

Pollsters can reduce sampling error by administering a poll to a larger sample. For example, a sample of 10,000 drawn from the 1,000,000-member population would yield an MSE of +/-1% at the 95% confidence level, and a sample of 100,000 would reduce the MSE to just +/-0.3%. In practice, however, increasing a sample size enough to reduce sampling error substantially usually entails undue financial and logistical costs.
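The figures above follow from the standard worst-case formula for the margin of error of a proportion, 1.96 × √(p(1−p)/n) at the 95% confidence level with p = 0.5, multiplied by the finite population correction √((N−n)/(N−1)) when the population size N is known. A minimal sketch:

```python
import math

def margin_of_error(n, population=None, p=0.5, z=1.96):
    """Worst-case margin of error at the 95% confidence level for a sample of n.

    Applies the finite population correction when a population size is given;
    for samples that are small relative to the population it changes little.
    """
    moe = z * math.sqrt(p * (1 - p) / n)
    if population is not None:
        moe *= math.sqrt((population - n) / (population - 1))
    return moe

# The examples from the text: samples drawn from a population of 1,000,000.
for n in (1_050, 10_000, 100_000):
    print(f"n = {n:>7}: +/-{margin_of_error(n, 1_000_000) * 100:.1f} points")
```

Running this reproduces the roughly 3-, 1-, and 0.3-point margins cited above, and shows why the gain from each additional tenfold increase in sample size is modest.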

Sampling error does not reflect other sampling-related sources of inaccuracy, including sampling bias, which comes about when a poll is administered to a sample, however large, that is not representative of the population as a whole. A form of selection bias, sampling bias can be the result of a variety of factors, including convenience sampling, the use of an inappropriate sampling frame, and non-response bias.

Convenience sampling

Convenience sampling is the practice of administering a poll to individuals who are easiest to recruit regardless of their representativeness of the population whose opinions are intended to be measured.

Sampling frame bias

A sampling frame is a defined set of individuals within a population from which a sample is to be drawn. It may, but does not necessarily, consist of a literal list of all of the population's members. In fact, exhaustive population lists often do not exist or cannot be readily obtained by pollsters. When this is the case, pollsters use some sort of proxy frame, which may consist of a literal list, such as a directory of listed telephone numbers, or a figurative one, as in the case of random digit dialing, which samples from a hypothetical "list" of all possible telephone number permutations. Coverage error is the discrepancy between such non-exhaustive sampling frames and the full population. To the extent that a sampling frame's non-coverage of the population systematically excludes some segments of the population, poll results will suffer from coverage bias. For example, random digit dialing excludes those population members who do not have telephones, a group that is not evenly distributed within the population, since it most likely consists of individuals at the lower end of the socioeconomic spectrum. A telephone directory sampling frame yields still more coverage error and bias, since it excludes not only those population members who do not have telephones, but also those who do but have unlisted numbers. This second excluded group is also not likely to be evenly distributed within the population; for example, the burgeoning "cell-phone only" sector, whose phone numbers are unlisted by default, draws disproportionately from the younger segments of the population.
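The size of a coverage bias can be worked out directly when the excluded group's share of the population and its opinions are known. The numbers below are entirely hypothetical, chosen only to illustrate how sampling from a telephone-based frame shifts the estimate away from the true population figure:

```python
def frame_estimate(groups):
    """Expected poll result when sampling only from covered groups.

    `groups` maps a group name to (population_share, covered, support_rate).
    Returns (true_support, covered_frame_estimate).
    """
    true_support = sum(share * rate for share, _, rate in groups.values())
    covered_share = sum(share for share, cov, _ in groups.values() if cov)
    frame_est = sum(
        share * rate for share, cov, rate in groups.values() if cov
    ) / covered_share
    return true_support, frame_est

# Hypothetical population: 90% have phones; support for some measure is
# 50% among phone owners but 65% among those without phones.
groups = {
    "phone":    (0.90, True,  0.50),
    "no_phone": (0.10, False, 0.65),
}
true_support, frame_est = frame_estimate(groups)
# The telephone frame understates true support by 1.5 percentage points.
print(f"true {true_support:.3f}, frame estimate {frame_est:.3f}")
```

The bias disappears only if the excluded group holds the same opinions as the covered one, which is exactly what the text argues cannot be assumed.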

While coverage bias is typically associated with undercoverage, or the exclusion of one or more segments of the population, it is also possible for a sample to suffer from overcoverage -- that is, the inclusion of individuals who do not strictly belong in the population of interest. For example, pre-election polls in the United States sometimes use a sampling frame that includes all adult Americans regardless of whether they are registered or likely to vote. To the extent that the opinions of non-voters, who are disproportionately young, less educated, and non-affluent, differ systematically from voters', their inclusion skews the results and makes it difficult to use them for election forecasting and campaign strategy purposes. To avoid this problem, many polling organizations limit their pre-election poll samples to either registered voters or, increasingly, to "likely voters," whom they typically identify with a battery of questions at the start of the poll about past voting behavior and levels of political interest.[1]

Non-response bias

Whereas coverage bias stems from pollsters' sampling frame choices, non-response bias is caused by respondents' decision whether or not to participate in polls. Since some people do not answer calls from strangers or refuse to respond to polls, samples may lack population representativeness despite pollsters' best efforts to construct them appropriately. As with those excluded from participation due to the use of inappropriate sampling frames, the characteristics of the people who agree to be interviewed may be systematically different from those who decline. To the extent that this is the case, non-response bias ensues and contributes to inaccurate polling results.

Nonattitudes and insincere opinions

Also known as "pseudo-opinions," nonattitudes refer to the propensity for respondents to express an opinion despite not actually having one. First identified by political scientist Philip Converse in 1964,[2] the problem of nonattitudes is a constant source of vexation for public opinion researchers.

There are a variety of steps pollsters can take to minimize error stemming from nonattitudes. The simplest fix is to offer a "no opinion" option among the questionnaire's response alternatives. Pollsters might also include screening questions at the start of a poll to gauge which respondents are unlikely to have a true attitude based on their self-reported level of passion or knowledge about the subject of the poll. Or, they might ask open-ended follow-up questions that push respondents to explain why they favor or oppose it. Finally, they can construct a "mushiness index" based on respondents' answers to a battery of questions regarding their level of interest, information, and opinion stability regarding the poll's subject.
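There is no single canonical formula for such an index; as a purely illustrative sketch (the battery items, scoring scheme, and cutoff here are invented, not a standard instrument), one might average a respondent's self-reported interest, information, and opinion stability and flag low scorers as likely nonattitude holders:

```python
def mushiness_index(battery, scale_max=5):
    """Average a battery of 1..scale_max self-ratings into a 0-1 score.

    Higher means a firmer opinion; the items and scale are illustrative only.
    """
    scores = [(ans - 1) / (scale_max - 1) for ans in battery.values()]
    return sum(scores) / len(scores)

def likely_nonattitude(battery, cutoff=0.4):
    """Flag respondents whose index falls below an (arbitrary) cutoff."""
    return mushiness_index(battery) < cutoff

# A respondent reporting little interest, information, or stability.
respondent = {"interest": 2, "information": 1, "stability": 2}
print(likely_nonattitude(respondent))
```

In practice pollsters would report such respondents separately or weight their answers down rather than discard them outright.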

A related source of inaccuracy in public opinion polling is respondent insincerity, or the expression of opinions that are not sincerely held. Often, this takes the form of social desirability response bias (SDRB), which refers to respondents' tendency to provide answers that, true or not, present them in the most socially acceptable light.

SDRB is frequently cited as a factor in the Bradley effect (also known as the Wilder effect or Bradley-Wilder effect) that is sometimes evident in elections featuring a black candidate running against a white opponent.[3]

Question effects

Another potential source of inaccuracy in public opinion polling is the content of the questionnaire itself. Specifically, the wording of questions, the order in which they are asked, and the response alternatives that are made available to respondents can all influence the results of public opinion polls.

Question wording

There are various ways in which the wording of a poll's questions can affect the accuracy of its results.

Perhaps the most blatant way in which question wording may induce inaccurate polling results is through the inclusion of leading questions, or those whose wording leads respondents to answer a certain way regardless of their true opinions.

Question order

Even when all of a poll's questions are optimally worded, it is possible for the order in which they are asked to bias its results. In the typical question order bias scenario, respondents' answers to questions early on in the poll affect their answers to subsequent questions, often because they want to avoid coming across as inconsistent or hypocritical.

Response alternatives

Mode of interview and interviewer effects

Poll results might also be skewed by the method used to administer the poll.

Polls that are not self-administered -- that is, those in which responses are recorded by an interviewer rather than the respondent himself -- are also subject to interviewer effects, or inaccurate results due to the tendency for respondents to tailor their responses based on their perception of the interviewer's race, gender, or age.

Bad polling examples

An oft-quoted example of opinion polls succumbing to error was the British general election of 1992. Despite the polling organisations' use of different methodologies, virtually all the polls in the lead-up to the vote (and exit polls taken on voting day) showed a lead for the opposition Labour Party, but the actual vote gave a clear victory to the ruling Conservative Party.

In their deliberations after this embarrassment the pollsters advanced several ideas to account for their errors, including:

  • Late swing. The Conservatives gained from people who switched to them at the last minute, so the error was not as great as it first appeared.
  • Nonresponse bias. Conservative voters were less likely to participate in the survey than in the past and were thus underrepresented.
  • The spiral of silence. The Conservatives had suffered a sustained period of unpopularity as a result of economic recession and a series of minor scandals. Some Conservative supporters felt under pressure to give a more popular answer.

The relative importance of these factors was, and remains, a matter of controversy, but since then the polling organisations have adjusted their methodologies and have achieved more accurate predictions in subsequent elections.
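One standard adjustment of the kind alluded to above is post-stratification weighting: respondents from underrepresented groups are weighted up so that the sample's demographic mix matches the population's known mix. The group shares and support figures below are hypothetical:

```python
def weighted_estimate(sample_shares, population_shares, support_by_group):
    """Reweight group-level poll results to known population shares.

    Each group's weight is population_share / sample_share; the weighted
    estimate is then the sum of weighted group contributions.
    """
    est = 0.0
    for group, pop_share in population_shares.items():
        weight = pop_share / sample_shares[group]
        est += sample_shares[group] * weight * support_by_group[group]
    return est

# Hypothetical sample in which younger voters are underrepresented:
# 20% of respondents are under 35, versus 35% of the electorate.
sample_shares     = {"under_35": 0.20, "35_plus": 0.80}
population_shares = {"under_35": 0.35, "35_plus": 0.65}
support_by_group  = {"under_35": 0.60, "35_plus": 0.40}

raw      = sum(sample_shares[g] * support_by_group[g] for g in sample_shares)
weighted = weighted_estimate(sample_shares, population_shares, support_by_group)
print(f"raw support {raw:.3f}, weighted {weighted:.3f}")
```

Weighting corrects only for imbalances on variables the pollster can observe and match to population data; it cannot repair a late swing or a spiral-of-silence effect, which is one reason the relative importance of the 1992 factors remained contested.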


Influence of public opinion polls

One controversial aspect of public opinion polling, especially when it comes to pre-election polling, is that by promulgating information about a population's voting intentions, polls can influence voting behavior. There are two principal ways in which this occurs: bandwagon effects and strategic voting.

A bandwagon effect occurs when a poll prompts voters to back the candidate who appears to be in the lead. The idea that voters are susceptible to such effects is old, stemming at least from 1884; Safire (1993: 43) reported that the term was first used in a political cartoon in the magazine Puck in that year. The idea remained persistent despite a lack of empirical corroboration until the late 20th century. George Gallup spent much effort, in vain, trying to discredit the theory by presenting empirical research. A recent meta-study of scientific research on this topic indicates that from the 1980s onward, researchers have found the bandwagon effect more often (Irwin & van Holsteyn 2000).

The opposite of the bandwagon effect is the underdog effect, which is often mentioned in the media. It occurs when people vote, out of sympathy, for the party perceived to be losing the election. There is less empirical evidence for the existence of this effect than for the bandwagon effect (Irwin & van Holsteyn 2000).

The second category of theories on how polls directly affect voting is called strategic, or tactical, voting. It is based on the idea that voters view the act of voting as a means of selecting a government. Thus they will sometimes not choose the candidate they prefer on grounds of ideology or sympathy, but another, less-preferred candidate, out of strategic considerations. An example can be found in the general election of 1997: the Enfield constituency of then-Cabinet Minister Michael Portillo was believed to be a safe seat, but opinion polls showed the Labour candidate Stephen Twigg steadily gaining support, which may have prompted undecided voters or supporters of other parties to back Twigg in order to remove Portillo. Another example is the boomerang effect, whereby likely supporters of the candidate shown to be winning feel that he or she is "home and dry" and that their vote is not required, thus allowing another candidate to win.

These effects indicate only how opinion polls directly affect the political choices of the electorate. Other effects can be observed among journalists, politicians, political parties, civil servants, and others, in the form of, among other things, media framing and shifts in party ideology.

References

  1. See, e.g., Frank Newport, "Who Are Likely Voters and Why Do They Matter?" Gallup Organization, July 28, 2008 (accessed May 15, 2009).
  2. Philip E. Converse, "The Nature of Belief Systems in Mass Publics," in Ideology and Discontent, David E. Apter, ed. (New York: Free Press, 1964) pp. 206-61; see also Converse, "Attitudes and Non-Attitudes: Continuation of a Dialogue," in The Quantitative Analysis of Social Problems, Edward R. Tufte, ed. (Reading, MA: Addison-Wesley, 1970) pp. 168-89.
  3. For example, see Michael W. Traugott and Vincent Price, "A Review: Exit Polls in the 1989 Virginia Gubernatorial Race: Where Did They Go Wrong?," Public Opinion Quarterly 56:2 (1992) pp. 245-253.