Talk:Public opinion poll

From Citizendium
Latest revision as of 09:02, 27 June 2009

This article is developing and not approved.
 Definition: A survey used to measure a population's beliefs and attitudes.
 Workgroup categories: Politics, Sociology and Economics
 Talk archive: none
 English language variant: American English

Straw Poll

I just noticed straw poll redirects here to public opinion poll, but it should be its own article. Can it be un-redirected? Shamira Gelbman 22:35, 4 May 2009 (UTC)

Never mind; I figured out how to do it last night. Shamira Gelbman 17:38, 17 May 2009 (UTC)

Old coverage bias section

Moving this here for now:

Another source of error is the use of samples that are not representative of the population as a consequence of the methodology used, as was the experience of the Literary Digest in 1936. For example, telephone sampling has a built-in error because, in many times and places, those with telephones have generally been richer than those without. Alternatively, in some places many people have only mobile telephones. Because pollsters cannot call mobile phones (it is unlawful to make unsolicited calls to phones where the owner may be charged simply for taking a call), these individuals will never be included in the polling sample. If the subset of the population reachable only by mobile phone differs markedly from the rest of the population, these differences can skew the results of the poll. Polling organizations have developed many weighting techniques to help overcome these deficiencies, with varying degrees of success. Several studies of mobile phone users by the Pew Research Center in the U.S. concluded that the absence of mobile users was not unduly skewing results, at least not yet. [1] Shamira Gelbman 04:27, 15 May 2009 (UTC)
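The weighting techniques mentioned above can be illustrated with a minimal post-stratification sketch in Python. The group names, population shares, and answers below are invented for illustration; real pollsters weight on many demographic variables at once.

```python
# Minimal post-stratification sketch: respondents in groups that are
# under-represented in the sample (e.g. mobile-only households) get
# proportionally larger weights. All numbers are invented.

# Known population shares (e.g. from census data)
population_share = {"landline": 0.60, "mobile_only": 0.40}

# A hypothetical sample that under-covers mobile-only respondents;
# "answer" is 1 for a yes response, 0 for a no response
sample = [
    {"group": "landline", "answer": 1},
    {"group": "landline", "answer": 0},
    {"group": "landline", "answer": 1},
    {"group": "mobile_only", "answer": 0},
]

# Observed sample share per group
counts = {}
for r in sample:
    counts[r["group"]] = counts.get(r["group"], 0) + 1
sample_share = {g: c / len(sample) for g, c in counts.items()}

# Weight for a group = its population share / its sample share
weights = {g: population_share[g] / sample_share[g] for g in counts}

# Weighted estimate of the proportion answering yes
weighted_yes = sum(r["answer"] * weights[r["group"]] for r in sample)
total_weight = sum(weights[r["group"]] for r in sample)
estimate = weighted_yes / total_weight

raw = sum(r["answer"] for r in sample) / len(sample)
print(round(raw, 3), round(estimate, 3))
```

Here the raw sample overstates support because the "yes" answers all come from the over-sampled landline group; re-weighting pulls the estimate back toward the mobile-only group's answers.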

Straw polling

Moving this here for now (maybe to eventually be moved to the Straw poll article, but more likely not worth it since it's verbatim from Wikipedia): The first known example of an opinion poll was a local straw vote conducted by the newspaper The Harrisburg Pennsylvanian in 1824; it showed Andrew Jackson leading John Quincy Adams by 335 votes to 169 in the contest for the presidency. Such straw votes—unweighted and unscientific—gradually became more popular, but they remained local, usually city-wide, phenomena. Shamira Gelbman 17:39, 17 May 2009 (UTC)

Question wording

Moving this here for now:

Thus, comparisons between polls often boil down to the wording of the question. On some issues, question wording can result in quite pronounced differences between surveys. [2][3][4] This can also, however, be a result of legitimately conflicted feelings or evolving attitudes, rather than a poorly constructed survey.[5] One way in which pollsters attempt to minimize this effect is to ask the same set of questions over time, in order to track changes in opinion. Another common technique is to rotate the order in which questions are asked. Many pollsters also split-sample. This involves having two different versions of a question, with each version presented to half the respondents.
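A split-sample design like the one described above can be sketched as follows. The wordings, support rates, and sample size are invented for illustration; a real survey would measure, not simulate, the responses.

```python
import random

# Minimal split-sample sketch: each respondent is randomly assigned one
# of two wordings of the same question, and the two halves are compared.
wordings = [
    "Do you favor government assistance to the poor?",
    "Do you favor government spending on welfare?",
]

random.seed(0)  # deterministic for the example
responses = {0: [], 1: []}
for _ in range(1000):
    version = random.randrange(2)        # random half-sample assignment
    # Toy response model: the second wording draws less support
    support_rate = 0.65 if version == 0 else 0.45
    responses[version].append(random.random() < support_rate)

for version, answers in responses.items():
    pct = 100 * sum(answers) / len(answers)
    print(f"Wording {version + 1} (n={len(answers)}): {pct:.1f}% support")
```

Because assignment is random, a large gap between the two halves can be attributed to the wording itself rather than to differences between the groups of respondents.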

The most effective controls, used by attitude researchers, are:

  • asking enough questions to cover all aspects of an issue and to control for effects due to the form of the question (such as positive or negative wording), with the adequacy of the number of questions established quantitatively by psychometric measures such as reliability coefficients, and
  • analyzing the results with psychometric techniques which synthesize the answers into a few reliable scores and detect ineffective questions.

These controls are not widely used in the polling industry.

Shamira Gelbman 15:02, 27 June 2009 (UTC)
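One common reliability coefficient of the kind mentioned in the bullet list above is Cronbach's alpha, which measures how consistently a set of questions taps the same underlying attitude. A minimal Python sketch, using a small invented item-response matrix (rows are respondents, columns are questions on the same issue):

```python
# Cronbach's alpha from first principles. Data are illustrative only.

def cronbach_alpha(items):
    """items: list of per-respondent score lists, one column per question."""
    k = len(items[0])                    # number of questions
    n = len(items)                       # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each item (column) and of the total score (row sums)
    item_vars = [variance([row[j] for row in items]) for j in range(k)]
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four respondents answering three related questions on a 1-5 scale
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [1, 2, 1],
]
print(round(cronbach_alpha(scores), 3))
```

Values near 1 indicate the questions move together (respondents who score high on one tend to score high on the others); a low or negative alpha flags questions that are not measuring the same attitude, which is how such techniques detect ineffective questions.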