March 2023
Polls and Public Opinion: A Guide for Legislators
Nick Ruderman | Research Officer
Public opinion polls (or surveys) are an inescapable element of modern politics and governance. This paper provides a practical guide to the critical analysis and interpretation of polls and their connection to public opinion for legislators and legislative staff. It begins with an overview of the evolution of polling techniques and their respective strengths and weaknesses. Because of the apparent effects of polling on voting behaviour, various efforts have been made in Ontario and other jurisdictions to regulate the dissemination of polls. The evidence supporting those presumed effects, and the associated legislative and regulatory measures, are discussed. The paper concludes with a series of tips for the critical analysis and interpretation of poll results.
Public opinion polls (or surveys) are an inescapable element of modern politics and governance. Since their early development in the post-war period, systematic empirical studies of public opinion have held out the promise of presenting public policymakers with valid and reliable data on public attitudes towards policy issues. Such knowledge would allow politicians to campaign more effectively, and once in power, to better respond to the public’s policy preferences and more effectively deliver public services.
Some argue that the rise of modern public opinion polling carries democratic benefits, presenting policymakers with a fuller, more representative view of the public mood than the view provided by the narrower segment of citizens who troop to the polls to make their voices heard.[1] Others are more critical of the place of polling in modern policymaking and governance, depicting the contemporary focus on poll results, particularly “horserace” studies examining topline trends in party and candidate support, as a dangerous distraction from substantive policy discussions.[2]
This paper provides a practical guide to the study of public opinion and the interpretation of public opinion polls. It first provides an overview of the evolution of the study of public opinion, from early qualitative studies to probability samples of the public, to approaches based on social media and other forms of large-scale self-selected data. It proceeds to examine the ways in which polls might influence public opinion, and the legislative and regulatory measures taken to govern the use of public opinion polls. It concludes by providing practical advice for legislators and their staff for the critical analysis and interpretation of polls in the fast-changing landscape of public opinion research.
The Study of Public Opinion
Analyses of public sentiment are centuries-old: though the first use of the term public opinion in its modern sense is often dated to the mid-eighteenth century,[3] interest on the part of leaders and citizens in the views of the general public is far older.[4] Unscientific straw polls, consisting of surveys “taken in scattered taverns, militia offices, and public meetings” became increasingly common elements of newspapers’ election coverage in the nineteenth century.[5] The challenge confronting such methods, however, concerns their ability to serve as the basis for reliable generalizations about the broader population’s attitudes.
The Rise of Probability Sampling
It would not be until the early twentieth century that public opinion survey methods based on probability sampling would be developed.[6] As Hillygus notes, “[t]he ability to generalize from the sample to the population rests on the use of probability sampling. Probability samples are ones that use random selection.”[7] Gallup polls published in advance of the 1936 US general election are often described as “mark[ing] the beginning of scientific election polling.”[8] Though several variations of probability sampling methods exist (e.g., systematic sampling, stratified sampling), the principle underlying the method is aptly illustrated by George Gallup’s well-known comparison to sampling soup: “as long as it was a well-stirred pot, you only need a single sip to determine the taste.”[9]
Provided random selection of respondents is achieved, a statistical property known as the central limit theorem means that it is also possible to specify the degree of uncertainty associated with survey estimates – typically expressed in poll reporting as a margin of error (or, more formally, the sampling margin of error) and associated confidence level. The margin of error is a measure of variation, or uncertainty, that reflects the fact that findings generated from the analysis of a sample will inevitably diverge from true population characteristics, if only slightly, due simply to chance (i.e., random sampling error).[10] In effect, it represents how close one might reasonably expect the views captured in the survey to reflect the views in the whole population, at a certain confidence level.[11] How likely it is that the true population value falls within the range specified by a margin of error is reflected by the associated confidence level, typically calculated at 95 percent (i.e., nineteen times out of twenty).
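As a rough numerical illustration (not drawn from the paper’s sources), the short Python sketch below applies the conventional normal-approximation formula for the sampling margin of error, z × √(p(1−p)/n), at the 95 percent confidence level; the sample sizes are hypothetical.

```python
import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Approximate sampling margin of error for a simple random sample.

    Uses the normal approximation z * sqrt(p * (1 - p) / n); p = 0.5 gives
    the most conservative (widest) margin, and z = 1.96 corresponds to the
    95 percent confidence level (nineteen times out of twenty).
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# Hypothetical sample sizes, for illustration only.
print(f"n = 1,000: +/- {margin_of_error(1000) * 100:.1f} percentage points")
print(f"n =   400: +/- {margin_of_error(400) * 100:.1f} percentage points")
```

Under these illustrative assumptions, a sample of roughly 1,000 respondents yields the familiar margin of about plus or minus three percentage points, nineteen times out of twenty, while a sample of 400 yields roughly plus or minus five points.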
Polling Comes to Canada
Representative public opinion polling became widespread somewhat later in Canada and the United Kingdom than it did in the United States. The first scientific Canadian public opinion poll was “conducted by the Liberal Party of Canada in 1942, when the Mackenzie King government attempted to determine the likely outcome of a forthcoming plebiscite on conscription.”[12] Between 1942 and 1965, scientific public opinion surveys began to be employed by Canadian governments and politicians at both the federal and provincial levels. Indeed, as Lachapelle notes, “[d]uring the 1945 general election, the Canadian Institute of Public Opinion conducted the first election poll, but it was only during the 1960s that opinion polling really began to take flight.”[13]
The year 1965 also saw the advent of the first academic election survey in Canada, the Canadian Election Study (CES). The CES has been fielded in every federal election from 1965 to the present. This academic survey allows for in-depth statistical analysis of the factors that shape Canadians’ voting patterns.
The evolving data collection methods used by the CES mirror changes occurring within the polling industry: face-to-face interviews gave way to random digit dialing (RDD) telephone surveys, waves of panel data, and post-election mail-back surveys.[14] Recent changes to the design include an online component with a sample size sufficient to allow for more granular analysis of voting behaviour in different regions of the country.
Whereas most survey data are cross-sectional – based on interviews conducted at approximately the same point in time – panel data involve re-interviewing the same respondents at two or more points in time. Panel data allow for more convincing tests of causal hypotheses (i.e., the ability to distinguish causation from mere correlation between variables) than do cross-sectional datasets, and allow for more in-depth investigations of patterns of individual-level attitudinal change.[15]
Data Collection Methods
Academic surveys like the CES, as well as commercial polls and polls done for governments or political parties, can be collected through a variety of modes (or methods). Those modes can be divided into three broad categories: personal (face-to-face) interviews, telephone interviews, and self-completed questionnaires.[16] There is no perfect data collection method; each has notable strengths and weaknesses, as well as implications for how to analyze and interpret the data.
Personal Interviews
Face-to-face interviews are a method of survey data collection that some argue, on balance, generate “the richest and most complete information in public opinion polling.”[17] Once the dominant mode of survey data collection, face-to-face interviews were gradually supplemented by telephone interviewing beginning in the mid-twentieth century.[18] Though increasingly uncommon, certain nationally funded government and academic surveys (e.g., the standard Eurobarometer survey) continue to employ face-to-face data collection.
Face-to-face data collection has several notable strengths. A well-documented problem confronting public opinion research relates to what is often described as “social desirability bias:” effectively, respondents will misrepresent their attitudes, particularly as they pertain to controversial issues, to conform more closely to perceived societal expectations and norms.[19] Perhaps counter-intuitively, experimental data suggests that face-to-face surveys reduce respondents’ inclination to present themselves in socially desirable ways as compared to telephone interviews.[20]
Respondents also tend to be more willing to respond to longer questionnaires in person, and to be more engaged and cooperative when interviews are conducted face-to-face.[21] Response rates are typically higher than for telephone or self-administered surveys.[22] Further, personal interviews allow the interviewer to directly observe and collect data on non-verbal behaviour (e.g., outward signs of nervousness or disinterest) and respondent characteristics.
Face-to-face interviews also have disadvantages, including the considerable cost associated with in-person interviews (e.g., hotels, meals, transportation) and a potential for increased interviewer effects. In a face-to-face setting, the potential for a poorly trained interviewer to alter respondents’ answers to survey questions in sometimes unpredictable ways is enhanced.[23]
Telephone Interviews
Telephone interviews are likely the most widely employed mode of public opinion data collection.[24] Unlike face-to-face interviews and self-completed mail-back questionnaires, telephone surveys can be administered very quickly. Although academic surveys fielded via telephone, such as the CES’s campaign period and post-election components, include extensive efforts to recontact respondents who were selected, it is not uncommon for commercial polls to be collected in a single evening (though such an approach may involve accepting a lower response rate).
The use of computer-assisted telephone interviewing (CATI) technologies allows for even faster and more efficient data entry and analysis. CATI involves interviewers sitting at video display terminals and directly entering responses into a computer, eliminating the need for an additional data entry process, and allowing for running totals of survey results.[25] Some pollsters employ pre-recorded voice and interactive voice response (IVR) methods rather than a live interviewer working with CATI. Despite certain advantages (e.g., standardization regarding the interviewer’s delivery of questions, increased speed, and decreased cost when compared to CATI), such an approach raises concerns regarding atypically low response rates and additional challenges associated with establishing who in the household is responding to the survey.[26]
With respect to cost, telephone polling generally occupies a middle ground between face-to-face interviews and self-completed (mail-back or online) questionnaires: it is less expensive than the former and more expensive than the latter. This method of data collection naturally also faces challenges. Since the rise of caller ID, declining response rates for telephone surveys have raised concerns over the representativeness of survey samples obtained through this method (though caller ID is far from the only factor increasing rates of non-response, and this issue affects all methods of survey data collection to some extent).[27] It is now increasingly common for individuals not to answer calls from unrecognized phone numbers. Relative to face-to-face interviews, respondents also tend to have less patience for lengthy questionnaires and to be somewhat more suspicious of the interview more generally.[28]
Self-Completed Questionnaires
Mail Surveys
Self-completed questionnaires have traditionally taken the form of surveys that are completed by respondents on paper and mailed back to researchers. A key advantage of mail surveys is cost: since interviewers are not required, mail surveys are generally substantially less expensive to field than either live telephone or personal interviews. The lack of an interviewer also eliminates potential interviewer effects, and self-completed questionnaires may also reduce respondents’ likelihood of altering their answers to portray themselves in more socially desirable ways.[29]
Mail surveys also carry some disadvantages. Response rates for mail surveys tend to be lower than for either telephone interviews or personal interviews, though there is some evidence that this gap may be diminishing as response rates for those other modes of data collection decline.[30] Importantly, it is not possible to determine who in a household has completed the survey, or to collect a variety of relevant contextual data that is often collected through other modes (e.g., respondents’ reactions to questions, the amount of time taken to respond to specific questions, and the order in which the questions were answered). Limits also exist on the type of questions that can be asked using this format (e.g., “branching” questions, or questions only presented to those who respond in a certain way to an initial prompt). It is also important to note that this method is relatively time consuming: researchers must typically wait several weeks to collect data through this mode and analyze results.[31]
Online Surveys
Online (Internet, or web) surveys are now increasingly common both in academic studies and in the polling industry more broadly, largely supplanting mail surveys as the dominant mode of self-completed questionnaire.[32] A variety of probability and non-probability sampling procedures can be used for online surveys. Probability methods generally take the form of either mixed-mode surveys with a web option, or panels of Internet users or of the full population.[33]
It is worth noting, however, as Hillygus points out, that “[t]he majority of web-based surveys, including those by well-known firms . . . rely on non-probability online panels. In such cases, the respondents are (nonrandomly) recruited through a variety of techniques: website advertisements, targeted emails, and the like.”[34]
Panels may be recruited by pollsters through traditional probability sampling methods such as RDD, or through non-probability methods, which can then form a reservoir of respondents who can be readily and inexpensively recontacted.[35] On the other hand, mixed-mode probability polls simply offer respondents the option of completing the survey by phone or online.[36] The relatively low cost and speed of online data collection, paired with its flexibility with respect to question types (audio and video materials can be readily employed, for instance) form its principal advantages.
Concerns about probability-based web surveys most often relate to low response rates and associated questions about the extent to which samples are representative.[37] Such concerns may be mitigated to some extent through appropriate weighting (discussed in the final section of this paper). A 2014 Pew Research study of the effects of different modes of data collection on survey results (mode effects), based on random assignment of respondents to either phone or web surveys, found that mode differences tended to be on average relatively modest, and that “many commonly used survey questions evidence no mode effect.”[38] Larger differences, however, were observed in responses to questions on certain topics, particularly “questions where social desirability bias could play a role in the responses.”[39] These include questions touching on deeply personal issues (e.g., life satisfaction and financial troubles) and perceptions of discrimination against minority groups. In the latter case, respondents speaking to a live interviewer by phone were more likely to indicate that discrimination was common.[40]
Non-Probability Sampling Approaches
As Asher points out, “probability sampling is typically cited as the number-one characteristic that makes a poll or survey scientific.”[41] However, a variety of methods that rest on non-probability samples are regularly employed by governments, academics, and opinion research firms in the study of public opinion. Depending on the topic and the way in which these methods are employed and interpreted, they can also provide valuable insights into public opinion, though such methods have important limitations with respect to their ability to provide a representative portrait of public attitudes.
First, as noted above, many web polls are not based on probability sampling methods. Procedures such as quota sampling (i.e., recruiting a fixed number of respondents in certain demographic categories) or weighting can help ensure that a survey sample resembles the broader population, as the rough sketch following this paragraph illustrates. However, it is not possible to assess how respondents differ from the broader population along many characteristics that might be related to outcomes of interest, and it is not possible to calculate the sampling margin of error associated with the results.[42] Even so, such surveys vary greatly in quality; in a comprehensive study of different non-probability online studies, the Pew Research Centre found that “samples with more elaborate sampling and weighting procedures and longer field periods produced more accurate results.”[43]
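As a rough sketch of the quota idea (the age-group shares and target below are invented for illustration, not census figures), a pollster might allocate completed interviews to demographic cells in proportion to each cell’s share of the population:

```python
# Hypothetical quota allocation for a non-probability online panel.
# The age-group shares below are invented for illustration.
population_shares = {"18-34": 0.27, "35-54": 0.33, "55+": 0.40}
target_completes = 1000

# Quota for each group: its population share times the target sample size.
quotas = {group: round(share * target_completes)
          for group, share in population_shares.items()}

print(quotas)  # {'18-34': 270, '35-54': 330, '55+': 400}
```

Matching the sample to the population on a handful of observed characteristics in this way does nothing, of course, to guarantee that respondents resemble non-respondents on characteristics the pollster did not measure, which is one reason a sampling margin of error cannot be calculated for such designs.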
Focus groups, widely employed in the public and private sectors, consist of relatively small non-probability samples. Though focus groups often conjure images of market research and new product testing, such groups are widely used by governments, party strategists, polling firms and academics.[44] Though not designed to be fully representative, focus groups provide an opportunity to collect more nuanced data on public attitudes than is often possible in the more structured context of a typical survey, and can inspire hypotheses that are later tested using probability samples.[45]
The increased availability of very large online sources of data on public attitudes, such as data from web searches, web apps, native apps on mobile devices, and social media, combined with the increased difficulties obtaining representative samples (i.e., non-response bias) described above, have led some researchers to explore other non-probability sampling approaches.[46] For instance, during past provincial and federal elections, TV Ontario (TVO)’s flagship current affairs program The Agenda has relied extensively on opinion data from Advanced Symbolics’ poll “Polly,” which estimates the state of public opinion (both top line trends and seat projections) using social media data. Other studies have employed weighted data from web apps such as the Vote Compass application, hosted by the CBC, in combination with traditional scientific probability samples, to examine public opinion on political issues.[47]
The idea that polls and poll reporting would shape public opinion is intuitive. Swings in public support for a party or candidate might be expected to influence the behaviour of voters for a variety of reasons. In Canada’s single-member plurality (SMP) electoral system, strategic voting considerations are reported by some voters, even while studies suggest that the impact of strategic voting on electoral outcomes is more modest than is often assumed.[48] Indeed, online initiatives have been developed to provide riding-by-riding advice for voters interested in casting their ballot strategically based, in part, on poll results.[49]
Claims about the impact of polls on the public are ubiquitous. Lachapelle’s influential study Polls and the Media in Canadian Elections: Taking the Pulse, Volume 16 of the research studies commissioned by the Royal Commission on Electoral Reform and Party Financing, canvassed over 90 briefs presented to the Commission and identified a number of possible effects, including:
- “the bandwagon effect (electors rally to support the candidate leading in the polls);
- the underdog effect (electors rally to support the candidate trailing in the polls);
- the demotivating effect (electors abstain from voting out of certainty that their candidate will win);
- the motivating effect (electors vote because the polls alert them to the fact that an election is going on);
- the strategic effect (electors decide how to vote on the basis of the relative popularity of the parties according to the polls); and
- the freewill effect (electors vote to prove the polls wrong).”[50]
Lachapelle notes, however, that the question of which of these effects is most strongly supported by the evidence remains a matter of controversy among researchers. The underdog, demotivating, and freewill effects may appear plausible, though on balance the evidence suggests that effects are more likely to operate in the opposite direction: polls that show a candidate leading tend to give campaigns a boost.[51] As Traugott and Lavrakas note, “[p]oll results that show a candidate ahead or gaining momentum can stimulate contributions or volunteers, energize the staff, or even stimulate voter turnout at the end of a campaign.”[52]
Naturally, accurately generalizing about the effects of polls on public opinion in all circumstances is a challenge. The institutional context (e.g., the electoral system), the dynamics of the race (e.g., the extent of the spread between the leading and the trailing candidates), and the electoral history of the riding in question all might be expected to influence the nature of poll effects.
Push Polling
Beyond the effects of poll reporting on vote choice, it is possible for poll administration to influence the voting behaviour of individual electors. When polls are conducted by groups aiming to influence the political attitudes or behaviour of respondents rather than to accurately assess the state of public opinion, they are referred to as “push polls.” Push polling can be defined as “a form of negative campaigning that is disguised as a political poll. ‘Push polls’ are actually political telemarketing – telephone calls disguised as research that aim to persuade large numbers of voters and affect election outcomes, rather than measure opinions.”[53]
The potential for polls to influence voting behaviour has led to legislative measures imposing various requirements on polls and poll reporting, particularly late in the election period. The most common types of polling restrictions are limits on when, relative to election day, polls may be published, and requirements concerning the information that must be disclosed by those distributing polls.
Early Efforts at Regulation
As Lachapelle notes, although efforts to regulate polling in Canada began as early as 1939, when the British Columbia legislature passed a law prohibiting the release of polls during election campaigns, it was not until the 1970s that the topic received sustained attention.[54] Between 1976 and 1979, the Ontario legislature considered several bills that would prohibit or restrict polling during elections, none of which passed. Similarly, at the federal level, at least 22 bills were introduced restricting election polling during the 1970s.[55]
The Lortie Commission
It was the Royal Commission on Electoral Reform and Party Financing (the Lortie Commission) that provided the final impetus for reform at the federal level, addressing the issue in its 1991 report. That report recommended that the federal government ban the publication of polls from midnight on the day preceding election day until the voting ended on the evening of election day. This prohibition would reduce the impact of a last-minute poll, to which the parties and candidates often could not respond. Combined with the existing advertising blackout, the ban would provide voters with “a period of reflection” at the end of the campaign to assess the parties and candidates.[56]
In response to the report, the federal government introduced an amendment to the Canada Elections Act in 1993 that banned the publication of polls during the final 72 hours of a federal election campaign. This amendment was supported by the opposition parties. The period of 72 hours, which was longer than Lortie recommended, had been proposed by the parliamentary committee appointed to review the Royal Commission’s proposals. The Thomson and Southam newspaper conglomerates challenged the amendment in court, contending that it violated section 2(b) of the Canadian Charter of Rights and Freedoms, which guarantees the right to freedom of expression, and section 3, which guarantees the right to vote. Ultimately, after Ontario Court and Court of Appeal judgements in favour of the government, the Supreme Court ruled 5–3 in favour of the newspapers.[57]
Federal Framework
In response to the ruling, the Government introduced a new package of amendments to the Canada Elections Act in Parliament in 1999, which banned the publication of new poll results on election day rather than the original 72-hour “blackout period.”[58]
The Act now also contains provisions that relate to the transmission of election survey results to the public during an election period.[59] Depending on factors such as who transmits the election survey results (e.g., the first person to transmit), there may be specific requirements to disclose certain details of an election survey (e.g., name of the person or organization that conducted the survey, date or period when the survey was conducted, the population from which the sample of respondents was drawn).[60] A sponsor of an election survey is required to publish a report on the results of the survey that includes information about the method used to collect data, the wording of the survey questions and, if applicable, the margins of error for the data collected.[61]
The Act also contains rules regarding election surveys conducted (or caused to be conducted) by third parties (i.e., a person or group other than a political party that is registered under an Act of a province) during a pre-election period or an election period.[62] Elections Canada explains that: [a]n election survey is a regulated activity when it is conducted by or on behalf of a third party during an election period and the results are used:
- in deciding whether or not to organize and carry out regulated activities, or
- when organizing and carrying out partisan activities or transmitting advertising messages.[63]
Ontario Framework
The Election Finances Act contains a prohibition against releasing new election survey results on polling day in an electoral district before the close of all the polling stations in that electoral district. The prohibition applies to persons, organizations, and entities (including political parties, constituency associations, corporations, trade unions and third parties).[64]
Under the Act, the term election survey refers to “an opinion survey of how electors voted or will vote at an election or respecting an issue with which a political party or candidate is associated.”[65] The Act does not contain rules that address the way election surveys may be conducted.
The Act also contains rules governing political advertising but notably excludes several types of actions from the definition of political advertising, such as “communication in any form directly by a person, group, corporation or trade union to their members, employees or shareholders, as the case may be” and “the making of telephone calls to electors only to encourage them to vote.”[66]
Push Polling
As noted in the previous section, push polling might be regarded as a form of political advertising, and thus might already be subject to restrictions. Even so, certain jurisdictions in the United States have passed laws that explicitly define and impose restrictions on the practice of push polling.
For instance, New Hampshire’s elections statute (Title LXIII: Elections - Chapter 664 - Political Expenditures and Contributions) addresses push polling and these provisions apply to a range of elections (e.g., state primary, general, and special elections and presidential primary, city, town, school district, and village district elections).[67] The Department outlines the requirements to conduct a push-poll as follows:
- [informing] the recipient who the telephone call is being made on behalf of, in support of, or in opposition to a particular candidate for public office; and
- [identifying] the candidate by name; and
- [providing] a telephone number from where the push-polling is being conducted.[68]
A 2012 Boston Globe article stated that “New Hampshire residents may be among the most pollster-besieged in the nation … In an effort to shield voters, the state in 1998 banned certain forms of push-polling, a practice that seeks to plant negative information about a candidate.”[69] The Federal Election Commission issued an advisory opinion in 2012 on New Hampshire’s push polling law and its disclaimer obligations when telephone surveys are conducted on behalf of federal candidates, their campaign committees, or federal political committees.
What are the essential questions that should be asked when examining a public opinion poll? It should be emphasized that there is no perfect public opinion poll. Different methodologies involve trade-offs between such characteristics as cost, speed, detail, sampling error, interviewer effects, and non-response bias. The questions below provide a guide to salient considerations when evaluating the results of public opinion polls and weighing their strengths, weaknesses, and appropriate uses and interpretations.
- Who conducted and sponsored the poll?
This is the simplest consideration, and yet it is worth noting initially.[70] Sponsorship by a political campaign or interest group with a vested interest in a certain outcome does not automatically discredit a poll, though it can act as a signal that should lead a critical poll consumer to examine the survey methodology even more carefully.
- How were the data collected, and with what sampling process?
Different modes of data collection – face-to-face, telephone, mail, or online – have different strengths and weaknesses (see section Data Collection Methods). Some argue that the sampling method is not typically the primary source of inaccuracies in poll results from reputable polling firms,[71] and indeed, as discussed above, studies have shown relatively modest mode effects.[72] Even so, it is important for a critical consumer of polls to know the sampling method employed since it can affect the appropriate approach to data analysis and interpretation. For instance, whereas online polls can be collected using probability methods (i.e., selection through address-based sampling or RDD, with Internet access provided to those who do not have it), most online polls use non-probability samples. Where non-probability sampling methods are employed, sampling error cannot be calculated.[73]
Further, although mode effects are in most cases modest, questions about certain issues are more likely to be affected by the mode of data collection than are others. Questions about sensitive and controversial topics, including deeply personal issues (e.g., life and financial satisfaction), are more likely to be affected by the mode of data collection.[74] When such issues are of interest, it is important to note that polling through live telephone interviews may be particularly vulnerable to social desirability effects.[75]
The mode of data collection can also affect a poll’s response rate. No method of data collection, however, is immune from the problem of non-response bias (see Tip #5), and recognition of the trade-offs involved in the data collection mode is important. For instance, as Hillygus points out, “[t]echnologies such [as] Interactive Voice Response (IVR) have the potential to reduce measurement bias introduced by the interactions of human interviews, but they simultaneously increase nonresponse error or exacerbate coverage problems because people are less inclined to answer questions from a robocall.”[76]
- What is the margin of error?
As noted above, the margin of error cannot be calculated for online non-probability polls. However, for polls collected with traditional probability sampling methods, the margin of error offers an important guide to how closely the survey sample reflects the views of the broader population.
Further, it is important to note that when sub-groups are considered – for instance, the vote intention of those under 35 in a representative survey of the Canadian population – the sampling error for survey estimates must be calculated on the basis of that reduced sample size, which increases the margin of error substantially.[77]
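To illustrate the point with hypothetical numbers, applying the same conservative margin-of-error formula used in the earlier sketch shows how quickly the margin widens for a subgroup:

```python
import math

# 95 percent margin of error under the normal approximation with p = 0.5.
def moe(n: int) -> float:
    return 1.96 * math.sqrt(0.25 / n)

# Hypothetical: a national sample of 1,000 containing 250 respondents under 35.
print(f"full sample (n = 1,000): +/- {moe(1000) * 100:.1f} points")
print(f"under-35 subgroup (n = 250): +/- {moe(250) * 100:.1f} points")
```

In this invented case, a full-sample margin of roughly plus or minus 3 points roughly doubles, to about plus or minus 6 points, for the subgroup estimate.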
- Have the results been weighted, and if so, how?
As Asher notes, “weights are used to correct for biases—that is, to make the sample’s demographic characteristics more accurately reflect the population’s overall characteristics.”[78] Using demographic data on the population such as gender, age, ethnicity, or other variables, pollsters can adjust survey results to more accurately reflect known population characteristics. Note, however, that weighting can also affect the margin of error. As the Pew Research Center notes, if these effects are not taken into account, those reporting poll results may be claiming a greater degree of precision than is warranted.[79]
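The sketch below is a minimal, hypothetical illustration of post-stratification weighting (each group’s weight equals its population share divided by its sample share) and of one common approximation, Kish’s design effect, for gauging how unequal weights shrink the effective sample size and widen the margin of error; the shares used are invented, not real census or survey figures.

```python
import math

# Invented shares for illustration: the sample over-represents older respondents.
population_shares = {"18-34": 0.27, "35-54": 0.33, "55+": 0.40}
sample_shares = {"18-34": 0.15, "35-54": 0.30, "55+": 0.55}
sample_counts = {g: round(s * 1000) for g, s in sample_shares.items()}

# Post-stratification weight for each group: population share / sample share.
weights = {g: population_shares[g] / sample_shares[g] for g in population_shares}

# Kish's approximate design effect: n * sum(w^2) / (sum(w))^2. Unequal weights
# reduce the "effective" sample size, which in turn widens the margin of error.
all_weights = [weights[g] for g, count in sample_counts.items() for _ in range(count)]
n = len(all_weights)
deff = n * sum(w * w for w in all_weights) / sum(all_weights) ** 2
effective_n = n / deff

def moe(m: float) -> float:
    """95 percent margin of error, normal approximation with p = 0.5."""
    return 1.96 * math.sqrt(0.25 / m)

print("weights:", {g: round(w, 2) for g, w in weights.items()})
print(f"design effect: {deff:.2f}, effective sample size: {effective_n:.0f}")
print(f"unweighted margin: +/- {moe(n) * 100:.1f} points; "
      f"weight-adjusted margin: +/- {moe(effective_n) * 100:.1f} points")
```

In this hypothetical case the adjustment is modest (roughly 3.1 versus 3.3 points), but samples that require heavier weighting can see considerably larger reductions in precision, which is why reports that ignore the design effect may overstate how precise their estimates are.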
- What is the response rate?
Studies suggest that response rates have been in long-term decline, raising concerns about survey quality.[80] The key concern associated with this trend pertains to non-response bias, which occurs when those who decline to participate in surveys or cannot be contacted differ systematically from those who do choose to participate. As non-response rates increase, the assumptions underlying random or probability sampling are violated, and the extent to which survey samples are representative of the population increasingly comes into question.
Declining response rates have been attributed to a variety of possible causes, ranging from the increased popularity of cell phones and caller ID (which allow citizens to screen out calls from unknown numbers), to an increased hostility toward legitimate researchers due to intrusive telemarketers and push pollsters, to lowered levels of societal trust more generally.[81]
A given poll’s response rate might be affected by the mode of data collection, as well as the length of time taken to collect the data (commercial polls fielded in a single evening will inevitably have much lower response rates than academic studies that make extensive efforts to recontact randomly selected respondents).[82] The salient point is that the more of the original probability sample that remains intact, the more confident one can be that respondents do not differ systematically from the broader population on characteristics the survey did not measure (i.e., potential unobserved sources of bias).
- Is the full questionnaire available?
It is important to evaluate the questionnaire (instrument) when considering the results of a poll for several reasons. First, question wording can matter a great deal to survey responses. Compound or “double-barrelled” questions, unclear question wording, and argumentative or leading questions can all have substantial effects on the findings of a poll.[83]
Effects on survey results can also arise due to context and question order. The effects can be subtle or substantial; Asher goes so far as to argue that “the strategic placement of questions is one of the most effective ways to ‘doctor’ a survey.”[84]
Lastly, when evaluating the questionnaire, one should ask whether the questions probe issues about which respondents are likely to have genuine views. This issue, often referred to as the problem of “non-attitudes,” is especially important when it comes to providing point estimates of population views, rather than predictors of directional support.[85]
- More generally, are those who administer the survey fully transparent about their methodology and results?
Lastly, it can be helpful to assess the extent to which the pollster is open and transparent, not only with their questionnaire, but with other aspects of their methodology and results. For instance, have they published a full report outlining their methodology, including their sampling strategy, their mode of data collection, the dates on which the data were collected, and their results? A lack of willingness to disclose this information – any indication of a “black box” where one might expect to find detailed, replicable methodological information – might reasonably be viewed as a sign that further scrutiny is required.
Is the polling company willing to share its microdata (i.e., anonymized individual-level survey results) for scrutiny and analysis by other researchers? If those data are currently embargoed (a common practice, typically for a period of six months to two years, in both academic and non-academic research), is there a date by which the data will be made available to other researchers?
Despite the lamentations of those who argue that opinion surveys are distractions from substantive policy debates, or that they have supplanted bold political leadership, public opinion polling does not appear to be receding from its central place in public debate and public policymaking anytime soon.[86] Polls can offer insights into public attitudes about key policy questions and help guide decision-makers toward more effective service delivery. They can also help legislators more accurately assess the public appetite for different sorts of policy interventions. When the right questions are asked, and pollsters’ claims are carefully evaluated against their data, consumers of polls can be more confident in their interpretation of this crucial form of social scientific evidence.
Notes
[1] For instance, Sidney Verba argues: “[s]ince participation depends on resources and resources are unequally distributed, the resulting communication is a biased representation of the public. Thus, the democratic ideal of equal consideration is violated. Sample surveys provide the closest approximation to an unbiased representation of the public because participation in a survey requires no resources and because surveys eliminate the selection bias inherent in the fact that participants in politics are self-selected.” (“The Citizen as Respondent: Sample Surveys and American Democracy: Presidential Address, American Political Science Association, 1995.” American Political Science Review 90, no. 1 (1996), pp. 1-7).
[2] See, for instance, Sean Jeremy Westwood, Solomon Messing, and Yphtach Lelkes, “Projecting Confidence: How the Probabilistic Horse Race Confuses and Demobilizes the Public,” The Journal of Politics 82, no. 4 (2020); J. Scott Matthews, Mark Pickup and Fred Cutler, “The Mediated Horserace: Campaign Polls and Poll Reporting,” Canadian Journal of Political Science 45, no. 2 (2012), pp. 261-287; and “The media: all horse race, all the time,” Policy Options, April 1, 2007.
[3] Guy Lachapelle, Polls and the Media in Canadian Elections: Taking the Pulse, Vol. 16 of the Research Studies, Royal Commission on Electoral Reform and Party Financing and Canada Communication Group (Toronto and Oxford: Dundurn Press, 1991), p. 5. Lachapelle attributes the coining of the term to Jean-Jacques Rousseau, while serving as France’s foreign affairs secretary.
[4] For a discussion of analyses of public opinion in Ancient Greece, for instance, see Herbst, “The History and Meaning of Public Opinion” in New Directions in Public Opinion (second edition), ed. Adam J. Berinksy (New York: Routledge, 2016), pp. 22-24.
[5] Ibid., pp. 5-6; D. Sunshine Hillygus, “The Evolution of Election Polling in the United States,” Public Opinion Quarterly 75, no. 5 (2011), pp. 962-981.
[6] Lachapelle, p. 7.
[7] D. Sunshine Hillygus, “The Practice of Survey Research” in New Directions in Public Opinion (second edition), ed. Adam J. Berinksy (New York: Routledge, 2016), p. 39.
[8] D. Sunshine Hillygus, “The Evolution of Election Polling in the United States,” Public Opinion Quarterly 75, no. 5 (2011), p. 964.
[9] Ibid. Please see the final section of this paper for a more detailed discussion of this sampling method and the types of helpful statistical tools its use enables.
[10] Michael W. Traugott and Paul J. Lavrakas, The Voter’s Guide to Election Polls (third edition) (Lanham: Rowman & Littlefield, 2004), p. 165.
[11] Andrew Mercer, “5 things to know about the margin of error in election polls,” Pew Research Center, September 8, 2016.
[12] Lachapelle, pp. 10-11.
[13] Lachapelle, pp. 10-11. Adams places this development later still, arguing that “the modern era of public opinion polling really dates from the 1970s and 1980s when political parties, academics, and the media became hooked on polling.” (xiv.)
[14] Canadian Election Study, “What’s New?”
[15] Asher, pp. 198-199. Note, however, that panel data are not without disadvantages. The difficulty in tracking down the same participants for subsequent interviews can lead to patterns of attrition that cause certain representational distortions in the data, and having completed an initial survey may shape participants’ subsequent responses (e.g., by priming interest in specific issues).
[16] Ibid.
[17] Asher, p. 144.
[18] Allyson L. Holbrook, Melanie C. Green, and Jon A. Krosnick, “Telephone Versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias,” Public Opinion Quarterly 67 (2003), pp. 79-125.
[19] Frauke Kreuter, Stanley Presser, and Roger Tourangeau, “Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity,” Public Opinion Quarterly 75, no. 5 (2008), pp. 847-865.
[20] Holbrook, Green, and Krosnick, p. 79
[21] Asher, p. 144.
[22] Ibid.
[23] Ibid., p. 145.
[24] Loleen Berdahl and Keith Archer, Explorations: Conducting Empirical Research in Canadian Political Science (third edition) (Don Mills: Oxford, 2015).
[25] Ibid.
[26] See Asher, pp. 151-152, for a discussion of these competing perspectives, which have generated significant controversy amongst pollsters.
[27] Asher, p. 117. See final section for a more detailed discussion of the problem of declining rates of survey response (i.e., non-response bias).
[28] Holbrook, Green, and Krosnick, p. 79.
[29] Berdahl and Archer, p. 192 (see discussion of social desirability bias).
[30] Asher, p. 142.
[31] Ibid.
[32] Though it is possible for live or recorded interviewers to participate in data collection via a web survey, this is uncommon.
[33] See Mick P. Couper, “Web Surveys: A Review of Issues and Approaches,” Public Opinion Quarterly 64 (2000), pp. 464-494, for a broader discussion of five probability-based and three non-probability varieties of online surveys.
[34] Hillygus, “The Practice of Survey Research,” p. 41.
[35] Berdahl and Archer, p. 192.
[36] Asher, p. 155.
[37] Ibid., p. 156.
[38] Scott Keeter, “From Telephone to the Web: The Challenge of Mode of Interview Effects in Public Opinion Polls,” Pew Research Center, May 13, 2015.
[39] Ibid.
[40] Ibid.
[41] Asher, p. 107.
[42] Hillygus, “The Practice of Survey Research,” p. 39-41.
[43] Courtney Kennedy, Andrew Mercer, Scott Keeter, Nick Hatley, Kyley McGeeney and Alejandra Gimenez, “Evaluating Online Nonprobability Surveys,” Pew Research Center, May 2, 2016.
[44] Peter M. Butler, Polling and Public Opinion: A Canadian Perspective (Toronto: University of Toronto Press, 2007), p. 57.
[45] For an example of how focus group research can inspire more systematic tests using representative survey data, see Elisabeth Gidengil and Heather Bastedo Eds., Canadian Democracy from the Ground Up: Perceptions and Performance (Vancouver: UBC Press, 2014), particularly chapters 2, 4, and 11.
[46] Mick P. Couper, “New Developments in Survey Data Collection,” Annual Review of Sociology, 43 (2017), pp. 121-145.
[47] See, for instance, Yannick Dufresne and Nick Ruderman, “Public Attitudes toward Official Bilingualism in Canada: Making Sense of Regional and Subregional Variation.” American Review of Canadian Studies 48, no. 4 (2018), pp. 371-386; Andrea Carson, Shaun Ratcliff, and Yannick Dufresne, “Public opinion and policy responsiveness: the case of same-sex marriage in Australia,” Australian Journal of Political Science 53, no. 1 (2018), pp. 3-23.
[48] André Blais, “Why is there so Little Strategic Voting in Canadian Plurality Elections?” Political Studies 50, no. 3 (2002), pp. 445-454.
[49] See, for instance, Strategic Voting 2021 Canadian Federal Election, “Our Methodology and Criteria.”
[50] Lachapelle, pp. 13-14.
[51] For a review of this literature, see Matthew Barnfield, “Think Twice before Jumping on the Bandwagon: Clarifying Concepts in Research on the Bandwagon Effect.” Political Studies Review 18, no. 4 (2020), pp. 553-574.
[52] Traugott and Lavrakas, p. 35.
[53] American Association for Public Opinion Research, “What is a ‘Push’ Poll?”
[54] Lachapelle, pp. 37-41.
[55] Ibid.
[56] Canada, Royal Commission on Electoral Reform and Party Financing. Final Report. Volume One. (Ottawa: Minister of Supply and Services, 1991), pp. 455-461.
[57] Thomson Newspapers v. Canada (Attorney General), (May 29, 1998).
[58] Canada Elections Act, s. 328 (2) (“Transmission of election survey results during blackout period”).
[59] Ibid., ss. 326-328. In this context, an election survey refers to “a survey respecting whether persons intend to vote at an election or who they voted for or will vote for at an election or respecting an issue with which a registered party or candidate is associated,” (Canada Elections Act, s. 2(1)).
[60] Ibid., s. 326(1) (“Transmission of election survey results”).
[61] Ibid., s. 326(3) (“Report on survey results”).
[62] Ibid., s. 349 (“election survey”).
[63] Elections Canada, “7. Regulated Activities: Election Surveys in an Election Period,” Political Financing Handbook for Third Parties, Financial Agents and Auditors – June 2021.
[64] Election Finances Act, s. 36.1(1) (“Prohibition”).
[65] Ibid., s. 36.1(3).
[66] Ibid., s. 1(1) (“political advertising").
[67] Title LXIII – Elections, Chapter 664: Political Expenditures and Contributions, RSA 664:1 (Applicability of Chapter).
[68] Ibid.
[69] Sarah Schweitzer, “Pollsters cry foul over '98 N.H. law; Say push-poll measure is too broad, punitive,” Boston Globe, April 8, 2012.
[70] Denise-Marie Ordway, “11 questions journalists should ask about public opinion polls,” The Journalist’s Resource: Informing the News, June 14, 2018.
[71] Asher, chapter 9.
[72] Scott Keeter, “From Telephone to the Web: The Challenge of Mode of Interview Effects in Public Opinion Polls.”
[73] Asher, p. 154
[74] Scott Keeter, “From Telephone to the Web: The Challenge of Mode of Interview Effects in Public Opinion Polls.”
[75] Ibid.
[76] Hillygus, “The Practice of Survey Research,” p. 49.
[77] Andrew Mercer, “5 things to know about the margin of error in election polls,” Pew Research Center, September 8, 2016.
[78] Asher, p. 130.
[79] Andrew Mercer, “5 things to know about the margin of error in election polls,” Pew Research Center, September 8, 2016.
[80] Asher, pp. 124-125.
[81] Asher, p. 124.
[82] Indeed, according to Hibberts et al., “[t]he sample response rate depends greatly on the type of survey method used. Though response rates vary from survey to survey, there is some agreement that face-to-face surveys have the highest rate of response, followed by telephone surveys, with mail or self-administered surveys having the lowest response rates.” (Mary R. Hibberts, Burke Johnson, and Kenneth Hudson, “Common Survey Sampling Techniques,” in Handbook of Survey Methodology for the Social Sciences, edited by L. Gideon (New York: Springer, 2012), pp. 53-74).
[83] Butler, p. 69.
[84] Asher, p. 282.
[85] Ibid., p. 43.
[86] See Butler, pp. 109-117, for an overview of these concerns.