Measuring Public Opinion

Help Questions

Questions 1 - 10
1

A pollster divides the state into regions, then randomly samples proportional numbers from each region. Which method is this?

Cluster sampling, because the pollster randomly selects a few regions and surveys everyone within them to reduce cost and travel time.

Push polling, because dividing by region allows the pollster to target persuasive messages to specific audiences during interviews.

Voluntary response sampling, because regional residents choose whether to participate after seeing a public invitation to respond.

Stratified sampling, because the population is divided into subgroups and random samples are taken from each subgroup.

Convenience sampling, because regions are chosen based on easy access and then respondents are interviewed at public locations.

Explanation

In AP US Government and Politics, measuring public opinion involves various sampling methods, and this scenario describes stratified sampling: the state is divided into regions (strata), and proportional random samples are drawn from each to ensure every region is represented. Stratification reduces bias by accounting for subgroups, making the sample more reflective of the population's diversity. Unlike simple random sampling alone, it guarantees that key groups are not underrepresented, which improves accuracy in large, heterogeneous areas like states. A common distractor is cluster sampling, which randomly selects a few groups and surveys everyone within them rather than drawing proportional samples from every subgroup.
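The proportional-allocation step can be sketched in Python; the region names and populations below are hypothetical assumptions, not figures from the question:

```python
import random

random.seed(42)

# Hypothetical region populations for a state (names and counts are assumptions).
regions = {"North": 500_000, "South": 300_000, "East": 150_000, "West": 50_000}
total = sum(regions.values())
sample_size = 1_000

# Proportional allocation: each stratum's slice of the sample
# mirrors its share of the state population.
allocation = {name: sample_size * pop // total for name, pop in regions.items()}

# Within each stratum, respondents are then drawn at random,
# e.g. from a list of resident IDs.
north_sample = random.sample(range(regions["North"]), allocation["North"])

print(allocation)         # {'North': 500, 'South': 300, 'East': 150, 'West': 50}
print(len(north_sample))  # 500
```

Because every stratum contributes to the sample, no region can be missed entirely, which is the guarantee simple random sampling by itself does not provide.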

2

A phone poll reaches 1,200 people, but only 12% complete it; results lean older and wealthier. What issue is illustrated?

Sampling frame perfection, because contacting many people ensures the sample includes all groups in correct proportions automatically.

Nonresponse bias, because people who refuse or cannot be reached differ systematically from respondents, skewing results despite a large contact list.

Push polling, because low completion rates indicate the pollster used deceptive information to persuade respondents during interviews.

Social desirability bias, because respondents lie about sensitive behaviors, which primarily affects demographics rather than policy preferences.

Random digit dialing guarantees representativeness, so differences by age and income must reflect true public opinion trends.

Explanation

Measuring public opinion in AP US Government and Politics includes understanding biases like nonresponse, where low completion rates distort results. The phone poll contacts 1,200 people but only 12% complete it, and the results lean older and wealthier: classic nonresponse bias, in which non-respondents differ systematically from respondents and skew the sample. Even with random digit dialing, a low response rate means the final sample is unrepresentative if certain groups (e.g., younger or lower-income adults) are less likely to participate. A common distractor claims that random digit dialing guarantees representativeness, but random selection alone cannot correct for who actually answers. Pollsters often use follow-up contacts or statistical weighting to mitigate this bias. Overall, methodology alone does not ensure validity without addressing participation.

3

Two polls ask about the same policy, but one uses “assistance” and another says “welfare,” producing different results. What is illustrated?

Sampling frame error, where the list used to select respondents excludes certain groups, causing systematic undercoverage regardless of question wording.

A push poll, because any use of a negative term automatically means the poll’s primary purpose is persuasion rather than measurement.

Question wording and framing effects, where emotionally loaded or connotative terms alter responses even when the underlying policy is similar.

The margin of error shrinking to zero because both polls asked about the same topic, making results identical within any sample size.

The bandwagon effect, where respondents change their views after hearing which side is winning, rather than reacting to wording differences.

Explanation

This question demonstrates how question wording affects poll responses. When identical policies are described with different terms, "assistance" versus "welfare," respondents react differently because of the words' connotations and emotional associations: "welfare" often carries negative connotations, while "assistance" sounds more positive, producing different support levels for the same policy. The correct choice identifies this as question wording and framing effects. The scenario involves no sampling-frame error, bandwagon effect, margin-of-error issue, or push polling. The key insight is that word choice in survey questions can significantly influence responses, highlighting the importance of neutral wording in scientific polling.

4

A poll’s headline says “Candidate leads 49%–47%,” MOE $\pm3\%$. What is the most accurate interpretation?

The result shows question wording bias, because close percentages always indicate respondents were confused by how the question was phrased.

The race is too close to call, because the lead is within the margin of error and the true population support could plausibly be reversed.

Candidate is behind, because the margin of error must be subtracted from the leader only, making 49% become 46% automatically.

The poll is a census, because reporting a margin of error indicates the poll contacted nearly everyone and confirmed the exact vote share.

Candidate is definitely ahead, because any numerical lead in a poll proves a real lead in the population regardless of margin of error.

Explanation

This question tests interpretation of polls with margins of error. With a 49%–47% lead and a ±3% margin of error, the confidence intervals overlap substantially: the leading candidate's true support could be anywhere from 46% to 52%, and the trailing candidate's from 44% to 50%. Because of this overlap, we cannot conclude with statistical confidence that either candidate truly leads in the population; the race is within the margin of error and "too close to call," since the true support levels could plausibly be reversed. The distractors reflect common misinterpretations: a numerical lead does not prove a real lead once uncertainty is considered, the margin of error applies to both candidates rather than only the leader, reporting a margin of error does not make a poll a census, and a close result does not indicate a wording problem. Understanding margin of error is crucial for accurately interpreting polling data and avoiding overconfident conclusions about small differences.
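The overlap check uses only the numbers reported in the headline, with the ±3% applied symmetrically to each candidate; the sample-size line at the end is a standard back-of-the-envelope relationship, not something stated in the question:

```python
import math

lead, trail, moe = 49, 47, 3  # percentages from the headline

lead_interval = (lead - moe, lead + moe)     # (46, 52)
trail_interval = (trail - moe, trail + moe)  # (44, 50)

# The race is statistically unresolved when the intervals overlap:
# the trailing candidate's upper bound reaches past the leader's lower bound.
too_close_to_call = trail_interval[1] >= lead_interval[0]
print(lead_interval, trail_interval, too_close_to_call)  # (46, 52) (44, 50) True

# A ±3% margin of error at 95% confidence corresponds roughly to a
# sample of about 1,000 respondents (worst case p = 0.5).
approx_moe = 1.96 * math.sqrt(0.5 * 0.5 / 1_000) * 100
print(round(approx_moe, 1))  # 3.1
```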

5

A poll asks about immigration first, then later asks presidential approval. What potential effect is illustrated?

The margin of error, because changing question sequence only affects sampling variability, which is corrected by reporting $\pm3\%$.

A push poll, because asking about immigration first is always intended to persuade respondents rather than measure their approval accurately.

Question order (context) effects, where earlier items prime respondents and can shift how they interpret and answer later questions.

Random digit dialing, because earlier questions generate randomization that ensures every respondent has an equal chance to be selected.

A census, because asking multiple questions in one survey ensures complete population coverage and eliminates all measurement error.

Explanation

This question examines question order effects in survey design. When a poll asks about immigration before presidential approval, the immigration items can prime respondents' thinking and influence how they evaluate the president; if those questions highlight problems or controversies, respondents might rate presidential performance more negatively than if approval had been asked first. The correct choice identifies these question order (context) effects, in which earlier items shape the interpretation of later questions. The distractors misunderstand the concept: random digit dialing is a sampling method unrelated to question sequence, asking multiple questions does not make a survey a census, margin of error measures sampling variability rather than question effects, and question order alone does not define a push poll, which requires persuasive intent. This is why survey researchers randomize question order or carefully consider sequence to minimize artificial influences on responses.

6

On Election Day, interviewers ask voters leaving precincts whom they voted for and why. What poll type is this?

A census, because the poll is conducted on Election Day and therefore includes all voters who participated in the election.

A focus group, because interviewing people at precincts creates a small discussion-based sample designed to build narratives, not quantify votes.

An exit poll, which surveys voters immediately after voting to estimate results and analyze demographic and issue-based voting patterns.

A push poll, because asking “why” after the vote is intended to persuade future voters rather than record information from actual voters.

A tracking poll, because it repeatedly surveys the same sample of voters over time to measure opinion changes during the campaign.

Explanation

This question identifies exit polling methodology. When interviewers survey voters immediately after they leave polling places on Election Day, asking whom they voted for and why, the survey is an exit poll. Exit polls serve dual purposes: providing early estimates of election results before official counts and analyzing voting patterns by demographics and issues. They differ from tracking polls, which measure change over time through repeated surveys; push polls, which aim to persuade; focus groups, which involve small-group discussion; and censuses, which survey entire populations. Exit polls use systematic sampling at selected precincts to project statewide results and understand voter behavior.

7

A news outlet surveys only people leaving a primary polling place at noon. What poll type/issue is shown?

A tracking poll, because surveying at noon means the poll is repeated daily to observe changes in candidate support over time.

A margin-of-error improvement, because limiting interviews to noon reduces sampling error by keeping conditions constant across respondents.

An exit poll with a time-of-day coverage problem, because only midday voters are sampled, potentially missing systematic differences among later voters.

A push poll, because interviewing voters at polling places is mainly designed to influence their votes immediately after they cast ballots.

A census, because surveying voters at one location guarantees every voter in the election will eventually be asked the same questions.

Explanation

This question tests knowledge of exit polling limitations. Exit polls survey voters as they leave polling places to estimate election outcomes, but sampling only at noon creates a coverage problem: the sample misses voters who cast ballots at other times. Early-morning voters might include more workers voting before a shift, while evening voters might include those voting after work, so midday-only sampling can introduce systematic differences in demographics or preferences. The correct choice identifies an exit poll with time-of-day coverage bias. The distractors misapply concepts: tracking polls measure change over time with repeated surveys, not single-time-slot interviews; push polls aim to persuade, not to measure post-vote choices; a poll at one location cannot survey every voter, so it is no census; and limiting the interview window introduces bias rather than improving the margin of error. The key insight is that exit polls must sample throughout voting hours to avoid systematic coverage gaps.

8

Two polls ask about welfare: one says “aid to the poor,” another says “welfare”; results differ. What explains this?

Question wording (framing) effects, where different terms carry different connotations and shift how respondents evaluate the same policy.

Random sampling error, because any two polls will differ only due to chance variation even when they use identical question wording.

A census approach, because changing words effectively increases the population measured and eliminates uncertainty about public opinion.

Social desirability bias, because respondents always lie to appear acceptable, making wording irrelevant to their answers in practice.

Exit polling, because only surveys conducted outside polling places can capture true views on social programs like welfare.

Explanation

This question demonstrates question wording or framing effects in polling. The terms "aid to the poor" and "welfare" refer to similar government programs, but they carry very different connotations. "Aid to the poor" frames the issue positively, emphasizing help for those in need, while "welfare" has acquired negative associations in American political discourse. These different framings activate different considerations in respondents' minds, leading to different levels of support even though the underlying policy might be identical. The correct choice identifies this as a question wording (framing) effect. The difference is not explained by random sampling error (chance variation would not produce a consistent gap tied to the wording), social desirability bias, exit polling (a different methodology), or a census approach (which concerns who is measured, not how questions are phrased).

9

A pollster samples 400 Democrats and 400 Republicans, though the electorate is 35% Democrat and 25% Republican. What method fixes this?

Switching to a push poll, because persuasive prompts can equalize party participation and make the sample mirror the electorate automatically.

Weighting the results to match known population proportions, adjusting subgroup influence so estimates better reflect the electorate’s actual composition.

Using a double-barreled question, because asking about two issues at once helps correct party imbalance by increasing response rates.

Reporting no margin of error, because the imbalance is solved by omitting uncertainty estimates and focusing only on raw percentages.

Reducing the sample size, because smaller samples decrease the chance of overrepresenting any group and improve overall representativeness.

Explanation

This question addresses sample weighting in polling methodology. The pollster has equal numbers of Democrats and Republicans (a 50–50 partisan split in the sample), but the electorate contains more Democrats (35%) than Republicans (25%), so the raw sample misrepresents the population. Weighting adjusts each subgroup's influence on overall estimates to match known population proportions: here, Democratic responses would be weighted up and Republican responses weighted down to reflect their true shares of the electorate. The distractors suggest inappropriate methods: push polls persuade rather than correct sampling, smaller samples worsen rather than improve representativeness, double-barreled questions create measurement problems of their own, and omitting the margin of error does nothing about the underlying imbalance. This illustrates how pollsters use statistical adjustments to improve representativeness when samples do not match population characteristics.
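A minimal sketch of the weighting arithmetic, assuming hypothetical support levels of 70% among sampled Democrats and 30% among sampled Republicans (the support figures are illustrative; the 35%/25% electorate shares come from the question, restricted here to partisans):

```python
# Sample: 400 Democrats and 400 Republicans, with hypothetical support
# for some policy in each subgroup (the 0.70 / 0.30 figures are assumptions).
sample = {"Dem": {"n": 400, "support": 0.70},
          "Rep": {"n": 400, "support": 0.30}}

# Known electorate split: 35% Democrat vs. 25% Republican.
# Among partisans only, that is 35/60 vs. 25/60.
pop_share = {"Dem": 35 / 60, "Rep": 25 / 60}

total_n = sum(g["n"] for g in sample.values())

# Weight = population share / sample share. Democrats, under-represented
# relative to Republicans, are weighted up; Republicans are weighted down.
weights = {p: pop_share[p] / (sample[p]["n"] / total_n) for p in sample}

unweighted = sum(g["n"] * g["support"] for g in sample.values()) / total_n
weighted = sum(weights[p] * sample[p]["n"] * sample[p]["support"]
               for p in sample) / total_n

print(round(weights["Dem"], 3), round(weights["Rep"], 3))  # 1.167 0.833
print(round(unweighted, 3), round(weighted, 3))            # 0.5 0.533
```

The unweighted estimate (50%) treats both parties as equally numerous; the weighted estimate (about 53%) shifts toward the Democratic subsample in proportion to its larger share of the electorate.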

10

A firm calls only landlines to survey adults about politics. What limitation is most directly illustrated?

Random assignment, because calling landlines ensures respondents are randomly assigned to treatment and control groups in an experiment.

Coverage error, because excluding cell-phone-only households can systematically miss groups, making the sample less representative of all adults.

A push poll, because landline calls are primarily used to persuade respondents and are not intended to estimate public opinion accurately.

Bandwagon effect, because landline users are more likely to follow winning candidates after hearing poll results, inflating support measures.

Margin of error elimination, because restricting to landlines removes sampling variability and guarantees the poll matches the population perfectly.

Explanation

This question examines coverage error in telephone polling. Calling only landlines systematically excludes households that rely exclusively on cell phones, now a significant portion of the population. Cell-phone-only households tend to be younger and more mobile, and may differ in political views, creating coverage error: the sampling frame (landline users) does not match the target population (all adults). The correct choice identifies this as coverage error, which makes the sample less representative. The distractors misapply concepts: bandwagon effects involve opinion change after hearing results, not phone type; using landlines does not make a poll a push poll; restricting the frame creates bias rather than eliminating sampling variability; and random assignment relates to experiments, not survey sampling. This highlights how technological changes in communication require polling methods to adapt to maintain representative samples.
