The Role of Public Opinion Polls in Health Policy


Introduction


Polls and surveys are useful tools for understanding health policy issues. However, it takes time and training to understand how to interpret survey results and to decide which polls are useful and which might be misleading. The aim of this chapter is to help you learn how to be a good consumer of polls so they can be a valuable part of your toolkit for understanding the health policy environment. It begins by discussing why polls are an important tool in policy analysis and the caveats to keep in mind when interpreting them. It then discusses polling methodology and the questions you should ask to assess the quality and usefulness of a poll. The chapter ends with some real-world examples in which polling helped inform policy debates.

People sometimes ask if there is a difference between a “poll” and a “survey.” The quick answer is that every poll is a survey, but not every survey is a poll (for example, large federal surveys like the Census or surveys of hospitals or other institutions would not be called polls). For purposes of this chapter, we use the terms interchangeably.

Why Should You Pay Attention to Polls at All?


Polls have gotten a bad rap over the past few years, particularly around election times when they don’t do a perfect job predicting the winner of a given election. Given this, you may wonder why you should pay attention to polls when trying to understand health policy. There are six basic reasons why it’s important for health policy scholars to understand public opinion:

  1. People vote, and elections can have important consequences for health policy at the local, state, and national levels. While polls may not always be perfect predictors of election outcomes, they are one of the best ways to understand the dynamics of how voters are thinking and feeling when weighing their vote choices, not only for high-profile offices like President and Congress, but for state and local races and ballot initiatives as well.
  2. Public opinion can influence policy choices, particularly for highly salient issues, like health care, that touch pretty much everyone’s lives in some way. While the average member of the public may not be equipped to understand the details of most health policy legislation, their preferences and views can put constraints on lawmakers by identifying actions that would be deemed unacceptable by large majorities of the public or their constituents.
  3. Polls can also provide information about the broader environment in which health policy issues or changes are being debated. They can help you understand the salience of a given issue (i.e., how much do people care about prescription drug prices and how closely are they paying attention to debates over how to lower them?) and identify other factors that might affect the likely success of a given policy (i.e., if the country’s attention is focused on a foreign policy crisis, how will that affect the public’s reaction to a major new proposal to overhaul Medicaid?).
  4. Beyond measuring opinion, surveys can also be useful for understanding how health policy is affecting people. Survey questions about people’s experiences can offer context by providing information like the share of people who are struggling to afford their health insurance. Looking at questions like these at multiple points in time can also help you understand how experiences change in the months and years following enactment of major health legislation.
  5. Surveys can help amplify the voices of real people in policy debates, particularly those that are often ignored or drowned out by special interests. Polling that includes adequate sample sizes to represent the voices of marginalized and underrepresented populations, such as members of racial and ethnic minority groups, immigrants, LGBTQ individuals, people living in rural areas, and those with lower incomes, may be especially useful for understanding the impact of health policy on people.
  6. In this way, methodologically sound, non-partisan, transparent surveys can serve as a counterweight to polls sponsored by special interests that are conducted in private and used to craft public messages, design campaigns, or sell products.

Caveats to Polling


Polls do not tell the whole story. Public opinion is just one part of the political and policymaking process. Public support for a given policy may seem clear based on a single survey question, but it can be quite malleable in the course of a public debate, and not all surveys measure this malleability. Small changes in survey question wording can sometimes lead to big changes in public support, so it’s important never to rely on a single question from a single poll to draw a conclusion about what the public thinks or knows. When possible, look for multiple questions on the same topics from multiple polls conducted at various times. If the answers are consistent, you can be more confident that the conclusion is correct. Sometimes a poll finding conflicts with your best sense of political reality when all available information is considered. In those instances, there’s a good chance your “gut” is a better guide than what a given poll tells you.

There are limits to polling on complex topics like health care. When the public says they support a specific proposal for lowering health care costs, it doesn’t mean they have fully thought through the details of that proposal and its implications. Rather, it may signal how important they think it is for policymakers to address the high cost of health care. And while some polls test this by asking follow-up questions that probe the public about trade-offs to any given policy approach, some health policy topics are just too complicated to reasonably ask the average American to weigh in on in a short survey.

Public opinion can’t give you the “right” answer. While public opinion can tell you where the public stands on an issue, it cannot tell you what the right policy solution is in any given situation. For example, pollsters often ask people to rank the priority they give to different health issues before Congress. They may ask the public to rank the issues of prescription drug costs, the future of the Affordable Care Act, Medicaid expansion, the financial sustainability of Medicare, and so forth. But it turns out that real people aren’t organized like congressional committees and don’t put the issues neatly into policy buckets like pollsters do. What they are concerned about is the cost and affordability of health care, a concern that cuts across these issues. These ranking questions provide some information about what resonates most with the public, but that doesn’t mean they should be treated as a rank-ordered list for policymakers to address starting from the top down. In addition, beyond telling you what the public thinks, polls can be just as useful for pointing out what the public doesn’t understand about a given policy issue, allowing you to direct outreach and education efforts or figure out messaging that will resonate with the public if you are advocating for a policy change.

Understanding the Methods: Questions to Ask about Polls


The science of survey research is complicated, but there are a few simple terms you can learn and questions you can ask when you encounter polls in your schooling and daily life. These include:

Population. Who is the population that the survey is claiming to represent? Polls can be conducted with many different populations, so it is important to know how researchers define the population under study. For example, a survey of voters may be useful for your understanding of a particular health care issue’s importance in the election, but it might not be as useful for estimating how many people have had problems paying medical bills, since lower-income people (who may be the most likely to experience bill problems) are less likely to be voters and may be left out of the study entirely.

Sampling. How did researchers reach the participants for their poll, and was it a probability or non-probability sample? In a probability-based sample, all individuals in the population under study have a known chance of being included in the survey. Such samples allow researchers to provide population estimates (within a margin of sampling error) based on a small sample of responses from that population. Examples of probability-based sampling techniques include random digit dialing (RDD), address-based sampling (ABS), registration-based sampling (RBS), and probability-based online panels. Non-probability sampling, sometimes called convenience or opt-in sampling, has become increasingly common in recent years. While non-probability surveys have some advantages for some types of studies (particularly their much lower cost), research has shown that results obtained from non-probability samples generally have greater error than those obtained from probability-based methods, particularly for certain populations.

Data collection (survey mode). While there are many ways to design a survey sample, there are also many ways to collect the data, known as the survey mode. For many years, telephone surveys were considered the gold standard because they combined a probability-based sampling design with a live interviewer. Survey methodology is more complicated now, but it is still important to know whether the data was collected via telephone, online, on paper, or some other way. If phones were used, were responses collected by human interviewers or by an automatic system, sometimes known as interactive voice response (IVR) or a “robocall”? Or were responses collected via text message? Depending on the population represented, different approaches might make the most sense. For example, about 5% of adults in the U.S. are not online, and many others are less comfortable responding to survey questions on a computer or internet-connected device. While young adults may be comfortable responding to a survey via text message, many older adults still prefer to take surveys over the phone with a live interviewer. Some populations feel a greater sense of privacy when taking surveys on paper, while literacy challenges may make a phone survey more appropriate for other populations. Many researchers now combine multiple data collection modes in a single survey to make sure these different segments of the population can be represented.

Language. Was the survey conducted only in English, or were other languages offered? If the survey is attempting to represent a population with lower levels of English language proficiency, this may affect your confidence in the results.

Survey sponsor. Who conducted the survey and who paid for it? Understanding whether there is a political agenda, special interest, or business behind the poll could help you better determine the poll’s purpose as well as its credibility.

Timing. When was the survey conducted? If key events related to the survey topic occurred while the survey was in the field (e.g., an election or a major Supreme Court decision), that might have implications for your interpretation of the results.

Data quality checks. During and after data collection, what data quality checks were implemented to ensure the quality of the results? Most online surveys include special “attention check” questions designed to identify respondents who may have fabricated responses or rushed through the survey without paying attention to the questions being asked. Inclusion of these questions is a good sign that the researchers were following best practices for data collection.

Weighting. Were the results weighted to known population parameters such as age, race and ethnicity, education, and gender? Despite best efforts to draw a representative sample, all surveys are subject to what is known as “non-response bias” which results from the fact that some types of people are more likely to respond to surveys than others. Even the best sampling approaches usually fall short of reaching a representative sample, so researchers apply weighting adjustments to correct for these types of biases in the sample. When reading a survey methodology statement, it should be clear whether the data was weighted, and what source was used for the weighting targets (usually a survey from the Census or another high-quality, representative survey).
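The basic logic of weighting can be sketched in a few lines. This is a simplified, illustrative example with made-up numbers (real surveys typically weight on several variables at once, often using iterative techniques like raking): each respondent’s weight is their group’s share of the population divided by that group’s share of the sample.

```python
# A minimal sketch of post-stratification weighting on a single
# variable (age group). The shares below are hypothetical, not from
# any real survey or Census benchmark.

population_share = {"age_18_29": 0.20, "age_30_64": 0.60, "age_65_plus": 0.20}
sample_share     = {"age_18_29": 0.10, "age_30_64": 0.55, "age_65_plus": 0.35}

# weight = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Younger adults, underrepresented in this sample, are weighted up;
# older adults, overrepresented, are weighted down.
for group, w in weights.items():
    print(group, round(w, 2))
```

Running this shows the 18-29 group weighted up to 2.0 and the 65+ group weighted down to about 0.57, so that the weighted sample matches the assumed population distribution.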

Sample size and margin of sampling error. The sample size of a survey (sometimes referred to as the N) is the number of respondents who were interviewed, and the margin of sampling error (MOSE) is a measure of uncertainty around the survey’s results, usually expressed in percentage points. For example, if the survey finds 25% of respondents give a certain answer and the MOSE is plus or minus 3 percentage points, this means that if the survey were repeated 100 times with different samples, the result could be expected to fall between 22% and 28% in 95 of those samples. In general, a sample size of 1,000 respondents yields a MOSE of about 3 percentage points, while smaller sample sizes result in larger MOSEs and vice versa. Weighting can also affect the MOSE. When reading poll results, it is helpful to look at the N and MOSE not only for the total population surveyed, but for any key subgroups reported. This can help you better understand the level of uncertainty around a given survey estimate. The non-random nature of non-probability surveys makes it inappropriate to calculate a MOSE for these types of polls. Some researchers publish confidence estimates, sometimes called “credibility intervals,” to mimic MOSE as a measure of uncertainty, but they are not the same as a margin of sampling error. It’s also important to note that sampling error is only one source of error in any poll.
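The relationship between sample size and MOSE described above follows from the standard formula for the sampling error of a proportion. The sketch below uses the textbook formula for a simple random sample at the 95% confidence level; real polls report slightly larger margins because weighting adds a design effect:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a
    proportion p estimated from a simple random sample of size n.
    p=0.5 gives the maximum (most conservative) margin."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A sample of 1,000 gives the familiar "about 3 points":
print(round(margin_of_error(1000), 1))  # 3.1

# A subgroup of 250 respondents carries twice the uncertainty:
print(round(margin_of_error(250), 1))   # 6.2
```

Note how the margin shrinks only with the square root of the sample size: quadrupling the N merely halves the MOSE, which is why subgroup estimates in a poll are so much less precise than the topline numbers.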

Questionnaire. Responses to survey questions can differ greatly based on how the question was phrased and what answer choices were offered, so paying attention to these details is important when evaluating a survey result. Read the question wording and ask yourself – do the answer options seem balanced? Does the question seem to be leading respondents toward a particular answer choice? If the question is on a topic that is less familiar to people, did the question explicitly offer respondents the chance to say they don’t know or are unsure how to answer? If the full questionnaire is available, it can be helpful to look at the questions that came before the question of interest, as information provided in these questions might “prime” respondents to answer in a certain way.

Transparency. There is no “gold seal” of approval for high-quality survey methods. However, in recent years, there has been an increasing focus on how transparent survey organizations are about their methods. The most transparent researchers will release a detailed methodology statement with each poll that answers the questions above, as well as the full questionnaire showing each question in the survey in the order they were asked. If you see a poll released with a one- or two-sentence methodology statement and can’t find any additional information, that may indicate that the survey organization is not being transparent with its methods. The American Association for Public Opinion Research has a Transparency Initiative whose members agree to release a standard set of information about all of their surveys. For political polling, 538 recently added transparency as an element of their pollster ratings. Some news organizations also “vet” polls for transparency before reporting results, but many do not. This means that just because a poll or survey is reported in the news doesn’t necessarily mean it’s reliable. It’s always a good idea to hunt down the original survey report and see if you can find answers to at least some of the questions above before making judgments about the credibility of a poll.

Election polling vs. issue polling. Election polls – those designed at least in part to help predict the outcome of an election – are covered frequently in the media, and election outcomes are often used by journalists and pundits to comment on the accuracy of polling. Issue polls – those designed to understand the public’s views, experiences, and knowledge on different issues – differ from election polls in several important ways. Perhaps the most important difference is that, in addition to the methodological challenges noted above, election polls face the added challenge of predicting who will turn out to vote on election day. Most election polls include questions designed to help with this prediction, and several questions may be combined to create a “likely voter” model, but events or other factors may affect individual voter turnout in ways pollsters can’t anticipate. Election polls conducted months, weeks, or even days before the election also face the risk that voters will change their mind about how to vote between the time they answer the survey and when they fill out their actual ballot. Issue polls do not generally face these challenges, so it’s important to keep in mind that criticisms about the accuracy of election polls may not always apply to other types of polls.

Examples of the Usefulness of Polls in Understanding Health Policy


Example #1: Tracking the evolution of public opinion and experience through debate, passage, and implementation of the Affordable Care Act

The Affordable Care Act (ACA) is the largest health legislation enacted in the 21st century. From the time the legislation was being debated in Congress through its passage, implementation, and efforts to repeal it, the ACA has been the subject of media coverage, political debate, campaign rhetoric, and advertising. In each of those stages, polls and surveys have provided important information for understanding what was happening with the law.

Prior to passage, polls showed the public’s desire for change in health care, particularly when it came to decreasing the uninsured rate and making health care and insurance more affordable. Despite this apparent consensus on the need for change, polls also helped shed light on some of the barriers to passing legislation. For example, survey trends demonstrated how the share of the public who expected health reform legislation to leave their families worse off increased over the course of an increasingly public debate in which opponents tapped into fears about how the proposed law might change the status quo.

After the law was passed, public opinion on the ACA was sharply divided along partisan lines, with majorities of Democrats viewing the law favorably and majorities of Republicans having an unfavorable view. However, surveys also painted a more nuanced picture beyond the overall partisanship, showing that majorities of U.S. adults across partisan lines favored many of the things the ACA did, including allowing young adults to stay on their parents’ insurance until age 26, preventing health plans from charging sick people more than healthy people, and providing financial subsidies to help lower- and moderate-income adults purchase coverage. At the same time, polls showed that many adults were not aware that these provisions were part of the ACA, and that many others incorrectly believed the law did things it did not, such as creating a government-run insurance plan and allowing undocumented immigrants to receive government financial help to purchase coverage.

This combination of “the parts more popular than the whole” and incomplete public knowledge of the law provided some insight into why efforts to repeal the law were ultimately unsuccessful despite the relative unpopularity and deep partisan divisions on the law overall. When faced with the very real prospect of the popular parts of the law going away – particularly the protections for people with pre-existing health conditions – the public (and particularly Democrats and independents who had previously expressed lukewarm support) rallied to protect it. In fact, following concerted Republican efforts to repeal the law in 2017, the ACA has remained more popular than ever, with more adults expressing a favorable than an unfavorable opinion.

In addition to providing information about the public’s evolving opinion and awareness of the law, surveys also helped provide information about people’s experiences under the law. For example, a 2014 survey of people who purchase their own insurance found that 6 in 10 people enrolled in insurance through the new marketplaces were previously uninsured, and that most of this group said they decided to purchase insurance because of the ACA. Subsequent surveys showed that most marketplace enrollees were satisfied with their plans, but many reported challenges related to the affordability of coverage and care.

These are just a few examples of the ways surveys helped provide insights into the dynamics of a complex health policy at different points in time.

Example #2: Understanding the limits of public support for Medicare-for-all proposals

Another health policy issue where polls have provided useful information is the debate over a national, single-payer health plan. While the idea has been discussed for decades, public discussion was prominent most recently during the 2016 and 2020 Democratic presidential primaries, when Senator Bernie Sanders made “Medicare-for-all” a centerpiece of his campaign. Since 2017, a majority of U.S. adults have supported the idea of a national Medicare-for-all plan, but once again, polls also indicated why such a proposal had never become a political reality. For example, the public’s reaction to the idea varies considerably based on the language used to describe it; while majorities view the terms “universal coverage” and “Medicare-for-all” positively, most have a negative reaction to “socialized medicine,” and many are unsure how they feel about the term “single-payer health insurance.” Surveys also demonstrate that while support starts out high, many people say they would oppose a Medicare-for-all plan if they heard common arguments made by opponents, such as that it would lead to delays in treatments, threaten the current Medicare program, or increase taxes. Polls like these and others that test different messages can help shed light on the public’s likely reaction to real-world debates over policies, helping us understand some of the reasons why certain policies that seem to attract majority support in the abstract face an uphill battle once public debate and discussion about them begin.

Example #3: Understanding the impact of the Supreme Court’s overturning of Roe v. Wade

Polls can also help shed light when sudden events create policy changes that immediately affect individuals’ access to health care. A recent example is the Supreme Court’s 2022 decision in Dobbs v. Jackson that overturned Roe v. Wade and eliminated the nationwide right to abortion that had been in place since 1973. The Dobbs decision opened the door for states to pass their own abortion regulations, and many states had previously established “trigger laws” that made abortion illegal as soon as Roe was overturned.

Polls before and after the 2022 midterm election indicated how the overturn of Roe affected voter motivation, turnout, and vote choice. For example, polling in October 2022 showed abortion increasing as a motivating issue for voters, particularly among Democrats and those living in states where abortion was newly illegal. And election polling of voters showed how the Supreme Court decision played a key role in motivating turnout among key voting blocs that likely contributed to the Democratic party’s stronger-than-expected performance in the midterms.

Understanding the impact of Dobbs is an area where polling of specific populations (including grouping individuals by the abortion laws in their state) is more useful than looking at the U.S. population as a whole. For example, in addition to shedding light on the dynamics of abortion as an election issue, polling in 2023 indicated widespread confusion about the legality of medication abortion, particularly among people living in states that had banned or severely limited the procedure. Surveys also shed light on the experiences of people living in different states; for example, a 2024 survey found that 1 in 5 women of reproductive age (18-49) living in states with abortion bans said either they or someone they know had difficulty accessing an abortion since the Supreme Court overturned Roe v. Wade due to restrictions in their state.

Example #4: Amplifying the voices and experiences of marginalized populations

Well-designed surveys of under-represented groups can provide important information about health policy by amplifying the opinions and experiences of those whose voices are often left out of policy debates. Examples include:

  • A 2023 survey of Medicaid enrollees documented the coverage status of people who were disenrolled during the Medicaid “unwinding” process. Beginning in March 2020, states kept people enrolled in Medicaid without the need to renew or re-determine eligibility under a law passed in response to the COVID-19 pandemic. When the law expired in March 2023, it was uncertain how individuals and families would be affected. Surveys like this helped document the impact of the policy change on people’s coverage status and access to care.
  • A survey of U.S. immigrants shed light on the health and health care experiences of a group that makes up one-sixth of the adult population. Among other findings, this survey showed that half of all likely undocumented immigrants in the U.S. lacked health insurance coverage, information not previously available from other data sources. It also illustrated the importance of state policies in determining coverage rates for immigrant adults, documenting the much higher uninsured rate among immigrants living in states with less expansive coverage policies (like Texas) compared to those in states with more expansive policies (like California).
  • A survey of trans adults documented this population’s struggles accessing appropriate health care. Among other findings, this survey found that almost 4 in 10 trans adults said it was difficult to find a health care provider who treats them with dignity and respect, 3 in 10 said they had to teach a provider about trans people in order to get appropriate care, and 1 in 5 had health insurance that would not cover gender-affirming treatment. Importantly, these survey findings help increase understanding of the health care experiences of a group that is often marginalized in U.S. society, and one that also faces other barriers, including economic challenges, higher rates of mental health challenges and unmet needs for mental health care.
  • A survey focused on racism, discrimination, and health showed the extent of discrimination and unfair treatment in health care settings. This survey found that large shares of Black, Hispanic, Asian, and American Indian and Alaska Native adults reported preparing for possible insults or being very careful about their appearance in order to be treated fairly during health care encounters. It also showed how individuals who have more visits with providers who share their racial and ethnic background report more positive health care experiences. These findings provide insights into possible policy solutions to improve care, highlighting the importance of a diverse health care workforce that is trained in culturally appropriate care.
  • Surveys of areas impacted by natural disasters also help provide information to guide recovery efforts in these areas. For example, a survey of Hurricane Katrina evacuees living in Houston-area shelters documented the physical and emotional toll of the storm and the disproportionate impact on lower-income, African American, and uninsured residents. A series of surveys of New Orleans residents in the years following Katrina showed steady progress in many areas of recovery, but highlighted how the gap between the experiences of the city’s Black and White residents grew over time in many ways. Surveys of Puerto Rico residents following Hurricane Maria and Texas Gulf Coast residents following Hurricane Harvey provided similar insights, shining a light on disparities and highlighting the needs of the local populations in those areas.


Citation


Brodie, M., Hamel, L., & Kirzinger, A., The Role of Public Opinion Polls in Health Policy. In Altman, Drew (Editor), Health Policy 101, (KFF, July 2024) https://www.kff.org/health-policy-101-the-role-of-public-opinion-polls-in-health-policy (date accessed).


