Workplace Benefits and Family Health Care Responsibilities: Key Findings from the 2022 KFF Women’s Health Survey
Overview
The 2022 KFF Women’s Health Survey is a nationally representative survey of 6,442 people ages 18 to 64, including 5,201 females (self-reported sex at birth) and 1,241 males, conducted from May 10, 2022, to June 7, 2022. The objective of the survey is to better understand respondents’ experiences with contraception, potential barriers to health care access, and other issues related to reproductive health. The survey was designed and analyzed by researchers at KFF (Kaiser Family Foundation) and fielded online and by telephone by SSRS using its Opinion Panel, supplemented with sample from the Ipsos KnowledgePanel.
This work was supported in part by Arnold Ventures. KFF maintains full editorial control over all of its policy analysis, polling, and journalism activities.
Questionnaire design
KFF developed the survey instrument with SSRS feedback regarding question wording, order, clarity, and other issues pertaining to questionnaire quality. The survey was conducted in English and Spanish. The survey instrument is available upon request.
Sample design
The majority of respondents completed the survey using the SSRS Opinion Panel (n=5,202), a nationally representative probability-based panel whose members are recruited in one of two ways: (1) through invitations mailed to respondents randomly sampled from an Address-Based Sample (ABS) provided by Marketing Systems Group through the U.S. Postal Service’s Computerized Delivery Sequence File, or (2) from a dual-frame random digit dial (RDD) sample provided by Marketing Systems Group.
To obtain large enough sample sizes for certain subgroups of females ages 18 to 35 (particularly those who are lesbian, gay, or bisexual; Asian; Black; Hispanic; Medicaid enrollees; low-income; or living in rural areas), an additional 1,240 surveys were conducted using the Ipsos KnowledgePanel, a nationally representative probability-based panel recruited using a stratified ABS design.
Data collection
Web Administration Procedures
The majority of surveys completed using the SSRS Opinion Panel (n=5,056) and all of the surveys completed using the KnowledgePanel (n=1,240) were self-administered web surveys. Panelists were emailed an invitation, which included a unique passcode-embedded link, to complete the survey online. In appreciation for their participation, panelists received a modest incentive in the form of a $5 or $10 electronic gift card. Respondents who did not respond to their first invitation received up to five reminder emails, and panelists who had opted into receiving text messages from the SSRS Opinion Panel also received text message reminders.
Overall, the median length of the web surveys was 13 minutes.
Phone Administration Procedures
In addition to the self-administered web survey, n=146 surveys were completed by telephone with SSRS Opinion Panelists who are web-reluctant. Overall, the median length of the phone surveys was 28 minutes.
Data processing and integration
SSRS implemented several quality assurance procedures in data file preparation and processing. Prior to launching data collection, extensive testing of the survey was completed to ensure it was working as anticipated. After the soft launch, survey data were carefully checked for accuracy, completeness, and non-response to specific questions so that any issues could be identified and resolved prior to the full launch.
The data file programmer implemented a “data cleaning” procedure in which web survey skip patterns were created to ensure that all questions had the appropriate number of cases. This procedure involved a check of raw data by a program consisting of instructions derived from the skip patterns designated on the questionnaire. The program confirmed that data were consistent with the definitions of codes and ranges and matched the appropriate bases of all questions. The SSRS team also reviewed preliminary SPSS files and conducted an independent check of all created variables to ensure that all variables were accurately constructed.
As a standard practice, quality checks were incorporated into the survey. Quality control checks for this study included reviews of “speeders,” of the internal response rate (the number of questions answered divided by the number of questions asked), and of open-ended responses. Among all respondents, the vast majority (97%) answered 96% or more of the survey questions they received, and no one completed less than 91% of the administered survey (respondents were informed at the start of the survey that they could skip any question).
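To make these two checks concrete, here is a minimal illustrative sketch (not SSRS’s actual pipeline) of computing a respondent’s internal response rate and flagging potential “speeders.” The function names and the 0.3 duration cutoff are hypothetical choices for illustration only.

```python
# Illustrative quality-control sketch; names and thresholds are hypothetical.
from statistics import median

def internal_response_rate(answered: int, asked: int) -> float:
    """Share of administered questions the respondent answered."""
    return answered / asked

def flag_speeders(durations_min, threshold_ratio=0.3):
    """Flag interviews far shorter than the median duration.
    The 0.3 ratio is an assumed cutoff, not the one used in the study."""
    med = median(durations_min)
    return [d < threshold_ratio * med for d in durations_min]

# Example: a respondent who answered 48 of the 50 questions they received
rate = internal_response_rate(48, 50)  # 0.96, i.e., the 96% threshold above
```

In practice a speeder flag would typically trigger a manual review of that case rather than automatic removal.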
Weighting
The data were weighted to represent U.S. adults ages 18 to 64. The data include oversamples of females ages 18 to 35 and females ages 36 to 64. Because of this oversampling, the data were weighted in three subgroups: females ages 18 to 35, females ages 36 to 64, and males ages 18 to 64. The weighting consisted of two stages: (1) application of base weights and (2) calibration to population parameters. Each subgroup was calibrated separately, and the groups were then adjusted to their proper proportions relative to their sizes in the population.
Calibration to Population Benchmarks
The sample was balanced to match estimates of each of the three subgroups (females ages 18 to 35, females ages 36 to 64, and males ages 18 to 64) along the following dimensions: age; education (less than a high school graduate, high school graduate, some college, four-year college or more); region (Northeast, Midwest, South, West); and race/ethnicity (White non-Hispanic, Black non-Hispanic, Hispanic born in the U.S., Hispanic born outside the U.S., Asian non-Hispanic, Other non-Hispanic). Within race (White, non-Hispanic; Black, non-Hispanic; Hispanic; and Asian), the sample was also weighted to match population estimates. Benchmark distributions were derived from 2021 Current Population Survey (CPS) data.
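Calibration of this kind is commonly done by raking (iterative proportional fitting): weights are repeatedly adjusted so each dimension’s weighted distribution matches its benchmark. The sketch below shows the principle only, with a toy one-dimension setup; the actual weighting used the full set of dimensions above and a production tool, and none of these names come from the survey.

```python
# Minimal raking (iterative proportional fitting) sketch for illustration.
def rake(weights, groups, targets, iters=50):
    """weights: base weights per case.
    groups: dict mapping dimension name -> list of category labels per case.
    targets: dict mapping dimension name -> {category: population share}."""
    w = list(weights)
    for _ in range(iters):
        for dim, labels in groups.items():
            for cat, share in targets[dim].items():
                current = sum(wi for wi, g in zip(w, labels) if g == cat)
                if current > 0:
                    # Scale this category so its weighted share hits the benchmark.
                    factor = share * sum(w) / current
                    w = [wi * factor if g == cat else wi
                         for wi, g in zip(w, labels)]
    return w
```

With several dimensions, each pass over one dimension disturbs the others slightly, which is why the adjustment iterates until the marginals converge.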
Weighting summaries for females ages 18 to 35, females ages 36 to 64, and males ages 18 to 64 are available upon request.
Finally, the three weights were combined, and a final adjustment was made to match the groups to their proper proportions relative to their size in the population (Table 1).
Margin of Sampling Error
The margin of sampling error, including the design effect for subgroups, is presented in Table 2 below. It is important to remember that the sampling fluctuations captured in the margin of error are only one possible source of error in a survey estimate, and there may be other unmeasured error in this or any other survey.
KFF Analysis
Researchers at KFF conducted further data analysis using the R survey package, including creating constructed variables, running additional testing for statistical significance, and coding responses to open-ended questions. The survey instrument is available upon request.
Rounding and sample sizes
Some figures in the report do not sum to totals due to rounding. Although overall totals are statistically valid, some breakdowns may not be available due to limited sample sizes or cell sizes. Where the unweighted sample size is less than 100 or where observations are less than 10, figures include the notation “NSD” (Not Sufficient Data).
Statistical significance
All statistical tests are performed at the .05 significance level. Statistical tests for a given subgroup are tested against the reference group (Ref.) unless otherwise indicated. For example, White is the standard reference for race/ethnicity comparisons, and private insurance is the standard reference for types of insurance coverage. Some breakouts by subsets have large standard errors, meaning that even seemingly large differences between estimates are sometimes not statistically significant.
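As a hedged sketch of the kind of subgroup-versus-reference comparison described above, the example below runs an unweighted two-proportion z-test at the .05 level. The actual analysis used design-based tests in the R survey package, which account for the weights; the numbers here are invented for illustration.

```python
# Unweighted two-proportion z-test (pooled variance), for illustration only;
# the survey's real tests were design-based and accounted for weighting.
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic comparing proportion p1 (subgroup) to p2 (reference)."""
    p_pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 55% vs. 45% in two groups of 400; significant at .05
# (two-sided) when |z| > 1.96.
z = two_prop_z(0.55, 400, 0.45, 400)
```

With smaller subgroup ns the same 10-point gap can fall short of |z| > 1.96, which is the “large differences, not statistically significant” situation the text notes.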
A note about sex and gender language
Our survey asked respondents which sex they were assigned at birth, on their original birth certificate (male or female). They were then asked their current gender (man, woman, transgender, non-binary, or other). Respondents who identified as transgender men are coded as men, and transgender women are coded as women. While we attempted to be as inclusive as possible and recognize the importance of better understanding the health of non-cisgender people, as is common in many nationally representative surveys, we did not have a sufficient sample size (n >= 100) to report gender breakouts other than men and women with confidence that they reflect the larger non-cisgender population as a whole. The data in our reproductive health reports use the respondent’s sex assigned at birth (inclusive of all genders) to account for reproductive health needs/capacity (e.g., ever been pregnant), while the data in our other survey reports use the respondent’s gender.