Methodology — National Benchmark of U.S. Mothers

How we conduct the Benchmark.

Wave 1 — Spring 2026

About This Page

The National Benchmark of U.S. Mothers is a semiannual, nationally representative research infrastructure tracking five composite indices of maternal and family wellbeing. This page documents the complete methodology for Wave 1. It is published before data collection begins and updated with exact fieldwork dates after the survey closes.

Count on Mothers publishes full methodology with every wave — not as a compliance exercise, but because the integrity of independent research depends on it.

Sample Sourcing & Recruitment

Wave 1 draws from two distinct panels, combined into a single weighted dataset. A third-party survey sampling platform provides access to a representative panel of respondents. The Count™ panel enriches the dataset with high-engagement respondents and supports longitudinal insight.

Community Panel — The Count™

The Count™ is Count on Mothers' subscriber community — mothers across all 50 states who joined countonmothers.org and have opted in to participate in surveys. Wave 1 invitations are issued to The Count™ members via direct email and through CoM's Anchor Mom Advisor network.

Recruitment method

Direct email invitation to The Count™ subscriber list. Personalized invitation to Anchor Mom Advisors (a structured advisory segment of engaged community members representing all U.S. regions and the political spectrum).

Respondent profile

Self-selected from CoM's community; higher engagement and authentic motivation than cold opt-in panels; stronger longitudinal retention.

Primary role in Wave 1

The community panel contributes qualitative depth, enrichment data, and community-sourced insight; its responses are incorporated into the weighted dataset (see Hybrid Design below).

Panel integrity advantage

As AI-generated survey responses threaten paid panel quality industry-wide, Count on Mothers' community-sourced panel is more defensible: established participant relationships make synthetic responses harder to sustain, and longitudinal behavioral patterns enable anomaly detection.
Paid Panel — Third-party Survey Sampling Platform

A national online survey panel provider is used to recruit a random, nationally representative sample of U.S. mothers for Wave 1. Paid panel respondents are recruited independently of CoM's community and are not self-selected into a CoM relationship.

Recruitment method

Vendor-sourced online panel. Respondents are screened at intake to confirm U.S. residency and current parenting status (mother, child in household).

Compensation

Paid panel respondents receive the vendor's standard panel incentive.

Role in Wave 1

Primary source for nationally representative quantitative wave. Provides the statistically defensible random sample required for institutional-grade benchmark legitimacy.

Target n

Approximately 2,000–2,500 completed responses from paid panel, sufficient for national topline reporting and key subgroup analysis.

Hybrid Design

Both panels contribute to a single Wave 1 dataset. Separate post-stratification weights are applied to each panel before combination. This approach is methodologically defensible and preserves both the statistical rigor of the paid panel random sample and the community value of The Count panel. All reported topline statistics and index scores are drawn from the combined, weighted dataset.

Weighting Variables, Census Vintage & Method

Weighting Method

Post-stratification raking (iterative proportional fitting) is used to align the combined survey sample with U.S. population targets. Raking iteratively adjusts respondent weights across multiple demographic variables simultaneously until the sample distribution converges with Census benchmarks. This is the standard approach used by major survey organizations including Pew Research Center, Gallup, and NORC.
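
A minimal sketch of the raking loop, assuming two illustrative margins. The category labels and target proportions below are hypothetical; production weighting uses the full variable set and Census-derived targets documented on this page.

```python
from collections import defaultdict

def rake(rows, margins, iters=100, tol=1e-8):
    """Iterative proportional fitting (raking).

    rows    : list of dicts, one per respondent, e.g. {"age": "30-39", ...}
    margins : {variable: {category: target_proportion}}, each margin summing to 1
    Returns a list of weights whose mean is 1.
    """
    n = len(rows)
    w = [1.0] * n
    for _ in range(iters):
        max_adj = 0.0
        for var, targets in margins.items():
            # current weighted total per category for this variable
            share = defaultdict(float)
            for r, wi in zip(rows, w):
                share[r[var]] += wi
            # multiplicative adjustment pulling each category toward its target
            factors = {c: targets[c] * n / share[c] for c in targets}
            for i, r in enumerate(rows):
                w[i] *= factors[r[var]]
            max_adj = max(max_adj, *(abs(f - 1.0) for f in factors.values()))
        if max_adj < tol:   # converged: weighted sample matches all margins
            break
    return w

# Illustrative data: 100 respondents, two weighting variables
rows = (
    [{"age": "18-29", "region": "South"}] * 20
    + [{"age": "30-39", "region": "South"}] * 50
    + [{"age": "30-39", "region": "West"}] * 30
)
margins = {
    "age": {"18-29": 0.30, "30-39": 0.70},
    "region": {"South": 0.60, "West": 0.40},
}
w = rake(rows, margins)
```

After convergence, weighted shares match both margins simultaneously, which is exactly the property that makes raking preferable to adjusting one variable at a time.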

Census Vintage

Population targets are drawn from the American Community Survey (ACS) 5-Year Estimates, most recent available year at time of weighting. [ACS vintage year to be confirmed and documented here after final weighting is applied.]

Weighting Variables

The following variables are used as weighting targets. All targets are specific to U.S. mothers (not the general population).

| Variable | Target Population |
| --- | --- |
| Age | Distribution of mothers by age cohort (18–29, 30–39, 40–49, 50+) |
| Race / Ethnicity | Non-Hispanic White, Non-Hispanic Black, Hispanic/Latina, Asian/Pacific Islander, Other/Multiracial |
| Census Region | Northeast, Midwest, South, West (4-region; 9-division available for subscriber crosstabs) |
| Education Level | Less than high school, High school diploma/GED, Some college, Bachelor's degree, Graduate degree |
| Household Income | Used as supplemental weight where response rate permits; see SES methodology note below |

Weight capping: Extreme weights are capped to limit variance inflation; sensitivity checks are conducted to ensure that capping does not materially distort population alignment.
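
Weight capping can be sketched as follows. The 5x-mean cap and the rescaling choice are illustrative assumptions, not the production parameters:

```python
def cap_weights(w, cap=5.0):
    """Cap weights at `cap` times the mean weight, then rescale so the
    total weight is preserved. Note: rescaling can nudge capped values
    slightly above the nominal cap, one reason sensitivity checks are
    run against the population targets after capping."""
    mean = sum(w) / len(w)
    capped = [min(wi, cap * mean) for wi in w]
    scale = sum(w) / sum(capped)   # preserve the total weight
    return [wi * scale for wi in capped]
```

A sensitivity check then recomputes the weighted category shares with the capped weights and compares them against the Census targets.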

Socioeconomic Status Methodology Note

Why We Use Education + Area Deprivation Index (ADI) as Our SES Measure

Count on Mothers uses a combination of education level and zip code–linked Area Deprivation Index (ADI) as our socioeconomic status measure, in addition to self-reported household income.

Self-reported income has well-documented limitations in survey research: non-response rates of 15–25% are typical, and wave-to-wave volatility introduces noise that undermines longitudinal comparability. Our approach — pairing education (stable, well-reported, and strongly predictive of economic conditions) with neighborhood-level deprivation data (ADI, derived from Census-linked geographic indicators) — produces a more stable and reliable proxy for socioeconomic conditions across waves.

This approach is consistent with contemporary practice in population health research and social survey methodology. It enables more confident subgroup comparisons across waves without the volatility introduced by income non-response.

Margin of Error

All margins of error are reported at the 95% confidence level.

| Sample | Margin of Error (±, 95% CI) |
| --- | --- |
| Full sample (n = 2,000–2,500) | ±2.2–2.5 percentage points |
| Key subgroups (n ≈ 500) | ±4.4 percentage points |
| Smaller subgroups (n ≈ 200) | ±6.9 percentage points |

Subgroup reporting threshold: Subgroup percentages are not reported when the unweighted n falls below 50. Results are flagged as "directional only" when unweighted n falls between 50 and 99.
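
The margins of error above follow the standard formula for a proportion at maximum variance (p = 0.5); a quick check, before any design-effect adjustment:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a proportion.
    Uses maximum variance (p = 0.5); no design-effect adjustment."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

round(moe(2000), 1)  # → 2.2
round(moe(500), 1)   # → 4.4
round(moe(200), 1)   # → 6.9
```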

Cross-partisan reporting: Political ideology subgroup analysis is a core CoM output. When findings are consistent across ideological groups, results are labeled "consistent across all political groups" — CoM's strongest credibility signal for policy and institutional audiences.

Effective Sample Size

Effective sample size (n_eff) is reported alongside unweighted n for all wave publications and institutional deliverables.

Post-stratification weighting increases the representativeness of the sample but introduces a design effect (DEFF): some respondents receive higher weights than others, which reduces statistical precision compared to a simple random sample of the same size. The effective sample size accounts for this:

n_eff = n / DEFF

where DEFF is estimated from the distribution of the weights. Effective sample size is reported in the methodology appendix of every wave report. Institutional subscribers receive n_eff alongside unweighted n and weighted n in the full data deliverable.
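
In practice the design effect is often estimated directly from the weights via Kish's approximation; a minimal sketch:

```python
def effective_sample_size(w):
    """Kish effective sample size: n_eff = (sum w)^2 / sum(w^2).
    Algebraically equivalent to n / DEFF with DEFF = 1 + CV^2 of the
    weights. Equals n when all weights are equal; shrinks as the
    weight distribution spreads out."""
    return sum(w) ** 2 / sum(wi * wi for wi in w)
```

For example, four respondents with equal weights yield n_eff = 4, while weights of (1, 1, 1, 3) yield n_eff = 3: the one heavily weighted respondent costs the sample a full respondent's worth of precision.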

Index Construction

The National Benchmark tracks five composite indices, each scored on a 0–100 scale. For all five indices: higher scores indicate better conditions (lower stress, stronger trust, greater access, safer environment). This directional convention is held constant across all waves.

Each index is constructed from a defined set of fixed survey items. Raw item responses are standardized, directionally aligned, and aggregated into a composite score using weighted averaging. Individual item weights within each index are documented in the methodology appendix and held constant across waves.
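
The standardize, align, and aggregate pipeline can be sketched as follows. The item names, response scales, and weights here are hypothetical stand-ins for the definitions documented in the methodology appendix:

```python
def index_score(responses, items):
    """Composite 0-100 index score from raw item responses.

    items: {item_id: (scale_min, scale_max, reverse, weight)}.
    Higher scores always indicate better conditions, matching the
    Benchmark's directional convention."""
    num = den = 0.0
    for item_id, (lo, hi, reverse, wt) in items.items():
        x = responses[item_id]
        scaled = 100 * (x - lo) / (hi - lo)   # standardize to 0-100
        if reverse:                            # directional alignment
            scaled = 100 - scaled
        num += wt * scaled
        den += wt
    return num / den                           # weighted average

# Hypothetical two-item index: stress is reverse-coded (high stress = low score)
items = {"stress": (1, 5, True, 2.0), "coping": (1, 5, False, 1.0)}
```

With these illustrative items, a respondent reporting minimum stress (1) and maximum coping (5) scores 100; the reverse pattern scores 0.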

The Five Indices

MCSI

Maternal Stress & Capacity Index

What it measures
Perceived stress, time scarcity, coping capacity, and resilience. Captures whether mothers have the bandwidth — cognitive, emotional, physical, financial — to meet the demands of daily life.
Why it matters
MCSI is the Benchmark's barometer. It is the operating condition through which all other index scores are interpreted. When MCSI declines, the significance of pressures measured by other indices is amplified. It is presented first in every wave report.

FEPI

Family Economic Pressure Index

What it measures
Household financial strain, childcare and work constraint, and healthcare affordability. Treats economic pressure as a persistent condition rather than an income threshold — capturing the squeeze families feel regardless of nominal income level.
Why it matters
Economic pressure on families is consistently undercounted when measured by income alone. FEPI captures the lived financial reality — including the specific costs of childcare, healthcare, and work constraints that are structurally concentrated in families with children.

ITAI

Institutional Trust & Accountability Index

What it measures
Maternal trust in schools, healthcare systems, technology platforms, and government — and mothers' assessment of whether these institutions act in families' interests. Measures perceived accountability and fairness alongside trust levels.
Why it matters
Institutional trust is a leading indicator for family system engagement. Low ITAI scores predict reduced uptake of services, avoidance behaviors, and civic disengagement. Cross-partisan patterns in ITAI are among CoM's most policy-actionable findings.

YECCI

Youth Environment & Safeguard Conditions Index

What it measures
AI and digital reliance among children, commercial design safeguards in digital environments, mothers' assessment of risk exposure climate for their children online and in consumer spaces, and their integrated judgment of whether today's youth environment is helping or harming children's cognitive, social, and emotional development.
Why it matters
CoM's published AI & Child Safety research (n=2,290, January 2026) established this as a domain where maternal concern is high across all political groups and where longitudinal tracking has direct policy relevance. YECCI makes this concern measurable and trendable.

CWASI

Child Wellbeing Access & Support Index

What it measures
Child mental health access, school support confidence, and the navigation friction families encounter when seeking services. Captures both availability and accessibility — including insurance barriers, wait times, and out-of-pocket costs — and institutional confidence in the school and insurance systems as gateways to mental, emotional, and behavioral health support.
Why it matters
CoM's Pulse Check 2025 (n=2,703) found that 23% of mothers who needed mental health care for their children could not access it — and 80% of those without sufficient access had private insurance. CWASI tracks this access gap systematically across waves.

Complete item-level composition for each index — including individual survey items, response scales, directional coding, and item weights — is documented in the full methodology appendix available to institutional subscribers.

Panel Integrity Protocols

As AI-generated and bot-produced survey responses become a growing threat to online panel quality industry-wide, Count on Mothers treats panel integrity as a competitive differentiator — not just a quality control step. The following protocols are applied to every wave.

Speed Screener

All survey completions are flagged if total response time falls below a minimum threshold calibrated to the instrument length. Wave 1 threshold: completions under 4 minutes are flagged for review.
Flagged responses are individually reviewed and removed from the dataset if they fail additional quality checks.

Open-Ended Response Quality Control

Wave 1 includes at least one open-ended response question. All open-ended responses are screened for: (a) AI-generated or templated language patterns; (b) copy-paste duplication across respondents; (c) off-topic or nonsensical content.
Responses failing this screen are removed from the dataset. The open-ended QC step provides a high-sensitivity signal for detecting synthetic respondents that closed-ended items alone cannot detect.

Attention Check Items

The Wave 1 instrument includes embedded attention check items — instructed-response questions (e.g., "For this question, please select [specific answer]") placed within the survey flow.
Respondents who fail attention checks are flagged. Respondents who fail more than one attention check are removed from the dataset prior to weighting.
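
The speed and attention-check rules can be sketched together; the record field names are illustrative:

```python
def quality_screen(respondents, min_seconds=240):
    """Apply the Wave 1 screening rules sketched here: drop anyone
    failing more than one attention check, and flag sub-4-minute
    completions for individual review (flagged records stay in the
    dataset pending that review)."""
    kept, flagged = [], []
    for r in respondents:
        if r["attention_fails"] > 1:
            continue                       # removed prior to weighting
        if r["duration_sec"] < min_seconds:
            flagged.append(r["id"])        # individually reviewed
        kept.append(r)
    return kept, flagged
```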

Community Panel Longitudinal Integrity

The Count community panel has structural integrity advantages over cold opt-in panels. Community members have established relationships with CoM over time, creating behavioral fingerprints that enable anomaly detection. Longitudinal wave-over-wave participation patterns are monitored for irregular activity. New community panel entrants for Wave 1 are subject to the same speed, open-ended, and attention check protocols as the paid panel respondents.

Why This Matters for Institutional Buyers

The methodological risk of AI-contaminated survey panels is not theoretical — it is actively degrading the quality of opt-in panel research industry-wide. Count on Mothers' blended panel architecture, combined with multi-layer integrity screening, produces a more defensible dataset than single-source cold panels. We document and publish these protocols because institutional buyers purchasing independent data should be able to evaluate what they are buying.

Instrument Stability

Wave 1 establishes the baseline. It is the foundation from which all trend analysis is built.

One wave = report.
Two waves = trend.
Three waves = infrastructure.

The Wave 1 core instrument consists of 25–35 fixed questions. These questions are held constant across waves. The stability of the core instrument is what makes longitudinal comparison possible — and what makes the Benchmark's value compound with each subsequent wave.
  • Fixed core questions: Do not change across waves without a documented rationale, a formal review by the research team and Methodology Advisory Board, and an overlap procedure (running both old and new wording in the transition wave on split samples) to establish trend continuity before retiring any item.
  • Response scales: Held constant within subdomains. Mixing scale types across waves introduces measurement error.
  • Index construction: Item composition and weighting within each index is documented and locked at Wave 1. Changes to index construction require the same overlap procedure as item changes.

Rotating Modules

Wave 1 includes a supplemental rotating module of up to 4 questions (plus an optional open-response field). Rotating module questions address time-sensitive topics and are positioned after the fixed core instrument. They are clearly labeled as supplemental in all published materials.
  • Independence from core scores: Rotating module responses are not incorporated into the five composite index scores. They are analyzed and reported separately.
  • Labeling: All findings from the rotating module are labeled "Supplemental — Wave [n] Rotating Module" in published materials and data deliverables.
  • Institutional add-on: Rotating module topic selection is eligible for institutional underwriting as an à la carte add-on. The CoM research team retains final authority over question wording, framing, and inclusion.

Research Team

Institutional subscribers and data partners are buying trust in the methodology. Research team members' credentials are displayed prominently in every wave report and institutional deliverable.

Jennifer Bransford
Founder, CEO
Survey methodology and research design. Responsible for instrument architecture, question development, index construction, and methodology documentation.

Melissa Lawrence, MPH
Director of Data Science
Statistical weighting, data cleaning, and quantitative analysis. Responsible for raking procedures, effective sample size calculation, and longitudinal dataset architecture.
Academic partnerships supporting Wave 1 methodology:

University College London (Methodology Advisory Board — Dr. Kaitlyn Regehr, Dr. Photini Vrikki, Dr. Katharine Smales).

Fieldwork Dates

Wave 1 Field Dates

Survey open: [TO BE ADDED AFTER FIELDWORK CLOSES]
Survey close: [TO BE ADDED AFTER FIELDWORK CLOSES]
Target field window: 14–21 days
This page is published before fielding begins and updated with exact open and close dates after the survey window closes.

Sample Growth & Longitudinal Transparency

The Benchmark is designed to grow in analytical power across waves.
Wave 1 establishes the baseline at n = 2,000–2,500 (±2.2–2.5 pp). As the CoM community panel grows through ongoing subscriber recruitment, subsequent waves will increase sample size — reducing margin of error and enabling more granular subgroup analysis at the regional, demographic, and political levels.
This growth is a designed feature of the Benchmark architecture, not a concession.
The paid panel provides a stable representativeness floor at any sample size. Community panel growth compounds the dataset’s analytical power without replacing the statistical rigor of the random sample foundation.
Wave-over-wave sample sizes and methodology changes are documented in a public change log maintained on this page beginning with Wave 1. Pre-Wave 1 instrument iteration history is documented in the institutional methodology appendix and is not trend-affecting. From Wave 1 forward, no change to core instrument questions, index construction, or weighting methodology is made without a formal overlap procedure and published rationale.

Methodology Changes

| Wave | Change Date | Change Summary | Rationale |
| --- | --- | --- | --- |
| Wave 1 | Spring 2026 | Baseline established. No prior waves to compare. | |

Questions about our methodology?

Count on Mothers • National Benchmark of U.S. Mothers • Wave 1 Methodology • Spring 2026