
Quality Measurement at CMS

Measure Development Principles

  • General principles serve as overarching guidelines for developing measures that meet the standards and rigor expected of a meaningful, valid, and useful measure. 

    Measure development should 

    • Focus on what is best for persons and most meaningful to persons, caregivers, and measured entities.
    • Explicitly align with CMS goals and objectives.
    • Align across payors, including Medicare, Medicaid, the Health Insurance Exchanges, other federal partners, and private payors, to the extent feasible based on data availability for each payor type, differences in populations served, and level of analysis.
    • Address a performance gap where there is known variation in performance, not just a measure gap. 
    • Use resources efficiently in a rapid-cycle fashion, including using process improvement techniques, such as Lean and human-centered design, and considering respecification instead of de novo measure development.
    • Encourage collaboration among measure developers and share best practices/new learnings freely.
    • Reorient and align around person-centered outcomes that span clinical settings, which may require different “versions” of the same measure (i.e., different cohorts, but same numerator). Test each of these setting-specific versions for reliability and validity.
    • Promote value-based care that produces quality outcomes. 
    • Focus on outcomes (including patient-reported outcomes), safety, patient experience, care coordination, appropriate use/efficiency, and/or cost.
    • Identify disparities and promote health equity in the delivery of care.
    • Guard against negative unintended consequences of measure implementation, including overuse and underuse of care.
    • Engage stakeholders early and often during the measure development process.
    • Strive to reduce clinician and person burden in data collection and measure reporting. 
    • Ensure measures meet the specific program’s stated intent, goals, and objectives. 
    • Use digital sources and avoid paper-based sources.
       
  • Measure developers should apply technical principles when developing measures for consideration for quality reporting and value-based purchasing programs.

    Measure developers should 

    • Develop a rigorous business case for an evidence-based measure concept—a critical first step in the development process.
    • Prioritize digital data sources and digital measures (e.g., electronic health records, registries, electronic administrative claims), where appropriate, and eliminate dependency on data from chart abstraction. 
    • Maintain a focus on iterative testing using both real and synthetic data.
    • Consider approaches to aggregate multiple data sources (e.g., hybrid measures) to achieve the most accurate assessment of quality until interoperability is universal.
    • Define outcomes, risk factors, cohorts, inclusion criteria, and denominator and/or numerator exclusion criteria based on clinical and empirical evidence.
    • Judiciously select denominator/numerator exclusions to capture as broad a person population as possible and appropriate. 
    • Consider developing a paired measure to capture and measure the care excluded persons received if a significant number of persons are excluded from the measure calculation (e.g., for all persons seen in the emergency department, if those persons who were transferred directly to another acute care facility for tertiary treatment are excluded, a paired measure would address those persons who were transferred out of the original facility).
    • Develop risk adjustment models to distinguish performance among measured entities rather than predict person outcomes, if appropriate.
    • Include measure stratification and risk adjustment approaches to show differences in quality or outcomes among demographic groups and allow for quality comparisons between measured entities after considering differences in person characteristics that would not influence the care received, if appropriate. 
    • To avoid duplication, thoroughly review existing measures and harmonize measures, methodologies, data elements, and specifications, when applicable and feasible.
    • Develop each measure with sufficient statistical power to detect and report statistically significant differences in measured entity performance.
    • Consider strategies, such as partial pooling, that enable clinicians in smaller practices and low-volume facilities to report a measure reliably.
    • Strive to develop measures that can progress to multi-payor applicability using all-payor databases where available. 
    • To minimize clinician burden, analyze and factor in the clinical workflow required to ensure seamless data flow from structured fields in digital sources for measure calculation, e.g., electronic clinical quality measures.
    • Using the Interoperability Standards Advisory as a guide, prioritize use of interoperable data elements (e.g., United States Core Data for Interoperability) and data exchange standards.
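
    The partial pooling mentioned above can be sketched as a simple volume-weighted shrinkage of each facility’s raw rate toward the overall rate, so that low-volume facilities get more stable estimates. This is an illustrative sketch only, not a CMS specification; the function name, data, and prior-strength parameter are hypothetical.

    ```python
    # Hypothetical sketch of partial pooling: blend each facility's raw
    # rate with the overall rate, weighting by facility volume so that
    # low-volume facilities are shrunk more strongly toward the mean.
    # The prior_strength value is illustrative, not a CMS parameter.

    def partially_pooled_rates(events, volumes, prior_strength=20.0):
        """Return shrunken rates for each facility: a weighted blend of
        the facility's raw rate and the overall (grand mean) rate."""
        overall = sum(events) / sum(volumes)       # grand mean rate
        pooled = []
        for e, n in zip(events, volumes):
            raw = e / n
            w = n / (n + prior_strength)           # more volume -> less shrinkage
            pooled.append(w * raw + (1 - w) * overall)
        return pooled

    # A 5-case facility is pulled strongly toward the overall rate,
    # while a 500-case facility's estimate barely moves.
    rates = partially_pooled_rates(events=[1, 60], volumes=[5, 500])
    ```

    Under this weighting, a facility with only 5 cases reports an estimate dominated by the overall rate, while a 500-case facility retains essentially its own observed rate, which is the intended reliability trade-off.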
       
Last Updated: May 2022