Measure Selection

The path a measure takes for selection and implementation depends on the program, as not all measures go through the pre-rulemaking and rulemaking process. Section 3014 of the Patient Protection and Affordable Care Act (ACA) requires that measures subject to this section go through the pre-rulemaking and rulemaking process. For a list of these programs and information on their measure needs and priorities, see the annual Measures Under Consideration (MUC) List program-specific measure needs and priorities on the Pre-Rulemaking page. Other quality initiatives, such as the Health Insurance Marketplace Quality Initiatives, do not go through pre-rulemaking and follow another path.

CMS Measure Selection Criteria

CMS measure selection criteria help ensure that each measure

  • Supports CMS and national health care priorities, including the prioritization of outcome measures, patient-reported outcome measures, digital measures, and equity
  • Is responsive to specific program goals and statutory requirements
  • Addresses an important condition or topic with a performance gap and has a strong scientific evidence base to demonstrate the measure can lead to the desired outcomes and/or more affordable care 
  • Has written consent for any proprietary algorithms needed for measure production
  • Promotes alignment with CMS program attributes and across Department of Health and Human Services (HHS) programs and health care settings
  • Identifies opportunities for improvement (e.g., not topped out; see the illustrative screening sketch after this list)
  • Does not result in negative unintended consequences (e.g., overuse or inappropriate use of care or treatment, limiting access to care)
  • Does not duplicate another measure currently implemented in one or more programs
  • If an electronic clinical quality measure (eCQM), has a CMS ID (found in MADiE) and is expressed in Health Quality Measure Format using the Quality Data Model and Clinical Quality Language
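
The "topped out" criterion above can be screened quantitatively. As one illustration only (the exact definition varies by program and rule), some CMS program rules have described a measure as topped out when the 75th and 90th percentiles of entity-level performance are statistically indistinguishable and the truncated coefficient of variation is no more than 0.10. The Python sketch below applies that kind of screen to an array of accountable-entity scores; the thresholds, the two-standard-error yardstick, and all names are illustrative assumptions, not a prescribed CMS algorithm.

```python
import numpy as np

def topped_out_check(scores, cv_threshold=0.10, trunc=0.05):
    """Illustrative topped-out screen for accountable-entity scores.

    Assumptions (not an official CMS algorithm):
      * the 75th and 90th percentiles are treated as statistically
        indistinguishable when they differ by no more than ~2 standard
        errors of the mean, and
      * the truncated coefficient of variation must be <= cv_threshold.
    """
    scores = np.asarray(scores, dtype=float)
    p75, p90 = np.percentile(scores, [75, 90])

    # Truncated coefficient of variation: drop the top and bottom `trunc`
    # fraction of scores before computing std/mean.
    lo, hi = np.quantile(scores, [trunc, 1 - trunc])
    truncated = scores[(scores >= lo) & (scores <= hi)]
    truncated_cv = truncated.std(ddof=1) / truncated.mean()

    # Rough yardstick for "statistically indistinguishable" percentiles.
    se = scores.std(ddof=1) / np.sqrt(len(scores))
    indistinguishable = (p90 - p75) <= 2 * se

    return {
        "p75": p75,
        "p90": p90,
        "truncated_cv": truncated_cv,
        "topped_out": bool(indistinguishable and truncated_cv <= cv_threshold),
    }

# Example: most entities already perform near the ceiling.
rng = np.random.default_rng(0)
near_ceiling = np.clip(rng.normal(0.97, 0.01, size=500), 0, 1)
print(topped_out_check(near_ceiling))
```

A measure flagged by a screen like this would warrant a closer look at whether meaningful room for improvement remains before it is proposed for a program.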

Fully Developed Measure

To meet these selection criteria, the measure developer must complete testing of the measure. This means the measure developer has completed

  • Person/encounter-level (data element-level) reliability and validity testing, when appropriate, for each critical data element, and the measure specifications do not need changes based on the results. Testing may be empiric or may reference external or previous testing (e.g., an established data element library such as the CMS Data Element Library (DEL) or the eCQM Data Element Repository (DERep), or the literature).

AND

  • Accountable entity-level (measure score-level) reliability and validity testing, when appropriate, and the specifications do not need changes based on the results. Measure developers are encouraged to report accountable entity-level reliability results by decile (rather than just the median) to detect differences in reliability across the target population size distribution (see the illustrative sketch at the end of this section).
  • Completion of face validity testing as the sole type of validity testing does not meet the criteria for completion of testing for a fully developed measure. However, face validity is acceptable for new measures (i.e., those not currently in use in CMS programs or existing measures undergoing substantive changes) that are not eCQMs. Instead of Likert-scale-type assessments of face validity, measure developers are encouraged to develop a logic model consisting of inputs, activities, outputs, and outcomes to describe the associations between the health care structures and processes and the desired health outcome(s). The logic model should indicate the structure(s), process(es), and/or outcome(s) included in the measure. A detailed logic model will help the measure developer identify appropriate constructs for future empiric validity testing.

AND

For measures based on survey data or patient-reported assessment tools, including patient-reported outcome-based performance measures (PRO-PMs), the measure developer has tested the reliability and validity of the survey or tool, and the survey or tool does not need changes based on the results. For measures based on assessment tools, the measure developer must have completed reliability and validity testing for each critical data element and completed testing of the assessment tool itself, with no changes to the tool needed based on the results.
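
To make the decile reporting suggestion above concrete, the sketch below shows one common way to approximate accountable entity-level (signal-to-noise) reliability for a rate-based measure and summarize it by decile of denominator size. This is a minimal illustration under simplifying assumptions (a binomial approximation of within-entity variance); it is not a prescribed CMS method, and all function and variable names are hypothetical.

```python
import numpy as np
import pandas as pd

def reliability_by_decile(numerators, denominators):
    """Illustrative signal-to-noise reliability by denominator-size decile.

    Sketch only: reliability_i = between-entity variance /
    (between-entity variance + within-entity variance_i), where the
    within-entity variance of an observed rate is approximated as
    p_i * (1 - p_i) / n_i.
    """
    df = pd.DataFrame({"num": numerators, "den": denominators})
    df["rate"] = df["num"] / df["den"]

    # Within-entity (noise) variance of each entity's observed rate.
    df["within_var"] = df["rate"] * (1 - df["rate"]) / df["den"]

    # Between-entity (signal) variance: total variance of observed rates
    # minus the average noise variance, floored at zero.
    between_var = max(df["rate"].var() - df["within_var"].mean(), 0.0)

    df["reliability"] = between_var / (between_var + df["within_var"])

    # Group entities into deciles of denominator size and report the
    # median reliability within each decile.
    df["size_decile"] = pd.qcut(df["den"], 10, labels=False, duplicates="drop") + 1
    return df.groupby("size_decile")["reliability"].median()

# Example with simulated entities of varying denominator size.
rng = np.random.default_rng(1)
dens = rng.integers(20, 2000, size=300)
true_rates = rng.beta(40, 10, size=300)      # hypothetical "true" performance
nums = rng.binomial(dens, true_rates)
print(reliability_by_decile(nums, dens))
```

Reporting results this way makes it easy to see, for example, that reliability may be acceptable overall yet poor for the smallest entities, which is exactly the pattern a single median would hide.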

Pre-Rulemaking Process

Section 3014 of the ACA mandated the establishment of a federal pre-rulemaking process for selecting quality and efficiency measures for specific programs within HHS. The pre-rulemaking process requires HHS to consider input from multiple interested parties on quality and efficiency measure selection. To meet these requirements, CMS develops a MUC List. The CMS consensus-based entity (CBE) convenes the interested parties described in Section 3014 to provide input to HHS on the list of measures for use by CMS. By statute, HHS and CMS must consider this input.

Measures Under Consideration

Over the past few years, CMS has articulated a number of measure selection criteria in its final rules for various programs. The term “measure selection” typically applies to determining whether a measure should be included in a measure set for a specific program, while “measure evaluation” applies to assessing the merits of an individual measure outside the context of a specific program. CMS has established a set of measure selection criteria so HHS can develop the MUC List for qualifying programs and make it publicly available by December 1 of each year. CMS program staff and leadership operationalize these selection criteria when deciding which measures to place on the MUC List for review.

The annual pre-rulemaking timeline is available on the Pre-Rulemaking page.

After CMS opens the CMS Measures Under Consideration Entry/Review Information Tool (MERIT) (login required), CMS gathers specifications and supporting information on new candidate measures. CMS publishes guidance on the Pre-Rulemaking page and may host educational webinars to kick off the official MUC season. Visit the Pre-Rulemaking page for more information.

Applying the measure selection criteria, CMS develops the MUC List. CMS may ask measure developers to provide details on the measures to help CMS develop the MUC List. CMS then provides this list to the CBE.

MUC List Recommendations

The interested parties convened by the CMS CBE provide their input to HHS on the annual list of quality and efficiency measures under consideration by one or more Medicare programs in a recommendation report due by February 1 of each year. Each annual report can be found on the Pre-Rulemaking page. CMS strongly encourages measure developers with measures on the MUC List to attend these recommendation meetings.

CMS Considers MUC List Recommendations for Final Selection

After CMS receives the recommendations on the MUC List, a deliberation process begins. CMS determines whether to include the measures through the federal rulemaking process. The measure selection criteria used during development of the MUC List are the same criteria used for rulemaking. HHS must consider the recommendations on the MUC List.

CMS Rulemaking Processes

After CMS completes the pre-rulemaking process and selects measures for potential inclusion in rulemaking, the next steps in the cycle are

  • Proposed rules: CMS writes the proposed rules and publishes them in the Federal Register. A proposed rule is generally available for public comment for 60 days.
  • Final rules: CMS considers the received comments and publishes the final rules in the Federal Register.

CMS treats existing measures that undergo substantive changes as new measures.

Examples of Substantive Changes

Changes to the

  • Intent of the measure
  • Numerator and/or denominator inclusion criteria, denominator exclusions and exceptions, or numerator exclusion criteria
  • Methodology previously published in a final rule (methodology in this case refers to measure calculation/measure scoring methodology)
  • Cohort, whether a significant increase or decrease
  • Science impacting the primary medication, dosage, or medical device

Paperwork Reduction Act Guidance for CMS Measure Developers

CMS measure developers may be required to prepare documentation for an Information Collection Requirement (ICR) under the Paperwork Reduction Act (PRA). CMS measure developers can find more information about ICRs and the PRA in the Blueprint Contractual Guidance and Considerations found on the Measure & Instrument Development and Support (MIDS) Library. The MIDS Library is a protected website restricted to CMS and CMS MIDS contractors. MIDS contractors should email MMSsupport@battelle.org to request access to the MIDS Library. Once eligibility is confirmed, the MIDS contractor will receive an invitation to the site.

Resource

CMIT Data Elements Substantive Change
