
How Did We Do in Year One?

The results are in for radiology-specific Quality Payment Program performance for 2017.


The first performance period of the CMS Quality Payment Program (QPP), affecting 2019 payments, took place in 2017. In February 2019, I wrote about broad multispecialty trends under the QPP for 2017, including scoring and performance under the Merit-Based Incentive Payment System (MIPS). In that column, I reported that the median score across all eligible clinicians was 89 and that 93 percent of eligible clinicians received some bonus payment. Simply put, physicians performed well.

In this column, I will focus on the recently released CMS specialty- and geography-specific data. I will comment on two of the performance categories used to determine our final MIPS score: Quality and Improvement Activities (IA).

 

MIPS 2017 performance

Our final score in 2017 was mostly made up of the Quality Performance category. Success required the reporting of six quality measures, many of which are categorized within specialty measure sets, including diagnostic radiology (DR), interventional radiology (IR), and radiation oncology (RO). For instance, the DR specialty set included Measure 145 (reporting fluoroscopy dose or time), the IR specialty set included Measure 265 (biopsy follow-up), and the RO specialty set included Measure 382 (dose limits to normal tissues). The data suggest that radiology reported a small percentage of our own measures, commonly choosing measures outside our specialty. Only 53 percent of DRs reported at least one measure within our set; for ROs the figure was less than 25 percent, and for IRs it was less than 10 percent (7 percent, to be exact).

Why would we avoid our own measures in this manner? DRs in multispecialty groups are likely reporting measures across other specialties via the group reporting option. IRs and ROs may be doing the same or reporting DR measures.

This specialty-specific circumstance has negative consequences. First, the optics are not great. We continually argue for better measures and dedicate considerable resources to creating them. That task becomes more difficult when policymakers see our own specialists choosing not to participate in the performance and reporting of our own measures. Second, reporting experience is necessary for CMS to establish the benchmarks under which we are scored. Measures without reporting data, by definition, lack benchmarks. And measures for which only the highest-performing sites report generally have very high benchmarks. This suggests that these measures have little potential for quality improvement, which may be true for the sites reporting but may not be the case for the sites that do not. We simply do not know.

Regardless of the cause, the response of CMS is to classify these high-performing measures as “topped out.” Measures that fall into either category, with no benchmark or topped out, have their maximum potential points capped. For example, topped-out measures are capped at 7 points (out of 10), while measures with no benchmark are capped at 3. The solution is for us to report on all our measures, even if they are not our highest-performing ones. CMS chooses the six measures that yield the highest Quality score, so there is no downside to this action. Nationally, this establishes the benchmarks and reporting rates that demonstrate our engagement. Locally, it shows our associates that we are engaged and not simply relying on other specialties to carry us.
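
To make the cap arithmetic concrete, here is a minimal sketch in Python of the logic described above. The measure names, point values, and benchmark statuses are all hypothetical, and the sketch ignores the decile-based benchmarking CMS actually applies; it only illustrates the 7-point and 3-point caps and why reporting more than six measures carries no downside.

    # A minimal sketch (not CMS's actual scoring engine) of the 2017
    # MIPS Quality caps described above. All measure names, point
    # values, and benchmark statuses below are hypothetical.

    CAPS = {"normal": 10, "topped_out": 7, "no_benchmark": 3}

    def capped_points(raw_points, benchmark_status):
        # A measure can never contribute more than its cap allows.
        return min(raw_points, CAPS[benchmark_status])

    # Hypothetical practice: eight reported measures, more than the
    # six required.
    reported = [
        ("measure_1", 10.0, "normal"),
        ("measure_2", 9.0, "topped_out"),    # earns at most 7
        ("measure_3", 9.0, "no_benchmark"),  # earns at most 3
        ("measure_4", 8.5, "normal"),
        ("measure_5", 7.0, "normal"),
        ("measure_6", 6.0, "normal"),
        ("measure_7", 5.0, "normal"),
        ("measure_8", 4.0, "normal"),
    ]

    # Only the six highest-scoring measures count, so reporting extra
    # measures cannot lower the score -- there is no downside.
    scores = sorted((capped_points(p, s) for _, p, s in reported),
                    reverse=True)
    best_six = scores[:6]
    print(sum(best_six), "of", 6 * CAPS["normal"], "possible Quality points")

Run as written, the hypothetical practice earns 43.5 of 60 possible Quality points, with the topped-out and no-benchmark measures contributing less than their raw performance would suggest.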

IA represented 15 percent of our final score in 2017. The three most commonly reported IAs by DRs were 24/7 access, formal quality improvement methods, and follow-up on patient experience — all of which reflect favorably on our profession. The 2017 QPP data suggest that radiology participated at a high rate and scored well. However, we stand to improve in the reporting of our own quality measures.


 By Ezequiel Silva III, MD, FACR, Chair
