Resolve to Evolve

Radiologists boost patient care through the peer review process.

From our earliest days in elementary school to our careers on teams of professionals, most of us want to feel we’re an integral part of a whole. According to some biologists, it’s not just something we’re taught, either: it’s a big part of our evolutionary history.

And an important element in group dynamics is concern for the group itself. A team of Harvard biologists recently advanced a theory that altruism emerged among humans as a way to protect social groups. In other words, when people judge it important to be included in a group, they will act altruistically to advance its collective needs.1

This view holds as true in the workplace as it does in society at large. Radiologists, like professionals in other fields, are often loath to take actions that might distance them from the group. It can be difficult to point out weaknesses in a colleague’s work for many reasons, including an aversion to making peers look bad in front of others and a fear of retribution. But while this attitude may preserve a protective culture at work, it can be detrimental to quality patient care. Many radiologists have come to understand that, as high-value health care takes prominence over the volume of images read, delivering outstanding patient care is more crucial than ever. One way to establish such a standard of excellence is through peer review.

High Standards

Peer review has long been an important component of many radiologists’ day-to-day work. The activity became more highly standardized in the past decade after the AMA and the American Board of Medical Specialties put forth maintenance of competence requirements (synonymous with maintenance of certification requirements) meant to ensure that physicians practice with “the expected level of safety and skill.”2 In addition, in 2009, The Joint Commission introduced the Ongoing Professional Practice Evaluation and Focused Professional Practice Evaluation. These tools called for physicians to be routinely monitored for professional competency and established strict guidelines for the credentialing and privileging of new staff members.3

Many radiology departments utilize peer review to meet these requirements. Richard J. Woodcock Jr., MD, radiologist at Northwest Radiology Consultants, PC in Atlanta, cites several advantages to implementing a robust peer review program. According to Woodcock, such an effort “reduces anxiety related to inter-observer variation and substandard performance by using a standardized process rather than an informal discussion. It also raises awareness about the importance of quality” to both radiologists and the technical facilities with whom they contract.

Although the reasons for adhering to a peer review system may seem self-evident to many, some radiologists bristle at having to partake in a process that, from their perspective, takes time away from reading images. In addition, explains Jonathan D. Clemente, MD, vice chief of the radiology department at Carolinas Medical Center and director of quality for Charlotte Radiology, both in Charlotte, some radiologists may see peer review in a negative light because, “depending on how peer review data is presented, it could be divisive within a radiology group, especially if used improperly in a punitive or judgmental manner.”

But if an organization establishes a peer review program that fires on all cylinders, it can put itself in a position of strength. “Hospitals and, increasingly, other technical facilities are ahead of physician groups in embracing and requiring quality programs,” notes Woodcock. “This forms an important part of accreditation for them, and increasingly will be needed to justify or improve reimbursement. Demonstrating that your group has a working program puts the radiology group in an advantageous position and satisfies the facility’s need for quality assurance.”

RADPEER Enters the Picture

The ACR’s peer review solution, RADPEER™, emerged in part to address these concerns. As physicians’ adherence to standards came under increasing scrutiny, the ACR convened a special patient safety task force to evaluate maintenance of competence and peer review, among other activities. The task force concluded that, to meet one of the requirements of maintenance of certification, a peer review program “must be national, uniform in structure and function across practices, accurate, facile, nonpunitive, and able to be integrated into a facility’s quality assurance program.”2 As a result, a pilot of the RADPEER program launched in 2002.

RADPEER is an assessment tool for evaluating interpretation quality. It resembles a double-reading process but is incorporated into daily workflow for ease of use. Clemente asserts that participating in a peer review program has enhanced the quality of his group’s reads. “Our main source of peer review data, RADPEER, is integrated into our PACS through a third-party plug-in with minimal disruption to workflow,” he says.

And RADPEER, when used as part of a wider peer review effort, aids in maintaining confidentiality. “When a ‘miss’ is identified,” continues Clemente, “the individual is notified confidentially of the error without being told who submitted the peer review. If there is a concern that the case may fall outside the standard of care, we show it to several other radiologists (preferably outside of their radiology subspecialty) for their opinion. All of this is done anonymously. The reviewing radiologists do not know the name of the radiologist involved or the patient.”

Building a Better System

Although RADPEER can prove an invaluable component of a strong peer review setup, supplemental enhancements can make the overall process more comprehensive. For example, the radiologists at the Austin Radiological Association in Austin, Texas, have implemented an innovative way of segmenting cases that originate from different sources. Christopher R. Richards, MD, medical director of the Austin Radiological Association, describes how his group uses RADPEER to randomly review past studies while using a system called Blue Card to catch quality control issues that arise outside of random peer review.

“If a referring physician comes to us and says that one of our radiologists missed an important finding outside of random peer review,” says Richards, “then that event gets entered into the Blue Card system.” One reason for treating these types of potential misses separately, explains Richards, is that, occasionally, a radiologist might be targeted due to some form of bias on the referring physician’s part.

In addition to the objectivity afforded by seeing one’s misses quantified through an instrument like RADPEER, Richards attests to the value of setting up a quality assurance committee to address radiologist errors. The Austin Radiological Association holds contracts with two different hospital systems and has set up a peer review committee for each one. Each committee is staffed by six radiologists across a range of subspecialties. In Richards’ experience, when a radiologist is brought before one of the committees as the result of an error, it is difficult for the radiologist in question to argue with six reviewers who unanimously agree that an error occurred. “It’s hard to say that a half dozen people are all wrong,” asserts Richards.

Woodcock agrees. “Using multiple peers to re-review errors to confirm that the disagreement of interpretation is a miss or misinterpretation, and not a disagreement of opinion, is critical,” he says. “So when a case is reviewed, if there is an error deemed to be clinically relevant, two other peers will confirm it. The standardized system includes the opportunity for suggestions for improvement if there are multiple significant errors committed by the same primary reader.”

What happens after it is determined that errors have occurred? Clemente’s group designs a plan of action for those with an unacceptably high rate of errors. “Say for example that a radiologist has missed three cases of a specific abdominal disease,” notes Clemente. “We might have that person review and document 25 proven cases of that disease entity, review a chapter on that disease in a major radiology textbook, and prepare an educational presentation for all of the radiologists.”

Remediation has worked well over the past decade for the Austin Radiological Association. But while focusing on high-level patient care has borne fruit, other incentives have proven key to the group’s success. Every quarter, the group offers a monetary incentive to encourage staff radiologists to engage in the peer review process. To earn the stipend, radiologists must complete three peer reviews per day. “You have to offer a little bit of a carrot,” Richards explains.

Radiologists are only human. They make mistakes and, for the most part, they display the very human attribute of wanting to make their colleagues look good. But these realities must be weighed against the end goal of providing the highest level of patient care possible. A good way to achieve this balance is through the implementation of a rigorous peer review process, which respects participants’ privacy while keeping patients at the very center of care.


By Chris Hobson

 

ENDNOTES

1. Weintraub P. “#3: E.O. Wilson’s Theory of Altruism Shakes Up Understanding of Evolution.” Discover. http://bit.ly/GroupInclusion. Accessed January 9, 2013.
2. Borgstede J, Lewis RS, Bhargavan M, Sunshine JH. “RADPEER Quality Assurance Program: A Multifacility Study of Interpretive Disagreement Rates.” JACR 2004;1(1):59–65.
3. Kaewlai R, Abujudeh H. “Peer Review in Clinical Radiology Practice.” AJR 2012;199(2):158–62.
