Learning Bias

Without an ethical base, will smart machines pick up preconceptions that are dangerous to patient care?

In the past, a machine would only “know” what it was programmed to compute. Today, we have machine learning — a type of AI that allows applications to become highly accurate at predicting outcomes without being explicitly programmed. Machine learning could reduce the repetitive tasks inherent in radiology, but at what cost? How does a machine make human-like judgments without absorbing human-like biases? How do machines learn to judge without learning to be judgmental? Therein lies the debate over humans programming unconscious bias into machine learning. The heart of the issue is that machines can absorb the biases embedded in their training data, magnify them, and turn them into false assumptions.

A tremendous incentive exists to innovate and lead the way in fair automation, and success comes down to the consideration of ethics by the humans engaged in the process.1 Success may be defined as machines assisting radiologists in finding patterns without introducing a series of new problems. Because machines identify large-scale patterns and apply them to specific situations, a smart machine might apply prejudicial reasoning to a specific population.

For example: An elderly person has a treatable form of cancer, but the machine has learned not to screen for early-stage cancers in the elderly because many such patients do not live long after treatment. Or, since breast cancer is caught more frequently and at an earlier stage in Caucasian women than in African-American women,2 the machine could learn to look only for later-stage breast cancers in minority women. These situations may seem extreme, but to make sound judgments, machines must not be given incomplete, faulty, or biased information. If they are, they may magnify those biases against a specific population.
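To make the mechanism concrete, here is a minimal sketch in Python (synthetic data only; the single “image feature,” the groups, and the effect sizes are all hypothetical) of how a training set dominated by one group can cost sensitivity in another:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def simulate(n, group):
    # One synthetic "image feature"; disease shifts it less in group B,
    # standing in for cancers that present later or more subtly.
    y = rng.integers(0, 2, size=n)
    shift = 2.0 if group == "A" else 1.0
    X = (y * shift + rng.normal(0.0, 1.0, size=n)).reshape(-1, 1)
    return X, y

# Training mix mirrors non-representative research: 95% group A.
Xa, ya = simulate(9500, "A")
Xb, yb = simulate(500, "B")
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Sensitivity (true-positive rate) per group on fresh cases.
for group in ("A", "B"):
    X, y = simulate(5000, group)
    sensitivity = model.predict(X)[y == 1].mean()
    print(f"group {group}: sensitivity = {sensitivity:.2f}")
```

Both groups carry the same disease burden, but the model’s operating point is tuned to the majority group, so a large share of group B’s cancers are missed. The failure is mechanical, not malicious, which is exactly why it is easy to overlook.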

According to Geraldine B. McGinty, MD, MBA, FACR, radiologist at Weill Cornell Medical College and vice chair of the ACR Board of Chancellors, “We struggle in healthcare with research that is not reflective of the diversity of the populations we serve. We also recognize the insidious impact of unconscious bias on the care we deliver. How tragic would it be if we took what promises to be a truly disruptive innovation and allowed it to be subject to the same biases that have limited us in the past?”

C. Matthew Hawkins, MD, director of pediatric interventional radiology at Children’s Healthcare of Atlanta at Egleston, says, “It will be important for radiologists and developers to gain keen insight into the biases that we have and to develop machine learning algorithms that limit incorporation of those biases.” Hawkins adds, “For now, our profession has to focus on developing algorithms that incorporate the sum of the knowledge that we possess as a collective community, and mitigate the bias that is inherent in much of the expert, consensus-driven learning materials.”
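One mitigation of the kind Hawkins describes can be sketched under the same synthetic setup as above (the groups, the feature, and the weighting rule are again hypothetical): reweight training samples so a small group is not simply outvoted by a large one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def simulate(n, group):
    # Same synthetic setup as before: disease shifts the feature less in B.
    y = rng.integers(0, 2, size=n)
    shift = 2.0 if group == "A" else 1.0
    X = (y * shift + rng.normal(0.0, 1.0, size=n)).reshape(-1, 1)
    return X, y

Xa, ya = simulate(9500, "A")
Xb, yb = simulate(500, "B")
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])

# Weight each sample inversely to its group's share of the training set,
# so both groups exert equal pull on the decision boundary.
weights = np.concatenate([np.full(9500, 1.0), np.full(500, 9500 / 500)])
model = LogisticRegression().fit(X, y, sample_weight=weights)

Xt, yt = simulate(5000, "B")
print(f"group B sensitivity, reweighted: {model.predict(Xt)[yt == 1].mean():.2f}")
```

Reweighting is only one lever, though; it cannot repair labels that are themselves biased, which is the deeper problem Hawkins flags in consensus-driven teaching materials.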

The ACR Data Science Institute™ (DSI) empowers the advancement, validation, and implementation of AI in medical imaging for improved patient care. DSI efforts to advance the development of safe and effective AI solutions for medical imaging and the radiological sciences are picking up speed. At the same time, the ACR is attending to the safety, efficiency, and effectiveness of AI: standardized data sets for training and testing, measurement of effectiveness and outcomes, validation and certification of algorithms, and clarification of patient-consent issues and appropriate methods — all of which will help keep bias from sneaking into datasets.

Adds McGinty, “When we are defining ‘normal’ for a data set, it’s imperative that that ‘normal’ reflect a diverse population. Otherwise we may give too much or not enough care. If our use cases do not encompass the health challenges faced by our entire population, we won’t fully realize the potential of AI to improve healthcare.”
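A first-pass check in the spirit of McGinty’s point might be as simple as comparing a training set’s demographic mix against the population a model will serve; the group names and the five-point tolerance below are hypothetical:

```python
from collections import Counter

# Target shares for the population a model will serve (hypothetical).
REFERENCE = {"group_A": 0.60, "group_B": 0.13, "group_C": 0.19, "group_D": 0.08}
TOLERANCE = 0.05  # flag any group off by more than 5 percentage points

def audit(demographics: list) -> None:
    """Print each group's share of the training set against its target."""
    counts = Counter(demographics)
    n = len(demographics)
    for group, expected in REFERENCE.items():
        observed = counts[group] / n
        flag = "" if abs(observed - expected) <= TOLERANCE else "  <-- check"
        print(f"{group}: {observed:6.1%} in data vs {expected:6.1%} in population{flag}")

# Example: a data set that over-samples group_A and starves group_D.
audit(["group_A"] * 800 + ["group_B"] * 100 + ["group_C"] * 90 + ["group_D"] * 10)
```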

Bibb Allen Jr., MD, FACR, diagnostic radiologist at Trinity Radiology in Birmingham, Ala., and chief medical officer of the ACR DSI, says, “Machines are not biased. The bias in machine learning is introduced by the training datasets and then multiplied as iterations of the model are developed. So one of the DSI’s priorities will be to help developers understand the risk for unintended bias.”
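The multiplication Allen describes can be sketched as a feedback loop. In the toy simulation below (the score distributions, the prevalence, and the labeling rule are all invented), each model flags only as many cases as the previous round’s data suggest exist, and whatever it misses is filed as negative in the next training set:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_PREVALENCE = 0.10   # actual disease rate in a subgroup
N = 200_000              # cases per retraining round

apparent = TRUE_PREVALENCE  # what the first training set suggests

for round_num in range(1, 6):
    is_pos = rng.random(N) < TRUE_PREVALENCE
    score = np.where(is_pos, rng.normal(2.0, 1.0, N), rng.normal(0.0, 1.0, N))
    # The model flags only as many cases as the data says exist...
    threshold = np.quantile(score, 1.0 - apparent)
    flagged = score > threshold
    # ...and whatever it misses is recorded as negative, shrinking the
    # apparent prevalence that the next model is trained against.
    apparent = (is_pos & flagged).mean()
    recall = flagged[is_pos].mean()
    print(f"round {round_num}: recall = {recall:.2f}, apparent prevalence = {apparent:.3f}")
```

Each round the model is graded against labels it helped create, so the initial gap widens rather than washing out.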

McGinty concurs that it is key to take a broad look at the problems we want to address with machine learning and to require data sets that are representative of the populations affected by those healthcare challenges. “Bringing diverse voices into the planning process for our efforts is also essential,” she says. “I’m proud that the group that advises our ACR DSI comprises stakeholders from across the radiology community but, most importantly, includes the voices of our patients.”

Implying that machines can be trusted to do the footwork, Hawkins concludes, “Interpreting patterns within a clinical context and determining what imaging test will best serve patients in specific scenarios is where we add value. We should be doing those things all of the time.” And machines may just allow that to happen.

Hawkins adds, “I see AI as another step that allows radiologists to spend more time doing the tasks that require our level of training.” Perhaps in the future, radiologists will need to teach their machines when they can be helpful and when they should keep their opinions to themselves.

By Dara L. Fox, freelance writer, ACR Press

1. Center for Democracy and Technology. Preparing for the future of artificial intelligence: in response to White House OSTP RFI. http://bit.ly/CDT_AI. Accessed Oct. 16, 2017.
2. Dickens JL. Breast cancer awareness: what it means for African-American women. HuffPost. http://bit.ly/breast_canceraware. Oct. 30, 2017. Accessed Oct. 31, 2017.
