Translating What’s Relevant to Radiologists
The ACR DSI is putting radiology’s priorities into a language AI developers understand.
When algorithms are asked to go beyond identifying images of cats and dogs and solve more complex problems, the data scientists developing them must first ask three questions: What problems would be valuable to solve? Is AI a viable way to solve them? And is enough data available for supervised learning?
Those are tough questions to answer when it comes to healthcare. And they aren't questions machine learning experts are prepared to answer alone. Left to its own devices, industry has often defaulted to identification algorithms that are easier to develop but of little use in clinical practice.
“Developers of AI will typically be savvy with the technical computing aspects of a problem, but have no clue what is of value in radiology,” says Jay W. Patti, MD, of Mecklenburg Radiology Associates in Charlotte, N.C., and the chair of an ACR Data Science Institute™ (DSI) use case panel. “We are translating what is relevant to radiologists — and important to patients — into a language a developer can understand.”
Patti chairs the panel focusing on musculoskeletal (MSK) radiology, one of 10 specialty areas defined by working groups established by the DSI. Within months of forming in 2017, the DSI had recruited or been contacted by 85 volunteers (including Patti). Their task was to establish well-defined radiology problems to guide the AI development process. Their work provides guidance on the inputs and outputs for researchers and industry developers of deep learning software. These solvable problems (known as use cases) help identify and prioritize development so that AI tools can help radiologists provide the highest value for their patients.
With his background in informatics, Patti was well qualified to lead six panel members focusing on MSK radiology use cases, including the Lisfranc joint injury. Patti's MSK panel selected Lisfranc as an initial use case because the diagnosis is easy for radiologists to miss, yet relatively simple for an AI algorithm to address. Put another way, characterizing the Lisfranc joint as normal or abnormal is straightforward — as opposed to other use cases where findings are more subjective. The "black-and-white" nature of the condition makes Lisfranc an ideal problem for AI.
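As a rough illustration of the "normal or abnormal" framing, the toy sketch below trains a binary classifier with supervised learning. It is purely illustrative: the synthetic two-dimensional features stand in for image-derived features, and nothing here reflects an actual Lisfranc detection model.

```python
import numpy as np

# Toy illustration (not a real diagnostic model): supervised learning for a
# binary "normal vs. abnormal" label, the framing described for the Lisfranc
# use case. Real inputs would be annotated radiographs; here, synthetic 2-D
# feature vectors stand in for image-derived features.
rng = np.random.default_rng(0)

# Synthetic training data: class 0 ("normal") clustered near the origin,
# class 1 ("abnormal") shifted away from it.
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal(3.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Logistic regression fit by plain gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P("abnormal")
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

def classify(x):
    """Map a feature vector to a binary call: 0 = normal, 1 = abnormal."""
    return int(1.0 / (1.0 + np.exp(-(x @ w + b))) >= 0.5)

print(classify(np.array([0.0, 0.0])))  # near the "normal" cluster -> 0
print(classify(np.array([3.0, 3.0])))  # near the "abnormal" cluster -> 1
```

The appeal of a use case like Lisfranc is exactly this binary framing: the training labels and the output are unambiguous, which is what makes the problem tractable for supervised learning.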
The Lisfranc joint use case was included in the DSI's October release of approximately 40 use cases to AI developers. Several hundred more are expected over the next two years as part of the Technically Oriented Use Cases for Healthcare AI (TOUCH-AI) directory, exposing researchers, developers, and the health informatics industry to the problems in radiology and radiation oncology that radiology professionals believe are most amenable to AI solutions. The comprehensive collection of use cases — organized by body part, modality, and disease type — provides a narrative description, specifications for annotating images for training, and explicit parameters for integrating algorithm outputs into clinical workflows. Each use case is published in a machine-readable format that uses common data elements to foster interoperability.
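The kind of machine-readable structure described above can be pictured with a small sketch. Everything below (the field names, the common data element name, the values) is an illustrative assumption, not the DSI's actual TOUCH-AI schema.

```python
import json

# Hypothetical sketch of a machine-readable use case, loosely following the
# elements the TOUCH-AI directory is described as containing: a narrative,
# an annotation spec for training data, and output parameters tied to common
# data elements. All field names and values here are illustrative assumptions.
use_case = {
    "name": "Lisfranc Joint Injury",
    "body_part": "foot",
    "modality": "radiography",
    "narrative": "Detect malalignment of the Lisfranc joint on foot radiographs.",
    "annotation_spec": {           # what labelers mark on training images
        "label_type": "binary",
        "classes": ["normal", "abnormal"],
    },
    "output": {                    # how results enter the clinical workflow
        "common_data_element": "LisfrancJointStatus",  # illustrative CDE name
        "values": ["normal", "abnormal"],
        "destination": "radiology report",
    },
}

print(json.dumps(use_case, indent=2))
```

Encoding a use case this way is what lets different developers build against the same inputs and outputs: any algorithm emitting the same common data element can plug into the same reporting workflow.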
A number of AI algorithms for diagnostic imaging have already received FDA clearance for widespread marketing, but for the most part these have been developed in conjunction with radiologists at single institutions. Building AI solutions around well-defined use cases allows participation by multiple institutions and provides more diversity in algorithm training and testing, which could significantly improve algorithm accuracy and diminish unintended bias. As algorithm development ramps up over the next several years, there will be an ongoing need for radiologists to define and specify more AI use cases, and the DSI use case project is one way for radiologists to become involved in the AI world. Additionally, radiologists can partner with industry developers by annotating cases based on the DSI use cases.
“There are many talented computer scientists out there,” says Christopher T. Whitlow, MD, PhD, MHA, chief of neuroradiology and vice chair of informatics at Wake Forest School of Medicine and co-chair of the DSI’s neuroradiology panel. “Many enthusiastic developers with great ideas are eager to jump in and help clinicians.” According to Whitlow, partnering with physician groups is an ideal solution for developers who are interested in solving clinical dilemmas and improving workflows. With input from internal stakeholders, developers can establish the market for their products early.
Since deep learning requires large amounts of annotated and accessible data, Whitlow is sympathetic to the problems developers will continue to face, especially smaller ones with limited resources. At present, he says, “There is a growing understanding that data for developing AI can be a commodity. That’s making it harder for companies to acquire data to use for developing new AI algorithms.” Even if they determine a valuable problem to solve, developers will need huge amounts of accessible data for training their algorithms upfront and validating the output later.
Despite these challenges, Whitlow believes that within three to five years many small companies now developing AI tools will roll them out into the mainstream marketplace. In his field of neuroradiology, he expects to see tools for disease detection, image processing, and workflow. He anticipates many of these will be add-ons to existing tasks rather than stand-alone products, and available at an accessible price point. According to Whitlow, vendors are getting very close.
“This is the time to get involved,” adds Patti. “And we are expanding the palette developers have to work with, to create AI tools that support what we do as radiologists.”
By Alison Loughlin, senior communications specialist, ACR DSI.