
 RFS news highlights resources, issues, and news relevant to in-training members of the ACR. If you have a topic idea or would like to contribute to the blog, please email RFS Secretary Nathan Coleman, MD.

Discovering AI

Machine learning applications are on the rise in just about every field of healthcare, and the changes they signal have some specialists, including radiologists, speculating about how the ever-improving technology may reshape their place in the landscape.


AI means vastly different things to different people. Some are excited about what it could mean for improvements in workflow, the so-called noninterpretive uses of AI. Others are wary that developing programs to interpret images will eventually lead to the radiologic equivalent of Skynet, where machines control everything and human radiologists are subservient.

It’s difficult to know what’s in store, but here’s a quick overview of what some of the key phrases in AI mean, as well as what some of the experts in the field have been saying. AI is an umbrella term for programs that can evaluate data beyond the parameters they were explicitly given. When your camera recognizes your face as you prepare to take a picture, that’s an example of AI: the camera was never provided a picture of your face in that exact lighting, orientation, or facial expression, yet it can still deduce that your face is your face.

Machine learning and deep learning are two methods by which AI can extend beyond its initial programming. Both require a large amount of human-annotated data (think of being asked to prove you’re not a robot by choosing which images contain a stop sign). The data must be split into a training portion and a testing portion. During the training phase, the program is shown images labeled with where the pathology is and what it is. It sees many different examples of the chosen pathology and analyzes each image to determine which imaging features may be predictive of it.
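To make the train/test split concrete, here is a minimal sketch in Python using scikit-learn. The arrays, labels, and numbers are hypothetical stand-ins rather than a real radiology data set.

import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in for 1,000 human-annotated radiographs: each row is a flattened
# image, and each label is a human reader's annotation (1 = pathology present).
rng = np.random.default_rng(seed=0)
images = rng.random((1000, 64 * 64))
labels = rng.integers(0, 2, size=1000)

# Hold back 20% of the annotated data as the testing portion, so the program
# is later judged on images it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=42
)
print(len(X_train), "training images,", len(X_test), "testing images")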

During the testing (or validation) phase, the program then sees new images that do not explicitly tell it where the pathology lies. The program uses all the information it accumulated from the training images and makes a best guess about what is present in each test image. A familiar example is any of the numerous ‘dog vs. X’ memes on the Internet, where X is any number of things, mostly food items such as bagels, chocolate chip muffins, or fried chicken. Based on the images provided in the training set, a photo of a bagel may look identical to a dog curled up and sleeping peacefully. This is why the testing/validation phase is key: we see how well the program interprets these new images, again all labeled by humans, to determine whether it is ready for use or needs additional training.
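As a rough end-to-end illustration of both phases, the sketch below repeats the split from the previous sketch, trains a simple classifier on the toy ‘training portion,’ and then scores its guesses on the held-out ‘testing portion.’ Every array and number is a made-up placeholder, and a real imaging model would use a deep network rather than logistic regression.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy stand-ins for human-labeled images (say, dog vs. bagel).
rng = np.random.default_rng(seed=0)
images = rng.random((1000, 64 * 64))
labels = rng.integers(0, 2, size=1000)   # 1 = dog, 0 = bagel
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=42
)

# Training phase: the model analyzes the labeled examples to learn which
# features seem predictive of each class.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Testing/validation phase: the model makes its best guess on images it has
# never seen, and we compare those guesses against the human labels to decide
# whether it is ready for use or needs more training.
predictions = model.predict(X_test)
print("Accuracy on held-out images:", accuracy_score(y_test, predictions))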

[Image: ‘dog vs. bagel’ meme]

The biggest myth about AI is that radiologists will soon be replaced by machines. But the truth is that the systems in current publications extolled as diagnosing ‘better than a radiologist’ are incredibly limited in what they can do. A program developed to assess for pneumonia does just that: it assesses for the presence or absence of pneumonia. That same program cannot also comment on mediastinal contours, cardiac size, vascular congestion, bronchiectasis, pulmonary nodules, or pneumothorax. It cannot identify incidental findings, such as soft tissue gas, foreign bodies, or a hiatal hernia.

This is the difference between what is termed broad AI and narrow AI. The case described here is one of narrow AI. Yes, it can diagnose pneumonia, perhaps even better than some radiologists, but it lacks the fundamental capacity to do anything greater. Several different programs may be developed, each able to assess a single condition better than the average radiologist, but none able to fill the role of another program or synthesize a narrative from multiple data streams. So perhaps we can use AI #1 to determine the presence of biliary dilatation and AI #2 to find gallstones, but we cannot simply combine them and have them understand that the two findings together suggest obstructive choledocholithiasis.
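To see why the two narrow programs cannot ‘understand’ the combined picture on their own, consider this hypothetical sketch. The findings come from imaginary single-task models, and the synthesis into a suspected diagnosis happens only because a human explicitly wrote that rule.

from dataclasses import dataclass

@dataclass
class NarrowFinding:
    """Output of one imaginary single-task ('narrow AI') program."""
    name: str
    present: bool
    confidence: float

# Pretend these came from two separately trained, single-purpose programs.
ai_1_output = NarrowFinding("biliary dilatation", present=True, confidence=0.91)
ai_2_output = NarrowFinding("gallstones", present=True, confidence=0.87)

# Neither program knows the other exists. Combining the findings into a
# diagnosis is a hand-written rule, not something the models understand.
if ai_1_output.present and ai_2_output.present:
    print("Biliary dilatation plus gallstones: consider obstructive "
          "choledocholithiasis (an inference a human had to hard-code).")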

Beyond broad versus narrow AI, another limitation tends to be generalizability. The image data sets are generally drawn from a single center, which means that a program developed in Berkeley, California, may not work as well for the population of Indianapolis, Indiana. While nodules in the former may represent coccidioidomycosis, in the latter they will almost always be from histoplasmosis. So the training and testing populations from which the images are gathered are important, and a one-size-fits-all AI program is far from reality.
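The usual check for this is external validation: train on data from one site, then measure performance both on held-out data from that same site and on data from a different site. The sketch below shows only the workflow; the arrays are random stand-ins, so the printed numbers mean nothing, and the ‘sites’ are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=1)

def toy_site(n_patients, shift):
    # Stand-in for one institution's labeled images; `shift` loosely mimics a
    # different patient mix, scanner fleet, or disease prevalence.
    X = rng.random((n_patients, 50)) + shift
    y = rng.integers(0, 2, size=n_patients)
    return X, y

X_site_a, y_site_a = toy_site(800, shift=0.0)   # training institution
X_site_b, y_site_b = toy_site(200, shift=0.5)   # external institution

# Train only on the first 600 patients from site A.
model = LogisticRegression(max_iter=1000).fit(X_site_a[:600], y_site_a[:600])

# Internal validation (same site) vs. external validation (new site).
internal_auc = roc_auc_score(y_site_a[600:], model.predict_proba(X_site_a[600:])[:, 1])
external_auc = roc_auc_score(y_site_b, model.predict_proba(X_site_b)[:, 1])
print(f"Internal AUC: {internal_auc:.2f}  External AUC: {external_auc:.2f}")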

So what are our best use cases, meaning the ways in which we can actually put AI to work in radiology? Commercial vendors already offer tools that quantify data in imaging sets far more quickly than a radiologist, who would otherwise likely have to open a secondary program and manually annotate and calculate the data. But there are plenty of other ways we could use AI to reduce the tedious aspects of being a radiologist: helping automate imaging protocols, optimizing scheduling and use of resources, and triaging studies in a reading list so that life-threatening findings can be identified faster.
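As one hypothetical example of the triage idea, a reading list could be re-ordered by a model’s score for a critical finding so the most worrisome studies are opened first. The accession numbers, scores, and field names below are invented purely for illustration.

# A made-up worklist: each study carries a hypothetical AI score for a
# critical finding (say, intracranial hemorrhage on a head CT).
worklist = [
    {"accession": "A1001", "exam": "CT head", "ai_critical_score": 0.03},
    {"accession": "A1002", "exam": "CT head", "ai_critical_score": 0.91},
    {"accession": "A1003", "exam": "CT head", "ai_critical_score": 0.47},
    {"accession": "A1004", "exam": "CT head", "ai_critical_score": 0.12},
]

# Triage: read the highest-scoring (most worrisome) studies first instead of
# working through the list in strict arrival order.
for study in sorted(worklist, key=lambda s: s["ai_critical_score"], reverse=True):
    print(study["accession"], study["exam"], study["ai_critical_score"])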


Our best bet as to what AI will bring to radiology is an opportunity to reinvent ourselves, to engage more deeply in the clinical care of patients, and to become a more integral part of the medical team. AI will bring us the opportunity to provide more value. And after all, hasn’t that been our goal all along?

For more information about AI, consider the following resources:



By Lindsey Shea, MD, a diagnostic radiology resident at Indiana University.
