Even before the COVID-19 pandemic, health care capacity in the United States was strained, with not enough medical professionals to meet growing demand.
In this context, technology can be a double-edged sword: It can save time, but it can also generate complex data that is difficult to analyze quickly. Ying Ding, a professor at the University of Texas at Austin (UT), is using artificial intelligence (AI) to help doctors get the most out of radiological data, with support from a 2020 Amazon Research Award.
Ding was originally trained as an information scientist at Nanyang Technological University in Singapore — not exactly a straight line to designing AI for health care.
“But it’s my personality to always want to try new things,” she said.
For over a decade as a professor and researcher at Indiana University, she studied the patterns of scholarly collaboration while developing the university’s online data science program. Using metadata and semantics, she designed methods to measure the impact of scientists and quantify their collaboration patterns via Google Scholar and Microsoft Academic Graph.
While still at Indiana, she co-founded Data2Discovery, a startup aimed at mining complex datasets for scientific breakthroughs. As the company’s chief science officer, she used semantic technologies to look for and predict associations among drugs, diseases, and genes, with the idea that big data could be used for drug target prediction and drug repurposing.
That interest led her directly to her next job as the Bill & Lewis Suit Professor in the School of Information at UT. When she joined, Eric Meyer, the school’s dean, told her to focus on AI solutions for health care.
In response, Ding built the AI Health Lab “from scratch.” Her team at the lab brings together scholars and students in fields ranging from neuroscience to machine learning to explore how AI can be used in medicine.
While building the lab, she began doing research at the university’s Dell Medical School, starting with a general focus on medical imaging.
“We have an increasing number of images, but we have a very severe shortage of radiologists,” explained Ding, who now has a co-appointment at Dell Medical School in the Department of Population Health. “So this is a good area to come up with a solution.”
Putting AI to work for radiologists
With a shortage of people in the field and more work as populations grow (not to mention increasing patient loads from the pandemic), both radiologists and physicians have been taxed. Ding wondered whether machine learning and computer vision might give them an assist.
She started by talking to Dell Medical School’s radiology staff and observing them at work.
“I observed how the radiologists were doing their daily jobs and how they worked with images,” she said. She found some areas where AI algorithms were already in use: In diagnostic image evaluations for skin cancer, for example, existing algorithms can be highly effective. But staff confidence was lower when it came to AI programs targeted at other diseases.
“They didn’t want AI to interfere with their diagnosis,” Ding said. Without the right way of introducing AI, doctors were likely to set it aside and rely on what they already knew. Ding knew that truly useful collaboration — where AI would augment human capabilities and assist human decisions — was what those busy doctors and radiologists needed.
“Everything works better with teamwork, right?” she said. “So I thought, ‘How can I put the doctor and AI together as a team, rather than competing with each other?’”
During her in-depth interviews with doctors and radiologists, Ding realized that the reason some AI programs hadn’t been more fully adopted was that they ignored existing human expertise. Many professionals had been doing this work for 20 years or more, and were skeptical about AI’s ability to diagnose diseases effectively.
Radiologists spend years learning to interpret scans based on nuances in light, texture, and shape. Since about 2012, they’ve done this with the assistance of radiomics, a method that uses advanced mathematical analysis to extract quantitative features from scans.
Ding started with human-generated radiomics data (including scans and their associated annotations) when designing her program. Her goal: combine expert experience in diagnosing disease from a scan with computer vision’s ability to characterize even finer detail than the human eye can see (smaller pixel levels and shadings).
To achieve this, Ding used contrastive learning in a supervised form of deep learning. Unlike many other deep-learning algorithms, her model is trained on actual chest x-ray images that have been verified and annotated by experts.
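In supervised contrastive learning, scans that experts gave the same label are pulled together in the model’s embedding space, while differently labeled scans are pushed apart. The article doesn’t publish Ding’s code, so the following is only a minimal SupCon-style sketch, written in PyTorch for brevity (the framework and every name here are illustrative; per the award section below, the project itself was built with Apache MXNet).

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """SupCon-style loss: scans sharing an expert label are positives.

    features: (N, D) embedding vectors from the image encoder.
    labels:   (N,) integer diagnosis labels assigned by experts.
    """
    features = F.normalize(features, dim=1)            # compare in cosine space
    sim = features @ features.T / temperature          # (N, N) similarity matrix
    n = features.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(eye, float("-inf"))          # ignore self-similarity
    # Positives: distinct pairs that carry the same expert-assigned label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability over each anchor's positives (zero if none).
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    loss = -pos_log_prob.sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()

# Toy check with random embeddings and made-up diagnosis labels.
feats = torch.randn(8, 128)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(supervised_contrastive_loss(feats, labels).item())
```

Minimizing this loss pushes the encoder to produce embeddings in which scans with the same expert diagnosis cluster together, which is what makes the downstream classification easier.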
This is how human-centered AI design happens. Machine learning in a vacuum will generate some useful information — but will also churn out a lot of information that isn’t useful, said Ding, which is unacceptable when it comes to health care. A doctor who has seen 300,000 images is the expert at detecting a disease on a scan, but a machine can pick up small details that might be imperceptible to a human.
“You take the best part of what the human knows and integrate it to develop a better deep learning algorithm that actually can achieve better downstream tasks like classification,” Ding said.
A time-saving diagnostic tool
In a simple example (and one she has published a paper on), Ding fed a chest x-ray of a sick person’s lung into the program along with the doctor’s diagnosis of pneumonia.
“We use radiomics as the positive sample and our other image as a negative sample. We try to integrate this kind of prior knowledge into it to develop supervised deep learning,” she said.
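As a rough illustration of that pairing, here is a hypothetical sketch in which the anchor scan’s radiomics representation serves as the positive sample and a scan carrying a different expert diagnosis serves as the negative; the record layout and field names are invented, not from the paper.

```python
import random

def make_triplets(scans, seed=0):
    """Build (anchor image, positive, negative) triplets from annotated scans.

    Each record is a hypothetical dict like
    {"image": ..., "radiomics": ..., "dx": "pneumonia"}. Following Ding's
    description, the anchor's radiomics representation is the positive and
    a scan with a different expert diagnosis is the negative.
    """
    rng = random.Random(seed)
    triplets = []
    for anchor in scans:
        negatives = [s for s in scans if s["dx"] != anchor["dx"]]
        if negatives:
            neg = rng.choice(negatives)
            triplets.append((anchor["image"], anchor["radiomics"], neg["image"]))
    return triplets

# Toy records standing in for real scans and radiomics features.
scans = [
    {"image": "xray_001", "radiomics": "feat_001", "dx": "pneumonia"},
    {"image": "xray_002", "radiomics": "feat_002", "dx": "normal"},
]
print(make_triplets(scans))
```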
Having worked to understand what radiology professionals really need, Ding started developing i-RadioDiagno, an open-source tool that generates draft diagnostic notes based on medical images.
The radiologist or doctor still reads a given scan, but the tool does a lot of the more time-consuming basic diagnostic labor first. That enables the person reading the scan to jump in with some of the work already done, speeding up the diagnosis process while still putting a human at the center of it.
“In the past, too many medical imaging programs relied only on AI. With i-RadioDiagno, the radiologist and AI work together, using feedback loops to improve accuracy,” said Ding. The program, which is still in the research phase, uses knowledge graphs, natural language processing, and computer vision to derive diagnoses.
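The article doesn’t describe how those three components fit together, so the toy skeleton below is purely illustrative (every function, mapping, and identifier is invented): computer vision proposes findings, a knowledge graph links findings to candidate diagnoses, and language generation drafts the note a radiologist then reviews, with corrections feeding back into training.

```python
# Toy stand-ins for the three components; none of these names or
# mappings come from the i-RadioDiagno project.
FINDING_TO_DX = {                       # toy "knowledge graph" edges
    "left lower lobe opacity": ["pneumonia", "atelectasis"],
    "cardiomegaly": ["heart failure"],
}

def detect_findings(scan_id):
    """Stub computer-vision step: pretend the model found one opacity."""
    return ["left lower lobe opacity"]

def rank_diagnoses(findings):
    """Stub knowledge-graph step: collect candidate diagnoses for findings."""
    return sorted({dx for f in findings for dx in FINDING_TO_DX.get(f, [])})

def draft_note(scan_id):
    """Stub language step: turn findings and candidates into a draft note."""
    findings = detect_findings(scan_id)
    candidates = rank_diagnoses(findings)
    return (f"Scan {scan_id}: findings {', '.join(findings)}; "
            f"candidate diagnoses for review: {', '.join(candidates)}.")

print(draft_note("CXR-001"))  # draft note a radiologist would then correct
```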
Amazon Research Award
The i-RadioDiagno program was built on Amazon SageMaker and Apache MXNet on Amazon Web Services (AWS). Ding connected early and often with the AWS contact at UT, Sylvia Herrera-Alaniz, who played a key role in linking her to resources for the project.
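For context, this is a minimal sketch of how an MXNet training job is typically launched with the SageMaker Python SDK; the entry-point script, IAM role, instance type, and S3 path are placeholders, not details from Ding’s project.

```python
from sagemaker.mxnet import MXNet

# Minimal SageMaker launch sketch; all values below are placeholders.
estimator = MXNet(
    entry_point="train_contrastive.py",   # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=1,
    instance_type="ml.p3.2xlarge",        # single-GPU training instance
    framework_version="1.8.0",
    py_version="py37",
)
# Point the job at a (hypothetical) S3 bucket of annotated chest x-rays.
estimator.fit({"training": "s3://my-bucket/chest-xrays/"})
```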
“When we sent her email, she was so responsive, and she was so easy to meet and easy to communicate with,” Ding said.
The AWS research award gave Ding 70,000 AWS computing credits and $20,000 in cash. She said the grant enabled her to work on this project throughout the pandemic, which she wouldn’t have been able to do otherwise.
Ding knows AI can be a powerful tool for a health care industry that, more than ever, needs support — but only if people are at the heart of the approach.
“It has to be human-centered,” she said, “a collaboration, to achieve both efficiency and accuracy for better care.”