AI Model Draws Maps to Accurately Identify Tumors and Diseases in Medical Images

By MedImaging International staff writers
Posted on 05 Mar 2024

The interpretation of medical images varies across different regions of the world, particularly in developing countries where doctor shortages and long patient queues are common. Artificial intelligence (AI) has emerged as a valuable aid in these settings. Automated AI screening can act as a supportive tool for doctors, pre-scanning images and flagging unusual findings, such as tumors or early disease indicators (biomarkers), for further medical review. This approach not only saves time but can also improve diagnostic accuracy. Traditional AI models, however, cannot explain their findings; they merely indicate whether a tumor is present or absent.

Now, researchers at the Beckman Institute for Advanced Science and Technology (Urbana, IL, USA) have developed an innovative AI model that not only detects anomalies but also explains each decision it makes. This model, unlike standard AI tools, provides interpretive feedback rather than just identifying tumors. Conventionally, AI models assisting doctors are trained with numerous medical images, some showing abnormalities and others normal. These models, upon encountering a new image, assign a probability score indicating the likelihood of a tumor being present.
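For reference, the probability-only output of a conventional classifier can be sketched as a simple logistic scoring head. This is an illustration, not the study's code: the feature values and weights below are invented, whereas real systems learn them from thousands of images.

```python
import numpy as np

# Minimal sketch (not the study's model): a conventional classifier head
# reduces an image to a single tumor probability, with no explanation of
# which parts of the image drove the number.
def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def tumor_probability(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    # One opaque number: the estimated likelihood that a tumor is present.
    return sigmoid(float(features @ weights) + bias)

feats = np.array([0.8, 0.1, 0.4])   # hypothetical extracted image features
w = np.array([2.0, -1.0, 0.5])      # hypothetical learned weights
print(round(tumor_probability(feats, w, -0.5), 3))
```

The score alone gives a doctor nothing to inspect, which is exactly the gap the new model addresses.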


Image: The new AI model draws treasure maps to diagnose disease (Photo courtesy of 123RF)

This novel AI model goes a step further by offering a visual explanation for its decision-making process through what's known as an "equivalency map" (E-map). This E-map transforms the original medical image, such as an X-ray or mammogram, assigning values to different regions based on their medical significance in predicting anomalies. The model aggregates these values to derive a final diagnostic score. This transparent approach allows doctors to see which areas of the map contributed more significantly to the diagnosis and to investigate these regions more closely, enhancing understanding and answering patient inquiries about the diagnostic process.
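The aggregation idea described above, per-region contribution values summed into one diagnostic score, can be sketched in a few lines. This is a toy illustration under assumed rules (brightness-based scoring on 4x4 blocks), not the study's actual deep network:

```python
import numpy as np

# Illustrative sketch only: a hypothetical per-region scoring that shows
# the E-map idea. Every region of the image gets a contribution value,
# and the values sum to the final diagnostic score.
def equivalency_map(image: np.ndarray, block: int = 4) -> np.ndarray:
    """Assign a contribution value to each block x block region.

    Hypothetical rule: regions brighter than the image-wide mean
    contribute positively (a stand-in for 'suspicious' tissue).
    """
    h, w = image.shape
    emap = np.zeros((h // block, w // block))
    baseline = image.mean()
    for i in range(h // block):
        for j in range(w // block):
            region = image[i * block:(i + 1) * block,
                           j * block:(j + 1) * block]
            emap[i, j] = max(region.mean() - baseline, 0.0)
    return emap

def diagnostic_score(emap: np.ndarray) -> float:
    # The score is just the sum of region contributions, so each region's
    # share of the decision can be read directly off the map.
    return float(emap.sum())

rng = np.random.default_rng(0)
img = rng.normal(0.3, 0.05, size=(16, 16))  # synthetic "scan"
img[4:8, 4:8] += 0.5                        # simulated bright anomaly
emap = equivalency_map(img)
print(diagnostic_score(emap))               # dominated by the bright patch
```

Because the score decomposes additively, a reviewer could locate the highest-valued region (here via `emap.argmax()`) and examine exactly that part of the image, which is the transparency property the E-map provides.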

The research team trained the model on more than 20,000 images across three disease diagnostic tasks: identifying early signs of tumors in simulated mammograms, detecting drusen buildup in retinal OCT images indicative of macular degeneration, and recognizing cardiomegaly in chest X-rays. Compared to traditional AI systems without self-explanation capabilities, the new model achieved comparable accuracy: 77.8% on mammograms, 99.1% on retinal OCT images, and 83% on chest X-rays. The model's success, despite being a deep neural network that mimics the interconnected complexity of human neurons, is attributed to a design inspired by simpler, more interpretable linear neural networks. The researchers aim to extend the model to other parts of the body and, in future developments, to distinguish between different types of anomalies.

“The idea is to help catch cancer and disease in its earliest stages — like an X on a map — and understand how the decision was made. Our model will help streamline that process and make it easier on doctors and patients alike,” said Sourya Sengupta, the study’s lead author and a graduate research assistant at the Beckman Institute.

“I am excited about our tool’s direct benefit to society, not only in terms of improving disease diagnoses, but also improving trust and transparency between doctors and patients,” added principal investigator Mark Anastasio, a Beckman Institute researcher and the Donald Biggar Willet Professor and Head of the Illinois Department of Bioengineering.

Related Links:
Beckman Institute 

