AI-Powered Ultrasound Imaging Detects Breast Cancer

By MedImaging International staff writers
Posted on 14 Mar 2023

Breast cancer is the most commonly diagnosed cancer among women, and unlike most other major cancer types, its incidence has risen steadily over the past two decades. Early detection and treatment improve the probability of recovery; however, survival declines sharply, to less than 75%, once the disease progresses beyond the third stage. Regular medical check-ups are therefore critical for reducing mortality. Ultrasonography is a major medical imaging technique for assessing breast lesions, and computer-aided diagnosis (CAD) systems have aided radiologists by segmenting lesions and identifying features that distinguish benign from malignant ones. Now, a team of researchers has developed an AI network system for ultrasonography that accurately detects and diagnoses breast cancer.

A team of researchers from Pohang University of Science and Technology (POSTECH, Gyeongbuk, Korea) has developed a deep learning (DL)-based multimodal fusion network that segments breast lesions and classifies them as benign or malignant using both B-mode and strain elastography (SE-mode) ultrasound images. First, the team constructed a ‘weighted multimodal U-Net (W-MM-U-Net) model’ that segments lesions by assigning an optimal weight to each imaging modality through a weighted-skip connection method. The researchers also proposed a ‘multimodal fusion framework (MFF)’ that classifies cropped B-mode and SE-mode ultrasound (US) lesion images as benign or malignant.
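To illustrate the weighted-skip idea, here is a minimal PyTorch sketch of a skip connection that fuses features from two modality-specific encoders with learnable per-modality weights. The module and variable names are hypothetical; the authors' actual W-MM-U-Net architecture may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedSkipFusion(nn.Module):
    """Fuses B-mode and SE-mode encoder features with learnable weights
    (a sketch of one weighted skip connection, not the published model)."""
    def __init__(self):
        super().__init__()
        # One learnable logit per modality; softmax keeps the weights
        # positive and summing to 1, so the network can learn how much
        # each imaging modality should contribute at this level.
        self.logits = nn.Parameter(torch.zeros(2))

    def forward(self, feat_bmode, feat_semode):
        w = F.softmax(self.logits, dim=0)
        return w[0] * feat_bmode + w[1] * feat_semode

# Usage: fuse two encoder feature maps before passing them to the decoder.
fusion = WeightedSkipFusion()
f_b = torch.randn(1, 64, 56, 56)   # B-mode features at one encoder level
f_se = torch.randn(1, 64, 56, 56)  # SE-mode features at the same level
skip = fusion(f_b, f_se)           # weighted skip-connection input

In a full U-Net, one such fusion module would typically sit at each resolution level, letting the segmentation decoder weight the modalities differently at each scale.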


Image: An AI network system for ultrasonography accurately detects and diagnoses breast cancer (Photo courtesy of Pexels)

The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF can simultaneously learn complementary information from convolutional neural networks (CNNs) trained on B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model, and the DN classifies the images using those fused features. In experiments on clinical data, the method identified seven benign patients as benign in three out of five trials and six malignant patients as malignant in five out of five trials. This indicates that the proposed method outperforms conventional single-modal and multimodal methods and could improve radiologists' classification accuracy for breast cancer detection in ultrasound images.
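For intuition, here is a minimal PyTorch sketch of an EmbraceNet-style fusion step followed by a small decision network. EmbraceNet's core idea is to project each modality's features to a common size and then, for every feature index, randomly select which modality contributes that component. The dimensions, names, and the two-layer classifier are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class EmbraceFusion(nn.Module):
    """EmbraceNet-style fusion: dock each modality's features to a common
    size, then sample, per feature index, which modality supplies it."""
    def __init__(self, in_dims, embrace_dim=256):
        super().__init__()
        self.dock = nn.ModuleList([nn.Linear(d, embrace_dim) for d in in_dims])

    def forward(self, feats):  # feats: list of (B, d_i) tensors, one per modality
        docked = torch.stack([d(f) for d, f in zip(self.dock, feats)])  # (M, B, E)
        m, b, e = docked.shape
        # For each (batch, feature) position, pick one modality uniformly.
        probs = torch.full((b * e, m), 1.0 / m)
        choice = torch.multinomial(probs, 1).view(b, e)        # (B, E)
        mask = torch.zeros(m, b, e)
        mask.scatter_(0, choice.unsqueeze(0), 1.0)             # one-hot over modalities
        return (docked * mask).sum(dim=0)                      # fused (B, E)

# Decision network: a small classifier on the fused features (assumed form).
fusion = EmbraceFusion(in_dims=[512, 512])
decision = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

f_bmode = torch.randn(4, 512)   # CNN features from B-mode images
f_semode = torch.randn(4, 512)  # CNN features from SE-mode images
logits = decision(fusion([f_bmode, f_semode]))  # benign vs. malignant scores

The random per-feature selection forces the fused representation to remain informative even when one modality's features are weak or missing, which is what lets the two CNNs learn complementary information.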

“We were able to increase the accuracy of lesion segmentation by determining the importance of each input modal and automatically giving the proper weight,” explained Professor Chulhong Kim from POSTECH, who led the team of researchers. “We trained each deep learning model and the ensemble model at the same time to have a much better classification performance than the conventional single modal or other multimodal methods.”

Related Links:
POSTECH

