A Novel Artificial Intelligence-Based Hybrid System to Improve Breast Cancer Detection Using DCE-MRI



Akgül İ., Kaya V., Karavaş E., Aydın S., Baran A.

BULLETIN OF THE POLISH ACADEMY OF SCIENCES. TECHNICAL SCIENCES, vol.72, no.3, pp.1-11, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 72 Issue: 3
  • Publication Date: 2024
  • DOI: 10.24425/bpasts.2024.149172
  • Journal Name: BULLETIN OF THE POLISH ACADEMY OF SCIENCES. TECHNICAL SCIENCES
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Aerospace Database, Communication Abstracts, Compendex, Metadex, zbMATH, Directory of Open Access Journals, Civil Engineering Abstracts
  • Page Numbers: pp.1-11
  • Affiliated with Erzincan Binali Yıldırım University: Yes

Abstract

The interpretation of breast magnetic resonance imaging (MRI) in healthcare depends on the knowledge and experience of radiologists. Recent developments in artificial intelligence (AI) have advanced the field of radiology; however, the desired performance levels have not yet been reached. In this study, a novel model structure is proposed to characterize the diagnostic performance of AI technology for individual breast dynamic contrast material-enhanced (DCE) MRI sequences. In the proposed structure, the Inception-v3, EfficientNet-B3, and DenseNet-201 models were used in hybrid form together with the Yolo-v3 algorithm to detect breast and cancer regions. The DCE-MRI sequences (T2, ADC, Diffusion, Non-Contrast Fat Non-Suppressed T1, Non-Contrast Fat Suppressed T1, Contrast Fat Suppressed T1, and Subtraction T1) were evaluated and validated separately, providing a unique perspective. According to the validation results, the best-performing model structure was Yolo-v3 + DenseNet-201, which achieved 92.41% accuracy, 0.5936 loss, 92.44% sensitivity, and 92.44% specificity. In addition, without the use of contrast material, the best model achieved 91.53% accuracy, 0.9646 loss, 92.19% sensitivity, and 92.19% specificity. Therefore, it is predicted that the need for contrast material can be reduced with the help of this model structure.
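
The abstract describes a two-stage hybrid pipeline: a Yolo-v3 detector localizes the breast/lesion region, and a CNN backbone (DenseNet-201 in the best configuration) classifies the detected region. The sketch below illustrates that detect-then-classify flow only; it is not the authors' implementation. The detector is replaced by a hypothetical placeholder function, and the 2-class DenseNet-201 head, PyTorch/torchvision usage, and preprocessing values are assumptions for illustration.

    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    def detect_breast_region(image: Image.Image) -> tuple[int, int, int, int]:
        """Hypothetical stand-in for the Yolo-v3 detection stage.
        A real pipeline would load trained Yolo-v3 weights and return the
        highest-confidence breast/lesion bounding box (left, upper, right, lower)."""
        w, h = image.size
        return (w // 4, h // 4, 3 * w // 4, 3 * h // 4)  # central crop as placeholder

    # DenseNet-201 backbone with an assumed binary head (e.g. benign vs. malignant).
    classifier = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
    classifier.classifier = nn.Linear(classifier.classifier.in_features, 2)
    classifier.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def classify_sequence_slice(image: Image.Image) -> torch.Tensor:
        """Crop the detected region of one MRI sequence slice, then classify it.
        Returns class probabilities from the DenseNet-201 head."""
        box = detect_breast_region(image)
        crop = image.crop(box).convert("RGB")
        with torch.no_grad():
            logits = classifier(preprocess(crop).unsqueeze(0))
        return torch.softmax(logits, dim=1).squeeze(0)

In this reading of the abstract, each DCE-MRI sequence (T2, ADC, Diffusion, the T1 variants) would be passed through such a pipeline independently, which is what allows the contrast-enhanced and non-contrast sequences to be compared separately.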