
SALVATORE VITABILE

Rad4XCNN: A new agnostic method for post-hoc global explanation of CNN-derived features by means of Radiomics

  • Authors: Prinzi, Francesco; Militello, Carmelo; Zarcaro, Calogero; Bartolotta, Tommaso Vincenzo; Gaglio, Salvatore; Vitabile, Salvatore
  • Publication year: 2025
  • Type: Journal article
  • OA Link: http://hdl.handle.net/10447/673543

Abstract

Background and Objective: In recent years, machine learning-based clinical decision support systems (CDSSs) have played a key role in the analysis of several medical conditions. Despite their promising capabilities, the lack of transparency in AI models poses significant challenges, particularly in medical contexts where reliability is mandatory. Moreover, explainability often appears to be inversely related to accuracy. For this reason, achieving transparency without compromising predictive accuracy remains a key challenge.

Methods: This paper presents a novel method, Rad4XCNN, that combines the predictive power of CNN-derived features with the inherent interpretability of radiomic features. Rad4XCNN diverges from conventional saliency-map-based methods by associating intelligible meaning to CNN-derived features by means of Radiomics, offering new perspectives on explanation methods beyond visualization maps.

Results: Using a breast cancer classification task as a case study, we evaluated Rad4XCNN on ultrasound imaging datasets, including an online dataset and two in-house datasets for internal and external validation. Key results are: (i) CNN-derived features yield more robust accuracy than ViT-derived and radiomic features; (ii) conventional visualization-map methods for explanation present several pitfalls; (iii) Rad4XCNN does not sacrifice model accuracy for its explainability; (iv) Rad4XCNN provides a global explanation, enabling physicians to extract global insights and findings.

Conclusions: Our method can mitigate some concerns related to the explainability-accuracy trade-off. This study highlights the importance of proposing new explanation methods that do not affect model accuracy.
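Illustrative sketch (not from the paper): the abstract does not specify the mechanism by which Rad4XCNN associates radiomic meaning to CNN-derived features, so the snippet below only assumes, for illustration, a simple correlation-based association computed over a cohort of images. All names (associate_features, X_cnn, X_rad, radiomic_*) are hypothetical; this is a minimal Python sketch of the general idea of a global, dataset-level mapping from deep features to intelligible radiomic features, not the authors' implementation.

    # Hedged sketch: ASSUMES a correlation-based association between CNN-derived
    # and radiomic features; the actual Rad4XCNN mechanism is not described here.
    import numpy as np
    from scipy.stats import spearmanr

    def associate_features(cnn_feats: np.ndarray,
                           rad_feats: np.ndarray,
                           rad_names: list,
                           top_k: int = 3) -> dict:
        """Map each CNN-derived feature (column of cnn_feats) to the top_k
        radiomic features (columns of rad_feats) with the highest absolute
        Spearman correlation across the same set of images."""
        mapping = {}
        for i in range(cnn_feats.shape[1]):
            scores = []
            for j in range(rad_feats.shape[1]):
                rho, _ = spearmanr(cnn_feats[:, i], rad_feats[:, j])
                scores.append((rad_names[j], float(rho)))
            scores.sort(key=lambda t: abs(t[1]), reverse=True)
            mapping[i] = scores[:top_k]  # global, dataset-level explanation
        return mapping

    if __name__ == "__main__":
        # Hypothetical data: 200 images, 64 CNN features, 10 radiomic features
        rng = np.random.default_rng(0)
        X_cnn = rng.normal(size=(200, 64))
        X_rad = rng.normal(size=(200, 10))
        names = [f"radiomic_{k}" for k in range(10)]
        explanation = associate_features(X_cnn, X_rad, names)
        print(explanation[0])  # radiomic features most associated with CNN feature 0

Such a dataset-level table is one way to realize a "global" explanation in the sense used above: it summarizes what each learned feature tends to measure across the cohort, rather than highlighting image regions as saliency maps do.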