SRFAMap: A Method for Mapping Integrated Gradients of a CNN Trained with Statistical Radiomic Features to Medical Image Saliency Maps
Published in World Conference on Explainable Artificial Intelligence, 2024
Many explainable AI methods exist for generating medical image saliency maps, but most of them operate directly on trained neural network-based models. At the same time, many neural networks for medical image classification take as input radiomic features derived from the images. The computation of radiomic features is not always expressed by a differentiable function, which makes it impossible to apply existing saliency map methods to obtain input image pixel attributions, because they rely heavily on the ability to compute gradients. For this reason, a novel method (SRFAMap) is introduced to map statistical radiomic feature attributions, derived by applying the Integrated Gradients method often used in explainable AI, to image saliency maps. In detail, Integrated Gradients is used to compute radiomic feature attributions of chest X-ray scans for a ResNet-50 convolutional network model trained to distinguish healthy lungs from lungs with tuberculosis lesions. These attributions are subsequently mapped to saliency maps over the original scans to facilitate their interpretation for diagnostics. Findings show that, in most cases, SRFAMap can generate saliency maps with an acceptable level of faithfulness: the Increase-in-Confidence metric reached at least , while the Average Drop reached at most. Finally, the percentage of statistically significant target-class score increases reached at least on 20 random folds for the grey-level co-occurrence matrix and higher for other methods when only the pixels under positive saliency map values were revealed on the blurred image. The main contribution is a method for mapping Integrated Gradients attributions to saliency maps that facilitates the visual interpretation of the statistical radiomic features of lung X-ray scans responsible for discriminating types of lesions.
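As an illustration of the general workflow described above, the following Python sketch shows how Integrated Gradients attributions could be computed for a vector of radiomic features and then spread back over the image regions the features were extracted from. This is not the authors' implementation: `radiomic_model`, `features`, and `feature_regions` are hypothetical placeholders, and the projection step is only one plausible realization of the pixel-mapping idea, assuming each radiomic feature can be traced to a known set of pixels.

```python
# Minimal sketch (assumed names, not the SRFAMap code): attribute a classifier's
# prediction to its radiomic feature inputs with Captum's Integrated Gradients,
# then project the per-feature attributions onto the image as a saliency map.
import torch
from captum.attr import IntegratedGradients


def feature_attributions(radiomic_model, features, target_class):
    """Integrated Gradients attributions for a 1-D radiomic feature vector."""
    ig = IntegratedGradients(radiomic_model)
    baseline = torch.zeros_like(features)           # all-zero feature baseline
    attr = ig.attribute(features.unsqueeze(0),      # add batch dimension
                        baselines=baseline.unsqueeze(0),
                        target=target_class)
    return attr.squeeze(0)                          # shape: [num_features]


def saliency_from_attributions(attr, feature_regions, image_shape):
    """Spread each feature's attribution over the pixels it was computed from.

    `feature_regions[k]` is assumed to hold the (row, column) indices of the
    image region that produced feature k; overlapping regions are averaged.
    """
    saliency = torch.zeros(image_shape)
    counts = torch.zeros(image_shape)
    for value, (rows, cols) in zip(attr, feature_regions):
        saliency[rows, cols] += value
        counts[rows, cols] += 1
    return saliency / counts.clamp(min=1)
```

The resulting map can then be overlaid on the original X-ray scan (e.g. as a heatmap) so that positive attributions highlight the regions whose radiomic statistics pushed the model towards the target class.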
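For reference, the two faithfulness metrics named above are usually defined as follows in the saliency-map evaluation literature (e.g. as introduced for Grad-CAM++); the paper may use an equivalent formulation. Here $Y_i^{c}$ is the model's score for target class $c$ on the $i$-th original image and $O_i^{c}$ is its score when only the pixels highlighted by the explanation are revealed:

```latex
\text{Average Drop} \;=\; \frac{100}{N}\sum_{i=1}^{N}\frac{\max\bigl(0,\,Y_i^{c}-O_i^{c}\bigr)}{Y_i^{c}},
\qquad
\text{Increase in Confidence} \;=\; \frac{100}{N}\sum_{i=1}^{N}\mathbf{1}\bigl[\,Y_i^{c}<O_i^{c}\,\bigr].
```

Under these definitions, a lower Average Drop and a higher Increase in Confidence indicate a more faithful saliency map.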