13.6.6 Explainable Artificial Intelligence

Chapter Contents (Back)
Explainable. Knowledge. Applied to CNNs especially:
See also CNN Interpretation, Explanation, Understanding of Convolutional Neural Networks.

Wellman, M.P., Henrion, M.,
Explaining 'explaining away',
PAMI(15), No. 3, March 1993, pp. 287-292.
IEEE DOI 0401
BibRef

Montavon, G.[Grégoire], Lapuschkin, S.[Sebastian], Binder, A.[Alexander], Samek, W.[Wojciech], Müller, K.R.[Klaus-Robert],
Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition,
PR(65), No. 1, 2017, pp. 211-222.
Elsevier DOI 1702
Award, Pattern Recognition. Deep neural networks BibRef

Lapuschkin, S., Binder, A., Montavon, G.[Grégoire], Müller, K.R.[Klaus-Robert], Samek, W.[Wojciech],
Analyzing Classifiers: Fisher Vectors and Deep Neural Networks,
CVPR16(2912-2920)
IEEE DOI 1612
BibRef

Jung, A., Nardelli, P.H.J.,
An Information-Theoretic Approach to Personalized Explainable Machine Learning,
SPLetters(27), 2020, pp. 825-829.
IEEE DOI 2006
Predictive models, Data models, Probabilistic logic, Machine learning, Decision making, Linear regression, decision support systems BibRef

Muñoz-Romero, S.[Sergio], Gorostiaga, A.[Arantza], Soguero-Ruiz, C.[Cristina], Mora-Jiménez, I.[Inmaculada], Rojo-Álvarez, J.L.[José Luis],
Informative variable identifier: Expanding interpretability in feature selection,
PR(98), 2020, pp. 107077.
Elsevier DOI 1911
Feature selection, Interpretability, Explainable machine learning, Resampling, Classification BibRef

Kauffmann, J.[Jacob], Müller, K.R.[Klaus-Robert], Montavon, G.[Grégoire],
Towards explaining anomalies: A deep Taylor decomposition of one-class models,
PR(101), 2020, pp. 107198.
Elsevier DOI 2003
Outlier detection, Explainable machine learning, Deep Taylor decomposition, Kernel machines, Unsupervised learning BibRef

Yeom, S.K.[Seul-Ki], Seegerer, P.[Philipp], Lapuschkin, S.[Sebastian], Binder, A.[Alexander], Wiedemann, S.[Simon], Müller, K.R.[Klaus-Robert], Samek, W.[Wojciech],
Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning,
PR(115), 2021, pp. 107899.
Elsevier DOI 2104
Pruning, Layer-wise relevance propagation (LRP), Convolutional neural network (CNN), Interpretation of models, Explainable AI (XAI) BibRef
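Several entries here build on layer-wise relevance propagation (LRP), which redistributes an output score backwards through the network layer by layer. A minimal numpy sketch of the epsilon rule for dense layers — the toy two-layer ReLU network and all names are illustrative, not the papers' code:

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance, eps=1e-6):
    """Redistribute output relevance to a dense layer's inputs (epsilon rule)."""
    z = activations @ weights                      # pre-activations, shape (out,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # stabilise near-zero terms
    s = relevance / z                              # per-output relevance density
    return activations * (weights @ s)             # input relevance, shape (in,)

# Toy two-layer ReLU network: x -> h -> y
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
x = rng.normal(size=4)
h = np.maximum(0.0, x @ W1)
y = h @ W2

# Start from the class-0 logit and propagate relevance back to the input
R_h = lrp_epsilon(W2, h, np.array([y[0], 0.0]))
R_x = lrp_epsilon(W1, x, R_h)
```

The epsilon rule approximately conserves relevance, so `R_x.sum()` stays close to the explained logit `y[0]`; LRP-based criteria such as the pruning score above rank units by the relevance they receive.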

Pierrard, R.[Régis], Poli, J.P.[Jean-Philippe], Hudelot, C.[Céline],
Spatial relation learning for explainable image classification and annotation in critical applications,
AI(292), 2021, pp. 103434.
Elsevier DOI 2102
Explainable artificial intelligence, Relation learning, Fuzzy logic BibRef

Langer, M.[Markus], Oster, D.[Daniel], Speith, T.[Timo], Hermanns, H.[Holger], Kästner, L.[Lena], Schmidt, E.[Eva], Sesing, A.[Andreas], Baum, K.[Kevin],
What do we want from Explainable Artificial Intelligence (XAI)?: A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research,
AI(296), 2021, pp. 103473.
Elsevier DOI 2106
Explainable Artificial Intelligence, Explainability, Interpretability, Explanations, Understanding, Human-Computer Interaction BibRef

Rio-Torto, I.[Isabel], Fernandes, K.[Kelwin], Teixeira, L.F.[Luís F.],
Understanding the decisions of CNNs: An in-model approach,
PRL(133), 2020, pp. 373-380.
Elsevier DOI 2005
Explainable AI, Explainability, Interpretability, Deep Learning, Convolutional Neural Networks BibRef

Mokoena, T.[Tshepiso], Celik, T.[Turgay], Marivate, V.[Vukosi],
Why is this an anomaly? Explaining anomalies using sequential explanations,
PR(121), 2022, pp. 108227.
Elsevier DOI 2109
Outlier explanation, Sequential feature explanation, Sequential explanation, Anomaly validation, Explainable AI BibRef

Anjomshoae, S.[Sule], Omeiza, D.[Daniel], Jiang, L.[Lili],
Context-based image explanations for deep neural networks,
IVC(116), 2021, pp. 104310.
Elsevier DOI 2112
DNNs, Explainable AI, Contextual importance, Visual explanations BibRef

Sattarzadeh, S.[Sam], Sudhakar, M.[Mahesh], Plataniotis, K.N.[Konstantinos N.],
SVEA: A Small-scale Benchmark for Validating the Usability of Post-hoc Explainable AI Solutions in Image and Signal Recognition,
HTCV21(4141-4150)
IEEE DOI 2112
Performance evaluation, Visualization, Image recognition, Correlation, Machine learning, Benchmark testing BibRef

Zhang, Q.S.[Quan-Shi], Cheng, X.[Xu], Chen, Y.[Yilan], Rao, Z.F.[Zhe-Fan],
Quantifying the Knowledge in a DNN to Explain Knowledge Distillation for Classification,
PAMI(45), No. 4, April 2023, pp. 5099-5113.
IEEE DOI 2303
Knowledge engineering, Task analysis, Measurement, Optimization, Feature extraction, Birds, Visualization, Knowledge distillation, knowledge points BibRef

Teneggi, J.[Jacopo], Luster, A.[Alexandre], Sulam, J.[Jeremias],
Fast Hierarchical Games for Image Explanations,
PAMI(45), No. 4, April 2023, pp. 4494-4503.
IEEE DOI 2303
Games, Computational modeling, Neural networks, Tumors, Task analysis, Supervised learning, Standards, image explanations BibRef
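Game-theoretic explanations like the hierarchical games above assign each feature its average marginal contribution across coalitions (its Shapley value). A minimal permutation-sampling sketch — the additive toy game is illustrative, and the paper's hierarchical speed-ups are not reproduced:

```python
import numpy as np

def shapley_values(value_fn, d, n_perm=200, seed=0):
    """Monte-Carlo Shapley estimate: average marginal contribution of each
    of d players over randomly ordered coalition build-ups."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(d)
    for _ in range(n_perm):
        mask = np.zeros(d, dtype=bool)
        v_prev = value_fn(mask)
        for i in rng.permutation(d):
            mask[i] = True
            v_new = value_fn(mask)
            phi[i] += v_new - v_prev          # marginal contribution of player i
            v_prev = v_new
    return phi / n_perm

# Additive toy game: the estimate recovers the weights exactly
weights = np.array([1.0, 2.0, 3.0])
phi = shapley_values(lambda m: float(m @ weights), d=3)
```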

Chattopadhyay, A.[Aditya], Slocum, S.[Stewart], Haeffele, B.D.[Benjamin D.], Vidal, R.[René], Geman, D.[Donald],
Interpretable by Design: Learning Predictors by Composing Interpretable Queries,
PAMI(45), No. 6, June 2023, pp. 7430-7443.
IEEE DOI 2305
Birds, Task analysis, Predictive models, Image color analysis, Computational modeling, Vegetation, Shape, Explainable AI, information theory BibRef

Bandyapadhyay, S.[Sayan], Fomin, F.V.[Fedor V.], Golovach, P.A.[Petr A.], Lochet, W.[William], Purohit, N.[Nidhi], Simonov, K.[Kirill],
How to find a good explanation for clustering?,
AI(322), 2023, pp. 103948.
Elsevier DOI 2308
Explainable clustering, Clustering with outliers, Multivariate analysis BibRef

Chen, H.F.[Hai-Fei], Yang, L.P.[Li-Ping], Wu, Q.S.[Qiu-Sheng],
Enhancing Land Cover Mapping and Monitoring: An Interactive and Explainable Machine Learning Approach Using Google Earth Engine,
RS(15), No. 18, 2023, pp. 4585.
DOI Link 2310
BibRef

Jiao, L.M.[Lian-Meng], Yang, H.Y.[Hao-Yu], Wang, F.[Feng], Liu, Z.G.[Zhun-Ga], Pan, Q.[Quan],
DTEC: Decision tree-based evidential clustering for interpretable partition of uncertain data,
PR(144), 2023, pp. 109846.
Elsevier DOI 2310
Evidential clustering, Interpretable clustering, Unsupervised decision tree, Belief function theory BibRef

Roussel, C.[Cédric], Böhm, K.[Klaus],
Geospatial XAI: A Review,
IJGI(12), No. 9, 2023, pp. 355.
DOI Link 2310
BibRef

Patricio, C.[Cristiano], Neves, J.C.[Joao C.], Teixeira, L.F.[Luis F.],
Explainable Deep Learning Methods in Medical Image Classification: A Survey,
Surveys(56), No. 4, October 2023, pp. xx-yy.
DOI Link 2312
Survey, Explainable AI. Interpretability, explainability, deep learning, medical image analysis BibRef

Delaney, E.[Eoin], Pakrashi, A.[Arjun], Greene, D.[Derek], Keane, M.T.[Mark T.],
Counterfactual explanations for misclassified images: How human and machine explanations differ,
AI(324), 2023, pp. 103995.
Elsevier DOI 2312
XAI, Counterfactual explanation, User testing BibRef

Posada-Moreno, A.F.[Andrés Felipe], Surya, N.[Nikita], Trimpe, S.[Sebastian],
ECLAD: Extracting Concepts with Local Aggregated Descriptors,
PR(147), 2024, pp. 110146.
Elsevier DOI 2312
Concept extraction, Explainable artificial intelligence, Convolutional neural networks BibRef

Liu, P.[Peng], Wang, L.[Lizhe], Li, J.[Jun],
Unlocking the Potential of Explainable Artificial Intelligence in Remote Sensing Big Data,
RS(15), No. 23, 2023, pp. 5448.
DOI Link 2312
BibRef

Yang, Y.Q.[Ya-Qi], Zhao, Y.[Yang], Cheng, Y.[Yuan],
PRIME: Posterior Reconstruction of the Input for Model Explanations,
PRL(176), 2023, pp. 202-208.
Elsevier DOI 2312
Machine learning, Statistical inference, Classification, Model explainability BibRef

Wang, Z.[Zhuo], Zhang, W.[Wei], Liu, N.[Ning], Wang, J.Y.[Jian-Yong],
Learning Interpretable Rules for Scalable Data Representation and Classification,
PAMI(46), No. 2, February 2024, pp. 1121-1133.
IEEE DOI 2401
Interpretable classification, representation learning, rule-based model, scalability BibRef

Rong, Y.[Yao], Leemann, T.[Tobias], Nguyen, T.T.[Thai-Trang], Fiedler, L.[Lisa], Qian, P.Z.[Pei-Zhu], Unhelkar, V.[Vaibhav], Seidel, T.[Tina], Kasneci, G.[Gjergji], Kasneci, E.[Enkelejda],
Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations,
PAMI(46), No. 4, April 2024, pp. 2104-2122.
IEEE DOI 2403
Artificial intelligence, Task analysis, Human computer interaction, Surveys, Bibliographies, Usability, human-AI interaction BibRef

Vandersmissen, B.[Benjamin], Oramas, J.[José],
On the coherency of quantitative evaluation of visual explanations,
CVIU(241), 2024, pp. 103934.
Elsevier DOI 2403
Deep learning, Explainable AI (XAI), Visual explanations BibRef

Chen, X.Y.[Xin-Ye], Güttel, S.[Stefan],
Fast and explainable clustering based on sorting,
PR(150), 2024, pp. 110298.
Elsevier DOI 2403
Clustering, Fast aggregation, Sorting, Explainability BibRef

Wulfert, L.[Lars], Kühnel, J.[Johannes], Krupp, L.[Lukas], Viga, J.[Justus], Wiede, C.[Christian], Gembaczka, P.[Pierre], Grabmaier, A.[Anton],
AIfES: A Next-Generation Edge AI Framework,
PAMI(46), No. 6, June 2024, pp. 4519-4533.
IEEE DOI 2405
Training, Data models, Artificial intelligence, Support vector machines, Hardware acceleration, Libraries, TinyML BibRef

Pelous, E.[Enzo], Méger, N.[Nicolas], Benoit, A.[Alexandre], Atto, A.[Abdourrahmane], Ienco, D.[Dino], Courteille, H.[Hermann], Lin-Kwong-Chon, C.[Christophe],
Explaining the decisions and the functioning of a convolutional spatiotemporal land cover classifier with channel attention and redescription mining,
PandRS(215), 2024, pp. 256-270.
Elsevier DOI 2408
Explainable AI, Convolutional neural networks, Land cover classification, Satellite image time series, Grouped frequent sequential patterns BibRef

Shang, Q.J.[Qi-Jie], Zheng, T.[Tieran], Zhang, L.W.[Li-Wen], Zhang, Y.C.[You-Cheng], Ma, Z.[Zhe],
Concept-Based Explanations for Millimeter Wave Radar Target Recognition,
RS(16), No. 14, 2024, pp. 2640.
DOI Link 2408
BibRef

Zhang, H.W.[Han-Wei], Torres, F.[Felipe], Sicre, R.[Ronan], Avrithis, Y.[Yannis], Ayache, S.[Stephane],
Opti-CAM: Optimizing saliency maps for interpretability,
CVIU(248), 2024, pp. 104101.
Elsevier DOI 2409
Interpretability, Explainable AI, Saliency map, Class activation maps, Computer vision BibRef
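Class activation maps (CAMs), which Opti-CAM above refines by optimisation, weight the last convolutional feature maps by the target class's classifier weights. A minimal numpy sketch; shapes and names are illustrative:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM: class-weighted sum of final-layer conv feature maps.

    feature_maps: (C, H, W) activations of the last conv layer
    class_weights: (C,) classifier weights for the target class
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                               # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                                # normalise to [0, 1]
    return cam

rng = np.random.default_rng(1)
fmaps = rng.random((8, 7, 7))
w = rng.normal(size=8)
heatmap = class_activation_map(fmaps, w)
```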

Tasneem, S.[Sumaiya], Islam, K.A.[Kazi Aminul],
Improve Adversarial Robustness of AI Models in Remote Sensing via Data-Augmentation and Explainable-AI Methods,
RS(16), No. 17, 2024, pp. 3210.
DOI Link 2409
BibRef

Li, Y.S.[Yan-Shan], Liang, H.[Huajie], Zheng, L.R.[Li-Rong],
WB-LRP: Layer-wise relevance propagation with weight-dependent baseline,
PR(158), 2025, pp. 110956.
Elsevier DOI 2411
Layer-Wise Relevance Propagation (LRP), Interpretation BibRef

Kuznietsov, A.[Anton], Gyevnar, B.[Balint], Wang, C.[Cheng], Peters, S.[Steven], Albrecht, S.V.[Stefano V.],
Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review,
ITS(25), No. 12, December 2024, pp. 19342-19364.
IEEE DOI 2412
Safety, Surveys, Explainable AI, Stakeholders, Planning, Taxonomy, Standards, Monitoring, Autonomous vehicles, Terminology, AI safety BibRef

Jeanneret, G.[Guillaume], Simon, L.[Loïc], Jurie, F.[Frédéric],
Diffusion Models for Counterfactual Explanations,
CVIU(249), 2024, pp. 104207.
Elsevier DOI 2412
Counterfactual explanations, Explainable AI, Diffusion models, Spurious correlation detection BibRef

Sun, T.L.[Tian-Li], Chen, H.[Haonan], Hu, G.S.[Guo-Sheng], Zhao, C.R.[Cai-Rong],
Explainability-based knowledge distillation,
PR(159), 2025, pp. 111095.
Elsevier DOI Code:
WWW Link. 2412
Explainability, Knowledge distillation, CAM BibRef

Bley, F.[Florian], Lapuschkin, S.[Sebastian], Samek, W.[Wojciech], Montavon, G.[Grégoire],
Explaining predictive uncertainty by exposing second-order effects,
PR(160), 2025, pp. 111171.
Elsevier DOI 2501
Explainable AI, Predictive uncertainty, Ensemble models, Second-order attribution BibRef

Bello, M.[Marilyn], Amador, R.[Rosalís], García, M.M.[María-Matilde], del Ser, J.[Javier], Mesejo, P.[Pablo], Cordón, Ó.[Óscar],
The level of strength of an explanation: A quantitative evaluation technique for post-hoc XAI methods,
PR(161), 2025, pp. 111221.
Elsevier DOI 2502
Explainable artificial intelligence, Trustworthiness, Feature attribution, Quantitative evaluation, Likelihood ratio BibRef

Sreedharan, S.[Sarath], Srivastava, S.[Siddharth], Kambhampati, S.[Subbarao],
Explain it as simple as possible, but no simpler: Explanation via model simplification for addressing inferential gap,
AI(340), 2025, pp. 104279.
Elsevier DOI 2502
Explanations for plans, Abstractions, Contrastive explanations BibRef

Hu, L.[Lianyu], Jiang, M.[Mudi], Dong, J.J.[Jun-Jie], Liu, X.[Xinying], He, Z.[Zengyou],
Interpretable categorical data clustering via hypothesis testing,
PR(162), 2025, pp. 111364.
Elsevier DOI 2503
Interpretable clustering, Categorical data, Binary decision trees, Statistical hypothesis test, Splitting criteria BibRef

Xu, A.[Ao], Li, Z.[Zihao], Zhang, Y.[Yukai], Wu, T.[Tieru],
Generating Image Counterfactuals in Deep Learning Models Without the Aid of Generative Models,
SPLetters(32), 2025, pp. 1495-1499.
IEEE DOI 2504
Signal processing algorithms, Artificial intelligence, Closed box, Deep learning, Vectors, Optimization, Data models, image counterfactual explanation BibRef
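Non-generative counterfactual searches of the kind above typically perturb the input directly: descend the target-class loss while penalising distance from the original. A minimal sketch on a linear toy classifier — the logistic toy and all names are illustrative:

```python
import numpy as np

def counterfactual(x, grad_loss, predict, target, lr=0.1, l1=0.01, max_iter=200):
    """Gradient search for a nearby input that the model assigns to `target`."""
    x_cf = x.astype(float).copy()
    for _ in range(max_iter):
        if predict(x_cf) == target:
            break                                  # prediction flipped: done
        x_cf -= lr * grad_loss(x_cf)               # move toward the target class
        x_cf -= lr * l1 * np.sign(x_cf - x)        # sparsity: stay close to x
    return x_cf

# Linear toy classifier: class 1 iff w @ x > 0
w = np.array([1.0, -1.0])
predict = lambda z: int(w @ z > 0)
grad_loss = lambda z: -w                           # gradient of -w@z (raise the score)
x = np.array([-1.0, 1.0])                          # predicted class 0
x_cf = counterfactual(x, grad_loss, predict, target=1)
```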

Zhou, P.[Peng], Tong, Q.H.[Qi-Hui], Chen, S.[Shiji], Zhang, Y.Y.[Yun-Yun], Wu, X.D.[Xin-Dong],
EACE: Explain Anomaly via Counterfactual Explanations,
PR(164), 2025, pp. 111532.
Elsevier DOI 2504
Interpretable machine learning, Counterfactual explanation, Anomaly detection, Genetic algorithm BibRef

Choi, H.[Hoyoung], Jin, S.[Seungwan], Han, K.[Kyungsik],
ICEv2: Interpretability, Comprehensiveness, and Explainability in Vision Transformer,
IJCV(133), No. 5, May 2025, pp. 2487-2504.
Springer DOI 2504
BibRef

Tan, H.X.[Han-Xiao],
Evaluating Sensitivity Consistency of Explanations,
WACV25(182-191)
IEEE DOI 2505
Sensitivity, Perturbation methods, Computer network reliability, Refining, Artificial neural networks, Robustness, Proposals, explanation evaluation BibRef

Miles, R.[Roy], Elezi, I.[Ismail], Deng, J.K.[Jian-Kang],
V_kD: Improving Knowledge Distillation Using Orthogonal Projections,
CVPR24(15720-15730)
IEEE DOI Code:
WWW Link. 2410
Training, Deep learning, Image synthesis, Object detection, Transformer cores, Transformers, Knowledge distillation, Explainable AI BibRef

Wang, H.J.[Han-Jing], Biswas, B.A.[Bashirul Azam], Ji, Q.[Qiang],
Optimization-based Uncertainty Attribution Via Learning Informative Perturbations,
ECCV24(LXXVIII: 237-253).
Springer DOI 2412
BibRef

Jabbour, S.[Sarah], Kondas, G.[Gregory], Kazerooni, E.[Ella], Sjoding, M.[Michael], Fouhey, D.[David], Wiens, J.[Jenna],
Depict: Diffusion-enabled Permutation Importance for Image Classification Tasks,
ECCV24(LXIV: 35-51).
Springer DOI 2412
BibRef

Sobieski, B.[Bartlomiej], Biecek, P.[Przemyslaw],
Global Counterfactual Directions,
ECCV24(LXIII: 72-90).
Springer DOI 2412
BibRef

Duan, J.R.[Jia-Rui], Li, H.[Haoling], Zhang, H.F.[Hao-Fei], Jiang, H.[Hao], Xue, M.Q.[Meng-Qi], Sun, L.[Li], Song, M.L.[Ming-Li], Song, J.[Jie],
On the Evaluation Consistency of Attribution-based Explanations,
ECCV24(LXX: 206-224).
Springer DOI 2412
BibRef

Zhang, X.[Xianren], Lee, D.[Dongwon], Wang, S.[Suhang],
Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector,
ECCV24(LXXXI: 196-213).
Springer DOI 2412
BibRef

Mehrpanah, A.[Amir], Englesson, E.[Erik], Azizpour, H.[Hossein],
On Spectral Properties of Gradient-based Explanation Methods,
ECCV24(LXXXVII: 282-299).
Springer DOI 2412
BibRef

Atote, B.[Bhushan], Sanchez, V.[Victor],
Enhanced Prototypical Part Network (EPPNet) for Explainable Image Classification Via Prototypes,
ICIP24(528-534)
IEEE DOI 2411
Accuracy, Prototypes, Artificial neural networks, Artificial intelligence, Image classification, Explainable AI, image classification BibRef

Chowdhury, P.[Prithwijit], Prabhushankar, M.[Mohit], Al Regib, G.[Ghassan], Deriche, M.[Mohamed],
Are Objective Explanatory Evaluation Metrics Trustworthy? An Adversarial Analysis,
ICIP24(3938-3944)
IEEE DOI 2411
Measurement, Visualization, Shape, Shape measurement, Neural networks, Reliability theory, Predictive models, Importance Maps BibRef

Gong, S.Z.[Shi-Zhan], Dou, Q.[Qi], Farnia, F.[Farzan],
Structured Gradient-Based Interpretations via Norm-Regularized Adversarial Training,
CVPR24(11009-11018)
IEEE DOI Code:
WWW Link. 2410
Training, Perturbation methods, Benchmark testing, Stability analysis, Robustness, adversarial training BibRef

Bora, R.P.[Revoti Prasad], Terhorst, P.[Philipp], Veldhuis, R.[Raymond], Ramachandra, R.[Raghavendra], Raja, K.[Kiran],
SLICE: Stabilized LIME for Consistent Explanations for Image Classification,
CVPR24(10988-10996)
IEEE DOI 2410
Training, Perturbation methods, Closed box, Feature extraction, Artificial intelligence, LIME, XAI, Interpretability BibRef
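LIME-style explainers such as SLICE above fit a proximity-weighted linear surrogate to black-box predictions on randomly masked versions of the instance. A minimal tabular sketch — image variants mask superpixels instead, and the linear black box here is illustrative:

```python
import numpy as np

def lime_weights(predict, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a local linear surrogate to `predict` around instance x.

    predict: f(X) -> scores, a black-box scoring function
    x: (d,) the instance to explain (features toggled on/off)
    Returns per-feature coefficients of the weighted least-squares fit.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    masks = rng.integers(0, 2, size=(n_samples, d))     # random on/off masks
    X = masks * x                                       # perturbed neighbours
    y = predict(X)
    dist = 1.0 - masks.mean(axis=1)                     # fraction of features removed
    w = np.exp(-(dist ** 2) / kernel_width ** 2)        # proximity kernel
    A = np.hstack([masks, np.ones((n_samples, 1))])     # design matrix + intercept
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                                    # drop the intercept

# Black box is itself linear, so the surrogate recovers its slopes
beta = np.array([2.0, -1.0, 0.5])
coef = lime_weights(lambda X: X @ beta, np.ones(3))
```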

Lei, X.H.[Xiao-Han], Wang, M.[Min], Zhou, W.G.[Wen-Gang], Li, L.[Li], Li, H.Q.[Hou-Qiang],
Instance-Aware Exploration-Verification-Exploitation for Instance ImageGoal Navigation,
CVPR24(16329-16339)
IEEE DOI Code:
WWW Link. 2410
Visualization, Solid modeling, Navigation, Decision making, Semantics, Switches, Visual Navigation, Verification, Embodied Vision BibRef

Wu, W.J.[Wen-Jun], Zhang, L.L.[Ling-Ling], Liu, J.[Jun], Tang, X.[Xi], Wang, Y.X.[Ya-Xian], Wang, S.W.[Shao-Wei], Wang, Q.Y.[Qian-Ying],
E-GPS: Explainable Geometry Problem Solving via Top-Down Solver and Bottom-Up Generator,
CVPR24(13828-13837)
IEEE DOI 2410
Geometry, Training, Annotations, Formal languages, Generators, Cognition, Geometry Problem Solving BibRef

Sumiyasu, K.[Kosuke], Kawamoto, K.[Kazuhiko], Kera, H.[Hiroshi],
Identifying Important Group of Pixels using Interactions,
CVPR24(6017-6026)
IEEE DOI Code:
WWW Link. 2410
Greedy algorithms, Visualization, Costs, Codes, Computational modeling, Predictive models, explainable AI, Interactions BibRef

Wang, B.S.[Bor-Shiun], Wang, C.Y.[Chien-Yi], Chiu, W.C.[Wei-Chen],
MCPNet: An Interpretable Classifier via Multi-Level Concept Prototypes,
CVPR24(10885-10894)
IEEE DOI Code:
WWW Link. 2410
Training, Computational modeling, Semantics, Decision making, Prototypes, Explainable AI, Multi-scale explanation BibRef

Bandyopadhyay, H.[Hmrishav], Chowdhury, P.N.[Pinaki Nath], Bhunia, A.K.[Ayan Kumar], Sain, A.[Aneeshan], Xiang, T.[Tao], Song, Y.Z.[Yi-Zhe],
What Sketch Explainability Really Means for Downstream Tasks?,
CVPR24(10997-11008)
IEEE DOI 2410
Adaptation models, Computational modeling BibRef

Chakraborty, R.[Rwiddhi], Sletten, A.[Adrian], Kampffmeyer, M.C.[Michael C.],
ExMap: Leveraging Explainability Heatmaps for Unsupervised Group Robustness to Spurious Correlations,
CVPR24(12017-12026)
IEEE DOI Code:
WWW Link. 2410
Heating systems, Deep learning, Training, Learning systems, Bridges, Correlation BibRef

Berrouyne, M.[Mustapha], Hami, H.[Hinde], Jouilil, Y.[Youness],
Predictive Power of AI: Tackling Child Undernutrition in Morocco,
ISCV24(1-8)
IEEE DOI 2408
Logistic regression, Pediatrics, Sociology, Predictive models, Nearest neighbor methods, Boosting, Prediction algorithms, Gradient Boosting BibRef

Bachiri, K.[Khalil], Malek, M.[Maria], Yahyaouy, A.[Ali], Rogovschi, N.[Nicoleta],
Adaptive Subgraph Feature Extraction for Explainable Multi-Modal Learning,
ISCV24(1-7)
IEEE DOI 2408
Adaptation models, Adaptive systems, Social networking (online), Decision making, Refining, Closed box, Feature extraction, Adaptive Subgraph Feature Extraction BibRef

Hong, J.Y.[Jin-Yung], Park, K.H.[Keun Hee], Pavlic, T.P.[Theodore P.],
Concept-Centric Transformers: Enhancing Model Interpretability through Object-Centric Concept Learning within a Shared Global Workspace,
WACV24(4868-4879)
IEEE DOI 2404
Computational modeling, Semantics, Decision making, Memory modules, Transformers, Feature extraction, Synchronization, Algorithms, and algorithms BibRef

Wan, Q.Y.[Qi-Yang], Wang, R.P.[Rui-Ping], Chen, X.L.[Xi-Lin],
Interpretable Object Recognition by Semantic Prototype Analysis,
WACV24(789-798)
IEEE DOI 2404
Visualization, Analytical models, Semantics, Natural languages, Prototypes, Object recognition, Algorithms, ethical computer vision BibRef

Wang, C.[Chong], Liu, Y.[Yuyuan], Chen, Y.H.[Yuan-Hong], Liu, F.B.[Feng-Bei], Tian, Y.[Yu], McCarthy, D.[Davis], Frazer, H.[Helen], Carneiro, G.[Gustavo],
Learning Support and Trivial Prototypes for Interpretable Image Classification,
ICCV23(2062-2072)
IEEE DOI 2401
BibRef

Santos, F.A.O.[Flávio Arthur Oliveira], Zanchettin, C.[Cleber],
Exploring Image Classification Robustness and Interpretability with Right for the Right Reasons Data Augmentation,
LXCV-ICCV23(4149-4158)
IEEE DOI 2401
BibRef

Zhang, Y.F.[Yi-Fei], Gu, S.[Siyi], Gao, Y.Y.[Yu-Yang], Pan, B.[Bo], Yang, X.F.[Xiao-Feng], Zhao, L.[Liang],
MAGI: Multi-Annotated Explanation-Guided Learning,
ICCV23(1977-1987)
IEEE DOI 2401
BibRef

Fan, L.[Lei], Liu, B.[Bo], Li, H.X.[Hao-Xiang], Wu, Y.[Ying], Hua, G.[Gang],
Flexible Visual Recognition by Evidential Modeling of Confusion and Ignorance,
ICCV23(1338-1347)
IEEE DOI 2401
Address confidence of results and multiple options. BibRef

Englebert, A.[Alexandre], Stassin, S.[Sédrick], Nanfack, G.[Géraldin], Mahmoudi, S.A.[Sidi Ahmed], Siebert, X.[Xavier], Cornu, O.[Olivier], de Vleeschouwer, C.[Christophe],
Explaining through Transformer Input Sampling,
NIVT23(806-815)
IEEE DOI Code:
WWW Link. 2401
BibRef

Masala, M.[Mihai], Cudlenco, N.[Nicolae], Rebedea, T.[Traian], Leordeanu, M.[Marius],
Explaining Vision and Language through Graphs of Events in Space and Time,
CLVL23(2818-2823)
IEEE DOI 2401
BibRef

Zelenka, C.[Claudius], Göhring, A.[Andrea], Kazempour, D.[Daniyal], Hünemörder, M.[Maximilian], Schmarje, L.[Lars], Kröger, P.[Peer],
A Simple and Explainable Method for Uncertainty Estimation using Attribute Prototype Networks,
Uncertainty23(4572-4581)
IEEE DOI 2401
BibRef

Hesse, R.[Robin], Schaub-Meyer, S.[Simone], Roth, S.[Stefan],
FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods,
ICCV23(3958-3968)
IEEE DOI 2401
BibRef

Dai, B.[Bo], Wang, L.[Linge], Jia, B.X.[Bao-Xiong], Zhang, Z.[Zeyu], Zhu, S.C.[Song-Chun], Zhang, C.[Chi], Zhu, Y.X.[Yi-Xin],
X-VoE: Measuring eXplanatory Violation of Expectation in Physical Events,
ICCV23(3969-3979)
IEEE DOI 2401
BibRef

Roth, K.[Karsten], Kim, J.M.[Jae Myung], Koepke, A.S.[A. Sophia], Vinyals, O.[Oriol], Schmid, C.[Cordelia], Akata, Z.[Zeynep],
Waffling around for Performance: Visual Classification with Random Words and Broad Concepts,
ICCV23(15700-15711)
IEEE DOI Code:
WWW Link. 2401
BibRef

Gerstenberger, M.[Michael], Wiegand, T.[Thomas], Eisert, P.[Peter], Bosse, S.[Sebastian],
But That's Not Why: Inference Adjustment by Interactive Prototype Revision,
CIARP23(I:123-132).
Springer DOI 2312
BibRef

Poppi, S.[Samuele], Bigazzi, R.[Roberto], Rawal, N.[Niyati], Cornia, M.[Marcella], Cascianelli, S.[Silvia], Baraldi, L.[Lorenzo], Cucchiara, R.[Rita],
Towards Explainable Navigation and Recounting,
CIAP23(I:171-183).
Springer DOI 2312
BibRef

Nicolaou, A.[Andria], Prentzas, N.[Nicoletta], Loizou, C.P.[Christos P.], Pantzaris, M.[Marios], Kakas, A.[Antonis], Pattichis, C.S.[Constantinos S.],
A Comparative Study of Explainable Ai models in the Assessment of Multiple Sclerosis,
CAIP23(II:140-148).
Springer DOI 2312
BibRef

Wang, Y.Y.[Yang Yang], Bunyak, F.[Filiz],
DFT-CAM: Discrete Fourier Transform Driven Class Activation Map,
ICIP23(500-504)
IEEE DOI 2312
BibRef

Joukovsky, B.[Boris], Sammani, F.[Fawaz], Deligiannis, N.[Nikos],
Model-Agnostic Visual Explanations via Approximate Bilinear Models,
ICIP23(1770-1774)
IEEE DOI 2312
BibRef

Wang, Y.F.[Yi-Fan], Deng, S.Y.[Si-Yuan], Yuan, K.H.[Kun-Hao], Schaefer, G.[Gerald], Liu, X.[Xiyao], Fang, H.[Hui],
A Novel Class Activation Map for Visual Explanations in Multi-Object Scenes,
ICIP23(2615-2619)
IEEE DOI 2312
BibRef

Moayeri, M.[Mazda], Rezaei, K.[Keivan], Sanjabi, M.[Maziar], Feizi, S.[Soheil],
Text2Concept: Concept Activation Vectors Directly from Text,
XAI4CV23(3744-3749)
IEEE DOI 2309
BibRef

Ramaswamy, V.V.[Vikram V.], Kim, S.S.Y.[Sunnie S. Y.], Fong, R.[Ruth], Russakovsky, O.[Olga],
Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability,
CVPR23(10932-10941)
IEEE DOI 2309
BibRef

Wang, H.J.[Han-Jing], Joshi, D.[Dhiraj], Wang, S.Q.[Shi-Qiang], Ji, Q.[Qiang],
Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning,
CVPR23(12044-12053)
IEEE DOI 2309
BibRef

Zemni, M.[Mehdi], Chen, M.[Mickaël], Zablocki, É.[Éloi], Ben-Younes, H.[Hédi], Pérez, P.[Patrick], Cord, M.[Matthieu],
OCTET: Object-aware Counterfactual Explanations,
CVPR23(15062-15071)
IEEE DOI 2309
BibRef

Jeanneret, G.[Guillaume], Simon, L.[Loïc], Jurie, F.[Frédéric],
Adversarial Counterfactual Visual Explanations,
CVPR23(16425-16435)
IEEE DOI 2309
BibRef

Yang, R.[Ruo], Wang, B.H.[Bing-Hui], Bilgic, M.[Mustafa],
IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients,
CVPR23(23725-23734)
IEEE DOI 2309
BibRef
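Integrated gradients, whose explanation noise IDGI above targets, averages the model's gradient along a straight path from a baseline to the input. A minimal sketch; the quadratic toy model is illustrative:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline=None, steps=64):
    """Approximate integrated gradients along the baseline -> x path.

    grad_fn: returns the gradient of the model output at a point.
    """
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps          # midpoint rule
    path = baseline + alphas[:, None] * (x - baseline) # (steps, d) points
    grads = np.stack([grad_fn(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)         # per-feature attribution

# Toy model f(x) = sum(x^2), so grad f = 2x and IG from zero gives x^2
grad_f = lambda p: 2.0 * p
x = np.array([1.0, -2.0, 3.0])
attr = integrated_gradients(grad_f, x)
```

By completeness, the attributions sum to `f(x) - f(baseline)`: here `attr` equals `x**2` and sums to 14.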

Fel, T.[Thomas], Ducoffe, M.[Melanie], Vigouroux, D.[David], Cadène, R.[Rémi], Capelle, M.[Mikael], Nicodème, C.[Claire], Serre, T.[Thomas],
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis,
CVPR23(16153-16163)
IEEE DOI 2309
BibRef

Arias-Duart, A.[Anna], Mariotti, E.[Ettore], Garcia-Gasulla, D.[Dario], Alonso-Moral, J.M.[Jose Maria],
A Confusion Matrix for Evaluating Feature Attribution Methods,
XAI4CV23(3709-3714)
IEEE DOI 2309
BibRef

Bordt, S.[Sebastian], Upadhyay, U.[Uddeshya], Akata, Z.[Zeynep], von Luxburg, U.[Ulrike],
The Manifold Hypothesis for Gradient-Based Explanations,
XAI4CV23(3697-3702)
IEEE DOI 2309
BibRef

Yang, Y.[Yue], Panagopoulou, A.[Artemis], Zhou, S.H.[Sheng-Hao], Jin, D.[Daniel], Callison-Burch, C.[Chris], Yatskar, M.[Mark],
Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification,
CVPR23(19187-19197)
IEEE DOI 2309
BibRef

Nauta, M.[Meike], Schlötterer, J.[Jörg], van Keulen, M.[Maurice], Seifert, C.[Christin],
PIP-Net: Patch-Based Intuitive Prototypes for Interpretable Image Classification,
CVPR23(2744-2753)
IEEE DOI 2309
BibRef

Fel, T.[Thomas], Picard, A.[Agustin], Bethune, L.[Louis], Boissin, T.[Thibaut], Vigouroux, D.[David], Colin, J.[Julien], Cadène, R.[Rémi], Serre, T.[Thomas],
CRAFT: Concept Recursive Activation FacTorization for Explainability,
CVPR23(2711-2721)
IEEE DOI 2309
BibRef

Santhirasekaram, A.[Ainkaran], Kori, A.[Avinash], Winkler, M.[Mathias], Rockall, A.[Andrea], Toni, F.[Francesca], Glocker, B.[Ben],
Robust Hierarchical Symbolic Explanations in Hyperbolic Space for Image Classification,
TAG-PRA23(561-570)
IEEE DOI 2309
BibRef

Zhao, Z.X.[Zi-Xiang], Zhang, J.S.[Jiang-She], Bai, H.W.[Hao-Wen], Wang, Y.C.[Yi-Cheng], Cui, Y.K.[Yu-Kun], Deng, L.[Lilun], Sun, K.[Kai], Zhang, C.X.[Chun-Xia], Liu, J.[Junmin], Xu, S.[Shuang],
Deep Convolutional Sparse Coding Networks for Interpretable Image Fusion,
AML23(2369-2377)
IEEE DOI 2309
BibRef

Akash Guna, R.T., Benitez, R.[Raul], Sikha, O.K.,
Ante-Hoc Generation of Task-Agnostic Interpretation Maps,
XAI4CV23(3764-3769)
IEEE DOI 2309
BibRef

Shukla, P.[Pushkar], Bharati, S.[Sushil], Turk, M.[Matthew],
CAVLI - Using image associations to produce local concept-based explanations,
XAI4CV23(3750-3755)
IEEE DOI 2309
BibRef

Hasany, S.N.[Syed Nouman], Petitjean, C.[Caroline], Mériaudeau, F.[Fabrice],
Seg-XRes-CAM: Explaining Spatially Local Regions in Image Segmentation,
XAI4CV23(3733-3738)
IEEE DOI 2309
BibRef

Riva, M.[Mateus], Gori, P.[Pietro], Yger, F.[Florian], Bloch, I.[Isabelle],
Is the U-NET Directional-Relationship Aware?,
ICIP22(3391-3395)
IEEE DOI 2211
Training data, Cognition, Task analysis, Standards, XAI, structural information, directional relationships, U-Net BibRef

Teney, D.[Damien], Peyrard, M.[Maxime], Abbasnejad, E.[Ehsan],
Predicting Is Not Understanding: Recognizing and Addressing Underspecification in Machine Learning,
ECCV22(XXIII:458-476).
Springer DOI 2211
BibRef

Sovatzidi, G.[Georgia], Vasilakakis, M.D.[Michael D.], Iakovidis, D.K.[Dimitris K.],
Automatic Fuzzy Graph Construction For Interpretable Image Classification,
ICIP22(3743-3747)
IEEE DOI 2211
Image edge detection, Semantics, Machine learning, Predictive models, Feature extraction, Interpretability BibRef

Chari, P.[Pradyumna], Ba, Y.H.[Yun-Hao], Athreya, S.[Shreeram], Kadambi, A.[Achuta],
MIME: Minority Inclusion for Majority Group Enhancement of AI Performance,
ECCV22(XIII:326-343).
Springer DOI 2211

WWW Link. BibRef

Deng, A.[Ailin], Li, S.[Shen], Xiong, M.[Miao], Chen, Z.[Zhirui], Hooi, B.[Bryan],
Trust, but Verify: Using Self-supervised Probing to Improve Trustworthiness,
ECCV22(XIII:361-377).
Springer DOI 2211
BibRef

Rymarczyk, D.[Dawid], Struski, L.[Lukasz], Górszczak, M.[Michal], Lewandowska, K.[Koryna], Tabor, J.[Jacek], Zielinski, B.[Bartosz],
Interpretable Image Classification with Differentiable Prototypes Assignment,
ECCV22(XII:351-368).
Springer DOI 2211
BibRef

Vandenhende, S.[Simon], Mahajan, D.[Dhruv], Radenovic, F.[Filip], Ghadiyaram, D.[Deepti],
Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals,
ECCV22(XII:261-279).
Springer DOI 2211

WWW Link. BibRef

Kim, S.S.Y.[Sunnie S. Y.], Meister, N.[Nicole], Ramaswamy, V.V.[Vikram V.], Fong, R.[Ruth], Russakovsky, O.[Olga],
HIVE: Evaluating the Human Interpretability of Visual Explanations,
ECCV22(XII:280-298).
Springer DOI 2211
BibRef

Jacob, P.[Paul], Zablocki, É.[Éloi], Ben-Younes, H.[Hédi], Chen, M.[Mickaël], Pérez, P.[Patrick], Cord, M.[Matthieu],
STEEX: Steering Counterfactual Explanations with Semantics,
ECCV22(XII:387-403).
Springer DOI 2211
BibRef

Machiraju, G.[Gautam], Plevritis, S.[Sylvia], Mallick, P.[Parag],
A Dataset Generation Framework for Evaluating Megapixel Image Classifiers and Their Explanations,
ECCV22(XII:422-442).
Springer DOI 2211
BibRef

Kolek, S.[Stefan], Nguyen, D.A.[Duc Anh], Levie, R.[Ron], Bruna, J.[Joan], Kutyniok, G.[Gitta],
Cartoon Explanations of Image Classifiers,
ECCV22(XII:443-458).
Springer DOI 2211
BibRef

Motzkus, F.[Franz], Weber, L.[Leander], Lapuschkin, S.[Sebastian],
Measurably Stronger Explanation Reliability Via Model Canonization,
ICIP22(516-520)
IEEE DOI 2211
Location awareness, Deep learning, Visualization, Current measurement, Neural networks, Network architecture BibRef

Yang, G.[Guang], Rao, A.[Arvind], Fernandez-Maloigne, C.[Christine], Calhoun, V.[Vince], Menegaz, G.[Gloria],
Explainable AI (XAI) In Biomedical Signal and Image Processing: Promises and Challenges,
ICIP22(1531-1535)
IEEE DOI 2211
Deep learning, Image segmentation, Special issues and sections, Biological system modeling, Signal processing, Data models, Biomedical Data BibRef

Paiss, R.[Roni], Chefer, H.[Hila], Wolf, L.B.[Lior B.],
No Token Left Behind: Explainability-Aided Image Classification and Generation,
ECCV22(XII:334-350).
Springer DOI 2211
BibRef

Khorram, S.[Saeed], Li, F.X.[Fu-Xin],
Cycle-Consistent Counterfactuals by Latent Transformations,
CVPR22(10193-10202)
IEEE DOI 2210
Tries to find images similar to the query image that change the decision. Training, Measurement, Visualization, Image resolution, Machine vision, Computational modeling, Explainable computer vision BibRef

Haselhoff, A.[Anselm], Kronenberger, J.[Jan], Küppers, F.[Fabian], Schneider, J.[Jonas],
Towards Black-Box Explainability with Gaussian Discriminant Knowledge Distillation,
SAIAD21(21-28)
IEEE DOI 2109
Visualization, Shape, Semantics, Training data, Object detection, Predictive models, Linear programming BibRef

Hepburn, A.[Alexander], Santos-Rodriguez, R.[Raul],
Explainers in the Wild: Making Surrogate Explainers Robust to Distortions Through Perception,
ICIP21(3717-3721)
IEEE DOI 2201
Training, Measurement, Image processing, Predictive models, Distortion, Robustness, Explainability, surrogates, perception BibRef

Palacio, S.[Sebastian], Lucieri, A.[Adriano], Munir, M.[Mohsin], Ahmed, S.[Sheraz], Hees, J.[Jörn], Dengel, A.[Andreas],
XAI Handbook: Towards a Unified Framework for Explainable AI,
RPRMI21(3759-3768)
IEEE DOI 2112
Measurement, Terminology, Pipelines, Concrete BibRef

Vierling, A.[Axel], James, C.[Charu], Berns, K.[Karsten], Katsaouni, N.[Nikoletta],
Provable Translational Robustness for Object Detection With Convolutional Neural Networks,
ICIP21(694-698)
IEEE DOI 2201
Training, Support vector machines, Analytical models, Scattering, Object detection, Detectors, Feature extraction, Explainable AI BibRef

Ortega, A.[Alfonso], Fierrez, J.[Julian], Morales, A.[Aythami], Wang, Z.L.[Zi-Long], Ribeiro, T.[Tony],
Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment,
WACVW21(78-87) Explainable and Interpretable AI
IEEE DOI 2105
Training, Machine learning algorithms, Biometrics (access control), Resumes, Neural networks, Tools BibRef

Kwon, H.J.[Hyuk Jin], Koo, H.I.[Hyung Il], Cho, N.I.[Nam Ik],
Improving Explainability of Integrated Gradients with Guided Non-Linearity,
ICPR21(385-391)
IEEE DOI 2105
Measurement, Heating systems, Visualization, Gradient methods, Action potentials, Perturbation methods, Neurons BibRef

Fuhl, W.[Wolfgang], Rong, Y.[Yao], Motz, T.[Thomas], Scheidt, M.[Michael], Hartel, A.[Andreas], Koch, A.[Andreas], Kasneci, E.[Enkelejda],
Explainable Online Validation of Machine Learning Models for Practical Applications,
ICPR21(3304-3311)
IEEE DOI 2105
Machine learning algorithms, Microcontrollers, Memory management, Data acquisition, Training data, Transforms, Machine learning BibRef

Mänttäri, J.[Joonatan], Broomé, S.[Sofia], Folkesson, J.[John], Kjellström, H.[Hedvig],
Interpreting Video Features: A Comparison of 3d Convolutional Networks and Convolutional LSTM Networks,
ACCV20(V:411-426).
Springer DOI 2103

See also Interpretable Explanations of Black Boxes by Meaningful Perturbation. BibRef

Oussalah, M.[Mourad],
AI Explainability. A Bridge Between Machine Vision and Natural Language Processing,
EDL-AI20(257-273).
Springer DOI 2103
BibRef

Petkovic, D., Alavi, A., Cai, D., Wong, M.,
Random Forest Model and Sample Explainer for Non-experts in Machine Learning: Two Case Studies,
EDL-AI20(62-75).
Springer DOI 2103
BibRef

Muddamsetty, S.M.[Satya M.], Jahromi, M.N.S.[Mohammad N. S.], Moeslund, T.B.[Thomas B.],
Expert Level Evaluations for Explainable AI (XAI) Methods in the Medical Domain,
EDL-AI20(35-46).
Springer DOI 2103
BibRef

Muddamsetty, S.M., Mohammad, N.S.J., Moeslund, T.B.,
SIDU: Similarity Difference And Uniqueness Method for Explainable AI,
ICIP20(3269-3273)
IEEE DOI 2011
Visualization, Predictive models, Machine learning, Computational modeling, Measurement, Task analysis, Explainable AI, CNN BibRef

Sun, Y.C.[You-Cheng], Chockler, H.[Hana], Huang, X.W.[Xiao-Wei], Kroening, D.[Daniel],
Explaining Image Classifiers Using Statistical Fault Localization,
ECCV20(XXVIII:391-406).
Springer DOI 2011
BibRef

Choi, H., Som, A., Turaga, P.,
AMC-Loss: Angular Margin Contrastive Loss for Improved Explainability in Image Classification,
Diff-CVML20(3659-3666)
IEEE DOI 2008
Training, Task analysis, Feature extraction, Euclidean distance, Airplanes, Media BibRef

Parafita, Á., Vitrià, J.,
Explaining Visual Models by Causal Attribution,
VXAI19(4167-4175)
IEEE DOI 2004
data handling, feature extraction, intervened causal model, causal attribution, visual models, image generative models, learning BibRef

Schlegel, U., Arnout, H., El-Assady, M., Oelke, D., Keim, D.A.,
Towards A Rigorous Evaluation Of XAI Methods On Time Series,
VXAI19(4197-4201)
IEEE DOI 2004
image processing, learning (artificial intelligence), text analysis, time series, SHAP, image domain, text-domain, explainable-ai-evaluation BibRef

Fong, R.C.[Ruth C.], Vedaldi, A.[Andrea],
Interpretable Explanations of Black Boxes by Meaningful Perturbation,
ICCV17(3449-3457)
IEEE DOI 1802
Explains the result of learning. Image classification, learning (artificial intelligence), black box algorithm, black boxes, classifier decision, Visualization BibRef

Hossam, M.[Mahmoud], Le, T.[Trung], Zhao, H.[He], Phung, D.[Dinh],
Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability,
ICPR21(8922-8928)
IEEE DOI 2105
Training, Deep learning, Computational modeling, Perturbation methods, Text categorization, Natural languages, Training data BibRef

Plummer, B.A.[Bryan A.], Vasileva, M.I.[Mariya I.], Petsiuk, V.[Vitali], Saenko, K.[Kate], Forsyth, D.A.[David A.],
Why Do These Match? Explaining the Behavior of Image Similarity Models,
ECCV20(XI:652-669).
Springer DOI 2011
BibRef

Cheng, X., Rao, Z., Chen, Y., Zhang, Q.,
Explaining Knowledge Distillation by Quantifying the Knowledge,
CVPR20(12922-12932)
IEEE DOI 2008
Visualization, Task analysis, Measurement, Knowledge engineering, Optimization, Entropy, Neural networks BibRef

Chen, Y.,
Nonparametric Learning Via Successive Subspace Modeling (SSM),
ICIP19(3031-3032)
IEEE DOI 1910
Machine Learning, Explainable Machine Learning, Nonparametric Learning, Subspace Modeling, Successive Subspace Modeling BibRef

Shi, J.X.[Jia-Xin], Zhang, H.W.[Han-Wang], Li, J.Z.[Juan-Zi],
Explainable and Explicit Visual Reasoning Over Scene Graphs,
CVPR19(8368-8376).
IEEE DOI 2002
BibRef

Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Constraint Based Matching.


Last update: May 5, 2025 at 20:47:32