14.5.9.8.11 Interpretation, Explanation, Understanding of Convolutional Neural Networks

Convolutional Neural Networks. Explainable. CNN.
See also Forgetting, Learning without Forgetting, Convolutional Neural Networks.
See also Continual Learning.
See also Dynamic Learning, Incremental Learning.
See also Knowledge Distillation.
General Explainable:
See also Explainable Artificial Intelligence.

Chung, F.L., Wang, S., Deng, Z., Hu, D.,
CATSMLP: Toward a Robust and Interpretable Multilayer Perceptron With Sigmoid Activation Functions,
SMC-B(36), No. 6, December 2006, pp. 1319-1331.
IEEE DOI 0701
BibRef

Mopuri, K.R., Garg, U., Babu, R.V.[R. Venkatesh],
CNN Fixations: An Unraveling Approach to Visualize the Discriminative Image Regions,
IP(28), No. 5, May 2019, pp. 2116-2125.
IEEE DOI 1903
convolutional neural nets, feature extraction, object recognition, CNN fixations, discriminative image regions, weakly supervised localization BibRef

Kuo, C.C.J.[C.C. Jay], Zhang, M.[Min], Li, S.Y.[Si-Yang], Duan, J.L.[Jia-Li], Chen, Y.[Yueru],
Interpretable convolutional neural networks via feedforward design,
JVCIR(60), 2019, pp. 346-359.
Elsevier DOI 1903
Interpretable machine learning, Convolutional neural networks, Principal component analysis, Dimension reduction BibRef

Chen, Y., Yang, Y., Wang, W., Kuo, C.C.J.,
Ensembles of Feedforward-Designed Convolutional Neural Networks,
ICIP19(3796-3800)
IEEE DOI 1910
Ensemble, Image classification, Interpretable CNN, Dimension reduction BibRef

Chen, Y., Yang, Y., Zhang, M., Kuo, C.C.J.,
Semi-Supervised Learning Via Feedforward-Designed Convolutional Neural Networks,
ICIP19(365-369)
IEEE DOI 1910
Semi-supervised learning, Ensemble, Image classification, Interpretable CNN BibRef

Li, H.[Heyi], Tian, Y.K.[Yun-Ke], Mueller, K.[Klaus], Chen, X.[Xin],
Beyond saliency: Understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation,
IVC(83-84), 2019, pp. 70-86.
Elsevier DOI 1904
Convolutional neural networks, Deep learning understanding, Salient relevance map, Attention area BibRef

Fan, C.X.[Chun-Xiao], Li, Y.[Yang], Tian, L.[Lei], Li, Y.[Yong],
Rectifying Transformation Networks for Transformation-Invariant Representations with Power Law,
IEICE(E102-D), No. 3, March 2019, pp. 675-679.
WWW Link. 1904
CNN to rectify learned feature representations. BibRef

Cao, C.S.[Chun-Shui], Huang, Y.Z.[Yong-Zhen], Yang, Y.[Yi], Wang, L.[Liang], Wang, Z.L.[Zi-Lei], Tan, T.N.[Tie-Niu],
Feedback Convolutional Neural Network for Visual Localization and Segmentation,
PAMI(41), No. 7, July 2019, pp. 1627-1640.
IEEE DOI 1906
Neurons, Visualization, Image segmentation, Semantics, Convolutional neural networks, Task analysis, object segmentation BibRef

Zhou, B.[Bolei], Bau, D.[David], Oliva, A.[Aude], Torralba, A.B.[Antonio B.],
Interpreting Deep Visual Representations via Network Dissection,
PAMI(41), No. 9, Sep. 2019, pp. 2131-2145.
IEEE DOI 1908
Method quantifies the interpretability of CNN representations. Visualization, Detectors, Training, Image color analysis, Task analysis, Image segmentation, Semantics, interpretable machine learning BibRef

Liu, R.S.[Ri-Sheng], Cheng, S.C.[Shi-Chao], Ma, L.[Long], Fan, X.[Xin], Luo, Z.X.[Zhong-Xuan],
Deep Proximal Unrolling: Algorithmic Framework, Convergence Analysis and Applications,
IP(28), No. 10, October 2019, pp. 5013-5026.
IEEE DOI 1909
Task analysis, Optimization, Convergence, Mathematical model, Network architecture, Data models, low-level computer vision BibRef

Cui, X.R.[Xin-Rui], Wang, D.[Dan], Wang, Z.J.[Z. Jane],
Multi-Scale Interpretation Model for Convolutional Neural Networks: Building Trust Based on Hierarchical Interpretation,
MultMed(21), No. 9, September 2019, pp. 2263-2276.
IEEE DOI 1909
Visualization, Computational modeling, Analytical models, Feature extraction, Perturbation methods, Image segmentation, model-agnostic BibRef

Wang, W.[Wei], Zhu, L.Q.[Li-Qiang], Guo, B.Q.[Bao-Qing],
Reliable identification of redundant kernels for convolutional neural network compression,
JVCIR(63), 2019, pp. 102582.
Elsevier DOI 1909
Network compression, Convolutional neural network, Pruning criterion, Channel-level pruning BibRef

Hu, S.X.[Shell Xu], Zagoruyko, S.[Sergey], Komodakis, N.[Nikos],
Exploring weight symmetry in deep neural networks,
CVIU(187), 2019, pp. 102786.
Elsevier DOI 1909
BibRef

Selvaraju, R.R.[Ramprasaath R.], Cogswell, M.[Michael], Das, A.[Abhishek], Vedantam, R.[Ramakrishna], Parikh, D.[Devi], Batra, D.[Dhruv],
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization,
IJCV(128), No. 2, February 2020, pp. 336-359.
Springer DOI 2002
BibRef
Earlier: ICCV17(618-626)
IEEE DOI 1802
Explain the CNN models. convolution, data visualisation, gradient methods, image classification, image representation, inference mechanisms, Visualization BibRef

Aich, S.[Shubhra], Yamazaki, M.[Masaki], Taniguchi, Y.[Yasuhiro], Stavness, I.[Ian],
Multi-Scale Weight Sharing Network for Image Recognition,
PRL(131), 2020, pp. 348-354.
Elsevier DOI 2004
Multi-scale weight sharing, Image recognition, Convolutional neural networks, Image classification BibRef

Saraee, E.[Elham], Jalal, M.[Mona], Betke, M.[Margrit],
Visual complexity analysis using deep intermediate-layer features,
CVIU(195), 2020, pp. 102949.
Elsevier DOI 2005
Visual complexity, Convolutional layers, Deep neural network, Feature extraction, Convolutional neural network, Scene classification BibRef

Xie, L., Lee, F., Liu, L., Yin, Z., Chen, Q.,
Hierarchical Coding of Convolutional Features for Scene Recognition,
MultMed(22), No. 5, May 2020, pp. 1182-1192.
IEEE DOI 2005
Visualization, Convolutional codes, Encoding, Image representation, Feature extraction, Image recognition, Image coding, Scene recognition BibRef

Gao, X.J.[Xin-Jian], Zhang, Z.[Zhao], Mu, T.T.[Ting-Ting], Zhang, X.D.[Xu-Dong], Cui, C.R.[Chao-Ran], Wang, M.[Meng],
Self-attention driven adversarial similarity learning network,
PR(105), 2020, pp. 107331.
Elsevier DOI 2006
Self-attention mechanism, Adversarial loss, Similarity learning network, Explainable deep learning BibRef

Rickmann, A.M.[Anne-Marie], Roy, A.G.[Abhijit Guha], Sarasua, I.[Ignacio], Wachinger, C.[Christian],
Recalibrating 3D ConvNets With Project Excite,
MedImg(39), No. 7, July 2020, pp. 2461-2471.
IEEE DOI 2007
Biomedical imaging, Image segmentation, Task analysis, volumetric segmentation BibRef

Sarasua, I.[Ignacio], Pölsterl, S., Wachinger, C.[Christian],
Recalibration of Neural Networks for Point Cloud Analysis,
3DV20(443-451)
IEEE DOI 2102
Shape, Solid modeling, Calibration, Feature extraction, Image analysis, Computer architecture BibRef

Wang, Y., Su, H., Zhang, B., Hu, X.,
Learning Reliable Visual Saliency For Model Explanations,
MultMed(22), No. 7, July 2020, pp. 1796-1807.
IEEE DOI 2007
Visualization, Reliability, Predictive models, Task analysis, Perturbation methods, Backpropagation, Real-time systems, deep learning BibRef

Cui, X.R.[Xin-Rui], Wang, D.[Dan], Wang, Z.J.[Z. Jane],
Feature-Flow Interpretation of Deep Convolutional Neural Networks,
MultMed(22), No. 7, July 2020, pp. 1847-1861.
IEEE DOI 2007
Visualization, Computational modeling, Perturbation methods, Convolutional neural networks, Medical services, Birds, sparse representation BibRef

Rafegas, I.[Ivet], Vanrell, M.[Maria], Alexandre, L.A.[Luís A.], Arias, G.[Guillem],
Understanding trained CNNs by indexing neuron selectivity,
PRL(136), 2020, pp. 318-325.
Elsevier DOI 2008
Convolutional neural networks, Visualization of CNNs, Neuron selectivity, CNNs Understanding, Feature visualization, BibRef

Shi, X., Xing, F., Xu, K., Chen, P., Liang, Y., Lu, Z., Guo, Z.,
Loss-Based Attention for Interpreting Image-Level Prediction of Convolutional Neural Networks,
IP(30), 2021, pp. 1662-1675.
IEEE DOI 2101
Feature extraction, Routing, Visualization, Training, Convolutional codes, weighted sum BibRef

Patro, B.N.[Badri N.], Lunayach, M.[Mayank], Namboodiri, V.P.[Vinay P.],
Uncertainty Class Activation Map (U-CAM) Using Gradient Certainty Method,
IP(30), 2021, pp. 1910-1924.
IEEE DOI 2101
Uncertainty, Visualization, Predictive models, Task analysis, Knowledge discovery, Mathematical model, Deep learning, epistemic uncertainty BibRef

Patro, B.N.[Badri N.], Lunayach, M.[Mayank], Patel, S.[Shivansh], Namboodiri, V.P.[Vinay P.],
U-CAM: Visual Explanation Using Uncertainty Based Class Activation Maps,
ICCV19(7443-7452)
IEEE DOI 2004
inference mechanisms, learning (artificial intelligence), visual explanation, uncertainty based class activation maps, Data models BibRef

Gu, R.[Ran], Wang, G.T.[Guo-Tai], Song, T.[Tao], Huang, R.[Rui], Aertsen, M.[Michael], Deprest, J.[Jan], Ourselin, S.[Sébastien], Vercauteren, T.[Tom], Zhang, S.T.[Shao-Ting],
CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation,
MedImg(40), No. 2, February 2021, pp. 699-711.
IEEE DOI 2102
Image segmentation, Task analysis, Feature extraction, Medical diagnostic imaging, Shape, Convolutional neural networks, explainability BibRef

Monga, V., Li, Y., Eldar, Y.C.,
Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing,
SPMag(38), No. 2, March 2021, pp. 18-44.
IEEE DOI 2103
Training data, Systematics, Neural networks, Signal processing algorithms, Performance gain, Machine learning BibRef

Van Luong, H., Joukovsky, B., Deligiannis, N.,
Designing Interpretable Recurrent Neural Networks for Video Reconstruction via Deep Unfolding,
IP(30), 2021, pp. 4099-4113.
IEEE DOI 2104
Image reconstruction, Minimization, Recurrent neural networks, Image coding, Signal reconstruction, Task analysis, sequential frame reconstruction BibRef

Feng, Z.P.[Zhen-Peng], Zhu, M.Z.[Ming-Zhe], Stankovic, L.[Ljubiša], Ji, H.B.[Hong-Bing],
Self-Matching CAM: A Novel Accurate Visual Explanation of CNNs for SAR Image Interpretation,
RS(13), No. 9, 2021, pp. xx-yy.
DOI Link 2105
BibRef

Wang, D.[Dan], Cui, X.R.[Xin-Rui], Chen, X.[Xun], Ward, R.[Rabab], Wang, Z.J.[Z. Jane],
Interpreting Bottom-Up Decision-Making of CNNs via Hierarchical Inference,
IP(30), 2021, pp. 6701-6714.
IEEE DOI 2108
Visualization, Decision making, Semantics, Image color analysis, Perturbation methods, Neuroscience, Training, Interpretation model, decision-making process BibRef

La Gatta, V.[Valerio], Moscato, V.[Vincenzo], Postiglione, M.[Marco], Sperlì, G.[Giancarlo],
PASTLE: Pivot-aided space transformation for local explanations,
PRL(149), 2021, pp. 67-74.
Elsevier DOI 2108
eXplainable artificial intelligence, Interpretable machine learning, Artificial intelligence BibRef

Dazeley, R.[Richard], Vamplew, P.[Peter], Foale, C.[Cameron], Young, C.[Charlotte], Aryal, S.I.[Sun-Il], Cruz, F.[Francisco],
Levels of explainable artificial intelligence for human-aligned conversational explanations,
AI(299), 2021, pp. 103525.
Elsevier DOI 2108
Explainable Artificial Intelligence (XAI), Broad-XAI, Interpretable Machine Learning (IML), Human-Computer Interaction (HCI) BibRef

Yang, Z.B.[Ze-Bin], Zhang, A.[Aijun], Sudjianto, A.[Agus],
GAMI-Net: An explainable neural network based on generalized additive models with structured interactions,
PR(120), 2021, pp. 108192.
Elsevier DOI 2109
Explainable neural network, Generalized additive model, Pairwise interaction, Interpretability constraints BibRef

Zhang, Q.S.[Quan-Shi], Wang, X.[Xin], Wu, Y.N.[Ying Nian], Zhou, H.L.[Hui-Lin], Zhu, S.C.[Song-Chun],
Interpretable CNNs for Object Classification,
PAMI(43), No. 10, October 2021, pp. 3416-3431.
IEEE DOI 2109
Visualization, Semantics, Neural networks, Task analysis, Feature extraction, Annotations, Benchmark testing, interpretable deep learning BibRef

Ivanovs, M.[Maksims], Kadikis, R.[Roberts], Ozols, K.[Kaspars],
Perturbation-based methods for explaining deep neural networks: A survey,
PRL(150), 2021, pp. 228-234.
Elsevier DOI 2109
Survey, Explainable Networks. Deep learning, Explainable artificial intelligence, Perturbation-based methods BibRef

Zhu, S.[Sijie], Yang, T.[Taojiannan], Chen, C.[Chen],
Visual Explanation for Deep Metric Learning,
IP(30), 2021, pp. 7593-7607.
IEEE DOI 2109
Measurement, Visualization, Mouth, Image retrieval, Computational modeling, Perturbation methods, activation decomposition BibRef

Cui, Y.[Yunbo], Du, Y.T.[You-Tian], Wang, X.[Xue], Wang, H.[Hang], Su, C.[Chang],
Leveraging attention-based visual clue extraction for image classification,
IET-IPR(15), No. 12, 2021, pp. 2937-2947.
DOI Link 2109
What features are really used. BibRef

Dombrowski, A.K.[Ann-Kathrin], Anders, C.J.[Christopher J.], Müller, K.R.[Klaus-Robert], Kessel, P.[Pan],
Towards robust explanations for deep neural networks,
PR(121), 2022, pp. 108194.
Elsevier DOI 2109
Explanation method, Saliency map, Adversarial attacks, Manipulation, Neural networks, BibRef

Zhang, Q.S.[Quan-Shi], Wang, X.[Xin], Cao, R.M.[Rui-Ming], Wu, Y.N.[Ying Nian], Shi, F.[Feng], Zhu, S.C.[Song-Chun],
Extraction of an Explanatory Graph to Interpret a CNN,
PAMI(43), No. 11, November 2021, pp. 3863-3877.
IEEE DOI 2110
Feature extraction, Visualization, Neural networks, Semantics, Annotations, Task analysis, Training, interpretable deep learning BibRef

Losch, M.M.[Max Maria], Fritz, M.[Mario], Schiele, B.[Bernt],
Semantic Bottlenecks: Quantifying and Improving Inspectability of Deep Representations,
IJCV(129), No. 11, November 2021, pp. 3136-3153.
Springer DOI 2110
BibRef
Earlier: GCPR20(15-29).
Springer DOI 2110
BibRef

Kook, L.[Lucas], Herzog, L.[Lisa], Hothorn, T.[Torsten], Dürr, O.[Oliver], Sick, B.[Beate],
Deep and interpretable regression models for ordinal outcomes,
PR(122), 2022, pp. 108263.
Elsevier DOI 2112
Deep learning, Interpretability, Distributional regression, Ordinal regression, Transformation models BibRef

Kim, J.[Junho], Kim, S.[Seongyeop], Kim, S.T.[Seong Tae], Ro, Y.M.[Yong Man],
Robust Perturbation for Visual Explanation: Cross-Checking Mask Optimization to Avoid Class Distortion,
IP(31), 2022, pp. 301-313.
IEEE DOI 2112
Distortion, Perturbation methods, Visualization, Optimization, Cats, Bicycles, Automobiles, Visual explanation, attribution map, mask perturbation BibRef

Sokolovska, N.[Nataliya], Behbahani, Y.M.[Yasser Mohseni],
Vanishing boosted weights: A consistent algorithm to learn interpretable rules,
PRL(152), 2021, pp. 63-69.
Elsevier DOI 2112
Machine learning, Fine-tuning procedure, Interpretable sparse models, Decision stumps BibRef

Shin, S.[Sunguk], Kim, Y.J.[Young-Joon], Yoon, J.W.[Ji Won],
A new approach to training more interpretable model with additional segmentation,
PRL(152), 2021, pp. 188-194.
Elsevier DOI 2112
Classification model, Convolutional neural networks, Interpretable machine learning BibRef

Narwaria, M.[Manish],
Does explainable machine learning uncover the black box in vision applications?,
IVC(118), 2022, pp. 104353.
Elsevier DOI 2202
Explainable machine learning, Deep learning, Vision, Signal processing BibRef

Yuan, H.[Hao], Cai, L.[Lei], Hu, X.[Xia], Wang, J.[Jie], Ji, S.W.[Shui-Wang],
Interpreting Image Classifiers by Generating Discrete Masks,
PAMI(44), No. 4, April 2022, pp. 2019-2030.
IEEE DOI 2203
Generators, Predictive models, Training, Computational modeling, Neurons, Convolutional neural networks, image classification, reinforcement learning BibRef

Ben Sahel, Y.[Yair], Bryan, J.P.[John P.], Cleary, B.[Brian], Farhi, S.L.[Samouil L.], Eldar, Y.C.[Yonina C.],
Deep Unrolled Recovery in Sparse Biological Imaging: Achieving fast, accurate results,
SPMag(39), No. 2, March 2022, pp. 45-57.
IEEE DOI 2203
Architectures that combine the interpretability of iterative algorithms with the performance of deep learning. Location awareness, Learning systems, Biological system modeling, Algorithm design and analysis, Biomedical imaging, Performance gain BibRef

Cheng, L.[Lin], Fang, P.F.[Peng-Fei], Liang, Y.J.[Yan-Jie], Zhang, L.[Liao], Shen, C.H.[Chun-Hua], Wang, H.Z.[Han-Zi],
TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency,
IP(31), 2022, pp. 2529-2540.
IEEE DOI 2204
Visualization, Semantics, Task analysis, Convolutional neural networks, Medical diagnostic imaging, CNN visualization BibRef

Fu, W.J.[Wei-Jie], Wang, M.[Meng], Du, M.N.[Meng-Nan], Liu, N.H.[Ning-Hao], Hao, S.J.[Shi-Jie], Hu, X.[Xia],
Differentiated Explanation of Deep Neural Networks With Skewed Distributions,
PAMI(44), No. 6, June 2022, pp. 2909-2922.
IEEE DOI 2205
Generators, Perturbation methods, Tuning, Neural networks, Convolution, Visualization, Training, Deep neural networks, differentiated saliency maps BibRef

Muddamsetty, S.M.[Satya M.], Jahromi, M.N.S.[Mohammad N.S.], Ciontos, A.E.[Andreea E.], Fenoy, L.M.[Laura M.], Moeslund, T.B.[Thomas B.],
Visual explanation of black-box model: Similarity Difference and Uniqueness (SIDU) method,
PR(127), 2022, pp. 108604.
Elsevier DOI 2205
Explainable AI (XAI), CNN, Adversarial attack, Eye-tracker BibRef

Zheng, T.Y.[Tian-You], Wang, Q.[Qiang], Shen, Y.[Yue], Ma, X.[Xiang], Lin, X.T.[Xiao-Tian],
High-resolution rectified gradient-based visual explanations for weakly supervised segmentation,
PR(129), 2022, pp. 108724.
Elsevier DOI 2206
BibRef

Cooper, J.[Jessica], Arandjelovic, O.[Ognjen], Harrison, D.J.[David J.],
Believe the HiPe: Hierarchical perturbation for fast, robust, and model-agnostic saliency mapping,
PR(129), 2022, pp. 108743.
Elsevier DOI 2206
XAI, AI safety, Saliency mapping, Deep learning explanation, Interpretability, Prediction attribution BibRef

Mochaourab, R.[Rami], Venkitaraman, A.[Arun], Samsten, I.[Isak], Papapetrou, P.[Panagiotis], Rojas, C.R.[Cristian R.],
Post Hoc Explainability for Time Series Classification: Toward a signal processing perspective,
SPMag(39), No. 4, July 2022, pp. 119-129.
IEEE DOI 2207
Tracking, Solid modeling, Time series analysis, Neural networks, Speech recognition, Transforms, Signal processing BibRef

Ho, T.K.[Tin Kam], Luo, Y.F.[Yen-Fu], Guido, R.C.[Rodrigo Capobianco],
Explainability of Methods for Critical Information Extraction From Clinical Documents: A survey of representative works,
SPMag(39), No. 4, July 2022, pp. 96-106.
IEEE DOI 2207
Vocabulary, Symbols, Natural language processing, Cognition, Real-time systems, Data mining, Artificial intelligence BibRef

Letzgus, S.[Simon], Wagner, P.[Patrick], Lederer, J.[Jonas], Samek, W.[Wojciech], Müller, K.R.[Klaus-Robert], Montavon, G.[Grégoire],
Toward Explainable Artificial Intelligence for Regression Models: A methodological perspective,
SPMag(39), No. 4, July 2022, pp. 40-58.
IEEE DOI 2207
Deep learning, Neural networks, Predictive models, Medical diagnosis, Task analysis BibRef

AlRegib, G.[Ghassan], Prabhushankar, M.[Mohit],
Explanatory Paradigms in Neural Networks: Towards relevant and contextual explanations,
SPMag(39), No. 4, July 2022, pp. 59-72.
IEEE DOI 2207
Correlation, Codes, Neural networks, Decision making, Probabilistic logic, Cognition, Reproducibility of results, Context awareness BibRef

Nielsen, I.E.[Ian E.], Dera, D.[Dimah], Rasool, G.[Ghulam], Ramachandran, R.P.[Ravi P.], Bouaynaya, N.C.[Nidhal Carla],
Robust Explainability: A tutorial on gradient-based attribution methods for deep neural networks,
SPMag(39), No. 4, July 2022, pp. 73-84.
IEEE DOI 2207
Deep learning, Neural networks, Tutorials, Predictive models, Reproducibility of results BibRef

Das, P.[Payel], Varshney, L.R.[Lav R.],
Explaining Artificial Intelligence Generation and Creativity: Human interpretability for novel ideas and artifacts,
SPMag(39), No. 4, July 2022, pp. 85-95.
IEEE DOI 2207
Training data, Signal processing algorithms, Intellectual property, Gaussian distribution, Creativity BibRef

Shi, R.[Rui], Li, T.X.[Tian-Xing], Yamaguchi, Y.S.[Yasu-Shi],
Output-targeted baseline for neuron attribution calculation,
IVC(124), 2022, pp. 104516.
Elsevier DOI 2208
Convolutional neural networks, Network interpretability, Attribution methods, Shapley values BibRef

Huang, Z.L.[Zhong-Ling], Yao, X.[Xiwen], Liu, Y.[Ying], Dumitru, C.O.[Corneliu Octavian], Datcu, M.[Mihai], Han, J.W.[Jun-Wei],
Physically explainable CNN for SAR image classification,
PandRS(190), 2022, pp. 25-37.
Elsevier DOI 2208
Explainable deep learning, Physical model, SAR image classification, Prior knowledge BibRef

Guo, X.P.[Xian-Peng], Hou, B.[Biao], Wu, Z.T.[Zi-Tong], Ren, B.[Bo], Wang, S.[Shuang], Jiao, L.C.[Li-Cheng],
Prob-POS: A Framework for Improving Visual Explanations from Convolutional Neural Networks for Remote Sensing Image Classification,
RS(14), No. 13, 2022, pp. xx-yy.
DOI Link 2208
BibRef

Temenos, A.[Anastasios], Tzortzis, I.N.[Ioannis N.], Kaselimi, M.[Maria], Rallis, I.[Ioannis], Doulamis, A.[Anastasios], Doulamis, N.[Nikolaos],
Novel Insights in Spatial Epidemiology Utilizing Explainable AI (XAI) and Remote Sensing,
RS(14), No. 13, 2022, pp. xx-yy.
DOI Link 2208
BibRef

Corbière, C.[Charles], Thome, N.[Nicolas], Saporta, A.[Antoine], Vu, T.H.[Tuan-Hung], Cord, M.[Matthieu], Pérez, P.[Patrick],
Confidence Estimation via Auxiliary Models,
PAMI(44), No. 10, October 2022, pp. 6043-6055.
IEEE DOI 2209
Task analysis, Estimation, Neural networks, Semantics, Predictive models, Uncertainty, Training, Confidence estimation, semantic image segmentation BibRef

Ye, F.[Fei], Bors, A.G.[Adrian G.],
Lifelong Teacher-Student Network Learning,
PAMI(44), No. 10, October 2022, pp. 6280-6296.
IEEE DOI 2209
BibRef
Earlier:
Learning Latent Representations Across Multiple Data Domains Using Lifelong VaeGAN,
ECCV20(XX:777-795).
Springer DOI 2011
Task analysis, Training, Data models, Generative adversarial networks, Probabilistic logic, teacher-student framework BibRef

Dietterich, T.G.[Thomas G.], Guyer, A.[Alex],
The familiarity hypothesis: Explaining the behavior of deep open set methods,
PR(132), 2022, pp. 108931.
Elsevier DOI 2209
Anomaly detection, Open set learning, Object recognition, Novel category detection, Representation learning, Deep learning BibRef

Jung, H.G.[Hong-Gyu], Kang, S.H.[Sin-Han], Kim, H.D.[Hee-Dong], Won, D.O.[Dong-Ok], Lee, S.W.[Seong-Whan],
Counterfactual explanation based on gradual construction for deep networks,
PR(132), 2022, pp. 108958.
Elsevier DOI 2209
Explainable AI, Counterfactual explanation, Interpretability, Model-agnostics, Generative model BibRef

Schnake, T.[Thomas], Eberle, O.[Oliver], Lederer, J.[Jonas], Nakajima, S.[Shinichi], Schütt, K.T.[Kristof T.], Müller, K.R.[Klaus-Robert], Montavon, G.[Grégoire],
Higher-Order Explanations of Graph Neural Networks via Relevant Walks,
PAMI(44), No. 11, November 2022, pp. 7581-7596.
IEEE DOI 2210
Graph neural networks, Neural networks, Predictive models, Optimization, Taylor series, Feature extraction, Adaptation models, explainable machine learning BibRef

Giryes, R.[Raja],
A Function Space Analysis of Finite Neural Networks With Insights From Sampling Theory,
PAMI(45), No. 1, January 2023, pp. 27-37.
IEEE DOI 2212
Neural networks, Training data, Discrete Fourier transforms, Interpolation, Training, Transforms, Splines (mathematics), band-limited mappings BibRef

Fu, Y.W.[Yan-Wei], Liu, C.[Chen], Li, D.H.[Dong-Hao], Zhong, Z.Y.[Zu-Yuan], Sun, X.W.[Xin-Wei], Zeng, J.S.[Jin-Shan], Yao, Y.[Yuan],
Exploring Structural Sparsity of Deep Networks Via Inverse Scale Spaces,
PAMI(45), No. 2, February 2023, pp. 1749-1765.
IEEE DOI 2301
Training, Computational modeling, Neural networks, Deep learning, Convergence, Mirrors, Couplings, Early stopping, growing network, structural sparsity BibRef

Wang, X.[Xiang], Wu, Y.X.[Ying-Xin], Zhang, A.[An], Feng, F.[Fuli], He, X.N.[Xiang-Nan], Chua, T.S.[Tat-Seng],
Reinforced Causal Explainer for Graph Neural Networks,
PAMI(45), No. 2, February 2023, pp. 2297-2309.
IEEE DOI 2301
Predictive models, Task analysis, Computational modeling, Analytical models, Visualization, Representation learning, cause-effect BibRef

Gautam, S.[Srishti], Höhne, M.M.C.[Marina M.C.], Hansen, S.[Stine], Jenssen, R.[Robert], Kampffmeyer, M.[Michael],
This looks More Like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation,
PR(136), 2023, pp. 109172.
Elsevier DOI 2301
Self-explaining models, Explainable AI, Deep learning, Spurious Correlation Detection BibRef

Sousa, E.V.[Eduardo Vera], Vasconcelos, C.N.[Cristina Nader], Fernandes, L.A.F.[Leandro A.F.],
An analysis of ConformalLayers' robustness to corruptions in natural images,
PRL(166), 2023, pp. 190-197.
Elsevier DOI 2302
BibRef

Alfarra, M.[Motasem], Bibi, A.[Adel], Hammoud, H.[Hasan], Gaafar, M.[Mohamed], Ghanem, B.[Bernard],
On the Decision Boundaries of Neural Networks: A Tropical Geometry Perspective,
PAMI(45), No. 4, April 2023, pp. 5027-5037.
IEEE DOI 2303
Geometry, Neural networks, Standards, Optimization, Task analysis, Generators, Complexity theory, Adversarial attacks, tropical geometry BibRef

Li, J.[Jing], Zhang, D.B.[Dong-Bo], Meng, B.[Bumin], Li, Y.X.[Yong-Xing], Luo, L.[Lufeng],
FIMF score-CAM: Fast score-CAM based on local multi-feature integration for visual interpretation of CNNs,
IET-IPR(17), No. 3, 2023, pp. 761-772.
DOI Link 2303
class activation mapping, deep network, model interpretation BibRef

Yuan, H.[Hao], Yu, H.Y.[Hai-Yang], Gui, S.[Shurui], Ji, S.W.[Shui-Wang],
Explainability in Graph Neural Networks: A Taxonomic Survey,
PAMI(45), No. 5, May 2023, pp. 5782-5799.
IEEE DOI 2304
Predictive models, Task analysis, Taxonomy, Biological system modeling, Graph neural networks, Data models, survey BibRef

Cheng, M.M.[Ming-Ming], Jiang, P.T.[Peng-Tao], Han, L.H.[Ling-Hao], Wang, L.[Liang], Torr, P.H.S.[Philip H.S.],
Deeply Explain CNN Via Hierarchical Decomposition,
IJCV(131), No. 5, May 2023, pp. 1091-1105.
Springer DOI 2305
BibRef

Wickstrøm, K.K.[Kristoffer K.], Trosten, D.J.[Daniel J.], Løkse, S.[Sigurd], Boubekki, A.[Ahcène], Mikalsen, K.ø.[Karl øyvind], Kampffmeyer, M.C.[Michael C.], Jenssen, R.[Robert],
RELAX: Representation Learning Explainability,
IJCV(131), No. 6, June 2023, pp. 1584-1610.
Springer DOI 2305
BibRef

Qiao, S.S.[Shi-Shi], Wang, R.P.[Rui-Ping], Shan, S.G.[Shi-Guang], Chen, X.L.[Xi-Lin],
Hierarchical disentangling network for object representation learning,
PR(140), 2023, pp. 109539.
Elsevier DOI 2305
Object understanding, Hierarchical learning, Representation disentanglement, Network interpretability BibRef

Böhle, M.[Moritz], Fritz, M.[Mario], Schiele, B.[Bernt],
Optimising for Interpretability: Convolutional Dynamic Alignment Networks,
PAMI(45), No. 6, June 2023, pp. 7625-7638.
IEEE DOI 2305
Computational modeling, Neural networks, Predictive models, Informatics, Task analysis, Transforms, Ear, explainability in deep learning BibRef

Lin, C.S.[Ci-Siang], Wang, Y.C.A.F.[Yu-Chiang Frank],
Describe, Spot and Explain: Interpretable Representation Learning for Discriminative Visual Reasoning,
IP(32), 2023, pp. 2481-2492.
IEEE DOI 2305
Prototypes, Visualization, Transformers, Task analysis, Heating systems, Training, Annotations, Interpretable prototypes, deep learning BibRef

Nguyen, K.P.[Kevin P.], Treacher, A.H.[Alex H.], Montillo, A.A.[Albert A.],
Adversarially-Regularized Mixed Effects Deep Learning (ARMED) Models Improve Interpretability, Performance, and Generalization on Clustered (non-iid) Data,
PAMI(45), No. 7, July 2023, pp. 8081-8093.
IEEE DOI 2306
Data models, Biological system modeling, Deep learning, Adaptation models, Training, Predictive models, Bayes methods, clinical data BibRef

Hu, K.W.[Kai-Wen], Gao, J.[Jing], Mao, F.Y.[Fang-Yuan], Song, X.H.[Xin-Hui], Cheng, L.C.[Le-Chao], Feng, Z.L.[Zun-Lei], Song, M.L.[Ming-Li],
Disassembling Convolutional Segmentation Network,
IJCV(131), No. 7, July 2023, pp. 1741-1760.
Springer DOI 2307
BibRef

Wang, P.[Pei], Vasconcelos, N.M.[Nuno M.],
A Generalized Explanation Framework for Visualization of Deep Learning Model Predictions,
PAMI(45), No. 8, August 2023, pp. 9265-9283.
IEEE DOI 2307
Birds, Visualization, Deep learning, Task analysis, Protocols, Perturbation methods, Attribution, confidence scores, explainable AI BibRef

Xu, J.J.[Jian-Jin], Zhang, Z.X.[Zhao-Xiang], Hu, X.L.[Xiao-Lin],
Extracting Semantic Knowledge From GANs With Unsupervised Learning,
PAMI(45), No. 8, August 2023, pp. 9654-9668.
IEEE DOI 2307
Semantics, Semantic segmentation, Generative adversarial networks, Clustering algorithms, unsupervised learning BibRef

Iida, T.[Tsumugi], Komatsu, T.[Takumi], Kaneda, K.[Kanta], Hirakawa, T.[Tsubasa], Yamashita, T.[Takayoshi], Fujiyoshi, H.[Hironobu], Sugiura, K.[Komei],
Visual Explanation Generation Based on Lambda Attention Branch Networks,
ACCV22(II:475-490).
Springer DOI 2307
BibRef

Li, X.[Xin], Lei, H.J.[Hao-Jie], Zhang, L.[Li], Wang, M.Z.[Ming-Zhong],
Differentiable Logic Policy for Interpretable Deep Reinforcement Learning: A Study From an Optimization Perspective,
PAMI(45), No. 10, October 2023, pp. 11654-11667.
IEEE DOI 2310
BibRef

Wargnier-Dauchelle, V.[Valentine], Grenier, T.[Thomas], Durand-Dubief, F.[Françoise], Cotton, F.[François], Sdika, M.[Michaël],
A Weakly Supervised Gradient Attribution Constraint for Interpretable Classification and Anomaly Detection,
MedImg(42), No. 11, November 2023, pp. 3336-3347.
IEEE DOI 2311
BibRef

Asnani, V.[Vishal], Yin, X.[Xi], Hassner, T.[Tal], Liu, X.M.[Xiao-Ming],
Reverse Engineering of Generative Models: Inferring Model Hyperparameters From Generated Images,
PAMI(45), No. 12, December 2023, pp. 15477-15493.
IEEE DOI 2311
BibRef

Shi, W.[Wei], Zhang, W.T.[Wen-Tao], Zheng, W.S.[Wei-Shi], Wang, R.X.[Rui-Xuan],
PAMI: Partition Input and Aggregate Outputs for Model Interpretation,
PR(145), 2024, pp. 109898.
Elsevier DOI Code:
WWW Link. 2311
Interpretation, Visualization, Post-hoc BibRef

Echeberria-Barrio, X.[Xabier], Gil-Lerchundi, A.[Amaia], Mendialdua, I.[Iñigo], Orduna-Urrutia, R.[Raul],
Topological safeguard for evasion attack interpreting the neural networks' behavior,
PR(147), 2024, pp. 110130.
Elsevier DOI 2312
Artificial neural network interpretability, cybersecurity, countermeasure BibRef

Brocki, L.[Lennart], Chung, N.C.[Neo Christopher],
Feature perturbation augmentation for reliable evaluation of importance estimators in neural networks,
PRL(176), 2023, pp. 131-139.
Elsevier DOI Code:
WWW Link. 2312
Deep neural network, Artificial intelligence, Interpretability, Explainability, Fidelity, Importance estimator, Saliency map, Feature perturbation BibRef

Joukovsky, B.[Boris], Eldar, Y.C.[Yonina C.], Deligiannis, N.[Nikos],
Interpretable Neural Networks for Video Separation: Deep Unfolding RPCA With Foreground Masking,
IP(33), 2024, pp. 108-122.
IEEE DOI 2312
BibRef

Dan, T.T.[Ting-Ting], Kim, M.[Minjeong], Kim, W.H.[Won Hwa], Wu, G.R.[Guo-Rong],
Developing Explainable Deep Model for Discovering Novel Control Mechanism of Neuro-Dynamics,
MedImg(43), No. 1, January 2024, pp. 427-438.
IEEE DOI 2401
BibRef

Apicella, A.[Andrea], Isgrò, F.[Francesco], Prevete, R.[Roberto],
Hidden classification layers: Enhancing linear separability between classes in neural networks layers,
PRL(177), 2024, pp. 69-74.
Elsevier DOI 2401
Neural networks, Hidden layers, Hidden representations, Linearly separable BibRef

Apicella, A.[Andrea], Giugliano, S.[Salvatore], Isgrò, F.[Francesco], Prevete, R.[Roberto],
A General Approach to Compute the Relevance of Middle-level Input Features,
EDL-AI20(189-203).
Springer DOI 2103
BibRef

Li, Y.[Yanshan], Liang, H.[Huajie], Yu, R.[Rui],
BI-CAM: Generating Explanations for Deep Neural Networks Using Bipolar Information,
MultMed(26), 2024, pp. 568-580.
IEEE DOI 2402
Neural networks, Feature extraction, Convolutional neural networks, Mutual information, Visualization, point-wise mutual information (PMI) BibRef

Li, Y.S.[Yan-Shan], Liang, H.J.[Hua-Jie], Zheng, H.F.[Hong-Fang], Yu, R.[Rui],
CR-CAM: Generating explanations for deep neural networks by contrasting and ranking features,
PR(149), 2024, pp. 110251.
Elsevier DOI 2403
Class Activation Mapping (CAM), Manifold space, Interpretation BibRef

Tang, J.C.[Jia-Cheng], Kang, Q.[Qi], Zhou, M.C.[Meng-Chu], Yin, H.[Hao], Yao, S.[Siya],
MemeNet: Toward a Reliable Local Projection for Image Recognition via Semantic Featurization,
IP(33), 2024, pp. 1670-1682.
IEEE DOI 2403
Feature extraction, Reliability, Task analysis, Convolutional neural networks, Semantics, Image recognition, trustworthy machine learning BibRef

Suresh, S., Das, B., Abrol, V., Dutta-Roy, S.,
On characterizing the evolution of embedding space of neural networks using algebraic topology,
PRL(179), 2024, pp. 165-171.
Elsevier DOI 2403
Topology, Deep learning, Transfer learning BibRef

Peng, Y.T.[Yi-Tao], He, L.H.[Liang-Hua], Hu, D.[Die], Liu, Y.H.[Yi-Hang], Yang, L.Z.[Long-Zhen], Shang, S.H.[Shao-Hua],
Hierarchical Dynamic Masks for Visual Explanation of Neural Networks,
MultMed(26), 2024, pp. 5311-5325.
IEEE DOI 2404
Neural networks, Decision making, Visualization, Reliability, Predictive models, Location awareness, model-agnostic BibRef

Dombrowski, A.K.[Ann-Kathrin], Gerken, J.E.[Jan E.], Muller, K.R.[Klaus-Robert], Kessel, P.[Pan],
Diffeomorphic Counterfactuals With Generative Models,
PAMI(46), No. 5, May 2024, pp. 3257-3274.
IEEE DOI 2404
Explain classification decisions of neural networks in a human interpretable way. Manifolds, Geometry, Computational modeling, Semantics, Data models, Artificial intelligence, Task analysis, generative models BibRef

Wang, J.Q.[Jia-Qi], Liu, H.F.[Hua-Feng], Jing, L.P.[Li-Ping],
Transparent Embedding Space for Interpretable Image Recognition,
CirSysVideo(34), No. 5, May 2024, pp. 3204-3219.
IEEE DOI 2405
Transformers, Prototypes, Semantics, Visualization, Image recognition, Cognition, Task analysis, Explainable AI BibRef

Wang, J.Q.[Jia-Qi], Liu, H.F.[Hua-Feng], Wang, X.Y.[Xin-Yue], Jing, L.P.[Li-Ping],
Interpretable Image Recognition by Constructing Transparent Embedding Space,
ICCV21(875-884)
IEEE DOI 2203
Manifolds, Bridges, Image recognition, Codes, Cognitive processes, Neural networks, Explainable AI, Fairness, accountability, Visual reasoning and logical representation BibRef

Böhle, M.[Moritz], Singh, N.[Navdeeppal], Fritz, M.[Mario], Schiele, B.[Bernt],
B-Cos Alignment for Inherently Interpretable CNNs and Vision Transformers,
PAMI(46), No. 6, June 2024, pp. 4504-4518.
IEEE DOI 2405
Computational modeling, Task analysis, Optimization, Visualization, Transformers, Training, Measurement, Convolutional neural networks, XAI BibRef

Kim, S.[Seonggyeom], Chae, D.K.[Dong-Kyu],
What Does a Model Really Look at?: Extracting Model-Oriented Concepts for Explaining Deep Neural Networks,
PAMI(46), No. 7, July 2024, pp. 4612-4624.
IEEE DOI 2406
Annotations, Image segmentation, Computational modeling, Predictive models, Convolutional neural networks, Crops, explainable AI BibRef

Luo, J.Q.[Jia-Qi], Xu, S.X.[Shi-Xin],
NCART: Neural Classification and Regression Tree for tabular data,
PR(154), 2024, pp. 110578.
Elsevier DOI Code:
WWW Link. 2406
Tabular data, Neural networks, Interpretability, Classification and Regression Tree BibRef

Islam, M.T.[Md Tauhidul], Xing, L.[Lei],
Deciphering the Feature Representation of Deep Neural Networks for High-Performance AI,
PAMI(46), No. 8, August 2024, pp. 5273-5287.
IEEE DOI 2407
Kernel, Feature extraction, Measurement, Euclidean distance, Principal component analysis, Transformers, X-ray imaging, interpretability BibRef

Rodrigues, C.M.[Caroline Mazini], Boutry, N.[Nicolas], Najman, L.[Laurent],
Transforming gradient-based techniques into interpretable methods,
PRL(184), 2024, pp. 66-73.
Elsevier DOI 2408
Explainable artificial intelligence, Convolutional Neural Network, Gradient-based, Interpretability BibRef

Remusati, H.[Héloïse], Caillec, J.M.L.[Jean-Marc Le], Schneider, J.Y.[Jean-Yves], Petit-Frère, J.[Jacques], Merlet, T.[Thomas],
Generative Adversarial Networks for SAR Automatic Target Recognition and Classification Models Enhanced Explainability: Perspectives and Challenges,
RS(16), No. 14, 2024, pp. 2569.
DOI Link 2408
BibRef

Alami, A.[Amine], Boumhidi, J.[Jaouad], Chakir, L.[Loqman],
Explainability in CNN based Deep Learning models for medical image classification,
ISCV24(1-6)
IEEE DOI 2408
Deep learning, Uncertainty, Pneumonia, Explainable AI, Computational modeling, Decision making, Feature extraction, Grad-CAM. BibRef

Valle, M.E.[Marcos Eduardo],
Understanding Vector-Valued Neural Networks and Their Relationship With Real and Hypercomplex-Valued Neural Networks: Incorporating intercorrelation between features into neural networks,
SPMag(41), No. 3, May 2024, pp. 49-58.
IEEE DOI 2408
[Hypercomplex Signal and Image Processing] Training data, Deep learning, Image processing, Neural networks, Parallel processing, Vectors, Hypercomplex, Multidimensional signal processing BibRef

Islam, M.A.[Md Amirul], Kowal, M.[Matthew], Jia, S.[Sen], Derpanis, K.G.[Konstantinos G.], Bruce, N.D.B.[Neil D. B.],
Position, Padding and Predictions: A Deeper Look at Position Information in CNNs,
IJCV(132), No. 1, January 2024, pp. 3889-3910.
Springer DOI 2409
BibRef
Earlier:
Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs,
ICCV21(773-781)
IEEE DOI 2203
Tensors, Semantics, Neurons, Linear programming, Encoding, Object recognition, Explainable AI, Adversarial learning BibRef

Shin, Y.M.[Yong-Min], Kim, S.W.[Sun-Woo], Shin, W.Y.[Won-Yong],
PAGE: Prototype-Based Model-Level Explanations for Graph Neural Networks,
PAMI(46), No. 10, October 2024, pp. 6559-6576.
IEEE DOI 2409
Prototypes, Graph neural networks, Computational modeling, Predictive models, Training, Mathematical models, prototype graph BibRef

Zhuo, Y.[Yue], Ge, Z.Q.[Zhi-Qiang],
IG2: Integrated Gradient on Iterative Gradient Path for Feature Attribution,
PAMI(46), No. 11, November 2024, pp. 7173-7190.
IEEE DOI 2410
Predictive models, Noise, Semiconductor device modeling, Perturbation methods, Explainable AI, Vectors, integrated gradient BibRef
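As background for attribution entries like the one above, here is a minimal NumPy sketch of plain straight-path integrated gradients, the baseline technique that IG2 builds on with iterative gradient paths. The toy gradient function is an illustrative assumption, not any paper's model:

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    """Approximate integrated gradients along the straight-line path
    from `baseline` to `x`: average the gradient at points on the path,
    then scale by the input difference (midpoint quadrature rule)."""
    alphas = (np.arange(steps) + 0.5) / steps           # midpoints in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)  # points on the path
    avg_grad = np.mean([f_grad(p) for p in path], axis=0)
    return (x - baseline) * avg_grad                    # per-feature attribution

# Toy model f(x) = x0^2 + 3*x1, so its gradient is [2*x0, 3].
grad = lambda p: np.array([2 * p[0], 3.0])
attr = integrated_gradients(grad, np.array([1.0, 2.0]), np.zeros(2))
# Completeness: attributions sum to f(x) - f(baseline) = 1 + 6 = 7.
```

The completeness check at the end is the usual sanity test for any integrated-gradients implementation.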

Chormai, P.[Pattarawat], Herrmann, J.[Jan], Müller, K.R.[Klaus-Robert], Montavon, G.[Grégoire],
Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces,
PAMI(46), No. 11, November 2024, pp. 7283-7299.
IEEE DOI 2410
Predictive models, Feature extraction, Explainable AI, Neural networks, Analytical models, Visualization, Standards, subspace analysis BibRef

Ahmad, O.[Ola], Béreux, N.[Nicolas], Baret, L.[Loïc], Hashemi, V.[Vahid], Lecue, F.[Freddy],
Causal Analysis for Robust Interpretability of Neural Networks,
WACV24(4673-4682)
IEEE DOI 2404
Training, Phase measurement, Computational modeling, Neural networks, Noise, Predictive models, Maintenance engineering, and algorithms BibRef

Akpudo, U.E.[Ugochukwu Ejike], Yu, X.H.[Xiao-Han], Zhou, J.[Jun], Gao, Y.S.[Yong-Sheng],
NCAF: NTD-based Concept Activation Factorisation Framework for CNN Explainability,
IVCNZ23(1-6)
IEEE DOI 2403
Visualization, Closed box, Dogs, Convolutional neural networks, Task analysis, Image reconstruction, Diseases, Explainability, non-negative Tucker decomposition BibRef

Kuttichira, D.P.[Deepthi Praveenlal], Azam, B.[Basim], Verma, B.[Brijesh], Rahman, A.[Ashfaqur], Wang, L.[Lipo], Sattar, A.[Abdul],
Neural Network Feature Explanation Using Neuron Activation Rate Based Bipartite Graph,
IVCNZ23(1-6)
IEEE DOI 2403
Computational modeling, Neurons, Computer architecture, Machine learning, Predictive models, Feature extraction, feature explanation BibRef

Jeon, G.Y.[Gi-Young], Jeong, H.[Haedong], Choi, J.[Jaesik],
Beyond Single Path Integrated Gradients for Reliable Input Attribution via Randomized Path Sampling,
ICCV23(2052-2061)
IEEE DOI 2401
Deep networks. BibRef

Huang, W.[Wei], Zhao, X.Y.[Xing-Yu], Jin, G.[Gaojie], Huang, X.W.[Xiao-Wei],
SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability,
ICCV23(1988-1998)
IEEE DOI 2401
BibRef

Dravid, A.[Amil], Gandelsman, Y.[Yossi], Efros, A.A.[Alexei A.], Shocher, A.[Assaf],
Rosetta Neurons: Mining the Common Units in a Model Zoo,
ICCV23(1934-1943)
IEEE DOI 2401
Common features across different networks. BibRef

Srivastava, D.[Divyansh], Oikarinen, T.[Tuomas], Weng, T.W.[Tsui-Wei],
Corrupting Neuron Explanations of Deep Visual Features,
ICCV23(1877-1886)
IEEE DOI 2401
BibRef

Wang, X.[Xue], Wang, Z.B.[Zhi-Bo], Weng, H.Q.[Hai-Qin], Guo, H.C.[Heng-Chang], Zhang, Z.F.[Zhi-Fei], Jin, L.[Lu], Wei, T.[Tao], Ren, K.[Kui],
Counterfactual-based Saliency Map: Towards Visual Contrastive Explanations for Neural Networks,
ICCV23(2042-2051)
IEEE DOI 2401
BibRef

Barkan, O.[Oren], Elisha, Y.[Yehonatan], Asher, Y.[Yuval], Eshel, A.[Amit], Koenigstein, N.[Noam],
Visual Explanations via Iterated Integrated Attributions,
ICCV23(2073-2084)
IEEE DOI 2401
BibRef

Wang, S.X.[Shun-Xin], Veldhuis, R.[Raymond], Brune, C.[Christoph], Strisciuglio, N.[Nicola],
What do neural networks learn in image classification? A frequency shortcut perspective,
ICCV23(1433-1442)
IEEE DOI Code:
WWW Link. 2401
BibRef

Soelistyo, C.J.[Christopher J.], Charras, G.[Guillaume], Lowe, A.R.[Alan R.],
Virtual perturbations to assess explainability of deep-learning based cell fate predictors,
BioIm23(3973-3982)
IEEE DOI 2401
BibRef

Zhang, J.W.[Jing-Wei], Farnia, F.[Farzan],
MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope,
ICCV23(2021-2030)
IEEE DOI 2401
BibRef

Huang, Q.[Qihan], Xue, M.Q.[Meng-Qi], Huang, W.Q.[Wen-Qi], Zhang, H.F.[Hao-Fei], Song, J.[Jie], Jing, Y.C.[Yong-Cheng], Song, M.L.[Ming-Li],
Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks,
ICCV23(2011-2020)
IEEE DOI Code:
WWW Link. 2401
BibRef

Sicre, R., Zhang, H., Dejasmin, J., Daaloul, C., Ayache, S., Artières, T.,
DP-Net: Learning Discriminative Parts for Image Recognition,
ICIP23(1230-1234)
IEEE DOI 2312
BibRef

Meynen, T.[Toon], Behzadi-Khormouji, H.[Hamed], Oramas, J.[José],
Interpreting Convolutional Neural Networks by Explaining Their Predictions,
ICIP23(1685-1689)
IEEE DOI 2312
BibRef

Wang, F.[Fan], Kong, A.W.K.[Adams Wai-Kin],
A Practical Upper Bound for the Worst-Case Attribution Deviations,
CVPR23(24616-24625)
IEEE DOI 2309
BibRef

Ravuri, S.[Suman], Rey, M.[Mélanie], Mohamed, S.[Shakir], Deisenroth, M.P.[Marc Peter],
Understanding Deep Generative Models with Generalized Empirical Likelihoods,
CVPR23(24395-24405)
IEEE DOI 2309
BibRef

Bruintjes, R.J.[Robert-Jan], Motyka, T.[Tomasz], van Gemert, J.[Jan],
What Affects Learned Equivariance in Deep Image Recognition Models?,
L3D-IVU23(4839-4847)
IEEE DOI 2309
BibRef

Ji, Y.[Ying], Wang, Y.[Yu], Kato, J.[Jien],
Spatial-temporal Concept based Explanation of 3D ConvNets,
CVPR23(15444-15453)
IEEE DOI 2309
BibRef

Binder, A.[Alexander], Weber, L.[Leander], Lapuschkin, S.[Sebastian], Montavon, G.[Grégoire], Müller, K.R.[Klaus-Robert], Samek, W.[Wojciech],
Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations,
CVPR23(16143-16152)
IEEE DOI 2309
BibRef

Wang, B.[Bowen], Li, L.Z.[Liang-Zhi], Nakashima, Y.[Yuta], Nagahara, H.[Hajime],
Learning Bottleneck Concepts in Image Classification,
CVPR23(10962-10971)
IEEE DOI 2309
WWW Link. BibRef

Attaiki, S.[Souhaib], Ovsjanikov, M.[Maks],
Understanding and Improving Features Learned in Deep Functional Maps,
CVPR23(1316-1326)
IEEE DOI 2309
BibRef

Sarkar, S.[Soumyendu], Babu, A.R.[Ashwin Ramesh], Mousavi, S.[Sajad], Ghorbanpour, S.[Sahand], Gundecha, V.[Vineet], Guillen, A.[Antonio], Luna, R.[Ricardo], Naug, A.[Avisek],
RL-CAM: Visual Explanations for Convolutional Networks using Reinforcement Learning,
SAIAD23(3861-3869)
IEEE DOI 2309
BibRef

Pahde, F.[Frederik], Yolcu, G.Ü.[Galip Ümit], Binder, A.[Alexander], Samek, W.[Wojciech], Lapuschkin, S.[Sebastian],
Optimizing Explanations by Network Canonization and Hyperparameter Search,
SAIAD23(3819-3828)
IEEE DOI 2309
BibRef

Jeanneret, G.[Guillaume], Simon, L.[Loïc], Jurie, F.[Frédéric],
Diffusion Models for Counterfactual Explanations,
ACCV22(VII:219-237).
Springer DOI 2307
BibRef

Tayyub, J.[Jawad], Sarmad, M.[Muhammad], Schönborn, N.[Nicolas],
Explaining Deep Neural Networks for Point Clouds Using Gradient-based Visualisations,
ACCV22(II:155-170).
Springer DOI 2307
BibRef

Li, C.[Chen], Jiang, J.Z.[Jin-Zhe], Zhang, X.[Xin], Zhang, T.H.[Tong-Huan], Zhao, Y.Q.[Ya-Qian], Jiang, D.D.[Dong-Dong], Li, R.G.[Ren-Gang],
Towards Interpreting Computer Vision Based on Transformation Invariant Optimization,
CiV22(371-382).
Springer DOI 2304
BibRef

Eckstein, N.[Nils], Bukhari, H.[Habib], Bates, A.S.[Alexander S.], Jefferis, G.S.X.E.[Gregory S. X. E.], Funke, J.[Jan],
Discriminative Attribution from Paired Images,
BioImage22(406-422).
Springer DOI 2304
Highlight the most discriminative features between classes. BibRef

Gkartzonika, I.[Ioanna], Gkalelis, N.[Nikolaos], Mezaris, V.[Vasileios],
Learning Visual Explanations for DCNN-based Image Classifiers Using an Attention Mechanism,
Scarce22(396-411).
Springer DOI 2304
BibRef

Gupta, A.[Ankit], Sintorn, I.M.[Ida-Maria],
Towards Better Guided Attention and Human Knowledge Insertion in Deep Convolutional Neural Networks,
BioImage22(437-453).
Springer DOI 2304
BibRef

Tan, H.X.[Han-Xiao],
Visualizing Global Explanations of Point Cloud DNNs,
WACV23(4730-4739)
IEEE DOI 2302
Point cloud compression, Measurement, Knowledge engineering, Visualization, Codes, Neurons, Algorithms: Explainable, fair, 3D computer vision BibRef

Behzadi-Khormouji, H.[Hamed], Oramas Mogrovejo, J.A.[José A.],
A Protocol for Evaluating Model Interpretation Methods from Visual Explanations,
WACV23(1421-1429)
IEEE DOI 2302
Heating systems, Measurement, Visualization, Protocols, Semantics, Algorithms: Explainable, fair, accountable, privacy-preserving, Visualization BibRef

Valois, P.H.V.[Pedro H. V.], Niinuma, K.[Koichiro], Fukui, K.[Kazuhiro],
Occlusion Sensitivity Analysis with Augmentation Subspace Perturbation in Deep Feature Space,
WACV24(4817-4826)
IEEE DOI 2404
Analytical models, Sensitivity analysis, Computational modeling, Perturbation methods, Neural networks, Predictive models, Visualization BibRef

Uchiyama, T.[Tomoki], Sogi, N.[Naoya], Niinuma, K.[Koichiro], Fukui, K.[Kazuhiro],
Visually explaining 3D-CNN predictions for video classification with an adaptive occlusion sensitivity analysis,
WACV23(1513-1522)
IEEE DOI 2302
Sensitivity analysis, Shape, Volume measurement, Decision making, Extraterrestrial measurements, Computational efficiency BibRef
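The occlusion-sensitivity entries above share one simple mechanism: slide an occluding patch over the input and record how much the model score drops. A minimal sketch, with a toy 4x4 image and scoring function assumed purely for illustration:

```python
import numpy as np

def occlusion_map(score_fn, image, patch=2, fill=0.0):
    """Slide a patch x patch occluder over a 2-D image; accumulate the
    score drop caused by each occlusion onto the covered pixels.
    Regions with large values are the ones the model relies on."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i:i + patch, j:j + patch] += base - score_fn(occluded)
    return heat

# Toy "model": the score is the mean of the top-left 2x2 corner,
# so only occlusions touching that corner change the score.
score = lambda im: im[:2, :2].mean()
heat = occlusion_map(score, np.ones((4, 4)))
```

On the toy example the heat concentrates on the top-left corner and is zero in the opposite corner, as expected.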

Given, N.A.[No Author],
Fractual Projection Forest: Fast and Explainable Point Cloud Classifier,
WACV23(4229-4238)
IEEE DOI 2302
Portable document format, Algorithms: 3D computer vision, Explainable, fair, accountable, privacy-preserving, ethical computer vision BibRef

Zhang, Y.Y.[Ying-Ying], Zhong, Q.Y.[Qiao-Yong], Xie, D.[Di], Pu, S.L.[Shi-Liang],
KRNet: Towards Efficient Knowledge Replay,
ICPR22(4772-4778)
IEEE DOI 2212
Training, Learning systems, Knowledge engineering, Deep learning, Codes, Data compression, Recording BibRef

Yang, P.[Peiyu], Wen, Z.Y.[Ze-Yi], Mian, A.[Ajmal],
Multi-Grained Interpretable Network for Image Recognition,
ICPR22(3815-3821)
IEEE DOI 2212
Learn features at different levels. Resistance, Image recognition, Decision making, Focusing, Predictive models, Feature extraction, Cognition BibRef

Bayer, J.[Jens], Münch, D.[David], Arens, M.[Michael],
Deep Saliency Map Generators for Multispectral Video Classification,
ICPR22(3757-3764)
IEEE DOI 2212
To enable accountability. Measurement, Training, Visualization, TV, Neural networks, Generators BibRef

Cunico, F.[Federico], Capogrosso, L.[Luigi], Setti, F.[Francesco], Carra, D.[Damiano], Fummi, F.[Franco], Cristani, M.[Marco],
I-SPLIT: Deep Network Interpretability for Split Computing,
ICPR22(2575-2581)
IEEE DOI 2212
Performance evaluation, Source coding, Pulmonary diseases, Neurons, Pipelines, Servers BibRef

Zee, T.[Timothy], Lakshmana, M.[Manohar], Nwogu, I.[Ifeoma],
Towards Understanding the Behaviors of Pretrained Compressed Convolutional Models,
ICPR22(3450-3456)
IEEE DOI 2212
Location awareness, Visualization, Image coding, Quantization (signal), Graphics processing units, Feature extraction BibRef

Li, H.[Hui], Li, Z.H.[Zi-Hao], Ma, R.[Rui], Wu, T.[Tieru],
FD-CAM: Improving Faithfulness and Discriminability of Visual Explanation for CNNs,
ICPR22(1300-1306)
IEEE DOI 2212
Visualization, Codes, Convolution, Perturbation methods, Switches, Prediction algorithms BibRef

Cekic, M.[Metehan], Bakiskan, C.[Can], Madhow, U.[Upamanyu],
Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations,
ICIP22(3843-3847)
IEEE DOI 2211
Training, Deep learning, Neurons, Neural networks, Wires, Supervised learning, Feature extraction, Interpretable ML, machine learning BibRef

Zheng, Q.[Quan], Wang, Z.W.[Zi-Wei], Zhou, J.[Jie], Lu, J.W.[Ji-Wen],
Shap-CAM: Visual Explanations for Convolutional Neural Networks Based on Shapley Value,
ECCV22(XII:459-474).
Springer DOI 2211
BibRef

Salama, A.[Ahmed], Adly, N.[Noha], Torki, M.[Marwan],
Ablation-CAM++: Grouped Recursive Visual Explanations for Deep Convolutional Networks,
ICIP22(2011-2015)
IEEE DOI 2211
Measurement, Deep learning, Visualization, Focusing, Binary trees, Predictive models, Interpretable Models, Visual Explanations, Computer Vision BibRef

Kherchouche, A.[Anouar], Ben-Ahmed, O.[Olfa], Guillevin, C.[Carole], Tremblais, B.[Benoit], Julian, A.[Adrien], Guillevin, R.[Rémy],
MRS-XNet: An Explainable One-Dimensional Deep Neural Network for Magnetic Spectroscopic Data Classification,
ICIP22(3923-3927)
IEEE DOI 2211
Protons, Solid modeling, Spectroscopy, Magnetic resonance imaging, Magnetic resonance, Brain modeling, Phosphorus, Computer-Aided Diagnosis BibRef

Rao, S.[Sukrut], Böhle, M.[Moritz], Schiele, B.[Bernt],
Towards Better Understanding Attribution Methods,
CVPR22(10213-10222)
IEEE DOI 2210
Measurement, Visualization, Systematics, Smoothing methods, Neural networks, Inspection, Explainable computer vision BibRef

Keswani, M.[Monish], Ramakrishnan, S.[Sriranjani], Reddy, N.[Nishant], Balasubramanian, V.N.[Vineeth N.],
Proto2Proto: Can you recognize the car, the way I do?,
CVPR22(10223-10233)
IEEE DOI 2210
Measurement, Knowledge engineering, Codes, Prototypes, Pattern recognition, Automobiles, Explainable computer vision, Efficient learning and inferences BibRef

Wu, Y.X.[Yu-Xi], Chen, C.[Changhuai], Che, J.[Jun], Pu, S.L.[Shi-Liang],
FAM: Visual Explanations for the Feature Representations from Deep Convolutional Networks,
CVPR22(10297-10306)
IEEE DOI 2210
Representation learning, Visualization, Privacy, Ethics, Neurons, Feature extraction, privacy and ethics in vision, accountability, Recognition: detection BibRef

Chakraborty, T.[Tanmay], Trehan, U.[Utkarsh], Mallat, K.[Khawla], Dugelay, J.L.[Jean-Luc],
Generalizing Adversarial Explanations with Grad-CAM,
ArtOfRobust22(186-192)
IEEE DOI 2210
Measurement, Heating systems, Image analysis, Computational modeling, Face recognition, Neural networks, Decision making BibRef

Dravid, A.[Amil], Schiffers, F.[Florian], Gong, B.Q.[Bo-Qing], Katsaggelos, A.K.[Aggelos K.],
medXGAN: Visual Explanations for Medical Classifiers through a Generative Latent Space,
FaDE-TCV22(2935-2944)
IEEE DOI 2210
Location awareness, Visualization, Interpolation, Pathology, Codes, Neural networks, Anatomical structure BibRef

Kowal, M.[Matthew], Siam, M.[Mennatullah], Islam, M.A.[Md Amirul], Bruce, N.D.B.[Neil D. B.], Wildes, R.P.[Richard P.], Derpanis, K.G.[Konstantinos G.],
A Deeper Dive Into What Deep Spatiotemporal Networks Encode: Quantifying Static vs. Dynamic Information,
CVPR22(13979-13989)
IEEE DOI 2210
Visualization, Computational modeling, Heuristic algorithms, Dynamics, Object segmentation, grouping and shape analysis BibRef

Somepalli, G.[Gowthami], Fowl, L.[Liam], Bansal, A.[Arpit], Yeh-Chiang, P.[Ping], Dar, Y.[Yehuda], Baraniuk, R.[Richard], Goldblum, M.[Micah], Goldstein, T.[Tom],
Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective,
CVPR22(13689-13698)
IEEE DOI 2210
Training, Interpolation, Computational modeling, Neural networks, Machine learning BibRef

Sandoval-Segura, P.[Pedro], Singla, V.[Vasu], Fowl, L.[Liam], Geiping, J.[Jonas], Goldblum, M.[Micah], Jacobs, D.[David], Goldstein, T.[Tom],
Poisons that are learned faster are more effective,
ArtOfRobust22(197-204)
IEEE DOI 2210
Training, Data privacy, Privacy, Toxicology, Correlation, Perturbation methods BibRef

Yang, Y.[Yu], Kim, S.[Seungbae], Joo, J.[Jungseock],
Explaining Deep Convolutional Neural Networks via Latent Visual-Semantic Filter Attention,
CVPR22(8323-8333)
IEEE DOI 2210
Training, Visualization, Machine vision, Computational modeling, Semantics, Training data, Explainable computer vision, Vision applications and systems BibRef

MacDonald, L.E.[Lachlan E.], Ramasinghe, S.[Sameera], Lucey, S.[Simon],
Enabling Equivariance for Arbitrary Lie Groups,
CVPR22(8173-8182)
IEEE DOI 2210
Degradation, Perturbation methods, Benchmark testing, Mathematical models, Robustness, Pattern recognition, Explainable computer vision BibRef

Kocasari, U.[Umut], Zaman, K.[Kerem], Tiftikci, M.[Mert], Simsar, E.[Enis], Yanardag, P.[Pinar],
Rank in Style: A Ranking-based Approach to Find Interpretable Directions,
CVFAD22(2293-2297)
IEEE DOI 2210
Image synthesis, Search problems, Pattern recognition, Decoding BibRef

Marathe, A.[Aboli], Jain, P.[Pushkar], Walambe, R.[Rahee], Kotecha, K.[Ketan],
RestoreX-AI: A Contrastive Approach towards Guiding Image Restoration via Explainable AI Systems,
V4AS22(3029-3038)
IEEE DOI 2210
Training, Noise reduction, Object detection, Detectors, Transformers, Image restoration, Tornadoes BibRef

Dittakavi, B.[Bhat], Bavikadi, D.[Divyagna], Desai, S.V.[Sai Vikas], Chakraborty, S.[Soumi], Reddy, N.[Nishant], Balasubramanian, V.N.[Vineeth N], Callepalli, B.[Bharathi], Sharma, A.[Ayon],
Pose Tutor: An Explainable System for Pose Correction in the Wild,
CVSports22(3539-3548)
IEEE DOI 2210
Training, Predictive models, Muscles, Skeleton, Pattern recognition BibRef

Zhang, Y.F.[Yi-Feng], Jiang, M.[Ming], Zhao, Q.[Qi],
Query and Attention Augmentation for Knowledge-Based Explainable Reasoning,
CVPR22(15555-15564)
IEEE DOI 2210
Knowledge engineering, Visualization, Computational modeling, Neural networks, Knowledge based systems, Reinforcement learning, Visual reasoning BibRef

Chockler, H.[Hana], Kroening, D.[Daniel], Sun, Y.C.[You-Cheng],
Explanations for Occluded Images,
ICCV21(1214-1223)
IEEE DOI 2203
Approximation algorithms, Classification algorithms, Explainable AI, BibRef

Rodríguez, P.[Pau], Caccia, M.[Massimo], Lacoste, A.[Alexandre], Zamparo, L.[Lee], Laradji, I.[Issam], Charlin, L.[Laurent], Vazquez, D.[David],
Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations,
ICCV21(1036-1045)
IEEE DOI 2203
Code, Explanation.
WWW Link. Codes, Computational modeling, Perturbation methods, Decision making, Machine learning, Predictive models, and ethics in vision BibRef

Kobs, K.[Konstantin], Steininger, M.[Michael], Dulny, A.[Andrzej], Hotho, A.[Andreas],
Do Different Deep Metric Learning Losses Lead to Similar Learned Features?,
ICCV21(10624-10634)
IEEE DOI 2203
Measurement, Learning systems, Analytical models, Visualization, Image color analysis, Representation learning, Explainable AI BibRef

Jung, H.[Hyungsik], Oh, Y.[Youngrock],
Towards Better Explanations of Class Activation Mapping,
ICCV21(1316-1324)
IEEE DOI 2203
Measurement, Visualization, Analytical models, Additives, Computational modeling, Linearity, Explainable AI, Fairness, Visual reasoning and logical representation BibRef

Lam, P.C.H.[Peter Cho-Ho], Chu, L.[Lingyang], Torgonskiy, M.[Maxim], Pei, J.[Jian], Zhang, Y.[Yong], Wang, L.[Lanjun],
Finding Representative Interpretations on Convolutional Neural Networks,
ICCV21(1325-1334)
IEEE DOI 2203
Heating systems, Deep learning, Costs, Semantics, Convolutional neural networks, Explainable AI, BibRef

Lee, K.H.[Kwang Hee], Park, C.[Chaewon], Oh, J.[Junghyun], Kwak, N.[Nojun],
LFI-CAM: Learning Feature Importance for Better Visual Explanation,
ICCV21(1335-1343)
IEEE DOI 2203
Visualization, Computer network reliability, Decision making, Stability analysis, Recognition and classification BibRef

Lang, O.[Oran], Gandelsman, Y.[Yossi], Yarom, M.[Michal], Wald, Y.[Yoav], Elidan, G.[Gal], Hassidim, A.[Avinatan], Freeman, W.T.[William T.], Isola, P.[Phillip], Globerson, A.[Amir], Irani, M.[Michal], Mosseri, I.[Inbar],
Explaining in Style: Training a GAN to explain a classifier in StyleSpace,
ICCV21(673-682)
IEEE DOI 2203
Training, Visualization, Animals, Semantics, Retina, Standards, Explainable AI, Image and video synthesis BibRef

Li, L.Z.[Liang-Zhi], Wang, B.[Bowen], Verma, M.[Manisha], Nakashima, Y.[Yuta], Kawasaki, R.[Ryo], Nagahara, H.[Hajime],
SCOUTER: Slot Attention-based Classifier for Explainable Image Recognition,
ICCV21(1026-1035)
IEEE DOI 2203
Measurement, Visualization, Image recognition, Codes, Decision making, Switches, Explainable AI, Fairness, accountability, and ethics in vision BibRef

Lerman, S.[Samuel], Venuto, C.[Charles], Kautz, H.[Henry], Xu, C.L.[Chen-Liang],
Explaining Local, Global, And Higher-Order Interactions In Deep Learning,
ICCV21(1204-1213)
IEEE DOI 2203
Deep learning, Measurement, Visualization, Codes, Neural networks, Object detection, Explainable AI, Machine learning architectures and formulations BibRef

Anderson, C.[Connor], Farrell, R.[Ryan],
Improving Fractal Pre-training,
WACV22(2412-2421)
IEEE DOI 2202
Training, Image recognition, Navigation, Neural networks, Rendering (computer graphics), Fractals, Semi- and Un- supervised Learning BibRef

Guo, P.[Pei], Farrell, R.[Ryan],
Semantic Network Interpretation,
Explain-Bio22(400-409)
IEEE DOI 2202
Measurement, Training, Visualization, Correlation, Computational modeling, Semantics, Filtering algorithms BibRef

Watson, M.[Matthew], Hasan, B.A.S.[Bashar Awwad Shiekh], Al Moubayed, N.[Noura],
Agree to Disagree: When Deep Learning Models With Identical Architectures Produce Distinct Explanations,
WACV22(1524-1533)
IEEE DOI 2202
Deep learning, Training, Medical conditions, Neural networks, MIMICs, Logic gates, Market research, Explainable AI, Fairness, Medical Imaging/Imaging for Bioinformatics/Biological and Cell Microscopy BibRef

Fel, T.[Thomas], Vigouroux, D.[David], Cadène, R.[Rémi], Serre, T.[Thomas],
How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks,
WACV22(1565-1575)
IEEE DOI 2202
Deep learning, Computational modeling, Neural networks, Network architecture, Prediction algorithms, Privacy and Ethics in Vision BibRef

Hada, S.S.[Suryabhan Singh], Carreira-Perpiñán, M.Á.[Miguel Á.],
Sampling the 'Inverse Set' of a Neuron,
ICIP21(3712-3716)
IEEE DOI 2201
What does a neuron represent? Deep learning, Visualization, Monte Carlo methods, Image processing, Neurons, Markov processes, Interpretability, GANs BibRef

Yadu, A.[Ankit], Suhas, P.K.[P K], Sinha, N.[Neelam],
Class Specific Interpretability in CNN Using Causal Analysis,
ICIP21(3702-3706)
IEEE DOI 2201
Measurement, Location awareness, Visualization, Image color analysis, Computational modeling, Machine learning BibRef

Sasdelli, M.[Michele], Ajanthan, T.[Thalaiyasingam], Chin, T.J.[Tat-Jun], Carneiro, G.[Gustavo],
A Chaos Theory Approach to Understand Neural Network Optimization,
DICTA21(1-10)
IEEE DOI 2201
Deep learning, Heuristic algorithms, Digital images, Computational modeling, Neural networks, Stochastic processes, Computer architecture BibRef

Konate, S.[Salamata], Lebrat, L.[Léo], Cruz, R.S.[Rodrigo Santa], Smith, E.[Elliot], Bradley, A.[Andrew], Fookes, C.[Clinton], Salvado, O.[Olivier],
A Comparison of Saliency Methods for Deep Learning Explainability,
DICTA21(01-08)
IEEE DOI 2201
Deep learning, Backpropagation, Gradient methods, Perturbation methods, Digital images, Complexity theory, CNN BibRef

Khakzar, A.[Ashkan], Baselizadeh, S.[Soroosh], Khanduja, S.[Saurabh], Rupprecht, C.[Christian], Kim, S.T.[Seong Tae], Navab, N.[Nassir],
Neural Response Interpretation through the Lens of Critical Pathways,
CVPR21(13523-13533)
IEEE DOI 2111
Gradient methods, Computer network reliability, Neurons, Pattern recognition, Reliability, Object recognition BibRef

Stammer, W.[Wolfgang], Schramowski, P.[Patrick], Kersting, K.[Kristian],
Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations,
CVPR21(3618-3628)
IEEE DOI 2111
Training, Deep learning, Visualization, Image color analysis, Semantics, Focusing BibRef

Lim, D.[Dohun], Lee, H.[Hyeonseok], Kim, S.[Sungchan],
Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation,
CVPR21(6464-6473)
IEEE DOI 2111
Analytical models, Smoothing methods, Neural networks, Predictive models, Reliability theory BibRef

Chefer, H.[Hila], Gur, S.[Shir], Wolf, L.B.[Lior B.],
Transformer Interpretability Beyond Attention Visualization,
CVPR21(782-791)
IEEE DOI 2111
Visualization, Head, Text categorization, Neural networks, Transformers, Pattern recognition BibRef

Shen, Y.J.[Yu-Jun], Zhou, B.[Bolei],
Closed-Form Factorization of Latent Semantics in GANs,
CVPR21(1532-1540)
IEEE DOI 2111
Limiting, Closed-form solutions, Annotations, Computational modeling, Semantics, Manuals BibRef

Singla, S.[Sahil], Nushi, B.[Besmira], Shah, S.[Shital], Kamar, E.[Ece], Horvitz, E.[Eric],
Understanding Failures of Deep Networks via Robust Feature Extraction,
CVPR21(12848-12857)
IEEE DOI 2111
Measurement, Visualization, Error analysis, Aggregates, Debugging, Feature extraction BibRef

Poppi, S.[Samuele], Cornia, M.[Marcella], Baraldi, L.[Lorenzo], Cucchiara, R.[Rita],
Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis,
RCV21(2299-2304)
IEEE DOI 2109
Deep learning, Visualization, Protocols, Reproducibility of results, Pattern recognition BibRef

Rahnama, A.[Arash], Tseng, A.[Andrew],
An Adversarial Approach for Explaining the Predictions of Deep Neural Networks,
TCV21(3247-3256)
IEEE DOI 2109
Deep learning, Machine learning algorithms, Computational modeling, Speech recognition, Prediction algorithms BibRef

Wang, B.[Bowen], Li, L.Z.[Liang-Zhi], Verma, M.[Manisha], Nakashima, Y.[Yuta], Kawasaki, R.[Ryo], Nagahara, H.[Hajime],
MTUNet: Few-shot Image Classification with Visual Explanations,
RCV21(2294-2298)
IEEE DOI 2109
Knowledge engineering, Visualization, Computational modeling, Neural networks, Benchmark testing BibRef

Abello, A.A.[Antonio A.], Hirata, R.[Roberto], Wang, Z.Y.[Zhang-Yang],
Dissecting the High-Frequency Bias in Convolutional Neural Networks,
UG21(863-871)
IEEE DOI 2109
Frequency conversion, Robustness, Frequency diversity, Pattern recognition BibRef

Rosenzweig, J.[Julia], Sicking, J.[Joachim], Houben, S.[Sebastian], Mock, M.[Michael], Akila, M.[Maram],
Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities,
SAIAD21(56-65)
IEEE DOI 2109
Learning to eliminate safety errors in NN. Couplings, Training, Analytical models, Systematics, Semantics, Toy manufacturing industry, Safety BibRef

Chen, Q.X.[Qiu-Xiao], Li, P.F.[Peng-Fei], Xu, M.[Meng], Qi, X.J.[Xiao-Jun],
Sparse Activation Maps for Interpreting 3D Object Detection,
SAIAD21(76-84)
IEEE DOI 2109
Visualization, Solid modeling, Neurons, Semantics, Object detection, Feature extraction BibRef

Jaworek-Korjakowska, J.[Joanna], Kostuch, A.[Aleksander], Skruch, P.[Pawel],
SafeSO: Interpretable and Explainable Deep Learning Approach for Seat Occupancy Classification in Vehicle Interior,
SAIAD21(103-112)
IEEE DOI 2109
Measurement, Heating systems, Deep learning, Visualization, Belts, Object recognition BibRef

Suzuki, M.[Muneaki], Kameya, Y.[Yoshitaka], Kutsuna, T.[Takuro], Mitsumoto, N.[Naoki],
Understanding the Reason for Misclassification by Generating Counterfactual Images,
MVA21(1-5)
DOI Link 2109
Deep learning, Generative adversarial networks, Task analysis, Artificial intelligence, Image classification BibRef

Li, Z.Q.[Zhen-Qiang], Wang, W.M.[Wei-Min], Li, Z.Y.[Zuo-Yue], Huang, Y.F.[Yi-Fei], Sato, Y.[Yoichi],
Towards Visually Explaining Video Understanding Networks with Perturbation,
WACV21(1119-1128)
IEEE DOI 2106
Knowledge engineering, Deep learning, Visualization, Pathology, Perturbation methods BibRef

Oh, Y.[Youngrock], Jung, H.[Hyungsik], Park, J.[Jeonghyung], Kim, M.S.[Min Soo],
EVET: Enhancing Visual Explanations of Deep Neural Networks Using Image Transformations,
WACV21(3578-3586)
IEEE DOI 2106
Location awareness, Visualization, Pipelines, Neural networks, Machine learning BibRef

Song, W.[Wei], Dai, S.Y.[Shu-Yuan], Huang, D.M.[Dong-Mei], Song, J.L.[Jin-Ling], Liotta, A.[Antonio],
Median-Pooling Grad-Cam: An Efficient Inference Level Visual Explanation for CNN Networks in Remote Sensing Image Classification,
MMMod21(II:134-146).
Springer DOI 2106
BibRef

Samangouei, P.[Pouya], Saeedi, A.[Ardavan], Nakagawa, L.[Liam], Silberman, N.[Nathan],
ExplainGAN: Model Explanation via Decision Boundary Crossing Transformations,
ECCV18(X: 681-696).
Springer DOI 1810
BibRef

Pope, P.E.[Phillip E.], Kolouri, S.[Soheil], Rostami, M.[Mohammad], Martin, C.E.[Charles E.], Hoffmann, H.[Heiko],
Explainability Methods for Graph Convolutional Neural Networks,
CVPR19(10764-10773).
IEEE DOI 2002
BibRef

Huseljic, D.[Denis], Sick, B.[Bernhard], Herde, M.[Marek], Kottke, D.[Daniel],
Separation of Aleatoric and Epistemic Uncertainty in Deterministic Deep Neural Networks,
ICPR21(9172-9179)
IEEE DOI 2105
Analytical models, Uncertainty, Neural networks, Measurement uncertainty, Data models, Reliability BibRef

Shi, S.[Sheng], Du, Y.Z.[Yang-Zhou], Fan, W.[Wei],
Kernel-based LIME with feature dependency sampling,
ICPR21(9143-9148)
IEEE DOI 2105
Local Interpretable Model-agnostic Explanation. Correlation, Neural networks, Complex networks, Predictive models, Internet, Artificial intelligence, Task analysis BibRef
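
The LIME family of methods cited above fits a local linear surrogate to a black-box model around one input. A minimal sketch of the core idea, using synthetic data and omitting the proximity-kernel weighting and feature-dependency sampling that the papers above refine (the `predict_fn` black box here is a toy stand-in, not any cited model):

```python
import numpy as np

def lime_weights(x, predict_fn, n_samples=500, seed=0):
    """LIME-style local surrogate: perturb the input by randomly zeroing
    features, query the black box on each perturbation, then fit a
    least-squares linear model from the on/off masks to the scores.
    Its coefficients rank per-feature importance near x."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, x.size))   # binary on/off per feature
    scores = np.array([predict_fn(x * m) for m in masks])  # black-box queries
    design = np.column_stack([masks, np.ones(n_samples)])  # append an intercept column
    coef, *_ = np.linalg.lstsq(design, scores, rcond=None)
    return coef[:-1]                                       # drop the intercept

# toy black box that only looks at feature 0
x = np.array([1.0, 1.0, 1.0])
w = lime_weights(x, lambda z: 3.0 * z[0])
# the surrogate recovers the black box's sole dependency: w ~ [3, 0, 0]
```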

Charachon, M.[Martin], Hudelot, C.[Céline], Cournède, P.H.[Paul-Henry], Ruppli, C.[Camille], Ardon, R.[Roberto],
Combining Similarity and Adversarial Learning to Generate Visual Explanation: Application to Medical Image Classification,
ICPR21(7188-7195)
IEEE DOI 2105
Measurement, Visualization, Perturbation methods, Predictive models, Real-time systems, Adversarial Example BibRef

Goh, G.S.W.[Gary S. W.], Lapuschkin, S.[Sebastian], Weber, L.[Leander], Samek, W.[Wojciech], Binder, A.[Alexander],
Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution,
ICPR21(4949-4956)
IEEE DOI 2105
Adaptation models, Sensitivity, Neural networks, Taxonomy, Object recognition, Noise measurement BibRef
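
Integrated Gradients, which the entry above analyzes through a SmoothTaylor lens, attributes a prediction by averaging gradients along the straight-line path from a baseline to the input. A minimal numpy sketch on a toy differentiable function (the `grad_fn` here is a hand-written gradient for illustration, not a real network):

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Integrated Gradients: average the model's gradient along the
    straight line from baseline to x, then scale by (x - baseline)."""
    alphas = (np.arange(steps) + 0.5) / steps              # midpoint rule
    grads = [grad_fn(baseline + a * (x - baseline)) for a in alphas]
    return (x - baseline) * np.mean(grads, axis=0)

# toy model f(x) = x0 * x1, whose gradient is [x1, x0]
x = np.array([2.0, 3.0])
ig = integrated_gradients(x, np.zeros(2), lambda z: np.array([z[1], z[0]]))
# completeness axiom: attributions sum to f(x) - f(baseline) = 6
```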

Yang, Q.[Qing], Zhu, X.[Xia], Fwu, J.K.[Jong-Kae], Ye, Y.[Yun], You, G.[Ganmei], Zhu, Y.[Yuan],
MFPP: Morphological Fragmental Perturbation Pyramid for Black-Box Model Explanations,
ICPR21(1376-1383)
IEEE DOI 2105
Perturbation methods, Impedance matching, Neural networks, Semantics, Games, Predictive models BibRef

Jung, J.H.[Jay Hoon], Kwon, Y.M.[Young-Min],
Boundaries of Single-Class Regions in the Input Space of Piece-Wise Linear Neural Networks,
ICPR21(6027-6034)
IEEE DOI 2105
Linearity, Robustness, Convolutional neural networks, Nonlinear systems, Deep Neural Network BibRef

Zhang, M.[Moyu], Zhu, X.N.[Xin-Ning], Ji, Y.[Yang],
Input-aware Neural Knowledge Tracing Machine,
HCAU20(345-360).
Springer DOI 2103
BibRef

Veerappa, M.[Manjunatha], Anneken, M.[Mathias], Burkart, N.[Nadia],
Evaluation of Interpretable Association Rule Mining Methods on Time-series in the Maritime Domain,
EDL-AI20(204-218).
Springer DOI 2103
BibRef

Jouis, G.[Gaëlle], Mouchère, H.[Harold], Picarougne, F.[Fabien], Hardouin, A.[Alexandre],
Anchors vs Attention: Comparing XAI on a Real-life Use Case,
EDL-AI20(219-227).
Springer DOI 2103
BibRef

Henin, C.[Clément], Le Métayer, D.[Daniel],
A Multi-layered Approach for Tailored Black-box Explanations,
EDL-AI20(5-19).
Springer DOI 2103
BibRef

Kenny, E.M.[Eoin M.], Delaney, E.D.[Eoin D.], Greene, D.[Derek], Keane, M.T.[Mark T.],
Post-hoc Explanation Options for XAI in Deep Learning: The Insight Centre for Data Analytics Perspective,
EDL-AI20(20-34).
Springer DOI 2103
BibRef

Halnaut, A.[Adrien], Giot, R.[Romain], Bourqui, R.[Romain], Auber, D.[David],
Samples Classification Analysis Across DNN Layers with Fractal Curves,
EDL-AI20(47-61).
Springer DOI 2103
BibRef

Jung, H.[Hyungsik], Oh, Y.[Youngrock], Park, J.[Jeonghyung], Kim, M.S.[Min Soo],
Jointly Optimize Positive and Negative Saliencies for Black Box Classifiers,
EDL-AI20(76-89).
Springer DOI 2103
BibRef

Zhu, P.[Pengkai], Zhu, R.Z.[Rui-Zhao], Mishra, S.[Samarth], Saligrama, V.[Venkatesh],
Low Dimensional Visual Attributes: An Interpretable Image Encoding,
EDL-AI20(90-102).
Springer DOI 2103
BibRef

Marcos, D.[Diego], Fong, R.[Ruth], Lobry, S.[Sylvain], Flamary, R.[Rémi], Courty, N.[Nicolas], Tuia, D.[Devis],
Contextual Semantic Interpretability,
ACCV20(IV:351-368).
Springer DOI 2103
BibRef

Townsend, J.[Joe], Kasioumis, T.[Theodoros], Inakoshi, H.[Hiroya],
ERIC: Extracting Relations Inferred from Convolutions,
ACCV20(III:206-222).
Springer DOI 2103
Behavior of NN approximated with a program. BibRef

Galli, A.[Antonio], Marrone, S.[Stefano], Moscato, V.[Vincenzo], Sansone, C.[Carlo],
Reliability of explainable Artificial Intelligence in Adversarial Perturbation Scenarios,
EDL-AI20(243-256).
Springer DOI 2103
BibRef

Agarwal, C.[Chirag], Nguyen, A.[Anh],
Explaining Image Classifiers by Removing Input Features Using Generative Models,
ACCV20(VI:101-118).
Springer DOI 2103
BibRef

Gorokhovatskyi, O.[Oleksii], Peredrii, O.[Olena],
Recursive Division of Image for Explanation of Shallow CNN Models,
EDL-AI20(274-286).
Springer DOI 2103
BibRef

Konforti, Y.[Yael], Shpigler, A.[Alon], Lerner, B.[Boaz], Bar-Hillel, A.[Aharon],
Inference Graphs for CNN Interpretation,
ECCV20(XXV:69-84).
Springer DOI 2011
BibRef

Singh, M.[Mayank], Kumari, N.[Nupur], Mangla, P.[Puneet], Sinha, A.[Abhishek], Balasubramanian, V.N.[Vineeth N.], Krishnamurthy, B.[Balaji],
Attributional Robustness Training Using Input-gradient Spatial Alignment,
ECCV20(XXVII:515-533).
Springer DOI 2011
BibRef

Rombach, R.[Robin], Esser, P.[Patrick], Ommer, B.[Björn],
Making Sense of CNNs: Interpreting Deep Representations and Their Invariances with INNs,
ECCV20(XVII:647-664).
Springer DOI 2011
BibRef

Li, Y.C.[Yu-Chao], Ji, R.R.[Rong-Rong], Lin, S.H.[Shao-Hui], Zhang, B.C.[Bao-Chang], Yan, C.Q.[Chen-Qian], Wu, Y.J.[Yong-Jian], Huang, F.Y.[Fei-Yue], Shao, L.[Ling],
Interpretable Neural Network Decoupling,
ECCV20(XV:653-669).
Springer DOI 2011
BibRef

Franchi, G.[Gianni], Bursuc, A.[Andrei], Aldea, E.[Emanuel], Dubuisson, S.[Séverine], Bloch, I.[Isabelle],
Tradi: Tracking Deep Neural Network Weight Distributions,
ECCV20(XVII:105-121).
Springer DOI 2011
BibRef

Bhushan, C.[Chitresh], Yang, Z.Y.[Zhao-Yuan], Virani, N.[Nurali], Iyer, N.[Naresh],
Variational Encoder-Based Reliable Classification,
ICIP20(1941-1945)
IEEE DOI 2011
Training, Image reconstruction, Reliability, Measurement, Decoding, Artificial neural networks, Uncertainty, Classification, Adversarial Attacks BibRef

Lee, J., Al Regib, G.,
Gradients as a Measure of Uncertainty in Neural Networks,
ICIP20(2416-2420)
IEEE DOI 2011
Uncertainty, Training, Measurement uncertainty, Computational modeling, Neural networks, Data models, image corruption/distortion BibRef

Sun, Y., Prabhushankar, M., Al Regib, G.,
Implicit Saliency In Deep Neural Networks,
ICIP20(2915-2919)
IEEE DOI 2011
Feature extraction, Visualization, Semantics, Saliency detection, Convolution, Robustness, Neural networks, Saliency, Deep Learning BibRef

Prabhushankar, M., Kwon, G., Temel, D., Al Regib, G.,
Contrastive Explanations In Neural Networks,
ICIP20(3289-3293)
IEEE DOI 2011
Visualization, Neural networks, Manifolds, Image recognition, Image quality, Automobiles, Image color analysis, Interpretability, Image Quality Assessment BibRef

Tao, X.Y.[Xiao-Yu], Chang, X.Y.[Xin-Yuan], Hong, X.P.[Xiao-Peng], Wei, X.[Xing], Gong, Y.H.[Yi-Hong],
Topology-preserving Class-incremental Learning,
ECCV20(XIX:254-270).
Springer DOI 2011
BibRef

Yuan, K.[Kun], Li, Q.Q.[Quan-Quan], Shao, J.[Jing], Yan, J.J.[Jun-Jie],
Learning Connectivity of Neural Networks from a Topological Perspective,
ECCV20(XXI:737-753).
Springer DOI 2011
BibRef

Bau, D.[David], Liu, S.[Steven], Wang, T.Z.[Tong-Zhou], Zhu, J.Y.[Jun-Yan], Torralba, A.B.[Antonio B.],
Rewriting a Deep Generative Model,
ECCV20(I:351-369).
Springer DOI 2011
BibRef

Liang, H.Y.[Hao-Yu], Ouyang, Z.H.[Zhi-Hao], Zeng, Y.Y.[Yu-Yuan], Su, H.[Hang], He, Z.H.[Zi-Hao], Xia, S.T.[Shu-Tao], Zhu, J.[Jun], Zhang, B.[Bo],
Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters,
ECCV20(II:622-638).
Springer DOI 2011
BibRef

Chen, S.[Shi], Jiang, M.[Ming], Yang, J.H.[Jin-Hui], Zhao, Q.[Qi],
AiR: Attention with Reasoning Capability,
ECCV20(I:91-107).
Springer DOI 2011
BibRef

Ding, Y.K.[Yu-Kun], Liu, J.L.[Jing-Lan], Xiong, J.J.[Jin-Jun], Shi, Y.Y.[Yi-Yu],
Revisiting the Evaluation of Uncertainty Estimation and Its Application to Explore Model Complexity-Uncertainty Trade-Off,
TCV20(22-31)
IEEE DOI 2008
Uncertainty, Calibration, Estimation, Predictive models, Complexity theory, Neural networks BibRef

Ye, J.W.[Jing-Wen], Ji, Y.X.[Yi-Xin], Wang, X.C.[Xin-Chao], Gao, X.[Xin], Song, M.L.[Ming-Li],
Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN,
CVPR20(12513-12522)
IEEE DOI 2008
Multiple CNN. Generators, Training, Task analysis, Knowledge engineering, Training data BibRef

Yang, Z.X.[Zong-Xin], Zhu, L.C.[Lin-Chao], Wu, Y.[Yu], Yang, Y.[Yi],
Gated Channel Transformation for Visual Recognition,
CVPR20(11791-11800)
IEEE DOI 2008
Logic gates, Task analysis, Visualization, Neurons, Training, Complexity theory BibRef

Xu, S.[Shawn], Venugopalan, S.[Subhashini], Sundararajan, M.[Mukund],
Attribution in Scale and Space,
CVPR20(9677-9686)
IEEE DOI 2008
Code, Deep Nets.
WWW Link. Perturbation methods, Task analysis, Kernel, Mathematical model, Google, Medical services BibRef

Wang, Z., Mardziel, P., Datta, A., Fredrikson, M.,
Interpreting Interpretations: Organizing Attribution Methods by Criteria,
TCV20(48-55)
IEEE DOI 2008
Perturbation methods, Visualization, Computational modeling, Measurement, Convolutional neural networks, Dogs BibRef

Taylor, E., Shekhar, S., Taylor, G.W.,
Response Time Analysis for Explainability of Visual Processing in CNNs,
MVM20(1555-1558)
IEEE DOI 2008
Grammar, Computational modeling, Semantics, Syntactics, Visualization, Analytical models, Object recognition BibRef

Hartley, T., Sidorov, K., Willis, C., Marshall, D.,
Explaining Failure: Investigation of Surprise and Expectation in CNNs,
TCV20(56-65)
IEEE DOI 2008
Training data, Training, Convolution, Data models, Convolutional neural networks, Data visualization, Mathematical model BibRef

Ramanujan, V., Wortsman, M., Kembhavi, A., Farhadi, A., Rastegari, M.,
What's Hidden in a Randomly Weighted Neural Network?,
CVPR20(11890-11899)
IEEE DOI 2008
Training, Neurons, Biological neural networks, Stochastic processes, Buildings, Standards BibRef

Bansal, N.[Naman], Agarwal, C.[Chirag], Nguyen, A.[Anh],
SAM: The Sensitivity of Attribution Methods to Hyperparameters,
CVPR20(8670-8680)
IEEE DOI 2008
BibRef
And: TCV20(11-21)
IEEE DOI 2008
Robustness, Sensitivity, Heating systems, Noise measurement, Limiting, Smoothing methods BibRef

Wang, H., Wu, X., Huang, Z., Xing, E.P.,
High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks,
CVPR20(8681-8691)
IEEE DOI 2008
Training, Robustness, Hybrid fiber coaxial cables, Mathematical model, Convolutional neural networks, Data models BibRef

Wu, W., Su, Y., Chen, X., Zhao, S., King, I., Lyu, M.R., Tai, Y.,
Towards Global Explanations of Convolutional Neural Networks With Concept Attribution,
CVPR20(8649-8658)
IEEE DOI 2008
Feature extraction, Predictive models, Detectors, Cognition, Semantics, Neurons, Computational modeling BibRef

Agarwal, A., Singh, R., Vatsa, M.,
The Role of 'Sign' and 'Direction' of Gradient on the Performance of CNN,
WMF20(2748-2756)
IEEE DOI 2008
Databases, Machine learning, Computational modeling, Object recognition, Training, Optimization BibRef

Wang, H., Wang, Z., Du, M., Yang, F., Zhang, Z., Ding, S., Mardziel, P., Hu, X.,
Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks,
TCV20(111-119)
IEEE DOI 2008
Visualization, Convolution, Noise measurement, Convolutional neural networks, Task analysis, Debugging, Tools BibRef
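
Score-CAM, cited above, replaces Grad-CAM's gradient weights with forward-pass scores: each activation map masks the input, and the black box's confidence on the masked input weights that map. A minimal sketch with synthetic arrays, assuming the maps are already upsampled to input size (the `score_fn` toy black box is an assumption for illustration):

```python
import numpy as np

def score_cam(image, activations, score_fn):
    """Score-CAM style map (gradient-free): each activation map, min-max
    normalized, masks the input; the black-box score on the masked input
    becomes that map's weight in the final sum."""
    weights = []
    for a in activations:                    # (H, W) maps, input-sized here
        spread = a.max() - a.min()
        mask = (a - a.min()) / spread if spread > 0 else np.zeros_like(a)
        weights.append(score_fn(image * mask))   # one forward pass per map
    cam = np.tensordot(np.array(weights), activations, axes=1)
    cam = np.maximum(cam, 0.0)               # keep positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam

# toy black box that scores only the top-left pixel
image = np.ones((4, 4))
acts = np.zeros((2, 4, 4)); acts[0, 0, 0] = 1.0; acts[1, 3, 3] = 1.0
cam = score_cam(image, acts, lambda x: float(x[0, 0]))
```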

Kim, E., Gopinath, D., Pasareanu, C., Seshia, S.A.,
A Programmatic and Semantic Approach to Explaining and Debugging Neural Network Based Object Detectors,
CVPR20(11125-11134)
IEEE DOI 2008
Semantics, Automobiles, Feature extraction, Detectors, Probabilistic logic, Debugging BibRef

Jalwana, M.A.A.K.[M. A. A. K.], Akhtar, N., Bennamoun, M., Mian, A.,
Attack to Explain Deep Representation,
CVPR20(9540-9549)
IEEE DOI 2008
Perturbation methods, Computational modeling, Visualization, Robustness, Image generation, Machine learning, Task analysis BibRef

Koperski, M.[Michal], Konopczynski, T.[Tomasz], Nowak, R.[Rafal], Semberecki, P.[Piotr], Trzcinski, T.[Tomasz],
Plugin Networks for Inference under Partial Evidence,
WACV20(2872-2880)
IEEE DOI 2006
Plugin layers to pre-trained network. Task analysis, Training, Visualization, Neural networks, Image segmentation, Image annotation, Image recognition BibRef

Chen, L., Chen, J., Hajimirsadeghi, H., Mori, G.,
Adapting Grad-CAM for Embedding Networks,
WACV20(2783-2792)
IEEE DOI 2006
Visualization, Testing, Training, Databases, Estimation, Heating systems, Task analysis BibRef

Chattopadhay, A., Sarkar, A., Howlader, P., Balasubramanian, V.N.,
Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks,
WACV18(839-847)
IEEE DOI 1806
convolution, feedforward neural nets, gradient methods, learning (artificial intelligence), Visualization BibRef
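
The Grad-CAM family that this entry generalizes shares one core computation: pool the gradients flowing into a convolutional layer to get per-channel weights, take a weighted sum of that layer's activation maps, and ReLU the result. A minimal numpy sketch of plain Grad-CAM on synthetic activations and gradients (Grad-CAM++ refines only the weighting term):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM style map: global-average-pool the gradients for
    per-channel weights, form a weighted sum of activation maps,
    then clip negatives (ReLU) and normalize to [0, 1]."""
    # activations, gradients: (C, H, W) arrays from the target conv layer
    weights = gradients.mean(axis=(1, 2))             # (C,) pooled weights
    cam = np.tensordot(weights, activations, axes=1)  # (H, W) weighted sum
    cam = np.maximum(cam, 0.0)                        # keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam

# toy layer: channel 0 fires top-left with positive gradient,
# channel 1 fires bottom-right with negative gradient
acts = np.zeros((2, 4, 4)); acts[0, 0, 0] = 1.0; acts[1, 3, 3] = 1.0
grads = np.stack([np.ones((4, 4)), -np.ones((4, 4))])
heat = grad_cam(acts, grads)
# heat highlights the top-left cell; the negative channel is suppressed
```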

Zhang, J., Zhang, J., Ghosh, S., Li, D., Tasci, S., Heck, L., Zhang, H., Kuo, C.C.J.[C.C. Jay],
Class-incremental Learning via Deep Model Consolidation,
WACV20(1120-1129)
IEEE DOI 2006
Data models, Task analysis, Training, Monte Carlo methods, Training data, Computational modeling, Adaptation models BibRef

Vasu, B., Long, C.,
Iterative and Adaptive Sampling with Spatial Attention for Black-Box Model Explanations,
WACV20(2949-2958)
IEEE DOI 2006
Adaptation models, Neural networks, Feature extraction, Mathematical model, Decision making, Machine learning, Visualization BibRef

Desai, S., Ramaswamy, H.G.,
Ablation-CAM: Visual Explanations for Deep Convolutional Network via Gradient-free Localization,
WACV20(972-980)
IEEE DOI 2006
Visualization, Neurons, Task analysis, Data models, Data visualization, Backpropagation BibRef

Gkalelis, N.[Nikolaos], Mezaris, V.[Vasileios],
Subclass Deep Neural Networks: Re-enabling Neglected Classes in Deep Network Training for Multimedia Classification,
MMMod20(I:227-238).
Springer DOI 2003
BibRef

Wu, T., Song, X.,
Towards Interpretable Object Detection by Unfolding Latent Structures,
ICCV19(6032-6042)
IEEE DOI 2004
Code, Object Detection.
WWW Link. convolutional neural nets, grammars, learning (artificial intelligence), object detection, Predictive models BibRef

Sun, Y., Ravi, S., Singh, V.,
Adaptive Activation Thresholding: Dynamic Routing Type Behavior for Interpretability in Convolutional Neural Networks,
ICCV19(4937-4946)
IEEE DOI 2004
convolutional neural nets, learning (artificial intelligence), Standards BibRef

Michelini, P.N., Liu, H., Lu, Y., Jiang, X.,
A Tour of Convolutional Networks Guided by Linear Interpreters,
ICCV19(4752-4761)
IEEE DOI 2004
convolutional neural nets, image classification, image resolution, copy-move strategies, Switches BibRef

Shoshan, A.[Alon], Mechrez, R.[Roey], Zelnik-Manor, L.[Lihi],
Dynamic-Net: Tuning the Objective Without Re-Training for Synthesis Tasks,
ICCV19(3214-3222)
IEEE DOI 2004
convolutional neural nets, image processing, optimisation, Dynamic-Net, synthesis tasks, optimization, modern CNN, Face BibRef

Subramanya, A., Pillai, V., Pirsiavash, H.,
Fooling Network Interpretation in Image Classification,
ICCV19(2020-2029)
IEEE DOI 2004
decision making, image classification, learning (artificial intelligence), neural nets, Task analysis BibRef

Liang, M.[Megan], Palado, G.[Gabrielle], Browne, W.N.[Will N.],
Identifying Simple Shapes to Classify the Big Picture,
IVCNZ19(1-6)
IEEE DOI 2004
evolutionary computation, feature extraction, image classification, learning (artificial intelligence), Learning Classifier Systems BibRef

Yin, B., Tran, L., Li, H., Shen, X., Liu, X.,
Towards Interpretable Face Recognition,
ICCV19(9347-9356)
IEEE DOI 2004
convolutional neural nets, face recognition, feature extraction, image representation, learning (artificial intelligence), Feature extraction BibRef

O'Neill, D., Xue, B., Zhang, M.,
The Evolution of Adjacency Matrices for Sparsity of Connection in DenseNets,
IVCNZ19(1-6)
IEEE DOI 2004
convolutional neural nets, genetic algorithms, image classification, matrix algebra, image classification, reduced model complexity BibRef

Sulc, M., Matas, J.G.,
Improving CNN Classifiers by Estimating Test-Time Priors,
TASKCV19(3220-3226)
IEEE DOI 2004
convolutional neural nets, learning (artificial intelligence), maximum likelihood estimation, pattern classification, Probabilistic Classifiers BibRef

Huang, J., Qu, L., Jia, R., Zhao, B.,
O2U-Net: A Simple Noisy Label Detection Approach for Deep Neural Networks,
ICCV19(3325-3333)
IEEE DOI 2004
learning (artificial intelligence), neural nets, probability, deep neural networks, human annotations, BibRef

Konuk, E., Smith, K.,
An Empirical Study of the Relation Between Network Architecture and Complexity,
Preregister19(4597-4599)
IEEE DOI 2004
generalisation (artificial intelligence), image classification, network architecture, preregistration submission, complexity BibRef

Navarrete Michelini, P., Liu, H., Lu, Y., Jiang, X.,
Understanding Convolutional Networks Using Linear Interpreters - Extended Abstract,
VXAI19(4186-4189)
IEEE DOI 2004
convolutional neural nets, feature extraction, image classification, image resolution, image segmentation, deep-learning BibRef

Iqbal, A., Gall, J.,
Level Selector Network for Optimizing Accuracy-Specificity Trade-Offs,
HVU19(1466-1473)
IEEE DOI 2004
directed graphs, image processing, learning (artificial intelligence), video signal processing, Deep Learning BibRef

Lee, H., Kim, H., Nam, H.,
SRM: A Style-Based Recalibration Module for Convolutional Neural Networks,
ICCV19(1854-1862)
IEEE DOI 2004
calibration, convolutional neural nets, feature extraction, image recognition, image representation, Training BibRef

Chen, R., Chen, H., Huang, G., Ren, J., Zhang, Q.,
Explaining Neural Networks Semantically and Quantitatively,
ICCV19(9186-9195)
IEEE DOI 2004
convolutional neural nets, image processing, learning (artificial intelligence), semantic explanation, Task analysis BibRef

Stergiou, A., Kapidis, G., Kalliatakis, G., Chrysoulas, C., Poppe, R., Veltkamp, R.,
Class Feature Pyramids for Video Explanation,
VXAI19(4255-4264)
IEEE DOI 2004
convolutional neural nets, feature extraction, image classification, image motion analysis, saliency-visualization BibRef

Kang, S., Jung, H., Lee, S.,
Interpreting Undesirable Pixels for Image Classification on Black-Box Models,
VXAI19(4250-4254)
IEEE DOI 2004
data visualisation, explanation, image classification, image segmentation, neural nets, neural networks, Interpretability BibRef

Zhuang, J., Dvornek, N.C., Li, X., Yang, J., Duncan, J.,
Decision explanation and feature importance for invertible networks,
VXAI19(4235-4239)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. neural nets, pattern classification, linear classifier, feature space, decision boundary, feature importance, Decision-Boundary BibRef

Yoon, J., Kim, K., Jang, J.,
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation,
VXAI19(4226-4234)
IEEE DOI 2004
convolutional neural nets, image classification, image denoising, learning (artificial intelligence), cosine distance, adversarial-attack BibRef

Marcos, D., Lobry, S., Tuia, D.,
Semantically Interpretable Activation Maps: what-where-how explanations within CNNs,
VXAI19(4207-4215)
IEEE DOI 2004
convolutional neural nets, image classification, learning (artificial intelligence), attributes BibRef

Graziani, M.[Mara], Müller, H.[Henning], Andrearczyk, V.[Vincent],
Interpreting Intentionally Flawed Models with Linear Probes,
SDL-CV19(743-747)
IEEE DOI 2004
learning (artificial intelligence), pattern classification, regression analysis, statistical irregularities, regression, linear probes BibRef

Demidovskij, A., Gorbachev, Y., Fedorov, M., Slavutin, I., Tugarev, A., Fatekhov, M., Tarkan, Y.,
OpenVINO Deep Learning Workbench: Comprehensive Analysis and Tuning of Neural Networks Inference,
SDL-CV19(783-787)
IEEE DOI 2004
interactive systems, learning (artificial intelligence), neural nets, optimisation, user interfaces, hyper parameters, optimization BibRef

Iwana, B.K., Kuroki, R., Uchida, S.,
Explaining Convolutional Neural Networks using Softmax Gradient Layer-wise Relevance Propagation,
VXAI19(4176-4185)
IEEE DOI 2004
convolutional neural nets, data visualisation, image classification, image representation, probability, SGLRP, explainability BibRef

Lazarow, J., Jin, L., Tu, Z.,
Introspective Neural Networks for Generative Modeling,
ICCV17(2793-2802)
IEEE DOI 1802
image classification, image representation, image texture, neural nets, neurocontrollers, statistics, unsupervised learning, Training BibRef

Ren, J.[Jian], Li, Z.[Zhe], Yang, J.C.[Jian-Chao], Xu, N.[Ning], Yang, T.[Tianbao], Foran, D.J.[David J.],
EIGEN: Ecologically-Inspired GENetic Approach for Neural Network Structure Searching From Scratch,
CVPR19(9051-9060).
IEEE DOI 2002
BibRef

Liang, X.D.[Xiao-Dan],
Learning Personalized Modular Network Guided by Structured Knowledge,
CVPR19(8936-8944).
IEEE DOI 2002
BibRef

Zeng, W.Y.[Wen-Yuan], Luo, W.J.[Wen-Jie], Suo, S.[Simon], Sadat, A.[Abbas], Yang, B.[Bin], Casas, S.[Sergio], Urtasun, R.[Raquel],
End-To-End Interpretable Neural Motion Planner,
CVPR19(8652-8661).
IEEE DOI 2002
BibRef

Blanchard, N.[Nathaniel], Kinnison, J.[Jeffery], RichardWebster, B.[Brandon], Bashivan, P.[Pouya], Scheirer, W.J.[Walter J.],
A Neurobiological Evaluation Metric for Neural Network Model Search,
CVPR19(5399-5408).
IEEE DOI 2002
BibRef

Yu, L.[Lu], Yazici, V.O.[Vacit Oguz], Liu, X.L.[Xia-Lei], van de Weijer, J.[Joost], Cheng, Y.M.[Yong-Mei], Ramisa, A.[Arnau],
Learning Metrics From Teachers: Compact Networks for Image Embedding,
CVPR19(2902-2911).
IEEE DOI 2002
BibRef

Ye, J.W.[Jing-Wen], Ji, Y.X.[Yi-Xin], Wang, X.C.[Xin-Chao], Ou, K.[Kairi], Tao, D.P.[Da-Peng], Song, M.L.[Ming-Li],
Student Becoming the Master: Knowledge Amalgamation for Joint Scene Parsing, Depth Estimation, and More,
CVPR19(2824-2833).
IEEE DOI 2002
Train one model that combines the knowledge of 2 other trained nets. BibRef

Zhang, Q.S.[Quan-Shi], Yang, Y.[Yu], Ma, H.T.[Hao-Tian], Wu, Y.N.[Ying Nian],
Interpreting CNNs via Decision Trees,
CVPR19(6254-6263).
IEEE DOI 2002
BibRef

Orekondy, T.[Tribhuvanesh], Schiele, B.[Bernt], Fritz, M.[Mario],
Knockoff Nets: Stealing Functionality of Black-Box Models,
CVPR19(4949-4958).
IEEE DOI 2002
BibRef

Morgado, P.[Pedro], Vasconcelos, N.M.[Nuno M.],
NetTailor: Tuning the Architecture, Not Just the Weights,
CVPR19(3039-3049).
IEEE DOI 2002
BibRef

Stergiou, A.[Alexandros], Kapidis, G.[Georgios], Kalliatakis, G.[Grigorios], Chrysoulas, C.[Christos], Veltkamp, R.[Remco], Poppe, R.[Ronald],
Saliency Tubes: Visual Explanations for Spatio-Temporal Convolutions,
ICIP19(1830-1834)
IEEE DOI 1910
3-D convolutions. How to visualize the results. Visual explanations, explainable convolutions, spatio-temporal feature representation BibRef

Rao, Z., He, M., Zhu, Z.,
Input-Perturbation-Sensitivity for Performance Analysis of CNNs on Image Recognition,
ICIP19(2496-2500)
IEEE DOI 1910
Global Sensitivity Analysis, Convolutional Neural Networks, Quality, Image Classification BibRef

Chen, Y., Saporta, A., Dapogny, A., Cord, M.,
Delving Deep into Interpreting Neural Nets with Piece-Wise Affine Representation,
ICIP19(609-613)
IEEE DOI 1910
Deep learning, deep neural networks, attribution, pixel contribution, bias BibRef

Lee, J., Kim, S.T., Ro, Y.M.,
Probenet: Probing Deep Networks,
ICIP19(3821-3825)
IEEE DOI 1910
ProbeNet, Deep network probing, Deep network interpretation, Human-understandable BibRef

Buhrmester, V.[Vanessa], Münch, D.[David], Bulatov, D.[Dimitri], Arens, M.[Michael],
Evaluating the Impact of Color Information in Deep Neural Networks,
IbPRIA19(I:302-316).
Springer DOI 1910
BibRef

de la Calle, A.[Alejandro], Tovar, J.[Javier], Almazán, E.J.[Emilio J.],
Geometric Interpretation of CNNs' Last Layer,
IbPRIA19(I:137-147).
Springer DOI 1910
BibRef

Rio-Torto, I.[Isabel], Fernandes, K.[Kelwin], Teixeira, L.F.[Luís F.],
Towards a Joint Approach to Produce Decisions and Explanations Using CNNs,
IbPRIA19(I:3-15).
Springer DOI 1910
BibRef

Ghojogh, B.[Benyamin], Karray, F.[Fakhri], Crowley, M.[Mark],
Backprojection for Training Feedforward Neural Networks in the Input and Feature Spaces,
ICIAR20(II:16-24).
Springer DOI 2007
BibRef

Kamma, K.[Koji], Isoda, Y.[Yuki], Inoue, S.[Sarimu], Wada, T.[Toshikazu],
Behavior-Based Compression for Convolutional Neural Networks,
ICIAR19(I:427-439).
Springer DOI 1909
Reducing redundancy. BibRef

Tartaglione, E.[Enzo], Grangetto, M.[Marco],
Take a Ramble into Solution Spaces for Classification Problems in Neural Networks,
CIAP19(I:345-355).
Springer DOI 1909
BibRef

Gu, J.D.[Jin-Dong], Yang, Y.C.[Yin-Chong], Tresp, V.[Volker],
Understanding Individual Decisions of CNNs via Contrastive Backpropagation,
ACCV18(III:119-134).
Springer DOI 1906
BibRef

Yu, T.[Tao], Long, H.[Huan], Hopcroft, J.E.[John E.],
Curvature-based Comparison of Two Neural Networks,
ICPR18(441-447)
IEEE DOI 1812
Manifolds, Biological neural networks, Tensile stress, Measurement, Matrix decomposition, Covariance matrices BibRef

Malakhova, K.[Katerina],
Representation of Categories in Filters of Deep Neural Networks,
Cognitive18(2054-20542)
IEEE DOI 1812
Visualization, Face, Feature extraction, Detectors, Biological neural networks, Neurons, Automobiles BibRef

Kanbak, C.[Can], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal],
Geometric Robustness of Deep Networks: Analysis and Improvement,
CVPR18(4441-4449)
IEEE DOI 1812
Robustness, Manifolds, Additives, Training, Atmospheric measurements, Particle measurements BibRef

Fawzi, A.[Alhussein], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal], Soatto, S.,
Empirical Study of the Topology and Geometry of Deep Networks,
CVPR18(3762-3770)
IEEE DOI 1812
Neural networks, Perturbation methods, Geometry, Network topology, Topology, Robustness, Optimization BibRef

Zhang, Z.M.[Zi-Ming], Wu, Y.W.[Yuan-Wei], Wang, G.H.[Guang-Hui],
BPGrad: Towards Global Optimality in Deep Learning via Branch and Pruning,
CVPR18(3301-3309)
IEEE DOI 1812
Optimization, Linear programming, Upper bound, Approximation algorithms, Biological neural networks, Convergence BibRef

Palacio, S., Folz, J., Hees, J., Raue, F., Borth, D., Dengel, A.,
What do Deep Networks Like to See?,
CVPR18(3108-3117)
IEEE DOI 1812
Image reconstruction, Training, Neural networks, Decoding, Task analysis, Convolution, Image coding BibRef

Mac Aodha, O., Su, S., Chen, Y., Perona, P., Yue, Y.,
Teaching Categories to Human Learners with Visual Explanations,
CVPR18(3820-3828)
IEEE DOI 1812
Education, Visualization, Task analysis, Adaptation models, Mathematical model, Computational modeling BibRef

Fong, R., Vedaldi, A.,
Net2Vec: Quantifying and Explaining How Concepts are Encoded by Filters in Deep Neural Networks,
CVPR18(8730-8738)
IEEE DOI 1812
Semantics, Visualization, Image segmentation, Probes, Neural networks, Task analysis, Training BibRef

Mascharka, D., Tran, P., Soklaski, R., Majumdar, A.,
Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning,
CVPR18(4942-4950)
IEEE DOI 1812
Visualization, Cognition, Task analysis, Neural networks, Image color analysis, Knowledge discovery, Automobiles BibRef

Wang, Y., Su, H., Zhang, B., Hu, X.,
Interpret Neural Networks by Identifying Critical Data Routing Paths,
CVPR18(8906-8914)
IEEE DOI 1812
Routing, Logic gates, Neural networks, Predictive models, Encoding, Semantics, Analytical models BibRef

Dong, Y.P.[Yin-Peng], Su, H.[Hang], Zhu, J.[Jun], Zhang, B.[Bo],
Improving Interpretability of Deep Neural Networks with Semantic Information,
CVPR17(975-983)
IEEE DOI 1711
Computational modeling, Decoding, Feature extraction, Neurons, Semantics, Visualization BibRef

Bau, D., Zhou, B., Khosla, A., Oliva, A., Torralba, A.B.,
Network Dissection: Quantifying Interpretability of Deep Visual Representations,
CVPR17(3319-3327)
IEEE DOI 1711
Detectors, Image color analysis, Image segmentation, Semantics, Training, Visualization BibRef

Hu, R.H.[Rong-Hang], Andreas, J.[Jacob], Darrell, T.J.[Trevor J.], Saenko, K.[Kate],
Explainable Neural Computation via Stack Neural Module Networks,
ECCV18(VII: 55-71).
Springer DOI 1810
BibRef

Rupprecht, C., Laina, I., Navab, N., Hager, G.D., Tombari, F.,
Guide Me: Interacting with Deep Networks,
CVPR18(8551-8561)
IEEE DOI 1812
Image segmentation, Visualization, Natural languages, Task analysis, Semantics, Head, Training BibRef

Zhang, Q., Wu, Y.N., Zhu, S.,
Interpretable Convolutional Neural Networks,
CVPR18(8827-8836)
IEEE DOI 1812
Visualization, Semantics, Integrated circuits, Convolutional neural networks, Task analysis, Training, Entropy BibRef

Khan, S.H.[Salman H.], Hayat, M.[Munawar], Porikli, F.M.[Fatih Murat],
Scene Categorization with Spectral Features,
ICCV17(5639-5649)
IEEE DOI 1802
Explain the network results. feature extraction, image classification, image representation, learning (artificial intelligence), natural scenes, transforms, Transforms BibRef

Worrall, D.E.[Daniel E.], Garbin, S.J.[Stephan J.], Turmukhambetov, D.[Daniyar], Brostow, G.J.[Gabriel J.],
Interpretable Transformations with Encoder-Decoder Networks,
ICCV17(5737-5746)
IEEE DOI 1802
I.e. rotation effects. Explain results. decoding, image coding, interpolation, transforms, complex transformation encoding, BibRef

Sankaranarayanan, S.[Swami], Jain, A.[Arpit], Lim, S.N.[Ser Nam],
Guided Perturbations: Self-Corrective Behavior in Convolutional Neural Networks,
ICCV17(3582-3590)
IEEE DOI 1802
Perturb the inputs, understand NN results. Explain. image classification, image representation, neural nets, CIFAR10 datasets, MNIST, PASCAL VOC dataset, Semantics BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Forgetting, Learning without Forgetting, Convolutional Neural Networks .


Last update: Oct 22, 2024 at 22:09:59