Chung, F.L.,
Wang, S.,
Deng, Z.,
Hu, D.,
CATSMLP: Toward a Robust and Interpretable Multilayer Perceptron With
Sigmoid Activation Functions,
SMC-B(36), No. 6, December 2006, pp. 1319-1331.
IEEE DOI
0701
BibRef
Fan, C.X.[Chun-Xiao],
Li, Y.[Yang],
Tian, L.[Lei],
Li, Y.[Yong],
Rectifying Transformation Networks for Transformation-Invariant
Representations with Power Law,
IEICE(E102-D), No. 3, March 2019, pp. 675-679.
WWW Link.
1904
CNN to rectify learned feature representations.
BibRef
Zhou, B.[Bolei],
Bau, D.[David],
Oliva, A.[Aude],
Torralba, A.B.[Antonio B.],
Interpreting Deep Visual Representations via Network Dissection,
PAMI(41), No. 9, Sep. 2019, pp. 2131-2145.
IEEE DOI
1908
Method quantifies the interpretability of CNN representations.
Visualization, Detectors, Training, Image color analysis,
Task analysis, Image segmentation, Semantics,
interpretable machine learning
BibRef
Liu, R.S.[Ri-Sheng],
Cheng, S.C.[Shi-Chao],
Ma, L.[Long],
Fan, X.[Xin],
Luo, Z.X.[Zhong-Xuan],
Deep Proximal Unrolling:
Algorithmic Framework, Convergence Analysis and Applications,
IP(28), No. 10, October 2019, pp. 5013-5026.
IEEE DOI
1909
Task analysis, Optimization, Convergence, Mathematical model,
Network architecture, Data models,
low-level computer vision
BibRef
Hu, S.X.[Shell Xu],
Zagoruyko, S.[Sergey],
Komodakis, N.[Nikos],
Exploring weight symmetry in deep neural networks,
CVIU(187), 2019, pp. 102786.
Elsevier DOI
1909
BibRef
Gao, X.J.[Xin-Jian],
Zhang, Z.[Zhao],
Mu, T.T.[Ting-Ting],
Zhang, X.D.[Xu-Dong],
Cui, C.R.[Chao-Ran],
Wang, M.[Meng],
Self-attention driven adversarial similarity learning network,
PR(105), 2020, pp. 107331.
Elsevier DOI
2006
Self-attention mechanism, Adversarial loss,
Similarity learning network, Explainable deep learning
BibRef
Fu, W.J.[Wei-Jie],
Wang, M.[Meng],
Du, M.N.[Meng-Nan],
Liu, N.H.[Ning-Hao],
Hao, S.J.[Shi-Jie],
Hu, X.[Xia],
Differentiated Explanation of Deep Neural Networks With Skewed
Distributions,
PAMI(44), No. 6, June 2022, pp. 2909-2922.
IEEE DOI
2205
Generators, Perturbation methods, Tuning, Neural networks,
Convolution, Visualization, Training, Deep neural networks,
differentiated saliency maps
BibRef
Rickmann, A.M.[Anne-Marie],
Roy, A.G.[Abhijit Guha],
Sarasua, I.[Ignacio],
Wachinger, C.[Christian],
Recalibrating 3D ConvNets With Project Excite,
MedImg(39), No. 7, July 2020, pp. 2461-2471.
IEEE DOI
2007
Biomedical imaging, Image segmentation, Task analysis,
volumetric segmentation
BibRef
Sarasua, I.[Ignacio],
Pölsterl, S.,
Wachinger, C.[Christian],
Recalibration of Neural Networks for Point Cloud Analysis,
3DV20(443-451)
IEEE DOI
2102
Shape, Solid modeling, Calibration,
Feature extraction, Image analysis, Computer architecture
BibRef
Wang, Y.,
Su, H.,
Zhang, B.,
Hu, X.,
Learning Reliable Visual Saliency For Model Explanations,
MultMed(22), No. 7, July 2020, pp. 1796-1807.
IEEE DOI
2007
Visualization, Reliability, Predictive models, Task analysis,
Perturbation methods, Backpropagation, Real-time systems,
deep learning
BibRef
Patro, B.N.[Badri N.],
Lunayach, M.[Mayank],
Namboodiri, V.P.[Vinay P.],
Uncertainty Class Activation Map (U-CAM) Using Gradient Certainty
Method,
IP(30), 2021, pp. 1910-1924.
IEEE DOI
2101
Uncertainty, Visualization, Predictive models, Task analysis,
Knowledge discovery, Mathematical model, Deep learning,
epistemic uncertainty
BibRef
Patro, B.N.[Badri N.],
Lunayach, M.[Mayank],
Patel, S.[Shivansh],
Namboodiri, V.P.[Vinay P.],
U-CAM:
Visual Explanation Using Uncertainty Based Class Activation Maps,
ICCV19(7443-7452)
IEEE DOI
2004
inference mechanisms, learning (artificial intelligence),
visual explanation, uncertainty based class activation maps, Data models
BibRef
Monga, V.,
Li, Y.,
Eldar, Y.C.,
Algorithm Unrolling: Interpretable, Efficient Deep Learning for
Signal and Image Processing,
SPMag(38), No. 2, March 2021, pp. 18-44.
IEEE DOI
2103
Training data, Systematics, Neural networks,
Signal processing algorithms, Performance gain,
Machine learning
BibRef
Van Luong, H.,
Joukovsky, B.,
Deligiannis, N.,
Designing Interpretable Recurrent Neural Networks for Video
Reconstruction via Deep Unfolding,
IP(30), 2021, pp. 4099-4113.
IEEE DOI
2104
Image reconstruction, Minimization, Recurrent neural networks,
Image coding, Signal reconstruction, Task analysis,
sequential frame reconstruction
BibRef
La Gatta, V.[Valerio],
Moscato, V.[Vincenzo],
Postiglione, M.[Marco],
Sperlì, G.[Giancarlo],
PASTLE: Pivot-aided space transformation for local explanations,
PRL(149), 2021, pp. 67-74.
Elsevier DOI
2108
eXplainable artificial intelligence,
Interpretable machine learning, Artificial intelligence
BibRef
Dazeley, R.[Richard],
Vamplew, P.[Peter],
Foale, C.[Cameron],
Young, C.[Charlotte],
Aryal, S.I.[Sun-Il],
Cruz, F.[Francisco],
Levels of explainable artificial intelligence for human-aligned
conversational explanations,
AI(299), 2021, pp. 103525.
Elsevier DOI
2108
Explainable Artificial Intelligence (XAI), Broad-XAI,
Interpretable Machine Learning (IML), Human-Computer Interaction (HCI)
BibRef
Yang, Z.B.[Ze-Bin],
Zhang, A.[Aijun],
Sudjianto, A.[Agus],
GAMI-Net: An explainable neural network based on generalized additive
models with structured interactions,
PR(120), 2021, pp. 108192.
Elsevier DOI
2109
Explainable neural network, Generalized additive model,
Pairwise interaction, Interpretability constraints
BibRef
Ivanovs, M.[Maksims],
Kadikis, R.[Roberts],
Ozols, K.[Kaspars],
Perturbation-based methods for explaining deep neural networks:
A survey,
PRL(150), 2021, pp. 228-234.
Elsevier DOI
2109
Survey, Explainable Networks, Deep learning, Explainable artificial intelligence, Perturbation-based methods
BibRef
Zhu, S.[Sijie],
Yang, T.[Taojiannan],
Chen, C.[Chen],
Visual Explanation for Deep Metric Learning,
IP(30), 2021, pp. 7593-7607.
IEEE DOI
2109
Measurement, Visualization, Mouth,
Image retrieval, Computational modeling, Perturbation methods,
activation decomposition
BibRef
Cui, Y.[Yunbo],
Du, Y.T.[You-Tian],
Wang, X.[Xue],
Wang, H.[Hang],
Su, C.[Chang],
Leveraging attention-based visual clue extraction for image
classification,
IET-IPR(15), No. 12, 2021, pp. 2937-2947.
DOI Link
2109
What features are really used.
BibRef
Dombrowski, A.K.[Ann-Kathrin],
Anders, C.J.[Christopher J.],
Müller, K.R.[Klaus-Robert],
Kessel, P.[Pan],
Towards robust explanations for deep neural networks,
PR(121), 2022, pp. 108194.
Elsevier DOI
2109
Explanation method, Saliency map, Adversarial attacks,
Manipulation, Neural networks
BibRef
Losch, M.M.[Max Maria],
Fritz, M.[Mario],
Schiele, B.[Bernt],
Semantic Bottlenecks: Quantifying and Improving Inspectability of Deep
Representations,
IJCV(129), No. 11, November 2021, pp. 3136-3153.
Springer DOI
2110
BibRef
Earlier:
GCPR20(15-29).
Springer DOI
2110
BibRef
Kook, L.[Lucas],
Herzog, L.[Lisa],
Hothorn, T.[Torsten],
Dürr, O.[Oliver],
Sick, B.[Beate],
Deep and interpretable regression models for ordinal outcomes,
PR(122), 2022, pp. 108263.
Elsevier DOI
2112
Deep learning, Interpretability, Distributional regression,
Ordinal regression, Transformation models
BibRef
Kim, J.[Junho],
Kim, S.[Seongyeop],
Kim, S.T.[Seong Tae],
Ro, Y.M.[Yong Man],
Robust Perturbation for Visual Explanation: Cross-Checking Mask
Optimization to Avoid Class Distortion,
IP(31), 2022, pp. 301-313.
IEEE DOI
2112
Distortion, Perturbation methods, Visualization, Optimization, Cats,
Bicycles, Automobiles, Visual explanation, attribution map,
mask perturbation
BibRef
Sokolovska, N.[Nataliya],
Behbahani, Y.M.[Yasser Mohseni],
Vanishing boosted weights: A consistent algorithm to learn
interpretable rules,
PRL(152), 2021, pp. 63-69.
Elsevier DOI
2112
Machine learning, Fine-tuning procedure,
Interpretable sparse models, Decision stumps
BibRef
Narwaria, M.[Manish],
Does explainable machine learning uncover the black box in vision
applications?,
IVC(118), 2022, pp. 104353.
Elsevier DOI
2202
Explainable machine learning, Deep learning, Vision, Signal processing
BibRef
Ben Sahel, Y.[Yair],
Bryan, J.P.[John P.],
Cleary, B.[Brian],
Farhi, S.L.[Samouil L.],
Eldar, Y.C.[Yonina C.],
Deep Unrolled Recovery in Sparse Biological Imaging:
Achieving fast, accurate results,
SPMag(39), No. 2, March 2022, pp. 45-57.
IEEE DOI
2203
Architectures that combine the interpretability of iterative algorithms
with the performance of deep learning.
Location awareness, Learning systems, Biological system modeling,
Algorithm design and analysis, Biomedical imaging, Performance gain
BibRef
Zheng, T.Y.[Tian-You],
Wang, Q.[Qiang],
Shen, Y.[Yue],
Ma, X.[Xiang],
Lin, X.T.[Xiao-Tian],
High-resolution rectified gradient-based visual explanations for
weakly supervised segmentation,
PR(129), 2022, pp. 108724.
Elsevier DOI
2206
BibRef
Cooper, J.[Jessica],
Arandjelovic, O.[Ognjen],
Harrison, D.J.[David J.],
Believe the HiPe: Hierarchical perturbation for fast, robust, and
model-agnostic saliency mapping,
PR(129), 2022, pp. 108743.
Elsevier DOI
2206
XAI, AI safety, Saliency mapping, Deep learning explanation,
Interpretability, Prediction attribution
BibRef
Mochaourab, R.[Rami],
Venkitaraman, A.[Arun],
Samsten, I.[Isak],
Papapetrou, P.[Panagiotis],
Rojas, C.R.[Cristian R.],
Post Hoc Explainability for Time Series Classification:
Toward a signal processing perspective,
SPMag(39), No. 4, July 2022, pp. 119-129.
IEEE DOI
2207
Tracking, Solid modeling, Time series analysis, Neural networks,
Speech recognition, Transforms, Signal processing
BibRef
Ho, T.K.[Tin Kam],
Luo, Y.F.[Yen-Fu],
Guido, R.C.[Rodrigo Capobianco],
Explainability of Methods for Critical Information Extraction From
Clinical Documents: A survey of representative works,
SPMag(39), No. 4, July 2022, pp. 96-106.
IEEE DOI
2207
Vocabulary, Symbols, Natural language processing, Cognition,
Real-time systems, Data mining, Artificial intelligence
BibRef
Letzgus, S.[Simon],
Wagner, P.[Patrick],
Lederer, J.[Jonas],
Samek, W.[Wojciech],
Müller, K.R.[Klaus-Robert],
Montavon, G.[Grégoire],
Toward Explainable Artificial Intelligence for Regression Models: A
methodological perspective,
SPMag(39), No. 4, July 2022, pp. 40-58.
IEEE DOI
2207
Deep learning, Neural networks, Predictive models,
Medical diagnosis, Task analysis
BibRef
Al Regib, G.[Ghassan],
Prabhushankar, M.[Mohit],
Explanatory Paradigms in Neural Networks: Towards relevant and
contextual explanations,
SPMag(39), No. 4, July 2022, pp. 59-72.
IEEE DOI
2207
Correlation, Codes, Neural networks, Decision making,
Probabilistic logic, Cognition, Reproducibility of results, Context awareness
BibRef
Nielsen, I.E.[Ian E.],
Dera, D.[Dimah],
Rasool, G.[Ghulam],
Ramachandran, R.P.[Ravi P.],
Bouaynaya, N.C.[Nidhal Carla],
Robust Explainability: A tutorial on gradient-based attribution
methods for deep neural networks,
SPMag(39), No. 4, July 2022, pp. 73-84.
IEEE DOI
2207
Deep learning, Neural networks, Tutorials, Predictive models, Reproducibility of results
BibRef
Das, P.[Payel],
Varshney, L.R.[Lav R.],
Explaining Artificial Intelligence Generation and Creativity: Human
interpretability for novel ideas and artifacts,
SPMag(39), No. 4, July 2022, pp. 85-95.
IEEE DOI
2207
Training data, Signal processing algorithms,
Intellectual property, Gaussian distribution,
Creativity
BibRef
Temenos, A.[Anastasios],
Tzortzis, I.N.[Ioannis N.],
Kaselimi, M.[Maria],
Rallis, I.[Ioannis],
Doulamis, A.[Anastasios],
Doulamis, N.[Nikolaos],
Novel Insights in Spatial Epidemiology Utilizing Explainable AI (XAI)
and Remote Sensing,
RS(14), No. 13, 2022, pp. xx-yy.
DOI Link
2208
BibRef
Corbière, C.[Charles],
Thome, N.[Nicolas],
Saporta, A.[Antoine],
Vu, T.H.[Tuan-Hung],
Cord, M.[Matthieu],
Pérez, P.[Patrick],
Confidence Estimation via Auxiliary Models,
PAMI(44), No. 10, October 2022, pp. 6043-6055.
IEEE DOI
2209
Task analysis, Estimation, Neural networks, Semantics,
Predictive models, Uncertainty, Training, Confidence estimation,
semantic image segmentation
BibRef
Dietterich, T.G.[Thomas G.],
Guyer, A.[Alex],
The familiarity hypothesis:
Explaining the behavior of deep open set methods,
PR(132), 2022, pp. 108931.
Elsevier DOI
2209
Anomaly detection, Open set learning, Object recognition,
Novel category detection, Representation learning, Deep learning
BibRef
Jung, H.G.[Hong-Gyu],
Kang, S.H.[Sin-Han],
Kim, H.D.[Hee-Dong],
Won, D.O.[Dong-Ok],
Lee, S.W.[Seong-Whan],
Counterfactual explanation based on gradual construction for deep
networks,
PR(132), 2022, pp. 108958.
Elsevier DOI
2209
Explainable AI, Counterfactual explanation, Interpretability,
Model-agnostics, Generative model
BibRef
Schnake, T.[Thomas],
Eberle, O.[Oliver],
Lederer, J.[Jonas],
Nakajima, S.[Shinichi],
Schütt, K.T.[Kristof T.],
Müller, K.R.[Klaus-Robert],
Montavon, G.[Grégoire],
Higher-Order Explanations of Graph Neural Networks via Relevant Walks,
PAMI(44), No. 11, November 2022, pp. 7581-7596.
IEEE DOI
2210
Graph neural networks, Neural networks, Predictive models,
Optimization, Taylor series, Feature extraction, Adaptation models,
explainable machine learning
BibRef
Giryes, R.[Raja],
A Function Space Analysis of Finite Neural Networks With Insights
From Sampling Theory,
PAMI(45), No. 1, January 2023, pp. 27-37.
IEEE DOI
2212
Neural networks, Training data, Discrete Fourier transforms,
Interpolation, Training, Transforms, Splines (mathematics),
band-limited mappings
BibRef
Fu, Y.W.[Yan-Wei],
Liu, C.[Chen],
Li, D.H.[Dong-Hao],
Zhong, Z.Y.[Zu-Yuan],
Sun, X.W.[Xin-Wei],
Zeng, J.S.[Jin-Shan],
Yao, Y.[Yuan],
Exploring Structural Sparsity of Deep Networks Via Inverse Scale
Spaces,
PAMI(45), No. 2, February 2023, pp. 1749-1765.
IEEE DOI
2301
Training, Computational modeling, Neural networks, Deep learning,
Convergence, Mirrors, Couplings, Early stopping, growing network,
structural sparsity
BibRef
Wang, X.[Xiang],
Wu, Y.X.[Ying-Xin],
Zhang, A.[An],
Feng, F.[Fuli],
He, X.N.[Xiang-Nan],
Chua, T.S.[Tat-Seng],
Reinforced Causal Explainer for Graph Neural Networks,
PAMI(45), No. 2, February 2023, pp. 2297-2309.
IEEE DOI
2301
Predictive models, Task analysis, Computational modeling,
Analytical models, Visualization, Representation learning,
cause-effect
BibRef
Gautam, S.[Srishti],
Höhne, M.M.C.[Marina M.C.],
Hansen, S.[Stine],
Jenssen, R.[Robert],
Kampffmeyer, M.[Michael],
This looks More Like that: Enhancing Self-Explaining Models by
Prototypical Relevance Propagation,
PR(136), 2023, pp. 109172.
Elsevier DOI
2301
Self-explaining models, Explainable AI, Deep learning,
Spurious Correlation Detection
BibRef
Sousa, E.V.[Eduardo Vera],
Vasconcelos, C.N.[Cristina Nader],
Fernandes, L.A.F.[Leandro A.F.],
An analysis of ConformalLayers' robustness to corruptions in natural
images,
PRL(166), 2023, pp. 190-197.
Elsevier DOI
2302
BibRef
Alfarra, M.[Motasem],
Bibi, A.[Adel],
Hammoud, H.[Hasan],
Gaafar, M.[Mohamed],
Ghanem, B.[Bernard],
On the Decision Boundaries of Neural Networks:
A Tropical Geometry Perspective,
PAMI(45), No. 4, April 2023, pp. 5027-5037.
IEEE DOI
2303
Geometry, Neural networks, Standards, Optimization, Task analysis,
Generators, Complexity theory, Adversarial attacks,
tropical geometry
BibRef
Yuan, H.[Hao],
Yu, H.Y.[Hai-Yang],
Gui, S.[Shurui],
Ji, S.W.[Shui-Wang],
Explainability in Graph Neural Networks: A Taxonomic Survey,
PAMI(45), No. 5, May 2023, pp. 5782-5799.
IEEE DOI
2304
Predictive models, Task analysis, Taxonomy,
Biological system modeling, Graph neural networks, Data models, survey
BibRef
Wickstrøm, K.K.[Kristoffer K.],
Trosten, D.J.[Daniel J.],
Løkse, S.[Sigurd],
Boubekki, A.[Ahcène],
Mikalsen, K.Ø.[Karl Øyvind],
Kampffmeyer, M.C.[Michael C.],
Jenssen, R.[Robert],
RELAX: Representation Learning Explainability,
IJCV(131), No. 6, June 2023, pp. 1584-1610.
Springer DOI
2305
BibRef
Qiao, S.S.[Shi-Shi],
Wang, R.P.[Rui-Ping],
Shan, S.G.[Shi-Guang],
Chen, X.L.[Xi-Lin],
Hierarchical disentangling network for object representation learning,
PR(140), 2023, pp. 109539.
Elsevier DOI
2305
Object understanding, Hierarchical learning,
Representation disentanglement, Network interpretability
BibRef
Lin, C.S.[Ci-Siang],
Wang, Y.C.F.[Yu-Chiang Frank],
Describe, Spot and Explain: Interpretable Representation Learning for
Discriminative Visual Reasoning,
IP(32), 2023, pp. 2481-2492.
IEEE DOI
2305
Prototypes, Visualization, Transformers, Task analysis, Heating systems,
Training, Annotations, Interpretable prototypes, deep learning
BibRef
Nguyen, K.P.[Kevin P.],
Treacher, A.H.[Alex H.],
Montillo, A.A.[Albert A.],
Adversarially-Regularized Mixed Effects Deep Learning (ARMED) Models
Improve Interpretability, Performance, and Generalization on
Clustered (non-iid) Data,
PAMI(45), No. 7, July 2023, pp. 8081-8093.
IEEE DOI
2306
Data models, Biological system modeling, Deep learning,
Adaptation models, Training, Predictive models, Bayes methods, clinical data
BibRef
Wang, P.[Pei],
Vasconcelos, N.M.[Nuno M.],
A Generalized Explanation Framework for Visualization of Deep
Learning Model Predictions,
PAMI(45), No. 8, August 2023, pp. 9265-9283.
IEEE DOI
2307
Birds, Visualization, Deep learning, Task analysis, Protocols,
Perturbation methods, Attribution, confidence scores,
explainable AI
BibRef
Xu, J.J.[Jian-Jin],
Zhang, Z.X.[Zhao-Xiang],
Hu, X.L.[Xiao-Lin],
Extracting Semantic Knowledge From GANs With Unsupervised Learning,
PAMI(45), No. 8, August 2023, pp. 9654-9668.
IEEE DOI
2307
Semantics, Semantic segmentation,
Generative adversarial networks, Clustering algorithms,
unsupervised learning
BibRef
Iida, T.[Tsumugi],
Komatsu, T.[Takumi],
Kaneda, K.[Kanta],
Hirakawa, T.[Tsubasa],
Yamashita, T.[Takayoshi],
Fujiyoshi, H.[Hironobu],
Sugiura, K.[Komei],
Visual Explanation Generation Based on Lambda Attention Branch Networks,
ACCV22(II:475-490).
Springer DOI
2307
BibRef
Li, X.[Xin],
Lei, H.J.[Hao-Jie],
Zhang, L.[Li],
Wang, M.Z.[Ming-Zhong],
Differentiable Logic Policy for Interpretable Deep Reinforcement
Learning: A Study From an Optimization Perspective,
PAMI(45), No. 10, October 2023, pp. 11654-11667.
IEEE DOI
2310
BibRef
Wargnier-Dauchelle, V.[Valentine],
Grenier, T.[Thomas],
Durand-Dubief, F.[Françoise],
Cotton, F.[François],
Sdika, M.[Michaël],
A Weakly Supervised Gradient Attribution Constraint for Interpretable
Classification and Anomaly Detection,
MedImg(42), No. 11, November 2023, pp. 3336-3347.
IEEE DOI
2311
BibRef
Asnani, V.[Vishal],
Yin, X.[Xi],
Hassner, T.[Tal],
Liu, X.M.[Xiao-Ming],
Reverse Engineering of Generative Models:
Inferring Model Hyperparameters From Generated Images,
PAMI(45), No. 12, December 2023, pp. 15477-15493.
IEEE DOI
2311
BibRef
Shi, W.[Wei],
Zhang, W.T.[Wen-Tao],
Zheng, W.S.[Wei-Shi],
Wang, R.X.[Rui-Xuan],
PAMI: Partition Input and Aggregate Outputs for Model Interpretation,
PR(145), 2024, pp. 109898.
Elsevier DOI Code:
WWW Link.
2311
Interpretation, Visualization, Post-hoc
BibRef
Echeberria-Barrio, X.[Xabier],
Gil-Lerchundi, A.[Amaia],
Mendialdua, I.[Iñigo],
Orduna-Urrutia, R.[Raul],
Topological safeguard for evasion attack interpreting the neural
networks' behavior,
PR(147), 2024, pp. 110130.
Elsevier DOI
2312
Artificial neural network interpretability, cybersecurity, countermeasure
BibRef
Brocki, L.[Lennart],
Chung, N.C.[Neo Christopher],
Feature perturbation augmentation for reliable evaluation of
importance estimators in neural networks,
PRL(176), 2023, pp. 131-139.
Elsevier DOI Code:
WWW Link.
2312
Deep neural network, Artificial intelligence, Interpretability,
Explainability, Fidelity, Importance estimator, Saliency map,
Feature perturbation
BibRef
Joukovsky, B.[Boris],
Eldar, Y.C.[Yonina C.],
Deligiannis, N.[Nikos],
Interpretable Neural Networks for Video Separation:
Deep Unfolding RPCA With Foreground Masking,
IP(33), 2024, pp. 108-122.
IEEE DOI
2312
BibRef
Dan, T.T.[Ting-Ting],
Kim, M.[Minjeong],
Kim, W.H.[Won Hwa],
Wu, G.R.[Guo-Rong],
Developing Explainable Deep Model for Discovering Novel Control
Mechanism of Neuro-Dynamics,
MedImg(43), No. 1, January 2024, pp. 427-438.
IEEE DOI
2401
BibRef
Apicella, A.[Andrea],
Isgrò, F.[Francesco],
Prevete, R.[Roberto],
Hidden classification layers: Enhancing linear separability between
classes in neural networks layers,
PRL(177), 2024, pp. 69-74.
Elsevier DOI
2401
Neural networks, Hidden layers, Hidden representations, Linearly separable
BibRef
Apicella, A.[Andrea],
Giugliano, S.[Salvatore],
Isgrò, F.[Francesco],
Prevete, R.[Roberto],
A General Approach to Compute the Relevance of Middle-level Input
Features,
EDL-AI20(189-203).
Springer DOI
2103
BibRef
Li, Y.S.[Yan-Shan],
Liang, H.J.[Hua-Jie],
Zheng, H.F.[Hong-Fang],
Yu, R.[Rui],
CR-CAM: Generating explanations for deep neural networks by
contrasting and ranking features,
PR(149), 2024, pp. 110251.
Elsevier DOI
2403
Class Activation Mapping (CAM), Manifold space, Interpretation
BibRef
Suresh, S.,
Das, B.,
Abrol, V.,
Dutta-Roy, S.,
On characterizing the evolution of embedding space of neural networks
using algebraic topology,
PRL(179), 2024, pp. 165-171.
Elsevier DOI
2403
Topology, Deep learning, Transfer learning
BibRef
Peng, Y.T.[Yi-Tao],
He, L.H.[Liang-Hua],
Hu, D.[Die],
Liu, Y.H.[Yi-Hang],
Yang, L.Z.[Long-Zhen],
Shang, S.H.[Shao-Hua],
Hierarchical Dynamic Masks for Visual Explanation of Neural Networks,
MultMed(26), 2024, pp. 5311-5325.
IEEE DOI
2404
Neural networks, Decision making, Visualization, Reliability,
Predictive models, Location awareness, model-agnostic
BibRef
Dombrowski, A.K.[Ann-Kathrin],
Gerken, J.E.[Jan E.],
Müller, K.R.[Klaus-Robert],
Kessel, P.[Pan],
Diffeomorphic Counterfactuals With Generative Models,
PAMI(46), No. 5, May 2024, pp. 3257-3274.
IEEE DOI
2404
Explain classification decisions of neural networks in a human
interpretable way.
Manifolds, Geometry, Computational modeling, Semantics, Data models,
Artificial intelligence, Task analysis, generative models
BibRef
Wang, J.Q.[Jia-Qi],
Liu, H.F.[Hua-Feng],
Jing, L.P.[Li-Ping],
Transparent Embedding Space for Interpretable Image Recognition,
CirSysVideo(34), No. 5, May 2024, pp. 3204-3219.
IEEE DOI
2405
Transformers, Prototypes, Semantics, Visualization,
Image recognition, Cognition, Task analysis, Explainable AI
BibRef
Wang, J.Q.[Jia-Qi],
Liu, H.F.[Hua-Feng],
Wang, X.Y.[Xin-Yue],
Jing, L.P.[Li-Ping],
Interpretable Image Recognition by Constructing Transparent Embedding
Space,
ICCV21(875-884)
IEEE DOI
2203
Manifolds, Bridges, Image recognition, Codes, Cognitive processes,
Neural networks, Explainable AI, Fairness, accountability,
Visual reasoning and logical representation
BibRef
Luo, J.Q.[Jia-Qi],
Xu, S.X.[Shi-Xin],
NCART: Neural Classification and Regression Tree for tabular data,
PR(154), 2024, pp. 110578.
Elsevier DOI Code:
WWW Link.
2406
Tabular data, Neural networks, Interpretability,
Classification and Regression Tree
BibRef
Islam, M.T.[Md Tauhidul],
Xing, L.[Lei],
Deciphering the Feature Representation of Deep Neural Networks for
High-Performance AI,
PAMI(46), No. 8, August 2024, pp. 5273-5287.
IEEE DOI
2407
Kernel, Feature extraction, Measurement, Euclidean distance,
Principal component analysis, Transformers, X-ray imaging,
interpretability
BibRef
Remusati, H.[Héloïse],
Caillec, J.M.L.[Jean-Marc Le],
Schneider, J.Y.[Jean-Yves],
Petit-Frère, J.[Jacques],
Merlet, T.[Thomas],
Generative Adversarial Networks for SAR Automatic Target Recognition
and Classification Models Enhanced Explainability: Perspectives and
Challenges,
RS(16), No. 14, 2024, pp. 2569.
DOI Link
2408
BibRef
Valle, M.E.[Marcos Eduardo],
Understanding Vector-Valued Neural Networks and Their Relationship
With Real and Hypercomplex-Valued Neural Networks: Incorporating
intercorrelation between features into neural networks,
SPMag(41), No. 3, May 2024, pp. 49-58.
IEEE DOI
2408
[Hypercomplex Signal and Image Processing]
Training data, Deep learning, Image processing, Neural networks,
Parallel processing, Vectors, Hypercomplex, Multidimensional signal processing
BibRef
Shin, Y.M.[Yong-Min],
Kim, S.W.[Sun-Woo],
Shin, W.Y.[Won-Yong],
PAGE: Prototype-Based Model-Level Explanations for Graph Neural
Networks,
PAMI(46), No. 10, October 2024, pp. 6559-6576.
IEEE DOI
2409
Prototypes, Graph neural networks, Computational modeling,
Predictive models, Training, Mathematical models, prototype graph
BibRef
Zhuo, Y.[Yue],
Ge, Z.Q.[Zhi-Qiang],
IG2: Integrated Gradient on Iterative Gradient Path for Feature
Attribution,
PAMI(46), No. 11, November 2024, pp. 7173-7190.
IEEE DOI
2410
Predictive models, Noise, Semiconductor device modeling,
Perturbation methods, Explainable AI, Vectors, integrated gradient
BibRef
Chormai, P.[Pattarawat],
Herrmann, J.[Jan],
Müller, K.R.[Klaus-Robert],
Montavon, G.[Grégoire],
Disentangled Explanations of Neural Network Predictions by Finding
Relevant Subspaces,
PAMI(46), No. 11, November 2024, pp. 7283-7299.
IEEE DOI
2410
Predictive models, Feature extraction, Explainable AI,
Neural networks, Analytical models, Visualization, Standards,
subspace analysis
BibRef
Su, X.Z.[Xing-Zhe],
Qiang, W.W.[Wen-Wen],
Hu, J.[Jie],
Zheng, C.W.[Chang-Wen],
Wu, F.G.[Feng-Ge],
Sun, F.C.[Fu-Chun],
Intriguing Property and Counterfactual Explanation of GAN for Remote
Sensing Image Generation,
IJCV(132), No. 11, November 2024, pp. 5192-5216.
Springer DOI
2411
BibRef
Xiao, T.X.[Ting-Xiong],
Zhang, W.[Weihang],
Cheng, Y.X.[Yu-Xiao],
Suo, J.[Jinli],
HOPE: High-Order Polynomial Expansion of Black-Box Neural Networks,
PAMI(46), No. 12, December 2024, pp. 7924-7939.
IEEE DOI
2411
Artificial neural networks, Polynomials, Taylor series,
Biological neural networks, Computational modeling, deep learning
BibRef
Kowal, M.[Matthew],
Siam, M.[Mennatullah],
Islam, M.A.[Md Amirul],
Bruce, N.D.B.[Neil D. B.],
Wildes, R.P.[Richard P.],
Derpanis, K.G.[Konstantinos G.],
Quantifying and Learning Static vs. Dynamic Information in Deep
Spatiotemporal Networks,
PAMI(47), No. 1, January 2025, pp. 190-205.
IEEE DOI
2412
BibRef
Earlier:
A Deeper Dive Into What Deep Spatiotemporal Networks Encode:
Quantifying Static vs. Dynamic Information,
CVPR22(13979-13989)
IEEE DOI
2210
Spatiotemporal phenomena, Dynamics, Training,
Computational modeling, Instance segmentation,
action recognition.
Visualization, Heuristic algorithms,
Object segmentation, grouping and shape analysis
BibRef
Wu, H.F.[He-Feng],
Jiang, H.[Hao],
Wang, K.[Keze],
Tang, Z.[Ziyi],
He, X.H.[Xiang-Huan],
Lin, L.[Liang],
Improving Network Interpretability via Explanation Consistency
Evaluation,
MultMed(26), 2024, pp. 11261-11273.
IEEE DOI
2412
Training, Semantics, Predictive models, Visualization,
Heating systems, Birds, Accuracy, neural networks
BibRef
Batreddy, S.[Subbareddy],
Mishra, P.[Pushkal],
Kakarla, Y.[Yaswanth],
Siripuram, A.[Aditya],
Inpainting-Driven Graph Learning via Explainable Neural Networks,
SPLetters(32), 2025, pp. 111-115.
IEEE DOI
2501
Signal processing algorithms, Neural networks, Optimization,
Laplace equations, Data models, Covariance matrices, unrolling
BibRef
Nam, W.J.[Woo-Jeoung],
Lee, S.W.[Seong-Whan],
Illuminating Salient Contributions in Neuron Activation With
Attribution Equilibrium,
PAMI(47), No. 2, February 2025, pp. 1120-1131.
IEEE DOI
2501
Neurons, Visualization, Computational modeling,
Artificial neural networks, Backpropagation,
visual explanation
BibRef
Zhang, R.[Rui],
Du, X.[Xingbo],
Yan, J.C.[Jun-Chi],
Zhang, S.H.[Shi-Hua],
The Decoupling Concept Bottleneck Model,
PAMI(47), No. 2, February 2025, pp. 1250-1265.
IEEE DOI
2501
Interpretable neural network.
Distortion, Human computer interaction, Training, Neural networks,
Accuracy, Predictive models, Mathematical models, Birds,
interpretable AI
BibRef
Rong, Y.B.[Yi-Biao],
Identifying Patterns for Convolutional Neural Networks in Regression
Tasks to Make Specific Predictions via Genetic Algorithms,
SPLetters(32), 2025, pp. 626-630.
IEEE DOI
2502
Convolutional neural networks, Genetic algorithms, Training,
Convolution, Analytical models, Vectors, Predictive models, regression
BibRef
Liu, W.Q.[Wei-Quan],
Liu, M.H.[Ming-Hao],
Zheng, S.J.[Shi-Jun],
Shen, S.Q.[Si-Qi],
Bian, X.S.[Xue-Sheng],
Zang, Y.[Yu],
Zhong, P.[Ping],
Wang, C.[Cheng],
Interpreting Hidden Semantics in the Intermediate Layers of 3D Point
Cloud Classification Neural Network,
MultMed(27), 2025, pp. 965-977.
IEEE DOI
2502
Semantics, Point cloud compression, Neurons,
Biological neural networks, Visualization, Task analysis, hidden semantics
BibRef
Ning, C.[Chao],
Gan, H.P.[Hong-Ping],
SS ViT: Observing pathologies of multi-layer perceptron weights and
re-setting vision transformer,
PR(162), 2025, pp. 111422.
Elsevier DOI Code:
WWW Link.
2503
Vision transformer, Multi-layer perceptron,
Fully connected layer, Image classification
BibRef
Chen, Y.M.[Yi-Meng],
Wang, B.[Bo],
Su, C.[Changshan],
Li, A.[Ao],
Li, G.[Gen],
Tang, Y.X.[Yu-Xing],
Striving for understanding: Deconstructing neural networks in
side-channel analysis,
PR(162), 2025, pp. 111374.
Elsevier DOI
2503
Side-channel analysis, Hardware security, Profiling attack,
Deep learning, Explainability
BibRef
Wang, T.J.[Tang-Jun],
Bao, C.L.[Cheng-Long],
Shi, Z.Q.[Zuo-Qiang],
Convection-Diffusion Equation: A Theoretically Certified Framework
for Neural Networks,
PAMI(47), No. 5, May 2025, pp. 4170-4182.
IEEE DOI
2504
Artificial neural networks, Mathematical models, Neural networks,
Feature extraction, Training, Linearity, Convection, scale-space theory
BibRef
Nieradzik, L.[Lars],
Stephani, H.[Henrike],
Keuper, J.[Janis],
Reliable Evaluation of Attribution Maps in CNNs:
A Perturbation-Based Approach,
IJCV(133), No. 5, May 2025, pp. 2392-2409.
Springer DOI
2504
BibRef
O'Mahony, L.[Laura],
Nikolov, N.S.[Nikola S.],
O'Sullivan, D.J.P.[David J.P.],
Towards Utilising a Range of Neural Activations for Comprehending
Representational Associations,
WACV25(2495-2506)
IEEE DOI
2505
Correlation, Accuracy, Neurons, Artificial neural networks,
Benchmark testing, Data models, Biological neural networks,
BibRef
Kim, J.[Jeeyung],
Wang, Z.[Ze],
Qiu, Q.[Qiang],
Constructing Concept-based Models to Mitigate Spurious Correlations
with Minimal Human Effort,
ECCV24(LXXX: 137-153).
Springer DOI
2412
BibRef
Subramanyam, R.[Rakshith],
Thopalli, K.[Kowshik],
Narayanaswamy, V.[Vivek],
Thiagarajan, J.J.[Jayaraman J.],
Decider: Leveraging Foundation Model Priors for Improved Model Failure
Detection and Explanation,
ECCV24(LXXIX: 465-482).
Springer DOI
2412
BibRef
Hachiya, H.[Hirotaka],
Nisawa, D.[Daiki],
Randomized Channel-pass Mask for Channel-wise Explanation of Black-box
Models,
ACCV24(VIII: 454-468).
Springer DOI
2412
BibRef
Kim, S.[Sangwon],
Ahn, D.[Dasom],
Ko, B.C.[Byoung Chul],
Jang, I.S.[In-Su],
Kim, K.J.[Kwang-Ju],
EQ-CBM: A Probabilistic Concept Bottleneck with Energy-based Models and
Quantized Vectors,
ACCV24(VII: 270-286).
Springer DOI
2412
BibRef
Ayyar, M.P.[Meghna P.],
Benois-Pineau, J.[Jenny],
Zemmari, A.[Akka],
ET: Explain to Train: Leveraging Explanations to Enhance the Training
of a Multimodal Transformer,
ICIP24(235-241)
IEEE DOI Code:
WWW Link.
2411
Training, Codes, Explainable AI, Neural networks, Benchmark testing,
Transformers, Classification
BibRef
Yarici, Y.[Yavuz],
Kokilepersaud, K.[Kiran],
Prabhushankar, M.[Mohit],
Al Regib, G.[Ghassan],
Explaining Representation Learning with Perceptual Components,
ICIP24(228-234)
IEEE DOI
2411
Training, Representation learning, Analytical models, Shape,
Image color analysis, Semantics, Visual perception, Explainability,
Texture
BibRef
Ko, M.[Myeongseob],
Kang, F.Y.[Fei-Yang],
Shi, W.Y.[Wei-Yan],
Jin, M.[Ming],
Yu, Z.[Zhou],
Jia, R.X.[Ruo-Xi],
The Mirrored Influence Hypothesis: Efficient Data Influence
Estimation by Harnessing Forward Passes,
CVPR24(26276-26285)
IEEE DOI
2410
Training, Inverse problems, Computational modeling, Training data,
Estimation, Predictive models, Diffusion models
BibRef
Augustin, M.[Maximilian],
Neuhaus, Y.[Yannic],
Hein, M.[Matthias],
DiG-IN: Diffusion Guidance for Investigating Networks: Uncovering
Classifier Differences, Neuron Visualisations, and Visual
Counterfactual Explanations,
CVPR24(11093-11103)
IEEE DOI
2410
Visualization, Systematics, Shape, Image synthesis, Neurons,
Data visualization, Feature extraction
BibRef
Yu, R.P.[Run-Peng],
Wang, X.C.[Xin-Chao],
Neural Lineage,
CVPR24(4797-4807)
IEEE DOI
2410
Adaptation models, Visualization, Accuracy, Neural networks,
Detectors, Predictive models, Performance gain
BibRef
Ahn, Y.H.[Yong Hyun],
Kim, H.B.[Hyeon Bae],
Kim, S.T.[Seong Tae],
WWW: A Unified Framework for Explaining What, Where and Why of Neural
Networks by Interpretation of Neuron Concepts,
CVPR24(10968-10977)
IEEE DOI Code:
WWW Link.
2410
Heating systems, Measurement, Uncertainty, Neurons, Decision making,
Transformers, Model Interpretation, Explainable AI, Shapley-value,
Concept-based Explanation
BibRef
Chowdhury, T.F.[Townim Faisal],
Liao, K.[Kewen],
Phan, V.M.H.[Vu Minh Hieu],
To, M.S.[Minh-Son],
Xie, Y.T.[Yu-Tong],
Hung, K.[Kevin],
Ross, D.[David],
van den Hengel, A.J.[Anton J.],
Verjans, J.W.[Johan W.],
Liao, Z.B.[Zhi-Bin],
CAPE: CAM as a Probabilistic Ensemble for Enhanced DNN Interpretation,
CVPR24(11072-11081)
IEEE DOI Code:
WWW Link.
2410
Heating systems, Visualization, Decision making, Imaging,
Artificial neural networks, Predictive models,
explainable AI
BibRef
Dreyer, M.[Maximilian],
Achtibat, R.[Reduan],
Samek, W.[Wojciech],
Lapuschkin, S.[Sebastian],
Understanding the (Extra-)Ordinary: Validating Deep Model Decisions
with Prototypical Concept-based Explanations,
SAIAD24(3491-3501)
IEEE DOI Code:
WWW Link.
2410
Explainable AI, Data integrity, Decision making, Prototypes,
Training data, Predictive models, concept-based XAI, prototypes,
outlier detection
BibRef
Ahmad, O.[Ola],
Béreux, N.[Nicolas],
Baret, L.[Loïc],
Hashemi, V.[Vahid],
Lecue, F.[Freddy],
Causal Analysis for Robust Interpretability of Neural Networks,
WACV24(4673-4682)
IEEE DOI
2404
Training, Phase measurement, Computational modeling, Neural networks,
Noise, Predictive models, Maintenance engineering, and algorithms
BibRef
Kuttichira, D.P.[Deepthi Praveenlal],
Azam, B.[Basim],
Verma, B.[Brijesh],
Rahman, A.[Ashfaqur],
Wang, L.[Lipo],
Sattar, A.[Abdul],
Neural Network Feature Explanation Using Neuron Activation Rate Based
Bipartite Graph,
IVCNZ23(1-6)
IEEE DOI
2403
Computational modeling, Neurons,
Machine learning, Predictive models, Feature extraction,
feature explanation
BibRef
Jeon, G.Y.[Gi-Young],
Jeong, H.[Haedong],
Choi, J.[Jaesik],
Beyond Single Path Integrated Gradients for Reliable Input
Attribution via Randomized Path Sampling,
ICCV23(2052-2061)
IEEE DOI
2401
Deep networks.
BibRef
Huang, W.[Wei],
Zhao, X.Y.[Xing-Yu],
Jin, G.[Gaojie],
Huang, X.W.[Xiao-Wei],
SAFARI: Versatile and Efficient Evaluations for Robustness of
Interpretability,
ICCV23(1988-1998)
IEEE DOI
2401
BibRef
Dravid, A.[Amil],
Gandelsman, Y.[Yossi],
Efros, A.A.[Alexei A.],
Shocher, A.[Assaf],
Rosetta Neurons: Mining the Common Units in a Model Zoo,
ICCV23(1934-1943)
IEEE DOI
2401
Common features across different networks.
BibRef
Srivastava, D.[Divyansh],
Oikarinen, T.[Tuomas],
Weng, T.W.[Tsui-Wei],
Corrupting Neuron Explanations of Deep Visual Features,
ICCV23(1877-1886)
IEEE DOI
2401
BibRef
Wang, X.[Xue],
Wang, Z.B.[Zhi-Bo],
Weng, H.Q.[Hai-Qin],
Guo, H.C.[Heng-Chang],
Zhang, Z.F.[Zhi-Fei],
Jin, L.[Lu],
Wei, T.[Tao],
Ren, K.[Kui],
Counterfactual-based Saliency Map: Towards Visual Contrastive
Explanations for Neural Networks,
ICCV23(2042-2051)
IEEE DOI
2401
BibRef
Barkan, O.[Oren],
Elisha, Y.[Yehonatan],
Asher, Y.[Yuval],
Eshel, A.[Amit],
Koenigstein, N.[Noam],
Visual Explanations via Iterated Integrated Attributions,
ICCV23(2073-2084)
IEEE DOI
2401
BibRef
Wang, S.X.[Shun-Xin],
Veldhuis, R.[Raymond],
Brune, C.[Christoph],
Strisciuglio, N.[Nicola],
What do neural networks learn in image classification? A frequency
shortcut perspective,
ICCV23(1433-1442)
IEEE DOI Code:
WWW Link.
2401
BibRef
Soelistyo, C.J.[Christopher J.],
Charras, G.[Guillaume],
Lowe, A.R.[Alan R.],
Virtual perturbations to assess explainability of deep-learning based
cell fate predictors,
BioIm23(3973-3982)
IEEE DOI
2401
BibRef
Zhang, J.W.[Jing-Wei],
Farnia, F.[Farzan],
MoreauGrad: Sparse and Robust Interpretation of Neural Networks via
Moreau Envelope,
ICCV23(2021-2030)
IEEE DOI
2401
BibRef
Huang, Q.[Qihan],
Xue, M.Q.[Meng-Qi],
Huang, W.Q.[Wen-Qi],
Zhang, H.F.[Hao-Fei],
Song, J.[Jie],
Jing, Y.C.[Yong-Cheng],
Song, M.L.[Ming-Li],
Evaluation and Improvement of Interpretability for Self-Explainable
Part-Prototype Networks,
ICCV23(2011-2020)
IEEE DOI Code:
WWW Link.
2401
BibRef
Sicre, R.,
Zhang, H.,
Dejasmin, J.,
Daaloul, C.,
Ayache, S.,
Artières, T.,
DP-Net: Learning Discriminative Parts for Image Recognition,
ICIP23(1230-1234)
IEEE DOI
2312
BibRef
Wang, F.[Fan],
Kong, A.W.K.[Adams Wai-Kin],
A Practical Upper Bound for the Worst-Case Attribution Deviations,
CVPR23(24616-24625)
IEEE DOI
2309
BibRef
Ravuri, S.[Suman],
Rey, M.[Mélanie],
Mohamed, S.[Shakir],
Deisenroth, M.P.[Marc Peter],
Understanding Deep Generative Models with Generalized Empirical
Likelihoods,
CVPR23(24395-24405)
IEEE DOI
2309
BibRef
Bruintjes, R.J.[Robert-Jan],
Motyka, T.[Tomasz],
van Gemert, J.[Jan],
What Affects Learned Equivariance in Deep Image Recognition Models?,
L3D-IVU23(4839-4847)
IEEE DOI
2309
BibRef
Ji, Y.[Ying],
Wang, Y.[Yu],
Kato, J.[Jien],
Spatial-temporal Concept based Explanation of 3D ConvNets,
CVPR23(15444-15453)
IEEE DOI
2309
BibRef
Binder, A.[Alexander],
Weber, L.[Leander],
Lapuschkin, S.[Sebastian],
Montavon, G.[Grégoire],
Müller, K.R.[Klaus-Robert],
Samek, W.[Wojciech],
Shortcomings of Top-Down Randomization-Based Sanity Checks for
Evaluations of Deep Neural Network Explanations,
CVPR23(16143-16152)
IEEE DOI
2309
BibRef
Wang, B.[Bowen],
Li, L.Z.[Liang-Zhi],
Nakashima, Y.[Yuta],
Nagahara, H.[Hajime],
Learning Bottleneck Concepts in Image Classification,
CVPR23(10962-10971)
IEEE DOI
2309
WWW Link.
BibRef
Magnet, R.[Robin],
Ovsjanikov, M.[Maks],
Memory-Scalable and Simplified Functional Map Learning,
CVPR24(4041-4050)
IEEE DOI
2410
Linear systems, Learning systems, Accuracy, Shape, Scalability,
Pipelines, shape matching, functional maps
BibRef
Attaiki, S.[Souhaib],
Ovsjanikov, M.[Maks],
Understanding and Improving Features Learned in Deep Functional Maps,
CVPR23(1316-1326)
IEEE DOI
2309
BibRef
Pahde, F.[Frederik],
Yolcu, G.Ü.[Galip Ümit],
Binder, A.[Alexander],
Samek, W.[Wojciech],
Lapuschkin, S.[Sebastian],
Optimizing Explanations by Network Canonization and Hyperparameter
Search,
SAIAD23(3819-3828)
IEEE DOI
2309
BibRef
Jeanneret, G.[Guillaume],
Simon, L.[Loïc],
Jurie, F.[Frédéric],
Diffusion Models for Counterfactual Explanations,
ACCV22(VII:219-237).
Springer DOI
2307
BibRef
Tayyub, J.[Jawad],
Sarmad, M.[Muhammad],
Schönborn, N.[Nicolas],
Explaining Deep Neural Networks for Point Clouds Using Gradient-based
Visualisations,
ACCV22(II:155-170).
Springer DOI
2307
BibRef
Li, C.[Chen],
Jiang, J.Z.[Jin-Zhe],
Zhang, X.[Xin],
Zhang, T.H.[Tong-Huan],
Zhao, Y.Q.[Ya-Qian],
Jiang, D.D.[Dong-Dong],
Li, R.G.[Ren-Gang],
Towards Interpreting Computer Vision Based on Transformation Invariant
Optimization,
CiV22(371-382).
Springer DOI
2304
BibRef
Eckstein, N.[Nils],
Bukhari, H.[Habib],
Bates, A.S.[Alexander S.],
Jefferis, G.S.X.E.[Gregory S. X. E.],
Funke, J.[Jan],
Discriminative Attribution from Paired Images,
BioImage22(406-422).
Springer DOI
2304
Highlight the most discriminative features between classes.
BibRef
Tan, H.X.[Han-Xiao],
Visualizing Global Explanations of Point Cloud DNNs,
WACV23(4730-4739)
IEEE DOI
2302
Point cloud compression, Measurement, Knowledge engineering,
Visualization, Codes, Neurons, Algorithms: Explainable, fair,
3D computer vision
BibRef
Behzadi-Khormouji, H.[Hamed],
Oramas Mogrovejo, J.A.[José A.],
A Protocol for Evaluating Model Interpretation Methods from Visual
Explanations,
WACV23(1421-1429)
IEEE DOI
2302
Heating systems, Measurement, Visualization, Protocols, Semantics,
Algorithms: Explainable, fair, accountable, privacy-preserving
BibRef
Valois, P.H.V.[Pedro H. V.],
Niinuma, K.[Koichiro],
Fukui, K.[Kazuhiro],
Occlusion Sensitivity Analysis with Augmentation Subspace
Perturbation in Deep Feature Space,
WACV24(4817-4826)
IEEE DOI
2404
Analytical models, Sensitivity analysis, Computational modeling,
Perturbation methods, Neural networks, Predictive models, Visualization
BibRef
Given, N.A.[No Author],
Fractual Projection Forest:
Fast and Explainable Point Cloud Classifier,
WACV23(4229-4238)
IEEE DOI
2302
Portable document format, Algorithms: 3D computer vision,
Explainable, fair, accountable, privacy-preserving, ethical computer vision
BibRef
Zhang, Y.Y.[Ying-Ying],
Zhong, Q.Y.[Qiao-Yong],
Xie, D.[Di],
Pu, S.L.[Shi-Liang],
KRNet: Towards Efficient Knowledge Replay,
ICPR22(4772-4778)
IEEE DOI
2212
Training, Learning systems, Knowledge engineering, Deep learning,
Codes, Data compression, Recording
BibRef
Yang, P.[Peiyu],
Wen, Z.Y.[Ze-Yi],
Mian, A.[Ajmal],
Multi-Grained Interpretable Network for Image Recognition,
ICPR22(3815-3821)
IEEE DOI
2212
Learn features at different levels.
Resistance, Image recognition, Decision making, Focusing,
Predictive models, Feature extraction, Cognition
BibRef
Bayer, J.[Jens],
Münch, D.[David],
Arens, M.[Michael],
Deep Saliency Map Generators for Multispectral Video Classification,
ICPR22(3757-3764)
IEEE DOI
2212
To enable accountability.
Measurement, Training, Visualization, TV, Neural networks, Generators
BibRef
Cunico, F.[Federico],
Capogrosso, L.[Luigi],
Setti, F.[Francesco],
Carra, D.[Damiano],
Fummi, F.[Franco],
Cristani, M.[Marco],
I-SPLIT: Deep Network Interpretability for Split Computing,
ICPR22(2575-2581)
IEEE DOI
2212
Performance evaluation, Source coding, Pulmonary diseases, Neurons,
Pipelines, Servers
BibRef
Cekic, M.[Metehan],
Bakiskan, C.[Can],
Madhow, U.[Upamanyu],
Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations,
ICIP22(3843-3847)
IEEE DOI
2211
Training, Deep learning, Neurons, Neural networks, Wires,
Supervised learning, Feature extraction, Interpretable ML,
machine learning
BibRef
Kherchouche, A.[Anouar],
Ben-Ahmed, O.[Olfa],
Guillevin, C.[Carole],
Tremblais, B.[Benoit],
Julian, A.[Adrien],
Guillevin, R.[Rémy],
MRS-XNet: An Explainable One-Dimensional Deep Neural Network for
Magnetic Spectroscopic Data Classification,
ICIP22(3923-3927)
IEEE DOI
2211
Protons, Solid modeling, Spectroscopy, Magnetic resonance imaging,
Magnetic resonance, Brain modeling, Phosphorus, Computer-Aided Diagnosis
BibRef
Rao, S.[Sukrut],
Böhle, M.[Moritz],
Schiele, B.[Bernt],
Towards Better Understanding Attribution Methods,
CVPR22(10213-10222)
IEEE DOI
2210
Measurement, Visualization, Systematics, Smoothing methods,
Neural networks, Inspection, Explainable computer vision
BibRef
Keswani, M.[Monish],
Ramakrishnan, S.[Sriranjani],
Reddy, N.[Nishant],
Balasubramanian, V.N.[Vineeth N.],
Proto2Proto: Can you recognize the car, the way I do?,
CVPR22(10223-10233)
IEEE DOI
2210
Measurement, Knowledge engineering, Codes, Prototypes,
Automobiles, Explainable computer vision,
Efficient learning and inferences
BibRef
Chakraborty, T.[Tanmay],
Trehan, U.[Utkarsh],
Mallat, K.[Khawla],
Dugelay, J.L.[Jean-Luc],
Generalizing Adversarial Explanations with Grad-CAM,
ArtOfRobust22(186-192)
IEEE DOI
2210
Measurement, Heating systems, Image analysis,
Computational modeling, Face recognition, Neural networks, Decision making
BibRef
Dravid, A.[Amil],
Schiffers, F.[Florian],
Gong, B.Q.[Bo-Qing],
Katsaggelos, A.K.[Aggelos K.],
medXGAN: Visual Explanations for Medical Classifiers through a
Generative Latent Space,
FaDE-TCV22(2935-2944)
IEEE DOI
2210
Location awareness, Visualization, Interpolation, Pathology, Codes,
Neural networks, Anatomical structure
BibRef
Somepalli, G.[Gowthami],
Fowl, L.[Liam],
Bansal, A.[Arpit],
Chiang, P.Y.[Ping-Yeh],
Dar, Y.[Yehuda],
Baraniuk, R.[Richard],
Goldblum, M.[Micah],
Goldstein, T.[Tom],
Can Neural Nets Learn the Same Model Twice? Investigating
Reproducibility and Double Descent from the Decision Boundary
Perspective,
CVPR22(13689-13698)
IEEE DOI
2210
Training, Interpolation, Computational modeling, Neural networks,
Machine learning
BibRef
Sandoval-Segura, P.[Pedro],
Singla, V.[Vasu],
Fowl, L.[Liam],
Geiping, J.[Jonas],
Goldblum, M.[Micah],
Jacobs, D.[David],
Goldstein, T.[Tom],
Poisons that are learned faster are more effective,
ArtOfRobust22(197-204)
IEEE DOI
2210
Training, Data privacy, Privacy, Toxicology, Correlation, Perturbation methods
BibRef
MacDonald, L.E.[Lachlan E.],
Ramasinghe, S.[Sameera],
Lucey, S.[Simon],
Enabling Equivariance for Arbitrary Lie Groups,
CVPR22(8173-8182)
IEEE DOI
2210
Degradation, Perturbation methods, Benchmark testing,
Mathematical models, Robustness,
Explainable computer vision
BibRef
Kocasari, U.[Umut],
Zaman, K.[Kerem],
Tiftikci, M.[Mert],
Simsar, E.[Enis],
Yanardag, P.[Pinar],
Rank in Style: A Ranking-based Approach to Find Interpretable
Directions,
CVFAD22(2293-2297)
IEEE DOI
2210
Image synthesis, Search problems, Decoding
BibRef
Marathe, A.[Aboli],
Jain, P.[Pushkar],
Walambe, R.[Rahee],
Kotecha, K.[Ketan],
RestoreX-AI: A Contrastive Approach towards Guiding Image Restoration
via Explainable AI Systems,
V4AS22(3029-3038)
IEEE DOI
2210
Training, Noise reduction, Object detection, Detectors, Transformers,
Image restoration, Tornadoes
BibRef
Dittakavi, B.[Bhat],
Bavikadi, D.[Divyagna],
Desai, S.V.[Sai Vikas],
Chakraborty, S.[Soumi],
Reddy, N.[Nishant],
Balasubramanian, V.N.[Vineeth N.],
Callepalli, B.[Bharathi],
Sharma, A.[Ayon],
Pose Tutor: An Explainable System for Pose Correction in the Wild,
CVSports22(3539-3548)
IEEE DOI
2210
Training, Predictive models, Muscles, Skeleton
BibRef
Zhang, Y.F.[Yi-Feng],
Jiang, M.[Ming],
Zhao, Q.[Qi],
Query and Attention Augmentation for Knowledge-Based Explainable
Reasoning,
CVPR22(15555-15564)
IEEE DOI
2210
Knowledge engineering, Visualization, Computational modeling,
Neural networks, Knowledge based systems, Reinforcement learning,
Visual reasoning
BibRef
Chockler, H.[Hana],
Kroening, D.[Daniel],
Sun, Y.C.[You-Cheng],
Explanations for Occluded Images,
ICCV21(1214-1223)
IEEE DOI
2203
Approximation algorithms, Classification algorithms,
Explainable AI,
BibRef
Rodríguez, P.[Pau],
Caccia, M.[Massimo],
Lacoste, A.[Alexandre],
Zamparo, L.[Lee],
Laradji, I.[Issam],
Charlin, L.[Laurent],
Vazquez, D.[David],
Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations,
ICCV21(1036-1045)
IEEE DOI
2203
Code, Explanation.
WWW Link.
Codes, Computational modeling, Perturbation methods,
Decision making, Machine learning, Predictive models,
and ethics in vision
BibRef
Kobs, K.[Konstantin],
Steininger, M.[Michael],
Dulny, A.[Andrzej],
Hotho, A.[Andreas],
Do Different Deep Metric Learning Losses Lead to Similar Learned
Features?,
ICCV21(10624-10634)
IEEE DOI
2203
Measurement, Learning systems, Analytical models, Visualization,
Image color analysis, Representation learning, Explainable AI
BibRef
Jung, H.[Hyungsik],
Oh, Y.[Youngrock],
Towards Better Explanations of Class Activation Mapping,
ICCV21(1316-1324)
IEEE DOI
2203
Measurement, Visualization, Analytical models, Additives,
Computational modeling, Linearity, Explainable AI, Fairness,
Visual reasoning and logical representation
BibRef
Lee, K.H.[Kwang Hee],
Park, C.[Chaewon],
Oh, J.[Junghyun],
Kwak, N.[Nojun],
LFI-CAM: Learning Feature Importance for Better Visual Explanation,
ICCV21(1335-1343)
IEEE DOI
2203
Visualization, Computer network reliability, Decision making,
Stability analysis,
Recognition and classification
BibRef
Lang, O.[Oran],
Gandelsman, Y.[Yossi],
Yarom, M.[Michal],
Wald, Y.[Yoav],
Elidan, G.[Gal],
Hassidim, A.[Avinatan],
Freeman, W.T.[William T.],
Isola, P.[Phillip],
Globerson, A.[Amir],
Irani, M.[Michal],
Mosseri, I.[Inbar],
Explaining in Style: Training a GAN to explain a classifier in
StyleSpace,
ICCV21(673-682)
IEEE DOI
2203
Training, Visualization, Animals, Semantics, Retina, Standards,
Explainable AI, Image and video synthesis
BibRef
Li, L.Z.[Liang-Zhi],
Wang, B.[Bowen],
Verma, M.[Manisha],
Nakashima, Y.[Yuta],
Kawasaki, R.[Ryo],
Nagahara, H.[Hajime],
SCOUTER: Slot Attention-based Classifier for Explainable Image
Recognition,
ICCV21(1026-1035)
IEEE DOI
2203
Measurement, Visualization, Image recognition, Codes,
Decision making, Switches, Explainable AI, Fairness, accountability,
and ethics in vision
BibRef
Lerman, S.[Samuel],
Venuto, C.[Charles],
Kautz, H.[Henry],
Xu, C.L.[Chen-Liang],
Explaining Local, Global, And Higher-Order Interactions In Deep
Learning,
ICCV21(1204-1213)
IEEE DOI
2203
Deep learning, Measurement, Visualization, Codes, Neural networks,
Object detection, Explainable AI,
Machine learning architectures and formulations
BibRef
Anderson, C.[Connor],
Farrell, R.[Ryan],
Improving Fractal Pre-training,
WACV22(2412-2421)
IEEE DOI
2202
Training, Image recognition, Navigation,
Neural networks, Rendering (computer graphics), Fractals,
Semi- and Un- supervised Learning
BibRef
Guo, P.[Pei],
Farrell, R.[Ryan],
Semantic Network Interpretation,
Explain-Bio22(400-409)
IEEE DOI
2202
Measurement, Training, Visualization, Correlation,
Computational modeling, Semantics, Filtering algorithms
BibRef
Watson, M.[Matthew],
Hasan, B.A.S.[Bashar Awwad Shiekh],
Al Moubayed, N.[Noura],
Agree to Disagree: When Deep Learning Models With Identical
Architectures Produce Distinct Explanations,
WACV22(1524-1533)
IEEE DOI
2202
Deep learning, Training, Medical conditions, Neural networks, MIMICs,
Logic gates, Explainable AI, Fairness,
Medical Imaging/Imaging for Bioinformatics/Biological and Cell Microscopy
BibRef
Fel, T.[Thomas],
Vigouroux, D.[David],
Cadène, R.[Rémi],
Serre, T.[Thomas],
How Good is your Explanation? Algorithmic Stability Measures to
Assess the Quality of Explanations for Deep Neural Networks,
WACV22(1565-1575)
IEEE DOI
2202
Deep learning, Computational modeling,
Neural networks, Network architecture, Prediction algorithms,
Privacy and Ethics in Vision
BibRef
Hada, S.S.[Suryabhan Singh],
Carreira-Perpiñán, M.Á.[Miguel Á.],
Sampling the 'Inverse Set' of a Neuron,
ICIP21(3712-3716)
IEEE DOI
2201
What does a neuron represent?
Deep learning, Visualization, Monte Carlo methods,
Image processing, Neurons, Markov processes, Interpretability,
GANs
BibRef
Sasdelli, M.[Michele],
Ajanthan, T.[Thalaiyasingam],
Chin, T.J.[Tat-Jun],
Carneiro, G.[Gustavo],
A Chaos Theory Approach to Understand Neural Network Optimization,
DICTA21(1-10)
IEEE DOI
2201
Deep learning, Heuristic algorithms, Digital images,
Computational modeling, Neural networks, Stochastic processes, Computer architecture
BibRef
Konate, S.[Salamata],
Lebrat, L.[Léo],
Cruz, R.S.[Rodrigo Santa],
Smith, E.[Elliot],
Bradley, A.[Andrew],
Fookes, C.[Clinton],
Salvado, O.[Olivier],
A Comparison of Saliency Methods for Deep Learning Explainability,
DICTA21(01-08)
IEEE DOI
2201
Deep learning, Backpropagation, Gradient methods,
Perturbation methods, Digital images, Complexity theory, CNN
BibRef
Khakzar, A.[Ashkan],
Baselizadeh, S.[Soroosh],
Khanduja, S.[Saurabh],
Rupprecht, C.[Christian],
Kim, S.T.[Seong Tae],
Navab, N.[Nassir],
Neural Response Interpretation through the Lens of Critical Pathways,
CVPR21(13523-13533)
IEEE DOI
2111
Gradient methods, Computer network reliability,
Neurons, Reliability, Object recognition
BibRef
Stammer, W.[Wolfgang],
Schramowski, P.[Patrick],
Kersting, K.[Kristian],
Right for the Right Concept: Revising Neuro-Symbolic Concepts by
Interacting with their Explanations,
CVPR21(3618-3628)
IEEE DOI
2111
Training, Deep learning, Visualization,
Image color analysis, Semantics, Focusing
BibRef
Lim, D.[Dohun],
Lee, H.[Hyeonseok],
Kim, S.[Sungchan],
Building Reliable Explanations of Unreliable Neural Networks: Locally
Smoothing Perspective of Model Interpretation,
CVPR21(6464-6473)
IEEE DOI
2111
Analytical models, Smoothing methods,
Neural networks, Predictive models, Reliability theory
BibRef
Chefer, H.[Hila],
Gur, S.[Shir],
Wolf, L.B.[Lior B.],
Transformer Interpretability Beyond Attention Visualization,
CVPR21(782-791)
IEEE DOI
2111
Visualization, Head, Text categorization,
Neural networks, Transformers
BibRef
Shen, Y.J.[Yu-Jun],
Zhou, B.[Bolei],
Closed-Form Factorization of Latent Semantics in GANs,
CVPR21(1532-1540)
IEEE DOI
2111
Limiting, Closed-form solutions, Annotations,
Computational modeling, Semantics, Manuals
BibRef
Singla, S.[Sahil],
Nushi, B.[Besmira],
Shah, S.[Shital],
Kamar, E.[Ece],
Horvitz, E.[Eric],
Understanding Failures of Deep Networks via Robust Feature Extraction,
CVPR21(12848-12857)
IEEE DOI
2111
Measurement, Visualization, Error analysis,
Aggregates, Debugging, Feature extraction
BibRef
Poppi, S.[Samuele],
Cornia, M.[Marcella],
Baraldi, L.[Lorenzo],
Cucchiara, R.[Rita],
Revisiting The Evaluation of Class Activation Mapping for
Explainability: A Novel Metric and Experimental Analysis,
RCV21(2299-2304)
IEEE DOI
2109
Deep learning, Visualization, Protocols,
Reproducibility of results
BibRef
Rahnama, A.[Arash],
Tseng, A.[Andrew],
An Adversarial Approach for Explaining the Predictions of Deep Neural
Networks,
TCV21(3247-3256)
IEEE DOI
2109
Deep learning, Machine learning algorithms,
Computational modeling, Speech recognition, Prediction algorithms
BibRef
Wang, B.[Bowen],
Li, L.Z.[Liang-Zhi],
Verma, M.[Manisha],
Nakashima, Y.[Yuta],
Kawasaki, R.[Ryo],
Nagahara, H.[Hajime],
MTUNet: Few-shot Image Classification with Visual Explanations,
RCV21(2294-2298)
IEEE DOI
2109
Knowledge engineering, Visualization,
Computational modeling, Neural networks, Benchmark testing
BibRef
Rosenzweig, J.[Julia],
Sicking, J.[Joachim],
Houben, S.[Sebastian],
Mock, M.[Michael],
Akila, M.[Maram],
Patch Shortcuts: Interpretable Proxy Models Efficiently Find
Black-Box Vulnerabilities,
SAIAD21(56-65)
IEEE DOI
2109
Learning to eliminate safety errors in NN.
Couplings, Training, Analytical models, Systematics, Semantics,
Toy manufacturing industry, Safety
BibRef
Chen, Q.X.[Qiu-Xiao],
Li, P.F.[Peng-Fei],
Xu, M.[Meng],
Qi, X.J.[Xiao-Jun],
Sparse Activation Maps for Interpreting 3D Object Detection,
SAIAD21(76-84)
IEEE DOI
2109
Visualization, Solid modeling, Neurons,
Semantics, Object detection, Feature extraction
BibRef
Jaworek-Korjakowska, J.[Joanna],
Kostuch, A.[Aleksander],
Skruch, P.[Pawel],
SafeSO: Interpretable and Explainable Deep Learning Approach for Seat
Occupancy Classification in Vehicle Interior,
SAIAD21(103-112)
IEEE DOI
2109
Measurement, Heating systems, Deep learning, Visualization,
Belts, Object recognition
BibRef
Suzuki, M.[Muneaki],
Kameya, Y.[Yoshitaka],
Kutsuna, T.[Takuro],
Mitsumoto, N.[Naoki],
Understanding the Reason for Misclassification by Generating
Counterfactual Images,
MVA21(1-5)
DOI Link
2109
Deep learning, Generative adversarial networks,
Task analysis, Artificial intelligence, Image classification
BibRef
Li, Z.Q.[Zhen-Qiang],
Wang, W.M.[Wei-Min],
Li, Z.Y.[Zuo-Yue],
Huang, Y.F.[Yi-Fei],
Sato, Y.[Yoichi],
Towards Visually Explaining Video Understanding Networks with
Perturbation,
WACV21(1119-1128)
IEEE DOI
2106
Knowledge engineering, Deep learning, Visualization, Pathology,
Perturbation methods
BibRef
Oh, Y.[Youngrock],
Jung, H.[Hyungsik],
Park, J.[Jeonghyung],
Kim, M.S.[Min Soo],
EVET: Enhancing Visual Explanations of Deep Neural Networks Using
Image Transformations,
WACV21(3578-3586)
IEEE DOI
2106
Location awareness, Visualization,
Pipelines, Neural networks, Machine learning
BibRef
Samangouei, P.[Pouya],
Saeedi, A.[Ardavan],
Nakagawa, L.[Liam],
Silberman, N.[Nathan],
ExplainGAN: Model Explanation via Decision Boundary Crossing
Transformations,
ECCV18(X: 681-696).
Springer DOI
1810
BibRef
Huseljic, D.[Denis],
Sick, B.[Bernhard],
Herde, M.[Marek],
Kottke, D.[Daniel],
Separation of Aleatoric and Epistemic Uncertainty in Deterministic
Deep Neural Networks,
ICPR21(9172-9179)
IEEE DOI
2105
Analytical models, Uncertainty, Neural networks,
Measurement uncertainty, Data models, Reliability
BibRef
Shi, S.[Sheng],
Du, Y.Z.[Yang-Zhou],
Fan, W.[Wei],
Kernel-based LIME with feature dependency sampling,
ICPR21(9143-9148)
IEEE DOI
2105
Local Interpretable Model-agnostic Explanation.
Correlation, Neural networks, Complex networks, Predictive models,
Internet, Artificial intelligence, Task analysis
BibRef
Charachon, M.[Martin],
Hudelot, C.[Céline],
Cournède, P.H.[Paul-Henry],
Ruppli, C.[Camille],
Ardon, R.[Roberto],
Combining Similarity and Adversarial Learning to Generate Visual
Explanation: Application to Medical Image Classification,
ICPR21(7188-7195)
IEEE DOI
2105
Measurement, Visualization,
Perturbation methods, Predictive models, Real-time systems,
Adversarial Example
BibRef
Goh, G.S.W.[Gary S. W.],
Lapuschkin, S.[Sebastian],
Weber, L.[Leander],
Samek, W.[Wojciech],
Binder, A.[Alexander],
Understanding Integrated Gradients with SmoothTaylor for Deep Neural
Network Attribution,
ICPR21(4949-4956)
IEEE DOI
2105
Adaptation models, Sensitivity, Neural networks, Taxonomy,
Object recognition, Noise measurement
BibRef
Yang, Q.[Qing],
Zhu, X.[Xia],
Fwu, J.K.[Jong-Kae],
Ye, Y.[Yun],
You, G.[Ganmei],
Zhu, Y.[Yuan],
MFPP: Morphological Fragmental Perturbation Pyramid for Black-Box
Model Explanations,
ICPR21(1376-1383)
IEEE DOI
2105
Perturbation methods, Impedance matching, Neural networks,
Semantics, Games, Predictive models
BibRef
Zhang, M.[Moyu],
Zhu, X.N.[Xin-Ning],
Ji, Y.[Yang],
Input-aware Neural Knowledge Tracing Machine,
HCAU20(345-360).
Springer DOI
2103
BibRef
Veerappa, M.[Manjunatha],
Anneken, M.[Mathias],
Burkart, N.[Nadia],
Evaluation of Interpretable Association Rule Mining Methods on
Time-series in the Maritime Domain,
EDL-AI20(204-218).
Springer DOI
2103
BibRef
Jouis, G.[Gaëlle],
Mouchère, H.[Harold],
Picarougne, F.[Fabien],
Hardouin, A.[Alexandre],
Anchors vs Attention: Comparing XAI on a Real-life Use Case,
EDL-AI20(219-227).
Springer DOI
2103
BibRef
Henin, C.[Clément],
Le Métayer, D.[Daniel],
A Multi-layered Approach for Tailored Black-box Explanations,
EDL-AI20(5-19).
Springer DOI
2103
BibRef
Kenny, E.M.[Eoin M.],
Delaney, E.D.[Eoin D.],
Greene, D.[Derek],
Keane, M.T.[Mark T.],
Post-hoc Explanation Options for XAI in Deep Learning: The Insight
Centre for Data Analytics Perspective,
EDL-AI20(20-34).
Springer DOI
2103
BibRef
Halnaut, A.[Adrien],
Giot, R.[Romain],
Bourqui, R.[Romain],
Auber, D.[David],
Samples Classification Analysis Across DNN Layers with Fractal Curves,
EDL-AI20(47-61).
Springer DOI
2103
BibRef
Jung, H.[Hyungsik],
Oh, Y.[Youngrock],
Park, J.[Jeonghyung],
Kim, M.S.[Min Soo],
Jointly Optimize Positive and Negative Saliencies for Black Box
Classifiers,
EDL-AI20(76-89).
Springer DOI
2103
BibRef
Zhu, P.[Pengkai],
Zhu, R.Z.[Rui-Zhao],
Mishra, S.[Samarth],
Saligrama, V.[Venkatesh],
Low Dimensional Visual Attributes: An Interpretable Image Encoding,
EDL-AI20(90-102).
Springer DOI
2103
BibRef
Marcos, D.[Diego],
Fong, R.[Ruth],
Lobry, S.[Sylvain],
Flamary, R.[Rémi],
Courty, N.[Nicolas],
Tuia, D.[Devis],
Contextual Semantic Interpretability,
ACCV20(IV:351-368).
Springer DOI
2103
BibRef
Townsend, J.[Joe],
Kasioumis, T.[Theodoros],
Inakoshi, H.[Hiroya],
ERIC: Extracting Relations Inferred from Convolutions,
ACCV20(III:206-222).
Springer DOI
2103
Behavior of NN approximated with a program.
BibRef
Galli, A.[Antonio],
Marrone, S.[Stefano],
Moscato, V.[Vincenzo],
Sansone, C.[Carlo],
Reliability of explainable Artificial Intelligence in Adversarial
Perturbation Scenarios,
EDL-AI20(243-256).
Springer DOI
2103
BibRef
Agarwal, C.[Chirag],
Nguyen, A.[Anh],
Explaining Image Classifiers by Removing Input Features Using
Generative Models,
ACCV20(VI:101-118).
Springer DOI
2103
BibRef
Singh, M.[Mayank],
Kumari, N.[Nupur],
Mangla, P.[Puneet],
Sinha, A.[Abhishek],
Balasubramanian, V.N.[Vineeth N.],
Krishnamurthy, B.[Balaji],
Attributional Robustness Training Using Input-gradient Spatial
Alignment,
ECCV20(XXVII:515-533).
Springer DOI
2011
BibRef
Li, Y.C.[Yu-Chao],
Ji, R.R.[Rong-Rong],
Lin, S.H.[Shao-Hui],
Zhang, B.C.[Bao-Chang],
Yan, C.Q.[Chen-Qian],
Wu, Y.J.[Yong-Jian],
Huang, F.Y.[Fei-Yue],
Shao, L.[Ling],
Interpretable Neural Network Decoupling,
ECCV20(XV:653-669).
Springer DOI
2011
BibRef
Franchi, G.[Gianni],
Bursuc, A.[Andrei],
Aldea, E.[Emanuel],
Dubuisson, S.[Séverine],
Bloch, I.[Isabelle],
TRADI: Tracking Deep Neural Network Weight Distributions,
ECCV20(XVII:105-121).
Springer DOI
2011
BibRef
Bhushan, C.[Chitresh],
Yang, Z.Y.[Zhao-Yuan],
Virani, N.[Nurali],
Iyer, N.[Naresh],
Variational Encoder-Based Reliable Classification,
ICIP20(1941-1945)
IEEE DOI
2011
Training, Image reconstruction, Reliability, Measurement, Decoding,
Artificial neural networks, Uncertainty, Classification,
Adversarial Attacks
BibRef
Lee, J.,
Al Regib, G.,
Gradients as a Measure of Uncertainty in Neural Networks,
ICIP20(2416-2420)
IEEE DOI
2011
Uncertainty, Training, Measurement uncertainty,
Computational modeling, Neural networks, Data models,
image corruption/distortion
BibRef
Sun, Y.,
Prabhushankar, M.,
Al Regib, G.,
Implicit Saliency In Deep Neural Networks,
ICIP20(2915-2919)
IEEE DOI
2011
Feature extraction, Visualization, Semantics, Saliency detection,
Convolution, Robustness, Neural networks, Saliency,
Deep Learning
BibRef
Prabhushankar, M.,
Kwon, G.,
Temel, D.,
Al Regib, G.,
Contrastive Explanations In Neural Networks,
ICIP20(3289-3293)
IEEE DOI
2011
Visualization, Neural networks, Manifolds, Image recognition,
Image quality, Automobiles, Image color analysis, Interpretability,
Image Quality Assessment
BibRef
Tao, X.Y.[Xiao-Yu],
Chang, X.Y.[Xin-Yuan],
Hong, X.P.[Xiao-Peng],
Wei, X.[Xing],
Gong, Y.H.[Yi-Hong],
Topology-preserving Class-incremental Learning,
ECCV20(XIX:254-270).
Springer DOI
2011
BibRef
Yuan, K.[Kun],
Li, Q.Q.[Quan-Quan],
Shao, J.[Jing],
Yan, J.J.[Jun-Jie],
Learning Connectivity of Neural Networks from a Topological Perspective,
ECCV20(XXI:737-753).
Springer DOI
2011
BibRef
Bau, D.[David],
Liu, S.[Steven],
Wang, T.Z.[Tong-Zhou],
Zhu, J.Y.[Jun-Yan],
Torralba, A.B.[Antonio B.],
Rewriting a Deep Generative Model,
ECCV20(I:351-369).
Springer DOI
2011
BibRef
Chen, S.[Shi],
Jiang, M.[Ming],
Yang, J.H.[Jin-Hui],
Zhao, Q.[Qi],
AiR: Attention with Reasoning Capability,
ECCV20(I:91-107).
Springer DOI
2011
BibRef
Ding, Y.K.[Yu-Kun],
Liu, J.L.[Jing-Lan],
Xiong, J.J.[Jin-Jun],
Shi, Y.Y.[Yi-Yu],
Revisiting the Evaluation of Uncertainty Estimation and Its
Application to Explore Model Complexity-Uncertainty Trade-Off,
TCV20(22-31)
IEEE DOI
2008
Uncertainty, Calibration, Estimation, Predictive models,
Complexity theory, Neural networks
BibRef
Yang, Z.X.[Zong-Xin],
Zhu, L.C.[Lin-Chao],
Wu, Y.[Yu],
Yang, Y.[Yi],
Gated Channel Transformation for Visual Recognition,
CVPR20(11791-11800)
IEEE DOI
2008
Logic gates, Task analysis, Visualization, Neurons,
Training, Complexity theory
BibRef
Xu, S.[Shawn],
Venugopalan, S.[Subhashini],
Sundararajan, M.[Mukund],
Attribution in Scale and Space,
CVPR20(9677-9686)
IEEE DOI
2008
Code, Deep Nets.
WWW Link.
Perturbation methods, Task analysis,
Kernel, Mathematical model, Google, Medical services
BibRef
Ramanujan, V.,
Wortsman, M.,
Kembhavi, A.,
Farhadi, A.,
Rastegari, M.,
What's Hidden in a Randomly Weighted Neural Network?,
CVPR20(11890-11899)
IEEE DOI
2008
Training, Neurons, Biological neural networks,
Stochastic processes, Buildings, Standards
BibRef
Bansal, N.[Naman],
Agarwal, C.[Chirag],
Nguyen, A.[Anh],
SAM: The Sensitivity of Attribution Methods to Hyperparameters,
CVPR20(8670-8680)
IEEE DOI
2008
BibRef
And:
TCV20(11-21)
IEEE DOI
2008
Robustness, Sensitivity, Heating systems, Noise measurement,
Limiting, Smoothing methods
BibRef
Kim, E.,
Gopinath, D.,
Pasareanu, C.,
Seshia, S.A.,
A Programmatic and Semantic Approach to Explaining and Debugging
Neural Network Based Object Detectors,
CVPR20(11125-11134)
IEEE DOI
2008
Semantics, Automobiles, Feature extraction, Detectors,
Probabilistic logic, Debugging
BibRef
Jalwana, M.A.A.K.[M. A. A. K.],
Akhtar, N.,
Bennamoun, M.,
Mian, A.,
Attack to Explain Deep Representation,
CVPR20(9540-9549)
IEEE DOI
2008
Perturbation methods, Computational modeling, Visualization,
Robustness, Image generation, Machine learning, Task analysis
BibRef
Koperski, M.[Michal],
Konopczynski, T.[Tomasz],
Nowak, R.[Rafal],
Semberecki, P.[Piotr],
Trzcinski, T.[Tomasz],
Plugin Networks for Inference under Partial Evidence,
WACV20(2872-2880)
IEEE DOI
2006
Plugin layers to pre-trained network.
Task analysis, Training, Visualization, Neural networks,
Image segmentation, Image annotation, Image recognition
BibRef
Chen, L.,
Chen, J.,
Hajimirsadeghi, H.,
Mori, G.,
Adapting Grad-CAM for Embedding Networks,
WACV20(2783-2792)
IEEE DOI
2006
Visualization, Testing, Training, Databases, Estimation,
Heating systems, Task analysis
BibRef
Zhang, J.,
Zhang, J.,
Ghosh, S.,
Li, D.,
Tasci, S.,
Heck, L.,
Zhang, H.,
Kuo, C.C.J.[C.C. Jay],
Class-incremental Learning via Deep Model Consolidation,
WACV20(1120-1129)
IEEE DOI
2006
Data models, Task analysis, Training, Monte Carlo methods,
Training data, Computational modeling, Adaptation models
BibRef
Vasu, B.,
Long, C.,
Iterative and Adaptive Sampling with Spatial Attention for Black-Box
Model Explanations,
WACV20(2949-2958)
IEEE DOI
2006
Adaptation models, Neural networks, Feature extraction,
Mathematical model, Decision making, Machine learning, Visualization
BibRef
Gkalelis, N.[Nikolaos],
Mezaris, V.[Vasileios],
Subclass Deep Neural Networks: Re-enabling Neglected Classes in Deep
Network Training for Multimedia Classification,
MMMod20(I:227-238).
Springer DOI
2003
BibRef
Subramanya, A.,
Pillai, V.,
Pirsiavash, H.,
Fooling Network Interpretation in Image Classification,
ICCV19(2020-2029)
IEEE DOI
2004
decision making, image classification,
learning (artificial intelligence), neural nets, Task analysis
BibRef
Liang, M.[Megan],
Palado, G.[Gabrielle],
Browne, W.N.[Will N.],
Identifying Simple Shapes to Classify the Big Picture,
IVCNZ19(1-6)
IEEE DOI
2004
evolutionary computation, feature extraction,
image classification, learning (artificial intelligence),
Learning Classifier Systems
BibRef
Huang, J.,
Qu, L.,
Jia, R.,
Zhao, B.,
O2U-Net: A Simple Noisy Label Detection Approach for Deep Neural
Networks,
ICCV19(3325-3333)
IEEE DOI
2004
learning (artificial intelligence), neural nets,
probability, deep neural networks, human annotations,
BibRef
Konuk, E.,
Smith, K.,
An Empirical Study of the Relation Between Network Architecture and
Complexity,
Preregister19(4597-4599)
IEEE DOI
2004
generalisation (artificial intelligence), image classification,
network architecture, preregistration submission,
complexity
BibRef
Iqbal, A.,
Gall, J.,
Level Selector Network for Optimizing Accuracy-Specificity Trade-Offs,
HVU19(1466-1473)
IEEE DOI
2004
directed graphs, image processing,
learning (artificial intelligence), video signal processing, Deep Learning
BibRef
Kang, S.,
Jung, H.,
Lee, S.,
Interpreting Undesirable Pixels for Image Classification on Black-Box
Models,
VXAI19(4250-4254)
IEEE DOI
2004
data visualisation, explanation, image classification,
image segmentation, neural nets, neural networks,
Interpretability
BibRef
Zhuang, J.,
Dvornek, N.C.,
Li, X.,
Yang, J.,
Duncan, J.,
Decision explanation and feature importance for invertible networks,
VXAI19(4235-4239)
IEEE DOI
2004
Code, Neural Networks.
WWW Link.
neural nets, pattern classification, linear classifier,
feature space, decision boundary, feature importance, Decision-Boundary
BibRef
Graziani, M.[Mara],
Müller, H.[Henning],
Andrearczyk, V.[Vincent],
Interpreting Intentionally Flawed Models with Linear Probes,
SDL-CV19(743-747)
IEEE DOI
2004
learning (artificial intelligence), pattern classification,
regression analysis, statistical irregularities, regression, linear probes
BibRef
Demidovskij, A.,
Gorbachev, Y.,
Fedorov, M.,
Slavutin, I.,
Tugarev, A.,
Fatekhov, M.,
Tarkan, Y.,
OpenVINO Deep Learning Workbench: Comprehensive Analysis and Tuning
of Neural Networks Inference,
SDL-CV19(783-787)
IEEE DOI
2004
interactive systems, learning (artificial intelligence),
neural nets, optimisation, user interfaces, hyper parameters, optimization
BibRef
Lazarow, J.,
Jin, L.,
Tu, Z.,
Introspective Neural Networks for Generative Modeling,
ICCV17(2793-2802)
IEEE DOI
1802
image classification, image representation, image texture,
neural nets, neurocontrollers, statistics, unsupervised learning,
Training
BibRef
Ren, J.[Jian],
Li, Z.[Zhe],
Yang, J.C.[Jian-Chao],
Xu, N.[Ning],
Yang, T.[Tianbao],
Foran, D.J.[David J.],
EIGEN: Ecologically-Inspired GENetic Approach for Neural Network
Structure Searching From Scratch,
CVPR19(9051-9060).
IEEE DOI
2002
BibRef
Liang, X.D.[Xiao-Dan],
Learning Personalized Modular Network Guided by Structured Knowledge,
CVPR19(8936-8944).
IEEE DOI
2002
BibRef
Zeng, W.Y.[Wen-Yuan],
Luo, W.J.[Wen-Jie],
Suo, S.[Simon],
Sadat, A.[Abbas],
Yang, B.[Bin],
Casas, S.[Sergio],
Urtasun, R.[Raquel],
End-To-End Interpretable Neural Motion Planner,
CVPR19(8652-8661).
IEEE DOI
2002
BibRef
Blanchard, N.[Nathaniel],
Kinnison, J.[Jeffery],
RichardWebster, B.[Brandon],
Bashivan, P.[Pouya],
Scheirer, W.J.[Walter J.],
A Neurobiological Evaluation Metric for Neural Network Model Search,
CVPR19(5399-5408).
IEEE DOI
2002
BibRef
Yu, L.[Lu],
Yazici, V.O.[Vacit Oguz],
Liu, X.L.[Xia-Lei],
van de Weijer, J.[Joost],
Cheng, Y.M.[Yong-Mei],
Ramisa, A.[Arnau],
Learning Metrics From Teachers: Compact Networks for Image Embedding,
CVPR19(2902-2911).
IEEE DOI
2002
BibRef
Ye, J.W.[Jing-Wen],
Ji, Y.X.[Yi-Xin],
Wang, X.C.[Xin-Chao],
Ou, K.[Kairi],
Tao, D.P.[Da-Peng],
Song, M.L.[Ming-Li],
Student Becoming the Master: Knowledge Amalgamation for Joint Scene
Parsing, Depth Estimation, and More,
CVPR19(2824-2833).
IEEE DOI
2002
Train one model that combines the knowledge of two other trained networks.
BibRef
Orekondy, T.[Tribhuvanesh],
Schiele, B.[Bernt],
Fritz, M.[Mario],
Knockoff Nets: Stealing Functionality of Black-Box Models,
CVPR19(4949-4958).
IEEE DOI
2002
BibRef
Morgado, P.[Pedro],
Vasconcelos, N.M.[Nuno M.],
NetTailor: Tuning the Architecture, Not Just the Weights,
CVPR19(3039-3049).
IEEE DOI
2002
BibRef
Stergiou, A.[Alexandros],
Kapidis, G.[Georgios],
Kalliatakis, G.[Grigorios],
Chrysoulas, C.[Christos],
Veltkamp, R.[Remco],
Poppe, R.[Ronald],
Saliency Tubes: Visual Explanations for Spatio-Temporal Convolutions,
ICIP19(1830-1834)
IEEE DOI
1910
3-D convolutions. How to visualize the results.
Visual explanations, explainable convolutions,
spatio-temporal feature representation
BibRef
Chen, Y.,
Saporta, A.,
Dapogny, A.,
Cord, M.,
Delving Deep into Interpreting Neural Nets with Piece-Wise Affine
Representation,
ICIP19(609-613)
IEEE DOI
1910
Deep learning, deep neural networks, attribution,
pixel contribution, bias
BibRef
Lee, J.,
Kim, S.T.,
Ro, Y.M.,
Probenet: Probing Deep Networks,
ICIP19(3821-3825)
IEEE DOI
1910
ProbeNet, Deep network probing, Deep network interpretation,
Human-understandable
BibRef
Buhrmester, V.[Vanessa],
Münch, D.[David],
Bulatov, D.[Dimitri],
Arens, M.[Michael],
Evaluating the Impact of Color Information in Deep Neural Networks,
IbPRIA19(I:302-316).
Springer DOI
1910
BibRef
Ghojogh, B.[Benyamin],
Karray, F.[Fakhri],
Crowley, M.[Mark],
Backprojection for Training Feedforward Neural Networks in the Input
and Feature Spaces,
ICIAR20(II:16-24).
Springer DOI
2007
BibRef
Tartaglione, E.[Enzo],
Grangetto, M.[Marco],
Take a Ramble into Solution Spaces for Classification Problems in
Neural Networks,
CIAP19(I:345-355).
Springer DOI
1909
BibRef
Yu, T.[Tao],
Long, H.[Huan],
Hopcroft, J.E.[John E.],
Curvature-based Comparison of Two Neural Networks,
ICPR18(441-447)
IEEE DOI
1812
Manifolds, Biological neural networks, Tensile stress, Measurement,
Matrix decomposition, Covariance matrices
BibRef
Malakhova, K.[Katerina],
Representation of Categories in Filters of Deep Neural Networks,
Cognitive18(2054-20542)
IEEE DOI
1812
Visualization, Face, Feature extraction, Detectors,
Biological neural networks, Neurons, Automobiles
BibRef
Kanbak, C.[Can],
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Frossard, P.[Pascal],
Geometric Robustness of Deep Networks: Analysis and Improvement,
CVPR18(4441-4449)
IEEE DOI
1812
Robustness, Manifolds, Additives, Training, Atmospheric measurements,
Particle measurements
BibRef
Fawzi, A.[Alhussein],
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Frossard, P.[Pascal],
Soatto, S.,
Empirical Study of the Topology and Geometry of Deep Networks,
CVPR18(3762-3770)
IEEE DOI
1812
Neural networks, Perturbation methods, Geometry, Network topology,
Topology, Robustness, Optimization
BibRef
Zhang, Z.M.[Zi-Ming],
Wu, Y.W.[Yuan-Wei],
Wang, G.H.[Guang-Hui],
BPGrad: Towards Global Optimality in Deep Learning via Branch and
Pruning,
CVPR18(3301-3309)
IEEE DOI
1812
Optimization, Linear programming, Upper bound,
Approximation algorithms, Biological neural networks, Convergence
BibRef
Palacio, S.,
Folz, J.,
Hees, J.,
Raue, F.,
Borth, D.,
Dengel, A.,
What do Deep Networks Like to See?,
CVPR18(3108-3117)
IEEE DOI
1812
Image reconstruction, Training, Neural networks, Decoding,
Task analysis, Convolution, Image coding
BibRef
Mac Aodha, O.[Oisin],
Su, S.,
Chen, Y.,
Perona, P.,
Yue, Y.,
Teaching Categories to Human Learners with Visual Explanations,
CVPR18(3820-3828)
IEEE DOI
1812
Education, Visualization, Task analysis, Adaptation models,
Mathematical model, Computational modeling
BibRef
Fong, R.,
Vedaldi, A.,
Net2Vec: Quantifying and Explaining How Concepts are Encoded by
Filters in Deep Neural Networks,
CVPR18(8730-8738)
IEEE DOI
1812
Semantics, Visualization, Image segmentation, Probes,
Neural networks, Task analysis, Training
BibRef
Mascharka, D.,
Tran, P.,
Soklaski, R.,
Majumdar, A.,
Transparency by Design: Closing the Gap Between Performance and
Interpretability in Visual Reasoning,
CVPR18(4942-4950)
IEEE DOI
1812
Visualization, Cognition, Task analysis, Neural networks,
Image color analysis, Knowledge discovery, Automobiles
BibRef
Wang, Y.,
Su, H.,
Zhang, B.,
Hu, X.,
Interpret Neural Networks by Identifying Critical Data Routing Paths,
CVPR18(8906-8914)
IEEE DOI
1812
Routing, Logic gates, Neural networks, Predictive models, Encoding,
Semantics, Analytical models
BibRef
Dong, Y.P.[Yin-Peng],
Su, H.[Hang],
Zhu, J.[Jun],
Zhang, B.[Bo],
Improving Interpretability of Deep Neural Networks with Semantic
Information,
CVPR17(975-983)
IEEE DOI
1711
Computational modeling, Decoding, Feature extraction, Neurons,
Semantics, Visualization
BibRef
Bau, D.,
Zhou, B.,
Khosla, A.,
Oliva, A.,
Torralba, A.B.,
Network Dissection:
Quantifying Interpretability of Deep Visual Representations,
CVPR17(3319-3327)
IEEE DOI
1711
Detectors, Image color analysis, Image segmentation, Semantics,
Training, Visualization
BibRef
Hu, R.H.[Rong-Hang],
Andreas, J.[Jacob],
Darrell, T.J.[Trevor J.],
Saenko, K.[Kate],
Explainable Neural Computation via Stack Neural Module Networks,
ECCV18(VII: 55-71).
Springer DOI
1810
BibRef
Rupprecht, C.,
Laina, I.,
Navab, N.,
Hager, G.D.,
Tombari, F.,
Guide Me: Interacting with Deep Networks,
CVPR18(8551-8561)
IEEE DOI
1812
Image segmentation, Visualization, Natural languages,
Task analysis, Semantics, Head, Training
BibRef
Khan, S.H.[Salman H.],
Hayat, M.[Munawar],
Porikli, F.M.[Fatih Murat],
Scene Categorization with Spectral Features,
ICCV17(5639-5649)
IEEE DOI
1802
Explain the network results.
feature extraction, image classification, image representation,
learning (artificial intelligence), natural scenes, transforms,
Transforms
BibRef
Worrall, D.E.[Daniel E.],
Garbin, S.J.[Stephan J.],
Turmukhambetov, D.[Daniyar],
Brostow, G.J.[Gabriel J.],
Interpretable Transformations with Encoder-Decoder Networks,
ICCV17(5737-5746)
IEEE DOI
1802
I.e. rotation effects. Explain results.
decoding, image coding, interpolation, transforms,
complex transformation encoding,
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
CNN Interpretation, Explanation, Understanding of Convolutional Neural Networks .