13.6.3.2 Explainable Artificial Intelligence

Explainable. Knowledge. Applied to CNNs especially:
See also Interpretation, Explanation, Understanding of Convolutional Neural Networks.

Wellman, M.P., Henrion, M.,
Explaining 'explaining away',
PAMI(15), No. 3, March 1993, pp. 287-292.
IEEE DOI 0401
BibRef

Montavon, G.[Grégoire], Lapuschkin, S.[Sebastian], Binder, A.[Alexander], Samek, W.[Wojciech], Müller, K.R.[Klaus-Robert],
Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition,
PR(65), No. 1, 2017, pp. 211-222.
Elsevier DOI 1702
Award, Pattern Recognition. Deep neural networks BibRef

Lapuschkin, S., Binder, A., Montavon, G.[Grégoire], Müller, K.R.[Klaus-Robert], Samek, W.[Wojciech],
Analyzing Classifiers: Fisher Vectors and Deep Neural Networks,
CVPR16(2912-2920)
IEEE DOI 1612
BibRef

Jung, A., Nardelli, P.H.J.,
An Information-Theoretic Approach to Personalized Explainable Machine Learning,
SPLetters(27), 2020, pp. 825-829.
IEEE DOI 2006
Predictive models, Data models, Probabilistic logic, Machine learning, Decision making, Linear regression, decision support systems BibRef

Muñoz-Romero, S.[Sergio], Gorostiaga, A.[Arantza], Soguero-Ruiz, C.[Cristina], Mora-Jiménez, I.[Inmaculada], Rojo-Álvarez, J.L.[José Luis],
Informative variable identifier: Expanding interpretability in feature selection,
PR(98), 2020, pp. 107077.
Elsevier DOI 1911
Feature selection, Interpretability, Explainable machine learning, Resampling, Classification BibRef

Kauffmann, J.[Jacob], Müller, K.R.[Klaus-Robert], Montavon, G.[Grégoire],
Towards explaining anomalies: A deep Taylor decomposition of one-class models,
PR(101), 2020, pp. 107198.
Elsevier DOI 2003
Outlier detection, Explainable machine learning, Deep Taylor decomposition, Kernel machines, Unsupervised learning BibRef

Yeom, S.K.[Seul-Ki], Seegerer, P.[Philipp], Lapuschkin, S.[Sebastian], Binder, A.[Alexander], Wiedemann, S.[Simon], Müller, K.R.[Klaus-Robert], Samek, W.[Wojciech],
Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning,
PR(115), 2021, pp. 107899.
Elsevier DOI 2104
Pruning, Layer-wise relevance propagation (LRP), Convolutional neural network (CNN), Interpretation of models, Explainable AI (XAI) BibRef

Pierrard, R.[Régis], Poli, J.P.[Jean-Philippe], Hudelot, C.[Céline],
Spatial relation learning for explainable image classification and annotation in critical applications,
AI(292), 2021, pp. 103434.
Elsevier DOI 2102
Explainable artificial intelligence, Relation learning, Fuzzy logic BibRef

Langer, M.[Markus], Oster, D.[Daniel], Speith, T.[Timo], Hermanns, H.[Holger], Kästner, L.[Lena], Schmidt, E.[Eva], Sesing, A.[Andreas], Baum, K.[Kevin],
What do we want from Explainable Artificial Intelligence (XAI)?: A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research,
AI(296), 2021, pp. 103473.
Elsevier DOI 2106
Explainable Artificial Intelligence, Explainability, Interpretability, Explanations, Understanding, Human-Computer Interaction BibRef

Rio-Torto, I.[Isabel], Fernandes, K.[Kelwin], Teixeira, L.F.[Luís F.],
Understanding the decisions of CNNs: An in-model approach,
PRL(133), 2020, pp. 373-380.
Elsevier DOI 2005
Explainable AI, Explainability, Interpretability, Deep Learning, Convolutional Neural Networks BibRef

Mokoena, T.[Tshepiso], Celik, T.[Turgay], Marivate, V.[Vukosi],
Why is this an anomaly? Explaining anomalies using sequential explanations,
PR(121), 2022, pp. 108227.
Elsevier DOI 2109
Outlier explanation, Sequential feature explanation, Sequential explanation, Anomaly validation, Explainable AI BibRef

Anjomshoae, S.[Sule], Omeiza, D.[Daniel], Jiang, L.[Lili],
Context-based image explanations for deep neural networks,
IVC(116), 2021, pp. 104310.
Elsevier DOI 2112
DNNs, Explainable AI, Contextual importance, Visual explanations BibRef

Sattarzadeh, S.[Sam], Sudhakar, M.[Mahesh], Plataniotis, K.N.[Konstantinos N.],
SVEA: A Small-scale Benchmark for Validating the Usability of Post-hoc Explainable AI Solutions in Image and Signal Recognition,
HTCV21(4141-4150)
IEEE DOI 2112
Performance evaluation, Visualization, Image recognition, Correlation, Machine learning, Benchmark testing BibRef

Riva, M.[Mateus], Gori, P.[Pietro], Yger, F.[Florian], Bloch, I.[Isabelle],
Is the U-NET Directional-Relationship Aware?,
ICIP22(3391-3395)
IEEE DOI 2211
Training data, Cognition, Task analysis, Standards, XAI, structural information, directional relationships, U-Net BibRef

Teney, D.[Damien], Peyrard, M.[Maxime], Abbasnejad, E.[Ehsan],
Predicting Is Not Understanding: Recognizing and Addressing Underspecification in Machine Learning,
ECCV22(XXIII:458-476).
Springer DOI 2211
BibRef

Sovatzidi, G.[Georgia], Vasilakakis, M.D.[Michael D.], Iakovidis, D.K.[Dimitris K.],
Automatic Fuzzy Graph Construction For Interpretable Image Classification,
ICIP22(3743-3747)
IEEE DOI 2211
Image edge detection, Semantics, Machine learning, Predictive models, Feature extraction, Interpretability BibRef

Chari, P.[Pradyumna], Ba, Y.H.[Yun-Hao], Athreya, S.[Shreeram], Kadambi, A.[Achuta],
MIME: Minority Inclusion for Majority Group Enhancement of AI Performance,
ECCV22(XIII:326-343).
Springer DOI 2211

WWW Link. BibRef

Deng, A.[Ailin], Li, S.[Shen], Xiong, M.[Miao], Chen, Z.[Zhirui], Hooi, B.[Bryan],
Trust, but Verify: Using Self-supervised Probing to Improve Trustworthiness,
ECCV22(XIII:361-377).
Springer DOI 2211
BibRef

Rymarczyk, D.[Dawid], Struski, L.[Lukasz], Górszczak, M.[Michal], Lewandowska, K.[Koryna], Tabor, J.[Jacek], Zielinski, B.[Bartosz],
Interpretable Image Classification with Differentiable Prototypes Assignment,
ECCV22(XII:351-368).
Springer DOI 2211
BibRef

Vandenhende, S.[Simon], Mahajan, D.[Dhruv], Radenovic, F.[Filip], Ghadiyaram, D.[Deepti],
Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals,
ECCV22(XII:261-279).
Springer DOI 2211

WWW Link. BibRef

Kim, S.S.Y.[Sunnie S. Y.], Meister, N.[Nicole], Ramaswamy, V.V.[Vikram V.], Fong, R.[Ruth], Russakovsky, O.[Olga],
HIVE: Evaluating the Human Interpretability of Visual Explanations,
ECCV22(XII:280-298).
Springer DOI 2211
BibRef

Jacob, P.[Paul], Zablocki, É.[Éloi], Ben-Younes, H.[Hédi], Chen, M.[Mickaël], Pérez, P.[Patrick], Cord, M.[Matthieu],
STEEX: Steering Counterfactual Explanations with Semantics,
ECCV22(XII:387-403).
Springer DOI 2211
BibRef

Machiraju, G.[Gautam], Plevritis, S.[Sylvia], Mallick, P.[Parag],
A Dataset Generation Framework for Evaluating Megapixel Image Classifiers and Their Explanations,
ECCV22(XII:422-442).
Springer DOI 2211
BibRef

Kolek, S.[Stefan], Nguyen, D.A.[Duc Anh], Levie, R.[Ron], Bruna, J.[Joan], Kutyniok, G.[Gitta],
Cartoon Explanations of Image Classifiers,
ECCV22(XII:443-458).
Springer DOI 2211
BibRef

Motzkus, F.[Franz], Weber, L.[Leander], Lapuschkin, S.[Sebastian],
Measurably Stronger Explanation Reliability Via Model Canonization,
ICIP22(516-520)
IEEE DOI 2211
Location awareness, Deep learning, Visualization, Current measurement, Neural networks, Network architecture BibRef

Yang, G.[Guang], Rao, A.[Arvind], Fernandez-Maloigne, C.[Christine], Calhoun, V.[Vince], Menegaz, G.[Gloria],
Explainable AI (XAI) In Biomedical Signal and Image Processing: Promises and Challenges,
ICIP22(1531-1535)
IEEE DOI 2211
Deep learning, Image segmentation, Special issues and sections, Biological system modeling, Signal processing, Data models, Biomedical Data BibRef

Paiss, R.[Roni], Chefer, H.[Hila], Wolf, L.B.[Lior B.],
No Token Left Behind: Explainability-Aided Image Classification and Generation,
ECCV22(XII:334-350).
Springer DOI 2211
BibRef

Khorram, S.[Saeed], Li, F.X.[Fu-Xin],
Cycle-Consistent Counterfactuals by Latent Transformations,
CVPR22(10193-10202)
IEEE DOI 2210
Finds images similar to the query image that change the classifier's decision. Training, Measurement, Visualization, Image resolution, Machine vision, Computational modeling, Explainable computer vision BibRef

Hepburn, A.[Alexander], Santos-Rodriguez, R.[Raul],
Explainers in the Wild: Making Surrogate Explainers Robust to Distortions Through Perception,
ICIP21(3717-3721)
IEEE DOI 2201
Training, Measurement, Image processing, Predictive models, Distortion, Robustness, Explainability, surrogates, perception BibRef

Palacio, S.[Sebastian], Lucieri, A.[Adriano], Munir, M.[Mohsin], Ahmed, S.[Sheraz], Hees, J.[Jörn], Dengel, A.[Andreas],
XAI Handbook: Towards a Unified Framework for Explainable AI,
RPRMI21(3759-3768)
IEEE DOI 2112
Measurement, Terminology, Pipelines, Market research, Concrete BibRef

Vierling, A.[Axel], James, C.[Charu], Berns, K.[Karsten], Katsaouni, N.[Nikoletta],
Provable Translational Robustness for Object Detection With Convolutional Neural Networks,
ICIP21(694-698)
IEEE DOI 2201
Training, Support vector machines, Analytical models, Scattering, Object detection, Detectors, Feature extraction, Explainable AI BibRef

Ortega, A.[Alfonso], Fierrez, J.[Julian], Morales, A.[Aythami], Wang, Z.L.[Zi-Long], Ribeiro, T.[Tony],
Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment,
WACVW21(78-87) Explainable and Interpretable AI
IEEE DOI 2105
Training, Machine learning algorithms, Biometrics (access control), Resumes, Neural networks, Tools BibRef

Kwon, H.J.[Hyuk Jin], Koo, H.I.[Hyung Il], Cho, N.I.[Nam Ik],
Improving Explainability of Integrated Gradients with Guided Non-Linearity,
ICPR21(385-391)
IEEE DOI 2105
Measurement, Heating systems, Visualization, Gradient methods, Action potentials, Perturbation methods, Neurons BibRef

Fuhl, W.[Wolfgang], Rong, Y.[Yao], Motz, T.[Thomas], Scheidt, M.[Michael], Hartel, A.[Andreas], Koch, A.[Andreas], Kasneci, E.[Enkelejda],
Explainable Online Validation of Machine Learning Models for Practical Applications,
ICPR21(3304-3311)
IEEE DOI 2105
Machine learning algorithms, Microcontrollers, Memory management, Data acquisition, Training data, Transforms, Machine learning BibRef

Mänttäri, J.[Joonatan], Broomé, S.[Sofia], Folkesson, J.[John], Kjellström, H.[Hedvig],
Interpreting Video Features: A Comparison of 3d Convolutional Networks and Convolutional LSTM Networks,
ACCV20(V:411-426).
Springer DOI 2103

See also Interpretable Explanations of Black Boxes by Meaningful Perturbation. BibRef

Oussalah, M.[Mourad],
AI Explainability: A Bridge Between Machine Vision and Natural Language Processing,
EDL-AI20(257-273).
Springer DOI 2103
BibRef

Petkovic, D., Alavi, A., Cai, D., Wong, M.,
Random Forest Model and Sample Explainer for Non-experts in Machine Learning: Two Case Studies,
EDL-AI20(62-75).
Springer DOI 2103
BibRef

Muddamsetty, S.M.[Satya M.], Jahromi, M.N.S.[Mohammad N. S.], Moeslund, T.B.[Thomas B.],
Expert Level Evaluations for Explainable AI (XAI) Methods in the Medical Domain,
EDL-AI20(35-46).
Springer DOI 2103
BibRef

Muddamsetty, S.M., Jahromi, M.N.S., Moeslund, T.B.,
SIDU: Similarity Difference And Uniqueness Method for Explainable AI,
ICIP20(3269-3273)
IEEE DOI 2011
Visualization, Predictive models, Machine learning, Computational modeling, Measurement, Task analysis, Explainable AI, CNN BibRef

Sun, Y.C.[You-Cheng], Chockler, H.[Hana], Huang, X.W.[Xiao-Wei], Kroening, D.[Daniel],
Explaining Image Classifiers Using Statistical Fault Localization,
ECCV20(XXVIII:391-406).
Springer DOI 2011
BibRef

Choi, H., Som, A., Turaga, P.,
AMC-Loss: Angular Margin Contrastive Loss for Improved Explainability in Image Classification,
Diff-CVML20(3659-3666)
IEEE DOI 2008
Training, Task analysis, Feature extraction, Euclidean distance, Airplanes, Media BibRef

Parafita, Á., Vitrià, J.,
Explaining Visual Models by Causal Attribution,
VXAI19(4167-4175)
IEEE DOI 2004
data handling, feature extraction, intervened causal model, causal attribution, visual models, image generative models, learning BibRef

Schlegel, U., Arnout, H., El-Assady, M., Oelke, D., Keim, D.A.,
Towards A Rigorous Evaluation Of XAI Methods On Time Series,
VXAI19(4197-4201)
IEEE DOI 2004
image processing, learning (artificial intelligence), text analysis, time series, SHAP, image domain, text-domain, explainable-ai-evaluation BibRef

Fong, R.C.[Ruth C.], Vedaldi, A.[Andrea],
Interpretable Explanations of Black Boxes by Meaningful Perturbation,
ICCV17(3449-3457)
IEEE DOI 1802
Explain the result of learning. image classification, learning (artificial intelligence), black box algorithm, black boxes, classifier decision, Visualization BibRef

Hossam, M.[Mahmoud], Le, T.[Trung], Zhao, H.[He], Phung, D.[Dinh],
Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability,
ICPR21(8922-8928)
IEEE DOI 2105
Training, Deep learning, Computational modeling, Perturbation methods, Text categorization, Natural languages, Training data BibRef

Plummer, B.A.[Bryan A.], Vasileva, M.I.[Mariya I.], Petsiuk, V.[Vitali], Saenko, K.[Kate], Forsyth, D.A.[David A.],
Why Do These Match? Explaining the Behavior of Image Similarity Models,
ECCV20(XI:652-669).
Springer DOI 2011
BibRef

Cheng, X., Rao, Z., Chen, Y., Zhang, Q.,
Explaining Knowledge Distillation by Quantifying the Knowledge,
CVPR20(12922-12932)
IEEE DOI 2008
Visualization, Task analysis, Measurement, Knowledge engineering, Optimization, Entropy, Neural networks BibRef

Chen, Y.,
Nonparametric Learning Via Successive Subspace Modeling (SSM),
ICIP19(3031-3032)
IEEE DOI 1910
Machine Learning, Explainable Machine Learning, Nonparametric Learning, Subspace Modeling, Successive Subspace Modeling BibRef

Shi, J.X.[Jia-Xin], Zhang, H.W.[Han-Wang], Li, J.Z.[Juan-Zi],
Explainable and Explicit Visual Reasoning Over Scene Graphs,
CVPR19(8368-8376).
IEEE DOI 2002
BibRef

Chapter on Matching and Recognition Using Volumes, High Level Vision Techniques, Invariants continues in
Constraint Based Matching.


Last update: Jan 23, 2023 at 16:42:47