14.5.8.7.1 Countering Adversarial Attacks, Defense, Robustness

Adversarial Networks. Generative Networks. See also Adversarial Networks, Adversarial Inputs, Generative Adversarial.

Miller, D.J., Xiang, Z., Kesidis, G.,
Adversarial Learning Targeting Deep Neural Network Classification: A Comprehensive Review of Defenses Against Attacks,
PIEEE(108), No. 3, March 2020, pp. 402-433.
IEEE DOI 2003
Training data, Neural networks, Reverse engineering, Machine learning, Robustness, Feature extraction, white box BibRef

Ozbulak, U.[Utku], Gasparyan, M.[Manvel], De Neve, W.[Wesley], Van Messem, A.[Arnout],
Perturbation analysis of gradient-based adversarial attacks,
PRL(135), 2020, pp. 313-320.
Elsevier DOI 2006
Adversarial attacks, Adversarial examples, Deep learning, Perturbation analysis BibRef

Amini, S., Ghaemmaghami, S.,
Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations,
MultMed(22), No. 7, July 2020, pp. 1889-1903.
IEEE DOI 2007
Robustness, Perturbation methods, Training, Deep learning, Computer architecture, Neural networks, Signal to noise ratio, interpretable BibRef

Li, X.R.[Xu-Rong], Ji, S.L.[Shou-Ling], Ji, J.T.[Jun-Tao], Ren, Z.Y.[Zhen-Yu], Wu, C.M.[Chun-Ming], Li, B.[Bo], Wang, T.[Ting],
Adversarial examples detection through the sensitivity in space mappings,
IET-CV(14), No. 5, August 2020, pp. 201-213.
DOI Link 2007
BibRef


Machiraju, H.[Harshitha], Balasubramanian, V.N.[Vineeth N],
A Little Fog for a Large Turn,
WACV20(2891-2900)
IEEE DOI 2006
Perturbation methods, Meteorology, Autonomous robots, Task analysis, Data models, Predictive models, Robustness BibRef

Zhao, Y., Tian, Y., Fowlkes, C., Shen, W., Yuille, A.L.,
Resisting Large Data Variations via Introspective Transformation Network,
WACV20(3069-3078)
IEEE DOI 2006
Training, Testing, Robustness, Training data, Linear programming, Resists BibRef

Kim, D.[Donghyun], Bargal, S.A.[Sarah Adel], Zhang, J.M.[Jian-Ming], Sclaroff, S.[Stan],
Multi-way Encoding for Robustness,
WACV20(1341-1349)
IEEE DOI 2006
To counter adversarial attacks. Encoding, Robustness, Perturbation methods, Training, Biological system modeling, Neurons, Correlation BibRef

Folz, J., Palacio, S., Hees, J., Dengel, A.,
Adversarial Defense based on Structure-to-Signal Autoencoders,
WACV20(3568-3577)
IEEE DOI 2006
Perturbation methods, Semantics, Robustness, Predictive models, Training, Decoding, Neural networks BibRef

Peterson, J.[Joshua], Battleday, R.[Ruairidh], Griffiths, T.[Thomas], Russakovsky, O.[Olga],
Human Uncertainty Makes Classification More Robust,
ICCV19(9616-9625)
IEEE DOI 2004
CIFAR10H dataset. To make deep networks robust to adversarial attacks. convolutional neural nets, learning (artificial intelligence), pattern classification, classification performance, Dogs BibRef

Wang, J., Zhang, H.,
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks,
ICCV19(6628-6637)
IEEE DOI 2004
entropy, learning (artificial intelligence), neural nets, security of data, adversarial attacks, Data models BibRef

Ye, S., Xu, K., Liu, S., Cheng, H., Lambrechts, J., Zhang, H., Zhou, A., Ma, K., Wang, Y., Lin, X.,
Adversarial Robustness vs. Model Compression, or Both?,
ICCV19(111-120)
IEEE DOI 2004
minimax techniques, neural nets, security of data, adversarial attacks, concurrent adversarial training BibRef

Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Fawzi, A.[Alhussein], Uesato, J.[Jonathan], Frossard, P.[Pascal],
Robustness via Curvature Regularization, and Vice Versa,
CVPR19(9070-9078).
IEEE DOI 2002
Adversarial training leads to more linear boundaries. BibRef
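The adversarial-training procedure referenced in the annotation above can be sketched with plain NumPy on a toy logistic-regression model. This is an illustrative sketch only, not the authors' implementation: the data, step sizes, and the FGSM-style perturbation are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary data (hypothetical, for illustration only).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eps, lr = 0.1, 0.5
for _ in range(100):
    p = sigmoid(X @ w)
    # FGSM-style inner step: move each input along the sign of the
    # input-gradient of the loss, then take a training step on the
    # perturbed batch instead of the clean one.
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

clean_acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

Training on worst-case perturbed inputs is what flattens the decision boundary near the data, which is the curvature effect the paper studies.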

Xie, C.[Cihang], Wu, Y.X.[Yu-Xin], van der Maaten, L.[Laurens], Yuille, A.L.[Alan L.], He, K.M.[Kai-Ming],
Feature Denoising for Improving Adversarial Robustness,
CVPR19(501-509).
IEEE DOI 2002
BibRef

He, Z.[Zhezhi], Rakin, A.S.[Adnan Siraj], Fan, D.L.[De-Liang],
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack,
CVPR19(588-597).
IEEE DOI 2002
BibRef

Kaneko, T.[Takuhiro], Ushiku, Y.[Yoshitaka], Harada, T.[Tatsuya],
Label-Noise Robust Generative Adversarial Networks,
CVPR19(2462-2471).
IEEE DOI 2002
BibRef

Stutz, D.[David], Hein, M.[Matthias], Schiele, B.[Bernt],
Disentangling Adversarial Robustness and Generalization,
CVPR19(6969-6980).
IEEE DOI 2002
BibRef

Miyazato, S., Wang, X., Yamasaki, T., Aizawa, K.,
Reinforcing the Robustness of a Deep Neural Network to Adversarial Examples by Using Color Quantization of Training Image Data,
ICIP19(884-888)
IEEE DOI 1910
convolutional neural network, adversarial example, color quantization BibRef
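The color-quantization defense named in the entry above amounts to bit-depth reduction of the training images. A minimal sketch, assuming uint8 images and an evenly spaced palette (the number of levels is an assumption, not the paper's setting):

```python
import numpy as np

def color_quantize(img: np.ndarray, levels: int = 8) -> np.ndarray:
    """Snap each uint8 channel value to one of `levels` evenly spaced bins,
    discarding the low-amplitude variation that small adversarial
    perturbations occupy."""
    step = 256 // levels
    return (img // step) * step + step // 2

img = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
q = color_quantize(img)
```

After quantization every channel takes at most `levels` distinct values, so an L-infinity perturbation smaller than half the bin width is erased.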

Ramanathan, T., Manimaran, A., You, S., Kuo, C.J.,
Robustness of Saak Transform Against Adversarial Attacks,
ICIP19(2531-2535)
IEEE DOI 1910
Saak transform, Adversarial attacks, Deep Neural Networks, Image Classification BibRef

Yang, C.H., Liu, Y., Chen, P., Ma, X., Tsai, Y.J.,
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks,
ICIP19(3811-3815)
IEEE DOI 1910
Causal Reasoning, Adversarial Example, Adversarial Robustness, Interpretable Deep Learning, Visual Reasoning BibRef

Prakash, A., Moran, N., Garber, S., DiLillo, A., Storer, J.,
Deflecting Adversarial Attacks with Pixel Deflection,
CVPR18(8571-8580)
IEEE DOI 1812
Perturbation methods, Transforms, Minimization, Robustness, Noise reduction, Training, Computer vision BibRef
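The pixel-deflection idea in the entry above replaces randomly chosen pixels with a random neighbor from a small window. A minimal sketch under stated assumptions (window size and deflection count are hypothetical; the paper additionally applies wavelet denoising afterwards, omitted here):

```python
import numpy as np

def pixel_deflection(img: np.ndarray, n_deflections: int = 100,
                     window: int = 3, rng=None) -> np.ndarray:
    """Replace randomly chosen pixels with a random pixel drawn from a
    small surrounding window, disrupting pixel-precise perturbations
    while leaving most of the image intact."""
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_deflections):
        x, y = rng.integers(h), rng.integers(w)
        dx, dy = rng.integers(-window, window + 1, size=2)
        nx = int(np.clip(x + dx, 0, h - 1))
        ny = int(np.clip(y + dy, 0, w - 1))
        out[x, y] = img[nx, ny]
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
defended = pixel_deflection(img, n_deflections=20,
                            rng=np.random.default_rng(0))
```

Because natural images are locally smooth, most deflections are benign, while an attack that depends on exact pixel values is degraded.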

Mummadi, C.K., Brox, T., Metzen, J.H.,
Defending Against Universal Perturbations With Shared Adversarial Training,
ICCV19(4927-4936)
IEEE DOI 2004
image classification, image segmentation, neural nets, universal perturbations, shared adversarial training, Computational modeling BibRef

Chen, H., Liang, J., Chang, S., Pan, J., Chen, Y., Wei, W., Juan, D.,
Improving Adversarial Robustness via Guided Complement Entropy,
ICCV19(4880-4888)
IEEE DOI 2004
entropy, learning (artificial intelligence), neural nets, probability, adversarial defense, adversarial robustness, BibRef

Bai, Y., Feng, Y., Wang, Y., Dai, T., Xia, S., Jiang, Y.,
Hilbert-Based Generative Defense for Adversarial Examples,
ICCV19(4783-4792)
IEEE DOI 2004
feature extraction, Hilbert transforms, neural nets, security of data, scan mode, advanced Hilbert curve scan order BibRef

Jang, Y., Zhao, T., Hong, S., Lee, H.,
Adversarial Defense via Learning to Generate Diverse Attacks,
ICCV19(2740-2749)
IEEE DOI 2004
learning (artificial intelligence), neural nets, pattern classification, security of data, adversarial defense, Machine learning BibRef

Mustafa, A., Khan, S., Hayat, M., Goecke, R., Shen, J., Shao, L.,
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks,
ICCV19(3384-3393)
IEEE DOI 2004
convolutional neural nets, feature extraction, image classification, image representation, Iterative methods BibRef

Taran, O.[Olga], Rezaeifar, S.[Shideh], Holotyak, T.[Taras], Voloshynovskiy, S.[Slava],
Defending Against Adversarial Attacks by Randomized Diversification,
CVPR19(11218-11225).
IEEE DOI 2002
BibRef

Sun, B.[Bo], Tsai, N.H.[Nian-Hsuan], Liu, F.C.[Fang-Chen], Yu, R.[Ronald], Su, H.[Hao],
Adversarial Defense by Stratified Convolutional Sparse Coding,
CVPR19(11439-11448).
IEEE DOI 2002
BibRef

Ho, C.H.[Chih-Hui], Leung, B.[Brandon], Sandstrom, E.[Erik], Chang, Y.[Yen], Vasconcelos, N.M.[Nuno M.],
Catastrophic Child's Play: Easy to Perform, Hard to Defend Adversarial Attacks,
CVPR19(9221-9229).
IEEE DOI 2002
BibRef

Dubey, A.[Abhimanyu], van der Maaten, L.[Laurens], Yalniz, Z.[Zeki], Li, Y.[Yixuan], Mahajan, D.[Dhruv],
Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor Search,
CVPR19(8759-8768).
IEEE DOI 2002
BibRef

Dong, Y.P.[Yin-Peng], Pang, T.[Tianyu], Su, H.[Hang], Zhu, J.[Jun],
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks,
CVPR19(4307-4316).
IEEE DOI 2002
BibRef

Rony, J.[Jerome], Hafemann, L.G.[Luiz G.], Oliveira, L.S.[Luiz S.], Ben Ayed, I.[Ismail], Sabourin, R.[Robert], Granger, E.[Eric],
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses,
CVPR19(4317-4325).
IEEE DOI 2002
BibRef

Qiu, Y.X.[Yu-Xian], Leng, J.W.[Jing-Wen], Guo, C.[Cong], Chen, Q.[Quan], Li, C.[Chao], Guo, M.[Minyi], Zhu, Y.H.[Yu-Hao],
Adversarial Defense Through Network Profiling Based Path Extraction,
CVPR19(4772-4781).
IEEE DOI 2002
BibRef

Jia, X.J.[Xiao-Jun], Wei, X.X.[Xing-Xing], Cao, X.C.[Xiao-Chun], Foroosh, H.[Hassan],
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples,
CVPR19(6077-6085).
IEEE DOI 2002
BibRef

Raff, E.[Edward], Sylvester, J.[Jared], Forsyth, S.[Steven], McLean, M.[Mark],
Barrage of Random Transforms for Adversarially Robust Defense,
CVPR19(6521-6530).
IEEE DOI 2002
BibRef

Theagarajan, R.[Rajkumar], Chen, M.[Ming], Bhanu, B.[Bir], Zhang, J.[Jing],
ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness,
CVPR19(6981-6989).
IEEE DOI 2002
BibRef

Yao, H., Regan, M., Yang, Y., Ren, Y.,
Image Decomposition and Classification Through a Generative Model,
ICIP19(400-404)
IEEE DOI 1910
Generative model, classification, adversarial defense BibRef

Ji, J., Zhong, B., Ma, K.,
Multi-Scale Defense of Adversarial Images,
ICIP19(4070-4074)
IEEE DOI 1910
deep learning, adversarial images, defense, multi-scale, image evolution BibRef

Agarwal, C., Nguyen, A., Schonfeld, D.,
Improving Robustness to Adversarial Examples by Encouraging Discriminative Features,
ICIP19(3801-3805)
IEEE DOI 1910
Adversarial Machine Learning, Robustness, Defenses, Deep Learning BibRef

Taran, O.[Olga], Rezaeifar, S.[Shideh], Voloshynovskiy, S.[Slava],
Bridging Machine Learning and Cryptography in Defence Against Adversarial Attacks,
Objectionable18(II:267-279).
Springer DOI 1905
BibRef

Naseer, M., Khan, S., Porikli, F.,
Local Gradients Smoothing: Defense Against Localized Adversarial Attacks,
WACV19(1300-1307)
IEEE DOI 1904
data compression, feature extraction, gradient methods, image classification, image coding, image representation, High frequency BibRef

Akhtar, N., Liu, J., Mian, A.,
Defense Against Universal Adversarial Perturbations,
CVPR18(3389-3398)
IEEE DOI 1812
Perturbation methods, Training, Computational modeling, Detectors, Neural networks, Robustness, Integrated circuits BibRef

Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., Zhu, J.,
Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser,
CVPR18(1778-1787)
IEEE DOI 1812
Training, Perturbation methods, Noise reduction, Image reconstruction, Predictive models, Neural networks, Adaptation models BibRef

Behpour, S., Xing, W., Ziebart, B.D.,
ARC: Adversarial Robust Cuts for Semi-Supervised and Multi-label Classification,
WiCV18(1986-19862)
IEEE DOI 1812
Markov random fields, Task analysis, Training, Testing, Support vector machines, Fasteners, Games BibRef

Karim, R., Islam, M.A., Mohammed, N., Bruce, N.D.B.,
On the Robustness of Deep Learning Models to Universal Adversarial Attack,
CRV18(55-62)
IEEE DOI 1812
Perturbation methods, Computational modeling, Neural networks, Task analysis, Image segmentation, Data models, Semantics, Semantic Segmentation BibRef

Jakubovitz, D.[Daniel], Giryes, R.[Raja],
Improving DNN Robustness to Adversarial Attacks Using Jacobian Regularization,
ECCV18(XII: 525-541).
Springer DOI 1810
BibRef

Rozsa, A., Gunther, M., Boult, T.E.,
Towards Robust Deep Neural Networks with BANG,
WACV18(803-811)
IEEE DOI 1806
image processing, learning (artificial intelligence), neural nets, BANG technique, adversarial image utilization, Training BibRef

Lu, J., Issaranon, T., Forsyth, D.A.,
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly,
ICCV17(446-454)
IEEE DOI 1802
image colour analysis, image reconstruction, learning (artificial intelligence), neural nets, BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Bayesian Learning, Bayes Network, Bayesian Networks.


Last update: Aug 4, 2020 at 13:31:31