Miller, D.J.,
Xiang, Z.,
Kesidis, G.,
Adversarial Learning Targeting Deep Neural Network Classification:
A Comprehensive Review of Defenses Against Attacks,
PIEEE(108), No. 3, March 2020, pp. 402-433.
IEEE DOI
2003
Training data, Neural networks, Reverse engineering,
Machine learning, Robustness, Feature extraction,
white box
BibRef
Ozbulak, U.[Utku],
Gasparyan, M.[Manvel],
de Neve, W.[Wesley],
van Messem, A.[Arnout],
Perturbation analysis of gradient-based adversarial attacks,
PRL(135), 2020, pp. 313-320.
Elsevier DOI
2006
Adversarial attacks, Adversarial examples, Deep learning, Perturbation analysis
BibRef
Amini, S.,
Ghaemmaghami, S.,
Towards Improving Robustness of Deep Neural Networks to Adversarial
Perturbations,
MultMed(22), No. 7, July 2020, pp. 1889-1903.
IEEE DOI
2007
Robustness, Perturbation methods, Training, Deep learning,
Computer architecture, Neural networks, Signal to noise ratio,
interpretable
BibRef
Li, X.R.[Xu-Rong],
Ji, S.L.[Shou-Ling],
Ji, J.T.[Jun-Tao],
Ren, Z.Y.[Zhen-Yu],
Wu, C.M.[Chun-Ming],
Li, B.[Bo],
Wang, T.[Ting],
Adversarial examples detection through the sensitivity in space
mappings,
IET-CV(14), No. 5, August 2020, pp. 201-213.
DOI Link
2007
BibRef
Li, H.,
Zeng, Y.,
Li, G.,
Lin, L.,
Yu, Y.,
Online Alternate Generator Against Adversarial Attacks,
IP(29), 2020, pp. 9305-9315.
IEEE DOI
2010
Generators, Training, Perturbation methods, Knowledge engineering,
Convolutional neural networks, Deep learning, image classification
BibRef
Ma, X.J.[Xing-Jun],
Niu, Y.[Yuhao],
Gu, L.[Lin],
Wang, Y.[Yisen],
Zhao, Y.T.[Yi-Tian],
Bailey, J.[James],
Lu, F.[Feng],
Understanding adversarial attacks on deep learning based medical
image analysis systems,
PR(110), 2021, pp. 107332.
Elsevier DOI
2011
Adversarial attack, Adversarial example detection,
Medical image analysis, Deep learning
BibRef
Zhou, M.[Mo],
Niu, Z.X.[Zhen-Xing],
Wang, L.[Le],
Zhang, Q.L.[Qi-Lin],
Hua, G.[Gang],
Adversarial Ranking Attack and Defense,
ECCV20(XIV:781-799).
Springer DOI
2011
BibRef
Wan, S.[Sheng],
Wu, T.Y.[Tung-Yu],
Hsu, H.W.[Heng-Wei],
Wong, W.H.[Wing Hung],
Lee, C.Y.[Chen-Yi],
Feature Consistency Training With JPEG Compressed Images,
CirSysVideo(30), No. 12, December 2020, pp. 4769-4780.
IEEE DOI
2012
Deep neural networks are vulnerable to JPEG compression artifacts; see the preprocessing sketch after this entry.
Image coding, Distortion, Training, Transform coding, Robustness,
Quantization (signal), Feature extraction, Compression artifacts,
classification robustness
BibRef
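
The entry above trains features to be consistent between original and JPEG-compressed inputs. As a rough illustration of the input transform such training pairs are built from (not code from the paper; the quality value and function name are placeholders), a JPEG round-trip in Python might look like:

```python
# Hypothetical sketch: JPEG re-encoding as an input transform, in the
# spirit of compression-based robustness training (quality=75 is an
# arbitrary choice, not a value from Wan et al.).
import io
from PIL import Image

def jpeg_recompress(image: Image.Image, quality: int = 75) -> Image.Image:
    """Round-trip an image through JPEG at the given quality."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()
```

A consistency loss would then compare features of the original image and of jpeg_recompress(image).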
Dong, X.S.[Xin-Shuai],
Liu, H.[Hong],
Ji, R.R.[Rong-Rong],
Cao, L.J.[Liu-Juan],
Ye, Q.X.[Qi-Xiang],
Liu, J.Z.[Jian-Zhuang],
Tian, Q.[Qi],
API-net: Robust Generative Classifier via a Single Discriminator,
ECCV20(XIII:379-394).
Springer DOI
2011
BibRef
Li, Y.W.[Ying-Wei],
Bai, S.[Song],
Xie, C.H.[Ci-Hang],
Liao, Z.Y.[Zhen-Yu],
Shen, X.H.[Xiao-Hui],
Yuille, A.L.[Alan L.],
Regional Homogeneity: Towards Learning Transferable Universal
Adversarial Perturbations Against Defenses,
ECCV20(XI:795-813).
Springer DOI
2011
BibRef
Liu, A.S.[Ai-Shan],
Huang, T.R.[Tai-Ran],
Liu, X.L.[Xiang-Long],
Xu, Y.T.[Yi-Tao],
Ma, Y.Q.[Yu-Qing],
Chen, X.[Xinyun],
Maybank, S.J.[Stephen J.],
Tao, D.C.[Da-Cheng],
Spatiotemporal Attacks for Embodied Agents,
ECCV20(XVII:122-138).
Springer DOI
2011
Code, Adversarial Attack.
WWW Link.
BibRef
Fan, Y.[Yanbo],
Wu, B.Y.[Bao-Yuan],
Li, T.H.[Tuan-Hui],
Zhang, Y.[Yong],
Li, M.Y.[Ming-Yang],
Li, Z.F.[Zhi-Feng],
Yang, Y.[Yujiu],
Sparse Adversarial Attack via Perturbation Factorization,
ECCV20(XXII:35-50).
Springer DOI
2011
BibRef
Guo, J.F.[Jun-Feng],
Liu, C.[Cong],
Practical Poisoning Attacks on Neural Networks,
ECCV20(XXVII:142-158).
Springer DOI
2011
BibRef
Shao, R.[Rui],
Perera, P.[Pramuditha],
Yuen, P.C.[Pong C.],
Patel, V.M.[Vishal M.],
Open-set Adversarial Defense,
ECCV20(XVII:682-698).
Springer DOI
2011
BibRef
Bui, A.[Anh],
Le, T.[Trung],
Zhao, H.[He],
Montague, P.[Paul],
de Vel, O.[Olivier],
Abraham, T.[Tamas],
Phung, D.[Dinh],
Improving Adversarial Robustness by Enforcing Local and Global
Compactness,
ECCV20(XXVII:209-223).
Springer DOI
2011
BibRef
Xu, J.,
Li, Y.,
Jiang, Y.,
Xia, S.T.,
Adversarial Defense Via Local Flatness Regularization,
ICIP20(2196-2200)
IEEE DOI
2011
Training, Standards, Perturbation methods, Robustness, Visualization,
Linearity, Taylor series, adversarial defense,
gradient-based regularization
BibRef
Sadhukhan, R.,
Saha, A.,
Mukhopadhyay, J.,
Patra, A.,
Knowledge Distillation Inspired Fine-Tuning of Tucker Decomposed CNNs
and Adversarial Robustness Analysis,
ICIP20(1876-1880)
IEEE DOI
2011
Robustness, Knowledge engineering, Convolution, Tensile stress,
Neural networks, Perturbation methods, Acceleration,
Adversarial Robustness
BibRef
Maung, M.,
Pyone, A.,
Kiya, H.,
Encryption Inspired Adversarial Defense For Visual Classification,
ICIP20(1681-1685)
IEEE DOI
2011
Training, Transforms, Encryption, Perturbation methods,
Computer vision, Machine learning, Adversarial defense,
perceptual image encryption
BibRef
Cui, W.,
Li, X.,
Huang, J.,
Wang, W.,
Wang, S.,
Chen, J.,
Substitute Model Generation for Black-Box Adversarial Attack Based on
Knowledge Distillation,
ICIP20(648-652)
IEEE DOI
2011
Perturbation methods, Task analysis, Training,
Computational modeling, Approximation algorithms,
black-box models
BibRef
Shah, S.A.A.,
Bougre, M.,
Akhtar, N.,
Bennamoun, M.,
Zhang, L.,
Efficient Detection of Pixel-Level Adversarial Attacks,
ICIP20(718-722)
IEEE DOI
2011
Robots, Training, Perturbation methods, Machine learning, Robustness,
Task analysis, Testing, Adversarial attack, perturbation detection,
deep learning
BibRef
Deng, Y.,
Karam, L.J.,
Universal Adversarial Attack Via Enhanced Projected Gradient Descent,
ICIP20(1241-1245)
IEEE DOI
2011
Perturbation methods, Computational modeling, Training,
Computer architecture, Convolutional neural networks,
projected gradient descent (PGD)
BibRef
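
The entry above builds a universal attack on projected gradient descent (PGD). For orientation only, a minimal single-image L-infinity PGD loop in PyTorch is sketched below; eps, alpha, and the step count are illustrative defaults, not values from the paper.

```python
# Minimal L-infinity PGD sketch (illustrative hyperparameters).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Return an adversarial example within an eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back onto the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```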
Sun, C.,
Chen, S.,
Cai, J.,
Huang, X.,
Type I Attack For Generative Models,
ICIP20(593-597)
IEEE DOI
2011
Image reconstruction, Decoding,
Aerospace electronics, Generative adversarial networks,
generative models
BibRef
Jia, S.[Shuai],
Ma, C.[Chao],
Song, Y.B.[Yi-Bing],
Yang, X.K.[Xiao-Kang],
Robust Tracking Against Adversarial Attacks,
ECCV20(XIX:69-84).
Springer DOI
2011
BibRef
Yang, C.L.[Cheng-Lin],
Kortylewski, A.[Adam],
Xie, C.[Cihang],
Cao, Y.[Yinzhi],
Yuille, A.L.[Alan L.],
Patchattack: A Black-box Texture-based Attack with Reinforcement
Learning,
ECCV20(XXVI:681-698).
Springer DOI
2011
BibRef
Braunegg, A.,
Chakraborty, A.[Amartya],
Krumdick, M.[Michael],
Lape, N.[Nicole],
Leary, S.[Sara],
Manville, K.[Keith],
Merkhofer, E.[Elizabeth],
Strickhart, L.[Laura],
Walmer, M.[Matthew],
Apricot: A Dataset of Physical Adversarial Attacks on Object Detection,
ECCV20(XXI:35-50).
Springer DOI
2011
BibRef
Zhang, H.[Hu],
Zhu, L.C.[Lin-Chao],
Zhu, Y.[Yi],
Yang, Y.[Yi],
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior,
ECCV20(XX:240-256).
Springer DOI
2011
BibRef
Masi, I.[Iacopo],
Killekar, A.[Aditya],
Mascarenhas, R.M.[Royston Marian],
Gurudatt, S.P.[Shenoy Pratik],
Abd Almageed, W.[Wael],
Two-branch Recurrent Network for Isolating Deepfakes in Videos,
ECCV20(VII:667-684).
Springer DOI
2011
BibRef
Wang, R.[Ren],
Zhang, G.Y.[Gao-Yuan],
Liu, S.J.[Si-Jia],
Chen, P.Y.[Pin-Yu],
Xiong, J.J.[Jin-Jun],
Wang, M.[Meng],
Practical Detection of Trojan Neural Networks:
Data-limited and Data-free Cases,
ECCV20(XXIII:222-238).
Springer DOI
2011
(Or poisoning backdoor attack.) Manipulates the learned network.
BibRef
Mao, C.Z.[Cheng-Zhi],
Gupta, A.[Amogh],
Nitin, V.[Vikram],
Ray, B.[Baishakhi],
Song, S.[Shuran],
Yang, J.F.[Jun-Feng],
Vondrick, C.[Carl],
Multitask Learning Strengthens Adversarial Robustness,
ECCV20(II:158-174).
Springer DOI
2011
BibRef
Li, S.[Shasha],
Zhu, S.T.[Shi-Tong],
Paul, S.[Sudipta],
Roy-Chowdhury, A.[Amit],
Song, C.Y.[Cheng-Yu],
Krishnamurthy, S.[Srikanth],
Swami, A.[Ananthram],
Chan, K.S.[Kevin S.],
Connecting the Dots: Detecting Adversarial Perturbations Using Context
Inconsistency,
ECCV20(XXIII:396-413).
Springer DOI
2011
BibRef
Li, Y.[Yueru],
Cheng, S.Y.[Shu-Yu],
Su, H.[Hang],
Zhu, J.[Jun],
Defense Against Adversarial Attacks via Controlling Gradient Leaking on
Embedded Manifolds,
ECCV20(XXVIII:753-769).
Springer DOI
2011
BibRef
Gao, L.L.[Lian-Li],
Zhang, Q.L.[Qi-Long],
Song, J.K.[Jing-Kuan],
Liu, X.L.[Xiang-Long],
Shen, H.T.[Heng Tao],
Patch-wise Attack for Fooling Deep Neural Network,
ECCV20(XXVIII:307-322).
Springer DOI
2011
BibRef
Andriushchenko, M.[Maksym],
Croce, F.[Francesco],
Flammarion, N.[Nicolas],
Hein, M.[Matthias],
Square Attack: A Query-efficient Black-box Adversarial Attack via
Random Search,
ECCV20(XXIII:484-501).
Springer DOI
2011
BibRef
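
Square Attack (above) is a query-efficient black-box attack driven by random search over localized square perturbations. The toy sketch below keeps only the greedy random-search skeleton under simplifying assumptions (fixed patch size, no schedule); it is not the authors' algorithm.

```python
# Toy score-based random-search attack in the spirit of Square Attack
# (illustrative only: the real attack uses structured color-wise patches
# and a scheduled patch size).
import numpy as np

def random_search_attack(loss_fn, x, eps=8/255, iters=500, rng=None):
    """Greedy random search within an L-infinity ball around x in [0,1]."""
    rng = rng or np.random.default_rng()
    x_adv = x.copy()
    best = loss_fn(x_adv)
    h, w = x.shape[:2]
    s = max(1, int(0.1 * min(h, w)))  # fixed square size (assumption)
    for _ in range(iters):
        y0 = int(rng.integers(0, h - s + 1))
        x0 = int(rng.integers(0, w - s + 1))
        cand = x_adv.copy()
        delta = rng.choice([-eps, eps])
        cand[y0:y0+s, x0:x0+s] = np.clip(
            x[y0:y0+s, x0:x0+s] + delta, 0.0, 1.0)
        val = loss_fn(cand)
        if val > best:  # keep the candidate only if the loss improves
            best, x_adv = val, cand
    return x_adv
```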
Bai, J.W.[Jia-Wang],
Chen, B.[Bin],
Li, Y.M.[Yi-Ming],
Wu, D.X.[Dong-Xian],
Guo, W.W.[Wei-Wei],
Xia, S.T.[Shu-Tao],
Yang, E.H.[En-Hui],
Targeted Attack for Deep Hashing Based Retrieval,
ECCV20(I:618-634).
Springer DOI
2011
BibRef
Nakka, K.K.[Krishna Kanth],
Salzmann, M.[Mathieu],
Indirect Local Attacks for Context-aware Semantic Segmentation Networks,
ECCV20(V:611-628).
Springer DOI
2011
BibRef
Wu, Z.X.[Zu-Xuan],
Lim, S.N.[Ser-Nam],
Davis, L.S.[Larry S.],
Goldstein, T.[Tom],
Making an Invisibility Cloak: Real World Adversarial Attacks on Object
Detectors,
ECCV20(IV:1-17).
Springer DOI
2011
BibRef
Li, Q.Z.[Qi-Zhang],
Guo, Y.W.[Yi-Wen],
Chen, H.[Hao],
Yet Another Intermediate-level Attack,
ECCV20(XVI:241-257).
Springer DOI
2010
BibRef
Rounds, J.[Jeremiah],
Kingsland, A.[Addie],
Henry, M.J.[Michael J.],
Duskin, K.R.[Kayla R.],
Probing for Artifacts: Detecting Imagenet Model Evasions,
AML-CV20(3432-3441)
IEEE DOI
2008
Perturbation methods, Probes, Computational modeling, Robustness,
Image color analysis, Machine learning, Indexes
BibRef
Zhao, S.,
Ma, X.,
Zheng, X.,
Bailey, J.,
Chen, J.,
Jiang, Y.,
Clean-Label Backdoor Attacks on Video Recognition Models,
CVPR20(14431-14440)
IEEE DOI
2008
Training, Data models, Toxicology, Perturbation methods,
Training data, Image resolution, Pipelines
BibRef
Kolouri, S.,
Saha, A.,
Pirsiavash, H.,
Hoffmann, H.,
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs,
CVPR20(298-307)
IEEE DOI
2008
Training, Perturbation methods, Data models,
Computational modeling, Machine learning, Benchmark testing, Computer vision
BibRef
Li, J.,
Ji, R.,
Liu, H.,
Liu, J.,
Zhong, B.,
Deng, C.,
Tian, Q.,
Projection Probability-Driven Black-Box Attack,
CVPR20(359-368)
IEEE DOI
2008
Perturbation methods, Sensors, Optimization, Sparse matrices,
Compressed sensing, Google, Neural networks
BibRef
Huang, L.,
Gao, C.,
Zhou, Y.,
Xie, C.,
Yuille, A.L.,
Zou, C.,
Liu, N.,
Universal Physical Camouflage Attacks on Object Detectors,
CVPR20(717-726)
IEEE DOI
2008
Proposals, Detectors, Semantics, Perturbation methods, Strain,
Optimization
BibRef
Kariyappa, S.,
Qureshi, M.K.,
Defending Against Model Stealing Attacks With Adaptive Misinformation,
CVPR20(767-775)
IEEE DOI
2008
Data models, Adaptation models, Cloning, Predictive models,
Computational modeling, Security, Perturbation methods
BibRef
Yan, B.,
Wang, D.,
Lu, H.,
Yang, X.,
Cooling-Shrinking Attack:
Blinding the Tracker With Imperceptible Noises,
CVPR20(987-996)
IEEE DOI
2008
Target tracking, Generators, Heating systems, Perturbation methods,
Object tracking, Training
BibRef
Li, H.,
Xu, X.,
Zhang, X.,
Yang, S.,
Li, B.,
QEBA: Query-Efficient Boundary-Based Blackbox Attack,
CVPR20(1218-1227)
IEEE DOI
2008
Perturbation methods, Estimation, Predictive models,
Machine learning, Cats, Pipelines, Neural networks
BibRef
Mohapatra, J.,
Weng, T.,
Chen, P.,
Liu, S.,
Daniel, L.,
Towards Verifying Robustness of Neural Networks Against A Family of
Semantic Perturbations,
CVPR20(241-249)
IEEE DOI
2008
Semantics, Perturbation methods, Robustness, Image color analysis,
Brightness, Neural networks, Tools
BibRef
Liu, X.,
Xiao, T.,
Si, S.,
Cao, Q.,
Kumar, S.,
Hsieh, C.,
How Does Noise Help Robustness? Explanation and Exploration under the
Neural SDE Framework,
CVPR20(279-287)
IEEE DOI
2008
Neural networks, Robustness, Stochastic processes, Training,
Random variables, Gaussian noise, Mathematical model
BibRef
Wu, M.,
Kwiatkowska, M.,
Robustness Guarantees for Deep Neural Networks on Videos,
CVPR20(308-317)
IEEE DOI
2008
Robustness, Videos, Optical imaging, Adaptive optics,
Optical sensors, Measurement, Neural networks
BibRef
Chan, A.,
Tay, Y.,
Ong, Y.,
What It Thinks Is Important Is Important: Robustness Transfers
Through Input Gradients,
CVPR20(329-338)
IEEE DOI
2008
Robustness, Task analysis, Training, Computational modeling,
Perturbation methods, Impedance matching, Predictive models
BibRef
Zhang, L.,
Yu, M.,
Chen, T.,
Shi, Z.,
Bao, C.,
Ma, K.,
Auxiliary Training: Towards Accurate and Robust Models,
CVPR20(369-378)
IEEE DOI
2008
Training, Robustness, Perturbation methods, Neural networks,
Data models, Task analysis, Feature extraction
BibRef
Li, M.,
Deng, C.,
Li, T.,
Yan, J.,
Gao, X.,
Huang, H.,
Towards Transferable Targeted Attack,
CVPR20(638-646)
IEEE DOI
2008
Curing, Iterative methods, Extraterrestrial measurements, Entropy,
Perturbation methods, Robustness
BibRef
Baráth, D.,
Noskova, J.,
Ivashechkin, M.,
Matas, J.,
MAGSAC++, a Fast, Reliable and Accurate Robust Estimator,
CVPR20(1301-1309)
IEEE DOI
2008
Robustness, Data models, Estimation, Computer vision, Noise level,
Pattern recognition, Kernel
BibRef
Saha, A.,
Subramanya, A.,
Patil, K.,
Pirsiavash, H.,
Role of Spatial Context in Adversarial Robustness for Object
Detection,
AML-CV20(3403-3412)
IEEE DOI
2008
Detectors, Object detection, Cognition, Training, Blindness,
Perturbation methods, Optimization
BibRef
Truong, L.,
Jones, C.,
Hutchinson, B.,
August, A.,
Praggastis, B.,
Jasper, R.,
Nichols, N.,
Tuor, A.,
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image
Classifiers,
AML-CV20(3422-3431)
IEEE DOI
2008
Data models, Training, Computational modeling, Machine learning,
Training data, Computer vision, Safety
BibRef
Jefferson, B.,
Marrero, C.O.,
Robust Assessment of Real-World Adversarial Examples,
AML-CV20(3442-3449)
IEEE DOI
2008
Cameras, Light emitting diodes, Robustness, Lighting, Detectors,
Testing, Perturbation methods
BibRef
Gupta, S.,
Dube, P.,
Verma, A.,
Improving the affordability of robustness training for DNNs,
AML-CV20(3383-3392)
IEEE DOI
2008
Training, Mathematical model, Computational modeling, Robustness,
Neural networks, Computer architecture, Optimization
BibRef
Zhang, Z.,
Wu, T.,
Learning Ordered Top-k Adversarial Attacks via Adversarial
Distillation,
AML-CV20(3364-3373)
IEEE DOI
2008
Perturbation methods, Robustness, Task analysis, Semantics, Training,
Visualization, Protocols
BibRef
Goel, A.,
Agarwal, A.,
Vatsa, M.,
Singh, R.,
Ratha, N.K.,
DNDNet: Reconfiguring CNN for Adversarial Robustness,
TCV20(103-110)
IEEE DOI
2008
Mathematical model, Perturbation methods, Machine learning,
Computer architecture, Robustness, Computational modeling, Databases
BibRef
Cohen, G.,
Sapiro, G.,
Giryes, R.,
Detecting Adversarial Samples Using Influence Functions and Nearest
Neighbors,
CVPR20(14441-14450)
IEEE DOI
2008
Training, Robustness, Loss measurement, Feature extraction,
Neural networks, Perturbation methods, Training data
BibRef
He, Z.,
Rakin, A.S.,
Li, J.,
Chakrabarti, C.,
Fan, D.,
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack,
CVPR20(14083-14091)
IEEE DOI
2008
Training, Neural networks, Random access memory, Indexes,
Optimization, Degradation, Immune system
BibRef
Dong, X.,
Han, J.,
Chen, D.,
Liu, J.,
Bian, H.,
Ma, Z.,
Li, H.,
Wang, X.,
Zhang, W.,
Yu, N.,
Robust Superpixel-Guided Attentional Adversarial Attack,
CVPR20(12892-12901)
IEEE DOI
2008
Perturbation methods, Robustness, Noise measurement,
Image color analysis, Pipelines, Agriculture
BibRef
Chen, X.,
Yan, X.,
Zheng, F.,
Jiang, Y.,
Xia, S.,
Zhao, Y.,
Ji, R.,
One-Shot Adversarial Attacks on Visual Tracking With Dual Attention,
CVPR20(10173-10182)
IEEE DOI
2008
Target tracking, Task analysis, Visualization,
Perturbation methods, Object tracking, Computer vision, Optimization
BibRef
Zhou, H.,
Chen, D.,
Liao, J.,
Chen, K.,
Dong, X.,
Liu, K.,
Zhang, W.,
Hua, G.,
Yu, N.,
LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack
of Point Cloud Based Deep Networks,
CVPR20(10353-10362)
IEEE DOI
2008
Feature extraction,
Perturbation methods, Decoding, Training, Neural networks, Target recognition
BibRef
Rahmati, A.,
Moosavi-Dezfooli, S.,
Frossard, P.,
Dai, H.,
GeoDA: A Geometric Framework for Black-Box Adversarial Attacks,
CVPR20(8443-8452)
IEEE DOI
2008
Perturbation methods, Estimation, Covariance matrices,
Gaussian distribution, Measurement, Neural networks, Robustness
BibRef
Rahnama, A.,
Nguyen, A.T.,
Raff, E.,
Robust Design of Deep Neural Networks Against Adversarial Attacks
Based on Lyapunov Theory,
CVPR20(8175-8184)
IEEE DOI
2008
Robustness, Nonlinear systems, Training, Control theory,
Stability analysis, Perturbation methods, Transient analysis
BibRef
Zhao, Y.,
Wu, Y.,
Chen, C.,
Lim, A.,
On Isometry Robustness of Deep 3D Point Cloud Models Under
Adversarial Attacks,
CVPR20(1198-1207)
IEEE DOI
2008
Robustness, Data models,
Solid modeling, Computational modeling, Perturbation methods
BibRef
Gowal, S.,
Qin, C.,
Huang, P.,
Cemgil, T.,
Dvijotham, K.,
Mann, T.,
Kohli, P.,
Achieving Robustness in the Wild via Adversarial Mixing With
Disentangled Representations,
CVPR20(1208-1217)
IEEE DOI
2008
Perturbation methods, Robustness, Training, Semantics, Correlation,
Task analysis, Mathematical model
BibRef
Jeddi, A.,
Shafiee, M.J.,
Karg, M.,
Scharfenberger, C.,
Wong, A.,
Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve
Adversarial Robustness,
CVPR20(1238-1247)
IEEE DOI
2008
Perturbation methods, Robustness, Training, Neural networks,
Data models, Uncertainty, Optimization
BibRef
Dabouei, A.,
Soleymani, S.,
Taherkhani, F.,
Dawson, J.,
Nasrabadi, N.M.,
Exploiting Joint Robustness to Adversarial Perturbations,
CVPR20(1119-1128)
IEEE DOI
2008
Robustness, Perturbation methods, Training, Predictive models,
Optimization, Adaptation models
BibRef
Addepalli, S.[Sravanti],
Vivek, B.S.,
Baburaj, A.[Arya],
Sriramanan, G.[Gaurang],
Babu, R.V.[R. Venkatesh],
Towards Achieving Adversarial Robustness by Enforcing Feature
Consistency Across Bit Planes,
CVPR20(1017-1026)
IEEE DOI
2008
Training, Robustness, Quantization (signal), Visual systems,
Perturbation methods, Computer vision, Neural networks
BibRef
Duan, R.,
Ma, X.,
Wang, Y.,
Bailey, J.,
Qin, A.K.,
Yang, Y.,
Adversarial Camouflage: Hiding Physical-World Attacks With Natural
Styles,
CVPR20(997-1005)
IEEE DOI
2008
Perturbation methods, Cameras, Robustness, Feature extraction,
Distortion, Visualization, Measurement
BibRef
Yuan, J.,
He, Z.,
Ensemble Generative Cleaning With Feedback Loops for Defending
Adversarial Attacks,
CVPR20(578-587)
IEEE DOI
2008
Cleaning, Feedback loop, Transforms, Neural networks, Estimation,
Fuses, Iterative methods
BibRef
Guo, M.,
Yang, Y.,
Xu, R.,
Liu, Z.,
Lin, D.,
When NAS Meets Robustness: In Search of Robust Architectures Against
Adversarial Attacks,
CVPR20(628-637)
IEEE DOI
2008
Computer architecture, Robustness, Training, Network architecture,
Neural networks, Convolution, Architecture
BibRef
Borkar, T.,
Heide, F.,
Karam, L.,
Defending Against Universal Attacks Through Selective Feature
Regeneration,
CVPR20(706-716)
IEEE DOI
2008
Perturbation methods, Training, Robustness, Noise reduction,
Image restoration, Computer vision, Transforms
BibRef
Li, G.,
Ding, S.,
Luo, J.,
Liu, C.,
Enhancing Intrinsic Adversarial Robustness via Feature Pyramid
Decoder,
CVPR20(797-805)
IEEE DOI
2008
Noise reduction, Robustness, Training, Image restoration,
Noise measurement, Decoding, Neural networks
BibRef
Chen, T.,
Liu, S.,
Chang, S.,
Cheng, Y.,
Amini, L.,
Wang, Z.,
Adversarial Robustness:
From Self-Supervised Pre-Training to Fine-Tuning,
CVPR20(696-705)
IEEE DOI
2008
Robustness, Task analysis, Training, Standards, Data models,
Computational modeling, Tuning
BibRef
Lee, S.,
Lee, H.,
Yoon, S.,
Adversarial Vertex Mixup: Toward Better Adversarially Robust
Generalization,
CVPR20(269-278)
IEEE DOI
2008
Robustness, Training, Standards, Perturbation methods,
Complexity theory, Upper bound, Data models
BibRef
Dong, Y.,
Fu, Q.,
Yang, X.,
Pang, T.,
Su, H.,
Xiao, Z.,
Zhu, J.,
Benchmarking Adversarial Robustness on Image Classification,
CVPR20(318-328)
IEEE DOI
2008
Robustness, Adaptation models, Training, Predictive models,
Perturbation methods, Data models, Measurement
BibRef
Xiao, C.,
Zheng, C.,
One Man's Trash Is Another Man's Treasure:
Resisting Adversarial Examples by Adversarial Examples,
CVPR20(409-418)
IEEE DOI
2008
Training, Robustness, Perturbation methods, Neural networks,
Transforms, Mathematical model, Numerical models
BibRef
Naseer, M.,
Khan, S.,
Hayat, M.,
Khan, F.S.,
Porikli, F.M.,
A Self-supervised Approach for Adversarial Robustness,
CVPR20(259-268)
IEEE DOI
2008
Perturbation methods, Task analysis, Distortion, Training,
Robustness, Feature extraction, Neural networks
BibRef
Machiraju, H.[Harshitha],
Balasubramanian, V.N.[Vineeth N.],
A Little Fog for a Large Turn,
WACV20(2891-2900)
IEEE DOI
2006
Perturbation methods, Meteorology, Autonomous robots,
Task analysis, Data models, Predictive models, Robustness
BibRef
Zhao, Y.,
Tian, Y.,
Fowlkes, C.,
Shen, W.,
Yuille, A.L.,
Resisting Large Data Variations via Introspective Transformation
Network,
WACV20(3069-3078)
IEEE DOI
2006
Training, Testing, Robustness, Training data,
Linear programming, Resists
BibRef
Kim, D.H.[Dong-Hyun],
Bargal, S.A.[Sarah Adel],
Zhang, J.M.[Jian-Ming],
Sclaroff, S.[Stan],
Multi-way Encoding for Robustness,
WACV20(1341-1349)
IEEE DOI
2006
To counter adversarial attacks.
Encoding, Robustness, Perturbation methods, Training,
Biological system modeling, Neurons, Correlation
BibRef
Folz, J.,
Palacio, S.,
Hees, J.,
Dengel, A.,
Adversarial Defense based on Structure-to-Signal Autoencoders,
WACV20(3568-3577)
IEEE DOI
2006
Perturbation methods, Semantics, Robustness, Predictive models,
Training, Decoding, Neural networks
BibRef
Peterson, J.[Joshua],
Battleday, R.[Ruairidh],
Griffiths, T.[Thomas],
Russakovsky, O.[Olga],
Human Uncertainty Makes Classification More Robust,
ICCV19(9616-9625)
IEEE DOI
2004
CIFAR10H dataset.
To make deep networks robust to adversarial attacks; see the soft-label sketch after this entry.
convolutional neural nets, learning (artificial intelligence),
pattern classification, classification performance,
Dogs
BibRef
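
Peterson et al. (above) train against human label distributions (CIFAR10H) instead of one-hot targets. A generic soft-label cross-entropy of the kind such training relies on is sketched below in PyTorch; this is an assumption-level illustration, not the paper's code.

```python
# Generic soft-label cross-entropy sketch (not from the paper):
# targets are probability vectors, e.g. human label distributions.
import torch

def soft_cross_entropy(logits: torch.Tensor,
                       target_probs: torch.Tensor) -> torch.Tensor:
    log_probs = torch.log_softmax(logits, dim=-1)
    return -(target_probs * log_probs).sum(dim=-1).mean()
```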
Wang, J.,
Zhang, H.,
Bilateral Adversarial Training:
Towards Fast Training of More Robust Models Against Adversarial Attacks,
ICCV19(6628-6637)
IEEE DOI
2004
entropy, learning (artificial intelligence), neural nets,
security of data, adversarial attacks, Data models
BibRef
Ye, S.,
Xu, K.,
Liu, S.,
Cheng, H.,
Lambrechts, J.,
Zhang, H.,
Zhou, A.,
Ma, K.,
Wang, Y.,
Lin, X.,
Adversarial Robustness vs. Model Compression, or Both?,
ICCV19(111-120)
IEEE DOI
2004
minimax techniques, neural nets, security of data,
adversarial attacks, concurrent adversarial training
BibRef
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Fawzi, A.[Alhussein],
Uesato, J.[Jonathan],
Frossard, P.[Pascal],
Robustness via Curvature Regularization, and Vice Versa,
CVPR19(9070-9078).
IEEE DOI
2002
Adversarial training leads to more linear decision boundaries; see the schematic penalty after this entry.
BibRef
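
Schematically, the curvature regularization studied above penalizes how much the loss gradient changes along a probe direction z over a small step h; the form below is a hedged paraphrase of such a penalty, not the paper's verbatim objective.

```latex
% Hedged paraphrase of a curvature-style penalty: lambda is a weight,
% h a small step size, and z a probe direction.
L_{\mathrm{reg}}(x) = L(x)
  + \lambda \left\| \nabla_x L(x + h z) - \nabla_x L(x) \right\|_2^2
```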
Xie, C.[Cihang],
Wu, Y.X.[Yu-Xin],
van der Maaten, L.[Laurens],
Yuille, A.L.[Alan L.],
He, K.M.[Kai-Ming],
Feature Denoising for Improving Adversarial Robustness,
CVPR19(501-509).
IEEE DOI
2002
BibRef
He, Z.[Zhezhi],
Rakin, A.S.[Adnan Siraj],
Fan, D.L.[De-Liang],
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural
Network Robustness Against Adversarial Attack,
CVPR19(588-597).
IEEE DOI
2002
BibRef
Kaneko, T.[Takuhiro],
Harada, T.[Tatsuya],
Noise Robust Generative Adversarial Networks,
CVPR20(8401-8411)
IEEE DOI
2008
Training, Noise measurement, Generators,
Noise robustness, Gaussian noise, Image generation
BibRef
Kaneko, T.[Takuhiro],
Ushiku, Y.[Yoshitaka],
Harada, T.[Tatsuya],
Label-Noise Robust Generative Adversarial Networks,
CVPR19(2462-2471).
IEEE DOI
2002
BibRef
Stutz, D.[David],
Hein, M.[Matthias],
Schiele, B.[Bernt],
Disentangling Adversarial Robustness and Generalization,
CVPR19(6969-6980).
IEEE DOI
2002
BibRef
Miyazato, S.,
Wang, X.,
Yamasaki, T.,
Aizawa, K.,
Reinforcing the Robustness of a Deep Neural Network to Adversarial
Examples by Using Color Quantization of Training Image Data,
ICIP19(884-888)
IEEE DOI
1910
convolutional neural network, adversarial example, color quantization
BibRef
Ramanathan, T.,
Manimaran, A.,
You, S.,
Kuo, C.C.J.,
Robustness of Saak Transform Against Adversarial Attacks,
ICIP19(2531-2535)
IEEE DOI
1910
Saak transform, Adversarial attacks, Deep Neural Networks, Image Classification
BibRef
Yang, C.H.,
Liu, Y.,
Chen, P.,
Ma, X.,
Tsai, Y.J.,
When Causal Intervention Meets Adversarial Examples and Image Masking
for Deep Neural Networks,
ICIP19(3811-3815)
IEEE DOI
1910
Causal Reasoning, Adversarial Example, Adversarial Robustness,
Interpretable Deep Learning, Visual Reasoning
BibRef
Prakash, A.,
Moran, N.,
Garber, S.,
DiLillo, A.,
Storer, J.,
Deflecting Adversarial Attacks with Pixel Deflection,
CVPR18(8571-8580)
IEEE DOI
1812
Perturbation methods, Transforms, Minimization, Robustness,
Noise reduction, Training, Computer vision
BibRef
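
Pixel deflection (above) replaces a random subset of pixels with randomly chosen nearby pixels, followed in the paper by saliency weighting and wavelet denoising. The bare-bones sketch below shows only the deflection step; the window size and deflection count are placeholders.

```python
# Bare-bones pixel deflection sketch (omits the paper's saliency
# weighting and wavelet denoising; parameter values are arbitrary).
import numpy as np

def pixel_deflect(img: np.ndarray, n_deflections: int = 200,
                  window: int = 10, rng=None) -> np.ndarray:
    """Replace random pixels with random pixels from a local window."""
    rng = rng or np.random.default_rng()
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_deflections):
        y, x = int(rng.integers(0, h)), int(rng.integers(0, w))
        dy = int(rng.integers(-window, window + 1))
        dx = int(rng.integers(-window, window + 1))
        yy = int(np.clip(y + dy, 0, h - 1))
        xx = int(np.clip(x + dx, 0, w - 1))
        out[y, x] = img[yy, xx]  # deflect: copy a nearby pixel
    return out
```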
Mummadi, C.K.,
Brox, T.,
Metzen, J.H.,
Defending Against Universal Perturbations With Shared Adversarial
Training,
ICCV19(4927-4936)
IEEE DOI
2004
image classification, image segmentation, neural nets,
universal perturbations, shared adversarial training,
Computational modeling
BibRef
Chen, H.,
Liang, J.,
Chang, S.,
Pan, J.,
Chen, Y.,
Wei, W.,
Juan, D.,
Improving Adversarial Robustness via Guided Complement Entropy,
ICCV19(4880-4888)
IEEE DOI
2004
entropy, learning (artificial intelligence), neural nets,
probability, adversarial defense, adversarial robustness,
BibRef
Bai, Y.,
Feng, Y.,
Wang, Y.,
Dai, T.,
Xia, S.,
Jiang, Y.,
Hilbert-Based Generative Defense for Adversarial Examples,
ICCV19(4783-4792)
IEEE DOI
2004
feature extraction, Hilbert transforms, neural nets,
security of data, scan mode, advanced Hilbert curve scan order
BibRef
Jang, Y.,
Zhao, T.,
Hong, S.,
Lee, H.,
Adversarial Defense via Learning to Generate Diverse Attacks,
ICCV19(2740-2749)
IEEE DOI
2004
learning (artificial intelligence), neural nets,
pattern classification, security of data, adversarial defense, Machine learning
BibRef
Mustafa, A.,
Khan, S.,
Hayat, M.,
Goecke, R.,
Shen, J.,
Shao, L.,
Adversarial Defense by Restricting the Hidden Space of Deep Neural
Networks,
ICCV19(3384-3393)
IEEE DOI
2004
convolutional neural nets, feature extraction,
image classification, image representation, Iterative methods
BibRef
Taran, O.[Olga],
Rezaeifar, S.[Shideh],
Holotyak, T.[Taras],
Voloshynovskiy, S.[Slava],
Defending Against Adversarial Attacks by Randomized Diversification,
CVPR19(11218-11225).
IEEE DOI
2002
BibRef
Sun, B.[Bo],
Tsai, N.H.[Nian-Hsuan],
Liu, F.C.[Fang-Chen],
Yu, R.[Ronald],
Su, H.[Hao],
Adversarial Defense by Stratified Convolutional Sparse Coding,
CVPR19(11439-11448).
IEEE DOI
2002
BibRef
Ho, C.H.[Chih-Hui],
Leung, B.[Brandon],
Sandstrom, E.[Erik],
Chang, Y.[Yen],
Vasconcelos, N.M.[Nuno M.],
Catastrophic Child's Play:
Easy to Perform, Hard to Defend Adversarial Attacks,
CVPR19(9221-9229).
IEEE DOI
2002
BibRef
Dubey, A.[Abhimanyu],
van der Maaten, L.[Laurens],
Yalniz, Z.[Zeki],
Li, Y.[Yixuan],
Mahajan, D.[Dhruv],
Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor
Search,
CVPR19(8759-8768).
IEEE DOI
2002
BibRef
Dong, Y.P.[Yin-Peng],
Pang, T.[Tianyu],
Su, H.[Hang],
Zhu, J.[Jun],
Evading Defenses to Transferable Adversarial Examples by
Translation-Invariant Attacks,
CVPR19(4307-4316).
IEEE DOI
2002
BibRef
Rony, J.[Jerome],
Hafemann, L.G.[Luiz G.],
Oliveira, L.S.[Luiz S.],
Ben Ayed, I.[Ismail],
Sabourin, R.[Robert],
Granger, E.[Eric],
Decoupling Direction and Norm for Efficient Gradient-Based L2
Adversarial Attacks and Defenses,
CVPR19(4317-4325).
IEEE DOI
2002
BibRef
Qiu, Y.X.[Yu-Xian],
Leng, J.W.[Jing-Wen],
Guo, C.[Cong],
Chen, Q.[Quan],
Li, C.[Chao],
Guo, M.[Minyi],
Zhu, Y.H.[Yu-Hao],
Adversarial Defense Through Network Profiling Based Path Extraction,
CVPR19(4772-4781).
IEEE DOI
2002
BibRef
Jia, X.J.[Xiao-Jun],
Wei, X.X.[Xing-Xing],
Cao, X.C.[Xiao-Chun],
Foroosh, H.[Hassan],
ComDefend: An Efficient Image Compression Model to Defend Adversarial
Examples,
CVPR19(6077-6085).
IEEE DOI
2002
BibRef
Raff, E.[Edward],
Sylvester, J.[Jared],
Forsyth, S.[Steven],
McLean, M.[Mark],
Barrage of Random Transforms for Adversarially Robust Defense,
CVPR19(6521-6530).
IEEE DOI
2002
BibRef
Yao, H.,
Regan, M.,
Yang, Y.,
Ren, Y.,
Image Decomposition and Classification Through a Generative Model,
ICIP19(400-404)
IEEE DOI
1910
Generative model, classification, adversarial defense
BibRef
Ji, J.,
Zhong, B.,
Ma, K.,
Multi-Scale Defense of Adversarial Images,
ICIP19(4070-4074)
IEEE DOI
1910
deep learning, adversarial images, defense, multi-scale, image evolution
BibRef
Agarwal, C.,
Nguyen, A.,
Schonfeld, D.,
Improving Robustness to Adversarial Examples by Encouraging
Discriminative Features,
ICIP19(3801-3805)
IEEE DOI
1910
Adversarial Machine Learning, Robustness, Defenses, Deep Learning
BibRef
Taran, O.[Olga],
Rezaeifar, S.[Shideh],
Voloshynovskiy, S.[Slava],
Bridging Machine Learning and Cryptography in Defence Against
Adversarial Attacks,
Objectionable18(II:267-279).
Springer DOI
1905
BibRef
Naseer, M.,
Khan, S.,
Porikli, F.,
Local Gradients Smoothing: Defense Against Localized Adversarial
Attacks,
WACV19(1300-1307)
IEEE DOI
1904
data compression, feature extraction, gradient methods,
image classification, image coding, image representation,
High frequency
BibRef
Akhtar, N.,
Liu, J.,
Mian, A.,
Defense Against Universal Adversarial Perturbations,
CVPR18(3389-3398)
IEEE DOI
1812
Perturbation methods, Training, Computational modeling, Detectors,
Neural networks, Robustness, Integrated circuits
BibRef
Liao, F.,
Liang, M.,
Dong, Y.,
Pang, T.,
Hu, X.,
Zhu, J.,
Defense Against Adversarial Attacks Using High-Level Representation
Guided Denoiser,
CVPR18(1778-1787)
IEEE DOI
1812
Training, Perturbation methods, Noise reduction,
Image reconstruction, Predictive models, Neural networks, Adaptation models
BibRef
Behpour, S.,
Xing, W.,
Ziebart, B.D.,
ARC: Adversarial Robust Cuts for Semi-Supervised and Multi-label
Classification,
WiCV18(1986-19862)
IEEE DOI
1812
Markov random fields, Task analysis, Training, Testing,
Support vector machines, Fasteners, Games
BibRef
Karim, R.,
Islam, M.A.,
Mohammed, N.,
Bruce, N.D.B.,
On the Robustness of Deep Learning Models to Universal Adversarial
Attack,
CRV18(55-62)
IEEE DOI
1812
Perturbation methods, Computational modeling, Neural networks,
Task analysis, Image segmentation, Data models, Semantics,
Semantic Segmentation
BibRef
Jakubovitz, D.[Daniel],
Giryes, R.[Raja],
Improving DNN Robustness to Adversarial Attacks Using Jacobian
Regularization,
ECCV18(XII:525-541).
Springer DOI
1810
BibRef
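
Jacobian regularization (above) penalizes the norm of the network's input-output Jacobian. A cheap one-projection estimator of such a penalty is sketched below in PyTorch; the single random projection is a simplifying assumption, not necessarily the paper's exact estimator.

```python
# Hedged sketch of an input-Jacobian penalty: one random projection v
# gives an unbiased (up to scaling) estimate of the squared Frobenius
# norm of the Jacobian, avoiding a full per-output backward pass.
import torch

def jacobian_penalty(model, x):
    x = x.clone().requires_grad_(True)
    out = model(x)
    v = torch.randn_like(out)
    v = v / v.norm()
    (grad,) = torch.autograd.grad((out * v).sum(), x, create_graph=True)
    return grad.pow(2).sum()  # add lambda * penalty to the training loss
```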
Rozsa, A.,
Gunther, M.,
Boult, T.E.,
Towards Robust Deep Neural Networks with BANG,
WACV18(803-811)
IEEE DOI
1806
image processing, learning (artificial intelligence),
neural nets, BANG technique, adversarial image utilization,
Training
BibRef
Lu, J.,
Issaranon, T.,
Forsyth, D.A.,
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly,
ICCV17(446-454)
IEEE DOI
1802
image colour analysis, image reconstruction,
learning (artificial intelligence), neural nets,
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
VAE, Variational Autoencoder.