14.5.10.10.3 Countering Adversarial Attacks, Defense

Chapter Contents
Adversarial Networks. Attacks. Defense. Defence. GAN. Generative Networks. A subset:
See also Countering Adversarial Attacks, Robustness. More for the attack itself:
See also Adversarial Attacks.
See also Adversarial Training for Defense.
See also Adversarial Patch Attacks, Spatial Context, Defense. Noise for attack:
See also Noise in Adversarial Attacks, Removing, Detection, Use.
See also Backdoor Attacks.
See also Adversarial Networks, Adversarial Inputs, Generative Adversarial.
See also Black-Box Attacks, Robustness.

Li, X.R.[Xu-Rong], Ji, S.L.[Shou-Ling], Ji, J.T.[Jun-Tao], Ren, Z.Y.[Zhen-Yu], Wu, C.M.[Chun-Ming], Li, B.[Bo], Wang, T.[Ting],
Adversarial examples detection through the sensitivity in space mappings,
IET-CV(14), No. 5, August 2020, pp. 201-213.
DOI Link 2007
BibRef

Li, H., Zeng, Y., Li, G., Lin, L., Yu, Y.,
Online Alternate Generator Against Adversarial Attacks,
IP(29), 2020, pp. 9305-9315.
IEEE DOI 2010
Generators, Training, Perturbation methods, Knowledge engineering, Convolutional neural networks, Deep learning, image classification BibRef

Yüce, G.[Gizem], Ortiz-Jiménez, G.[Guillermo], Besbinar, B.[Beril], Frossard, P.[Pascal],
A Structured Dictionary Perspective on Implicit Neural Representations,
CVPR22(19206-19216)
IEEE DOI 2210
Deep learning, Dictionaries, Data visualization, Power system harmonics, Harmonic analysis, Self- semi- meta- unsupervised learning BibRef

Ma, X.J.[Xing-Jun], Niu, Y.H.[Yu-Hao], Gu, L.[Lin], Wang, Y.S.[Yi-Sen], Zhao, Y.T.[Yi-Tian], Bailey, J.[James], Lu, F.[Feng],
Understanding adversarial attacks on deep learning based medical image analysis systems,
PR(110), 2021, pp. 107332.
Elsevier DOI 2011
Adversarial attack, Adversarial example detection, Medical image analysis, Deep learning BibRef

Zhou, M.[Mo], Niu, Z.X.[Zhen-Xing], Wang, L.[Le], Zhang, Q.L.[Qi-Lin], Hua, G.[Gang],
Adversarial Ranking Attack and Defense,
ECCV20(XIV:781-799).
Springer DOI 2011
BibRef

Agarwal, A.[Akshay], Vatsa, M.[Mayank], Singh, R.[Richa], Ratha, N.[Nalini],
Cognitive data augmentation for adversarial defense via pixel masking,
PRL(146), 2021, pp. 244-251.
Elsevier DOI 2105
Adversarial attacks, Deep learning, Data augmentation BibRef

Agarwal, A.[Akshay], Ratha, N.[Nalini], Vatsa, M.[Mayank], Singh, R.[Richa],
Exploring Robustness Connection between Artificial and Natural Adversarial Examples,
ArtOfRobust22(178-185)
IEEE DOI 2210
Deep learning, Neural networks, Semantics, Transformers, Robustness, Convolutional neural networks BibRef

Zhang, S.D.[Shu-Dong], Gao, H.[Haichang], Rao, Q.X.[Qing-Xun],
Defense Against Adversarial Attacks by Reconstructing Images,
IP(30), 2021, pp. 6117-6129.
IEEE DOI 2107
Perturbation methods, Image reconstruction, Training, Iterative methods, Computational modeling, Predictive models, perceptual loss BibRef

Khodabakhsh, A.[Ali], Akhtar, Z.[Zahid],
Unknown presentation attack detection against rational attackers,
IET-Bio(10), No. 5, 2021, pp. 460-479.
DOI Link 2109
BibRef

Xu, Y.H.[Yong-Hao], Du, B.[Bo], Zhang, L.P.[Liang-Pei],
Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification,
IP(30), 2021, pp. 8671-8685.
IEEE DOI 2110
Deep learning, Training, Hyperspectral imaging, Feature extraction, Task analysis, Perturbation methods, Predictive models, deep learning BibRef

Dai, T.[Tao], Feng, Y.[Yan], Chen, B.[Bin], Lu, J.[Jian], Xia, S.T.[Shu-Tao],
Deep image prior based defense against adversarial examples,
PR(122), 2022, pp. 108249.
Elsevier DOI 2112
Deep neural network, Adversarial example, Image prior, Defense BibRef

Wang, J.W.[Jin-Wei], Zhao, J.J.[Jun-Jie], Yin, Q.L.[Qi-Lin], Luo, X.Y.[Xiang-Yang], Zheng, Y.H.[Yu-Hui], Shi, Y.Q.[Yun-Qing], Jha, S.I.K.[Sun-Il Kr.],
SmsNet: A New Deep Convolutional Neural Network Model for Adversarial Example Detection,
MultMed(24), 2022, pp. 230-244.
IEEE DOI 2202
Feature extraction, Training, Manuals, Perturbation methods, Information science, Principal component analysis, SmsConnection BibRef

Liang, Q.[Qi], Li, Q.[Qiang], Nie, W.Z.[Wei-Zhi],
LD-GAN: Learning perturbations for adversarial defense based on GAN structure,
SP:IC(103), 2022, pp. 116659.
Elsevier DOI 2203
Adversarial attacks, Adversarial defense, Adversarial robustness, Image classification BibRef

Shao, R.[Rui], Perera, P.[Pramuditha], Yuen, P.C.[Pong C.], Patel, V.M.[Vishal M.],
Open-Set Adversarial Defense with Clean-Adversarial Mutual Learning,
IJCV(130), No. 1, January 2022, pp. 1070-1087.
Springer DOI 2204
BibRef
Earlier:
Open-set Adversarial Defense,
ECCV20(XVII:682-698).
Springer DOI 2011
BibRef

Subramanyam, A.V.,
Sinkhorn Adversarial Attack and Defense,
IP(31), 2022, pp. 4039-4049.
IEEE DOI 2206
Iterative methods, Training, Perturbation methods, Loss measurement, Standards, Robustness, Linear programming, adversarial attack and defense BibRef

Wang, K.[Kun], Liu, M.Z.[Mao-Zhen],
YOLO-Anti: YOLO-based counterattack model for unseen congested object detection,
PR(131), 2022, pp. 108814.
Elsevier DOI 2208
Deep learning, Congested and occluded objects, Object detection BibRef

Xue, W.[Wei], Chen, Z.M.[Zhi-Ming], Tian, W.W.[Wei-Wei], Wu, Y.H.[Yun-Hua], Hua, B.[Bing],
A Cascade Defense Method for Multidomain Adversarial Attacks under Remote Sensing Detection,
RS(14), No. 15, 2022, pp. xx-yy.
DOI Link 2208
BibRef

Rakin, A.S.[Adnan Siraj], He, Z.[Zhezhi], Li, J.T.[Jing-Tao], Yao, F.[Fan], Chakrabarti, C.[Chaitali], Fan, D.L.[De-Liang],
T-BFA: Targeted Bit-Flip Adversarial Weight Attack,
PAMI(44), No. 11, November 2022, pp. 7928-7939.
IEEE DOI 2210
BibRef
Earlier: A2, A1, A3, A5, A6, Only:
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack,
CVPR20(14083-14091)
IEEE DOI 2008
Computational modeling, Random access memory, Computer security, Training, Quantization (signal), Data models, Memory management, bit-flip. Neural networks, Random access memory, Indexes, Optimization, Degradation, Immune system BibRef

Melacci, S.[Stefano], Ciravegna, G.[Gabriele], Sotgiu, A.[Angelo], Demontis, A.[Ambra], Biggio, B.[Battista], Gori, M.[Marco], Roli, F.[Fabio],
Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers,
PAMI(44), No. 12, December 2022, pp. 9944-9959.
IEEE DOI 2212
Training, Training data, Robustness, Task analysis, Adversarial machine learning, Ink, Semisupervised learning, multi-label classification BibRef

Rathore, H.[Hemant], Sasan, A.[Animesh], Sahay, S.K.[Sanjay K.], Sewak, M.[Mohit],
Defending malware detection models against evasion based adversarial attacks,
PRL(164), 2022, pp. 119-125.
Elsevier DOI 2212
Adversarial robustness, Deep neural network, Evasion attack, Malware analysis and detection, Machine learning BibRef

Niu, Z.H.[Zhong-Han], Yang, Y.B.[Yu-Bin],
Defense Against Adversarial Attacks with Efficient Frequency-Adaptive Compression and Reconstruction,
PR(138), 2023, pp. 109382.
Elsevier DOI 2303
Deep neural networks, Adversarial defense, Adversarial robustness, Closed-set attack, Open-set attack BibRef

Brau, F.[Fabio], Rossolini, G.[Giulio], Biondi, A.[Alessandro], Buttazzo, G.[Giorgio],
On the Minimal Adversarial Perturbation for Deep Neural Networks With Provable Estimation Error,
PAMI(45), No. 4, April 2023, pp. 5038-5052.
IEEE DOI 2303
Perturbation methods, Robustness, Estimation, Neural networks, Deep learning, Error analysis, Computational modeling, verification methods BibRef

Quan, C.[Chen], Sriranga, N.[Nandan], Yang, H.D.[Hao-Dong], Han, Y.H.S.[Yung-Hsiang S.], Geng, B.C.[Bao-Cheng], Varshney, P.K.[Pramod K.],
Efficient Ordered-Transmission Based Distributed Detection Under Data Falsification Attacks,
SPLetters(30), 2023, pp. 145-149.
IEEE DOI 2303
Energy efficiency, Wireless sensor networks, Upper bound, Optimization, Distributed databases, Simulation, distributed detection BibRef

Naseer, M.[Muzammal], Khan, S.[Salman], Hayat, M.[Munawar], Khan, F.S.[Fahad Shahbaz], Porikli, F.M.[Fatih M.],
Stylized Adversarial Defense,
PAMI(45), No. 5, May 2023, pp. 6403-6414.
IEEE DOI 2304
Training, Perturbation methods, Robustness, Multitasking, Predictive models, Computational modeling, Visualization, multi-task objective BibRef

Xu, Q.Q.[Qian-Qian], Yang, Z.Y.[Zhi-Yong], Zhao, Y.R.[Yun-Rui], Cao, X.C.[Xiao-Chun], Huang, Q.M.[Qing-Ming],
Rethinking Label Flipping Attack: From Sample Masking to Sample Thresholding,
PAMI(45), No. 6, June 2023, pp. 7668-7685.
IEEE DOI 2305
Data models, Training data, Training, Deep learning, Predictive models, Testing, Optimization, Label flipping attack, machine learning BibRef

Zago, J.G.[João G.], Antonelo, E.A.[Eric A.], Baldissera, F.L.[Fabio L.], Saad, R.T.[Rodrigo T.],
Benford's law: What does it say on adversarial images?,
JVCIR(93), 2023, pp. 103818.
Elsevier DOI 2305
Benford's law, Adversarial attacks, Convolutional neural networks, Adversarial detection BibRef

Zhang, Y.X.[Yu-Xuan], Meng, H.[Hua], Cao, X.M.[Xue-Mei], Zhou, Z.C.[Zheng-Chun], Yang, M.[Mei], Adhikary, A.R.[Avik Ranjan],
Interpreting vulnerabilities of multi-instance learning to adversarial perturbations,
PR(142), 2023, pp. 109725.
Elsevier DOI 2307
Customized perturbation, Multi-instance learning, Universal perturbation, Vulnerability BibRef

Lee, H.[Hakmin], Ro, Y.M.[Yong Man],
Adversarial anchor-guided feature refinement for adversarial defense,
IVC(136), 2023, pp. 104722.
Elsevier DOI 2308
Adversarial example, Adversarial robustness, Adversarial anchor, Covariate shift, Feature refinement BibRef

Gao, W.[Wei], Zhang, X.[Xu], Guo, S.[Shangwei], Zhang, T.W.[Tian-Wei], Xiang, T.[Tao], Qiu, H.[Han], Wen, Y.G.[Yong-Gang], Liu, Y.[Yang],
Automatic Transformation Search Against Deep Leakage From Gradients,
PAMI(45), No. 9, September 2023, pp. 10650-10668.
IEEE DOI 2309
Collaborative learning; dealing with attacks that reveal shared data.
BibRef

Huang, L.F.[Li-Feng], Gao, C.Y.[Cheng-Ying], Liu, N.[Ning],
Erosion Attack: Harnessing Corruption To Improve Adversarial Examples,
IP(32), 2023, pp. 4828-4841.
IEEE DOI Code:
WWW Link. 2310
BibRef

Yang, S.R.[Suo-Rong], Li, J.Q.[Jin-Qiao], Zhang, T.Y.[Tian-Yue], Zhao, J.[Jian], Shen, F.[Furao],
AdvMask: A sparse adversarial attack-based data augmentation method for image classification,
PR(144), 2023, pp. 109847.
Elsevier DOI 2310
Data augmentation, Image classification, Sparse adversarial attack, Generalization BibRef

Ding, F.[Feng], Shen, Z.Y.[Zhang-Yi], Zhu, G.P.[Guo-Pu], Kwong, S.[Sam], Zhou, Y.C.[Yi-Cong], Lyu, S.W.[Si-Wei],
ExS-GAN: Synthesizing Anti-Forensics Images via Extra Supervised GAN,
Cyber(53), No. 11, November 2023, pp. 7162-7173.
IEEE DOI 2310
BibRef

Shi, C.[Cheng], Liu, Y.[Ying], Zhao, M.H.[Ming-Hua], Pun, C.M.[Chi-Man], Miao, Q.G.[Qi-Guang],
Attack-invariant attention feature for adversarial defense in hyperspectral image classification,
PR(145), 2024, pp. 109955.
Elsevier DOI Code:
WWW Link. 2311
Hyperspectral image classification, Adversarial defense, Attack-invariant attention feature, Adversarial attack BibRef

Liu, D.[Deyin], Wu, L.Y.B.[Lin Yuan-Bo], Li, B.[Bo], Boussaid, F.[Farid], Bennamoun, M.[Mohammed], Xie, X.H.[Xiang-Hua], Liang, C.W.[Cheng-Wu],
Jacobian norm with Selective Input Gradient Regularization for interpretable adversarial defense,
PR(145), 2024, pp. 109902.
Elsevier DOI Code:
WWW Link. 2311
Selective input gradient regularization, Jacobian normalization, Adversarial robustness BibRef

Liu, H.[Hui], Zhao, B.[Bo], Guo, J.[Jiabao], Zhang, K.[Kehuan], Liu, P.[Peng],
A lightweight unsupervised adversarial detector based on autoencoder and isolation forest,
PR(147), 2024, pp. 110127.
Elsevier DOI 2312
Deep neural networks, Adversarial examples, Adversarial detection, Autoencoder, Isolation forest BibRef

Zhang, X.X.[Xing-Xing], Gui, S.[Shupeng], Jin, J.[Jian], Zhu, Z.F.[Zhen-Feng], Zhao, Y.[Yao],
ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries,
MultMed(26), 2024, pp. 15-27.
IEEE DOI 2401
BibRef

Wang, D.H.[Dong-Hua], Yao, W.[Wen], Jiang, T.S.[Ting-Song], Chen, X.Q.[Xiao-Qian],
AdvOps: Decoupling adversarial examples,
PR(149), 2024, pp. 110252.
Elsevier DOI 2403
Adversarial attack, Analysis of adversarial examples, Analysis of neural network BibRef

Wang, W.D.[Wei-Dong], Li, Z.[Zhi], Liu, S.[Shuaiwei], Zhang, L.[Li], Yang, J.[Jin], Wang, Y.[Yi],
Feature decoupling and interaction network for defending against adversarial examples,
IVC(144), 2024, pp. 104931.
Elsevier DOI 2404
Deep neural networks, Adversarial examples, Adversarial defense, Feature decoupling-interaction BibRef

Zhao, C.L.[Cheng-Long], Mei, S.B.[Shi-Bin], Ni, B.B.[Bing-Bing], Yuan, S.C.[Sheng-Chao], Yu, Z.B.[Zhen-Bo], Wang, J.[Jun],
Variational Adversarial Defense: A Bayes Perspective for Adversarial Training,
PAMI(46), No. 5, May 2024, pp. 3047-3063.
IEEE DOI 2404
Training, Training data, Data models, Complexity theory, Robustness, Perturbation methods, Optimization, Variational inference, model robustness BibRef

Yao, Q.S.[Qing-Song], He, Z.C.[Ze-Cheng], Li, Y.X.[Yue-Xiang], Lin, Y.[Yi], Ma, K.[Kai], Zheng, Y.F.[Ye-Feng], Zhou, S.K.[S. Kevin],
Adversarial Medical Image With Hierarchical Feature Hiding,
MedImg(43), No. 4, April 2024, pp. 1296-1307.
IEEE DOI 2404
Medical diagnostic imaging, Hybrid fiber coaxial cables, Perturbation methods, Iterative methods, Feature extraction, adversarial attacks and defense BibRef

He, S.Y.[Shi-Yuan], Wei, J.[Jiwei], Zhang, C.N.[Chao-Ning], Xu, X.[Xing], Song, J.K.[Jing-Kuan], Yang, Y.[Yang], Shen, H.T.[Heng Tao],
Boosting Adversarial Training with Hardness-Guided Attack Strategy,
MultMed(26), 2024, pp. 7748-7760.
IEEE DOI 2405
Training, Robustness, Data models, Perturbation methods, Adaptation models, Standards, Predictive models, model robustness BibRef

Liu, A.[Aishan], Tang, S.Y.[Shi-Yu], Chen, X.Y.[Xin-Yun], Huang, L.[Lei], Qin, H.T.[Hao-Tong], Liu, X.L.[Xiang-Long], Tao, D.C.[Da-Cheng],
Towards Defending Multiple-Norm Bounded Adversarial Perturbations via Gated Batch Normalization,
IJCV(132), No. 6, June 2024, pp. 1881-1898.
Springer DOI 2406
BibRef

Zhou, M.[Mo], Wang, L.[Le], Niu, Z.X.[Zhen-Xing], Zhang, Q.[Qilin], Zheng, N.N.[Nan-Ning], Hua, G.[Gang],
Adversarial Attack and Defense in Deep Ranking,
PAMI(46), No. 8, August 2024, pp. 5306-5324.
IEEE DOI 2407
Robustness, Perturbation methods, Glass box, Training, Face recognition, Adaptation models, Task analysis, ranking model robustness BibRef

Zhu, R.[Rui], Ma, S.P.[Shi-Ping], He, L.Y.[Lin-Yuan], Ge, W.[Wei],
FFA: Foreground Feature Approximation Digitally against Remote Sensing Object Detection,
RS(16), No. 17, 2024, pp. 3194.
DOI Link 2409
BibRef

Liu, Y.J.[Yu-Jia], Yang, C.X.[Chen-Xi], Li, D.Q.[Ding-Quan], Ding, J.H.[Jian-Hao], Jiang, T.T.[Ting-Ting],
Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization,
CVPR24(25554-25563)
IEEE DOI 2410
Image quality, Training, Performance evaluation, Perturbation methods, Computational modeling, Predictive models, adversarial defense method BibRef

Zhang, L.[Lilin], Yang, N.[Ning], Sun, Y.C.[Yan-Chao], Yu, P.S.[Philip S.],
Provable Unrestricted Adversarial Training Without Compromise With Generalizability,
PAMI(46), No. 12, December 2024, pp. 8302-8319.
IEEE DOI 2411
Robustness, Training, Standards, Perturbation methods, Stars, Optimization, Adversarial robustness, standard generalizability BibRef

Sun, X.X.[Xu-Xiang], Cheng, G.[Gong], Li, H.[Hongda], Peng, H.Y.[Hong-Yu], Han, J.W.[Jun-Wei],
Task-Specific Importance-Awareness Matters: On Targeted Attacks Against Object Detection,
CirSysVideo(34), No. 11, November 2024, pp. 11619-11629.
IEEE DOI 2412
Object detection, Task analysis, Detectors, Optimization, Remote sensing, Optical imaging, Image recognition, task-specific importance-aware attack BibRef

Wang, C.[Chao], Qi, S.[Shuren], Huang, Z.Q.[Zhi-Qiu], Zhang, Y.S.[Yu-Shu], Lan, R.[Rushi], Cao, X.C.[Xiao-Chun], Fan, F.L.[Feng-Lei],
Spatial-Frequency Discriminability for Revealing Adversarial Perturbations,
CirSysVideo(34), No. 12, December 2024, pp. 12608-12623.
IEEE DOI 2501
Detectors, Perturbation methods, Feature extraction, Accuracy, Training, Robustness, Artificial neural networks, spatial-frequency BibRef

Kakizaki, K.[Kazuya], Fukuchi, K.[Kazuto], Sakuma, J.[Jun],
Deterministic and Probabilistic Certified Defenses for Content-Based Image Retrieval,
IEICE(E108-D), No. 1, January 2025, pp. 92-109.
WWW Link. 2502
BibRef
Earlier:
Certified Defense for Content Based Image Retrieval,
WACV23(4550-4559)
IEEE DOI 2302
Training, Deep learning, Image retrieval, Neural networks, Linear programming, Feature extraction, visual reasoning BibRef

Rahman, M.A., Tunny, S.S.[Salma Sultana], Kayes, A.S.M., Cheng, P.[Peng], Huq, A.[Aminul], Rana, M.S., Islam, M.R.[Md. Rashidul], Tusher, A.S.[Animesh Sarkar],
Approximation-based energy-efficient cyber-secured image classification framework,
SP:IC(133), 2025, pp. 117261.
Elsevier DOI 2502
Image classification, Image approximation, Memory efficiency, Adversarial attacks, Cybersecurity BibRef

Song, H.X.[Hao-Xian], Wang, Z.[Zichi], Zhang, X.P.[Xin-Peng],
Defending Against Adversarial Attack Through Generative Adversarial Networks,
SPLetters(32), 2025, pp. 1730-1734.
IEEE DOI 2505
Perturbation methods, Generators, Training, Error analysis, Generative adversarial networks, Deep learning, Data models, image identification BibRef

Peng, X.[Xiyu], Zhou, J.Y.[Jing-Yi], Wu, X.F.[Xiao-Feng],
Distillation-Based Cross-Model Transferable Adversarial Attack for Remote Sensing Image Classification,
RS(17), No. 10, 2025, pp. 1700.
DOI Link 2505
BibRef

Peng, X.[Xiong], Liu, F.[Feng], Wang, N.N.[Nan-Nan], Lan, L.[Long], Liu, T.L.[Tong-Liang], Cheung, Y.M.[Yiu-Ming], Han, B.[Bo],
Unknown-Aware Bilateral Dependency Optimization for Defending Against Model Inversion Attacks,
PAMI(47), No. 8, August 2025, pp. 6382-6395.
IEEE DOI 2507
Data models, Training, Privacy, Optimization, Security, Feature extraction, Robustness, Face recognition, Data privacy, out-of-distribution detection BibRef

Kong, X.Y.[Xiang-Yin], Jiang, X.Y.[Xiao-Yu], Song, Z.H.[Zhi-Huan], Ge, Z.Q.[Zhi-Qiang],
Data ID Extraction Networks for Unsupervised Class- and Classifier-Free Detection of Adversarial Examples,
PAMI(47), No. 9, September 2025, pp. 7428-7442.
IEEE DOI 2508
Role of structural information in detecting adversarial examples. Detectors, Transformers, Training, Data mining, Image reconstruction, Feature extraction, Object detection, reconstruction model BibRef

Li, C.[Chaobo], Li, H.J.[Hong-Jun], Zhang, G.[Guoan],
Detecting Adversarial Attacks Based on Tracking Differences in Frequency Bands,
MultMed(27), 2025, pp. 4597-4612.
IEEE DOI 2509
Videos, Perturbation methods, Target tracking, Visualization, Discrete cosine transforms, Training, Real-time systems, Mirrors, visual object tracking BibRef

Cui, J.H.[Jia-Hao], Cao, H.[Hang], Meng, L.Q.[Ling-Quan], Guo, W.[Wang], Zhang, K.[Keyi], Wang, Q.[Qi], Chang, C.[Cheng], Li, H.F.[Hai-Feng],
CAGMC-Defence: A Cross-Attention-Guided Multimodal Collaborative Defence Method for Multimodal Remote Sensing Image Target Recognition,
RS(17), No. 19, 2025, pp. 3300.
DOI Link 2510
BibRef

Liu, J.L.[Jun-Lin], Lyu, X.C.[Xin-Chen], Ren, C.[Chenshan], Cui, Q.[Qimei],
Crafting More Transferable Adversarial Examples via Quality-Aware Transformation Combination,
MultMed(27), 2025, pp. 7917-7929.
IEEE DOI 2510
Robustness, Training, Probability distribution, Diversity reception, Splines (mathematics), Perturbation methods, adversarial transferability BibRef


Li, Z.[Zhikai], Liu, X.W.[Xue-Wen], Fu, D.R.J.[Dong-Rong Joe], Li, J.Q.[Jian-Quan], Gu, Q.Y.[Qing-Yi], Keutzer, K.[Kurt], Dong, Z.[Zhen],
K-Sort Arena: Efficient and Reliable Benchmarking for Generative Models via K-wise Human Preferences,
CVPR25(9131-9141)
IEEE DOI 2508
Visualization, Noise, Text to image, Benchmark testing, Probabilistic logic, Robustness, Bayes methods, Text to video, Convergence BibRef

Qian, G.C.[Guo-Cheng], Wang, K.C.[Kuan-Chieh], Patashnik, O.[Or], Heravi, N.[Negin], Ostashev, D.[Daniil], Tulyakov, S.[Sergey], Cohen-Or, D.[Daniel], Aberman, K.[Kfir],
Omni-ID: Holistic Identity Representation Designed for Generative Tasks,
CVPR25(8786-8795)
IEEE DOI 2508
Training, Visualization, Technological innovation, Face recognition, Semantics, Lighting, Robustness, Skin, Decoding, personalized generation BibRef

Kamberov, G.[George],
Doppelgängers and Adversarial Vulnerability,
CVPR25(10244-10254)
IEEE DOI 2508
Measurement, Visualization, Accuracy, Machine learning, Robustness, Entropy, adversarial doppelganagers, fooling rate BibRef

Medi, T.[Tejaswini], Jung, S.[Steffen], Keuper, M.[Margret],
FAIR-TAT: Improving Model Fairness Using Targeted Adversarial Training,
WACV25(7827-7836)
IEEE DOI 2505
Training, Analytical models, Accuracy, Computational modeling, Buildings, Artificial neural networks, Robustness, Data models, adversarial training BibRef

Sistla, M.[Manojna], Wen, Y.[Yu], Shah, A.B.[Aamir Bader], Huang, C.[Chenpei], Wang, L.[Lening], Wu, X.[Xuqing], Chen, J.[Jiefu], Pan, M.[Miao], Fu, X.[Xin],
Bit-Flip Induced Latency Attacks in Object Detection,
WACV25(6709-6718)
IEEE DOI 2505
Deep learning, Accuracy, Computational modeling, System performance, Surveillance, Object detection, bit-flips BibRef

Pal, D.[Debasmita], Sony, R.[Redwan], Ross, A.[Arun],
A Parametric Approach to Adversarial Augmentation for Cross-Domain Iris Presentation Attack Detection,
WACV25(5719-5729)
IEEE DOI Code:
WWW Link. 2505
Training, Translation, Databases, Instruments, Refining, Data augmentation, Sensors, Iris recognition, Testing, Lenses, cross-domain generalization BibRef

Serez, D.[Dario], Cristani, M.[Marco], Bue, A.D.[Alessio Del], Murino, V.[Vittorio], Morerio, P.[Pietro],
Pre-trained Multiple Latent Variable Generative Models are Good Defenders Against Adversarial Attacks,
WACV25(6506-6516)
IEEE DOI Code:
WWW Link. 2505
Codes, Purification, Foundation models, Computational modeling, Noise, Data preprocessing, Autoencoders, Image filtering, Generators, variational autoencoder BibRef

Hung, L.Y.[Li-Ying], Ku, C.C.Y.[Cooper Cheng-Yuan],
Knockoff Branch: Model Stealing Attack via Adding Neurons in the Pre-Trained Model,
WACV25(7062-7070)
IEEE DOI Code:
WWW Link. 2505
Analytical models, Accuracy, Computational modeling, Neurons, Containers, Transformers, Feature extraction, Overfitting, Glass box BibRef

Kulkarni, A.[Akshay], Weng, T.W.[Tsui-Wei],
Interpretability-guided Test-time Adversarial Defense,
ECCV24(XXXVII: 466-483).
Springer DOI 2412
BibRef

Chen, F.Y.[Fei-Yu], Lin, W.[Wei], Liu, Z.Q.[Zi-Quan], Chan, A.B.[Antoni B.],
A Secure Image Watermarking Framework with Statistical Guarantees via Adversarial Attacks on Secret Key Networks,
ECCV24(XL: 428-445).
Springer DOI 2412
BibRef

Qiu, Y.X.[Yi-Xiang], Fang, H.[Hao], Yu, H.Y.[Hong-Yao], Chen, B.[Bin], Qiu, M.[MeiKang], Xia, S.T.[Shu-Tao],
A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks,
ECCV24(XXXII: 109-126).
Springer DOI 2412
BibRef

Tang, K.[Keke], Huang, L.[Lujie], Peng, W.L.[Wei-Long], Liu, D.[Daizong], Wang, X.F.[Xiao-Fei], Ma, Y.[Yang], Liu, L.G.[Li-Gang], Tian, Z.H.[Zhi-Hong],
Flat: Flux-aware Imperceptible Adversarial Attacks on 3d Point Clouds,
ECCV24(VI: 198-215).
Springer DOI 2412
BibRef

Li, Y.[Yi], Angelov, P.[Plamen], Suri, N.[Neeraj],
Self-supervised Representation Learning for Adversarial Attack Detection,
ECCV24(LX: 236-252).
Springer DOI 2412
BibRef

Li, S.X.[Shao-Xin], Liao, X.F.[Xiao-Feng], Che, X.[Xin], Li, X.T.[Xin-Tong], Zhang, Y.[Yong], Chu, L.Y.[Ling-Yang],
Cocktail Universal Adversarial Attack on Deep Neural Networks,
ECCV24(LXV: 396-412).
Springer DOI 2412
BibRef

Hwang, J.[Jaehui], Han, D.Y.[Dong-Yoon], Heo, B.[Byeongho], Park, S.[Song], Chun, S.[Sanghyuk], Lee, J.S.[Jong-Seok],
Similarity of Neural Architectures Using Adversarial Attack Transferability,
ECCV24(LXVIII: 106-126).
Springer DOI 2412
BibRef

Le, B.M.[Binh M.], Tariq, S.[Shahroz], Woo, S.S.[Simon S.],
Bridging Optimal Transport and Jacobian Regularization by Optimal Trajectory for Enhanced Adversarial Defense,
ACCV24(VII: 109-127).
Springer DOI 2412
BibRef

Hao, K.J.[Koh Jun], Ho, S.T.[Sy-Tuyen], Nguyen, N.B.[Ngoc-Bao], Cheung, N.M.[Ngai-Man],
On the Vulnerability of Skip Connections to Model Inversion Attacks,
ECCV24(LXXXI: 140-157).
Springer DOI 2412
BibRef

Katzav, R.[Roye], Giloni, A.[Amit], Grolman, E.[Edita], Saito, H.[Hiroo], Shibata, T.[Tomoyuki], Omino, T.[Tsukasa], Komatsu, M.[Misaki], Hanatani, Y.[Yoshikazu], Elovici, Y.[Yuval], Shabtai, A.[Asaf],
Adversarialeak: External Information Leakage Attack Using Adversarial Samples on Face Recognition Systems,
ECCV24(LXXV: 288-303).
Springer DOI 2412
BibRef

Chen, E.C.[Erh-Chung], Chen, P.Y.[Pin-Yu], Chung, I.H.[I-Hsin], Lee, C.R.[Che-Rung],
Latency Attack Resilience in Object Detectors: Insights from Computing Architecture,
ACCV24(VIII: 229-245).
Springer DOI 2412
BibRef

Fang, H.[Hao], Kong, J.W.[Jia-Wei], Chen, B.[Bin], Dai, T.[Tao], Wu, H.[Hao], Xia, S.T.[Shu-Tao],
CLIP-guided Generative Networks for Transferable Targeted Adversarial Attacks,
ECCV24(XXVIII: 1-19).
Springer DOI 2412
BibRef

Hsu, C.C.[Chih-Chung], Wu, M.H.[Ming-Hsuan], Liu, E.C.[En-Chao],
LFGN: Low-Level Feature-Guided Network for Adversarial Defense,
ICIP24(563-567)
IEEE DOI 2411
Training, Deep learning, Computational modeling, Pipelines, Noise, Transforms, Artificial neural networks, Adversarial defense, security BibRef

Niu, Y.[Yue], Ali, R.E.[Ramy E.], Prakash, S.[Saurav], Avestimehr, S.[Salman],
All Rivers Run to the Sea: Private Learning with Asymmetric Flows,
CVPR24(12353-12362)
IEEE DOI 2410
Training, Privacy, Quantization (signal), Accuracy, Computational modeling, Machine learning, Complexity theory, Distributed Machine Learning BibRef

Hong, S.H.[Sang-Hwa],
Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities,
AML24(2957-2966)
IEEE DOI 2410
Resistance, Schedules, Image synthesis, Computational modeling, Face recognition, Reinforcement learning, Adversarial Attack BibRef

Mumcu, F.[Furkan], Yilmaz, Y.[Yasin],
Multimodal Attack Detection for Action Recognition Models,
AML24(2967-2976)
IEEE DOI 2410
Target recognition, Graphics processing units, Detectors, Real-time systems, Robustness, Action Recognition Models BibRef

Wang, Y.T.[Yan-Ting], Fu, H.Y.[Hong-Ye], Zou, W.[Wei], Jia, J.Y.[Jin-Yuan],
MMCert: Provable Defense Against Adversarial Attacks to Multi-Modal Models,
CVPR24(24655-24664)
IEEE DOI 2410
Solid modeling, Image segmentation, Emotion recognition, Perturbation methods, Computational modeling, Roads, multi-modal BibRef

Wang, K.Y.[Kun-Yu], He, X.R.[Xuan-Ran], Wang, W.X.[Wen-Xuan], Wang, X.S.[Xiao-Sen],
Boosting Adversarial Transferability by Block Shuffle and Rotation,
CVPR24(24336-24346)
IEEE DOI Code:
WWW Link. 2410
Heating systems, Deep learning, Limiting, Codes, Perturbation methods, Computational modeling, adversarial attack, BibRef

Zheng, J.H.[Jun-Hao], Lin, C.H.[Chen-Hao], Sun, J.H.[Jia-Hao], Zhao, Z.Y.[Zheng-Yu], Li, Q.[Qian], Shen, C.[Chao],
Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving,
CVPR24(24452-24461)
IEEE DOI Code:
WWW Link. 2410
Solid modeling, Rain, Shape, Computational modeling, Estimation, Robustness, Monocular Depth Estimation, Autonomous Driving, Adversarial Attack BibRef

Tao, Y.[Yunbo], Liu, D.Z.[Dai-Zong], Zhou, P.[Pan], Xie, Y.[Yulai], Du, W.[Wei], Hu, W.[Wei],
3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D Point Cloud Attack,
ICCV23(14294-14304)
IEEE DOI 2401
BibRef

Ruan, S.W.[Shou-Wei], Dong, Y.P.[Yin-Peng], Su, H.[Hang], Peng, J.T.[Jian-Teng], Chen, N.[Ning], Wei, X.X.[Xing-Xing],
Towards Viewpoint-Invariant Visual Recognition via Adversarial Training,
ICCV23(4686-4696)
IEEE DOI 2401
BibRef

Lee, B.K.[Byung-Kwan], Kim, J.[Junho], Ro, Y.M.[Yong Man],
Mitigating Adversarial Vulnerability through Causal Parameter Estimation by Adversarial Double Machine Learning,
ICCV23(4476-4486)
IEEE DOI 2401
BibRef

Fang, H.[Han], Zhang, J.[Jiyi], Qiu, Y.P.[Yu-Peng], Liu, J.Y.[Jia-Yang], Xu, K.[Ke], Fang, C.F.[Cheng-Fang], Chang, E.C.[Ee-Chien],
Tracing the Origin of Adversarial Attack for Forensic Investigation and Deterrence,
ICCV23(4312-4321)
IEEE DOI 2401
BibRef

Zhu, P.[Peifei], Osada, G.[Genki], Kataoka, H.[Hirokatsu], Takahashi, T.[Tsubasa],
Frequency-aware GAN for Adversarial Manipulation Generation,
ICCV23(4292-4301)
IEEE DOI 2401
BibRef

Frosio, I.[Iuri], Kautz, J.[Jan],
The Best Defense is a Good Offense: Adversarial Augmentation Against Adversarial Attacks,
CVPR23(4067-4076)
IEEE DOI 2309
BibRef

Silva, H.P.[Hondamunige Prasanna], Seidenari, L.[Lorenzo], del Bimbo, A.[Alberto],
Diffdefense: Defending Against Adversarial Attacks via Diffusion Models,
CIAP23(II:430-442).
Springer DOI 2312
BibRef

di Domenico, N.[Nicolò], Borghi, G.[Guido], Franco, A.[Annalisa], Maltoni, D.[Davide],
Combining Identity Features and Artifact Analysis for Differential Morphing Attack Detection,
CIAP23(I:100-111).
Springer DOI 2312
BibRef

Tapia, J.[Juan], Busch, C.[Christoph],
Impact of Synthetic Images on Morphing Attack Detection Using a Siamese Network,
CIARP23(I:343-357).
Springer DOI 2312
BibRef

Zeng, H.[Hui], Chen, B.W.[Bi-Wei], Deng, K.[Kang], Peng, A.J.[An-Jie],
Adversarial Example Detection Bayesian Game,
ICIP23(1710-1714)
IEEE DOI Code:
WWW Link. 2312
BibRef

Zhang, J.F.[Jie-Fei], Wang, J.[Jie], Lyu, W.L.[Wan-Li], Yin, Z.X.[Zhao-Xia],
Local Texture Complexity Guided Adversarial Attack,
ICIP23(2065-2069)
IEEE DOI 2312
BibRef

Nguyen, N.B.[Ngoc-Bao], Chandrasegaran, K.[Keshigeyan], Abdollahzadeh, M.[Milad], Cheung, N.M.[Ngai-Man],
Re-Thinking Model Inversion Attacks Against Deep Neural Networks,
CVPR23(16384-16393)
IEEE DOI 2309
BibRef

Tan, C.C.[Chuang-Chuang], Zhao, Y.[Yao], Wei, S.[Shikui], Gu, G.H.[Guang-Hua], Wei, Y.C.[Yun-Chao],
Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection,
CVPR23(12105-12114)
IEEE DOI 2309
BibRef

Bai, Q.Y.[Qing-Yan], Yang, C.[Ceyuan], Xu, Y.H.[Ying-Hao], Liu, X.H.[Xi-Hui], Yang, Y.[Yujiu], Shen, Y.J.[Yu-Jun],
GLeaD: Improving GANs with A Generator-Leading Task,
CVPR23(12094-12104)
IEEE DOI 2309
BibRef

Jamil, H.[Huma], Liu, Y.J.[Ya-Jing], Caglar, T.[Turgay], Cole, C.[Christina], Blanchard, N.[Nathaniel], Peterson, C.[Christopher], Kirby, M.[Michael],
Hamming Similarity and Graph Laplacians for Class Partitioning and Adversarial Image Detection,
TAG-PRA23(590-599)
IEEE DOI 2309
BibRef

Li, S.[Simin], Zhang, S.[Shuning], Chen, G.[Gujun], Wang, D.[Dong], Feng, P.[Pu], Wang, J.[Jiakai], Liu, A.[Aishan], Yi, X.[Xin], Liu, X.L.[Xiang-Long],
Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks,
CVPR23(12324-12333)
IEEE DOI 2309
BibRef

Godfrey, C.[Charles], Kvinge, H.[Henry], Bishoff, E.[Elise], Mckay, M.[Myles], Brown, D.[Davis], Doster, T.[Tim], Byler, E.[Eleanor],
How many dimensions are required to find an adversarial example?,
AML23(2353-2360)
IEEE DOI 2309
BibRef

Chen, Y.W.[Yu-Wei], Chu, S.Y.[Shi-Yong],
Adversarial Defense in Aerial Detection,
AML23(2306-2313)
IEEE DOI 2309
BibRef

Zhou, Q.G.[Qing-Guo], Lei, M.[Ming], Zhi, P.[Peng], Zhao, R.[Rui], Shen, J.[Jun], Yong, B.B.[Bin-Bin],
Towards Improving the Anti-Attack Capability of the Rangenet++,
ACCVWS22(60-70).
Springer DOI 2307
BibRef

Zhao, Z.Y.[Zheng-Yu], Dang, N.[Nga], Larson, M.[Martha],
The Importance of Image Interpretation: Patterns of Semantic Misclassification in Real-world Adversarial Images,
MMMod23(II: 718-725).
Springer DOI 2304
BibRef

Dargaud, L.[Laurine], Ibsen, M.[Mathias], Tapia, J.[Juan], Busch, C.[Christoph],
A Principal Component Analysis-Based Approach for Single Morphing Attack Detection,
Explain-Bio23(683-692)
IEEE DOI 2302
Training, Learning systems, Visualization, Image color analysis, Feature extraction, Human in the loop, Detection algorithms BibRef

Drenkow, N.[Nathan], Lennon, M.[Max], Wang, I.J.[I-Jeng], Burlina, P.[Philippe],
Do Adaptive Active Attacks Pose Greater Risk Than Static Attacks?,
WACV23(1380-1389)
IEEE DOI 2302
Measurement, Sensitivity analysis, Aggregates, Kinematics, Observers, Trajectory, Algorithms: Adversarial learning, visual reasoning BibRef

Chen, Y.K.[Yong-Kang], Zhang, M.[Ming], Li, J.[Jin], Kuang, X.H.[Xiao-Hui],
Adversarial Attacks and Defenses in Image Classification: A Practical Perspective,
ICIVC22(424-430)
IEEE DOI 2301
Training, Deep learning, Benchmark testing, Security, Image classification, deep learning, security, defenses BibRef

Hwang, D.[Duhun], Lee, E.[Eunjung], Rhee, W.[Wonjong],
AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense,
ICPR22(2401-2407)
IEEE DOI 2212
Training, Codes, Purification, Boosting, Robustness BibRef

Tasaki, H.[Hajime], Kaneko, Y.[Yuji], Chao, J.H.[Jin-Hui],
Curse of co-Dimensionality: Explaining Adversarial Examples by Embedding Geometry of Data Manifold,
ICPR22(2364-2370)
IEEE DOI 2212
Manifolds, Geometry, Training, Deep learning, Neural networks, Training data BibRef

Khalsi, R.[Rania], Smati, I.[Imen], Sallami, M.M.[Mallek Mziou], Ghorbel, F.[Faouzi],
A Novel System for Deep Contour Classifiers Certification Under Filtering Attacks,
ICIP22(3561-3565)
IEEE DOI 2211
Deep learning, Upper bound, Image recognition, Filtering, Perturbation methods, Robustness, Kernel, Contours classification, Uncertainty in AI BibRef

Zhang, Y.X.[Yu-Xuan], Dong, B.[Bo], Heide, F.[Felix],
All You Need Is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines,
ECCV22(XIX:323-343).
Springer DOI 2211
BibRef

Lu, B.[Bingyi], Liu, J.Y.[Ji-Yuan], Xiong, H.L.[Hui-Lin],
Transformation-Based Adversarial Defense Via Sparse Representation,
ICIP22(1726-1730)
IEEE DOI 2211
Bridges, Training, Deep learning, Dictionaries, Perturbation methods, Neural networks, adversarial examples, adversarial defense, image classification BibRef

Subramanyam, A.V., Raj, A.[Abhigyan],
Barycentric Defense,
ICIP22(2276-2280)
IEEE DOI 2211
Training, Codes, Extraterrestrial measurements, Robustness, Barycenter, Dual Wasserstein, Adversarial defense BibRef

Kowalski, C.[Charles], Famili, A.[Azadeh], Lao, Y.J.[Ying-Jie],
Towards Model Quantization on the Resilience Against Membership Inference Attacks,
ICIP22(3646-3650)
IEEE DOI 2211
Resistance, Performance evaluation, Privacy, Quantization (signal), Computational modeling, Neural networks, Training data, Neural Network BibRef

Nayak, G.K.[Gaurav Kumar], Rawal, R.[Ruchit], Lal, R.[Rohit], Patil, H.[Himanshu], Chakraborty, A.[Anirban],
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems,
HCIS22(4331-4340)
IEEE DOI 2210
Measurement, Training, Knowledge engineering, Predictive models, Reliability engineering BibRef

Chen, Y.W.[Yu-Wei],
Rethinking Adversarial Examples in Wargames,
ArtOfRobust22(100-106)
IEEE DOI 2210
Neural networks, Decision making, Games, Prediction algorithms, Software, Security BibRef

Haque, M.[Mirazul], Budnik, C.J.[Christof J.], Yang, W.[Wei],
CorrGAN: Input Transformation Technique Against Natural Corruptions,
ArtOfRobust22(193-196)
IEEE DOI 2210
Deep learning, Perturbation methods, Neural networks, Generative adversarial networks BibRef

Ren, S.C.[Su-Cheng], Gao, Z.Q.[Zheng-Qi], Hua, T.Y.[Tian-Yu], Xue, Z.H.[Zi-Hui], Tian, Y.L.[Yong-Long], He, S.F.[Sheng-Feng], Zhao, H.[Hang],
Co-advise: Cross Inductive Bias Distillation,
CVPR22(16752-16761)
IEEE DOI 2210
Training, Representation learning, Convolutional codes, Convolution, Transformers, Adversarial attack and defense BibRef

Pang, T.Y.[Tian-Yu], Zhang, H.[Huishuai], He, D.[Di], Dong, Y.P.[Yin-Peng], Su, H.[Hang], Chen, W.[Wei], Zhu, J.[Jun], Liu, T.Y.[Tie-Yan],
Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart,
CVPR22(15202-15212)
IEEE DOI 2210
Measurement, Training, Couplings, Machine learning, Predictive models, Robustness, Adversarial attack and defense, Machine learning BibRef

Vellaichamy, S.[Sivapriya], Hull, M.[Matthew], Wang, Z.J.J.[Zi-Jie J.], Das, N.[Nilaksh], Peng, S.Y.[Sheng-Yun], Park, H.[Haekyu], Chau, D.H.P.[Duen Horng Polo],
DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors,
CVPR22(21452-21459)
IEEE DOI 2210
Visualization, Head, Detectors, Object detection, Feature extraction, Magnetic heads, Behavioral sciences BibRef

Dong, J.H.[Jun-Hao], Wang, Y.[Yuan], Lai, J.H.[Jian-Huang], Xie, X.H.[Xiao-Hua],
Improving Adversarially Robust Few-shot Image Classification with Generalizable Representations,
CVPR22(9015-9024)
IEEE DOI 2210
Training, Deep learning, Image recognition, Benchmark testing, Task analysis, Adversarial attack and defense BibRef

Chen, T.L.[Tian-Long], Zhang, Z.Y.[Zhen-Yu], Zhang, Y.H.[Yi-Hua], Chang, S.Y.[Shi-Yu], Liu, S.[Sijia], Wang, Z.Y.[Zhang-Yang],
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free,
CVPR22(588-599)
IEEE DOI 2210
Training, Deep learning, Neural networks, Training data, Network architecture, Adversarial attack and defense BibRef

Yin, M.J.[Ming-Jun], Li, S.[Shasha], Cai, Z.[Zikui], Song, C.Y.[Cheng-Yu], Asif, M.S.[M. Salman], Roy-Chowdhury, A.K.[Amit K.], Krishnamurthy, S.V.[Srikanth V.],
Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes,
ICCV21(7838-7847)
IEEE DOI 2203
Deep learning, Machine vision, Computational modeling, Neural networks, Detectors, Context modeling, Adversarial learning, Scene analysis and understanding BibRef

Abusnaina, A.[Ahmed], Wu, Y.H.[Yu-Hang], Arora, S.[Sunpreet], Wang, Y.Z.[Yi-Zhen], Wang, F.[Fei], Yang, H.[Hao], Mohaisen, D.[David],
Adversarial Example Detection Using Latent Neighborhood Graph,
ICCV21(7667-7676)
IEEE DOI 2203
Training, Manifolds, Deep learning, Network topology, Perturbation methods, Neural networks, Adversarial learning, Recognition and classification BibRef

Mao, C.Z.[Cheng-Zhi], Chiquier, M.[Mia], Wang, H.[Hao], Yang, J.F.[Jun-Feng], Vondrick, C.[Carl],
Adversarial Attacks are Reversible with Natural Supervision,
ICCV21(641-651)
IEEE DOI 2203
Training, Benchmark testing, Robustness, Inference algorithms, Image restoration, Recognition and classification, Adversarial learning BibRef

Zhao, X.J.[Xue-Jun], Zhang, W.C.[Wen-Can], Xiao, X.K.[Xiao-Kui], Lim, B.[Brian],
Exploiting Explanations for Model Inversion Attacks,
ICCV21(662-672)
IEEE DOI 2203
Privacy, Semantics, Data visualization, Medical services, Predictive models, Data models, Artificial intelligence, Recognition and classification BibRef

Wang, Q.[Qian], Kurz, D.[Daniel],
Reconstructing Training Data from Diverse ML Models by Ensemble Inversion,
WACV22(3870-3878)
IEEE DOI 2202
Training, Analytical models, Filtering, Training data, Machine learning, Predictive models, Security/Surveillance BibRef

Tursynbek, N.[Nurislam], Petiushko, A.[Aleksandr], Oseledets, I.[Ivan],
Geometry-Inspired Top-k Adversarial Perturbations,
WACV22(4059-4068)
IEEE DOI 2202
Perturbation methods, Prediction algorithms, Multitasking, Classification algorithms, Task analysis, Adversarial Attack and Defense Methods BibRef

Nayak, G.K.[Gaurav Kumar], Rawal, R.[Ruchit], Chakraborty, A.[Anirban],
DAD: Data-free Adversarial Defense at Test Time,
WACV22(3788-3797)
IEEE DOI 2202
Training, Adaptation models, Biological system modeling, Frequency-domain analysis, Training data, Adversarial Attack and Defense Methods BibRef

Scheliga, D.[Daniel], Mäder, P.[Patrick], Seeland, M.[Marco],
PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage,
WACV22(3605-3614)
IEEE DOI 2202
Training, Privacy, Data privacy, Perturbation methods, Computational modeling, Training data, Stochastic processes, Deep Learning Gradient Inversion Attacks BibRef

Drenkow, N.[Nathan], Fendley, N.[Neil], Burlina, P.[Philippe],
Attack Agnostic Detection of Adversarial Examples via Random Subspace Analysis,
WACV22(2815-2825)
IEEE DOI 2202
Training, Performance evaluation, Perturbation methods, Training data, Detectors, Feature extraction, Security/Surveillance BibRef

Cheng, H.[Hao], Xu, K.D.[Kai-Di], Li, Z.G.[Zhen-Gang], Zhao, P.[Pu], Wang, C.[Chenan], Lin, X.[Xue], Kailkhura, B.[Bhavya], Goldhahn, R.[Ryan],
More or Less (MoL): Defending against Multiple Perturbation Attacks on Deep Neural Networks through Model Ensemble and Compression,
Hazards22(645-655)
IEEE DOI 2202
Training, Deep learning, Perturbation methods, Computational modeling, Conferences, Neural networks BibRef

Lang, I.[Itai], Kotlicki, U.[Uriel], Avidan, S.[Shai],
Geometric Adversarial Attacks and Defenses on 3D Point Clouds,
3DV21(1196-1205)
IEEE DOI 2201
Point cloud compression, Geometry, Deep learning, Solid modeling, Shape, Semantics, 3D Point Clouds, Geometry Processing, Defense Methods BibRef

Wang, Y.P.[Yao-Peng], Xie, L.[Lehui], Liu, X.M.[Xi-Meng], Yin, J.L.[Jia-Li], Zheng, T.J.[Ting-Jie],
Model-Agnostic Adversarial Example Detection Through Logit Distribution Learning,
ICIP21(3617-3621)
IEEE DOI 2201
Deep learning, Resistance, Semantics, Feature extraction, Task analysis, deep learning, adversarial detector, adversarial defenses BibRef

Chai, W.H.[Wei-Heng], Lu, Y.T.[Yan-Tao], Velipasalar, S.[Senem],
Weighted Average Precision: Adversarial Example Detection for Visual Perception of Autonomous Vehicles,
ICIP21(804-808)
IEEE DOI 2201
Measurement, Perturbation methods, Image processing, Pipelines, Neural networks, Optimization methods, Object detection, Neural Networks BibRef

Kung, B.H.[Bo-Han], Chen, P.C.[Pin-Chun], Liu, Y.C.[Yu-Cheng], Chen, J.C.[Jun-Cheng],
Squeeze and Reconstruct: Improved Practical Adversarial Defense Using Paired Image Compression and Reconstruction,
ICIP21(849-853)
IEEE DOI 2201
Training, Deep learning, Image coding, Perturbation methods, Transform coding, Robustness, Adversarial Attack, JPEG Compression, Artifact Correction BibRef

Li, C.Y.[Chau Yi], Sánchez-Matilla, R.[Ricardo], Shamsabadi, A.S.[Ali Shahin], Mazzon, R.[Riccardo], Cavallaro, A.[Andrea],
On the Reversibility of Adversarial Attacks,
ICIP21(3073-3077)
IEEE DOI 2201
Deep learning, Perturbation methods, Image processing, Benchmark testing, Adversarial perturbations, Reversibility BibRef

Bakiskan, C.[Can], Cekic, M.[Metehan], Sezer, A.D.[Ahmet Dundar], Madhow, U.[Upamanyu],
A Neuro-Inspired Autoencoding Defense Against Adversarial Attacks,
ICIP21(3922-3926)
IEEE DOI 2201
Training, Deep learning, Image coding, Perturbation methods, Neural networks, Decoding, Adversarial, Machine learning, Robust, Defense BibRef

Truong, J.B.[Jean-Baptiste], Maini, P.[Pratyush], Walls, R.J.[Robert J.], Papernot, N.[Nicolas],
Data-Free Model Extraction,
CVPR21(4769-4778)
IEEE DOI 2111
Adaptation models, Computational modeling, Intellectual property, Predictive models, Data models, Complexity theory BibRef

Deng, Z.J.[Zhi-Jie], Yang, X.[Xiao], Xu, S.Z.[Shi-Zhen], Su, H.[Hang], Zhu, J.[Jun],
LiBRe: A Practical Bayesian Approach to Adversarial Detection,
CVPR21(972-982)
IEEE DOI 2111
Training, Deep learning, Costs, Uncertainty, Neural networks, Bayes methods BibRef

Yang, K.[Karren], Lin, W.Y.[Wan-Yi], Barman, M.[Manash], Condessa, F.[Filipe], Kolter, Z.[Zico],
Defending Multimodal Fusion Models against Single-Source Adversaries,
CVPR21(3339-3348)
IEEE DOI 2111
Training, Sentiment analysis, Perturbation methods, Neural networks, Object detection, Robustness BibRef

Ong, D.S.[Ding Sheng], Chan, C.S.[Chee Seng], Ng, K.W.[Kam Woh], Fan, L.X.[Li-Xin], Yang, Q.[Qiang],
Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attacks,
CVPR21(3629-3638)
IEEE DOI 2111
Deep learning, Knowledge engineering, Image synthesis, Superresolution, Intellectual property, Watermarking BibRef

Pestana, C.[Camilo], Liu, W.[Wei], Glance, D.[David], Mian, A.[Ajmal],
Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty,
WACV21(556-565)
IEEE DOI 2106
Measurement, Deep learning, Machine learning algorithms, Image recognition BibRef

Kyatham, V.[Vinay], Mishra, D.[Deepak], Prathosh, A.P.,
Variational Inference with Latent Space Quantization for Adversarial Resilience,
ICPR21(9593-9600)
IEEE DOI 2105
Manifolds, Degradation, Quantization (signal), Perturbation methods, Neural networks, Data models, Real-time systems BibRef

Li, H.L.[Hong-Lin], Fan, Y.F.[Yi-Fei], Ganz, F.[Frieder], Yezzi, A.J.[Anthony J.], Barnaghi, P.[Payam],
Verifying the Causes of Adversarial Examples,
ICPR21(6750-6757)
IEEE DOI 2105
Geometry, Perturbation methods, Neural networks, Linearity, Estimation, Aerospace electronics, Probabilistic logic BibRef

Huang, Y.T.[Yen-Ting], Liao, W.H.[Wen-Hung], Huang, C.W.[Chen-Wei],
Defense Mechanism Against Adversarial Attacks Using Density-based Representation of Images,
ICPR21(3499-3504)
IEEE DOI 2105
Deep learning, Perturbation methods, Transforms, Hybrid power systems, Intelligent systems BibRef

Chhabra, S.[Saheb], Agarwal, A.[Akshay], Singh, R.[Richa], Vatsa, M.[Mayank],
Attack Agnostic Adversarial Defense via Visual Imperceptible Bound,
ICPR21(5302-5309)
IEEE DOI 2105
Visualization, Sensitivity, Databases, Computational modeling, Perturbation methods, Predictive models, Prediction algorithms BibRef

Watson, M.[Matthew], Moubayed, N.A.[Noura Al],
Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning,
ICPR21(8180-8187)
IEEE DOI 2105
Training, Deep learning, Perturbation methods, MIMICs, Medical services, Predictive models, Feature extraction, Medical Data BibRef

Carrara, F.[Fabio], Caldelli, R.[Roberto], Falchi, F.[Fabrizio], Amato, G.[Giuseppe],
Defending Neural ODE Image Classifiers from Adversarial Attacks with Tolerance Randomization,
MMForWild20(425-438).
Springer DOI 2103
BibRef

Li, Y.W.[Ying-Wei], Bai, S.[Song], Xie, C.H.[Ci-Hang], Liao, Z.Y.[Zhen-Yu], Shen, X.H.[Xiao-Hui], Yuille, A.L.[Alan L.],
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses,
ECCV20(XI:795-813).
Springer DOI 2011
BibRef

Xu, J., Li, Y., Jiang, Y., Xia, S.T.,
Adversarial Defense Via Local Flatness Regularization,
ICIP20(2196-2200)
IEEE DOI 2011
Training, Standards, Perturbation methods, Robustness, Visualization, Linearity, Taylor series, adversarial defense, gradient-based regularization BibRef

Maung, M., Pyone, A., Kiya, H.,
Encryption Inspired Adversarial Defense For Visual Classification,
ICIP20(1681-1685)
IEEE DOI 2011
Training, Transforms, Encryption, Perturbation methods, Machine learning, Adversarial defense, perceptual image encryption BibRef

Shah, S.A.A., Bougre, M., Akhtar, N., Bennamoun, M., Zhang, L.,
Efficient Detection of Pixel-Level Adversarial Attacks,
ICIP20(718-722)
IEEE DOI 2011
Robots, Training, Perturbation methods, Machine learning, Robustness, Task analysis, Testing, Adversarial attack, perturbation detection, deep learning BibRef

Mao, C.Z.[Cheng-Zhi], Cha, A.[Augustine], Gupta, A.[Amogh], Wang, H.[Hao], Yang, J.F.[Jun-Feng], Vondrick, C.[Carl],
Generative Interventions for Causal Learning,
CVPR21(3946-3955)
IEEE DOI 2111
Training, Visualization, Correlation, Computational modeling, Control systems BibRef

Li, S.S.[Sha-Sha], Zhu, S.T.[Shi-Tong], Paul, S.[Sudipta], Roy-Chowdhury, A.K.[Amit K.], Song, C.Y.[Cheng-Yu], Krishnamurthy, S.[Srikanth], Swami, A.[Ananthram], Chan, K.S.[Kevin S.],
Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency,
ECCV20(XXIII:396-413).
Springer DOI 2011
BibRef

Li, Y.[Yueru], Cheng, S.Y.[Shu-Yu], Su, H.[Hang], Zhu, J.[Jun],
Defense Against Adversarial Attacks via Controlling Gradient Leaking on Embedded Manifolds,
ECCV20(XXVIII:753-769).
Springer DOI 2011
BibRef

Rounds, J.[Jeremiah], Kingsland, A.[Addie], Henry, M.J.[Michael J.], Duskin, K.R.[Kayla R.],
Probing for Artifacts: Detecting Imagenet Model Evasions,
AML-CV20(3432-3441)
IEEE DOI 2008
Perturbation methods, Probes, Computational modeling, Robustness, Image color analysis, Machine learning, Indexes BibRef

Kariyappa, S., Qureshi, M.K.,
Defending Against Model Stealing Attacks With Adaptive Misinformation,
CVPR20(767-775)
IEEE DOI 2008
Data models, Adaptation models, Cloning, Predictive models, Computational modeling, Security, Perturbation methods BibRef

Cohen, G., Sapiro, G., Giryes, R.,
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors,
CVPR20(14441-14450)
IEEE DOI 2008
Training, Robustness, Loss measurement, Feature extraction, Neural networks, Perturbation methods, Training data BibRef

Yuan, J., He, Z.,
Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks,
CVPR20(578-587)
IEEE DOI 2008
Cleaning, Feedback loop, Transforms, Neural networks, Estimation, Fuses, Iterative methods BibRef

Xiao, C., Zheng, C.,
One Man's Trash Is Another Man's Treasure: Resisting Adversarial Examples by Adversarial Examples,
CVPR20(409-418)
IEEE DOI 2008
Training, Robustness, Perturbation methods, Neural networks, Transforms, Mathematical model, Numerical models BibRef

Zhao, Y., Tian, Y., Fowlkes, C., Shen, W., Yuille, A.L.,
Resisting Large Data Variations via Introspective Transformation Network,
WACV20(3069-3078)
IEEE DOI 2006
Training, Testing, Robustness, Training data, Linear programming, Resists BibRef

Folz, J., Palacio, S., Hees, J., Dengel, A.,
Adversarial Defense based on Structure-to-Signal Autoencoders,
WACV20(3568-3577)
IEEE DOI 2006
Perturbation methods, Semantics, Robustness, Predictive models, Training, Decoding, Neural networks BibRef

Zheng, S., Zhu, Z., Zhang, X., Liu, Z., Cheng, J., Zhao, Y.,
Distribution-Induced Bidirectional Generative Adversarial Network for Graph Representation Learning,
CVPR20(7222-7231)
IEEE DOI 2008
Generative adversarial networks, Robustness, Data models, Generators, Task analysis, Gaussian distribution BibRef

Benz, P.[Philipp], Zhang, C.N.[Chao-Ning], Imtiaz, T.[Tooba], Kweon, I.S.[In So],
Double Targeted Universal Adversarial Perturbations,
ACCV20(IV:284-300).
Springer DOI 2103
BibRef
Earlier: A2, A1, A3, A4:
Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations,
CVPR20(14509-14518)
IEEE DOI 2008
Perturbation methods, Correlation, Training data, Feature extraction, Training, Task analysis, Robustness BibRef

Xie, C., Tan, M., Gong, B., Wang, J., Yuille, A.L., Le, Q.V.,
Adversarial Examples Improve Image Recognition,
CVPR20(816-825)
IEEE DOI 2008
Training, Robustness, Degradation, Image recognition, Perturbation methods, Standards, Supervised learning BibRef

Dabouei, A., Soleymani, S., Taherkhani, F., Dawson, J., Nasrabadi, N.M.,
SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations,
WACV20(2654-2663)
IEEE DOI 2006
Perturbation methods, Frequency-domain analysis, Robustness, Training, Optimization, Network architecture, Topology BibRef

Bai, Y., Feng, Y., Wang, Y., Dai, T., Xia, S., Jiang, Y.,
Hilbert-Based Generative Defense for Adversarial Examples,
ICCV19(4783-4792)
IEEE DOI 2004
feature extraction, Hilbert transforms, neural nets, security of data, scan mode, advanced Hilbert curve scan order BibRef

Jang, Y., Zhao, T., Hong, S., Lee, H.,
Adversarial Defense via Learning to Generate Diverse Attacks,
ICCV19(2740-2749)
IEEE DOI 2004
learning (artificial intelligence), neural nets, pattern classification, security of data, adversarial defense, Machine learning BibRef

Mustafa, A., Khan, S., Hayat, M., Goecke, R., Shen, J., Shao, L.,
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks,
ICCV19(3384-3393)
IEEE DOI 2004
convolutional neural nets, feature extraction, image classification, image representation, Iterative methods BibRef

Taran, O.[Olga], Rezaeifar, S.[Shideh], Holotyak, T.[Taras], Voloshynovskiy, S.[Slava],
Defending Against Adversarial Attacks by Randomized Diversification,
CVPR19(11218-11225).
IEEE DOI 2002
BibRef

Sun, B.[Bo], Tsai, N.H.[Nian-Hsuan], Liu, F.C.[Fang-Chen], Yu, R.[Ronald], Su, H.[Hao],
Adversarial Defense by Stratified Convolutional Sparse Coding,
CVPR19(11439-11448).
IEEE DOI 2002
BibRef

Ho, C.H.[Chih-Hui], Leung, B.[Brandon], Sandstrom, E.[Erik], Chang, Y.[Yen], Vasconcelos, N.M.[Nuno M.],
Catastrophic Child's Play: Easy to Perform, Hard to Defend Adversarial Attacks,
CVPR19(9221-9229).
IEEE DOI 2002
BibRef

Dubey, A.[Abhimanyu], van der Maaten, L.[Laurens], Yalniz, Z.[Zeki], Li, Y.X.[Yi-Xuan], Mahajan, D.[Dhruv],
Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor Search,
CVPR19(8759-8768).
IEEE DOI 2002
BibRef

Dong, Y.P.[Yin-Peng], Pang, T.Y.[Tian-Yu], Su, H.[Hang], Zhu, J.[Jun],
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks,
CVPR19(4307-4316).
IEEE DOI 2002
BibRef

Rony, J.[Jerome], Hafemann, L.G.[Luiz G.], Oliveira, L.S.[Luiz S.], Ben Ayed, I.[Ismail], Sabourin, R.[Robert], Granger, E.[Eric],
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses,
CVPR19(4317-4325).
IEEE DOI 2002
BibRef

Qiu, Y.X.[Yu-Xian], Leng, J.W.[Jing-Wen], Guo, C.[Cong], Chen, Q.[Quan], Li, C.[Chao], Guo, M.[Minyi], Zhu, Y.H.[Yu-Hao],
Adversarial Defense Through Network Profiling Based Path Extraction,
CVPR19(4772-4781).
IEEE DOI 2002
BibRef

Jia, X.J.[Xiao-Jun], Wei, X.X.[Xing-Xing], Cao, X.C.[Xiao-Chun], Foroosh, H.[Hassan],
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples,
CVPR19(6077-6085).
IEEE DOI 2002
BibRef

Ji, J., Zhong, B., Ma, K.,
Multi-Scale Defense of Adversarial Images,
ICIP19(4070-4074)
IEEE DOI 1910
deep learning, adversarial images, defense, multi-scale, image evolution BibRef

Saha, S., Kumar, A., Sahay, P., Jose, G., Kruthiventi, S., Muralidhara, H.,
Attack Agnostic Statistical Method for Adversarial Detection,
SDL-CV19(798-802)
IEEE DOI 2004
feature extraction, image classification, learning (artificial intelligence), neural nets, Adversarial Attack BibRef

Taran, O.[Olga], Rezaeifar, S.[Shideh], Voloshynovskiy, S.[Slava],
Bridging Machine Learning and Cryptography in Defence Against Adversarial Attacks,
Objectionable18(II:267-279).
Springer DOI 1905
BibRef

Naseer, M., Khan, S., Porikli, F.M.,
Local Gradients Smoothing: Defense Against Localized Adversarial Attacks,
WACV19(1300-1307)
IEEE DOI 1904
data compression, feature extraction, gradient methods, image classification, image coding, image representation, High frequency BibRef

Akhtar, N., Liu, J., Mian, A.,
Defense Against Universal Adversarial Perturbations,
CVPR18(3389-3398)
IEEE DOI 1812
Perturbation methods, Training, Computational modeling, Detectors, Neural networks, Robustness, Integrated circuits BibRef

Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Fawzi, A.[Alhussein], Fawzi, O.[Omar], Frossard, P.[Pascal],
Universal Adversarial Perturbations,
CVPR17(86-94)
IEEE DOI 1711
Correlation, Neural networks, Optimization, Robustness, Training, Visualization BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Countering Adversarial Attacks, Robustness.


Last update: Nov 2, 2025 at 14:03:07