Seo, S.[Seungwan],
Lee, Y.[Yunseung],
Kang, P.[Pilsung],
Cost-free adversarial defense: Distance-based optimization for model
robustness without adversarial training,
CVIU(227), 2023, pp. 103599.
Elsevier DOI
2301
Adversarial defense, White-box attack, Adversarial robustness,
Distance-based defense
BibRef
Cheng, Z.[Zhen],
Zhu, F.[Fei],
Zhang, X.Y.[Xu-Yao],
Liu, C.L.[Cheng-Lin],
Adversarial training with distribution normalization and margin
balance,
PR(136), 2023, pp. 109182.
Elsevier DOI
2301
Adversarial robustness, Adversarial training,
Distribution normalization, Margin balance
BibRef
Lau, C.P.[Chun Pong],
Liu, J.[Jiang],
Souri, H.[Hossein],
Lin, W.A.[Wei-An],
Feizi, S.[Soheil],
Chellappa, R.[Rama],
Interpolated Joint Space Adversarial Training for Robust and
Generalizable Defenses,
PAMI(45), No. 11, November 2023, pp. 13054-13067.
IEEE DOI
2310
BibRef
Miao, J.Z.[Jun-Zhong],
Yu, X.Z.[Xiang-Zhan],
Hu, Z.C.[Zhi-Chao],
Song, Y.[Yanru],
Liu, L.[Likun],
Zhou, Z.G.[Zhi-Gang],
An effective deep learning adversarial defense method based on
spatial structural constraints in embedding space,
PRL(178), 2024, pp. 160-166.
Elsevier DOI
2402
Adversarial defense, Spatial structure, Adversarial training,
Generalization, Deep learning
BibRef
Song, K.Y.[Kai-Yu],
Lai, H.J.[Han-Jiang],
Pan, Y.[Yan],
Yin, J.[Jian],
MimicDiffusion: Purifying Adversarial Perturbation via Mimicking
Clean Diffusion Model,
CVPR24(24665-24674)
IEEE DOI Code:
WWW Link.
2410
Costs, Codes, Accuracy, Purification, Perturbation methods,
Artificial neural networks, Diffusion Model, Adversarial Perturbation
BibRef
Wang, Z.[Zeyu],
Li, X.H.[Xian-Hang],
Zhu, H.[Hongru],
Xie, C.[Cihang],
Revisiting Adversarial Training at Scale,
CVPR24(24675-24685)
IEEE DOI Code:
WWW Link.
2410
Training, Visualization, Accuracy, Costs, Computational modeling,
Pipelines, Machine learning
BibRef
Xiao, Y.[Yuan],
Ma, S.Q.[Shi-Qing],
Zhai, J.[Juan],
Fang, C.R.[Chun-Rong],
Jia, J.Y.[Jin-Yuan],
Chen, Z.Y.[Zhen-Yu],
Towards General Robustness Verification of MaxPool-Based
Convolutional Neural Networks via Tightening Linear Approximation,
CVPR24(24766-24775)
IEEE DOI Code:
WWW Link.
2410
Perturbation methods, Scalability, Neural networks,
Linear approximation, Benchmark testing, Robustness
BibRef
Li, Q.[Qian],
Hu, Y.X.[Yu-Xiao],
Dong, Y.P.[Yin-Peng],
Zhang, D.X.[Dong-Xiao],
Chen, Y.[Yuntian],
Focus on Hiders: Exploring Hidden Threats for Enhancing Adversarial
Training,
CVPR24(24442-24451)
IEEE DOI
2410
Training, Adaptation models, Accuracy, Prevention and mitigation, Robustness
BibRef
Yin, X.Y.[Xiang-Yu],
Ruan, W.J.[Wen-Jie],
Boosting Adversarial Training via Fisher-Rao Norm-Based
Regularization,
CVPR24(24544-24553)
IEEE DOI Code:
WWW Link.
2410
Training, Accuracy, Sensitivity, Computational modeling,
Artificial neural networks, Network architecture, Robustness, model complexity
BibRef
Tang, L.[Linyu],
Zhang, L.[Lei],
Robust Overfitting Does Matter: Test-Time Adversarial Purification
with FGSM,
CVPR24(24347-24356)
IEEE DOI Code:
WWW Link.
2410
Training, Accuracy, Codes, Purification, Perturbation methods,
Robustness, Adversarial Defense
BibRef
Zhao, M.[Mengnan],
Zhang, L.[Lihe],
Kong, Y.Q.[Yu-Qiu],
Yin, B.C.[Bao-Cai],
Fast Adversarial Training with Smooth Convergence,
ICCV23(4697-4706)
IEEE DOI Code:
WWW Link.
2401
BibRef
Ge, Y.[Yao],
Li, Y.[Yun],
Han, K.[Keji],
Zhu, J.[Junyi],
Long, X.Z.[Xian-Zhong],
Advancing Example Exploitation Can Alleviate Critical Challenges in
Adversarial Training,
ICCV23(145-154)
IEEE DOI Code:
WWW Link.
2401
BibRef
Wei, Z.[Zeming],
Wang, Y.F.[Yi-Fei],
Guo, Y.W.[Yi-Wen],
Wang, Y.[Yisen],
CFA: Class-Wise Calibrated Fair Adversarial Training,
CVPR23(8193-8201)
IEEE DOI
2309
BibRef
Dong, J.H.[Jun-Hao],
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Lai, J.H.[Jian-Huang],
Xie, X.H.[Xiao-Hua],
The Enemy of My Enemy is My Friend: Exploring Inverse Adversaries for
Improving Adversarial Training,
CVPR23(24678-24687)
IEEE DOI
2309
BibRef
Hsiung, L.[Lei],
Tsai, Y.Y.[Yun-Yun],
Chen, P.Y.[Pin-Yu],
Ho, T.Y.[Tsung-Yi],
Towards Compositional Adversarial Robustness: Generalizing
Adversarial Training to Composite Semantic Perturbations,
CVPR23(24658-24667)
IEEE DOI
2309
BibRef
Jin, G.J.[Gao-Jie],
Yi, X.P.[Xin-Ping],
Wu, D.Y.[Deng-Yu],
Mu, R.H.[Rong-Hui],
Huang, X.W.[Xiao-Wei],
Randomized Adversarial Training via Taylor Expansion,
CVPR23(16447-16457)
IEEE DOI
2309
BibRef
Gavrikov, P.[Paul],
Keuper, J.[Janis],
Keuper, M.[Margret],
An Extended Study of Human-like Behavior under Adversarial Training,
AML23(2361-2368)
IEEE DOI
2309
BibRef
Byun, J.[Junyoung],
Go, H.[Hyojun],
Cho, S.[Seungju],
Kim, C.[Changick],
Exploiting Doubly Adversarial Examples for Improving Adversarial
Robustness,
ICIP22(1331-1335)
IEEE DOI
2211
Training, Deep learning, Neural networks, Training data, Robustness,
Adversarial training
BibRef
Wang, Z.[Zi],
Li, C.C.[Cheng-Cheng],
Li, H.[Husheng],
Adversarial Training of Anti-Distilled Neural Network with Semantic
Regulation of Class Confidence,
ICIP22(3576-3580)
IEEE DOI
2211
Training, Knowledge engineering, Image coding, Semantics,
Neural networks, Intellectual property, Regulation, Semantic similarity
BibRef
Yin, X.[Xuwang],
Li, S.Y.[Shi-Ying],
Rohde, G.K.[Gustavo K.],
Learning Energy-Based Models with Adversarial Training,
ECCV22(V:209-226).
Springer DOI
2211
BibRef
Yang, S.[Shuo],
Xu, C.[Chang],
One Size Does NOT Fit All: Data-Adaptive Adversarial Training,
ECCV22(V:70-85).
Springer DOI
2211
BibRef
Dolatabadi, H.M.[Hadi M.],
Erfani, S.[Sarah],
Leckie, C.[Christopher],
ℓ∞-Robustness and Beyond: Unleashing Efficient Adversarial Training,
ECCV22(XI:467-483).
Springer DOI
2211
BibRef
Jia, X.J.[Xiao-Jun],
Zhang, Y.[Yong],
Wu, B.Y.[Bao-Yuan],
Ma, K.[Ke],
Wang, J.[Jue],
Cao, X.C.[Xiao-Chun],
LAS-AT: Adversarial Training with Learnable Attack Strategy,
CVPR22(13388-13398)
IEEE DOI
2210
Training, Codes, Databases, Benchmark testing, Robustness,
Adversarial attack and defense
BibRef
Li, T.[Tao],
Wu, Y.[Yingwen],
Chen, S.[Sizhe],
Fang, K.[Kun],
Huang, X.L.[Xiao-Lin],
Subspace Adversarial Training,
CVPR22(13399-13408)
IEEE DOI
2210
Training, Codes, Solids, Robustness, Standards,
Adversarial attack and defense, Machine learning, Optimization methods
BibRef
Poursaeed, O.[Omid],
Jiang, T.X.[Tian-Xing],
Yang, H.[Harry],
Belongie, S.[Serge],
Lim, S.N.[Ser-Nam],
Robustness and Generalization via Generative Adversarial Training,
ICCV21(15691-15700)
IEEE DOI
2203
Training, Deep learning, Image segmentation,
Computational modeling, Neural networks, Object detection,
Neural generative models
BibRef
Xu, W.P.[Wei-Peng],
Huang, H.C.[Hong-Cheng],
Pan, S.Y.[Shao-You],
Using Feature Alignment Can Improve Clean Average Precision and
Adversarial Robustness In Object Detection,
ICIP21(2184-2188)
IEEE DOI
2201
Training, Object detection, Detectors, Feature extraction,
Robustness, deep learning, object detection, adversarial training
BibRef
Yu, C.[Cheng],
Xue, Y.Z.[You-Ze],
Chen, J.S.[Jian-Sheng],
Wang, Y.[Yu],
Ma, H.M.[Hui-Min],
Enhancing Adversarial Robustness for Image Classification By
Regularizing Class Level Feature Distribution,
ICIP21(494-498)
IEEE DOI
2201
Training, Deep learning, Adaptation models, Image processing,
Neural networks, Robustness, Adversarial Training
BibRef
Dabouei, A.[Ali],
Taherkhani, F.[Fariborz],
Soleymani, S.[Sobhan],
Nasrabadi, N.M.[Nasser M.],
Revisiting Outer Optimization in Adversarial Training,
ECCV22(V:244-261).
Springer DOI
2211
BibRef
Dabouei, A.[Ali],
Soleymani, S.[Sobhan],
Taherkhani, F.[Fariborz],
Dawson, J.,
Nasrabadi, N.M.[Nasser M.],
Exploiting Joint Robustness to Adversarial Perturbations,
CVPR20(1119-1128)
IEEE DOI
2008
Robustness, Perturbation methods, Training, Predictive models,
Optimization, Adaptation models
BibRef
Addepalli, S.[Sravanti],
Jain, S.[Samyak],
Sriramanan, G.[Gaurang],
Babu, R.V.[R. Venkatesh],
Scaling Adversarial Training to Large Perturbation Bounds,
ECCV22(V:301-316).
Springer DOI
2211
BibRef
Vivek, B.S.,
Revanur, A.[Ambareesh],
Venkat, N.[Naveen],
Babu, R.V.[R. Venkatesh],
Plug-And-Pipeline: Efficient Regularization for Single-Step
Adversarial Training,
TCV20(138-146)
IEEE DOI
2008
Training, Robustness, Computational modeling, Perturbation methods,
Iterative methods, Backpropagation, Data models
BibRef
Wang, J.,
Zhang, H.,
Bilateral Adversarial Training:
Towards Fast Training of More Robust Models Against Adversarial Attacks,
ICCV19(6628-6637)
IEEE DOI
2004
entropy, learning (artificial intelligence), neural nets,
security of data, adversarial attacks, Data models
BibRef
Ye, S.,
Xu, K.,
Liu, S.,
Cheng, H.,
Lambrechts, J.,
Zhang, H.,
Zhou, A.,
Ma, K.,
Wang, Y.,
Lin, X.,
Adversarial Robustness vs. Model Compression, or Both?,
ICCV19(111-120)
IEEE DOI
2004
minimax techniques, neural nets, security of data,
adversarial attacks, concurrent adversarial training
BibRef
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Fawzi, A.[Alhussein],
Uesato, J.[Jonathan],
Frossard, P.[Pascal],
Robustness via Curvature Regularization, and Vice Versa,
CVPR19(9070-9078)
IEEE DOI
2002
Adversarial training leads to more linear boundaries.
BibRef
Mummadi, C.K.,
Brox, T.,
Metzen, J.H.,
Defending Against Universal Perturbations With Shared Adversarial
Training,
ICCV19(4927-4936)
IEEE DOI
2004
image classification, image segmentation, neural nets,
universal perturbations, shared adversarial training,
Computational modeling
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Noise in Adversarial Attacks, Removing, Detection, Use.