14.5.10.10.5 Noise in Adversarial Attacks, Removing, Detection, Use

Chapter Contents
Adversarial Networks. Generative Networks. Defense. Attacks. GAN.
See also Adversarial Attacks.
See also Countering Adversarial Attacks, Defense, Robustness.
See also Adversarial Networks, Adversarial Inputs, Generative Adversarial.
See also Backdoor Attacks.
See also Camouflaged Object Detection, Camouflage.

Romano, Y.[Yaniv], Aberdam, A.[Aviad], Sulam, J.[Jeremias], Elad, M.[Michael],
Adversarial Noise Attacks of Deep Learning Architectures: Stability Analysis via Sparse-Modeled Signals,
JMIV(62), No. 3, April 2020, pp. 313-327.
Springer DOI 2004
BibRef

Zhao, Z.Q.[Zhi-Qun], Wang, H.Y.[Heng-You], Sun, H.[Hao], Yuan, J.H.[Jian-He], Huang, Z.C.[Zhong-Chao], He, Z.H.[Zhi-Hai],
Removing Adversarial Noise via Low-Rank Completion of High-Sensitivity Points,
IP(30), 2021, pp. 6485-6497.
IEEE DOI 2107
Perturbation methods, Training, Neural networks, Image denoising, Optimization, TV, Sensitivity, Adversarial examples, TV norm BibRef

Nguyen, H.H.[Huy H.], Kuribayashi, M.[Minoru], Yamagishi, J.[Junichi], Echizen, I.[Isao],
Effects of Image Processing Operations on Adversarial Noise and Their Use in Detecting and Correcting Adversarial Images,
IEICE(E105-D), No. 1, January 2022, pp. 65-77.
WWW Link. 2201
BibRef

Gao, S.[Song], Yu, S.[Shui], Wu, L.W.[Li-Wen], Yao, S.W.[Shao-Wen], Zhou, X.W.[Xiao-Wei],
Detecting adversarial examples by additional evidence from noise domain,
IET-IPR(16), No. 2, 2022, pp. 378-392.
DOI Link 2201
BibRef

Cheng, Y.P.[Yu-Peng], Guo, Q.[Qing], Juefei-Xu, F.[Felix], Lin, S.W.[Shang-Wei], Feng, W.[Wei], Lin, W.S.[Wei-Si], Liu, Y.[Yang],
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack,
MultMed(24), 2022, pp. 3807-3822.
IEEE DOI 2208
Noise reduction, Kernel, Task analysis, Image denoising, Image quality, Noise measurement, Deep learning, adversarial attack BibRef

Yang, D.[Dong], Chen, W.[Wei], Wei, S.J.[Song-Jie],
DTFA: Adversarial attack with discrete cosine transform noise and target features on deep neural networks,
IET-IPR(17), No. 5, 2023, pp. 1464-1477.
DOI Link 2304
adversarial example, image classification, regional sampling, target attack BibRef

Ying, C.Y.[Cheng-Yang], You, Q.B.[Qiao-Ben], Zhou, X.N.[Xin-Ning], Su, H.[Hang], Ding, W.B.[Wen-Bo], Ai, J.Y.[Jian-Yong],
Consistent attack: Universal adversarial perturbation on embodied vision navigation,
PRL(168), 2023, pp. 57-63.
Elsevier DOI 2304
Embodied agent, Vision navigation, Deep neural networks, Universal adversarial noise BibRef

Li, Y.Z.[Yue-Zun], Zhang, C.[Cong], Qi, H.G.[Hong-Gang], Lyu, S.W.[Si-Wei],
AdaNI: Adaptive Noise Injection to improve adversarial robustness,
CVIU(238), 2024, pp. 103855.
Elsevier DOI 2312
Image classification, Adversarial examples, Adversarial robustness BibRef

Park, J.[Jeongeun], Shin, S.[Seungyoun], Hwang, S.[Sangheum], Choi, S.[Sungjoon],
Elucidating robust learning with uncertainty-aware corruption pattern estimation,
PR(138), 2023, pp. 109387.
Elsevier DOI 2303
Robust learning, Training with noisy labels, Uncertainty estimation, Corruption pattern estimation BibRef

Xie, W.C.[Wei-Cheng], Luo, C.[Cheng], Wang, G.[Gui], Shen, L.L.[Lin-Lin], Lai, Z.H.[Zhi-Hui], Song, S.Y.[Si-Yang],
Network characteristics adaption and hierarchical feature exploration for robust object recognition,
PR(149), 2024, pp. 110240.
Elsevier DOI Code:
WWW Link. 2403
Robust object recognition, Attention-based dropout, Adaptive characteristics, Hierarchically-salient features BibRef

He, X.L.[Xi-Lin], Lin, Q.L.[Qin-Liang], Luo, C.[Cheng], Xie, W.C.[Wei-Cheng], Song, S.Y.[Si-Yang], Liu, F.[Feng], Shen, L.L.[Lin-Lin],
Shift from Texture-bias to Shape-Bias: Edge Deformation-Based Augmentation for Robust Object Recognition,
ICCV23(1526-1535)
IEEE DOI Code:
WWW Link. 2401
BibRef


Azuma, H.[Hiroki], Matsui, Y.[Yusuke],
Defense-Prefix for Preventing Typographic Attacks on CLIP,
AROW23(3646-3655)
IEEE DOI Code:
WWW Link. 2401
BibRef

Luzi, L.[Lorenzo], Marrero, C.O.[Carlos Ortiz], Wynar, N.[Nile], Baraniuk, R.G.[Richard G.], Henry, M.J.[Michael J.],
Evaluating generative networks using Gaussian mixtures of image features,
WACV23(279-288)
IEEE DOI 2302
Image resolution, Inverse problems, Computational modeling, Perturbation methods, Gaussian noise, Gaussian distribution, adversarial attack and defense methods BibRef

Choi, J.H.[Jun-Ho], Zhang, H.[Huan], Kim, J.H.[Jun-Hyuk], Hsieh, C.J.[Cho-Jui], Lee, J.S.[Jong-Seok],
Deep Image Destruction: Vulnerability of Deep Image-to-Image Models against Adversarial Attacks,
ICPR22(1287-1293)
IEEE DOI 2212
Degradation, Training, Analytical models, Perturbation methods, Noise reduction, Robustness BibRef

Thakur, N.[Nupur], Li, B.X.[Bao-Xin],
PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos,
ArtOfRobust22(130-137)
IEEE DOI 2210
Training, Deep learning, Perturbation methods, Surveillance, Gaussian noise, Neural networks BibRef

Zhou, D.W.[Da-Wei], Wang, N.N.[Nan-Nan], Peng, C.L.[Chun-Lei], Gao, X.B.[Xin-Bo], Wang, X.Y.[Xiao-Yu], Yu, J.[Jun], Liu, T.L.[Tong-Liang],
Removing Adversarial Noise in Class Activation Feature Space,
ICCV21(7858-7867)
IEEE DOI 2203
Training, Deep learning, Adaptation models, Perturbation methods, Computational modeling, Noise reduction, Adversarial learning, Transfer/Low-shot/Semi/Unsupervised Learning BibRef

Zhang, C.[Cheng], Gao, P.[Pan],
Countering Adversarial Examples: Combining Input Transformation and Noisy Training,
AROW21(102-111)
IEEE DOI 2112
Training, Image coding, Quantization (signal), Perturbation methods, Computational modeling, Transform coding, Artificial neural networks BibRef

Deng, K.[Kang], Peng, A.[Anjie], Dong, W.L.[Wan-Li], Zeng, H.[Hui],
Detecting C&W Adversarial Images Based on Noise Addition-Then-Denoising,
ICIP21(3607-3611)
IEEE DOI 2201
Deep learning, Visualization, Perturbation methods, Gaussian noise, Image processing, Noise reduction, Deep neural network, Detection BibRef

Tan, Y.X.M.[Yi Xiang Marcus], Elovici, Y.[Yuval], Binder, A.[Alexander],
Adaptive Noise Injection for Training Stochastic Student Networks from Deterministic Teachers,
ICPR21(7587-7594)
IEEE DOI 2105
Training, Adaptation models, Adaptive systems, Computational modeling, Stochastic processes, Machine learning, stochastic networks BibRef

Yan, B., Wang, D., Lu, H., Yang, X.,
Cooling-Shrinking Attack: Blinding the Tracker With Imperceptible Noises,
CVPR20(987-996)
IEEE DOI 2008
Target tracking, Generators, Heating systems, Perturbation methods, Object tracking, Training BibRef

Yi, C., Li, H., Wan, R., Kot, A.C.,
Improving Robustness of DNNs against Common Corruptions via Gaussian Adversarial Training,
VCIP20(17-20)
IEEE DOI 2102
Robustness, Perturbation methods, Training, Neural networks, Standards, Gaussian noise, Tensors, Deep Learning, Data Augmentation BibRef

Liu, X., Xiao, T., Si, S., Cao, Q., Kumar, S., Hsieh, C.,
How Does Noise Help Robustness? Explanation and Exploration under the Neural SDE Framework,
CVPR20(279-287)
IEEE DOI 2008
Neural networks, Robustness, Stochastic processes, Training, Random variables, Gaussian noise, Mathematical model BibRef

Dong, X., Han, J., Chen, D., Liu, J., Bian, H., Ma, Z., Li, H., Wang, X., Zhang, W., Yu, N.,
Robust Superpixel-Guided Attentional Adversarial Attack,
CVPR20(12892-12901)
IEEE DOI 2008
Perturbation methods, Robustness, Noise measurement, Image color analysis, Pipelines, Agriculture BibRef

Borkar, T., Heide, F., Karam, L.J.,
Defending Against Universal Attacks Through Selective Feature Regeneration,
CVPR20(706-716)
IEEE DOI 2008
Perturbation methods, Training, Robustness, Noise reduction, Image restoration, Transforms BibRef

Li, G., Ding, S., Luo, J., Liu, C.,
Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder,
CVPR20(797-805)
IEEE DOI 2008
Noise reduction, Robustness, Training, Image restoration, Noise measurement, Decoding, Neural networks BibRef

Shi, Y., Han, Y., Tian, Q.,
Polishing Decision-Based Adversarial Noise With a Customized Sampling,
CVPR20(1027-1035)
IEEE DOI 2008
Gaussian distribution, Sensitivity, Noise reduction, Optimization, Image coding, Robustness, Standards BibRef

He, Z.[Zhezhi], Rakin, A.S.[Adnan Siraj], Fan, D.L.[De-Liang],
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack,
CVPR19(588-597).
IEEE DOI 2002
BibRef

Kaneko, T.[Takuhiro], Harada, T.[Tatsuya],
Blur, Noise, and Compression Robust Generative Adversarial Networks,
CVPR21(13574-13584)
IEEE DOI 2111
Degradation, Training, Adaptation models, Image coding, Uncertainty, Computational modeling BibRef

Kaneko, T.[Takuhiro], Harada, T.[Tatsuya],
Noise Robust Generative Adversarial Networks,
CVPR20(8401-8411)
IEEE DOI 2008
Training, Noise measurement, Generators, Noise robustness, Gaussian noise, Image generation BibRef

Kaneko, T.[Takuhiro], Ushiku, Y.[Yoshitaka], Harada, T.[Tatsuya],
Label-Noise Robust Generative Adversarial Networks,
CVPR19(2462-2471).
IEEE DOI 2002
BibRef

Xie, C.[Cihang], Wu, Y.X.[Yu-Xin], van der Maaten, L.[Laurens], Yuille, A.L.[Alan L.], He, K.M.[Kai-Ming],
Feature Denoising for Improving Adversarial Robustness,
CVPR19(501-509).
IEEE DOI 2002
BibRef

Prakash, A., Moran, N., Garber, S., DiLillo, A., Storer, J.,
Deflecting Adversarial Attacks with Pixel Deflection,
CVPR18(8571-8580)
IEEE DOI 1812
Perturbation methods, Transforms, Minimization, Robustness, Noise reduction, Training BibRef

Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., Zhu, J.,
Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser,
CVPR18(1778-1787)
IEEE DOI 1812
Training, Perturbation methods, Noise reduction, Image reconstruction, Predictive models, Neural networks, Adaptation models BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Adversarial Attacks.


Last update: Apr 18, 2024 at 11:38:49