14.5.8.8.3 Adversarial Attacks

Chapter Contents
Adversarial Networks. Generative Networks. GAN.
See also Countering Adversarial Attacks, Defense, Robustness.
See also Adversarial Networks, Adversarial Inputs, Generative Adversarial.

Biggio, B.[Battista], Roli, F.[Fabio],
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning,
PR(84), 2018, pp. 317-331.
Elsevier DOI 1809
Award, Pattern Recognition, Best Paper. Adversarial machine learning, Evasion attacks, Poisoning attacks, Adversarial examples, Secure learning, Deep learning BibRef
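
For orientation: the survey above organizes the field around evasion attacks, which perturb inputs at test time, and poisoning attacks, which corrupt the training set. As background for the evasion entries that follow, here is a minimal sketch of the canonical one-step gradient-sign attack (FGSM); it assumes a PyTorch-style classifier with inputs normalized to [0, 1] and is purely illustrative, not the method of any particular entry.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    # One ascent step on the loss surface: move each pixel by eps
    # in the direction of the sign of the loss gradient.
    # Assumes model(x) returns logits and inputs lie in [0, 1].
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()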

Hang, J.[Jie], Han, K.[Keji], Chen, H.[Hui], Li, Y.[Yun],
Ensemble adversarial black-box attacks against deep learning systems,
PR(101), 2020, pp. 107184.
Elsevier DOI 2003
Black-box attack, Vulnerability, Ensemble adversarial attack, Diversity, Transferability BibRef

Croce, F.[Francesco], Rauber, J.[Jonas], Hein, M.[Matthias],
Scaling up the Randomized Gradient-Free Adversarial Attack Reveals Overestimation of Robustness Using Established Attacks,
IJCV(128), No. 4, April 2020, pp. 1028-1046.
Springer DOI 2004
BibRef
Earlier: A1, A3, Only:
A Randomized Gradient-Free Attack on ReLU Networks,
GCPR18(215-227).
Springer DOI 1905
Award, GCPR, HM. BibRef

Romano, Y.[Yaniv], Aberdam, A.[Aviad], Sulam, J.[Jeremias], Elad, M.[Michael],
Adversarial Noise Attacks of Deep Learning Architectures: Stability Analysis via Sparse-Modeled Signals,
JMIV(62), No. 3, April 2020, pp. 313-327.
Springer DOI 2004
BibRef

Ozbulak, U.[Utku], Gasparyan, M.[Manvel], de Neve, W.[Wesley], van Messem, A.[Arnout],
Perturbation analysis of gradient-based adversarial attacks,
PRL(135), 2020, pp. 313-320.
Elsevier DOI 2006
Adversarial attacks, Adversarial examples, Deep learning, Perturbation analysis BibRef

Wan, S.[Sheng], Wu, T.Y.[Tung-Yu], Hsu, H.W.[Heng-Wei], Wong, W.H.[Wing Hung], Lee, C.Y.[Chen-Yi],
Feature Consistency Training With JPEG Compressed Images,
CirSysVideo(30), No. 12, December 2020, pp. 4769-4780.
IEEE DOI 2012
Deep neural networks are vulnerable to JPEG compression artifacts. Image coding, Distortion, Training, Transform coding, Robustness, Quantization (signal), Feature extraction, Compression artifacts, classification robustness BibRef

Che, Z., Borji, A., Zhai, G., Ling, S., Li, J., Tian, Y., Guo, G., Le Callet, P.,
Adversarial Attack Against Deep Saliency Models Powered by Non-Redundant Priors,
IP(30), 2021, pp. 1973-1988.
IEEE DOI 2101
Computational modeling, Perturbation methods, Redundancy, Task analysis, Visualization, Robustness, Neural networks, gradient estimation BibRef

Xu, Y., Du, B., Zhang, L.,
Assessing the Threat of Adversarial Examples on Deep Neural Networks for Remote Sensing Scene Classification: Attacks and Defenses,
GeoRS(59), No. 2, February 2021, pp. 1604-1617.
IEEE DOI 2101
Remote sensing, Neural networks, Deep learning, Perturbation methods, Feature extraction, Task analysis, scene classification BibRef

Correia-Silva, J.R.[Jacson Rodrigues], Berriel, R.F.[Rodrigo F.], Badue, C.[Claudine], de Souza, A.F.[Alberto F.], Oliveira-Santos, T.[Thiago],
Copycat CNN: Are random non-Labeled data enough to steal knowledge from black-box models?,
PR(113), 2021, pp. 107830.
Elsevier DOI 2103
Copy a CNN model. Deep learning, Convolutional neural network, Neural network attack, Stealing network knowledge, Knowledge distillation BibRef

Xiao, Y.[Yatie], Pun, C.M.[Chi-Man], Liu, B.[Bo],
Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation,
PR(115), 2021, pp. 107903.
Elsevier DOI 2104
Object detection, Adversarial attack, Adaptive object-oriented perturbation BibRef

Yamanaka, K.[Koichiro], Takahashi, K.[Keita], Fujii, T.[Toshiaki], Matsumoto, R.[Ryutaroh],
Simultaneous Attack on CNN-Based Monocular Depth Estimation and Optical Flow Estimation,
IEICE(E104-D), No. 5, May 2021, pp. 785-788.
WWW Link. 2105
BibRef

Lin, H.Y.[Hsiao-Ying], Biggio, B.[Battista],
Adversarial Machine Learning: Attacks From Laboratories to the Real World,
Computer(54), No. 5, May 2021, pp. 56-60.
IEEE DOI 2106
Adversarial machine learning, Data models, Training data, Biological system modeling BibRef

Wang, B.[Bo], Zhao, M.[Mengnan], Wang, W.[Wei], Wei, F.[Fei], Qin, Z.[Zhan], Ren, K.[Kui],
Are You Confident That You Have Successfully Generated Adversarial Examples?,
CirSysVideo(31), No. 6, June 2021, pp. 2089-2099.
IEEE DOI 2106
Perturbation methods, Iterative methods, Computational modeling, Neural networks, Security, Training, Robustness, buffer BibRef

Gragnaniello, D.[Diego], Marra, F.[Francesco], Verdoliva, L.[Luisa], Poggi, G.[Giovanni],
Perceptual quality-preserving black-box attack against deep learning image classifiers,
PRL(147), 2021, pp. 142-149.
Elsevier DOI 2106
Image classification, Face recognition, Adversarial attacks, Black-box BibRef

Tang, S.L.[San-Li], Huang, X.L.[Xiao-Lin], Chen, M.J.[Ming-Jian], Sun, C.J.[Cheng-Jin], Yang, J.[Jie],
Adversarial Attack Type I: Cheat Classifiers by Significant Changes,
PAMI(43), No. 3, March 2021, pp. 1100-1109.
IEEE DOI 2102
Neural networks, Training, Aerospace electronics, Toy manufacturing industry, Sun, Face recognition, Task analysis, supervised variational autoencoder BibRef

Upadhyay, U., Mukherjee, P.,
Generating Out of Distribution Adversarial Attack Using Latent Space Poisoning,
SPLetters(28), 2021, pp. 523-527.
IEEE DOI 2103
Training, Aerospace electronics, Perturbation methods, Smoothing methods, Mathematical model, manifold space BibRef

Wang, L.[Lin], Yoon, K.J.[Kuk-Jin],
PSAT-GAN: Efficient Adversarial Attacks Against Holistic Scene Understanding,
IP(30), 2021, pp. 7541-7553.
IEEE DOI 2109
Task analysis, Perturbation methods, Visualization, Pipelines, Autonomous vehicles, Semantics, Generative adversarial networks, generative model BibRef

Mohamad-Nezami, O.[Omid], Chaturvedi, A.[Akshay], Dras, M.[Mark], Garain, U.[Utpal],
Pick-Object-Attack: Type-specific adversarial attack for object detection,
CVIU(211), 2021, pp. 103257.
Elsevier DOI 2110
Adversarial attack, Faster R-CNN, Deep learning, Image captioning, Computer vision BibRef

Qin, C.[Chuan], Wu, L.[Liang], Zhang, X.[Xinpeng], Feng, G.[Guorui],
Efficient Non-Targeted Attack for Deep Hashing Based Image Retrieval,
SPLetters(28), 2021, pp. 1893-1897.
IEEE DOI 2110
Codes, Perturbation methods, Hamming distance, Image retrieval, Training, Feature extraction, Databases, Adversarial example, image retrieval BibRef

Hu, H.Q.[Hao-Qi], Lu, X.F.[Xiao-Feng], Zhang, X.P.[Xin-Peng], Zhang, T.X.[Tian-Xing], Sun, G.L.[Guang-Ling],
Inheritance Attention Matrix-Based Universal Adversarial Perturbations on Vision Transformers,
SPLetters(28), 2021, pp. 1923-1927.
IEEE DOI 2110
Perturbation methods, Robustness, Visualization, Transformers, Optimization, Task analysis, Head, Vision Transformers, self-attention BibRef


Chen, Z.K.[Zhi-Kai], Xie, L.X.[Ling-Xi], Pang, S.M.[Shan-Min], He, Y.[Yong], Tian, Q.[Qi],
Appending Adversarial Frames for Universal Video Attack,
WACV21(3198-3207)
IEEE DOI 2106
Measurement, Perturbation methods, Semantics, Pipelines, Euclidean distance BibRef

Tan, Y.X.M.[Yi Xiang Marcus], Elovici, Y.[Yuval], Binder, A.[Alexander],
Adaptive Noise Injection for Training Stochastic Student Networks from Deterministic Teachers,
ICPR21(7587-7594)
IEEE DOI 2105
Training, Adaptation models, Adaptive systems, Computational modeling, Stochastic processes, Machine learning, stochastic networks BibRef

Hu, S.N.[Sheng-Nan], Zhang, Y.[Yang], Laha, S.[Sumit], Sharma, A.[Ankit], Foroosh, H.[Hassan],
CCA: Exploring the Possibility of Contextual Camouflage Attack on Object Detection,
ICPR21(7647-7654)
IEEE DOI 2105
Training, Adaptation models, Machine learning algorithms, Neural networks, Lighting, Detectors, Object detection BibRef

Cancela, B.[Brais], Bolón-Canedo, V.[Verónica], Alonso-Betanzos, A.[Amparo],
A delayed Elastic-Net approach for performing adversarial attacks,
ICPR21(378-384)
IEEE DOI 2105
Perturbation methods, Data preprocessing, Benchmark testing, Size measurement, Robustness, Pattern recognition, Security BibRef

Li, X.C.[Xiu-Chuan], Zhang, X.Y.[Xu-Yao], Yin, F.[Fei], Liu, C.L.[Cheng-Lin],
F-mixup: Attack CNNs From Fourier Perspective,
ICPR21(541-548)
IEEE DOI 2105
Training, Frequency-domain analysis, Perturbation methods, Neural networks, Robustness, Pattern recognition, High frequency BibRef

Grosse, K.[Kathrin], Smith, M.T.[Michael T.], Backes, M.[Michael],
Killing Four Birds with one Gaussian Process: The Relation between different Test-Time Attacks,
ICPR21(4696-4703)
IEEE DOI 2105
Analytical models, Reverse engineering, Training data, Gaussian processes, Data models, Classification algorithms, Pattern recognition BibRef

Barati, R.[Ramin], Safabakhsh, R.[Reza], Rahmati, M.[Mohammad],
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks,
ICPR21(7036-7042)
IEEE DOI 2105
Training, Artificial neural networks, Pattern recognition, Proposals, Convergence, adversarial attack, robustness, adversarial training BibRef

Li, W.J.[Wen-Jie], Tondi, B.[Benedetta], Ni, R.R.[Rong-Rong], Barni, M.[Mauro],
Increased-confidence Adversarial Examples for Deep Learning Counter-forensics,
MMForWild20(411-424).
Springer DOI 2103
BibRef

Dong, X.S.[Xin-Shuai], Liu, H.[Hong], Ji, R.R.[Rong-Rong], Cao, L.J.[Liu-Juan], Ye, Q.X.[Qi-Xiang], Liu, J.Z.[Jian-Zhuang], Tian, Q.[Qi],
API-net: Robust Generative Classifier via a Single Discriminator,
ECCV20(XIII:379-394).
Springer DOI 2011
BibRef

Liu, A.S.[Ai-Shan], Huang, T.R.[Tai-Ran], Liu, X.L.[Xiang-Long], Xu, Y.T.[Yi-Tao], Ma, Y.Q.[Yu-Qing], Chen, X.Y.[Xin-Yun], Maybank, S.J.[Stephen J.], Tao, D.C.[Da-Cheng],
Spatiotemporal Attacks for Embodied Agents,
ECCV20(XVII:122-138).
Springer DOI 2011
Code, Adversarial Attack.
WWW Link. BibRef

Fan, Y.[Yanbo], Wu, B.Y.[Bao-Yuan], Li, T.H.[Tuan-Hui], Zhang, Y.[Yong], Li, M.Y.[Ming-Yang], Li, Z.F.[Zhi-Feng], Yang, Y.[Yujiu],
Sparse Adversarial Attack via Perturbation Factorization,
ECCV20(XXII:35-50).
Springer DOI 2011
BibRef

Guo, J.F.[Jun-Feng], Liu, C.[Cong],
Practical Poisoning Attacks on Neural Networks,
ECCV20(XXVII:142-158).
Springer DOI 2011
BibRef

Liu, Y.F.[Yun-Fei], Ma, X.J.[Xing-Jun], Bailey, J.[James], Lu, F.[Feng],
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks,
ECCV20(X:182-199).
Springer DOI 2011
BibRef

Feng, X.J.[Xin-Jie], Yao, H.X.[Hong-Xun], Che, W.B.[Wen-Bin], Zhang, S.P.[Sheng-Ping],
An Effective Way to Boost Black-box Adversarial Attack,
MMMod20(I:393-404).
Springer DOI 2003
BibRef

Costales, R., Mao, C., Norwitz, R., Kim, B., Yang, J.,
Live Trojan Attacks on Deep Neural Networks,
AML-CV20(3460-3469)
IEEE DOI 2008
Trojan horses, Computational modeling, Neural networks, Machine learning BibRef

Haque, M., Chauhan, A., Liu, C., Yang, W.,
ILFO: Adversarial Attack on Adaptive Neural Networks,
CVPR20(14252-14261)
IEEE DOI 2008
Computational modeling, Energy consumption, Robustness, Neural networks, Adaptation models, Machine learning, Perturbation methods BibRef

Zhou, M., Wu, J., Liu, Y., Liu, S., Zhu, C.,
DaST: Data-Free Substitute Training for Adversarial Attacks,
CVPR20(231-240)
IEEE DOI 2008
Data models, Training, Machine learning, Perturbation methods, Task analysis, Estimation BibRef

Ganeshan, A.[Aditya], Vivek, B.S., Radhakrishnan, V.B.[Venkatesh Babu],
FDA: Feature Disruptive Attack,
ICCV19(8068-8078)
IEEE DOI 2004
Deal with adversarial attacks. image classification, image representation, learning (artificial intelligence), neural nets, optimisation, BibRef

Han, J., Dong, X., Zhang, R., Chen, D., Zhang, W., Yu, N., Luo, P., Wang, X.,
Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once,
ICCV19(5157-5166)
IEEE DOI 2004
convolutional neural nets, learning (artificial intelligence), pattern classification, security of data, Decoding BibRef

Deng, Y., Karam, L.J.,
Universal Adversarial Attack Via Enhanced Projected Gradient Descent,
ICIP20(1241-1245)
IEEE DOI 2011
Perturbation methods, Computational modeling, Training, Computer architecture, Convolutional neural networks, projected gradient descent (PGD) BibRef
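
For reference, the standard L-infinity projected-gradient-descent loop that this entry enhances looks roughly as follows. This is a hedged sketch assuming a PyTorch-style classifier with inputs in [0, 1] and illustrative defaults for eps, alpha, and steps; the paper's enhanced, universal variant differs in how gradients are aggregated across inputs.

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Repeat small sign steps that ascend the loss, projecting the
    # iterate back into the L-infinity ball of radius eps around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project into the eps-ball, then back into the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv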

Sun, C., Chen, S., Cai, J., Huang, X.,
Type I Attack For Generative Models,
ICIP20(593-597)
IEEE DOI 2011
Image reconstruction, Decoding, Aerospace electronics, Generative adversarial networks, generative models BibRef

Yang, C.L.[Cheng-Lin], Kortylewski, A.[Adam], Xie, C.[Cihang], Cao, Y.Z.[Yin-Zhi], Yuille, A.L.[Alan L.],
Patchattack: A Black-box Texture-based Attack with Reinforcement Learning,
ECCV20(XXVI:681-698).
Springer DOI 2011
BibRef

Braunegg, A., Chakraborty, A.[Amartya], Krumdick, M.[Michael], Lape, N.[Nicole], Leary, S.[Sara], Manville, K.[Keith], Merkhofer, E.[Elizabeth], Strickhart, L.[Laura], Walmer, M.[Matthew],
Apricot: A Dataset of Physical Adversarial Attacks on Object Detection,
ECCV20(XXI:35-50).
Springer DOI 2011
BibRef

Zhang, H.[Hu], Zhu, L.C.[Lin-Chao], Zhu, Y.[Yi], Yang, Y.[Yi],
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior,
ECCV20(XX:240-256).
Springer DOI 2011
BibRef

Gao, L.L.[Lian-Li], Zhang, Q.L.[Qi-Long], Song, J.K.[Jing-Kuan], Liu, X.L.[Xiang-Long], Shen, H.T.[Heng Tao],
Patch-wise Attack for Fooling Deep Neural Network,
ECCV20(XXVIII:307-322).
Springer DOI 2011
BibRef

Andriushchenko, M.[Maksym], Croce, F.[Francesco], Flammarion, N.[Nicolas], Hein, M.[Matthias],
Square Attack: A Query-efficient Black-box Adversarial Attack via Random Search,
ECCV20(XXIII:484-501).
Springer DOI 2011
BibRef

Bai, J.W.[Jia-Wang], Chen, B.[Bin], Li, Y.M.[Yi-Ming], Wu, D.X.[Dong-Xian], Guo, W.W.[Wei-Wei], Xia, S.T.[Shu-Tao], Yang, E.H.[En-Hui],
Targeted Attack for Deep Hashing Based Retrieval,
ECCV20(I:618-634).
Springer DOI 2011
BibRef

Nakka, K.K.[Krishna Kanth], Salzmann, M.[Mathieu],
Indirect Local Attacks for Context-aware Semantic Segmentation Networks,
ECCV20(V:611-628).
Springer DOI 2011
BibRef

Wu, Z.X.[Zu-Xuan], Lim, S.N.[Ser-Nam], Davis, L.S.[Larry S.], Goldstein, T.[Tom],
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors,
ECCV20(IV:1-17).
Springer DOI 2011
BibRef

Li, Q.Z.[Qi-Zhang], Guo, Y.W.[Yi-Wen], Chen, H.[Hao],
Yet Another Intermediate-level Attack,
ECCV20(XVI:241-257).
Springer DOI 2010
BibRef

Zhao, S., Ma, X., Zheng, X., Bailey, J., Chen, J., Jiang, Y.,
Clean-Label Backdoor Attacks on Video Recognition Models,
CVPR20(14431-14440)
IEEE DOI 2008
Training, Data models, Toxicology, Perturbation methods, Training data, Image resolution, Pipelines BibRef

Kolouri, S., Saha, A., Pirsiavash, H., Hoffmann, H.,
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs,
CVPR20(298-307)
IEEE DOI 2008
Training, Perturbation methods, Data models, Computational modeling, Machine learning, Benchmark testing, Computer vision BibRef

Li, J., Ji, R., Liu, H., Liu, J., Zhong, B., Deng, C., Tian, Q.,
Projection Probability-Driven Black-Box Attack,
CVPR20(359-368)
IEEE DOI 2008
Perturbation methods, Sensors, Optimization, Sparse matrices, Compressed sensing, Google, Neural networks BibRef

Huang, L., Gao, C., Zhou, Y., Xie, C., Yuille, A.L., Zou, C., Liu, N.,
Universal Physical Camouflage Attacks on Object Detectors,
CVPR20(717-726)
IEEE DOI 2008
Proposals, Detectors, Semantics, Perturbation methods, Strain, Optimization BibRef

Yan, B., Wang, D., Lu, H., Yang, X.,
Cooling-Shrinking Attack: Blinding the Tracker With Imperceptible Noises,
CVPR20(987-996)
IEEE DOI 2008
Target tracking, Generators, Heating systems, Perturbation methods, Object tracking, Training BibRef

Li, H., Xu, X., Zhang, X., Yang, S., Li, B.,
QEBA: Query-Efficient Boundary-Based Blackbox Attack,
CVPR20(1218-1227)
IEEE DOI 2008
Perturbation methods, Estimation, Predictive models, Machine learning, Cats, Pipelines, Neural networks BibRef

Li, M., Deng, C., Li, T., Yan, J., Gao, X., Huang, H.,
Towards Transferable Targeted Attack,
CVPR20(638-646)
IEEE DOI 2008
Curing, Iterative methods, Extraterrestrial measurements, Entropy, Perturbation methods, Robustness BibRef

Truong, L., Jones, C., Hutchinson, B., August, A., Praggastis, B., Jasper, R., Nichols, N., Tuor, A.,
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers,
AML-CV20(3422-3431)
IEEE DOI 2008
Data models, Training, Computational modeling, Machine learning, Training data, Computer vision, Safety BibRef

Gupta, S., Dube, P., Verma, A.,
Improving the affordability of robustness training for DNNs,
AML-CV20(3383-3392)
IEEE DOI 2008
Training, Mathematical model, Computational modeling, Robustness, Neural networks, Computer architecture, Optimization BibRef

Zhang, Z., Wu, T.,
Learning Ordered Top-k Adversarial Attacks via Adversarial Distillation,
AML-CV20(3364-3373)
IEEE DOI 2008
Perturbation methods, Robustness, Task analysis, Semantics, Training, Visualization, Protocols BibRef

Chen, X., Yan, X., Zheng, F., Jiang, Y., Xia, S., Zhao, Y., Ji, R.,
One-Shot Adversarial Attacks on Visual Tracking With Dual Attention,
CVPR20(10173-10182)
IEEE DOI 2008
Target tracking, Task analysis, Visualization, Perturbation methods, Object tracking, Computer vision, Optimization BibRef

Zhou, H., Chen, D., Liao, J., Chen, K., Dong, X., Liu, K., Zhang, W., Hua, G., Yu, N.,
LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud Based Deep Networks,
CVPR20(10353-10362)
IEEE DOI 2008
Feature extraction, Perturbation methods, Decoding, Training, Neural networks, Target recognition BibRef

Rahmati, A., Moosavi-Dezfooli, S., Frossard, P., Dai, H.,
GeoDA: A Geometric Framework for Black-Box Adversarial Attacks,
CVPR20(8443-8452)
IEEE DOI 2008
Perturbation methods, Estimation, Covariance matrices, Gaussian distribution, Measurement, Neural networks, Robustness BibRef

Duan, R., Ma, X., Wang, Y., Bailey, J., Qin, A.K., Yang, Y.,
Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles,
CVPR20(997-1005)
IEEE DOI 2008
Perturbation methods, Cameras, Robustness, Feature extraction, Distortion, Visualization, Measurement BibRef

Machiraju, H.[Harshitha], Balasubramanian, V.N.[Vineeth N],
A Little Fog for a Large Turn,
WACV20(2891-2900)
IEEE DOI 2006
Perturbation methods, Meteorology, Autonomous robots, Task analysis, Data models, Predictive models, Robustness BibRef

Yang, C.H., Liu, Y., Chen, P., Ma, X., Tsai, Y.J.,
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks,
ICIP19(3811-3815)
IEEE DOI 1910
Causal Reasoning, Adversarial Example, Adversarial Robustness, Interpretable Deep Learning, Visual Reasoning BibRef

Yao, H., Regan, M., Yang, Y., Ren, Y.,
Image Decomposition and Classification Through a Generative Model,
ICIP19(400-404)
IEEE DOI 1910
Generative model, classification, adversarial defense BibRef

Brunner, T., Diehl, F., Le, M.T., Knoll, A.,
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks,
ICCV19(4957-4965)
IEEE DOI 2004
application program interfaces, cloud computing, feature extraction, image classification, security of data, Training BibRef

Liu, Y.J.[Yu-Jia], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal],
A Geometry-Inspired Decision-Based Attack,
ICCV19(4889-4897)
IEEE DOI 2004
Deal with adversarial attack. geometry, image classification, image recognition, neural nets, security of data, black-box settings, Gaussian noise BibRef

Li, J., Ji, R., Liu, H., Hong, X., Gao, Y., Tian, Q.,
Universal Perturbation Attack Against Image Retrieval,
ICCV19(4898-4907)
IEEE DOI 2004
feature extraction, image classification, image representation, image retrieval, learning (artificial intelligence), Pipelines BibRef

Finlay, C., Pooladian, A., Oberman, A.,
The LogBarrier Adversarial Attack: Making Effective Use of Decision Boundary Information,
ICCV19(4861-4869)
IEEE DOI 2004
gradient methods, image classification, minimisation, neural nets, security of data, LogBarrier adversarial attack, Benchmark testing BibRef

Huang, Q., Katsman, I., Gu, Z., He, H., Belongie, S., Lim, S.,
Enhancing Adversarial Example Transferability With an Intermediate Level Attack,
ICCV19(4732-4741)
IEEE DOI 2004
cryptography, neural nets, optimisation, black-box transferability, source model, target models, adversarial examples, Artificial intelligence BibRef

Jandial, S., Mangla, P., Varshney, S., Balasubramanian, V.,
AdvGAN++: Harnessing Latent Layers for Adversary Generation,
NeurArch19(2045-2048)
IEEE DOI 2004
feature extraction, neural nets, MNIST datasets, CIFAR-10 datasets, attack rates, realistic images, latent features, input image, AdvGAN BibRef

Wang, C.L.[Cheng-Long], Bunel, R.[Rudy], Dvijotham, K.[Krishnamurthy], Huang, P.S.[Po-Sen], Grefenstette, E.[Edward], Kohli, P.[Pushmeet],
Knowing When to Stop: Evaluation and Verification of Conformity to Output-Size Specifications,
CVPR19(12252-12261).
IEEE DOI 2002
Vulnerability of these models to attacks aimed at changing the output size. BibRef

Modas, A.[Apostolos], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal],
SparseFool: A Few Pixels Make a Big Difference,
CVPR19(9079-9088).
IEEE DOI 2002
sparse attack. BibRef

Yao, Z.[Zhewei], Gholami, A.[Amir], Xu, P.[Peng], Keutzer, K.[Kurt], Mahoney, M.W.[Michael W.],
Trust Region Based Adversarial Attack on Neural Networks,
CVPR19(11342-11351).
IEEE DOI 2002
BibRef

Zeng, X.H.[Xiao-Hui], Liu, C.X.[Chen-Xi], Wang, Y.S.[Yu-Siang], Qiu, W.[Weichao], Xie, L.X.[Ling-Xi], Tai, Y.W.[Yu-Wing], Tang, C.K.[Chi-Keung], Yuille, A.L.[Alan L.],
Adversarial Attacks Beyond the Image Space,
CVPR19(4297-4306).
IEEE DOI 2002
BibRef

Corneanu, C.A.[Ciprian A.], Madadi, M.[Meysam], Escalera, S.[Sergio], Martinez, A.M.[Aleix M.],
What Does It Mean to Learn in Deep Networks? And, How Does One Detect Adversarial Attacks?,
CVPR19(4752-4761).
IEEE DOI 2002
BibRef

Shi, Y.C.[Yu-Cheng], Wang, S.[Siyu], Han, Y.H.[Ya-Hong],
Curls and Whey: Boosting Black-Box Adversarial Attacks,
CVPR19(6512-6520).
IEEE DOI 2002
BibRef

Liu, X.Q.[Xuan-Qing], Hsieh, C.J.[Cho-Jui],
Rob-GAN: Generator, Discriminator, and Adversarial Attacker,
CVPR19(11226-11235).
IEEE DOI 2002
BibRef

Gupta, P.[Puneet], Rahtu, E.[Esa],
MLAttack: Fooling Semantic Segmentation Networks by Multi-layer Attacks,
GCPR19(401-413).
Springer DOI 1911
BibRef

Barni, M., Kallas, K., Tondi, B.,
A New Backdoor Attack in CNNS by Training Set Corruption Without Label Poisoning,
ICIP19(101-105)
IEEE DOI 1910
Adversarial learning, security of deep learning, backdoor poisoning attacks, training with poisoned data BibRef

Zhao, W.[Wei], Yang, P.P.[Peng-Peng], Ni, R.R.[Rong-Rong], Zhao, Y.[Yao], Li, W.J.[Wen-Jie],
Cycle GAN-Based Attack on Recaptured Images to Fool both Human and Machine,
IWDW18(83-92).
Springer DOI 1905
BibRef

Wang, S., Shi, Y., Han, Y.,
Universal Perturbation Generation for Black-box Attack Using Evolutionary Algorithms,
ICPR18(1277-1282)
IEEE DOI 1812
Perturbation methods, Evolutionary computation, Sociology, Statistics, Training, Neural networks, Robustness BibRef

Xu, X.J.[Xiao-Jun], Chen, X.Y.[Xin-Yun], Liu, C.[Chang], Rohrbach, A.[Anna], Darrell, T.J.[Trevor J.], Song, D.[Dawn],
Fooling Vision and Language Models Despite Localization and Attention Mechanism,
CVPR18(4951-4961)
IEEE DOI 1812
Attacks. Prediction algorithms, Computational modeling, Neural networks, Knowledge discovery, Visualization, Predictive models, Natural languages BibRef

Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.,
Boosting Adversarial Attacks with Momentum,
CVPR18(9185-9193)
IEEE DOI 1812
Iterative methods, Robustness, Training, Data models, Adaptation models, Security BibRef
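
The momentum iterative method (MI-FGSM) introduced in this entry stabilizes the iterative sign attack, and improves the transferability of the resulting adversarial examples, by accumulating an L1-normalized gradient with decay factor $\mu$ before each step:

$g_{t+1} = \mu\, g_t + \frac{\nabla_x J(x_t, y)}{\lVert \nabla_x J(x_t, y) \rVert_1}, \qquad x_{t+1} = \mathrm{clip}\big(x_t + \alpha\, \mathrm{sign}(g_{t+1})\big).$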

Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.,
Robust Physical-World Attacks on Deep Learning Visual Classification,
CVPR18(1625-1634)
IEEE DOI 1812
Perturbation methods, Roads, Cameras, Visualization, Pipelines, Autonomous vehicles, Detectors BibRef

Narodytska, N., Kasiviswanathan, S.,
Simple Black-Box Adversarial Attacks on Deep Neural Networks,
PRIV17(1310-1318)
IEEE DOI 1709
Computer vision, Knowledge engineering, Network architecture, Neural networks, Robustness, Training BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
VAE, Variational Autoencoder.


Last update: Oct 20, 2021 at 09:45:26