20.4.3.3.10 Attacks on Vision-Language Models

Chapter Contents
Vision-Language Model. Attacks. Defense.
See also Countering Adversarial Attacks, Defense.
See also Adversarial Attacks.

Liang, J.W.[Jia-Wei], Liang, S.Y.[Si-Yuan], Liu, A.S.[Ai-Shan], Cao, X.C.[Xiao-Chun],
VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models,
IJCV(133), No. 7, July 2025, pp. 3994-4013.
Springer DOI 2506
BibRef

Fu, T.C.[Ting-Chao], Zhang, J.H.[Jin-Hong], Li, F.X.[Fan-Xiao], Wei, P.[Ping], Zeng, X.L.[Xiang-Long], Zhou, W.[Wei],
Multimodal alignment augmentation transferable attack on vision-language pre-training models,
PRL(191), 2025, pp. 131-137.
Elsevier DOI 2504
Adversarial example, Vision-language pre-training model, Model vulnerability, Transfer-based attack BibRef

Jia, X.J.[Xiao-Jun], Gao, S.S.[Sen-Sen], Guo, Q.[Qing], Qin, S.[Simeng], Ma, K.[Ke], Huang, Y.H.[Yi-Hao], Liu, Y.[Yang], Tsang, I.W.[Ivor W.], Cao, X.C.[Xiao-Chun],
Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack,
PAMI(47), No. 10, October 2025, pp. 8489-8505.
IEEE DOI 2510
Semantics, Optimization, Trajectory, Pipelines, Overfitting, Data augmentation, Visualization, Training, Perturbation methods, vision-language pre-training (VLP) BibRef

Qian, Y.G.[Ya-Guan], Kong, Y.X.[Ya-Xin], Bao, Q.Q.[Qi-Qi], Gu, Z.Q.[Zhao-Quan], Wang, B.[Bin], Ji, S.[Shouling], Zhang, J.P.[Jian-Ping], Lei, Z.[Zhen],
Individual and Common Attack: Enhancing Transferability in VLP Models Through Modal Feature Exploitation,
IP(35), 2026, pp. 1082-1095.
IEEE DOI 2602
Vision-Language Pretrained, Perturbation methods, Closed box, Visualization, Overfitting, Glass box, Computational modeling, Semantics, Feature extraction, model robustness BibRef

Kuurila-Zhang, H.[Hui], Chen, H.Y.[Hao-Yu], Zhao, G.Y.[Guo-Ying],
Evaluating the Adversarial Robustness of Vision-Language Models for Facial Expression Recognition,
IEEE_Int_Sys(41), No. 1, January 2026, pp. 105-112.
IEEE DOI 2602
Visualization, Facial expressions, Face recognition, Closed box, Glass box, Question answering (information retrieval), Adversarial machine learning BibRef

Liu, C.H.[Chao-Hu], Wang, Y.[Yubo], Cao, H.Y.[Hao-Yu], Liu, B.[Bing], Jiang, D.Q.[De-Qiang],
Evaluating the Adversarial Robustness of Vision-Language Models via Internal Feature Perturbations,
CirSysVideo(36), No. 3, March 2026, pp. 3938-3950.
IEEE DOI 2603
Visualization, Perturbation methods, Robustness, Uncertainty, Feature extraction, Vectors, Optimization, Information entropy BibRef

Lu, Z.[Zimu], Xu, N.[Ning], Tian, H.[Hongshuo], Wang, L.J.[Lan-Jun], Liu, A.A.[An-An],
Medical VLP Model Is Vulnerable: Toward Multimodal Adversarial Attack on Large Medical Vision-Language Models,
CirSysVideo(36), No. 2, February 2026, pp. 2478-2491.
IEEE DOI 2602
Medical diagnostic imaging, Terminology, Robustness, Lungs, Complexity theory, Visualization, Unified modeling language, adversarial attack BibRef

Wang, B.[Bing], Qian, S.S.[Sheng-Sheng], Xu, C.S.[Chang-Sheng],
Invisible Backdoor Attack With Siamese Tuning on Pre-Trained Vision-Language Models,
MultMed(28), 2026, pp. 1663-1676.
IEEE DOI 2603
Training, Tuning, Data models, Artificial intelligence, Security, Frequency-domain analysis, Visualization, Training data, artificial intelligence security BibRef

Liu, D.Z.[Dai-Zong], Liu, W.Q.[Wang-Qin], Cai, X.W.[Xiao-Wen], Zhou, P.[Pan], Guan, R.W.[Run-Wei], Qu, X.Y.[Xiao-Ye], Du, B.[Bo],
Generating transferable attacks across large vision-language models using adversarial deformation learning,
PR(176), 2026, pp. 113194.
Elsevier DOI 2603
Large vision-language model, Adversarial attack, Transferability, Deformation learning BibRef


Cao, Y.[Yue], Xing, Y.[Yun], Zhang, J.[Jie], Lin, D.[Di], Zhang, T.W.[Tian-Wei], Tsang, I.[Ivor], Liu, Y.[Yang], Guo, Q.[Qing],
SceneTAP: Scene-Coherent Typographic Adversarial Planner against Vision-Language Models in Real-World Environments,
CVPR25(25050-25059)
IEEE DOI Code:
WWW Link. 2508
Printing, Visualization, Codes, Cognition, Planning, physical adversarial attack, typographic attack, llm agent BibRef

Xie, P.[Peng], Bie, Y.[Yequan], Mao, J.[Jianda], Song, Y.Q.[Yang-Qiu], Wang, Y.[Yang], Chen, H.[Hao], Chen, K.[Kani],
Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks,
CVPR25(14679-14689)
IEEE DOI 2508
Correlation, Computational modeling, Semantics, Closed box, Robustness, Natural language processing, Safety, robustness BibRef

Zhang, J.M.[Jia-Ming], Ye, J.[Junhong], Ma, X.[Xingjun], Li, Y.[Yige], Yang, Y.F.[Yun-Fan], Chen, Y.H.[Yun-Hao], Sang, J.[Jitao], Yeung, D.Y.[Dit-Yan],
Anyattack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models,
CVPR25(19900-19909)
IEEE DOI 2508
Limiting, Foundation models, Scalability, Prevention and mitigation, Vectors, Internet, Security, self-supervised BibRef

Liang, S.Y.[Si-Yuan], Liang, J.W.[Jia-Wei], Pang, T.Y.[Tian-Yu], Du, C.[Chao], Liu, A.[Aishan], Zhu, M.L.[Ming-Li], Cao, X.C.[Xiao-Chun], Tao, D.C.[Da-Cheng],
Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift,
CVPR25(9477-9486)
IEEE DOI 2508
Training, Visualization, Semantics, Predictive models, Data models, Robustness, Security, Tuning, Testing, domain shift, backdoor attack, large vision-language model BibRef

Fime, A.A.[Awal Ahmed], Hossain, M.Z.[Md Zarif], Zaman, S.[Saika], Shahid, A.R.[Abdur R], Imteaj, A.[Ahmed],
Towards Trustworthy Autonomous Vehicles with Vision-Language Models under Targeted and Untargeted Adversarial Attacks,
FaDE-TCV25(619-628)
IEEE DOI 2512
Accuracy, Perturbation methods, Transportation, Reliability engineering, Robustness, Cognition, Safety, targeted and untargeted adversarial attack BibRef

Chen, L.[Long], Chen, Y.L.[Yu-Ling], Luo, Y.[Yun], Dou, H.[Hui], Zhong, X.Y.[Xin-Yang],
Attention-Guided Hierarchical Defense for Multimodal Attacks in Vision-Language Models,
TrustworthyOpen25(1598-1608)
IEEE DOI 2512
Training, Cross layer design, Computational modeling, Semantics, Collaboration, Distortion, Robustness, adversarial attack, pre-trained vision-language models BibRef

Xing, S.[Songlong], Zhao, Z.Y.[Zheng-Yu], Sebe, N.[Nicu],
CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards Zero-shot Adversarial Robustness of CLIP,
CVPR25(15172-15182)
IEEE DOI 2508
Adaptation models, Codes, Accuracy, Perturbation methods, Computational modeling, Robustness, Pattern recognition BibRef

Ishmam, A.M.[Alvi Md], Thomas, C.[Christopher],
Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-Grained Knowledge Alignment,
CVPR24(24820-24830)
IEEE DOI 2410
Training, Visualization, Correlation, Computational modeling, Large language models, Semantics, Adversarial attack and defense, Vision language model BibRef

Wang, Y.[Yu], Liu, X.G.[Xiao-Geng], Li, Y.[Yu], Chen, M.[Muhao], Xiao, C.W.[Chao-Wei],
Adashield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting,
ECCV24(XX: 77-94).
Springer DOI 2412
BibRef

Gao, S.[Sensen], Jia, X.J.[Xiao-Jun], Ren, X.H.[Xu-Hong], Tsang, I.[Ivor], Guo, Q.[Qing],
Boosting Transferability in Vision-language Attacks via Diversification Along the Intersection Region of Adversarial Trajectory,
ECCV24(LVII: 442-460).
Springer DOI 2412
BibRef

Bai, J.[Jiawang], Gao, K.[Kuofeng], Min, S.B.[Shao-Bo], Xia, S.T.[Shu-Tao], Li, Z.F.[Zhi-Feng], Liu, W.[Wei],
BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP,
CVPR24(24239-24250)
IEEE DOI Code:
WWW Link. 2410
Learning systems, Image recognition, Computational modeling, Training data, Optimization methods, Predictive models BibRef

Liang, S.Y.[Si-Yuan], Zhu, M.L.[Ming-Li], Liu, A.[Aishan], Wu, B.Y.[Bao-Yuan], Cao, X.C.[Xiao-Chun], Chang, E.C.[Ee-Chien],
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning,
CVPR24(24645-24654)
IEEE DOI Code:
WWW Link. 2410
Resistance, Visualization, Ethics, Codes, Semantics, Contrastive learning, Multimodal Contrastive Learning, Backdoor Attacks BibRef

Lu, D.[Dong], Wang, Z.Q.[Zhi-Qiang], Wang, T.[Teng], Guan, W.[Weili], Gao, H.C.[Hong-Chang], Zheng, F.[Feng],
Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models,
ICCV23(102-111)
IEEE DOI Code:
WWW Link. 2401
BibRef

Chapter on Implementations and Applications, Databases, QBIC, Video Analysis, Hardware and Software, Inspection continues in
Large Language Models for Vision, LLM, LVLM.


Last update: Mar 28, 2026 at 17:09:41