20.4.3.3.5 Jailbreaking Language Models

Chapter Contents
Vision Language Model. Jailbreak. Vision-Language Model. Large Language Models. 2509

Wang, Y.Z.[You-Ze], Hu, W.B.[Wen-Bo], Dong, Y.P.[Yin-Peng], Liu, J.[Jing], Zhang, H.W.[Han-Wang], Hong, R.C.[Ri-Chang],
Align Is Not Enough: Multimodal Universal Jailbreak Attack Against Multimodal Large Language Models,
CirSysVideo(35), No. 6, June 2025, pp. 5475-5488.
IEEE DOI 2506
Safety, Large language models, Watermarking, Robustness, Biological system modeling, Visualization, jailbreak attack BibRef


Hossain, M.Z.[Md Zarif], Imteaj, A.[Ahmed],
SLADE: Shielding against Dual Exploits in Large Vision-Language Models,
CVPR25(24244-24254)
IEEE DOI 2508
Training, Visualization, Perturbation methods, Computational modeling, Semantics, Contrastive learning, Coherence, jailbreak attacks BibRef

Jeong, J.[Joonhyun], Bae, S.[Seyun], Jung, Y.[Yeonsung], Hwang, J.[Jaeryong], Yang, E.[Eunho],
Playing the Fool: Jailbreaking LLMs and Multimodal LLMs with Out-of-Distribution Strategy,
CVPR25(29937-29946)
IEEE DOI 2508
Code: WWW Link.
Ethics, Visualization, Uncertainty, Codes, Large language models, Safety, Standards, Research and development, out-of-distribution BibRef

Yang, Z.P.[Zuo-Peng], Fan, J.[Jiluan], Yan, A.[Anli], Gao, E.[Erdun], Lin, X.[Xin], Li, T.[Tao], Mo, K.[Kanghua], Dong, C.[Changyu],
Distraction is All You Need for Multimodal Large Language Model Jailbreaking,
CVPR25(9467-9476)
IEEE DOI 2508
Code: WWW Link.
Visualization, Codes, Large language models, Buildings, Complexity theory, Safety, Pattern recognition BibRef

Hao, S.Y.[Shu-Yang], Hooi, B.[Bryan], Liu, J.[Jun], Chang, K.W.[Kai-Wei], Huang, Z.[Zi], Cai, Y.J.[Yu-Jun],
Exploring Visual Vulnerabilities via Multi-Loss Adversarial Search for Jailbreaking Vision-Language Models,
CVPR25(19890-19899)
IEEE DOI 2508
Visualization, Systematics, Image synthesis, Semantics, Collaboration, Closed box, Safety, Security, Optimization, vlm, jailbreak BibRef

Wang, H.[Han], Wang, G.[Gang], Zhang, H.[Huan],
Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbreaks,
CVPR25(29947-29957)
IEEE DOI 2508
Code: WWW Link.
Training, Adaptation models, Visualization, Costs, Data preprocessing, Resists, Vectors, Safety, vlm, adversarial attack, defense BibRef

Ghosal, S.S.[Soumya Suvra], Chakraborty, S.[Souradip], Singh, V.[Vaibhav], Guan, T.R.[Tian-Rui], Wang, M.[Mengdi], Beirami, A.[Ahmad], Huang, F.[Furong], Velasquez, A.[Alvaro], Manocha, D.[Dinesh], Bedi, A.S.[Amrit Singh],
Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment,
CVPR25(25038-25049)
IEEE DOI 2508
Training, Computational modeling, Prevention and mitigation, Large language models, Benchmark testing, Mathematical models, multi-modal large language model BibRef

Xiang, Y.L.[Yong-Li], Hong, Z.M.[Zi-Ming], Yao, L.[Lina], Wang, D.D.[Da-Dong], Liu, T.L.[Tong-Liang],
Jailbreaking the Non-Transferable Barrier via Test-Time Data Disguising,
CVPR25(30671-30681)
IEEE DOI 2508
Code: WWW Link.
Codes, Accuracy, Computational modeling, Closed box, Intellectual property, Data models, Security, Glass box BibRef

Chen, J.X.[Jun-Xi], Dong, J.H.[Jun-Hao], Xie, X.H.[Xiao-Hua],
Mind the Trojan Horse: Image Prompt Adapter Enabling Scalable and Deceptive Jailbreaking,
CVPR25(23785-23794)
IEEE DOI 2508
Code: WWW Link.
Training, Threat modeling, Image synthesis, Text to image, Diffusion models, Controllability, Security, Trojan horses, jailbreaking BibRef

Li, Y.F.[Yi-Fan], Guo, H.[Hangyu], Zhou, K.[Kun], Zhao, W.X.[Wayne Xin], Wen, J.R.[Ji-Rong],
Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models,
ECCV24(LXXIII: 174-189).
Springer DOI 2412
BibRef

Chapter on Implementations and Applications, Databases, QBIC, Video Analysis, Hardware and Software, Inspection continues in
Video Question Answering, Movies, Spatio-Temporal, Query, VQA.


Last update: Sep 10, 2025 at 12:00:25