Unleashing the Unseen: Harnessing Benign Datasets for Jailbreaking Large Language Models

Bibliographic Details
Publication date: arXiv.org (Dec 19, 2024), p. n/a
First author: Zhao, Wei
Other authors: Li, Zhe; Li, Yige; Sun, Jun
Publisher:
Cornell University Library, arXiv.org
Subjects:
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3147571297
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3147571297 
045 0 |b d20241219 
100 1 |a Zhao, Wei 
245 1 |a Unleashing the Unseen: Harnessing Benign Datasets for Jailbreaking Large Language Models 
260 |b Cornell University Library, arXiv.org  |c Dec 19, 2024 
513 |a Working Paper 
520 3 |a Despite significant ongoing efforts in safety alignment, large language models (LLMs) such as GPT-4 and LLaMA 3 remain vulnerable to jailbreak attacks that can induce harmful behaviors, including through the use of adversarial suffixes. Building on prior research, we hypothesize that these adversarial suffixes are not mere bugs but may represent features that can dominate the LLM's behavior. To evaluate this hypothesis, we conduct several experiments. First, we demonstrate that benign features can be effectively made to function as adversarial suffixes, i.e., we develop a feature-extraction method that extracts sample-agnostic features from a benign dataset in the form of suffixes and show that these suffixes may effectively compromise safety alignment. Second, we show that adversarial suffixes generated from jailbreak attacks may contain meaningful features, i.e., appending the same suffix to different prompts results in responses exhibiting specific characteristics. Third, we show that such benign-yet-safety-compromising features can be easily introduced through fine-tuning using only benign datasets. As a result, we are able to completely eliminate GPT's safety alignment in a black-box setting through fine-tuning with only benign data. Our code and data are available at \url{https://github.com/suffix-maybe-feature/adver-suffix-maybe-features}. 
653 |a Feature extraction 
653 |a Datasets 
653 |a Alignment 
653 |a Large language models 
700 1 |a Li, Zhe 
700 1 |a Li, Yige 
700 1 |a Sun, Jun 
773 0 |t arXiv.org  |g (Dec 19, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3147571297/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2410.00451
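
A minimal sketch of the suffix-appending step described in the abstract (MARC field 520): a single, sample-agnostic suffix is appended to several unrelated prompts and the responses are collected for comparison of shared characteristics. The model name and the SUFFIX string below are placeholders, not values from the paper; a real suffix would be produced by the feature-extraction method in the linked repository, and the full attack (extraction and fine-tuning) is not reproduced here.

```python
# Illustrative sketch only (not the authors' code): append one fixed,
# sample-agnostic suffix to several prompts and inspect the responses.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed; any chat LLM works
SUFFIX = " <placeholder suffix extracted from a benign dataset>"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

prompts = [
    "Describe your favourite holiday.",
    "Summarise the plot of a detective novel.",
]

for prompt in prompts:
    # The same suffix is appended to every prompt; per the abstract, responses
    # should then exhibit a characteristic driven by the suffix itself.
    inputs = tokenizer(prompt + SUFFIX, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```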