Unleashing the Unseen: Harnessing Benign Datasets for Jailbreaking Large Language Models
I tiakina i:
| I whakaputaina i: | arXiv.org (Dec 19, 2024), p. n/a |
|---|---|
| Kaituhi matua: | |
| Ētahi atu kaituhi: | , , |
| I whakaputaina: |
Cornell University Library, arXiv.org
|
| Ngā marau: | |
| Urunga tuihono: | Citation/Abstract Full text outside of ProQuest |
| Ngā Tūtohu: |
Kāore He Tūtohu, Me noho koe te mea tuatahi ki te tūtohu i tēnei pūkete!
|
| Whakarāpopotonga: | Despite significant ongoing efforts in safety alignment, large language models (LLMs) such as GPT-4 and LLaMA 3 remain vulnerable to jailbreak attacks that can induce harmful behaviors, including through the use of adversarial suffixes. Building on prior research, we hypothesize that these adversarial suffixes are not mere bugs but may represent features that can dominate the LLM's behavior. To evaluate this hypothesis, we conduct several experiments. First, we demonstrate that benign features can be effectively made to function as adversarial suffixes, i.e., we develop a feature extraction method to extract sample-agnostic features from benign dataset in the form of suffixes and show that these suffixes may effectively compromise safety alignment. Second, we show that adversarial suffixes generated from jailbreak attacks may contain meaningful features, i.e., appending the same suffix to different prompts results in responses exhibiting specific characteristics. Third, we show that such benign-yet-safety-compromising features can be easily introduced through fine-tuning using only benign datasets. As a result, we are able to completely eliminate GPT's safety alignment in a blackbox setting through finetuning with only benign data. Our code and data is available at \url{https://github.com/suffix-maybe-feature/adver-suffix-maybe-features}. |
|---|---|
| ISSN: | 2331-8422 |
| Puna: | Engineering Database |