Unleashing the Unseen: Harnessing Benign Datasets for Jailbreaking Large Language Models

Published: arXiv.org (Dec 19, 2024)
Author: Zhao, Wei
Other Authors: Li, Zhe; Li, Yige; Sun, Jun
Publisher: Cornell University Library, arXiv.org
Other Information
Abstract: Despite significant ongoing efforts in safety alignment, large language models (LLMs) such as GPT-4 and LLaMA 3 remain vulnerable to jailbreak attacks that can induce harmful behaviors, including through the use of adversarial suffixes. Building on prior research, we hypothesize that these adversarial suffixes are not mere bugs but may represent features that can dominate the LLM's behavior. To evaluate this hypothesis, we conduct several experiments. First, we demonstrate that benign features can effectively function as adversarial suffixes, i.e., we develop a feature-extraction method that extracts sample-agnostic features from benign datasets in the form of suffixes and show that these suffixes can effectively compromise safety alignment. Second, we show that adversarial suffixes generated by jailbreak attacks may contain meaningful features, i.e., appending the same suffix to different prompts yields responses exhibiting specific shared characteristics. Third, we show that such benign-yet-safety-compromising features can easily be introduced through fine-tuning on benign datasets alone. As a result, we are able to completely eliminate GPT's safety alignment in a black-box setting by fine-tuning with only benign data. Our code and data are available at \url{https://github.com/suffix-maybe-feature/adver-suffix-maybe-features}.
ISSN:2331-8422
Source: Engineering Database
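
Illustrative sketch: the abstract's second experiment, appending one fixed, sample-agnostic suffix to many unrelated prompts and inspecting whether the responses share characteristics, could be outlined roughly as below. This is not the authors' code (their release is at the GitHub URL above); MODEL_NAME and SUFFIX are placeholder assumptions, not values from the paper.

    # Minimal sketch of the suffix-appending experiment described in the
    # abstract. MODEL_NAME and SUFFIX are placeholders (assumptions), not
    # values taken from the paper.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # any aligned chat model
    SUFFIX = "<extracted-benign-feature-suffix>"  # hypothetical extracted suffix

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

    # Unrelated benign prompts; the same suffix is appended to each one
    # (sample-agnostic), and the responses are then compared for shared
    # characteristics, as the abstract describes.
    prompts = [
        "Explain how vaccines work.",
        "Write a short story about a lighthouse.",
        "Summarize the plot of Hamlet.",
    ]

    for prompt in prompts:
        messages = [{"role": "user", "content": prompt + " " + SUFFIX}]
        input_ids = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
        response = tokenizer.decode(output[0][input_ids.shape[-1]:],
                                    skip_special_tokens=True)
        print(f"PROMPT: {prompt}\nRESPONSE: {response}\n" + "-" * 40)

Greedy decoding (do_sample=False) is used here so that any shared traits across responses reflect the suffix rather than sampling noise; the paper's own evaluation procedure may differ.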