Adversarial Attacks and Robustness in Deep Neural Networks for Sound Event Detection
| Published in: | PQDT - Global (2025) |
|---|---|
| Main author: | |
| Published: | ProQuest Dissertations & Theses |
| Subjects: | |
| Online access: | Citation/Abstract; Full Text - PDF; Full text outside of ProQuest |
| Abstract: | As the use of Sound Event Detection (SED) systems expands into real-world and safety-critical applications, ensuring their robustness against malicious manipulation is becoming increasingly important. This thesis explores the vulnerability of deep learning models employed in SED to black-box adversarial attacks and examines strategies to enhance their robustness. From the attacker's perspective, two optimization-based attacks, Particle Swarm Optimization (PSO) and Differential Evolution (DE), are employed to generate adversarial audio samples. To maintain imperceptibility and control the additive noise, regularization terms are employed and experiments are performed under varying signal-to-noise ratios (SNRs). The attacks were evaluated across a broad spectrum of model architectures, including convolutional neural networks (CNNs) with and without Global Average Pooling, ResNet-based models such as AudioCLIP, and transformer-based architectures such as PaSST. Fine-tuning was applied to adapt pre-trained models such as AudioCLIP to the specific distributions of UrbanSound8K and ESC-50, allowing consistent evaluation across datasets. Experimental results show that the fine-tuned AudioCLIP model is highly susceptible to attacks, while transformer-based models such as PaSST demonstrate greater robustness. To mitigate the effectiveness of the attacks, a denoising autoencoder is integrated into each model's head. The same technique is also used to detect adversarial examples before they are passed through the models: by analyzing the divergences and distances between the original and reconstructed inputs, it is possible to determine whether a sample has been manipulated. The results demonstrate that the most effective attacks were achieved with the PSO algorithm, reaching a maximum success rate of 76% on the fine-tuned AudioCLIP model at a target SNR of 5 dB. As the SNR constraint increased to 15–20 dB, making perturbations less perceptible to human listeners, attack success rates dropped, stabilizing around 40–50% for vulnerable models and falling below 20% for more robust ones, confirming the trade-off between adversarial effectiveness and imperceptibility. The evaluation with the autoencoder-based defense showed a consistent reduction of 5–10% in attack success rate across all models, without noticeably affecting the models' classification accuracy on clean inputs, making it a simple yet effective defensive approach. Additionally, the detection experiment based on prediction consistency before and after autoencoder denoising achieved a precision of 1.0 but a recall of approximately 34%, indicating that it reliably flags adversarial samples when it fires but misses a portion of attacks, suggesting future work to increase its sensitivity. These findings highlight the urgent need to enhance the robustness of neural networks, particularly in safety-critical applications where adversarial manipulation could have serious consequences. The integration of a denoising autoencoder proved effective, consistently reducing attack success rates without degrading model performance, with noticeable benefits across both CNN-based models and transformer-based architectures such as PaSST. Overall, the results emphasize the crucial role of designing inherently robust model architectures and employing strategic preprocessing techniques to strengthen SED systems against adversarial threats. (Illustrative sketches of the attack objective and the autoencoder-based defense described here follow this record.) |
| ISBN: | 9798290661575 |
| Source: | ProQuest Dissertations & Theses Global |
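
The abstract describes black-box attacks built on Particle Swarm Optimization and Differential Evolution, with regularization terms that keep the additive noise near a target signal-to-noise ratio. The record does not give the thesis's actual objective function, so the snippet below is only a minimal sketch of how such an SNR-regularized fitness could be set up; the random linear scorer standing in for the SED model, the low-dimensional noise basis, and the constants `TARGET_SNR_DB` and `LAMBDA_SNR` are assumptions made for illustration.

```python
"""Hypothetical sketch of an SNR-regularised black-box attack objective.

The "model" below is a random stub standing in for an SED classifier
(e.g. a CNN, AudioCLIP or PaSST); the loss and weights are illustrative
choices, not the thesis's settings.
"""
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

N_SAMPLES = 4000          # toy 0.25 s clip at 16 kHz
N_CLASSES = 10            # e.g. UrbanSound8K has 10 classes
TARGET_SNR_DB = 15.0      # imperceptibility constraint (assumed value)
LAMBDA_SNR = 1.0          # weight of the SNR penalty (assumed value)

# Stand-in "model": a fixed random linear scorer over the waveform.
W = rng.normal(size=(N_CLASSES, N_SAMPLES)) / np.sqrt(N_SAMPLES)

def predict_scores(x):
    return W @ x

def snr_db(clean, delta):
    """Signal-to-noise ratio of the additive perturbation, in dB."""
    return 10.0 * np.log10(np.sum(clean ** 2) / (np.sum(delta ** 2) + 1e-12))

def fitness(params, clean, true_label, basis):
    """Lower is better: encourage misclassification, penalise low SNR."""
    delta = basis @ params              # low-dimensional perturbation
    scores = predict_scores(clean + delta)
    margin = scores[true_label] - np.max(np.delete(scores, true_label))
    snr_penalty = max(0.0, TARGET_SNR_DB - snr_db(clean, delta))
    return margin + LAMBDA_SNR * snr_penalty

# Toy clean clip and its (stub) label.
clean = rng.normal(size=N_SAMPLES) * 0.1
true_label = int(np.argmax(predict_scores(clean)))

# Optimise a small set of noise-basis coefficients instead of raw samples,
# so Differential Evolution stays tractable in this sketch.
N_BASIS = 8
basis = rng.normal(size=(N_SAMPLES, N_BASIS)) * 0.01

result = differential_evolution(
    fitness, bounds=[(-1.0, 1.0)] * N_BASIS,
    args=(clean, true_label, basis), maxiter=30, seed=0, polish=False)

delta = basis @ result.x
adv_label = int(np.argmax(predict_scores(clean + delta)))
print(f"true={true_label} adversarial={adv_label} "
      f"SNR={snr_db(clean, delta):.1f} dB")
```

In a real attack, `predict_scores` would wrap query access to the target model, and PSO could be substituted for Differential Evolution with the same fitness function.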
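
The defense and the detector described in the abstract both rely on a denoising autoencoder: the reconstructed input is fed to the classifier, and a sample is flagged as adversarial when the prediction or the distance between the original and the reconstruction diverges. The PyTorch sketch below shows one possible arrangement; the architecture, the untrained stand-in classifier, and the `mse_threshold` value are assumptions, and training on clean audio is omitted.

```python
"""Hypothetical sketch of the autoencoder-based defense and detector."""
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Tiny 1-D convolutional autoencoder over raw waveforms."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4, padding=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=8, stride=4, padding=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class DefendedClassifier(nn.Module):
    """Pass the (possibly adversarial) input through the AE first."""
    def __init__(self, autoencoder, classifier):
        super().__init__()
        self.autoencoder = autoencoder
        self.classifier = classifier

    def forward(self, x):
        return self.classifier(self.autoencoder(x))

def flag_adversarial(classifier, autoencoder, x, mse_threshold=0.05):
    """Detection sketch: compare predictions before/after denoising and
    the reconstruction error; either signal can flag a manipulated clip."""
    with torch.no_grad():
        recon = autoencoder(x)
        pred_raw = classifier(x).argmax(dim=-1)
        pred_den = classifier(recon).argmax(dim=-1)
        mse = ((x - recon) ** 2).mean(dim=(-2, -1))
    return (pred_raw != pred_den) | (mse > mse_threshold)

if __name__ == "__main__":
    # Untrained stand-ins: a real setup would train the AE on clean audio
    # and plug in the actual SED model (CNN, AudioCLIP, PaSST, ...).
    n_samples = 16000
    classifier = nn.Sequential(nn.Flatten(), nn.Linear(n_samples, 10))
    autoencoder = DenoisingAE()
    defended = DefendedClassifier(autoencoder, classifier)

    x = torch.randn(2, 1, n_samples)             # batch of two toy clips
    print(defended(x).shape)                      # torch.Size([2, 10])
    print(flag_adversarial(classifier, autoencoder, x))
```

The high precision but limited recall reported in the abstract is consistent with a detector of this kind: a prediction flip after denoising is strong evidence of manipulation, but subtle perturbations can survive reconstruction without changing the predicted class.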