Adversarial Robustness in Advanced Machine Learning Models Integrating Graph Neural Networks and Large Language Models

Saved in:
Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Nazzal, Mahmoud
Published: ProQuest Dissertations & Theses
Subjects:
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3237586654
003 UK-CbPIL
020 |a 9798290935393 
035 |a 3237586654 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Nazzal, Mahmoud 
245 1 |a Adversarial Robustness in Advanced Machine Learning Models Integrating Graph Neural Networks and Large Language Models 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a Artificial intelligence (AI) has achieved remarkable performance across various domains. In many real-world applications, data takes relational forms, such as graphs and networks, or sequential forms, such as text and time series. As AI evolves, specialized models have emerged to handle these structures: Graph Neural Networks (GNNs) for relational mining and Large Language Models (LLMs) for sequential understanding. Despite their success, these models face challenges in security, robustness, and interpretability. GNNs excel at relational reasoning but are vulnerable to adversarial manipulation and lack interpretability, while LLMs are strong in linguistic reasoning and generalization yet struggle with relational data and carry inherent security risks. This dissertation introduces a unified framework that integrates GNNs and LLMs to address security-critical challenges by combining their complementary strengths. The integration assumes a frozen LLM, eliminating the need for expensive fine-tuning or exposure of internal model parameters and thereby allowing the use of state-of-the-art LLMs. The framework is designed to accommodate diverse data modalities across a wide range of AI applications. Three core contributions at the intersection of GNNs and LLMs for security-critical applications are proposed. First, the dissertation introduces a novel inference-time, multi-instance adversarial attack that exposes vulnerabilities in GNN-based detection systems. By jointly optimizing perturbations across multiple nodes in malicious domain graphs, the attack achieves over 80% evasion success on real-world datasets without access to model internals, formalizing the notion of multi-instance attacks against GNNs. Second, a GNN-LLM integration is developed for optimizing prompts in LLM-based source code generation. Generative GNNs efficiently navigate the prompt space of frozen LLMs, guiding them to generate secure and functional code in large, non-differentiable search spaces where gradient-based methods are inapplicable. Third, a predictive GNN iteratively guides an LLM to generate conversational contexts that enable context-based jailbreaking attacks on LLMs, revealing a new form of jailbreak that targets the context of the interaction rather than the prompt itself and raising critical concerns for LLM safety. Collectively, these contributions enable secure and robust GNN-LLM integration, improving deployment readiness and guiding future research on AI security with minimal impact on performance. 
653 |a Computer engineering 
653 |a Computer science 
653 |a Artificial intelligence 
653 |a Information technology 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3237586654/abstract/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3237586654/fulltextPDF/embedded/H09TXR3UUZB2ISDL?source=fedsrch