Advancing Human-Computer Interaction Systems Through Explainable and Secure AI Integration

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Udoidiok, Ifiok
Published: ProQuest Dissertations & Theses
Subjects: Computer science; Engineering; Artificial intelligence; Computer engineering
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3246414953
003 UK-CbPIL
020 |a 9798293805839 
035 |a 3246414953 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Udoidiok, Ifiok 
245 1 |a Advancing Human-Computer Interaction Systems Through Explainable and Secure AI Integration 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a As artificial intelligence (AI) systems increasingly shape how humans interact with digital environments, the need for transparency, security, and robustness in intelligent decision-making has become critical. This thesis explores how explainable and secure AI techniques can be integrated into modern human-computer interaction (HCI) systems to enhance trust, resilience, and alignment with human operators. We present three related studies, each addressing a distinct challenge in the design of human-centered AI. First, we apply explainable AI (XAI) methods, specifically Local Interpretable Model-Agnostic Explanations (LIME), to deep learning (DL)-based CAPTCHA solvers. By interpreting model attention patterns, we uncover exploitable weaknesses in text-based CAPTCHA designs and propose improvements aimed at making human verification systems more transparent. Second, we introduce a unified framework for evaluating machine learning (ML) robustness under structured data poisoning attacks. We assess model degradation across traditional classifiers, deep neural networks, Bayesian hybrids, and large language models (LLMs), using attacks such as label flipping, data corruption, and adversarial insertion. By incorporating LIME into our evaluation process, we move beyond accuracy scores to uncover attribution drift and internal failure patterns that are vital for building resilient AI systems. Third, we propose a justification-generation system powered by LLMs for real-time automation. Using the Tennessee Eastman Process (TEP) dataset, we fine-tune a compact instruction-tuned model (FLAN-T5) to produce natural-language explanations from structured sensor data. The results show that lightweight LLMs can be embedded into operator dashboards to deliver interpretable reasoning, enhance traceability, and support oversight in safety-sensitive settings. Together, these studies outline a framework for building AI systems that are not only capable but also transparent, secure, and human-aligned. This work advances the field of human-centered AI by emphasizing interpretability and robustness as foundational elements in the future of interactive intelligent systems. 
653 |a Computer science 
653 |a Engineering 
653 |a Artificial intelligence 
653 |a Computer engineering 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3246414953/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3246414953/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
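
Code Sketches

The three studies summarized in the 520 abstract each lend themselves to a short illustration. For the first study, the sketch below shows how LIME can probe an image-based CAPTCHA classifier using the Python lime package. This is a minimal sketch under assumptions: the random input image and the dummy predict_fn are placeholders, since the record does not specify the thesis's actual solver architecture or data.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Stand-in for the DL CAPTCHA solver: any callable mapping a batch of
# images (n, H, W, 3) to class probabilities (n, n_classes) will do.
def predict_fn(images: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(0)
    logits = rng.random((len(images), 10))  # 10 dummy character classes
    return logits / logits.sum(axis=1, keepdims=True)

# Placeholder 64x160 RGB "CAPTCHA" image with values in [0, 1].
captcha_image = np.random.default_rng(1).random((64, 160, 3))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    captcha_image, predict_fn, top_labels=1, num_samples=200
)

# Overlay the superpixels the model relied on; attention concentrated on a
# few glyph regions is the kind of exploitable pattern the study looks for.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img, mask)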
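
For the second study, a minimal label-flipping experiment in scikit-learn is sketched below. The synthetic dataset, RandomForestClassifier, and flip rates are illustrative assumptions; the thesis evaluates a wider range of models and attacks (data corruption, adversarial insertion) and tracks LIME attribution drift rather than accuracy alone.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def flip_labels(y: np.ndarray, rate: float, rng: np.random.Generator) -> np.ndarray:
    """Flip a fraction `rate` of binary labels to simulate a poisoning attack."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)

# Degradation curve: accuracy on clean test data as training labels are poisoned.
for rate in (0.0, 0.1, 0.2, 0.4):
    clf = RandomForestClassifier(random_state=0).fit(X_tr, flip_labels(y_tr, rate, rng))
    print(f"flip rate {rate:.0%}: test accuracy {accuracy_score(y_te, clf.predict(X_te)):.3f}")

Comparing LIME explanations for the clean and poisoned models on the same test instances would then expose the attribution drift the abstract describes.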
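
For the third study, the sketch below generates an operator-facing justification from structured sensor readings with a FLAN-T5 checkpoint via Hugging Face transformers. The google/flan-t5-small checkpoint and the prompt format are assumptions standing in for the thesis's fine-tuned TEP model; the fine-tuning step itself is not shown.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Base checkpoint as a stand-in for the fine-tuned model described in the abstract.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# Hypothetical serialization of TEP-style sensor readings into a prompt.
prompt = (
    "Sensor readings: reactor_pressure=2705 kPa (above limit), "
    "reactor_temperature=122.9 C, feed_A_flow=0.25. "
    "Explain the likely fault to the operator:"
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

A compact model like this is what makes the dashboard-embedding claim in the abstract plausible: inference fits on commodity hardware close to the operator interface.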