Using Language for Efficient, Explainable, and Interactive Machine Learning

Saved in:
Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Radhakrishnan Menon, Rakesh
Published:
ProQuest Dissertations & Theses
Subjects:
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3205838271
003 UK-CbPIL
020 |a 9798315712602 
035 |a 3205838271 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Radhakrishnan Menon, Rakesh 
245 1 |a Using Language for Efficient, Explainable, and Interactive Machine Learning 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a Language is fundamental in human learning and pedagogy. We can use language to convey complex concepts, resolve ambiguities, and refine our understanding. In contrast, modern AI systems – especially large-scale deep learning models – rely on large datasets to deliver high predictive accuracy. Yet their opaque decision-making processes limit interpretability and trust. In this thesis, we infuse language into key stages of the machine learning workflow, demonstrating how language explanations and interactions yield more efficient, transparent, and adaptive AI systems. First, we explore natural language explanations as an alternative to large-scale dataset annotations for classification. We introduce ExEnt, an entailment-based method that leverages these explanations for zero-shot categorization of novel concepts. To systematically assess model performance in this setting, we propose CLUES, a benchmark pairing structured classification tasks with human-written explanations for rigorous evaluation. Next, we use natural language to explain broader patterns in how trained classifiers make decisions. We introduce MaNtLE, a model-agnostic framework that generates rationales describing a classifier’s reasoning across different inputs. Unlike traditional attribution-based methods, MaNtLE explanations are more faithful to the classifier’s decision process and easier for users to understand. Beyond explanations, we also explore how language can diagnose and rectify systematic errors in text classifiers. We introduce DiScErN, a framework that detects and precisely describes error-prone data groups using natural language. These descriptions then guide targeted data augmentation, improving model performance in underperforming regions. Finally, we study how language interactions can be used to actively learn new concepts. Here, we present INTERACT, an interactive learning framework enabling large language models to acquire and refine concepts through question-driven dialogues with experts. Empirically, we demonstrate that language models achieve strong performance in just a few dialogue turns, highlighting the efficiency and effectiveness of interactive learning. Collectively, these contributions demonstrate that language – in the form of explanations or interactive queries – offers a versatile mechanism for guiding machine learning models. By embedding language into the machine learning pipeline, we enable AI systems that are not only adaptable but also interpretable, trustworthy, and efficient in real-world settings. 
653 |a Computer science 
653 |a Computer engineering 
653 |a Artificial intelligence 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3205838271/abstract/embedded/09EF48XIB41FVQI7?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3205838271/fulltextPDF/embedded/09EF48XIB41FVQI7?source=fedsrch