Learning-Directed Systems with Safety and Robustness Certificates

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Yang, Chenxi
Published: ProQuest Dissertations & Theses
Subjects: Computer science; Computer engineering; Artificial intelligence
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3284362928
003 UK-CbPIL
020 |a 9798270229504 
035 |a 3284362928 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Yang, Chenxi 
245 1 |a Learning-Directed Systems with Safety and Robustness Certificates 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a Learning-driven systems are increasingly being deployed in critical applications. However, their inherent unpredictability often results in unforeseen behavior, raising concerns about reliability. Formal verification techniques can address these issues by providing assurances about system behavior. This dissertation introduces the "verification in the learning loop" framework, a novel approach designed to improve the formally certified performance of learning-driven systems at training convergence. The framework tackles real-world challenges by combining formal verification with learning-based methods, enabling analysis of systems with neural network components, and adapting core algorithms to practical settings, such as networked systems. First, we consider the problem of learning a safe neurosymbolic program, assuming the environment is known. We introduce Differentiable Symbolic Execution (DSE), an efficient framework that integrates verification signals into the machine learning process. DSE leverages symbolic execution to construct safety losses and employs a generalized REINFORCE estimator to backpropagate gradients through non-differentiable program operations. By synergizing learning algorithms with differentiable symbolic analysis, DSE achieves significantly safer neurosymbolic programs compared to existing approaches. Next, we study safety guarantees for control tasks in reinforcement learning, extending our framework to unknown environments. While reasoning over environments provides richer properties, handling unknown environments introduces additional challenges. To address this, we propose CAROL, a model-based reinforcement learning framework that uses a learned model of the environment and an abstract interpreter to create a differentiable robustness signal. This signal enables the training of policies with provable adversarial robustness. Our experiments demonstrate that CAROL surpasses traditional reinforcement learning algorithms that do not incorporate verification into the learning loop. Finally, we address the practical challenges of deploying learning-driven systems in real-world settings, where reliability is critical. As a case study, we focus on learning-based congestion control and introduce C3, a framework that optimizes network performance while providing formal performance guarantees. C3 integrates learning with formal verification through a novel property-driven training loop, enabling the controller to adapt to diverse network conditions without sacrificing worst-case reliability. This work demonstrates the feasibility of bridging learning and verification at scale, and highlights how the "verification in the loop" techniques can lead to robust, deployable systems in safety-critical domains. Through these innovations, which integrate verification into the learning loop while addressing end-to-end system requirements, this dissertation advances the field of learning-directed systems by providing techniques that offer formal guarantees without compromising performance. 
653 |a Computer science 
653 |a Computer engineering 
653 |a Artificial intelligence 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3284362928/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3284362928/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch
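
Illustrative sketch (not taken from the record above): the abstract in field 520 describes DSE as constructing a safety loss and pushing its gradient through non-differentiable program operations with a generalized REINFORCE estimator. The Python/PyTorch sketch below shows, under stated assumptions, one minimal way such a score-function surrogate can be set up for a toy program with a single discrete branch. Every name in it (SafetyController, unsafe_penalty, the brake/accelerate dynamics) is hypothetical and does not come from the dissertation.

import torch
import torch.nn as nn

class SafetyController(nn.Module):
    """Toy neural controller whose output gates a discrete program branch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, state):
        return self.net(state)  # raw logit for the branch condition

def unsafe_penalty(state):
    # Hypothetical safety loss: penalize states whose first coordinate exceeds 1.
    return torch.relu(state[:, 0] - 1.0).mean()

def surrogate_safety_loss(controller, state):
    # The branch "if condition: brake else: accelerate" is non-differentiable,
    # so we sample it from a Bernoulli and use the score-function (REINFORCE)
    # estimator: grad ~= E[penalty * d/dtheta log p(branch | theta)].
    logits = controller(state).squeeze(-1)
    branch = torch.distributions.Bernoulli(logits=logits)
    taken = branch.sample()  # 1 = "brake", 0 = "accelerate"; no gradient flows here
    step = torch.where(taken.unsqueeze(-1).bool(),
                       torch.tensor([-0.5, 0.0]),
                       torch.tensor([0.5, 0.0]))
    penalty = unsafe_penalty(state + step)
    # Score-function surrogate: the penalty acts as a constant weight on the log-prob.
    return penalty.detach() * branch.log_prob(taken).mean() + penalty

if __name__ == "__main__":
    controller = SafetyController()
    opt = torch.optim.Adam(controller.parameters(), lr=1e-2)
    for _ in range(200):
        state = torch.rand(64, 2) * 2.0  # random start states in [0, 2]^2
        loss = surrogate_safety_loss(controller, state)
        opt.zero_grad()
        loss.backward()
        opt.step()

Note that, per the abstract, DSE derives its safety losses from symbolic execution over program paths rather than from sampled rollouts as above; the sketch only illustrates the score-function trick for a non-differentiable branch.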