From Neural Networks to Large Language Models: Innovations in Financial AI, Mathematical Reasoning, and Structured Data Representation

Saved in:
Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Ye, Junyi
Published:
ProQuest Dissertations & Theses
Subjects:
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3237577613
003 UK-CbPIL
020 |a 9798290935249 
035 |a 3237577613 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Ye, Junyi 
245 1 |a From Neural Networks to Large Language Models: Innovations in Financial AI, Mathematical Reasoning, and Structured Data Representation 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a This dissertation explores the evolution and application of artificial intelligence techniques across three critical domains: financial modeling, mathematical reasoning, and structured data analysis. The dissertation presents seven research projects that chart a progression from specialized neural architectures to sophisticated large language models (LLMs), contributing novel methodologies and frameworks at each stage. In the financial domain, the research first introduces TS-Mixer, an MLP-based architecture for time-series forecasting that captures both feature relationships and temporal dependencies through a simple yet effective design, outperforming more complex models in S&P 500 index prediction. The dissertation then presents DySTAGE, a dynamic graph representation learning framework that addresses the evolving nature of financial markets by modeling changing asset relationships, demonstrating superior performance in both predictive accuracy and portfolio optimization. Finally, the dissertation proposes a hybrid framework integrating LLMs with reinforcement learning for adaptive margin trading, enabling dynamic risk management through explainable market reasoning. For mathematical reasoning, the research develops two novel evaluation frameworks that expand beyond traditional correctness metrics: CreativeMath and FaultyMath. CreativeMath assesses LLMs' ability to generate novel, insightful solutions to mathematical problems, introducing a comprehensive benchmark of competition-level problems with multiple human solutions. FaultyMath evaluates logical robustness by testing whether models can identify logically flawed or unsolvable problems, revealing significant gaps in current systems' critical thinking capabilities. 
In structured data analysis, the dissertation introduces DataFrame QA, a privacy-preserving framework that enables natural language interaction with tabular data without exposing sensitive information, achieving high accuracy while eliminating data exposure risks. The research also presents TextFlow, a modular approach to flowchart understanding that separates visual extraction from semantic reasoning, demonstrating substantial improvements over end-to-end vision-language models in accuracy and interpretability. Collectively, these contributions advance AI capabilities across multiple dimensions—efficiency, adaptability, creativity, logical robustness, privacy, and interpretability—while establishing methodologies that leverage the strengths of different AI paradigms for complex analytical tasks. The dissertation provides both theoretical insights and practical frameworks that bridge the gap between specialized neural architectures and general-purpose language models, with applications in finance, education, data science, and beyond. 
653 |a Computer science 
653 |a Information science 
653 |a Artificial intelligence 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3237577613/abstract/embedded/Y2VX53961LHR7RE6?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3237577613/fulltextPDF/embedded/Y2VX53961LHR7RE6?source=fedsrch