Time Series Foundation Models and Their Applications to Scientific Discoveries

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Li, Weijian
Published: ProQuest Dissertations & Theses
Subjects: Computer science; Astrophysics; Artificial intelligence
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3245357208
003 UK-CbPIL
020 |a 9798291587218 
035 |a 3245357208 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Li, Weijian 
245 1 |a Time Series Foundation Models and Their Applications to Scientific Discoveries 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a The advent of large foundation models like ChatGPT and the concept of artificial general intelligence have shifted the machine learning paradigm from “one model per task” to “one large model, many tasks.” On one hand, the prevalent LLM-based foundation models excel at many tasks that can be described by natural language, programming language, and mathematical language. On the other hand, they still struggle with tasks in other modalities, such as DNA data and continuous time-series data. This limitation has led researchers to build specialized foundation models for other data domains by transferring techniques from LLM-based foundation models. This thesis examines foundation models with a focus on a direction that has received less attention from the research community: time series foundation models and their applications to scientific discoveries. Early time series foundation models (TSFMs) have demonstrated superior efficacy across various benchmark datasets, outperforming traditional one-model-per-task approaches in forecasting tasks. While these models are pre-trained on hundreds of gigabytes of multi-domain time series data and achieve good performance on benchmarks, limitations and challenges remain in developing a better TSFM with more comprehensive abilities. Moreover, their potential contribution to scientific discoveries remains largely unexplored. In particular, questions persist regarding their ability to handle irregularly sampled scientific time series and their effectiveness in domain-specific downstream tasks such as variable star classification in astrophysics. This thesis approaches the realm of time series foundation models from three perspectives. i) Starting from the origin of foundation models, LLM-based foundation models, I study the adaptive batch size technique to improve the pre-training efficiency of large models. 
ii) I study better methodologies for each step of the time series foundation model pipeline that contribute to developing a better time series foundation model. iii) I study the application of time series foundation models to accelerating scientific discoveries in astrophysics. 
653 |a Computer science 
653 |a Astrophysics 
653 |a Artificial intelligence 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3245357208/abstract/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3245357208/fulltextPDF/embedded/H09TXR3UUZB2ISDL?source=fedsrch