T-JEPA: Augmentation-Free Self-Supervised Learning for Tabular Data

Bibliographic Details

Published in: arXiv.org (Dec 19, 2024), p. n/a
Main Author: Thimonier, Hugo
Other Authors: De Melo Costa, José Lucas; Popineau, Fabrice; Rimmel, Arpad; Doan, Bich-Liên
Publisher: Cornell University Library, arXiv.org
Subjects: Structured data; Regularization methods; Regularization; Data augmentation; Self-supervised learning; Tables (data); Machine learning; Decision trees; Representations
Online Access: Citation/Abstract; Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3147571372
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3147571372 
045 0 |b d20241219 
100 1 |a Thimonier, Hugo 
245 1 |a T-JEPA: Augmentation-Free Self-Supervised Learning for Tabular Data 
260 |b Cornell University Library, arXiv.org  |c Dec 19, 2024 
513 |a Working Paper 
520 3 |a Self-supervision is often used for pre-training to foster performance on a downstream task by constructing meaningful representations of samples. Self-supervised learning (SSL) generally involves generating different views of the same sample and thus requires data augmentations that are challenging to construct for tabular data. This constitutes one of the main challenges of self-supervision for structured data. In the present work, we propose a novel augmentation-free SSL method for tabular data. Our approach, T-JEPA, relies on a Joint Embedding Predictive Architecture (JEPA) and is akin to mask reconstruction in the latent space. It involves predicting the latent representation of one subset of features from the latent representation of a different subset within the same sample, thereby learning rich representations without augmentations. We use our method as a pre-training technique and train several deep classifiers on the obtained representations. Our experimental results demonstrate a substantial improvement in both classification and regression tasks, outperforming models trained directly on samples in their original data space. Moreover, T-JEPA enables some methods to consistently outperform or match the performance of traditional methods like Gradient Boosted Decision Trees. To understand why, we extensively characterize the obtained representations and show that T-JEPA effectively identifies relevant features for downstream tasks without access to the labels. Additionally, we introduce regularization tokens, a novel regularization method critical for the training of JEPA-based models on structured data. 
653 |a Structured data 
653 |a Regularization methods 
653 |a Regularization 
653 |a Data augmentation 
653 |a Self-supervised learning 
653 |a Tables (data) 
653 |a Machine learning 
653 |a Decision trees 
653 |a Representations 
700 1 |a De Melo Costa, José Lucas 
700 1 |a Popineau, Fabrice 
700 1 |a Rimmel, Arpad 
700 1 |a Doan, Bich-Liên 
773 0 |t arXiv.org  |g (Dec 19, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3147571372/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2410.05016
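
The 520 abstract above describes the core T-JEPA objective concretely enough to sketch in code. Below is a minimal, illustrative PyTorch sketch of that latent-space masked-prediction idea, not the authors' implementation: the encoder architecture, the pooled-context predictor, the L2 loss, the EMA momentum, and all names here are assumptions for illustration, and the regularization tokens the abstract mentions are omitted.

```python
# Minimal sketch of a JEPA-style objective for tabular data, per the
# abstract: predict the latent representation of one feature subset from
# the latent representation of a different subset of the SAME sample.
# All architectural choices below are assumptions, not the paper's exact setup.
import copy
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Embeds each scalar feature as a token, then encodes the token set."""
    def __init__(self, n_features: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # per-feature embedding (assumed)
        self.mix = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) -> latents: (batch, n_features, d_model)
        tokens = self.embed(x.unsqueeze(-1))
        return self.mix(tokens)

def tjepa_step(x, context_enc, target_enc, predictor, mask_ratio=0.5):
    """One augmentation-free step: split features into target and context
    subsets, then regress the target latents from the context latents."""
    n = x.size(1)
    perm = torch.randperm(n)
    n_tgt = max(1, int(mask_ratio * n))
    tgt_idx, ctx_idx = perm[:n_tgt], perm[n_tgt:]

    # Target latents come from an EMA copy; no gradient flows into it
    # (the usual JEPA device against representational collapse).
    with torch.no_grad():
        tgt_latents = target_enc(x)[:, tgt_idx]     # (batch, n_tgt, d)

    ctx_latents = context_enc(x)[:, ctx_idx]        # (batch, n_ctx, d)
    # Simplification: predict every target latent from the pooled context.
    pred = predictor(ctx_latents.mean(dim=1))       # (batch, d)
    return (pred.unsqueeze(1) - tgt_latents).pow(2).mean()

@torch.no_grad()
def ema_update(target_enc, context_enc, momentum=0.996):
    for p_t, p_c in zip(target_enc.parameters(), context_enc.parameters()):
        p_t.mul_(momentum).add_(p_c, alpha=1 - momentum)

if __name__ == "__main__":
    ctx = FeatureEncoder(n_features=10)
    tgt = copy.deepcopy(ctx)           # EMA target starts as a copy
    head = nn.Linear(64, 64)           # simple predictor head (assumed)
    opt = torch.optim.Adam(list(ctx.parameters()) + list(head.parameters()))
    loss = tjepa_step(torch.randn(32, 10), ctx, tgt, head)
    loss.backward()
    opt.step()
    ema_update(tgt, ctx)
```

The design point the abstract emphasizes is that both "views" are feature subsets of the same row, so no handcrafted tabular augmentations are needed; the stop-gradient EMA target encoder is the standard JEPA mechanism for keeping the predicted latents from collapsing to a constant.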