MixMAS: A Framework for Sampling-Based Mixer Architecture Search for Multimodal Fusion and Learning

Bibliographic Details
Published in: arXiv.org (Dec 24, 2024), p. n/a
Main Author: Chergui, Abdelmadjid
Other Authors: Bezirganyan, Grigor; Sellami, Sana; Berti-Équille, Laure; Fournier, Sébastien
Publisher: Cornell University Library, arXiv.org
Online Access: Citation/Abstract
Full text outside of ProQuest
Abstract: Choosing a suitable deep learning architecture for multimodal data fusion is a challenging task, as it requires the effective integration and processing of diverse data types, each with distinct structures and characteristics. In this paper, we introduce MixMAS, a novel framework for sampling-based mixer architecture search tailored to multimodal learning. Our approach automatically selects the optimal MLP-based architecture for a given multimodal machine learning (MML) task. Specifically, MixMAS utilizes a sampling-based micro-benchmarking strategy to explore various combinations of modality-specific encoders, fusion functions, and fusion networks, systematically identifying the architecture that best meets the task's performance metrics.
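The abstract's search loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: the component names, the search-space contents, and the `micro_benchmark` stub (which a real system would replace with short training runs on a data subset) are all assumptions.

```python
import random

# Hypothetical search space; component names are illustrative, not from the paper.
ENCODERS = {"image": ["mlp_mixer", "conv_mixer"], "text": ["mlp_mixer", "gmlp"]}
FUSION_FUNCTIONS = ["concat", "sum", "mean"]
FUSION_NETWORKS = ["mlp_small", "mlp_large"]

def micro_benchmark(config, seed=0):
    """Stand-in for micro-benchmarking: a real implementation would briefly
    train the sampled architecture on a data subset and return a validation
    score. Here we just return a deterministic pseudo-random score."""
    rng = random.Random(hash((tuple(sorted(config.items())), seed)))
    return rng.random()

def sample_configs(n, seed=0):
    """Sample n candidate architectures: one encoder per modality,
    one fusion function, one fusion network."""
    rng = random.Random(seed)
    return [
        {
            "image_encoder": rng.choice(ENCODERS["image"]),
            "text_encoder": rng.choice(ENCODERS["text"]),
            "fusion_function": rng.choice(FUSION_FUNCTIONS),
            "fusion_network": rng.choice(FUSION_NETWORKS),
        }
        for _ in range(n)
    ]

def search(n_samples=10, seed=0):
    """Score each sampled configuration and keep the best one."""
    best_cfg, best_score = None, float("-inf")
    for cfg in sample_configs(n_samples, seed):
        score = micro_benchmark(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

The design point the sketch captures is that candidates are evaluated cheaply (micro-benchmarks) rather than fully trained, so many encoder/fusion combinations can be compared before committing to one architecture.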
ISSN:2331-8422
Source: Engineering Database