Fuzzy Simplicial Networks: A Topology-Inspired Model to Improve Task Generalization in Few-shot Learning

Bibliographic Details
Published in: arXiv.org (Sep 23, 2020), p. n/a
First author: Kvinge, Henry
Other authors: New, Zachary; Courts, Nico; Lee, Jung H; Phillips, Lauren A; Corley, Courtney D; Tuor, Aaron; Avila, Andrew; Hodas, Nathan O
Publisher: Cornell University Library, arXiv.org
Subjects: Algorithms; Computer vision; Datasets; Networks; Model testing; Machine learning; Failure modes; Topology
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2445794417
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2445794417 
045 0 |b d20200923 
100 1 |a Kvinge, Henry 
245 1 |a Fuzzy Simplicial Networks: A Topology-Inspired Model to Improve Task Generalization in Few-shot Learning 
260 |b Cornell University Library, arXiv.org  |c Sep 23, 2020 
513 |a Working Paper 
520 3 |a Deep learning has shown great success in settings with massive amounts of data but has struggled when data is limited. Few-shot learning algorithms, which seek to address this limitation, are designed to generalize well to new tasks with limited data. Typically, models are evaluated on unseen classes and datasets that are defined by the same fundamental task as they are trained for (e.g. category membership). One can also ask how well a model can generalize to fundamentally different tasks within a fixed dataset (for example: moving from category membership to tasks that involve detecting object orientation or quantity). To formalize this kind of shift we define a notion of "independence of tasks" and identify three new sets of labels for established computer vision datasets that test a model's ability to generalize to tasks which draw on orthogonal attributes in the data. We use these datasets to investigate the failure modes of metric-based few-shot models. Based on our findings, we introduce a new few-shot model called Fuzzy Simplicial Networks (FSN) which leverages a construction from topology to more flexibly represent each class from limited data. In particular, FSN models can not only form multiple representations for a given class but can also begin to capture the low-dimensional structure which characterizes class manifolds in the encoded space of deep networks. We show that FSN outperforms state-of-the-art models on the challenging tasks we introduce in this paper while remaining competitive on standard few-shot benchmarks. 
653 |a Algorithms 
653 |a Computer vision 
653 |a Datasets 
653 |a Networks 
653 |a Model testing 
653 |a Machine learning 
653 |a Failure modes 
653 |a Topology 
700 1 |a New, Zachary 
700 1 |a Courts, Nico 
700 1 |a Lee, Jung H 
700 1 |a Phillips, Lauren A 
700 1 |a Corley, Courtney D 
700 1 |a Tuor, Aaron 
700 1 |a Avila, Andrew 
700 1 |a Hodas, Nathan O 
773 0 |t arXiv.org  |g (Sep 23, 2020), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2445794417/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2009.11253
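
Note: the abstract (field 520) describes metric-based few-shot learning and motivates richer per-class representations only at a high level; the construction behind Fuzzy Simplicial Networks is not given here. The sketch below is therefore a hypothetical illustration, not the authors' method: it contrasts a single-prototype class representation with a toy multi-representative one (a k-means stand-in) on a two-mode class, to show why a single centroid per class can fail in the way the abstract alludes to. All names and parameters are illustrative assumptions.

```python
# Minimal sketch, assuming a generic metric-based few-shot setup (NOT the paper's FSN).
import numpy as np

def single_prototype(support):
    """Represent a class by the mean of its support embeddings (ProtoNet-style)."""
    return support.mean(axis=0, keepdims=True)

def multi_prototype(support, k=2, iters=10, seed=0):
    """Represent a class by k centroids (toy k-means), allowing multi-modal classes."""
    rng = np.random.default_rng(seed)
    centroids = support[rng.choice(len(support), size=min(k, len(support)), replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(support[:, None, :] - centroids[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(len(centroids)):
            if np.any(assign == j):
                centroids[j] = support[assign == j].mean(axis=0)
    return centroids

def classify(query, class_reps):
    """Assign each query to the class whose nearest representative is closest."""
    scores = []
    for reps in class_reps:                  # reps: (n_reps, dim) for one class
        d = np.linalg.norm(query[:, None, :] - reps[None, :, :], axis=-1)
        scores.append(d.min(axis=1))         # distance to the nearest representative
    return np.stack(scores, axis=1).argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Class A has two visually distinct modes; class B sits between them.
    class_a = np.concatenate([rng.normal(-3, 0.3, (5, 2)), rng.normal(3, 0.3, (5, 2))])
    class_b = rng.normal(0, 0.3, (10, 2))
    queries = np.concatenate([rng.normal(-3, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
    labels = np.zeros(len(queries), dtype=int)   # all queries belong to class A

    for name, rep_fn in [("1 prototype", single_prototype), ("2 prototypes", multi_prototype)]:
        preds = classify(queries, [rep_fn(class_a), rep_fn(class_b)])
        print(name, "accuracy:", (preds == labels).mean())
```

With a single prototype, the mean of class A's two modes collapses onto class B's region and the queries are misclassified; with two representatives per class, both modes are recovered. This is only meant to make concrete the failure mode that motivates richer class representations such as the fuzzy simplicial construction described in the abstract.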