Robust auto-weighted multi-view subspace clustering with common subspace representation matrix

Saved in:
Bibliographic Details
Published in: PLoS One vol. 12, no. 5 (May 2017), p. e0176769
Main Author: Zhuge, Wenzhang
Other Authors: Hou, Chenping; Jiao, Yuanyuan; Jia, Yue; Hong, Tao; Yi, Dongyun
Published:
Public Library of Science
Subjects:
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 1901715398
003 UK-CbPIL
022 |a 1932-6203 
024 7 |a 10.1371/journal.pone.0176769  |2 doi 
035 |a 1901715398 
045 2 |b d20170501  |b d20170531 
084 |a 174835  |2 nlm 
100 1 |a Zhuge, Wenzhang 
245 1 |a Robust auto-weighted multi-view subspace clustering with common subspace representation matrix 
260 |b Public Library of Science  |c May 2017 
513 |a Journal Article 
520 3 |a In many computer vision and machine learning applications, data sets are distributed on certain low-dimensional subspaces. Subspace clustering is a powerful technique for finding the underlying subspaces and clustering data points correctly. However, traditional subspace clustering methods can only be applied to data from a single source, and extending them so that they combine information from multiple data sources has become an active area of research. Previous multi-view subspace methods aim to learn multiple subspace representation matrices simultaneously, treating the learning tasks for the different views equally. After obtaining the representation matrices, they stack the learned matrices up as the common underlying subspace structure. However, in many problems both the importance of the sources and the importance of the features within a single source can vary, which makes such approaches ineffective. In this paper, we propose a novel method called Robust Auto-weighted Multi-view Subspace Clustering (RAMSC). In our method, the weights of both the sources and the features are learned automatically by exploiting a novel trick and introducing a sparse norm. More importantly, the output of our method is a common representation matrix that directly reflects the common underlying subspace structure. A new efficient algorithm is derived to solve the formulated objective, with a rigorous theoretical proof of its convergence. Extensive experimental results on five benchmark multi-view datasets demonstrate that the proposed method consistently outperforms state-of-the-art methods. 
651 4 |a China 
653 |a Social 
653 |a Cybernetics 
653 |a Visual perception 
653 |a Neurocomputing 
653 |a Intelligence 
653 |a Science 
653 |a Visual discrimination learning 
653 |a Defensive behavior 
653 |a Discriminant analysis 
653 |a Pattern recognition 
653 |a Segmentation 
653 |a Information systems 
653 |a Image processing 
653 |a Mathematics 
653 |a Clustering 
653 |a Statistical analysis 
653 |a Learning algorithms 
653 |a Circuits 
653 |a Data processing 
653 |a Color 
653 |a Bayesian analysis 
653 |a Classification 
653 |a Image retrieval 
653 |a Methods 
653 |a Information processing 
653 |a Mathematical models 
653 |a Learning 
653 |a Integration 
653 |a Embedding 
653 |a Neural networks 
653 |a Data mining 
653 |a Artificial intelligence 
653 |a Computer vision 
653 |a Machine learning 
653 |a Subspaces 
653 |a Cognitive tasks 
653 |a Robustness 
653 |a Representations 
653 |a Data points 
653 |a Subspace methods 
653 |a Algorithms 
700 1 |a Hou, Chenping 
700 1 |a Jiao, Yuanyuan 
700 1 |a Jia, Yue 
700 1 |a Hong, Tao 
700 1 |a Yi, Dongyun 
773 0 |t PLoS One  |g vol. 12, no. 5 (May 2017), p. e0176769 
786 0 |d ProQuest  |t Health & Medical Collection 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/1901715398/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/1901715398/fulltext/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/1901715398/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch
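
Editor's note: the abstract in field 520 describes RAMSC's auto-weighting idea only at a high level. The short Python sketch below is an illustration under assumptions, not the authors' RAMSC solver: it shows one standard way per-view weights emerge automatically when the objective sums the square root of each view's self-representation loss, which implies w_v = 1/(2*sqrt(loss_v)). The function name, the gradient-descent update, and the Frobenius regularizer standing in for the paper's sparse norm are all hypothetical choices.

    import numpy as np

    def auto_weighted_common_representation(views, lam=1.0, n_iter=50,
                                            lr=1e-3, eps=1e-8):
        # Illustrative sketch (not the paper's algorithm): learn one common
        # self-representation matrix Z shared by all views, re-deriving each
        # view's weight from its current loss instead of fixing it by hand.
        n = views[0].shape[1]                 # samples are columns of each view
        Z = np.zeros((n, n))
        w = [1.0 / len(views)] * len(views)
        for _ in range(n_iter):
            # Auto-weighting trick: minimizing sum_v sqrt(loss_v) implies
            # per-view weights w_v = 1 / (2 * sqrt(loss_v)).
            losses = [np.linalg.norm(X - X @ Z, "fro") ** 2 for X in views]
            w = [1.0 / (2.0 * np.sqrt(l + eps)) for l in losses]
            # Gradient step on the weighted objective; a plain Frobenius
            # regularizer stands in for the paper's sparse norm.
            grad = sum(wv * (-2.0) * X.T @ (X - X @ Z)
                       for wv, X in zip(w, views))
            grad += 2.0 * lam * Z
            Z -= lr * grad
            np.fill_diagonal(Z, 0.0)          # rule out the trivial Z = I
        return Z, w

    # Usage: two toy "views" observing the same 40 samples.
    rng = np.random.default_rng(0)
    X1 = rng.standard_normal((10, 40))
    X2 = rng.standard_normal((15, 40))
    Z, weights = auto_weighted_common_representation([X1, X2])
    affinity = np.abs(Z) + np.abs(Z.T)        # input for spectral clustering

Running spectral clustering on the symmetrized |Z| + |Z|^T then recovers the clusters, mirroring the role the abstract assigns to the common representation matrix.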