Abstract
Linear Subspace Learning (LSL) has been widely used in many areas of information processing, such as dimensionality reduction, data mining, pattern recognition and computer vision. Recent years have witnessed several excellent extensions of PCA in LSL. One is the recent L1-norm maximization principal component analysis (L1Max-PCA), which aims to learn a linear subspace efficiently. L1Max-PCA simply mimics PCA by replacing the covariance with the so-called L1-norm dispersion in the mapped feature space. However, this formulation is difficult to interpret intuitively. In this paper, a novel subspace learning approach based on sparse dimension reduction is proposed, which enforces the sparsity of the mapped data to better recover cluster structures. The optimization problem is solved efficiently via the Alternating Direction Method (ADM). Experimental results show that the proposed method is effective in subspace learning.
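To make the L1-norm dispersion idea concrete: where classical PCA finds the direction w maximizing the variance sum_i (w^T x_i)^2, L1Max-PCA instead maximizes sum_i |w^T x_i|. The sketch below is a minimal, hedged illustration of one common greedy sign-flipping iteration for this objective (in the style of Kwak's L1-PCA); it is not the paper's proposed ADM-based sparse method, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def l1max_pca_direction(X, n_iter=100, seed=0):
    """Find one unit direction w maximizing the L1 dispersion sum_i |w^T x_i|.

    Greedy sign-flipping iteration (Kwak-style sketch, not the paper's ADM
    method). X is an (n_samples, n_features) array, assumed mean-centered.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        # Fix the signs of the projections, then re-solve for w in closed form.
        p = np.sign(X @ w)
        p[p == 0] = 1.0          # avoid a zero sign stalling the update
        w_new = X.T @ p
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break                 # converged: signs no longer change w
        w = w_new
    return w

# Toy data with a clearly dominant axis, so the recovered direction is obvious.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2)) * np.array([5.0, 0.5])
X -= X.mean(axis=0)
w = l1max_pca_direction(X)
```

On this toy data the iteration aligns w with the high-dispersion first axis, mirroring what the covariance eigenvector would do in ordinary PCA, but using absolute deviations, which makes the objective less sensitive to outliers.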
Original language | English |
---|---|
Title of host publication | Proceedings of the 2014 International Joint Conference on Neural Networks, July 6-11, 2014, Beijing, China |
Publisher | IEEE |
Pages | 3540-3547 |
Number of pages | 8 |
ISBN (Print) | 9781479914845 |
DOIs | |
Publication status | Published - 2014 |
Event | International Joint Conference on Neural Networks - Duration: 6 Jul 2014 → … |
Conference
Conference | International Joint Conference on Neural Networks |
---|---|
Period | 6/07/14 → … |
Keywords
- dimension reduction (statistics)
- invariant subspaces
- principal components analysis