Linear Subspace Learning via sparse dimension reduction

Ming Yin, Yi Guo, Junbin Gao

Research output: Chapter in Book / Conference Paper › peer-review

3 Citations (Scopus)

Abstract

Linear Subspace Learning (LSL) has been widely used in many areas of information processing, such as dimensionality reduction, data mining, pattern recognition and computer vision. Recent years have witnessed several excellent extensions of PCA in LSL. One is the recent L1-norm maximization principal component analysis (L1Max-PCA), which aims to learn a linear subspace efficiently. L1Max-PCA mimics PCA by replacing the covariance with the so-called L1-norm dispersion in the mapped feature space. However, this formulation is difficult to interpret intuitively. In this paper, a novel subspace learning approach based on sparse dimension reduction is proposed, which enforces sparsity on the mapped data so as to better recover cluster structures. The resulting optimization problem is solved efficiently via the Alternating Direction Method (ADM). Experimental results show that the proposed method is effective in subspace learning.
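The abstract does not spell out the objective, so the following is a minimal illustrative sketch of the general idea: learn an orthonormal projection basis W together with sparse low-dimensional codes Y of the data, alternating between the two blocks of variables in the spirit of an alternating direction scheme. The objective (a least-squares fit plus an L1 penalty on the codes), the function names (`soft_threshold`, `sparse_subspace_learning`), and the parameter `lam` are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def sparse_subspace_learning(X, k, lam=0.1, n_iter=100):
    """Illustrative alternating scheme for
        min_{W, Y}  0.5 * ||X - W Y||_F^2 + lam * ||Y||_1
        s.t.        W^T W = I,
    where X is d x n data, W is a d x k basis, and Y holds k x n sparse codes.
    This is a sketch of sparse dimension reduction in general, not the
    paper's specific ADM formulation.
    """
    d, n = X.shape
    # Initialize W with the top-k left singular vectors of X (ordinary PCA).
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    W = U[:, :k]
    for _ in range(n_iter):
        # Y-step: with W fixed and orthonormal, the fit term equals
        # 0.5 * ||Y - W^T X||_F^2 up to a constant, so the update is a
        # soft-thresholded projection -- this is what enforces sparsity
        # of the mapped data.
        Y = soft_threshold(W.T @ X, lam)
        # W-step: an orthogonal Procrustes problem -- maximize
        # trace(W^T X Y^T) subject to W^T W = I, solved via the SVD
        # of X Y^T.
        U, _, Vt = np.linalg.svd(X @ Y.T, full_matrices=False)
        W = U @ Vt
    return W, Y

# Usage: project 50-dimensional points onto a 5-dimensional sparse code space.
X = np.random.randn(50, 200)
W, Y = sparse_subspace_learning(X, k=5, lam=0.2)
```

Each subproblem here has a closed-form solution (soft-thresholding and an SVD), which is the kind of per-block efficiency that makes alternating-direction methods attractive for objectives of this shape.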
Original language: English
Title of host publication: Proceedings of the 2014 International Joint Conference on Neural Networks, July 6-11, 2014, Beijing, China
Publisher: IEEE
Pages: 3540-3547
Number of pages: 8
ISBN (Print): 9781479914845
DOIs
Publication status: Published - 2014
Event: International Joint Conference on Neural Networks
Duration: 6 Jul 2014 → 11 Jul 2014

Conference

Conference: International Joint Conference on Neural Networks
Period: 6/07/14 → 11/07/14

Keywords

  • dimension reduction (statistics)
  • invariant subspaces
  • principal components analysis
