TY - CPAPER
T1 - Building sparse support vector machines for multi-instance classification
AU - Fu, Zhouyu
AU - Lu, Guojun
AU - Ting, Kai Ming
AU - Zhang, Dengsheng
PY - 2011
Y1 - 2011
AB - We propose a direct approach to learning sparse Support Vector Machine (SVM) prediction models for Multi-Instance (MI) classification. The proposed sparse SVM is based on a "label-mean" formulation of MI classification, which takes the average of the predictions of individual instances as the bag-level prediction. This leads to a convex optimization problem, which is essential both for the tractability of the optimization problem arising from the sparse SVM formulation derived subsequently and for the validity of the optimization strategy employed to solve it. Based on the "label-mean" formulation, we can build sparse SVM models for MI classification and explicitly control their sparsity by enforcing a maximum number of expansions allowed in the prediction function. An effective optimization strategy is adopted to solve the formulated sparse learning problem, which involves learning both the classifier and the expansion vectors. Experimental results on benchmark data sets demonstrate that the proposed approach is effective in building very sparse SVM models while achieving performance comparable to state-of-the-art MI classifiers.
UR - http://handle.uws.edu.au:8081/1959.7/560743
UR - http://www.ecmlpkdd2011.org/
DO - 10.1007/978-3-642-23780-5_40
M3 - Conference Paper
SN - 9783642237799
SP - 471
EP - 486
BT - Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2011, Athens, Greece, September 5-9, 2011: Proceedings, Part I
PB - Springer
T2 - ECML PKDD (Conference)
Y2 - 5 September 2011
ER -