Fast approximate text document clustering using compressive sampling

Research output: Chapter in Book / Conference Paper › Conference paper › peer-review

4 Citations (Scopus)

Abstract

Document clustering involves repetitive scanning of a document set, so as the size of the set increases, the time required for the clustering task grows and the task may even become infeasible due to computational constraints. Compressive sampling is a feature-sampling technique that allows a vector to be perfectly reconstructed from a small number of samples, provided that the vector is sparse in some known domain. In this article, we apply the theory behind compressive sampling to the document clustering problem using k-means clustering. We provide a method of computing high-accuracy clusters in a fraction of the time it would take to cluster the documents directly, using the Discrete Fourier Transform and the Discrete Cosine Transform. We provide empirical results showing that compressive sampling yields a 14-fold speedup with little reduction in accuracy on 7,095 documents, and we also provide a very accurate clustering of a 231,219-document set with a 20-fold speedup compared to performing k-means clustering directly on the document set. This shows that compressive clustering is a very useful tool for quickly computing approximate clusters.
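The idea the abstract describes can be illustrated with a small sketch. This is not the authors' code: it is a minimal, hypothetical illustration that keeps only the first few DFT coefficients of each document's term-frequency vector (the "sampling in a transform domain" step) and then runs plain Lloyd's k-means on the much shorter compressed vectors instead of the full-length ones. The toy data, the choice of 8 coefficients, and the naive center initialisation are all assumptions made for the example.

```python
import numpy as np

def compress(X, m):
    """Keep only the first m DFT coefficients of each row (split into
    real and imaginary parts) instead of all len(row) term features."""
    F = np.fft.rfft(X, axis=1)[:, :m]
    return np.hstack([F.real, F.imag])

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means. Centers are seeded deterministically from
    the data, which is adequate for this toy example but is not a
    production-quality initialiser."""
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy term-frequency matrix: group A uses the first half of a 100-term
# vocabulary, group B the second half, plus a little noise.
rng = np.random.default_rng(1)
A = np.hstack([np.full((4, 50), 5.0), np.zeros((4, 50))]) + rng.random((4, 100))
B = np.hstack([np.zeros((4, 50)), np.full((4, 50), 5.0)]) + rng.random((4, 100))
X = np.vstack([A, B])

# Cluster on 8 sampled DFT coefficients rather than all 100 features.
labels = kmeans(compress(X, 8), k=2)
```

Because the two groups differ strongly in their low-frequency content, a handful of Fourier coefficients is enough to separate them; this is the sense in which clustering the compressed vectors can approximate clustering the originals at a fraction of the cost.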
Original language: English
Title of host publication: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2011, Athens, Greece, September 5-9, 2011, Proceedings, Part II
Publisher: Springer
Pages: 565-580
Number of pages: 16
ISBN (Print): 9783642237829
DOIs
Publication status: Published - 2011
Event: ECML PKDD (Conference)
Duration: 5 Sept 2011 → …

Publication series

ISSN (Print): 0302-9743

