Fast Support Vector Data Description Using K-Means Clustering

  • Conference paper
Advances in Neural Networks – ISNN 2007 (ISNN 2007)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4493)

Abstract

Support Vector Data Description (SVDD) scales poorly to large data sets: its computational load increases drastically as the training set grows. To address this problem, we propose a fast SVDD method based on K-means clustering. Our method follows a divide-and-conquer strategy: it decomposes the training set into sub-problems, trains each sub-problem to obtain its support vectors, and then retrains on the collected support vectors to find a global data description of the whole target class. The proposed method produces a description similar to that of the original SVDD at a much lower computational cost. Experiments demonstrate the efficiency of our method.
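The two-stage procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: scikit-learn's `OneClassSVM` is used as a stand-in for SVDD (with the Gaussian kernel the two yield equivalent descriptions), and `n_clusters`, `nu`, and `gamma` are illustrative parameter choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

def fast_svdd(X, n_clusters=4, nu=0.1, gamma=0.5):
    # Stage 1: decompose the target class into K clusters (divide).
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

    # Stage 2: train a local description on each sub-problem and
    # keep only its support vectors (conquer).
    sv = []
    for k in range(n_clusters):
        local = OneClassSVM(nu=nu, gamma=gamma).fit(X[labels == k])
        sv.append(local.support_vectors_)
    sv = np.vstack(sv)

    # Stage 3: retrain on the pooled support vectors to obtain
    # a global description of the whole target class.
    return OneClassSVM(nu=nu, gamma=gamma).fit(sv)

# Usage on synthetic target-class data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
model = fast_svdd(X)
pred = model.predict(X)  # +1 inside the description, -1 outside
```

The speedup comes from the quadratic-or-worse cost of training: solving K small problems and one problem over the (much smaller) set of support vectors is cheaper than one solve over all N points.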







Editor information

Derong Liu, Shumin Fei, Zengguang Hou, Huaguang Zhang, Changyin Sun


Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Kim, P.J., Chang, H.J., Song, D.S., Choi, J.Y. (2007). Fast Support Vector Data Description Using K-Means Clustering. In: Liu, D., Fei, S., Hou, Z., Zhang, H., Sun, C. (eds) Advances in Neural Networks – ISNN 2007. ISNN 2007. Lecture Notes in Computer Science, vol 4493. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72395-0_64

  • DOI: https://doi.org/10.1007/978-3-540-72395-0_64

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72394-3

  • Online ISBN: 978-3-540-72395-0

  • eBook Packages: Computer Science (R0)
