Abstract
Support Vector Data Description (SVDD) scales poorly to large data sets: its computational load increases drastically as the training set grows. To address this problem, we propose a fast SVDD method based on K-means clustering. Our method follows a divide-and-conquer strategy: it trains each decomposed sub-problem to obtain its support vectors, then retrains on the pooled support vectors to find a global data description of the whole target class. The proposed method yields a description similar to that of the original SVDD at reduced computational cost. Experiments demonstrate the efficiency of our method.
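The divide-and-conquer strategy sketched in the abstract can be illustrated in a few lines. The sketch below is an assumption-laden illustration, not the paper's implementation: it uses scikit-learn's `OneClassSVM` with an RBF kernel as a stand-in for SVDD (the two formulations coincide for the Gaussian kernel), and the function name `fast_svdd` and all parameter values are illustrative choices, not the authors' settings.

```python
# Sketch of the divide-and-conquer strategy: decompose the target class
# with K-means, train a local description per cluster, then retrain on
# the pooled support vectors. OneClassSVM (RBF kernel) stands in for
# SVDD; all parameter values here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

def fast_svdd(X, n_clusters=4, nu=0.1, gamma=0.5, seed=0):
    # 1. Decompose the training set into sub-problems via K-means.
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    # 2. Train a local description on each cluster; keep only its
    #    support vectors.
    sv_list = []
    for k in range(n_clusters):
        local = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma)
        local.fit(X[labels == k])
        sv_list.append(local.support_vectors_)
    support_vectors = np.vstack(sv_list)
    # 3. Retrain on the pooled support vectors to obtain the global
    #    description of the whole target class.
    return OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(support_vectors)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))   # toy target class
model = fast_svdd(X)
```

Because each sub-problem sees only a fraction of the data, and the final retraining sees only support vectors, the quadratic programs stay small, which is the source of the speed-up the paper reports.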
Copyright information
© 2007 Springer Berlin Heidelberg
Cite this paper
Kim, P.J., Chang, H.J., Song, D.S., Choi, J.Y. (2007). Fast Support Vector Data Description Using K-Means Clustering. In: Liu, D., Fei, S., Hou, Z., Zhang, H., Sun, C. (eds) Advances in Neural Networks – ISNN 2007. ISNN 2007. Lecture Notes in Computer Science, vol 4493. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72395-0_64
Print ISBN: 978-3-540-72394-3
Online ISBN: 978-3-540-72395-0