Abstract
Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier called naïve Bayes is competitive with state-of-the-art classifiers. This simple approach stems from the assumption of conditional independence among features given the class. Improvements in the accuracy of naïve Bayes have been demonstrated by a number of approaches, collectively named semi naïve Bayes classifiers. Semi naïve Bayes classifiers are usually based on the search for specific values or structures, and their learning process typically relies on greedy search algorithms. In this paper we propose to learn these semi naïve Bayes structures through estimation of distribution algorithms, which are non-deterministic, stochastic heuristic search strategies. Experimental tests have been carried out on 21 data sets from the UCI repository.
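The conditional-independence assumption the abstract refers to means the class posterior factorizes as P(c | x) ∝ P(c) · Π_i P(x_i | c). The following is a minimal illustrative sketch of a plain (not semi) naïve Bayes classifier over discrete features with Laplace smoothing; the data set, function names, and smoothing choice are illustrative assumptions, not the paper's method or the EDA-based structure learning it proposes.

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (feature_tuple, class_label) pairs."""
    class_counts = Counter(label for _, label in examples)
    # feature_counts[c][i][v] = count of value v for feature i within class c
    feature_counts = defaultdict(lambda: defaultdict(Counter))
    for features, label in examples:
        for i, v in enumerate(features):
            feature_counts[label][i][v] += 1
    return class_counts, feature_counts

def predict(features, class_counts, feature_counts):
    """Pick argmax_c log P(c) + sum_i log P(x_i | c), with Laplace smoothing."""
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for c, n_c in class_counts.items():
        score = math.log(n_c / total)  # log prior P(c)
        for i, v in enumerate(features):
            counts = feature_counts[c][i]
            # add-one smoothing so unseen values do not zero out the product
            score += math.log((counts[v] + 1) / (n_c + len(counts) + 1))
        if score > best_score:
            best_label, best_score = c, score
    return best_label

# Toy weather data (hypothetical, for illustration only)
data = [
    (("sunny", "hot"), "no"),
    (("sunny", "mild"), "no"),
    (("rain", "mild"), "yes"),
    (("rain", "cool"), "yes"),
    (("overcast", "hot"), "yes"),
]
model = train(data)
print(predict(("rain", "mild"), *model))  # → yes
```

Semi naïve variants relax this factorization, e.g. by joining dependent features into a single compound feature or dropping irrelevant ones; the paper's contribution is to drive that structural search with estimation of distribution algorithms instead of greedy search.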
References
Blake, C.L., Merz, C.J.: UCI repository of machine learning databases (1998), http://www.ics.uci.edu/~mlearn/
De Bonet, J.S., Isbell, C.L., Viola, P.: MIMIC: Finding optima by estimating probability densities. In: Advances in Neural Information Processing Systems, vol. 9 (1997)
Domingos, P., Pazzani, M.: Beyond independence: conditions for the optimality of the simple Bayesian classifier. In: Proceedings of the 13th International Conference on Machine Learning, pp. 105–112 (1996)
Dougherty, J., Kohavi, R., Sahami, M.: Supervised and unsupervised discretization of continuous features. In: Proceedings of the 12th International Conference on Machine Learning, pp. 194–202 (1995)
Duda, R., Hart, P.: Pattern Classification and Scene Analysis. John Wiley and Sons, Chichester (1973)
Etxeberria, R., Larrañaga, P.: Global optimization with Bayesian networks. In: II Symposium on Artificial Intelligence. CIMAF 1999, Special Session on Distributions and Evolutionary Optimization, pp. 332–339 (1999)
Fayyad, U., Irani, K.: Multi-interval discretization of continuous-valued attributes for classification learning. In: Proceedings of the 13th International Conference on Artificial Intelligence, pp. 1022–1027 (1993)
Ferreira, J.T.A.S., Denison, D.G.T., Hand, D.J.: Weighted naïve Bayes modelling for data mining. Technical report, Department of Mathematics, Imperial College (May 2001)
Friedman, N., Geiger, D., Goldszmidt, M.: Bayesian network classifiers. Machine Learning 29(2-3), 131–163 (1997)
Gama, J.: Iterative Bayes. Intelligent Data Analysis 4, 475–488 (2000)
Good, I.J.: The Estimation of Probabilities: An Essay on Modern Bayesian Methods. MIT Press, Cambridge (1965)
Hand, D.J., Yu, K.: Idiot’s Bayes - not so stupid after all? International Statistical Review 69(3), 385–398 (2001)
Kohavi, R., Becker, B., Sommerfield, D.: Improving simple Bayes. Technical report, Data Mining and Visualization Group, Silicon Graphics (1997)
Kohavi, R.: Scaling up the accuracy of naïve-Bayes classifiers: a decision-tree hybrid. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pp. 202–207 (1996)
Kohavi, R., John, G., Long, R., Manley, D., Pfleger, K.: MLC++:A machine learning library in C++. In: Tools with Artificial Intelligence, pp. 740–743. IEEE Computer Society Press, Los Alamitos (1994)
Kononenko, I.: Semi-naïve Bayesian classifier. In: Sixth European Working Session on Learning, pp. 206–219 (1991)
Langley, P.: Induction of recursive Bayesian classifiers. In: European Conference on Machine Learning, pp. 153–164. Springer, Berlin (1993)
Langley, P., Sage, S.: Induction of selective Bayesian classifiers. In: Morgan Kaufmann (ed.) Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, Seattle, WA, pp. 399–406 (1994)
Larrañaga, P., Etxeberria, R., Lozano, J.A., Peña, J.M.: Optimization in continuous domains by learning and simulation of Gaussian networks. In: Proceedings of the Workshop in Optimization by Building and Using Probabilistic Models. A Workshop within the 2000 Genetic and Evolutionary Computation Conference, GECCO 2000, Las Vegas, Nevada, USA, pp. 201–204 (2000)
Larrañaga, P., Lozano, J.A.: Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Kluwer Academic Publishers, Dordrecht (2001)
Mühlenbein, H.: The equation for response to selection and its use for prediction. Evolutionary Computation 5, 303–346 (1998)
Mühlenbein, H., Paaß, G.: From recombination of genes to the estimation of distributions I. Binary parameters. In: Parallel Problem Solving from Nature - PPSN IV. LNCS, vol. 1411, pp. 178–187 (1996)
Pazzani, M.: Searching for dependencies in Bayesian classifiers. In: Proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics, pp. 239–248 (1996)
Robles, V., Larrañaga, P., Peña, J.M., Menasalvas, E., Pérez, M.S.: Interval Estimation Naïve Bayes. LNCS (2003)
Ting, K.M.: Discretization of continuous-valued attributes and instance-based learning. Technical Report 491, University of Sydney (1994)
Webb, G.I., Pazzani, M.J.: Adjusted probability naive Bayesian induction. In: Proceedings of the 11th Australian Joint Conference on Artificial Intelligence, pp. 285–295 (1998)
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Robles, V., Larrañaga, P., Peña, J.M., Pérez, M.S., Menasalvas, E., Herves, V. (2003). Learning Semi Naïve Bayes Structures by Estimation of Distribution Algorithms. In: Pires, F.M., Abreu, S. (eds) Progress in Artificial Intelligence. EPIA 2003. Lecture Notes in Computer Science(), vol 2902. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24580-3_31
DOI: https://doi.org/10.1007/978-3-540-24580-3_31
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-20589-0
Online ISBN: 978-3-540-24580-3
eBook Packages: Springer Book Archive