Abstract
The supervised classification problem offers numerous challenges to algorithm designers, most of them stemming from the size and type of the data. While dealing with large datasets has been the focus of many studies, the task of uncovering subtle relationships within data remains an important challenge, for which new solution concepts have to be explored. In this paper, we propose a general framework for the supervised classification problem based on game theory and the Nash equilibrium concept. The framework is used to estimate the parameters of probabilistic classification models by approximating the equilibrium of a game as the optimum of a function. CMA-ES is adapted to compute such model parameters; a noise mechanism is used to enhance the diversity of the search. To illustrate the approach we use Probit regression; numerical experiments indicate that the game-theoretic approach may provide better insight into data than other models.
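The core idea above — estimating Probit-model parameters by treating the fit as an optimization problem solved with an evolution strategy — can be sketched as follows. This is a minimal illustration, not the paper's method: it uses a simple (1+λ) evolution strategy with fixed step size as a stand-in for CMA-ES, a plain negative log-likelihood objective rather than the paper's game-theoretic equilibrium formulation, and a tiny synthetic dataset; all function and variable names are hypothetical.

```python
import math
import random

def probit(z):
    # Standard normal CDF: P(y = 1 | x) = Phi(w . x)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def nll(w, X, y):
    # Negative log-likelihood of the Probit model on dataset (X, y)
    total = 0.0
    for xi, yi in zip(X, y):
        p = probit(sum(wj * xj for wj, xj in zip(w, xi)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # clamp for numerical safety
        total -= math.log(p) if yi == 1 else math.log(1.0 - p)
    return total

def es_fit(X, y, dim, iters=200, pop=20, sigma=0.5, seed=0):
    # (1+lambda) evolution strategy with Gaussian perturbations --
    # a simplified stand-in for CMA-ES (which additionally adapts
    # the full covariance matrix and step size).
    rng = random.Random(seed)
    best = [0.0] * dim
    best_f = nll(best, X, y)
    for _ in range(iters):
        for _ in range(pop):
            cand = [b + sigma * rng.gauss(0.0, 1.0) for b in best]
            f = nll(cand, X, y)
            if f < best_f:
                best, best_f = cand, f
    return best

# Tiny synthetic task: label is 1 when x0 + x1 > 0 (constant 1.0 acts as bias)
rng = random.Random(1)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1), 1.0] for _ in range(100)]
y = [1 if xi[0] + xi[1] > 0 else 0 for xi in X]

w = es_fit(X, y, dim=3)
acc = sum(
    (probit(sum(a * b for a, b in zip(w, xi))) > 0.5) == (yi == 1)
    for xi, yi in zip(X, y)
) / len(X)
```

In the paper's setting the objective would instead measure deviation from a Nash equilibrium of the classification game, and the optimizer would be CMA-ES proper (e.g. via the pycma package) with an added noise mechanism to diversify the search.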
Notes
1. UCI Machine Learning Repository, https://archive.ics.uci.edu/ml/index.php, accessed January 2020.
2. Version 0.21.1.
Acknowledgments
The first author would like to acknowledge the financial support provided for this research by Babeș-Bolyai University grant GTC 31379/2020.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Suciu, MA., Lung, R.I. (2020). Nash Equilibrium as a Solution in Supervised Classification. In: Bäck, T., et al. Parallel Problem Solving from Nature – PPSN XVI. PPSN 2020. Lecture Notes in Computer Science(), vol 12269. Springer, Cham. https://doi.org/10.1007/978-3-030-58112-1_37
Print ISBN: 978-3-030-58111-4
Online ISBN: 978-3-030-58112-1