Abstract
Most machine learning applications have at least two stages: the learning stage and the deployment stage. K-nn is a lazy learner: unlike a decision tree, which learns a model (i.e., the tree) during the learning stage, it does nothing at learning time. In the deployment stage, it computes the result directly by utilising information from the training dataset D. While laziness spares the naive K-nn from training, it can cause significant computational issues at inference. Every inference takes O(n) time, for n the number of training instances. Although linear in theory, the actual computational time can be significant because n can be large in real-world applications. To tackle this, after the introduction of the basic learning algorithm in Sect. 6.1, we introduce methods to speed up K-nn in Sect. 6.2. This is followed by a brief discussion of how to reasonably output a classification probability, as required in many applications, on top of the predicted label. After these, we present a robustness attack and discuss other attacks. Unlike the one for the decision tree, the robustness attack for K-nn in Sect. 6.4 utilises constraint solving, and is both sound and complete.
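To make the laziness concrete, the following is a minimal sketch (not the book's implementation) of naive K-nn inference: no work at training time, and an O(n) scan over all training instances per query. The function name and the majority-vote tie-breaking are illustrative assumptions; the vote counts also yield the kind of classification probability mentioned above.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Naive K-nn: no learning stage; every query scans all n training points."""
    # O(n) work per inference: distance from the query to every training instance
    dists = np.linalg.norm(X_train - x, axis=1)
    # indices of the k nearest training points
    nearest = np.argsort(dists)[:k]
    # majority vote among the k neighbours' labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    # counts / k would give an empirical class probability on top of the label
    return labels[np.argmax(counts)]

# Tiny illustrative dataset: two clusters with labels 0 and 1
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.2, 0.5]), k=3))
```

Because all computation happens inside `knn_predict`, speeding up K-nn (Sect. 6.2) amounts to avoiding this exhaustive distance scan, e.g. via space-partitioning index structures.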
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Huang, X., Jin, G., Ruan, W. (2023). K-Nearest Neighbor. In: Machine Learning Safety. Artificial Intelligence: Foundations, Theory, and Algorithms. Springer, Singapore. https://doi.org/10.1007/978-981-19-6814-3_6
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-6813-6
Online ISBN: 978-981-19-6814-3