Abstract
Hierarchical clustering (HC) is a powerful tool in data analysis, as it reveals patterns in the observed data at different scales. Similarity-based HC methods take as input a fixed number of points together with their matrix of pairwise similarities, and output a dendrogram representing the nested partition. In some settings, however, the entire dataset is not known in advance, and hence neither are the relations between its points. In this paper we consider the case in which we observe a collection of samples drawn from a random distribution, with the number of points varying from draw to draw, and we want to extract a hierarchical clustering for each sample. Building on a continuous relaxation of Dasgupta's cost function, we integrate a triplet loss into Chami's formulation in order to learn an optimal similarity function between points, which is then used to compute the optimal hierarchy. Two architectures are tested as approximators of the similarity function on four datasets. The results are promising: in many cases the proposed method shows good robustness to noise and adapts to different datasets better than the classical approaches.
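To make the combined objective concrete, here is a minimal PyTorch sketch, not the authors' implementation: a small MLP stands in for the learned similarity function, a margin-based triplet loss supervises the similarities, and the differentiable relaxation of Dasgupta's cost (e.g. the hyperbolic objective of Chami et al.) is left as a placeholder callable. The network shape, the cosine-similarity scoring, the margin, and the weighting coefficient `lam` are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the combined objective:
# a triplet loss on a learned similarity function, added to a continuous
# relaxation of Dasgupta's cost. SimilarityNet, the margin and `lam`
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityNet(nn.Module):
    """Embed points and score pairs by cosine similarity (assumed choice)."""
    def __init__(self, in_dim: int, emb_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n, in_dim); n may differ from sample to sample, since the
        # similarity is computed pointwise rather than tied to a fixed size.
        z = F.normalize(self.net(x), dim=-1)
        return z @ z.t()  # (n, n) pairwise similarity matrix

def triplet_loss(sim, a, p, n, margin=0.2):
    """Push sim(anchor, positive) above sim(anchor, negative) by a margin."""
    return F.relu(margin - sim[a, p] + sim[a, n]).mean()

def total_loss(sim, triplets, relaxed_dasgupta_cost, lam=1.0):
    # `relaxed_dasgupta_cost` stands in for a differentiable hierarchical
    # clustering objective such as Chami et al.'s hyperbolic relaxation;
    # it is not implemented here.
    a, p, n = triplets
    return relaxed_dasgupta_cost(sim) + lam * triplet_loss(sim, a, p, n)
```

In the method described above, both terms would be optimized jointly over many samples of varying size, which is what allows the learned similarity function to transfer to unseen draws.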
Notes
1. Supplementary Figures are available at https://github.com/liubigli/similarity-learning/blob/main/GSI2021_Appendix.pdf.
References
Dasgupta, S.: A cost function for similarity-based hierarchical clustering. In: Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC), pp. 118–127 (2016)
Chierchia, G., Perret, B.: Ultrametric fitting by gradient descent. In: Advances in Neural Information Processing Systems (NeurIPS), pp. 3181–3192 (2019)
Chami, I., Gu, A., Chatziafratis, V., Ré, C.: From trees to continuous embeddings and back: hyperbolic hierarchical clustering. In: Advances in Neural Information Processing Systems (NeurIPS), vol. 33 (2020)
Monath, N., Zaheer, M., Silva, D., McCallum, A., Ahmed, A.: Gradient-based hierarchical clustering using continuous representations of trees in hyperbolic space. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), pp. 714–722 (2019)
Brannan, D.A., Esplen, M.F., Gray, J.: Geometry. Cambridge University Press, Cambridge (2011)
Bécigneul, G., Ganea, O.E.: Riemannian adaptive optimization methods. arXiv preprint arXiv:1810.00760 (2018)
Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M., Solomon, J.M.: Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. (TOG) 38(5), 1–12 (2019)
Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)