Abstract
The Dirichlet and hierarchical Dirichlet processes are two important techniques for nonparametric Bayesian learning. These techniques allow unsupervised learning without requiring the input parameters that parametric methods traditionally demand; in topic modeling, for example, they can discover topics without the number of topics being specified beforehand. Existing inference methods for the Dirichlet and hierarchical Dirichlet processes, such as those applied to topic modeling, usually rely on complex sampling calculations. These calculations are often based on Markov processes that deviate from those used in parametric topic modeling, which may not be the best approach in the context of nonparametric topic modeling. Additionally, because they often rely on approximations, they can degrade the predictive power of the resulting models. In this paper we introduce a new interpretation of nonparametric Bayesian learning called the biased coin flip process, designed for use in Bayesian topic modeling. We prove mathematically that the biased coin flip process is equivalent to the Dirichlet process with an additional parameter representing the number of trials. A major benefit of the biased coin flip process is that its inference calculation closely resembles that of previously established parametric topic models, which we hope will lead to wider adoption of topic models based on the hierarchical Dirichlet process. Additionally, we show empirically that the biased coin flip process leads to a nonparametric topic model with improved predictive performance.
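To make the construction concrete, below is a minimal Python sketch of the classic Chinese restaurant process view of the Dirichlet process, run for a fixed number of trials so that each trial reduces to a single biased coin flip: with probability α/(α + n) the coin opens a new topic, and otherwise an existing topic is drawn in proportion to its current size. This illustrates the standard construction the paper builds on, not the authors' biased coin flip process itself; the function `crp_coin_flip` and its parameters are hypothetical names introduced here for illustration.

```python
# Illustrative sketch (not the paper's algorithm): the Chinese restaurant
# process for a Dirichlet process with concentration alpha, run for a fixed
# number of trials. Each trial flips a biased coin: "heads" (probability
# alpha / (alpha + n)) opens a new topic; "tails" reuses an existing topic
# in proportion to its current count.
import random

def crp_coin_flip(alpha: float, num_trials: int, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    counts: list[int] = []       # counts[k] = number of draws assigned to topic k
    assignments: list[int] = []  # topic index chosen at each trial
    for n in range(num_trials):
        # Biased coin flip: open a new topic with probability alpha / (alpha + n).
        if rng.random() < alpha / (alpha + n):
            counts.append(0)
            k = len(counts) - 1
        else:
            # Reuse an existing topic, weighted by how often it has been chosen.
            k = rng.choices(range(len(counts)), weights=counts)[0]
        counts[k] += 1
        assignments.append(k)
    return assignments

if __name__ == "__main__":
    # With alpha = 1.0 and 20 trials, typically a handful of topics emerge.
    print(crp_coin_flip(alpha=1.0, num_trials=20))
```

Viewed this way, the only stochastic decision at each trial, beyond choosing among existing topics, is a single Bernoulli draw, which is the intuition the coin-flip framing captures.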
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Wood, J., Wang, W., Arnold, C. (2021). The Biased Coin Flip Process for Nonparametric Topic Modeling. In: Lladós, J., Lopresti, D., Uchida, S. (eds) Document Analysis and Recognition – ICDAR 2021. ICDAR 2021. Lecture Notes in Computer Science, vol. 12822. Springer, Cham. https://doi.org/10.1007/978-3-030-86331-9_5
DOI: https://doi.org/10.1007/978-3-030-86331-9_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86330-2
Online ISBN: 978-3-030-86331-9