Clustering

  • Chapter in: Maschinelles Lernen

Abstract

So far, we have focused on ML methods that use the ERM principle and learn a hypothesis by minimizing the discrepancy between its predictions and the true labels in a training set.
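As a minimal illustration of the ERM principle mentioned above (a sketch with made-up data, not code from the book): a hypothesis is learned by minimizing the average loss between its predictions and the training labels.

```python
# ERM sketch: choose the hypothesis h(x) = w * x that minimizes the
# average squared loss over a (hypothetical) training set.
train = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (feature, label) pairs

def empirical_risk(w, data):
    """Average squared loss of h(x) = w * x over the data set."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Minimize the empirical risk by plain gradient descent on w.
w = 0.0
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in train) / len(train)
    w -= 0.05 * grad

# w converges to the least-squares slope sum(x*y) / sum(x*x).
print(w, empirical_risk(w, train))
```

Clustering methods, in contrast, must get by without the true labels; the chapter develops how cluster assignments can play the role of (unknown) labels.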


Notes

  1. Note that expression (8.14) is valid only for an invertible (non-singular) covariance matrix \({\boldsymbol{\Sigma }}\).

  2. Recall that the degrees of membership \(y^{(i)}_{c}\) are treated as (unknown) label values for data points. How the labels of data points are chosen or defined is a design decision. In particular, we can define the labels of data points via a hypothetical probabilistic model such as the GMM.
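To make the second note concrete, here is a hedged sketch (parameter values and function names are invented, not from the book): under a GMM, the degree of membership of a data point in cluster \(c\) can be defined as the posterior probability of mixture component \(c\) given the data point.

```python
import math

def gauss_pdf(x, mean, var):
    """Density of a one-dimensional Gaussian N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Assumed (hypothetical) mixture parameters for a two-component 1-D GMM.
weights = [0.5, 0.5]   # mixing weights
means = [0.0, 4.0]     # component means
vars_ = [1.0, 1.0]     # component variances

def memberships(x):
    """Degrees of membership y_c: posterior P(component c | x) under the GMM."""
    joint = [w * gauss_pdf(x, m, v) for w, m, v in zip(weights, means, vars_)]
    total = sum(joint)
    return [j / total for j in joint]  # non-negative, sum to 1

y = memberships(1.0)  # point near the first mean -> high membership in cluster 1
```

By construction the memberships are non-negative and sum to one, which is exactly what allows them to serve as soft label values.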


Author information

Correspondence to Alexander Jung.


Copyright information

© 2024 The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Jung, A. (2024). Clustering. In: Maschinelles Lernen. Springer, Singapore. https://doi.org/10.1007/978-981-99-7972-1_8
