Evidential Bagging: Combining Heterogeneous Classifiers in the Belief Functions Framework

  • Conference paper
  • First Online:
Information Processing and Management of Uncertainty in Knowledge-Based Systems. Theory and Foundations (IPMU 2018)

Abstract

In machine learning, Ensemble Learning methodologies are known to improve predictive accuracy and robustness. They consist in learning multiple classifiers whose outputs are finally combined according to various techniques. Bagging, or Bootstrap Aggregating, is one of the best-known Ensemble methodologies and is usually applied with a single base classification algorithm, i.e. the same type of classifier is learnt multiple times on bootstrapped versions of the initial learning dataset. In this paper, we propose a bagging methodology that involves different types of classifiers. The classifiers' probabilistic outputs are used to build mass functions, which are then combined within the belief functions framework. Three different ways of building mass functions are proposed, and preliminary experiments on benchmark datasets showing the relevance of the approach are presented.
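The combination scheme described in the abstract can be sketched for a two-class problem: each classifier's probabilistic output is turned into a mass function and the masses are fused with Dempster's rule. The discounting by a reliability coefficient `alpha` used below is one of several classical ways to build a mass function from probabilities, and the three classifier outputs are made-up numbers; this is an illustrative sketch, not the paper's exact construction.

```python
# Sketch: fuse probabilistic outputs of heterogeneous classifiers as mass
# functions on the binary frame {class 0, class 1}. Assumed construction:
# discount probabilities by a reliability alpha (not the paper's exact method).

def to_mass(p_class1, alpha):
    """Discount P(class 1) by reliability alpha: mass alpha*p on {1},
    alpha*(1-p) on {0}, and 1-alpha on the whole frame (ignorance)."""
    return {"c1": alpha * p_class1,
            "c0": alpha * (1.0 - p_class1),
            "omega": 1.0 - alpha}

def dempster(m1, m2):
    """Dempster's rule on the binary frame: conjunctive combination,
    renormalised by the conflicting mass k."""
    k = m1["c1"] * m2["c0"] + m1["c0"] * m2["c1"]  # mass on empty intersections
    c1 = m1["c1"] * m2["c1"] + m1["c1"] * m2["omega"] + m1["omega"] * m2["c1"]
    c0 = m1["c0"] * m2["c0"] + m1["c0"] * m2["omega"] + m1["omega"] * m2["c0"]
    om = m1["omega"] * m2["omega"]
    return {key: v / (1.0 - k)
            for key, v in (("c1", c1), ("c0", c0), ("omega", om))}

# Hypothetical outputs of three heterogeneous classifiers for one test
# instance: (P(class 1), assumed reliability alpha).
outputs = [(0.8, 0.9), (0.6, 0.7), (0.3, 0.5)]

masses = [to_mass(p, a) for p, a in outputs]
fused = masses[0]
for m in masses[1:]:
    fused = dempster(fused, m)

decision = 1 if fused["c1"] > fused["c0"] else 0
print(f"fused mass: {fused}, decision: class {decision}")
```

Note how the third, unreliable classifier (alpha = 0.5) contributes a large ignorance mass and therefore pulls the fused belief only weakly away from the two more reliable members.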


Notes

  1. https://archive.ics.uci.edu/ml/datasets.html.

  2. https://www.kaggle.com/datasets.

  3. http://sci2s.ugr.es/keel/category.php?cat=clas.



Author information

Corresponding author

Correspondence to Nicolas Sutton-Charani.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Sutton-Charani, N., Imoussaten, A., Harispe, S., Montmain, J. (2018). Evidential Bagging: Combining Heterogeneous Classifiers in the Belief Functions Framework. In: Medina, J., et al. Information Processing and Management of Uncertainty in Knowledge-Based Systems. Theory and Foundations. IPMU 2018. Communications in Computer and Information Science, vol 853. Springer, Cham. https://doi.org/10.1007/978-3-319-91473-2_26


  • DOI: https://doi.org/10.1007/978-3-319-91473-2_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-91472-5

  • Online ISBN: 978-3-319-91473-2

  • eBook Packages: Computer Science, Computer Science (R0)
