Artificial Intelligence in Healthcare: Inherent Biases and Concerns

Chapter in Artificial Intelligence and Machine Learning in Healthcare

Abstract

Artificial Intelligence (AI) is penetrating domains worldwide, and healthcare is no exception. Because of the large returns expected from AI systems developed for healthcare, substantial investment is underway. Although the major advances AI has brought to healthcare applications cannot be ignored, its use in mission-critical domains such as healthcare must be handled with care. This chapter provides insights into the possible inherent biases, unfairness and inequality of AI algorithms, together with concerns about data quality and training datasets. Inherent bias can arise for numerous reasons, including hidden preferences and unrepresentative or incomplete datasets.
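As a hedged illustration of the dataset concern above (not taken from the chapter; the group labels, risk marker and thresholds below are hypothetical), the short Python sketch fits a single decision threshold to a training sample dominated by one patient group and then measures error rates on a balanced test population, showing how an unrepresentative dataset can translate into unequal performance across groups.

```python
# Illustrative sketch only: how an unrepresentative training sample can
# produce group-dependent error rates. All names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def make_patients(n_a, n_b):
    """Two hypothetical patient groups whose risk marker relates to the true
    outcome differently (group B's true cut-off is lower than group A's)."""
    group = np.array(["A"] * n_a + ["B"] * n_b)
    marker = rng.normal(loc=np.where(group == "A", 5.0, 3.5), scale=1.0)
    outcome = (marker > np.where(group == "A", 5.0, 3.5)).astype(int)
    return group, marker, outcome

# Training data over-represents group A (an unrepresentative sample).
train_group, train_marker, train_outcome = make_patients(n_a=950, n_b=50)

# A naive single-threshold "model": pick the cut-off that maximises
# accuracy on the pooled training data.
candidates = np.linspace(train_marker.min(), train_marker.max(), 200)
accs = [((train_marker > t).astype(int) == train_outcome).mean()
        for t in candidates]
threshold = candidates[int(np.argmax(accs))]

# Evaluate on a balanced test population.
test_group, test_marker, test_outcome = make_patients(n_a=1000, n_b=1000)
pred = (test_marker > threshold).astype(int)

for g in ("A", "B"):
    mask = test_group == g
    err = (pred[mask] != test_outcome[mask]).mean()
    print(f"Group {g}: error rate = {err:.2%}")

# The learned threshold tracks the majority group, so the misclassification
# error concentrates on the under-represented group B.
```

Because the pooled threshold is driven by the majority group, the minority group absorbs most of the error, which is one concrete way the dataset-driven bias discussed in this chapter can surface in practice.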

Author information

Corresponding author

Correspondence to Harpreet Singh.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Verma, P., Kushwaha, H., Singh, H. (2023). Artificial Intelligence in Healthcare: Inherent Biases and Concerns. In: Yadav, D.K., Gulati, A. (eds) Artificial Intelligence and Machine Learning in Healthcare. Springer, Singapore. https://doi.org/10.1007/978-981-99-6472-7_12
