Title

Beneficent dehumanization: Employing artificial intelligence and carebots to mitigate shame-induced barriers to medical care

Document Type

Article

Department or Administrative Unit

Philosophy and Religious Studies

Publication Date

12-23-2021

Abstract

As costs decline and technology inevitably improves, current trends suggest that artificial intelligence (AI) and a variety of “carebots” will increasingly be adopted in medical care. Medical ethicists have long expressed concerns that such technologies remove the human element from medicine, resulting in dehumanization and depersonalized care. However, we argue that where shame presents a barrier to medical care, it is sometimes ethically permissible and even desirable to deploy AI/carebots because (i) dehumanization in medicine is not always morally wrong, and (ii) dehumanization can sometimes better promote and protect important medical values. Shame is often a consequence of the human-to-human element of medical care and can prevent patients from seeking treatment and from disclosing important information to their healthcare provider. Conditions and treatments that are shame-inducing offer opportunities for introducing AI/carebots in a manner that removes the human element of medicine but does so ethically. We outline numerous examples of shame-inducing interactions and show how existing and anticipated AI/carebot technologies can overcome them by removing the human element from care.

Comments

This article was originally published in Bioethics. The full-text article from the publisher can be found here.

Due to copyright restrictions, this article is not available for free download from ScholarWorks @ CWU.

Journal

Bioethics

Rights

© 2021 John Wiley & Sons Ltd.
