Beneficent dehumanization: Employing artificial intelligence and carebots to mitigate shame-induced barriers to medical care
Department or Administrative Unit: Philosophy and Religious Studies
As costs decline and the technology improves, current trends suggest that artificial intelligence (AI) and a variety of “carebots” will be increasingly adopted in medical care. Medical ethicists have long worried that such technologies remove the human element from medicine, resulting in dehumanization and depersonalized care. We argue, however, that where shame presents a barrier to medical care, deploying AI/carebots is sometimes ethically permissible and even desirable, because (i) dehumanization in medicine is not always morally wrong, and (ii) dehumanization can sometimes better promote and protect important medical values. Shame often arises from the human-to-human element of medical care and can prevent patients from seeking treatment or from disclosing important information to their healthcare providers. Shame-inducing conditions and treatments therefore offer opportunities to introduce AI/carebots in a way that removes the human element of medicine but does so ethically. We outline numerous examples of shame-inducing interactions and show how existing and anticipated developments in AI/carebot technology can overcome them by removing the human element from care.
Palmer, A., & Schwan, D. (2021). Beneficent dehumanization: Employing artificial intelligence and carebots to mitigate shame-induced barriers to medical care. Bioethics, 36(2), 187-193. https://doi.org/10.1111/bioe.12986
© 2021 John Wiley & Sons Ltd.