University of Colorado Center for Bioethics and Humanities, Aurora, Colorado
Abstract: Artificial intelligence (AI) can predict patient mortality automatically and accurately, outperforming clinicians. AI prognostication is already being integrated into electronic health records, including for palliative care (PC), but performance differs across patient groups, raising ethical questions about bias. We conducted a mixed-methods study involving interviews with 80 PC clinicians, patients, and caregivers, and a national survey of 1788 PC physicians to understand real-world ethical challenges. Interviews were analyzed using modified constructivist grounded theory, and survey data (n=583; response rate 32.6%) were analyzed using descriptive and multivariate statistics. Interviewees' concerns about automation and about overemphasizing prognosis in decision-making intersected with concerns about potential bias in prediction. While some were willing to use biased tools, citing the unfortunate reality that bias permeates healthcare, others were reluctant because such tools could perpetuate disparities invisibly and systematically. In our national survey, nearly all physicians (91%) wanted to know whether an algorithm was biased; nearly half (41%) were comfortable using algorithms known to be biased, and greater knowledge of AI was associated with greater comfort using biased algorithms (p < 0.001). Self-identified White respondents were less likely to think that AI prognostic tools would worsen disparities (p = 0.02). Physicians who emphasized justice over other bioethics principles were the least comfortable using biased algorithms (p < 0.001) and the most likely to think such tools would worsen disparities (p < 0.001). Our findings reveal that physicians hold radically different views about the permissibility of using biased algorithms. Ethical implementation of AI will require a deeper understanding of the contexts in which biased algorithms produce or worsen disparities in order to reconcile these differences.