Marlee Akerson – University of Colorado Anschutz Medical Campus; Natalia Dellavalle – University of Colorado Anschutz Medical Campus; Mika Hamer, MPH, PhD – Assistant Professor, University of Maryland College Park; Annie Moore, MD – University of Colorado Anschutz Medical Campus; Eric Campbell, PhD – University of Colorado Anschutz Medical Campus; Matthew DeCamp, MD, PhD – Associate Professor, Center for Bioethics and Humanities, University of Colorado Anschutz Medical Campus
Senior Research Assistant, University of Colorado Anschutz Medical Campus, Aurora, Colorado
Abstract: The proliferation of artificial intelligence (AI) chatbots in healthcare settings raises new ethical questions about justice and the equal treatment of users. Can technology ever be neutral? Are patients aware of potential biases in AI? To answer these questions, we conducted a mixed-methods study of a real-world, patient-facing health system chatbot with a human-like avatar. We sampled diversely to allow comparison across racial and ethnic groups, surveying n=617 users and interviewing n=46. Questions asked about the patient’s chatbot experience, emphasizing themes of trust, privacy, bias, and justice. Interviews suggested that many patients sought out the chatbot to avoid racially based biases or judgment in healthcare interactions, yet only 75.9% of patients believed the chatbot treated users fairly with regard to race or ethnicity. Interestingly, users thought that their own personal experience with the chatbot was less affected by race, ethnicity, and gender than the experience of others. Regarding appearance, 35.6% of users perceived the chatbot as white, although 57.3% of users did not identify it as a particular race. In both surveys and interviews, Black or African American respondents expressed significantly greater preference for racial/ethnic concordance with the chatbot, mirroring documented patient preferences for physician racial concordance. Our findings suggest areas of disconnect in how people perceive chatbot interactions, and shed light on how we understand racial, ethnic, and gender identities in ourselves and in technology. These tools are far from neutral, and as AI becomes ever more human-like, there will be a growing need to create inclusive and transparent chatbot technologies.