Assistant Professor, NYU Grossman School of Medicine, New York, New York
Abstract: Health systems now commonly – and often opaquely – use generative AI to draft messages between doctors and patients. Recent surveys suggest that patients are more satisfied with AI-drafted messages – until they learn that AI was involved. How should health systems respond to this feedback? Do they have a responsibility to disclose the role of AI in patient interaction, or even to discontinue its use? To answer these questions, we need to go beyond survey responses and find out why patients prefer human communication. Are these preferences the product of disappointed expectations, which may change as patients become more familiar with the use of AI in their care? Or do their reservations have a deeper source, such as the belief that only humans are capable of the empathy, care, or trustworthiness that patients draw from these messages?
This presentation addresses these questions on the basis of 50 in-depth interviews with patients and clinicians at two large academic health systems actively testing this technology. By analyzing these perspectives, this study advances the ethical conversation on AI in healthcare beyond surface-level preferences, offering insight into the motivations behind resistance or acceptance. These insights help us understand how to weigh preferences for human communication against other potential benefits of AI, such as higher message quality and reduced clinician burden. On the basis of this ethical analysis, we propose guidelines for the appropriate use and disclosure of AI in patient interaction.