Assistant Professor, Baylor College of Medicine, Katy, Texas
Abstract: As artificial intelligence (AI) becomes increasingly embedded in clinical research and workflows, ethicists caution against full automation and highlight the need to keep clinicians, and humans in general, “in the loop”. However, major shifts in AI development (outside of healthcare) toward “agentic” AI are raising urgent questions about how to reconcile the promises of closed-loop AI systems with consensus views about the importance of human oversight. Consistent with definitions of AI that emphasize a system’s capacity to perceive and engage with its environment, closed-loop systems in healthcare are already becoming more agentic by using computer perception (e.g., computer vision, “ambient” intelligence, and other “ethological” approaches) to inform AI inferences. For example, new approaches in deep brain stimulation integrate environmental and behavioral data into AI inferences that guide automatic stimulation. Mobile health applications likewise collect environmental and behavioral data to inform AI-based recommender systems that offer personalized health advice. Automated drug delivery systems (e.g., for insulin or anesthesia) are similarly poised to integrate biobehavioral and environmental data without direct human oversight. The rapid advancement and utility of these agentic tools may soon prompt AI ethicists to reconsider the widespread consensus around “human in the loop” approaches. Drawing insights from other high-stakes scenarios characterized by human dependence on technologies under conditions of extreme uncertainty (e.g., maritime navigation, spaceflight, and nuclear energy management), this presentation explores ethical rationales that may help to reconcile the competing needs for human control and technological utility in the coming era of AI agency.
Keywords: artificial intelligence, AI agent, agency
Learning Objectives:
After participating in this conference, attendees should be able to:
Describe the concept of “agentic” AI in healthcare, including how closed-loop systems integrate environmental and behavioral data to make autonomous inferences and guide interventions.
Analyze the ethical implications of reduced human oversight in AI-driven clinical applications, focusing on how agentic systems challenge consensus mandates to maintain a “human in the loop”.
Evaluate strategies for reconciling the need for human control with the potential benefits of autonomous AI technologies in high-stakes healthcare scenarios, drawing parallels from other fields characterized by dependence on technology.