Abstract: The use of artificial intelligence (AI) in healthcare has been rising and becoming more diverse, and its development for clinical implementation is an ongoing area of research. One particular area of implementation is predictive models that estimate treatment outcomes. Non-AI predictive models are already in use to aid physician decision-making and patient selection for treatment. Patient selection and care curtailment play a crucial role in allocating healthcare resources efficiently and in sparing patients unnecessary procedures. Although the ethical concerns of AI use in healthcare, including accuracy and information privacy, have been discussed in the current literature, discussion of the technology's role specifically in patient selection has remained limited by its focus on inaccuracy. While it is important to consider the accuracy of AI predictive models and to minimize the risk of denying treatment to patients who need it, the ethical discussion of AI in patient selection should address a broader range of concerns. This paper therefore discusses the ethical concerns surrounding the use of AI predictive models to gatekeep medical procedures, under the assumption that the technology is accurate. In this conceptual paper, I not only explore the moral justification for using AI predictive models to curtail care but also argue that physicians are morally required to incorporate the technology as a standard of care.