
The AI Balancing Act

Yale Medicine Magazine, Spring 2025 (Issue 174)
AI for Humanity in Medicine
By Steve Hamm


Keeping pace while prioritizing ethics.

Artificial intelligence (AI) technologies are incredibly complex, but the ethics of implementing AI may be even more challenging. At Yale, these issues are being addressed from many different angles.

Before diagnostic radiologists at Yale New Haven Health adopt a new AI technology, for example, a committee of faculty members spends months evaluating it. Their reviews include careful analyses of the ethical implications of using the technology. “We’re very excited about what generative AI can do, but it’s important for us to assess it before it’s implemented,” says Melissa Davis, MD, MBA, associate professor and vice chair of medical informatics in the Department of Radiology and Biomedical Imaging.

Generative AI shows great promise in boosting the effectiveness of diagnostic radiology, enabling radiologists to identify and triage medical anomalies more efficiently, resulting in improved diagnosis and treatment planning. Bias, however, is the group’s particular concern. When Davis and her associates conducted a study of the popular ChatGPT chatbot to see how it performed in translating radiologists’ reports into language that a layperson can understand, the findings raised questions. The group found that the translations produced for Black patients were written at a lower reading grade level than those produced for Caucasian and Asian patients. “The question is what other kinds of biases will surface in the future?” says Davis.
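Audits like this typically quantify readability with standard formulas. The following is a minimal sketch, not the study’s actual methodology: it applies the Flesch-Kincaid grade-level formula, with a crude syllable-counting heuristic and two hypothetical report translations, to show how a reading-level gap could be measured.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels in the word."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# Hypothetical translations of the same radiology finding; lower grade = easier to read.
plain = "The scan shows a small spot on the lung. It is probably not cancer. We will check again in six months."
dense = "Imaging demonstrates a subcentimeter pulmonary nodule of indeterminate etiology warranting interval surveillance."

print(round(flesch_kincaid_grade(plain), 1))
print(round(flesch_kincaid_grade(dense), 1))
```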

This study exemplifies the kind of unsettling ethical issues that clinicians, researchers, and administrators across the Yale School of Medicine (YSM) and the Yale New Haven Health System are confronting as AI technologies proliferate. They welcome the new capabilities but insist on safeguards that will prevent the technologies from violating ethical standards.

“There are enormous potential benefits, along with some known risks and some unknown risks, that are only going to become apparent as we gain further experience,” says Benjamin Tolchin, MD, associate professor of neurology at YSM and director of the Center for Clinical Ethics at Yale New Haven Health.

Ethical issues were top of mind last year when the Yale Task Force on Artificial Intelligence developed a university-wide strategy to consider the myriad questions arising from the emergence of generative AI. The group’s report contains no fewer than 22 references to ethics. “We have started down the road of trying to get our arms around AI. Things are moving incredibly quickly. AI tools are just coming out of the woodwork,” says Yale University Chief Privacy Officer Susan Bouregy, PhD.

Biomedical ethics: past and future

YSM has long been committed to ethics and corresponding codes of conduct. The school has published detailed standards of professional behavior—aligning them with those established by Yale University, federal funding agencies, and medical professional associations.

Lucila Ohno-Machado, MD, PhD, MBA, Waldemar von Zedtwitz Professor of Medicine and of Biomedical Informatics and Data Science (BIDS), and chair of BIDS, plans to convene faculty members from across the medical school to develop a strategy for applying medical ethics to all the uses of AI in a uniform way. “We need to establish guidelines for what we should touch, what we should not touch, what it can be used for, and what it cannot be used for,” she says.

Biomedical ethics have evolved over time to keep pace with advances in modern medicine. Today, the four basic principles are:

• Autonomy: respecting the patient’s wishes.

• Nonmaleficence: doing no harm.

• Beneficence: balancing benefits against risks or costs.

• Justice: fairly distributing benefits, risks, and costs.

However, generative AI poses new ethical concerns, says Jessica Morley, PhD, who researches the governance, ethical, legal, and social implications of digital health technologies as a postdoctoral research associate at Yale’s new Digital Ethics Center. The stakes are higher, in part, because generative AI comes closer than previous AI technologies to mimicking human general intelligence.

The computing models underlying generative AI applications are general-purpose systems, including OpenAI’s GPT-4.1 and Google’s Gemini 2.5, rather than tightly controlled domain-specific tools, such as traditional machine learning-based models of disease diagnosis. “This is a systemic change,” says Morley. “How do you put bounds on something that can be easily used for many different things?”

The new technologies give rise to a host of ethical concerns in the realms of clinical care and research. Data are sometimes used and shared indiscriminately by AI technology providers, so safeguarding the privacy of patient data is even more critical now. Because of concerns about AI “hallucinations,” or gross errors, it’s also important for the users of these systems to be able to understand how the answers are formulated and to review them for accuracy.

Tackling sustainability challenges

At a time when humanity is threatened by the impact of climate change, there is growing awareness of the massive electricity and computing resources AI systems use. That’s because generative AI is much more energy-intensive than most other data-focused technologies. A study published in the peer-reviewed scientific journal Joule forecast that by 2027, AI computing worldwide will consume at least 85.4 terawatt-hours of electricity annually—more than many countries use in a year.
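To get a feel for that scale, a simple back-of-the-envelope conversion (a sketch, not a figure from the Joule study) turns the annual energy forecast into an average continuous power draw:

```python
# Convert 85.4 TWh/year into average continuous power demand.
annual_energy_wh = 85.4e12          # 85.4 terawatt-hours in watt-hours
hours_per_year = 365 * 24           # 8,760 hours
avg_power_gw = annual_energy_wh / hours_per_year / 1e9
print(f"{avg_power_gw:.1f} GW of round-the-clock demand")  # roughly 9.7 GW
```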

For socially conscious institutions such as Yale, intensive use of AI could make it more difficult to meet climate and sustainability goals. That’s one reason Yale has joined Boston University, Harvard, the Massachusetts Institute of Technology, Northeastern University, and the University of Massachusetts, along with other schools, as equal partners in the Massachusetts Green High Performance Computing Center (MGHPCC) consortium in Holyoke, Mass. MGHPCC is a not-for-profit, state-of-the-art data center dedicated to supporting computationally intensive research. Yale’s inclusion in this consortium marks a milestone in its research infrastructure development, says Kiran Keshav, Yale director of the MGHPCC Transition. Members collaborate on research, pool knowledge, reduce redundancies in research computing operations, and benefit from a more sustainable source of energy for research computing infrastructure. This partnership provides a gateway to addressing challenges in computing on a scale too great for one academic institution to tackle alone. MGHPCC is the first university research data center to achieve LEED Platinum Certification, the top award from the U.S. Green Building Council.

Over time, Yale—including the medical school—will perform most of its research computing at the MGHPCC. “It’s a really good move on Yale’s part to recognize our increasing need and to look for a facility that can address that in a sustainable way,” says Amber Garrard, director of Yale’s Office of Sustainability, which manages all greenhouse gas emissions reporting for the university.

A multidisciplinary approach to problem solving

YSM and Yale New Haven Health already have numerous jointly administered bodies governing their data technology activities—including oversight of ethical matters. These include the Joint Health Data Governance Council, the Health Sciences Technology Advisory Committee, and the institutional review boards that govern human subjects research. All three bodies are currently reviewing AI technologies. Ohno-Machado plans to offer her department’s expertise to help them assess the far-reaching impacts of generative AI.

Medical ethics experts urge such review bodies to include many points of view when they consider the ethical implications of these new technologies. “Assessment criteria require expertise and insights from multiple disciplines both within and outside health care, ranging from medicine to computer science to social science to ethics and law,” says Bonnie Kaplan, PhD, a lecturer in the Yale School of Public Health’s Department of Biostatistics, Division of Health Informatics, and an associate of YSM’s Program for Biomedical Ethics. “Criteria also need to incorporate patient and community views.”

That kind of multidisciplinary collaboration is already fundamental for the Center for Clinical Ethics at Yale New Haven Health—a grassroots problem-solving organization whose primary role is to advise patients, family members, and hospital staff concerning difficult health care decisions. The center also has a policy-making role in the health system, and it uses multidisciplinary ethics committees in each of the system’s delivery networks to get that work done.

Tolchin, the neurologist who heads the center, says the group is evaluating generative AI technologies and developing policies concerning their use. Clinicians, ethicists, technologists, community members, and others participate in the deliberations. “This will be essential to protect patients and clinicians, and to allow us to benefit from these technologies while mitigating the risks,” he explains.

Among Tolchin’s concerns are potential biases in the systems, improper use of patient data, and AI hallucinations that provide erroneous, though often convincing, information. For starters, he believes, patients should be notified when AI systems are being used by their clinicians, and they should be informed—and asked to give consent—if their personal information is going to be used to train AI models.

The privacy and consent issues are of particular concern to biomedical researchers, who often perform longitudinal studies that involve following groups of patients over long periods of time. Generative AI offers the potential for researchers to capture information on treatments and outcomes from thousands or even tens of thousands of patients, and then to use that data to train computing models and provide clinicians with better guidance on treatment plans for particular patients. There’s a conundrum, though: How do you mine all that vital information without violating privacy rules?

One solution that is being explored by some medical researchers is creating so-called synthetic data. In this approach, researchers train generative models to understand the statistical patterns and relationships in existing real-world training data. Then the models use that knowledge to create synthetic data. This type of data mirrors the real-world data that was used to create it, but is completely artificial and isn’t related to any real patients, so it is not covered by privacy laws. The resulting models can be used by researchers to analyze a broad array of cases and treatment outcomes.
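A toy illustration of the idea follows. It is a minimal sketch, not any particular research group’s method: the “real” records are fabricated numbers standing in for patient features, and a simple multivariate-normal model stands in for the far richer generative models (such as GANs or diffusion models) that synthetic-data researchers actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, toy "real" dataset: columns standing in for age, systolic BP, and a lab value.
real = rng.normal(loc=[62.0, 135.0, 1.1], scale=[12.0, 18.0, 0.3], size=(500, 3))

# Learn the statistical structure of the real data (here, just its mean and covariance).
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample entirely artificial records that follow the same distribution
# but correspond to no actual patient.
synthetic = rng.multivariate_normal(mean, cov, size=500)

# The synthetic cohort mirrors the real one in aggregate.
print(real.mean(axis=0).round(1))
print(synthetic.mean(axis=0).round(1))
```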

Innovation with careful experimentation

According to Tolchin, YSM leaders recognize that changes in generative AI are coming so fast, and the technologies are so complex, that it can be difficult to apply ethical principles quickly to new applications. If an organization is too slow, it might delay the arrival of important new capabilities for research and clinical practice. If an organization is not cautious enough, it risks running afoul of the principles of biomedical ethics. That’s why technologists, researchers, and clinicians typically experiment with new applications in limited ways in pilot programs before using them more broadly.

The spirit of experimentation will be critical to fostering innovation at the university and identifying emergent ethical issues, according to Nicolas Gertler, an undergraduate studying cognitive science and a fellow at Yale’s Digital Ethics Center. He is the university’s first-ever AI Ambassador—a role within the Yale Poorvu Center for Teaching and Learning that involves explaining the technology to faculty and students. Gertler urges university students to take the initiative and experiment with AI technologies rather than waiting to see what new capabilities technology vendors will provide. “We should think about what we want the future to look like and help create it,” he says. “We should implement an ethics-first approach.”

In the coming years, it will be important for YSM and other medical schools to consider the ethical implications of machines that will likely be able to out-think humans in most areas of cognition, will be capable of autonomous action, and may even possess sentience, according to David Rosenthal, MD, assistant professor of medicine (general medicine) and co-course director of the medical school’s Professional Responsibility course for first-year students. “We need to start thinking about what that means in health care,” he says.

Advances in AI raise many questions requiring answers. If machines are so smart, what tasks and responsibilities will humans reserve for themselves? What limits will we place on the machines, and how can that be done? Might it even be possible to encode biomedical ethics in the computing models and applications?

There are many more questions than answers in the space where AI crosses paths with ethics. The bottom line, says Kaplan, the informatics bioethicist, is this: “These technologies should be employed to support clinicians and patients in ways that keep human values and compassionate, quality care at the forefront.” 
