Interview with an Ethics Chatbot

Yale Medicine Magazine, Spring 2025 (Issue 174): AI for Humanity in Medicine

Yale’s Digital Ethics Center developed a chatbot known as LuFlot, trained on the writings of its founding director, Luciano Floridi, PhD, MPhil, professor in the practice in the Cognitive Science Program. We interviewed the chatbot. Here’s one of its responses. Floridi says he wouldn’t change a word.

Prompt

What new guidelines should medical schools develop in response to the emergence of large language models and generative AI applications?

Output

Luciano Floridi’s work can provide foundational insight into the type of guidelines medical schools might consider in response to the emergence of large language models and generative AI applications. Given the potential of these technologies to significantly impact medical education, research, and practice, guidelines should be developed with several key considerations in mind:

1. Transparency and explainability
Medical schools should ensure that AI applications are transparent in their operations and outcomes. It’s crucial that users understand how AI-generated information or decisions are derived. This could involve ensuring that AI systems can provide explanations for their outputs that are understandable to both professionals and patients.

2. Privacy and data protection
AI systems often require massive datasets, including sensitive personal health information. Guidelines should enforce strict adherence to data protection laws and ethical standards, ensuring that all AI applications safeguard patient confidentiality and data integrity.

3. Bias and fairness
AI technologies are susceptible to biases present in their training data. Medical guidelines should mandate regular audits and updates to AI systems to mitigate biases and ensure equitable health care outcomes across diverse patient demographics.

4. Clinical accountability
There must be clear protocols about who is responsible when AI systems are integrated into clinical settings—whether in diagnostic processes, patient management, or treatment recommendations. The guidelines should delineate the responsibilities of human professionals versus AI systems.

5. Continued education
As AI technologies evolve rapidly, ongoing education should be mandatory for medical practitioners to stay informed about new AI tools, ethical considerations, and regulatory requirements. This includes understanding the limitations and appropriate usage contexts of AI applications in medicine.

6. Patient consent and engagement
Patients should be informed when AI tools are part of their health care process, including what data is used, how it is processed, and the implications of AI involvement in their treatment options.
